Research

Group uses of mobile devices

There is a very relevant interview with Jeff Axup in the newsletter and discussion group Mo:Life (he also put it on his weblog). Jeff works on mobile technologies for backpackers, using ethnographic and participatory methods. Some pertinent excerpts follow. The group usage of mobile devices (like cell phones) is an amazing and relatively new by-product of their massive use:

Several recent research studies have shown a variety of examples of communal phone usage, including turn-taking, borrowing, and sharing of communication content. In addition to usage of devices by groups in-person, remote users also affect our individual use.

Jeff goes on to describe how he envisions the phone of the future:

If designed properly they will complement existing group goals and behaviours. They will enable us to communicate with networks of people in ways that were impossible or insufficiently usable before. To give a tangible example: backpackers currently communicate face to face, via physical message boards in hostels and to some degree via SMS, IM and phone calls. In the future they could be informed of interesting people they could talk to, form instantaneous, short-term communication channels while on tours, or tap into community-authored travel advice. People are inherently social, but we still lack the ability to easily communicate to groups in many circumstances where we would like to.

And his take on communication problems is also interesting:

We recently ran a study looking at a group of three people using a mobile discussion list prototype to search and rendezvous at an unknown location. We discovered a number of usability problems related to SMS discussion list usage including: multitasking during message composition and reading; speed of keyboard entry; excessive demand on visual attention; and ambiguity of intended recipients. More generally speaking, mobile devices still suffer from expensive wireless data connectivity, poor input devices and lack of contextual awareness. Mobile users still have difficulty easily communicating with groups, transferring information between their phones, and finding software to support their daily activities. Groups face challenges of visualizing their own behaviour, coordinating actions and communicating physical location and plans efficiently

Concerning "Web-based travel diaries are increasingly used to communicate location and travel experience to family and friends and soon picture-phones will integrate seamlessly with this.", that reminds me what my friend Anne Bationo analyses for her PhD thesis. She is working for telco operator France Telecom on travellers' narratives. She applies a user-centred approach to envision new instruments, to support travellers when performing their activities. A description of her work might be found in this paper: Travelling narrative as a multi-sensorial experience: A user centred approach of smart objects.

A Typology of spatial expressions

Ioannidou I. & Dimitracopoulou A., Final Evaluation Report. Part II. Children in Choros & Chronos Project. Esprit/I3, 2001. The authors report on a study of how kids collaborated (two teams: one in the field and the other in a 'control room') on a treasure hunt (a bit different from our CatchBob! thing). I found in this report an interesting typology of spatial expressions used in their quantitative analysis:

  • Topological referents: positioning, orientation, or motion in space is determined via reference to objects located in space. Specifically, this includes expressions that refer to relations between objects in space (close to, in front of, etc.).
  • Intrinsic referents (projective, body-centered, or body-syntonic): positioning, orientation, or motion in space is determined with regard to a specific viewpoint from which the objects are observed. Under this category fall expressions that result from the pupils' own point of view (which, according to Piaget and Inhelder (1967), is the source of simple projection) or from the pupils' change of viewpoint ("on our left", "on your left", etc.).
  • Euclidean referents: positioning, orientation, or motion in space is determined by using the metric system, making calculations and using coordinates. The spatial expressions under this category refer only to the use of the metric system and the estimation of relative distance.
  • Combination of referents: more than one of the above types of referents is used to determine one position or direction in space.
  • Context-bound referents: positioning, orientation or motion in space is determined in terms of a specific representation or environment (the computer screen or the real space). Context-bound expressions were used mostly in within-group communication and in several cases were accompanied by gestures (e.g. "they are here", showing the place on the screen). "Down" is considered context-bound because it derives from the two-dimensional representation of space on the computer screen and designates the area at the lowermost side of the screen. Context-bound referents were primarily exchanged in within-group discourse, where pupils could see each other and mediate their talk with gestures and references to the experience the group was sharing.
  • Context-bound intrinsic referents: positioning, orientation and motion in space are determined with regard to a specific point of view, but they also include references to idiosyncratic elements of the environment in which they are produced. Expressions like "the store room is here" are reported under this category because the pupil indicates a position in space with a gesture, taking his/her own point of view as the referential point.

Sequential analysis

Today I am learning to use GSEQ, a software package for the analysis of interaction sequences as described in this very interesting book: Observing Interaction: An Introduction to Sequential Analysis by Roger Bakeman and John M. Gottman. Sequential analysis is a technique that might be of interest for studying timed events like CatchBob! messages (as well as the logfiles). I found the very smart notion of transitional probability, which I will use to study the probability of conditional actions in CatchBob communication processes: for instance, whether a direction request is more probable in teams with the awareness tool.
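
Not GSEQ itself, but a minimal sketch of the idea in R, with a made-up sequence of coded CatchBob! messages (the category names are placeholders from my coding scheme):

  # Lag-1 transitional probabilities from a sequence of coded messages
  coded <- c("position", "direction", "position", "strategy",
             "direction", "position", "acknowledgement", "direction")

  # Cross-tabulate each event with the event that immediately follows it
  transitions <- table(given = head(coded, -1), then = tail(coded, -1))

  # Transitional probability: p(next event = j | current event = i)
  trans_prob <- prop.table(transitions, margin = 1)
  round(trans_prob, 2)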

Usability and collaborative aspects of augmented reality

Usability and collaborative aspects of augmented reality by Morten Fjeld, in Interactions, Volume 11, Issue 6, November + December 2004. Some excerpts I find relevant:

In the design process of an AR application, a series of questions related to human-computer interaction (HCI) call for attention: Who are the users and what are their needs? How can a system be designed to work effectively and efficiently for these users? How is effectiveness and efficiency measured in AR applications? Do users prefer an AR system or an alternative tool to go about their work? And finally, with what kind of tasks and what kind of alternative tools should the usability of AR applications be tested? (...) The need for studies evaluating the effect of computerized tools on human cooperation and communication is well justified and documented in the first paper, prepared by Billinghurst, Belcher, Gupta, and Kiyokawa [that's indeed a good paper that shows an evaluation of an AR table]

I think the same goes for studying locative media from a usability and collaborative point of view. My only concern here is that the evaluation they propose is a bit limited. They just take into account the frequency of events and the differences among conditions. There are plenty of other methods, ranging from quantitative (as proposed in the evaluations described in this paper, or with different types of statistical techniques like multilevel modeling or sequential analysis) to qualitative (ethnography, cognitive anthropology à la Hutchins, French ergonomics...).

Unit of analysis in CSCW: individual or group?

My notes taken from Kenny, D. A., Kashy, D. A., & Bolger, N. (1998) Data analysis in social psychology. In D. Gilbert, S. Fiske, & G. Lindzey (eds.) Handbook of social psychology, vol. 1, pp. 233-251. Boston: McGraw-Hill.

  • major strides have been made in the analysis of data in which persons interact with or rate multiple players
  • nonindependence of observations is a serious issue that is often simply ignored; when it is present, ANOVA is of limited use
  • ANOVA can be replaced by structural equation modeling or multilevel modeling for certain kinds of variables
  • which unit of analysis should be chosen: individual or group? If the person is used as the unit of analysis, the assumption of independence is likely to be violated because persons within groups may influence one another (Kenny and Judd, 1986). Alternatively, if the group (= couple, team, organization...) is used, the power of the statistical tests is likely to be reduced because there are fewer degrees of freedom than in the analysis that uses the person as the unit of analysis.
  • concerning the independent variable (IV) A, there are three cases: nested (when groups are assigned to levels of the IV such that every member of a given group has the same score on A, with some groups at one level of A and other groups at other levels), crossed (when A varies within the group, with some members at one level of A and other group members at another level) and mixed (both nested and crossed). [I often use nested variables, as in my master's thesis, so I will expand on this now]
  • for a nested IV: there is a method to measure the nonindependence of the data using the intraclass correlation. Group effects occur if the scores of individuals within a group are more similar to one another than are the scores of individuals who are in different groups. The intraclass correlation can be viewed as the amount of variance in the persons' scores that is due to the group, controlling for the effects of A. When the intraclass correlation is not large and the total sample size and the group size are small, power is very low. Using ANOVA, this correlation is equal to (n = number of persons per group): (mean square for groups within A - mean square for individuals within groups within A) / (mean square for groups within A + (n-1) * mean square for individuals within groups within A)
  • Summary: it is safer to take the group as the unit of analysis, and it is then necessary to collect data from a sufficient number of groups. General guideline: if there is nonindependence, then the group must be used as the unit of analysis; if there is independence, the individual may be the unit of analysis. The usual standard for "sufficient power" is having an 80 percent chance of rejecting the null hypothesis.

Fortunately my data are nested, so I can use ANOVA, keeping in mind this discussion of which unit of analysis to choose. However, multilevel modeling might be useful and can be applied here. Here is an interesting resource to compute this index.
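
A sketch of how that index could be computed in R, following the formula above (the data frame 'mydata' and its columns are hypothetical placeholders; 'score' is the individual outcome, 'A' the nested IV, and 'group' the team identifier, a factor whose labels are unique across conditions):

  # Intraclass correlation from ANOVA mean squares (Kenny, Kashy & Bolger)
  tab <- anova(lm(score ~ A + group, data = mydata))   # group nested within A

  ms_groups      <- tab["group", "Mean Sq"]       # mean square for groups within A
  ms_individuals <- tab["Residuals", "Mean Sq"]   # mean square for individuals within groups
  n <- 3                                          # e.g. three players per CatchBob! team

  icc <- (ms_groups - ms_individuals) / (ms_groups + (n - 1) * ms_individuals)
  icc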

Human Computer Interaction research journals

This morning I tried to list the most important HCI research journals in my domain (general + CSCW + mobile), with their impact factors:

3D sounds and virtual environment

Using 3D sound as a navigational aid in virtual environments by R. Gunther, R. Kazman and C. MacGregor. Behaviour & Information Technology, Volume 23, Number 6 / November-December 2004

As current virtual environments are less visually rich than real-world environments, careful consideration must be given to their design to ameliorate the lack of visual cues. One important design criterion in this respect is to make certain that adequate navigational cues are incorporated into complex virtual worlds. In this paper we show that adding 3D spatialized sound to a virtual environment can help people navigate through it. We conducted an experiment to determine if the incorporation of 3D sound (a) helps people find specific locations in the environment, and (b) influences the extent to which people acquire spatial knowledge about their environment. Our results show that the addition of 3D sound did reduce time taken to locate objects in a complex environment. However, the addition of sound did not increase the amount of spatial knowledge users were able to acquire. In fact, the addition of 3D auditory sound cues appears to suppress the development of overall spatial knowledge of the virtual environment.

Why do I blog this? Because I believe that sound awareness is important in VE. This paper provides good references.

Do not put too much faith in mock-ups but...

A relevant column in ACM's Interactions magazine by Lars Erik Holmquist about the use of mock-ups and prototypes in interaction design. His claim is that they are certainly fruitful in participatory design (where users are brought in very early in the design phase) but "there is a danger with putting too much faith in what is, after all, only a shadow of the real thing". He clarifies his point using the cargo-cult metaphor:

What is the difference between the positive and negative uses of representations? (...) cargo cult is a certain form of religious movement that started to spring up in the Melanesian islands in the South Pacific. These religions thought that the goods (the cargo) that started to arrive on ships and planes had a divine origin, or more specifically that it came from their ancestors. (...) The Melanesians reasoned that if they could build exact replicas of the white man's artifacts, they would receive the same benefits. What they failed to realize was of course that their replicas, made from bamboo and straw, while superficially similar to the real thing, did not capture the essence of the original artifacts. (...) We can define cargo cult design as creating a representation without sufficient knowledge of how it actually would work, or presenting the representation while not acknowledging such knowledge.

Then there is a nice discussion of the concept of representations:

In a design process, representations are a physical embodiment of something that otherwise would only exist as an abstraction. Without getting deep into the epistemological definition, we can say they are the embodiment of knowledge. But mock-ups and prototypes represent knowledge in different ways.(...) However, to give any kind of reliable information, the representation must give a realistic impression of the intended end product. If the representation is based on insufficient knowledge of real-world factors, presenting it to potential customers or testing it with prospective users will not make much sense

He concludes on how to use mock-ups and prototypes:

When presenting a mock-up or prototype, the interaction designer should always ask:

  1. Am I fooling myself? Do I really have enough knowledge of the technology and the users to gain valuable insight from this representation, and will it help me to construct the "real thing"?
  2. Am I fooling the layman? Is there a risk that people mistake the representation for the real thing, and thus believe that I have solved problems that I have not?

But the interaction designer should also see the value in representations as generators. Even when the knowledge that goes into a representation seems questionable or even irrelevant, it can still be valuable, as long as the results are treated responsibly. There is value in toying with the possibilities of technology and being inspired by them; prototypes that may not seem useful can give rise to many unexpected ideas and eventually form the basis of successful products. With the concept of generators comes an explorative attitude to the development of interactive artifacts. Interaction designers should be encouraged to take representations, prototypes and mock-ups of all kinds as starting points for exploration, but never accept them at face value.

CatchBob analysis documentation

A rough list of how I am analysing CatchBob data:

Excel files:

  • results.xls: path, time, refresh... (client parsing)
  • Results_map_annotations.xls: map annotations on the TabletPC
  • results_drawings.xls: drawn paths

1) Map annotations

Absolute number of messages in both conditions: total, position, direction, signal, strategy, off-task, acknowledgement and correction. Variance analysis to check the differences (plus normality and homoscedasticity checks).

Percentage of messages over the total in both conditions: total, position, direction, signal, strategy, off-task, acknowledgement and correction. Variance analysis to check the differences (plus normality and homoscedasticity checks).

Frequency of messages per unit of time in both conditions: total, position, direction, signal, strategy, off-task, acknowledgement and correction. Variance analysis to check the differences (plus normality and homoscedasticity checks).

***** Correlation between the number of position messages and the number of direction messages. Split the groups (post hoc) in two (50/50 or 40/20/40 depending on the distribution) and check whether the groups that annotate a lot make fewer errors. If yes: annotation is good; if not: the awareness tool lulls people to sleep! *****
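
For the variance analyses above, a rough sketch in R (the data frame 'annotations', one row per team, and its columns are hypothetical placeholders):

  # One-way variance analysis of position-message counts by condition,
  # with the normality and homoscedasticity checks mentioned above
  shapiro.test(annotations$position)                         # normality of the counts
  bartlett.test(position ~ condition, data = annotations)    # homoscedasticity across conditions
  summary(aov(position ~ condition, data = annotations))     # AT vs. NoAT difference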

2) Time analysis

Time = time spent finding Bob (until "YOU WON")

Histogram(time), normality check. Time spent in both conditions. Variance analysis: NoAT groups seemed to have more time to write messages (and hence more position messages).

3) Errors analysis

Errors = sum of the errors made by A when drawing B's and C's paths. Histogram(errors), normality check. Errors made in both conditions. Variance analysis: NoAT groups make fewer errors: anova(awareness~errors). Covariance analysis: try to include time in the model: anova(awareness~errors * time). The comparison of those two models (with and without time taken into account) is not significant.

***** Try a model adding workload, disconnections, path length or Bob's position *****
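
A sketch of the covariance-analysis comparison above in R (the data frame 'teams', one row per group, and its columns are hypothetical placeholders):

  # Errors explained by the awareness condition, with and without time as a
  # covariate; the F-test compares the two nested models
  m1 <- lm(errors ~ awareness, data = teams)
  m2 <- lm(errors ~ awareness + time, data = teams)
  anova(m1, m2)   # does adding time significantly improve the model?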

4) Path length

Our real dependent variable. Path length = sum of the individual path lengths within a group. Histogram(length), normality check. Path length in both conditions. Variance analysis: anova(awareness~length). !!!!!!! Multilevel modelling !!!!!!! Analysis at the group level: the data are not independent. ***** Try to create a new model with covariates: time, workload, Bob's position, disconnections *****
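
A sketch of what the multilevel version could look like, using the lme4 package (the data frame 'players', one row per individual with a 'group' column, is a hypothetical placeholder; the same idea would apply to workload in the next step):

  # Path length modelled at the individual level, with a random intercept per
  # team, so the nonindependence within groups is taken into account
  library(lme4)
  m <- lmer(length ~ awareness + (1 | group), data = players)
  summary(m)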

5) Workload

Workload = NASA TLX evaluation. Histogram(workload), normality check. Workload in both conditions. Variance analysis: anova(awareness~workload). !!!!!!! Multilevel modelling !!!!!!! Analysis at the group level: the data are not independent. ***** Try to create a new model with covariates: time, length, Bob's position, disconnections *****

6) Verbalization after the game ...

7) Various correlations. Pearson, Spearman or Kendall: it might often be Spearman or Kendall since the data are not linear.

  • correlation between the number of position messages and the number of direction messages
  • number of messages (total, position, direction...) and path length (group/individual?)
  • errors and number of messages (total, position...) for the two conditions and for each
  • errors and path length (more errors when the path is longer?)
  • number of refreshes (AT) and number of errors
  • intragroup correlation of the number of messages
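
A quick sketch of how these could be run in R (the data frame 'teams' and its columns are hypothetical placeholders):

  # Rank correlations, since the relationships are unlikely to be linear
  cor.test(teams$position_msgs, teams$direction_msgs, method = "spearman")
  cor.test(teams$position_msgs, teams$path_length,    method = "kendall")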

8) Division of labor

  • indexes: task division, backtracking, overlap
  • Do the teams with synchronous awareness tools develop different problem-solving strategies than those with asynchronous awareness tools? A different division of labor?

9) Other questions

  • How does the frequency of coordination acts (explicit or implicit) vary over time? Are these requests more frequent at the beginning of a task, or do they increase at specific phases in terms of problem-solving strategy?
  • intragroup correlation of the number of refreshes (AT)

10) Other techniques to explore?

  • Sequential analysis: I need to find some literature to create my models
  • Multilevel modeling
  • Cluster analysis

Ideas for CatchBob2

Constraints for CatchBob2:

  • a more abstract, less spatial task, involving inference (people drawing conclusions from what they see/find in various contexts)
  • more strategy discussion during the game, less planning/strategy before the game
  • a bit more appealing for gamers: more rewards (like displaying the object...)

Tangible Computing at eTech

Even though I could not make it to eTech 2005, there are plenty of ways to stay aware of what happened there. The talk by Chris Heathcote and Matt Jones was one I would have attended. Both are very interesting people I only know from their blogs. They work at Nokia and their 'recordable and distributable' talk is about Tangible Computing. Their slides can be downloaded here (.ppt). You can also find notes about their presentation here and here. Thank you for quoting P&V! Well, let's talk about the content. They first sketch the picture we have today: ubiquitous computing is here, not evenly distributed; computers are everywhere and starting to talk to each other. The point is that after WIMP and overlapping windows, it all went downhill in terms of human-computer interaction. Users need new ways of controlling and understanding our digital world. The problem is that digital interactions are intangible because there are no natural affordances (see Gibson). BUT there are NEW METHODS of interaction based on situations/touch/embodied interaction (= being in the world). THEN they show a whole bunch of pictures of Paul Dourish. They quote Dourish because the guy is one of the embodied-interaction gurus. Dance Dance Revolution is a good example. They then present various new interfaces: Tablet PCs, smart furniture, all-seeing eyes (EyeToy, AR, Human PacMan), passive information displays (smart objects). They also mention NFC, near-field communication, something I strongly believe in.

But then comes the important question: "what can we do with all that stuff?" It's all about gluing stuff together. If you are a regular reader of Chris' anti-mega, you have certainly stumbled across this glue thing. Basically, it's because computers make it easy to take inputs and manipulate them.

I think they gave a proper and relevant presentation of today's situation: we have all this stuff (computers, devices, mobile things) and on top of that we have web services, interoperability, application layers... so let's use those two layers to glue all our computer gizmos together. I would just advocate taking the end-user into account, trying to understand what he/she does/needs/wants/dreams of...

CatchBob map in terms of "Places"

Here is an attempt at a CatchBob map where I kept only the "Places", that is to say, zones that are meaningful in terms of socio-cultural interactions in the CatchBob context, like buildings, corridors... Of course I did not take into account lots of factors (like rooms) because of the low accuracy of our location tool. I am going to use this to analyse my data and observe the errors in the paths drawn by the participants.

Estimating Interrater Reliability

I have been into estimating interrater reliability lately. Here is a good summary: A Comparison of Consensus, Consistency, and Measurement Approaches to Estimating Interrater Reliability by Steven E. Stemler

This article argues that the general practice of describing interrater reliability as a single, unified concept is at best imprecise, and at worst potentially misleading. Rather than representing a single concept, different statistical methods for computing interrater reliability can be more accurately classified into one of three categories based upon the underlying goals of analysis. The three general categories introduced and described in this paper are: 1) consensus estimates, 2) consistency estimates, and 3) measurement estimates. The assumptions, interpretation, advantages, and disadvantages of estimates from each of these three categories are discussed, along with several popular methods of computing interrater reliability coefficients that fall under the umbrella of consensus, consistency, and measurement estimates. Researchers and practitioners should be aware that different approaches to estimating interrater reliability carry with them different implications for how ratings across multiple judges should be summarized, which may impact the validity of subsequent study results.
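
For the CatchBob! message coding, a consensus estimate seems the natural first step. A minimal sketch in R, with hypothetical ratings from two judges, computing percent agreement and Cohen's kappa by hand:

  # Hypothetical ratings from two judges coding the same six messages
  judge1 <- c("position", "direction", "position", "strategy", "position", "off-task")
  judge2 <- c("position", "direction", "strategy", "strategy", "position", "off-task")

  lv  <- c("position", "direction", "signal", "strategy", "off-task", "acknowledgement")
  tab <- table(factor(judge1, levels = lv), factor(judge2, levels = lv))

  po <- sum(diag(tab)) / sum(tab)                       # observed agreement
  pe <- sum(rowSums(tab) * colSums(tab)) / sum(tab)^2   # agreement expected by chance
  c(percent_agreement = po, cohen_kappa = (po - pe) / (1 - pe))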

Impromptu meeting with Stefano

I had a very enthusiastic meeting with Stefano (impromptu and 300MB worth, I would say). Apart from updating me on his future projects, we had a look at some of my data. The most interesting thing (so far) is that the absence of the location awareness tool increases the number of messages the participants sent (the total number of messages, and the position/direction/strategy messages). What is striking: with a less informative medium (without the awareness tool), they sent more messages (coordination keys, as Herbert Clark would say): they not only sent messages about their position but also about their direction as well as strategic information. That means that the users anticipated something: they had to send more information, otherwise the space of interpretation for the others would be too small. That's why they sent messages about their direction + strategy: the others can then better decide what to do. From Sperber and Wilson's point of view, it's all about relevance: participants picked up and sent facts that they perceived as relevant for the task/their purposes!

Research meeting

Today at the lab, we had a meeting with Jean-Baptiste Haué. The project we might carry out with him is about how people maintain a representation of their partners' intents while collaborating. His work is very interesting, mostly ethnographically oriented (French ergonomics/HCI researchers are mostly into this area); he collaborated with EDF and Nissan, plus some neat labs in the US. I am looking forward to discovering more about how he analysed his data (some fruitful tools were shown during the presentation). One of his publications I've put on my reading list: McCall, J.C., Achler, O., Trivedi, M.M., Haue, J.-B., Fastrez, P., Forster, D., Hollan, J.D., Boer, E. (2004) "A Collaborative Approach for Human-Centered Driver Assistance Systems", IEEE Conference on Intelligent Transportation Systems

PhD meeting

Just unsorted thoughts and todo stuff:

  • Players who do not see their partners' locations communicate more about their own position (the same information the awareness tool conveys) as well as about other information not conveyed by the tool (direction and strategy). How, then, can we measure the effects of this additional communication?
  • In order to investigate the mode (text, graph), the position (site/onsite) and the intent (announcement/question/order), I should not work on the total number of messages but on the frequency (for instance Number of orders / Total); see the sketch after this list.
  • Code the map drawn by the player with symbolic names (Coupole, Couloir CM-A, Couloir CM-B...)
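
A trivial sketch of the frequency idea in R (the data frame 'teams' and its columns are hypothetical placeholders):

  # Work on proportions per team rather than raw totals
  teams$order_freq    <- teams$order_msgs    / teams$total_msgs
  teams$question_freq <- teams$question_msgs / teams$total_msgs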

CatchBob usage statistics

I did some rough statistics lately about how CatchBob users manage to communicate through the freehand drawing interface. I coded all those messages using a simple coding scheme that focuses on the content (position, direction, signal strength, strategy, off-task and acknowledgement), the mode (textual or graphical), the position on the map (site-specific or not) and the intent (announcement, question or order). The presence of the location awareness tool has a positive effect on the total number of messages and on the position/direction/strategy messages. There are also more announcement and question messages in the condition with the awareness tool.
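
One quick way to eyeball such differences (a pooled cross-tabulation with a chi-square test, not the group-level variance analyses planned above) could look like this in R, with a hypothetical data frame 'msgs' holding one row per coded message:

  # Cross-tabulate message content by condition and test the association
  tab <- table(msgs$content, msgs$condition)
  tab
  chisq.test(tab)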