Research

[Research] How to study mutual modeling in a mobile game

Where we are: we have CatchBob!, a collaborative mobile game platform running on iPAQs and PCs. We want to use it as a testbed to study collaborative processes, namely mutual modeling and division of labor. Mutual modeling = mutual belief/mutual knowledge/mutual understanding = awareness (evidence) + assumption/inference = a representation of the partner's cognitive state (knowledge, goals, purposes, intents, understanding).

Group modeling = a representation of the group itself: task-level features (e.g. the group's representation of the problem state) and interaction-level features (e.g. who is more active...).

Usual methods to study mutual modeling:

1. Ask questions explicitly during the collaboration process (on-task interviews). Methodological BIAS: participants hence pay more attention to their partners than they normally would.

2. Find behavioral cues during the game: for instance, if A goes to a place B already visited, or if C repeats actions that were already performed (when it is not relevant to perform the action several times). The point is to find signs that a participant did something because he misunderstood what the others did or wanted him to carry out.

3. Ask questions after the game: replay of specific episodes, self-confrontation (one participant or a focus group). The material shown should be as rich as possible (paths, users' actions, users' interactions...) so that they can elaborate on it and explain their behavior. We can also ask them to DRAW things (paths, interaction locations...). BIAS (inherent to stimulated recall): what participants explain is different from what really happened.

4. During the game: it is possible to assign specific roles to each participant, or to give each of them a different piece of knowledge.
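Point 2 above could be automated from the logfiles. A minimal sketch, assuming a simplified log of (player, timestamp, zone) entries (the real CatchBob! log format is richer and not shown here): flag events where a player enters a zone a partner already covered, as a candidate cue of a mutual-modeling breakdown.

```python
# Hypothetical cue detector for point 2: redundant zone visits.
# Log entries are (player, timestamp, zone) tuples -- a simplification
# of the actual CatchBob! logfiles.

def redundant_visits(log):
    """Flag events where a player enters a zone a partner already covered."""
    visited = {}  # zone -> first player who entered it
    cues = []
    for player, t, zone in sorted(log, key=lambda e: e[1]):
        if zone in visited and visited[zone] != player:
            # A potential sign that `player` did not know the zone was covered
            cues.append((player, t, zone, visited[zone]))
        visited.setdefault(zone, player)
    return cues

log = [("A", 10, "z1"), ("B", 25, "z2"), ("A", 40, "z2")]
print(redundant_visits(log))  # A re-enters z2, already covered by B
```

Such automatically flagged episodes could then feed the episode selection for the self-confrontation sessions of point 3.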

-> Solutions 2 and 3 are the most interesting; we are working on a CatchBob! replay tool and log parser to this end. The replay tool provides an interface to navigate through the game and to edit and mix the history of collaborative episodes/interactions.

Replay possibilities (ask participants why they found an interaction episode interesting, why they did what they did...):
- selective/all: the experimenter OR the subjects choose specific episodes to comment on
- individual/group replay
- own replay/mixed replay: the group's own episodes or another group's episodes (ask them whether they recognize each episode as part of their group's history, and which elements of the replay determined their answer)
- replay mode: mono/stereo. In mono mode, A sees his/her own actions while B and C see A's actions. In stereo mode, they all see the actions carried out by all three persons.
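The mono/stereo distinction can be sketched as a simple filter over the event stream. This is only an illustration of the idea; the event tuples and function names are hypothetical, not the replay tool's actual API.

```python
# Sketch of the mono/stereo replay modes: in mono, the replay shows
# only one designated player's actions; in stereo, everyone's actions.

def visible_events(events, mode, focus_player=None):
    """events: list of (player, timestamp, action) tuples."""
    if mode == "stereo":
        return list(events)
    if mode == "mono":
        return [e for e in events if e[0] == focus_player]
    raise ValueError("mode must be 'mono' or 'stereo'")

events = [("A", 1, "move"), ("B", 2, "annotate"), ("C", 3, "move")]
print(visible_events(events, "mono", focus_player="A"))
```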

Episode selection: how to select relevant episodes. Criteria for selecting interesting episodes: rich/poor interaction, conflict resolution, explanations, "I got it" episodes, silent/lost episodes.

Read here: The fundamental attribution error: the underlying tendency to infer internal rather than external causes (= people are responsible for what they do and for their fate). Four interpretations:

1. Need for control (otherwise the phenomenon of learned helplessness). 2. Eliminating chance from the possible causes of an event (just-world belief → we get what we deserve; see Lerner's work on this subject). 3. The dominant cultural model in Western cultures → centered on the individual = privileges internal factors. 4. Sub-cultural variations (the best socially integrated people attach value to such explanations).

The fundamental attribution error is also modulated by other biases:

Self-serving bias: success = internal causes / failure = external causes: preservation of self-esteem. But there are also more cognitive accounts: people expect to succeed at what they undertake. Expected events → internal attributions; unexpected events → external attributions.

Actor-observer bias: the actor more often perceives his own behavior as a response to the situation. Actors are focused on the situation whereas observers focus more on the individual. The actor explains himself in terms of reasons whereas the observer explains in terms of causes.

[MyResearch] Replay tools in video games

I am entering the field of video game replay tools so that I can extract ideas for my own interaction analysis in CatchBob! (which, by the way, now has a Google ranking of 67). Here is BWChart, a tool meant to analyze the list of actions from StarCraft and WarCraft replays. These games seem to have a community interested in replay tools. Here are a few snapshots of this tool:

This replay tool is used by teams to improve their strategies. Variables taken into account:

Actions Per Minute = (number of actions recorded in the replay file / game duration in seconds) * 60. The first 80 seconds of the game are discarded.

All actions that the BW engine needs to actually replay the game, and only those actions. These actions can be mouse clicks or keyboard-based. For example, actions that are recorded:

- selection of a unit or building
- moving a unit or a lifted Terran building
- telling a unit to attack, stop, hold position, or patrol
- telling an SCV to mine
- casting a spell (psi storm, plague, restore, etc.)
- training a unit (e.g. you have a CC selected and press 's' to train an SCV)
- researching a technology
- upgrading anything (Terran infantry weapons, armor, etc.)
- summoning an archon
- evolving a hydralisk into a lurker
- building an add-on (machine shop, comsat, control tower, etc.)
- morphing a creep colony into a sunken or spore colony
- all hotkeys
- etc.
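The APM formula above can be sketched as follows, assuming a parsed replay is just a list of action timestamps in seconds (the real BWChart reads the binary .rep format, which is not shown here; whether the 80-second warm-up is also subtracted from the duration is my assumption).

```python
# Hypothetical APM computation over a list of action timestamps.

WARMUP = 80  # BWChart discards the first 80 seconds of the game

def apm(action_times, game_duration):
    """Actions per minute, ignoring actions in the warm-up period."""
    actions = [t for t in action_times if t >= WARMUP]
    effective = game_duration - WARMUP  # assumption: duration excludes warm-up
    if effective <= 0:
        return 0.0
    return len(actions) / effective * 60

times = [81, 82, 90, 100, 110, 115]
print(apm(times, game_duration=140))  # 6 actions over 60 s -> 6.0
```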

[Research] Google on your Goggles?

Howard Rheingold, writing in TheFeature, about cool heads-up displays:

Heads-up displays, first invented for fighter aircraft more than a decade ago, have morphed into "augmented reality" goggles that project an informational overlay over the real world as the goggle-wearer navigates through it. Information about people, places, and things can be seen only by the wearer, displayed automatically or on command. (...) the Information and Navigation Systems Through Augmented Reality (INSTAR) prototype enables the driver to see a transparent arrow floating in space in real time, signaling exactly where to turn en route to the designated destination, or warning of a pedestrian or oncoming bus.

[Research] FOAF/Semantic Web event in September?

With Roby, we are considering writing something about our French web aggregator rss4you for this FOAF/Semantic Web Workshop.

This workshop on FOAF, social networking and the Semantic Web provides a first chance to discuss the unusual combination of perspectives - academic and scientific, engineering, social, legal and business - drawn together by these trends. The workshop aims to bring together for the first time researchers interested in the effects, analysis and application of social networks on the (Semantic) Web as well as practitioners building applications and infrastructure. The workshop will also try to give a snapshot of current developments, as well as setting a roadmap for the future of both FOAF and social networking - especially in the context of the Semantic Web.

[MyResearch] Collaborative Virtual Environment Interactions' Analysis

Users' interactions with an environment (virtual, mobile...) produce a wide range of data. Thanks to multi-user technologies, complex interactions occur. It is possible to aggregate the interaction data from the logfiles into a set of high-level indicators. This is useful for environment designers.

Interaction analysis process (done by an interaction analysis tool):
- data selection
- data aggregation and processing
- production of X indicators/variables that indicate "something" (like, for instance, the different variables in CatchBob: performance, process, qualitative aspects...)
- use of these indicators: to self-confront the users with them (to foster verbalization), or to inform the teacher in a CSCL environment...
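The steps above can be sketched as a small pipeline. The event fields and the three indicators are hypothetical examples, not the actual CatchBob! variables.

```python
# Sketch of the interaction analysis process: select raw events,
# aggregate them, and produce a few high-level indicators.

from collections import Counter

def indicators(events):
    """events: list of dicts with 'player', 'type', 'duration' keys."""
    moves = [e for e in events if e["type"] == "move"]        # data selection
    per_player = Counter(e["player"] for e in events)         # aggregation
    return {                                                  # indicators
        "total_move_time": sum(e["duration"] for e in moves),
        "most_active": per_player.most_common(1)[0][0],
        "messages": sum(1 for e in events if e["type"] == "msg"),
    }

events = [
    {"player": "A", "type": "move", "duration": 12},
    {"player": "A", "type": "msg", "duration": 0},
    {"player": "B", "type": "move", "duration": 5},
]
print(indicators(events))
```

The resulting dictionary is the kind of summary one could show back to the users during self-confrontation, or to a teacher monitoring a CSCL environment.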

[MyResearch] Sport Analysis Software

Via Boing Boing: interesting news about "software that can identify the significant events in live TV sports broadcasts [that] will soon be able to compile programmes of highlights without any help from people." The point is to pick out the key events of a game, which can take place at predictable locations.

Ahmet Ekin, a computer scientist from the University of Rochester in New York, may be close to solving that problem. He has designed software that looks for a specific sequence of camera shots to work out whether a goal has been scored.

For example, player close-ups often indicate a gap in play when something important has happened, and slow-motion footage is another useful cue. If Ekin's software sees a sequence of player close-ups combined with shots of the crowd and pictures in slow motion that lasts between 30 and 120 seconds, it decides that a goal has been scored, and records the clip in the highlights.
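The heuristic as described can be sketched in a few lines. The shot labels and the function are my own simplification of the article's description, not Ekin's actual system.

```python
# Sketch of the goal-detection heuristic: a run consisting only of
# close-up, crowd, and slow-motion shots lasting 30-120 seconds is
# taken as evidence of a goal.

GOAL_SHOTS = {"closeup", "crowd", "slowmo"}

def looks_like_goal(shots):
    """shots: list of (label, duration_seconds) for consecutive shots."""
    if not shots:
        return False
    if not all(label in GOAL_SHOTS for label, _ in shots):
        return False
    total = sum(d for _, d in shots)
    return 30 <= total <= 120

seq = [("closeup", 15), ("crowd", 10), ("slowmo", 20)]
print(looks_like_goal(seq))  # 45 s of goal-type shots -> True
```

This kind of shot-grammar heuristic is interesting for CatchBob! too: the replay tool could similarly flag candidate episodes from patterns in the interaction stream rather than from manual inspection.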

[MyResearch] phd thesis phasing

1. CatchBob! 1:
- Briefing: players explain their strategies (5 minutes), on video
- Game: lots of data, MM indexes?
- Coffee break
- Self-confrontation with the replay: video, open > semi-structured interview, to see if the task is interesting and the replay/methods are OK

2. CatchBob! 2: experiments with a new task
3. CatchBob! 3?
4. Formal model:
a. predictive model (simulation): symbolic (hard) or a virtual agent (no real cognitive validity but plausible behavior)
b. descriptive model (ontology)
c. analysis model: detection of critical elements (for instance: A was not aware of B's position, so an agent should inform A about B's position)
5. Apply this model in another context: to D. Lanier's work or a VRML world that replicates the EPFL campus

[MyResearch] Study about Understanding Virtual Team Development

Understanding Virtual Team Development: An Interpretive Study by Suprateek Sarker and Sundeep Sahay.

In this paper, we develop an understanding of how virtual teams develop over time by inductively studying communication transactions of 12 United States-Canadian student virtual teams involved in ISD. Our analysis is based upon two influential streams of social science research: (1) interaction analysis, which aided in the examination of the micro-processes of communication among members of a virtual team, and (2) structuration theory, which provided a meta-framework to help link the microlevel communication patterns with the more macro-structures representing the environmental context as well as the characteristics of teams over time. Based on our interpretation of the communication patterns in the virtual teams, we propose a theoretical model to describe how virtual teams develop over the life of a project, and also attempt to clarify how the concepts of communication, virtual team development, and collaboration are related.

[MyResearch] Finding a task for a locative media experiment

I am trying to figure out a task/scenario for a locative media experiment. The PLAYLab has an interesting approach:

We have a rigorously theoretical framework, as well as a speculative and design-oriented approach when designing gaming applications. Observations of live role-playing games provide inspiration and an excellent platform for thinking about new gaming interfaces that are both ubiquitous and tangible.

The "narrative" thing does not interest me that much, but the idea of using live RPG observations as a starting point is relevant.

[Research] Getting your socks wet: Augmented Reality Environmental Science

Getting your socks wet! is a cool mobile learning project.

As simulations go from the desktop to portable devices, we hope to harness the unique affordances of handhelds including: (1) portability –can take the computer to different sites and move around within a site; (2) social interactivity – can exchange data and collaborate with other people face to face; (3) context sensitivity– can gather data unique to the current location, environment, and time; (4) connectivity – can connect handhelds to data collection devices, other handhelds, and to a common network; (5) individuality – can provide unique scaffolding that is customized to the individual’s path of investigation. A handheld learning environment might capitalize on this ability to bridge real and virtual worlds resulting in augmented reality simulations, simulations that layer virtual context on top of the real-world.