Mutual Modeling: the representation of the partner's cognitive state, namely the inferences an individual makes about his/her partner's goals, purposes, intents and understanding.
And I would call 'mutual modeling acts' the interactions that aim at understanding what the partner is up to, will do, aims at or gets from the situation.
I would like to establish a typology of 'mutual modeling acts' through a qualitative analysis of the data collected from CatchBob (annotations written on the tablet PC screen + verbalisations during the group confrontation with the replay). I expect two kinds of acts, but it's hard to operationalize them as variables:
- explicit mutual modeling acts: these are the easy ones because they show up in dialogues or annotations; the clearest occurrences would be statements like "I understood that you/he/she/they wanted to...", "I did not get that you/he/she/they...", "you/he/she/they did not understand that...". There could also be some less clear-cut formulations.
- implicit mutual modeling acts: here is the real challenge! Tracking these will be difficult, but I will use specific indexes drawn from game events, like 'spatial overlap', calculated by counting the number of rooms both partners searched (a minimal sketch of this index follows the list).
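To make the 'spatial overlap' index concrete, here is a minimal sketch, assuming the replay logs can be reduced to the set of rooms each partner searched during a trial. The function name, data format and room labels are my own illustration, not the actual CatchBob log format.

```python
from itertools import combinations

def spatial_overlap(rooms_by_player):
    """Count the rooms searched by more than one partner.

    rooms_by_player: dict mapping a player id to the set of rooms
    he/she searched during the trial (hypothetical format).
    """
    overlapping = set()
    for rooms_a, rooms_b in combinations(rooms_by_player.values(), 2):
        # rooms visited by both members of this pair
        overlapping |= rooms_a & rooms_b
    return len(overlapping)

# Hypothetical example with three players and made-up room labels
trial = {
    "player1": {"roomA", "roomB", "corridor"},
    "player2": {"roomB", "roomC", "corridor"},
    "player3": {"roomC", "roomD"},
}
print(spatial_overlap(trial))  # -> 3 rooms searched redundantly
```

A higher count would suggest redundant searching, i.e. partners failing to infer what the others have already covered.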
Update: gosh, I was certainly tired when I wrote the second part. I think I conflated two things here: the mutual modeling act (like the explicit acts described previously) and the outcome of poor mutual modeling during the task (path overlap, for instance, or redundant actions). So the question of implicit acts of mutual modeling is left open... They might be all the actions carried out by an individual to express his/her goals and the things he/she will undertake next... mmmh, I still have to find examples to operationalize this in CatchBob!