Space, cognition, interaction 3: person-artifact relationships

This is the third blog post in a series of thoughts on the topic “Space, cognition, interaction” that I address in my dissertation. Step 3 is about person-artifact relationships (see step 1 and step 2). Another topic the literature on spatiality addresses is the relationship between people and the artifacts located in the vicinity of the participants in a social interaction. When a speaker talks about an object with a hearer, the two are involved in a collaborative process termed referential communication (Krauss and Weinheimer, 1966). The practice of pointing, looking, touching or gesturing to indicate a nearby object mentioned in conversation, called deictic reference, is essential to human conversation. This spatial knowledge can be used for mutual spatial orientation. Schober (1993) points out that it is easier to build mutual orientations toward a physical space (versus a shared conceptual perspective) because the addressee’s point of view is more easily identified in the physical world. There has been very little research focusing on referential communication in virtual space. Computer-based approaches such as “What You See Is What I See” views have been designed to support this process, but studies show that such tools are not as powerful as deictic hand gestures (Newlands et al., 2002). The authors found fewer deictic acts in computer-mediated interaction; a possible reason for this is the lack of adequate tools. Fraser et al. (2000), for example, found that it is actually more difficult to see where avatars are pointing in a 3D virtual environment than in the real world. Consequently, if we think about the role of mutual location-awareness (MLA), knowing the location of others can allow one to make sense of deictic acts and promote referential communication: by projecting oneself to the known partner’s location, one can infer meaning from deictic references.
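
To make this role of mutual location-awareness a bit more concrete, here is a minimal sketch (in Python, with purely illustrative names, data and geometry, not taken from any of the cited studies) of how a shared virtual environment could resolve a deictic gesture: knowing where a partner stands and the direction in which they point, the system ranks nearby objects by their distance to the pointing ray and picks the closest one.

```python
# Illustrative sketch only: resolving a deictic gesture from a partner's
# known position and pointing direction in a shared 3D space.
from dataclasses import dataclass
import math

@dataclass
class Vec3:
    x: float
    y: float
    z: float

    def sub(self, o):
        return Vec3(self.x - o.x, self.y - o.y, self.z - o.z)

    def dot(self, o):
        return self.x * o.x + self.y * o.y + self.z * o.z

    def norm(self):
        return math.sqrt(self.dot(self))


def resolve_deictic_reference(partner_pos, pointing_dir, objects):
    """Return the shared object lying closest to the partner's pointing ray."""
    length = pointing_dir.norm()
    d = Vec3(pointing_dir.x / length, pointing_dir.y / length, pointing_dir.z / length)
    best, best_dist = None, float("inf")
    for name, pos in objects.items():
        rel = pos.sub(partner_pos)
        t = max(rel.dot(d), 0.0)  # projection on the ray; objects behind the partner get t = 0
        closest = Vec3(partner_pos.x + t * d.x,
                       partner_pos.y + t * d.y,
                       partner_pos.z + t * d.z)
        dist = pos.sub(closest).norm()  # perpendicular distance to the pointing ray
        if dist < best_dist:
            best, best_dist = name, dist
    return best


# Example: the partner stands at the origin and points along the x axis,
# roughly toward the clipboard rather than the toolbox.
objects = {"clipboard": Vec3(2.0, 0.1, 0.0), "toolbox": Vec3(0.0, 0.0, 3.0)}
print(resolve_deictic_reference(Vec3(0.0, 0.0, 0.0), Vec3(1.0, 0.0, 0.0), objects))  # clipboard
```

The point of the sketch is simply that without the partner’s location, the pointing direction alone is ambiguous; it is the combination of “where you are” and “where you point” that makes the deictic act interpretable.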

Moreover, how the spatial environment is used in abstract cognition is a fundamental issue addressed in cognitive psychology (Kirsh and Maglio, 1994; Kirsh, 1995). These authors explain to what extent the space between objects and people is used as a resource in problem solving. According to them, actions like pointing, writing things down, manipulating artifacts or arranging the positions and orientations of nearby objects are examples of how people encode the state of a process or simplify perception. Studies in virtual environments have shown similar results concerning the use of tools in space (Biocca et al., 2001). Biocca and colleagues explored how people organize virtual tools in an augmented environment: users had to repair a piece of equipment in a virtual environment, and the way they used virtual tools showed patterns of simplifying perception and object manipulation (for instance, placing reference material like a clipboard well within the visual field, on their right). MLA should then be seen as another set of resources to augment cognitive processes such as memorization or problem solving.

What is also interesting with regard to human activity is the notion of social navigation (Dourish and Chalmers, 1994), which refers to situations in which a user’s navigation through an information space is guided and structured by the activities of others within that space. Social navigation can be defined as “navigation towards a cluster of people or navigation because other people have looked at something” (Munro et al., 1999, p. 3). This refers to the notion of “social space” inferred from the traces left in the environment (virtual or physical) by people’s activity. As a matter of fact, we all leave signals in social space that can be decoded by others as traces of a previous use: fingerprints, crowds, footsteps, graffiti, annotations and so on. From these cues, other people can infer powerful things: others were here, this was popular, this is where to find something, and so forth. This process takes place in both virtual and physical settings through recommender/voting systems or collaborative filtering. The best-known example of such filtering is Amazon’s recommendation system, which gives us pointers to books that may interest us based on others’ previous purchases.
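
As an illustration of how such traces can be turned into recommendations, here is a minimal sketch of item-to-item co-occurrence filtering, with hypothetical data and function names. It is not Amazon’s actual algorithm; it only shows the general idea of treating other people’s past purchases as footprints that guide a new user.

```python
# Illustrative sketch only: recommend unseen books by how often they were
# bought together with books the user already owns (co-occurrence counts).
from collections import Counter
from itertools import combinations


def recommend(purchase_histories, my_books, top_n=3):
    """Rank books the user does not own by co-purchase frequency with books they do own."""
    co_occurrence = Counter()
    for history in purchase_histories:
        for a, b in combinations(sorted(history), 2):
            co_occurrence[(a, b)] += 1
            co_occurrence[(b, a)] += 1
    scores = Counter()
    for mine in my_books:
        for (a, b), count in co_occurrence.items():
            if a == mine and b not in my_books:
                scores[b] += count
    return [book for book, _ in scores.most_common(top_n)]


# Traces left by other readers, then a recommendation for someone who owns "A".
histories = [{"A", "B"}, {"A", "B", "C"}, {"A", "C"}, {"B", "D"}]
print(recommend(histories, {"A"}))  # -> ['B', 'C']
```

The interest here is less the algorithm itself than the social-navigation reading of it: the “crowd” never appears explicitly, yet its accumulated traces structure what the next user gets to see.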

References:

Biocca, F., Tang, A., Lamas, D., Gregg, J., Gai, P., & Brady, R. (2001). How do users organize virtual tools around their body in immersive virtual and augmented reality environments? Technical report, Media Interface and Network Design Laboratories, East Lansing, MI.

Dourish, P., & Chalmers, M. (1994). Running Out of Space: Models of Information Navigation. In Proceedings of HCI'94: Human Computer Interaction, Glasgow. New York: ACM Press.

Fraser, M., Glover, T., Vaghi, I., Benford, S., Greenhalgh, C., Hindmarsh, J., & Heath, C. (2000). Revealing the Realities of Collaborative Virtual Reality. In Proceedings of Collaborative Virtual Environments (CVE 2000), San Francisco, CA. New York: ACM, pp. 29-37.

Kirsh, D., & Maglio, P. (1994). On distinguishing epistemic from pragmatic action. Cognitive Science, 18, 513-549.

Kirsh, D. (1995). The Intelligent Use of Space. Artificial Intelligence, 73(1-2), 31-68.

Krauss, R. M., & Weinheimer, S. (1966). Concurrent feedback, confirmation, and the encoding of referents in verbal communication. Journal of Personality and Social Psychology, 4(3), 343-346.

Munro, A.J., Höök, K., & Benyon, D. (1999). Footprints in the Snow. In A. Munro, K. Höök and D. Benyon (Eds.) Social Navigation of Information Space (pp.1-14). London: Springer.

Newlands, A., Anderson, A., Thomson, A., & Dickson, N. (2002). Using Speech Related Gestures to Aid Referential Communication in Face-to-face and Computer-Supported Collaborative Work. In Proceedings of the First Congress of the International Society for Gesture Studies, University of Texas at Austin, June 5-8, 2002.

Schober, M. F. (1993). Spatial perspective-taking in conversation. Cognition, 47, 1-24.