Supporting Ethnographic Studies of Ubiquitous Computing in the Wild by Crabtree, A., Benford, S., Greenhalgh, C., Tennent, P., and Chalmers, M., in Proc. ACM Designing Interactive Systems (DIS 2006). In this paper, the authors draw upon four recent studies to show how ethnographers replay system recordings of interaction alongside existing resources, such as video recordings, to make sense of interactions and eventually assemble coherent understandings of the social character and purchase of ubiquitous computing systems. In doing so, they aim to identify the key challenges that need to be met to support ethnographic study of ubiquitous computing in the wild.
One of the issues here is that ubicomp distributes interaction across a wide range of applications, devices and artifacts. This fosters the need for ethnographers to develop a coherent understanding of the traces of activity: both external (audio and video recordings of action and talk) and internal (logfiles, digital messages...). Ethnographers face additional problems: users of ubiquitous systems are often mobile, often interact with small displays and with invisible sensing systems (e.g. GPS), and interaction is often distributed across different applications and devices. The difficulty then lies in reconciling these fragments to describe the accountable interactional character of ubiquitous applications.
I like the following quote because it expresses the innovation here: the articulation between established methods and what the authors propose:
"Ubiquitous computing goes beyond logging machine states and events however, to record elements of social interaction and collaboration conducted and achieved through the use of ubiquitous applications as well. (...) System recordings make a range of digital media used in and effecting interaction available as resources for the ethnographer to exploit and understand the distinctive elements of ubiquitous computing and their impact on interaction. The challenge, then, is one of combining external resources gathered by the ethnographer with a burgeoning array of internal resources to support thick description of the accountable character of interaction in complex digital environments. "
The article also describes requirements for future tools, but I won't discuss that here (maybe in another post, reflecting our own experience drawn from CatchBob!). Anyway, I share one of the most important concerns they have:
The ‘usability’ of the matter recognizes that ethnographic data, like all social science data, is an active construct. Data is not simply contained in system recordings but produced through their manipulation: through the identification of salient conversational threads in text logs, for example, through the extraction of those threads, through the thickening up of those threads by synchronizing and integrating them with the contents of audio logs and video recordings, and through the act of thickening creating a description that represents interaction in coherent detail and makes it available to analysis.
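To make that "thickening up" concrete: here is a minimal sketch (with invented timestamps and messages, nothing from the actual studies) of one of the manipulations they describe, mapping extracted text-log entries onto a video recording's own timeline so the two resources can be read side by side:

```python
from datetime import datetime

# Hypothetical log entries extracted from a system text log:
# (wall-clock timestamp, sender, message text)
log = [
    ("2006-05-10 14:02:11", "player1", "where are you?"),
    ("2006-05-10 14:02:40", "player2", "behind the library"),
    ("2006-05-10 14:05:03", "player1", "ok, coming"),
]

def to_video_offset(ts, video_start):
    """Map a log timestamp onto the video's clock, as seconds from its start."""
    t = datetime.strptime(ts, "%Y-%m-%d %H:%M:%S")
    return (t - video_start).total_seconds()

# When the camera started recording (also made up for the example).
video_start = datetime(2006, 5, 10, 14, 0, 0)

# Build an annotation track: each message placed at its offset in the video.
annotations = [(to_video_offset(ts, video_start), who, text)
               for ts, who, text in log]

for offset, who, text in annotations:
    print(f"{offset:7.1f}s  {who}: {text}")
```

The synchronization step is trivially simple here on purpose; in practice the hard part is exactly what the authors point out, i.e. deciding which threads are salient in the first place.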
Why do I blog this? This paper describes a relevant framework of methods that I use, even though I would argue that my work is a bit more quantitative, relying on mixed methods (ethnographic and quantitative) applied to the same array of data (internal and external). It's full of relevant ideas and insights about this, and about how effective tools could be designed to achieve this goal.
What is strange is that they do not spend much time on one of the most powerful uses of the replay tool: using it as a source for post-activity interviews with participants. This is a good way to use external traces to foster richer discussion. In CatchBob! this proved very efficient for gathering information from the users' perspective (even though it's clearly an a posteriori reconstruction). This method is called "self-confrontation" and is very common in the French tradition of ergonomics (the work of Yves Clot or Jacques Theureau, mostly in French).
Besides, there are some good connections with what we did and the problems we had ("the positions recorded on the server for a player are often dramatically different from the position recorded by the GPS on the handheld computer.") or:
the use of Replayer also relies on technical knowledge of, e.g., the formats of system events and their internal names, and typically requires one of the system developers to be present during replay and analysis. This raises issues of how we might develop tools to more directly enable social science researchers to use record and replay tools themselves and it is towards addressing these and related issues that we now turn.