The paper "Staging and Evaluating Public Performances as an Approach to CVE" (Steve Benford, Mike Fraser, Gail Reynard, Boriana Koleva and Adam Drozd The Mixed Reality Laboratory, Nottingham), claims that staging public performances can be a fruitful approach to CVE research. The authors describe four experiments in 4 contexts (four different location based games used a art/public performance).
As the authors put it: "For each, we describe how a combination of ethnography, audience feedback and analysis of system logs led to new design insights, especially in the areas of orchestration and making activity available to viewers."
Among the many ways of conducting research (implementation as proof of concept, "demo or die", controlled experiments in the laboratory, theory backed up with mathematical proof...), they propose taking technology out of the lab and creating an "event" (wow, event-based research ;)
And don't forget! CSCP stands for Computer Supported Cooperative PLAY.
This is also a nice paper in the sense that it provides ideas for analyzing mobile collaboration:
Ethnographic studies rely on a variety of data including field notes, photographs and video. As noted above, capturing social interaction in CVEs (i.e. collaborative virtual environments) on video is a difficult task. Resources are often limited so that only one or two viewpoints can be captured, and current analysis tools do not handle multiple synchronized viewpoints at all well. Detailed analysis of sessions that involve tens of participants is even more difficult. In short, it can be time consuming, expensive and frustrating work to analyse videos of sessions in CVEs. Analysis of system logs is also more problematic than it need be. At present, there is no agreed format for log data and no readily available suites of analysis tools. (...) Tools are required to automatically analyse CVE recordings in order to provide researchers with guidance as to where potentially interesting events have taken place.

We have therefore recently developed a scene extraction tool for automatically analyzing 3D recordings. Our current implementation determines interesting scenes based upon the proximity of participants (although it could be extended to account for other factors such as orientation, audio activity, or the identities of key characters). First, it uses a clustering algorithm to group participants on a moment-by-moment basis. It then looks at changes in clusters over time in order to determine on-going scenes. Figure 9 shows an example of its output. In this case, we are looking at a Gantt chart representation of the key scenes in chapter 1 of Avatar Farm (determined with a proximity threshold of 15 meters, the cut-off point for audio communication). Time runs from left to right and the different colours distinguish scenes that were occurring in different virtual worlds. The tool allows the viewer to overlay the paths of different participants through the structure. We see two participants (Role 2 and Role 3) in our example. We propose that tools such as this can assist researchers in analyzing activity in CVEs by enabling them to more easily home in on potentially interesting social encounters.

Sharing across the CVE community: our final observation concerns the sharing of data between researchers. In order to maximize the use of recordings, it will be necessary to share them between different researchers. As these techniques mature, the CVE community needs to agree on common formats for recordings so that we can establish shared repositories of recordings of different events in CVEs.
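The scene extraction idea is simple enough to sketch in code. Below is a minimal, hypothetical Python sketch of the two steps the paper describes (moment-by-moment proximity clustering, then tracking cluster changes over time). The paper only says "a clustering algorithm", so I assume single-linkage grouping over pairwise distances; all names here (cluster_frame, extract_scenes, the frame format) are my own, not from the paper.

```python
# Hypothetical sketch of proximity-based scene extraction; not the authors' code.
from itertools import combinations

PROXIMITY_THRESHOLD = 15.0  # metres: the paper's audio cut-off distance


def cluster_frame(positions, threshold=PROXIMITY_THRESHOLD):
    """Group participants into proximity clusters for one moment in time.

    `positions` maps participant id -> (x, y, z). Single-linkage grouping
    via union-find: two participants land in the same cluster if they are
    chained together by pairwise distances <= threshold.
    """
    parent = {p: p for p in positions}

    def find(p):
        while parent[p] != p:
            parent[p] = parent[parent[p]]  # path halving
            p = parent[p]
        return p

    for a, b in combinations(positions, 2):
        dist = sum((ca - cb) ** 2
                   for ca, cb in zip(positions[a], positions[b])) ** 0.5
        if dist <= threshold:
            parent[find(a)] = find(b)

    clusters = {}
    for p in positions:
        clusters.setdefault(find(p), set()).add(p)
    return {frozenset(c) for c in clusters.values()}


def extract_scenes(frames):
    """Turn moment-by-moment clusters into on-going scenes.

    `frames` is an iterable of (timestamp, positions) pairs in time order.
    A scene starts when a cluster (a set of participants) appears and ends
    when its membership changes. Returns (start, end, participants) tuples,
    ready to be drawn as bars on a Gantt-style chart.
    """
    open_scenes = {}  # cluster -> start timestamp
    scenes = []
    t = None
    for t, positions in frames:
        current = cluster_frame(positions)
        for c in list(open_scenes):       # close scenes whose cluster changed
            if c not in current:
                scenes.append((open_scenes.pop(c), t, c))
        for c in current:                 # open scenes for new clusters
            open_scenes.setdefault(c, t)
    for c, start in open_scenes.items():  # close whatever is still running
        scenes.append((start, t, c))
    return scenes
```

Run over a 3D recording of something like Avatar Farm, each returned tuple would be one bar on the Gantt chart; filtering for scenes containing a given participant would give the "overlay the paths of different participants" view the paper mentions.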