User Experience

Kevin Kelly on "street use"

Following William Gibson's quote, Kevin Kelly now has a blog page about "Street Use":

This site features the ways in which people modify and re-create technology. Herein a collection of personal modifications, folk innovations, street customization, ad hoc alterations, wear-patterns, home-made versions and indigenous ingenuity. In short -- stuff as it is actually used, and not how its creators planned on it being used. As William Gibson said, "The street finds its own uses for technology." I welcome suggestions of links, and contributions from others to include in this compendium. -- KK

Some examples (shovel pan and dashboard oven):

Why do I blog this? It seems that Michel de Certeau is very trendy lately. I already quoted Luce Giard, who summarized de Certeau's work:

Michel de Certeau’s social philosophy was based on the notion of détournement and collage (...) What was at stake for him was the way people use some readymade objects, the way they organize their private space, their office, or their working-place, the way they “practice” their environment and all public space available to them (shopping malls, town streets, airports and railway stations, movie theatres, and the like). By so doing, Certeau focused his reflection on the ordinary “practices” of every man and woman in his/her everyday life.

AOL data release and data mining freaks

It seems that data mining researchers/hackers have been going crazy about the recent AOL release of tons of data. This "A chance to play with big data" blog post gives some hints about it:

Second, the new AOL Research site has posted a list of APIs and data collections from AOL.

Of most interest to me is the data set of "500k User Queries Sampled Over 3 Months" that apparently includes {UserID, Query, QueryTime, ClickedRank, DestinationDomainUrl} for each of 20M queries. Drool, drool!

Update: Sadly, AOL has now taken the 500k data set offline. This is a loss to the academic research community which, until now, has had no access to this kind of data.
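To get a sense of what researchers were drooling over, here is a minimal sketch of a parser for the five fields listed above ({UserID, Query, QueryTime, ClickedRank, DestinationDomainUrl}). The tab-separated layout and the sample line are assumptions for illustration, not the actual AOL file format:

```python
from typing import NamedTuple, Optional

class QueryLogEntry(NamedTuple):
    user_id: str
    query: str
    query_time: str
    clicked_rank: Optional[int]
    destination_url: Optional[str]

def parse_line(line: str) -> QueryLogEntry:
    """Split one (assumed) tab-separated log line into the five fields.

    The click fields are left empty when the user did not click a result,
    so they are parsed as optional.
    """
    fields = line.rstrip("\n").split("\t")
    user_id, query, query_time = fields[0], fields[1], fields[2]
    rank = int(fields[3]) if len(fields) > 3 and fields[3] else None
    url = fields[4] if len(fields) > 4 and fields[4] else None
    return QueryLogEntry(user_id, query, query_time, rank, url)

# A made-up line in the assumed format:
entry = parse_line("4417749\tbest dog food\t2006-03-01 11:58:51\t2\texample.com")
```

With 20M such lines, even this trivial per-user grouping is what makes the data set both valuable to researchers and a privacy hazard.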

There's also a NYT column about it:

A list of 20 million search inquiries collected over a three-month period was published last month on a new Web site (research.aol.com) meant to endear AOL to academic researchers by providing several sets of data for study. AOL assigned each of the users a unique number, so the list shows what a person was interested in over many different searches.

The release of the data shines a light on how much information people disclose about themselves, phrase by phrase, as they use search engines.

Digital kids can't ward off ennui

Some results from a Los Angeles Times/Bloomberg poll are worth reading:

a large majority of the 12- to 24-year-olds surveyed are bored with their entertainment choices some or most of the time, and a substantial minority think that even in a kajillion-channel universe, they don't have nearly enough options. (...) A signature trait of those surveyed is a predilection for doing several things at the same time (...) Young people multi-task, they say, because they are too busy to do only one thing at a time, because they need something to do between commercials or, for most (including 64% of girls 12 to 14), it's boring to do just one thing at a time. (...) Throughout Hollywood, the race is on to develop entertainment that captures the attention of this distracted generation (...) Despite the technological advances that are changing the way entertainment is delivered and consumed, good, old-fashioned word of mouth — with a tech twist, thanks to text messaging — continues to be one of the most important factors influencing the choices that young people make. (...) Yet a surprisingly high number of teenage boys (58%) and even more teenage girls (74%) said they were offended by material they felt disrespected women and girls.

The part about continuous partial attention is interesting too:

"It's like being in a candy store," said Gloria Mark, a UC Irvine professor who studies interactions. between people and computers. "You aren't going to ignore the candy; you are going to try it all."

Mark, who has studied multi-tasking by 25- to 35-year-old high-tech workers, believes that the group is not much different from 12- to 24- year-olds, since both groups grew up with similar technology. She frets that "a pattern of constant interruption" is creating a generation that will not know how to lose itself in thought.

"You know the concept of 'flow'?" asked Mark, referring to an idea popularized by psychologist Mihaly Csikszentmihalyi about the benefits of complete absorption and focus. "You have to focus and concentrate, and this state of flow only comes when you do that Maybe it's an old-fogy notion, but it's an eternal one: Anyone with great ideas is going to have to spend some time deep in thought."

InfoViz on PSP

Via ARTcade, this Chromo project by Protein® is intriguing. It's meant to be "a colour clock that helps your body understand what time it is". The thing is that they released a prototype for the PSP:

PSP Chromo™ is a colour clock for the PlayStation Portable that helps your body understand what time it is. Based on the original Chromo™ concept, the PSP version adds a subtle indicator for time to the main menu background.
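Protein® doesn't say how Chromo™ actually maps time to colour; a naive guess, sweeping the hue wheel over 24 hours, might look like this (the mapping is entirely my assumption, not the actual Chromo algorithm):

```python
import colorsys

def time_to_rgb(hour: float) -> tuple:
    """Map a time of day (0-24h) to an RGB colour by sweeping the hue wheel.

    Only a guess at how a 'colour clock' might work: the ambient colour
    drifts continuously through the day, so your body can read the time
    peripherally without reading digits.
    """
    hue = (hour % 24.0) / 24.0          # 0.0-1.0 around the hue circle
    r, g, b = colorsys.hsv_to_rgb(hue, 1.0, 1.0)
    return (round(r * 255), round(g * 255), round(b * 255))

# In this mapping midnight is pure red, 8h green, 16h blue.
```

On the PSP the result would simply tint the main menu background, which is what makes it an ambient rather than a focal display.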

Why do I blog this? it's a curious way to turn a gaming device into an ambient display.

New 3D printing practices

An intriguing article in the WSJ by William Bulkeley recently addressed new practices related to 3D printers. After briefly describing the process (the shooting of plastic particles and glue, or an ultraviolet laser passing over a liquid resin bath, hardening a layer of plastic into a computer-generated shape), the article turns to consumer uses:

"Now the technology is reaching ordinary consumers -- even young ones. SolidWorks, a U.S. unit of Dassault Systemes SA, a French maker of design software, plans to start up a new business called Cosmic Modelz that will allow kids to use the technology to create their own customized action-figures. Children can design a figure using SolidWorks' Cosmic Blob software on their home PCs, then go to a Web site run by 3D printer-maker Z Corp. and order their figures to be "printed" for $25 to $50. It will be kind of an electronic version of the Build-a-Bear Workshop concept where children create customized teddy-bears. (...) Some designers use 3D printing as a communications tool (...) A number of U.S. companies say they use "3D faxing" to send designs to 3D printers at factories in Asia so manufacturing engineers have a clearer idea of what they're supposed to build. (...) At Walt Disney Co.'s Pixar Animation Studios, animators used a Z Corp. machine to make 250 models of "Toy Story" characters for a museum display. "

Why do I blog this? because the new uses of 3D printing that pop up here and there are more and more intriguing (the advent of a spime world?). And when applications let kids 3D-print their own characters, it's a landmark for sure!

Kids "cyber literacy"

The BBC on kids "cyber literacy"

Computer literacy is increasingly seen as an essential skill for children. But what is the best age to introduce them to computers and does it give them a head-start? (...) Worldwide research on very young children and their use of IT is limited, but one recent report from Sheffield University in the UK called Digital Beginnings makes for interesting reading.

For instance by the age of four, 45% of children have used a mouse to point and click, 27% have used a computer on their own at home, rising to 53% for six-year-olds, and 30% have looked at websites for children at home.

The Child Computer Interaction Group (ChiCI) studies the dynamic relationship between children and computers and feels that children should not start using computers too early in their development.

ChiCI's Janet Read says: "My own opinion is that 18 months isn't a good age. It's a little bit ridiculous to think of an 18-month-old child sat in front of a traditional computer. That's not to say there might not be technologies that are adapted to them in the future, but the traditional keyboard, box, monitor and mouse doesn't seem to fit a child very well."

Janet Read says: "I wouldn't say that children who use computers would definitely get a head start. Some of these children would have been pushed in front of a computer like they would be pushed in front of a TV and so they're getting either the wrong sort of stimulation or no stimulation of any value, because it's quite easy to be entertained on a computer and not necessarily gain any value."

Museolab: museum technology testing

Lyon's future museum, the "Musée des Confluences" (architecture by Coop Himmelblau), has a research structure called Museolab that aims at inventing, experimenting with and validating technologies and services that would improve museum visitors' experience (better interaction with, and understanding of, the exhibits). Museolab tests the technologies, which are then validated at the museum. What they are working on is pretty close to current trends: personalization according to a visitor's profile, learning devices based on the visitors' paths and actions, use of RFID tags...

One of the intriguing projects they have is called "La Malle à Objets": using smaller versions of objects exhibited in the museum, people can place one close to a device that then gives them information about it. I am definitely not an expert in museum technologies, but it's interesting to see how tangible interfaces are also pervading this kind of setting.

Norman on Study first, design second or vice versa

Also in the last ACM Interactions issue, Donald Norman is conscientiously shifting from his past stance ("study first, design second") to a pragmatic take: "for many projects the order is design, then study". And this for several reasons:

Once a project is announced, it is too late to study what it should be—that's what the announcement was about. If you want to do creative study, you have to do it before the project's launch. (...) Most projects are enhancements of preexisting projects. Why do we have to start studying the users all over again? Haven't we already learned a lot about them? (...)

Our continual plea for up-front user studies, field observations, and the discovery of true user needs is a step backwards: This is a linear, inflexible process inserted prior to the design and coding stages. We are advocating a waterfall method for us, even as we deny it for others. Yes, folks. By saying we need time to do field studies, observations, rapid paper prototypes and the like, we are contradicting the very methods that we claim to be promoting.

So what's the point?

Field studies, user observations, contextual analyses, and all procedures that aim to determine true human needs are still just as important as ever—but they should all be done outside of the product process. This is the information needed to determine what product to build, which projects to fund.

Usability testing is like Beta testing of software. It should never be used to determine "what users need." It is for catching bugs, and so this kind of usability testing still fits the new, iterative programming models, just as Beta testing for software bugs fits the models. (...) UI and Beta testing are meant simply to find bugs, not to redesign.

So let's separate the field and observational studies, the conceptual design work, and the needs analyses from the actual product project. We need to discover what users need before the project starts; for once started, the direction has already been determined. We need to embrace rapid, iterative methods.

Why do I blog this? This is of interest to me because I faced similar issues when working with game designers: articulating field studies and usability tests with the game design process is often tough, and what Norman describes should be taken into account. Besides, certain people often confuse usability tests with field studies, and it's pertinent to see how Norman clarifies the distinction.

Mobile navigation support for pedestrians

The last issue of ACM Interactions is specially devoted to "gadgets". Though I won't argue about this topic name, there is an interesting paper about mobile navigation support entitled "Mobile navigation support for pedestrians: can it work and does it pay off?" by Manfred Tscheligi and Reinhard Sefelin. While there are already a number of devices and services that support navigational tasks for drivers, the market for similar applications for pedestrians is still nascent. The authors think this sort of service can work "if three prerequisites are fulfilled: consideration and integration of landmarks as a means of navigation, more and real consideration of the context of use, provision of content that goes beyond navigational information."

The part that I am mostly interested in is the one about the fact that designers should take the user's context into consideration:

Whereas drivers of cars are mostly occupied with only one task, pedestrians usually have to complete different secondary tasks. Often the navigation is a secondary task, while the user’s primary task is the exploration of a city or of a museum. Moreover, we cannot always expect that users in complex environments (railway stations, airports, hospitals) will be able to use a mobile phone or PDA with their hands. They might be carrying luggage, which would prevent them from using a mobile phone without a hands-free set; they might not want to attract the attention of other people; or they might need their hands to use a device such as crutches. (...) Another aspect related to the users’ context is the fact that tourists and hikers often want to carry paper-based guides in their hands, which precludes the use of electronic devices.

Why do I blog this? because this kind of study brings back down to Earth some of the people who are designing totally tech-driven services that no pedestrian will be able to use because of their lack of context-relevance.

Awareness and Interruptions

Dabbish, L., Kraut, R. (2004). Controlling Interruptions: Awareness Displays and Social Motivation for Coordination, in Proceedings of the 2004 ACM Conference on Computer Supported Cooperative Work. ACM Press: Chicago, IL, pp. 182-191. The paper addresses the notion of awareness from an interesting angle: how awareness displays might interrupt and then impact people's activity (leading to performance problems). The authors used a very simple game to investigate whether "team membership influences interrupters' motivation to use awareness displays and whether the informational-intensity of a display influences its utility and cost".

Results indicate interrupters use awareness displays to time communication only when they and their partners are rewarded as a team and that this timing improves the target's performance on a continuous attention task. Eye-tracking data shows that monitoring an information-rich display imposes a substantial attentional cost on the interrupters, and that an abstract display provides similar benefit with less distraction.

This study has direct implications for design:

To balance the tradeoff between the amount of information presented and the incentive to use that information, electronic communications systems could regulate the awareness information they provide based on an interrupter’s inferred motivation to use that information. For example, in designing a corporate instant messaging client, one could apply these results by presenting a workload awareness display of a target’s activities only to people internal to the user’s project or company, and no such display to people outside the company.

Currently, the “away” and “busy” messages which various instant messaging clients use are too temporally coarse to provide sufficient information for synchronizing interruptions. (...) Displaying information about a remote collaborator’s workload helps both parties if that information is in an easy to process format and the potential interrupter has incentive to be polite.
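The gating policy Dabbish and Kraut suggest for an IM client could be sketched like this (a hypothetical toy with made-up names; the paper proposes the policy, not this code):

```python
from dataclasses import dataclass

@dataclass
class Contact:
    name: str
    team: str

def choose_display(viewer: Contact, target: Contact) -> str:
    """Pick which awareness display to show a potential interrupter.

    Hypothetical sketch of the design implication above: only people who
    share the target's team (and so are rewarded for timing interruptions
    politely) get a workload display; outsiders get none. The abstract
    display is preferred because, per the eye-tracking result, it gives
    similar benefit at a lower attentional cost than an info-rich one.
    """
    if viewer.team == target.team:
        return "abstract-workload"
    return "none"

alice = Contact("alice", "ux-team")
bob = Contact("bob", "ux-team")
eve = Contact("eve", "external")
```

The point is that the display shown is a function of the interrupter's inferred motivation, not just of the target's status.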

Why do I blog this? because my research is about studying how certain awareness tools (bringing mutual-location awareness) influence collaboration in terms of producing mutual intelligibility. Taking interruptibility into account might be an issue; however, the activity I study is less continuous, so interruptions are less important.

Appropriate tangible interactions

Lately, I've been thinking a lot about tangible interactions (because of the Wii and certain projects here and there). Wired News also addressed that issue, focusing on some very important questions:

But do such physical motion-sensing controllers really signal the beginning of an emerging trend?

"The big question is whether folks can design compelling games using them," said MacIntyre. "Motion-sensing controllers really capture people's imaginations, but no matter how mundane traditional game controllers are, they have the advantage of precision and lots of simultaneous channels of input, whereas the others can only sense a smaller number of relatively crude and imprecise channels. The former makes for great demos because anyone can pick it up, but the games often lack depth because it's hard to support skillful play. The latter, on the other hand, are hard to learn but support expert play really well." (...) "Look at the two tennis games -- AR Tennis and the Wii tennis game. I don't think either make for good games for folks who want to play for many hours; AR Tennis is using a tiny screen that you have to hold still and not move too fast, and the Wii game doesn't appear to let you do much more than swing. It doesn't track position, just motion, so you wouldn't be able to move your character or control things like volleys. Perhaps sports games are not the right target, since such games make people want to 'play the sport,' and require lots of input. (Dance Dance Revolution), for example, is quite good and is based on the four foot buttons. (The game's developers) manage to simulate the essence of dancing and even let people appropriate the game for 'real' dancing.

Why do I blog this? because tangible interactions still need to be explored in terms of their use / the grammar of actions that would be appropriate for engaging users in playful and usable interactions.

Cords, wires, outlets and furniture

The WSJ has a curious piece about how cords, devices and outlets can be managed by furniture designers:

As more consumers buy gadgets like cellphones and MP3 players that need frequent recharging, manufacturers are offering new ways to manage the tangle of cords, devices and outlets. Their solution: A handful of makers are equipping nightstands and coffee tables with dedicated storage spaces to hide cords and electronics from view, and building power strips right into the furniture. (...) Just as users have had to avoid spilling drinks around the computer, bringing technology to everyday coffee tables and nightstands could create another challenge. "Until they make laptops and cellphones that are waterproof, we will need to be careful," says Mr. Behar.

And electronics makers advise against recharging devices -- a process that generates heat -- in an enclosed space. Some furniture makers have addressed the issue: The eNook, for one, has ventilation holes on the sides.

Why do I blog this? it's interesting to take the context into account: new devices reshape the environment, and furniture is then redesigned accordingly. It is thus important to think about what the place/room of the future would be given these sorts of changes.

SignalPlay: a Networked Gestural Sound Interface

Project Ambient is a compilation of UC Irvine grad school projects. It's focused on the design of ambient displays that would go beyond the dichotomy of peripheral and focal using the "foveal" metaphor: embedding interactions physically in space. One of the projects I like in this list is SignalPlay by Amanda Williams and Eric Kabisch:

When computation moves off the desktop, how will it transform the new spaces that it comes to occupy? How will people encounter and understand these spaces, and how will they interact with each other through the augmented capabilities of such spaces? We have been exploring these questions through a prototype system in which augmented objects are used to control a complex audio 'soundscape.' The system involves a range of objects distributed through a space, supporting simultaneous use by many participants. We have deployed this system at a number of settings in which groups of people have explored it collaboratively. Our initial explorations of the use of this system reveal a number of important considerations for how we design for the interrelationships between people, objects, and spaces.

SignalPlay is a sensor-based interactive sound environment in which familiar objects encourage exploration and discovery of sound interfaces through the process of play. Embedded wireless sensors form a network that detects gestural motion as well as environmental factors such as light and magnetic field. Human interactions with the sensors and with each other cause both immediate and systemic changes in a spatialized soundscape. Our investigation highlights the interplay between expected object-behavior associations and new modes of interaction with everyday objects.

More about that in this paper: SignalPlay: Symbolic Objects in a Networked Gestural Sound Interface

Why do I blog this? because it addresses a phenomenon I am interested in as a researcher: how pervasive computation transforms spaces. It's also connected with the blogject concept.

Google Earth + SketchUp (2)

Tim O'Reilly posted his thoughts about the added value for Google of having bought Sketch-Up (the 3D modeling tool):

Google Maps has more public reach, but it seems to me that Google Earth will ultimately emerge as the real platform play. What's particularly interesting is how much activity there is in adding user-generated data. Especially interesting is the way that Google is trying to get users to build 3D models of buildings with sketchup. (...) It becomes clear that Google Earth is not just a data visualization platform. It's a framework on which hundreds of different data layers can be anchored. It's also clear that Google Earth is entering into the same territory as Second Life. It's so easy to imagine all of the alpha geek behavior on Second Life hitting the mainstream via people building real-world equivalents on Google Earth. And it's easy to imagine interoperability, with virtual worlds adopting KML, so that first and second life become interoperable and connected. (I was going to ask about the Google Earth/Second Life connection with sketchup as the connector, since it seems so obvious to me, but the first question from the audience beat me to it. It's impossible to miss this idea.)
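The "data layers" O'Reilly describes are just KML files; a user-built 3D model ends up anchored to the globe with a fragment like the one this helper emits. Everything here is a bare-bones illustration (the function and file names are hypothetical, and real Google Earth models also carry altitude modes, orientation and scale):

```python
def placemark_with_model(name: str, lon: float, lat: float, dae_href: str) -> str:
    """Emit a minimal KML Placemark anchoring a 3D model (e.g. a SketchUp
    COLLADA export) at a point on the Earth.

    A sketch of the smallest useful layer: one named Placemark whose Model
    points at a .dae file and a longitude/latitude location.
    """
    return f"""<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <Placemark>
    <name>{name}</name>
    <Model>
      <Location>
        <longitude>{lon}</longitude>
        <latitude>{lat}</latitude>
      </Location>
      <Link><href>{dae_href}</href></Link>
    </Model>
  </Placemark>
</kml>
"""

kml = placemark_with_model("my-house", 6.63, 46.52, "house.dae")
```

That a whole building boils down to a short text file is exactly what makes the "trading KML files" and DIY-world scenarios plausible.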

Why do I blog this? because I am interested in foresight about digital entertainment and video-game usage. What O'Reilly describes here is the very cutting-edge trend we discussed when I was at the Annenberg Center for Communication in April: the potential of using Google Earth and SketchUp for playful activities, which is obviously connected to social MMORPGs like SL. What would be the next practice (beyond modeling your/a house and putting it there)? Creating alternative versions of the Earth? Modeling MMORPG environments in KML and then playing in them (a sort of DIY MMORPG level modeling)? Using this as the new interface for a SIM-like game? Trading KML files on eBay?

Gestural behavior in virtual reality and physical space

With our online personas now overlapping with our presence in the physical world, lots of questions concerning the connections between the two worlds remain unanswered. This is the research issue addressed by the Virtual Human Interaction Laboratory at Stanford University. DevSource has a good overview about it (via the Presence mailing list), starting with the questionable motto: "How does the world change when you have five arms?".

Researchers have learned that, when we build digital versions of one another, people tend to behave the same in virtual reality (VR) as they do in physical space, at least on a gestural level. [Bailenson's] team has studied online communities and avatar-based games, analyzing patterns of interaction and comparing how they relate to the social world. With avatars, he says, the norms of conversation and nonverbal behavior are modeled on how people behave in physical space. But there's one interesting exception: "In games, taller and more beautiful avatars actually perform better."

Why do I blog this? Since I am interested in the relationships between spatial features and behavior, this is relevant; see for instance what Philip wrote about how proxemics is still pertinent in virtual space: Jeffrey, P. and Mark, G. (1998). Constructing Social Spaces in Virtual Environments: A Study of Navigation and Interaction. In: Höök, K., Munro, A., Benyon, D. (eds.): Workshop on Personalised and Social Navigation in Information Space, March 16-17, 1998, Stockholm (SICS Technical Report T98:02), Stockholm: Swedish Institute of Computer Science (SICS), pp. 24-38.

But there is more:

Bailenson [the lab director] offers one bit of practical advice for software developers who build "social" user interfaces. Anytime you have a UI that guides a person, especially with a human face, people tend to make the agent look more realistic than it behaves. And that, he says, causes problems in user expectations.

Spectators for video games?

NS about the notion of spectator in gaming:

The US professional computer gaming league has just signed a TV rights deal with cable company USA Network. Maybe it could be on the way to becoming as popular a spectator sport as football and basketball in the US.

Why do I blog this? this is connected to my interest in the user experience of video games: there is really a trend towards gaming being more than just interacting with a box. Replay features already pointed that way (in sports games, for instance, you can replay your own game), and this goes further: showing one's game to other people. I'd be interested to know more about who watches this, what they get out of it... and the practices related to that.

Awareness and Accountability in MMORPG

A very good read yesterday in the train: Moore, Robert J., Nicolas Ducheneaut, and Eric Nickell (2006): "Doing Virtually Nothing: Awareness and Accountability in Massively Multiplayer Online Worlds." Computer Supported Cooperative Work (ISSN 1573-7551).

The paper acknowledges the fact that "despite their ever-increasing visual realism, today’s virtual game worlds are much less advanced in terms of their interactional sophistication". Through diverse investigations of MMORPGs using video-based conversation analysis (grounded in virtual ethnography), they look at the social interaction systems in massively multiplayer virtual worlds and then propose guidelines for increasing their effectiveness.

Starting from the face-to-face situation (the richest in terms of social interaction, as opposed to geographically dispersed settings), they state that participants are able to access certain observational information about what others are doing in order to interpret others’ actions and design appropriate responses. This leads to coordination (I personally used a different framework to talk about that, for instance Herbert Clark's theory of coordination). In a face-to-face context, three important types of cues are available: "(1) the real-time unfolding of turns-at-talk; (2) the observability of embodied activities; and (3) the direction of eye gaze for the purpose of gesturing".

They then build their investigations around those three kinds of cues, which are less available in virtual worlds. This can be connected to the work of Toni Manninen (e.g. The Hunt for Collaborative War Gaming - CASE: Battlefield 1942). It also makes me think about the seminal paper by Clark and Brennan about how different media modify the grounding process (the establishment of a shared understanding of the situation).

Clark, H. H., and Brennan, S. A. (1991). Grounding in communication. In L.B. Resnick, J.M. Levine, & S.D. Teasley (Eds.). Perspectives on socially shared cognition . Washington: APA Books.

Why do I blog this? I still have to go further into the details of each of these investigations, but I was very interested in their work because:
- The methodology is complementary with what I am doing in CatchBob to investigate mutual awareness and players' anticipation of their partners' actions. The interactionist approach could be very valuable to apply in my context. I am thinking about deepening the analysis of the messages exchanged by players (the map annotations) to see how accountability is conveyed through the players' drawings.
- They translate results from empirical studies into concrete and relevant design recommendations (for instance: other game companies should probably follow There's lead and implement word-by-word, or even character-by-character, posting of chat messages. Such systems produce a turn-taking system that is more like face-to-face interaction, and they better facilitate the coordination of turns-at-chat with each other and with other joint game activities).

Video game controller reconfigurability

In "The VoodooIO Gaming Kit: A real-time adaptable gaming controller" by Nicolas Villar, Kiel Mark Gilleade, Devina Ramduny-Ellis and Hans Gellersen (Proceedings of ACE 2006), the authors propose an interesting idea for innovating on game controllers:

Existing gaming controllers are limited in their end-user configurability. As a complement to current game control technology, we present the VoodooIO Gaming Kit, a real-time adaptable gaming controller. We introduce the concept of appropriable gaming devices, which allow players to define and actively reconfigure their gaming space, making it appropriate to their personal preference and gaming needs. (...) Ad hoc controller adaptation during game-play is the pinnacle of physical configuration in game controllers. Not only can the game controller be configured to suit a particular task for a given user but it can also be reconfigured while the user is still playing to meet any changes in task demand. (...) VoodooIO is a malleable platform for physical interaction, which allows users to construct and actively adapt the composition of their physical interface. Rather than being an interface construction kit for users, the platform is concerned with enabling and exploring the ability of the physical interface to be customized and reconfigured after its deployment into use.

A pertinent affordance for real-time modification of the game controller is that controls can be arranged to depict the intended use-sequence:

Why do I blog this? this is a very innovative idea for making game controllers more user-centered. Besides, the paper is very complete and shows a proof of concept using World of Warcraft; the usage study is also welcome! I like this idea of a DIY gamepad; it's really part of a broader trend (DIY games, players' participation in the design process...).