Tangible/Intangible

Playdocam: games with your webcam

Via InternetActu, the Playdocam seems to be an interesting device:

PlaydoCAM™ transforms your ordinary web camera into a motion-tracking gaming device and places you at the centre of a unique online gaming experience. Many PlaydoCAM™ games are under development for both single and multiplayer action.

For the widest audience possible PlaydoCAM™ is based on standard Flash and Shockwave technology and can be played directly in your web browser without the need of extra plug-ins or installations. playdoCAM™ is available for custom made entertainment and can also be used offline for a high quality fullscreen experience suitable for exhibitions, display window advertising and more.

Eyekanoid and Playdojam are pertinent examples of new game interactions (a la eye-toy).

Why do I blog this? I find it interesting that innovation in the video game industry is now about more than just games. Using web-based/web-like applications (Shockwave/Flash) is easier and cheaper than buying a development kit, and distributing on the web is way cheaper than going through a publisher... And the focus on tangible interaction is more and more present.

FT on wearable computing

The FT has a piece about wearable computing: The shirt that checks your heart, the hat that checks your brain (By Alan Cane). Even though it's very geeky, there is an interesting metaphor:

Professor Sandy Pentland of MIT’s prestigious Media Lab, one of the world’s leading experts on the topic, says that for “wearable computer” read “mobile phone”.

He argues: “The mobile phone is the first truly pervasive computing platform. The question is not: ‘is the wearable computer a gimmick?’ but whether it will be people’s primary computing platform and push all others decisively aside.”

He backs this view with a commercial rather than social argument: “With telecoms operators’ revenues from voice services dropping quickly, everyone is looking for digital data services to stoke growth. The model of a wearable computer is exactly that... and it is working. “Google maps for handhelds, push e-mail and digital cameras are all computer applications migrating to the mobile. Even the physical aspect of the mobile is being designed around wearability. Look at the Moto line, the Oakley Bluetooth glasses and Bluetooth headsets.”

Then they discuss the use of various technologies in medicine/health (using body sensors and "intelligent clothing conversing with chips inside the body to monitor well-being").

Why do I blog this? this also connects to the workshop at NordiCHI about "near-field interactions". However, I am quite dubious about the direct transfer of desktop applications to cell phones. Using Google Maps on my cell phone is not always easy or pertinent, for example.

A locomotion interface using a group of movable tiles

CirculaFloor, a project led by Prof. Iwata (presented at SIGGRAPH 2004):

CirculaFloor is a locomotion interface using a group of movable tiles. The movable tiles employ holonomic mechanism that achieves omni-directional motion. Circulation of the tiles enables the user to walk in virtual environment while his/her position is maintained. The user can walk in arbitrary direction in virtual environment. This project is a joint research with ATR Media Information Science Labs.

A video here (22Mb).

Why do I blog this? because I am curious about various tangible interfaces (+ their gaming potential).

Collectic: collect access points and combine them in a puzzle

Thanks Cyril for pointing me to Collectic, developed by Jonas Hielscher as part of a graduation project for the Masters program Media Technology at Leiden University in 2006. I met Jonas in Utrecht a few months ago (are you in Basel now? still into game stuff, as I see) and I am always intrigued by what this guy is doing.

The game is developed for the Sony PSP and uses the standard features of the console, especially scanning for wireless access points to the Internet.

CollecTic can be played anywhere, where WLAN access points can be found by a PSP. The objective of the game is to search for different access points, to collect them and to combine them in a puzzle in order to get points. In the game, the player has to move around in her/his local surrounding, using her/his PSP as a sensor device in order to find access points. By doing this, the player is able to discover the hidden infrastructure of wireless network coverage through auditive and visual feedback. The game is designed as a single player game, but it can be easily played competitive after each other or at the same time with two PSPs.
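The game loop described above — roam, scan, collect unseen access points, combine them for points — could be sketched roughly like this (all function and field names are hypothetical; the real game uses the PSP's own WLAN-scanning features):

```python
# Hypothetical sketch of CollecTic's core loop: scan for access
# points, collect the ones not seen before, and score combinations.
# The scan results here are hard-coded stand-ins for a real WLAN scan.

def collect(collected, scan_results):
    """Add unseen access points (keyed by BSSID) to the collection."""
    new = [ap for ap in scan_results if ap["bssid"] not in collected]
    for ap in new:
        collected[ap["bssid"]] = ap
    return new

def combine_score(pieces):
    """Toy scoring rule: combining more distinct channels pays more."""
    channels = {ap["channel"] for ap in pieces}
    return len(pieces) * 10 + len(channels) * 5

collected = {}
scan = [{"bssid": "aa:bb", "ssid": "cafe", "channel": 6},
        {"bssid": "cc:dd", "ssid": "home", "channel": 11}]
new = collect(collected, scan)
print(len(new), combine_score(list(collected.values())))
```

The dedupe-by-BSSID step is what makes moving around worthwhile: rescanning the same spot yields nothing new.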

A video here.

Why do I blog this? I like this idea of a game played with regular console features enhanced by some software components. Besides, the game concept is quite simple and funny, and discovering network infrastructure that way seems to be a cool experience. I am looking forward to testing this!

GE Healthcare 3D mouse

Via this news: the GE Healthcare 3D mouse (GE Healthcare being a General Electric company):

The 3D Mouse is a user-interface device for sterile surgical settings that allows the physician to more easily view real-time 3D images during surgical procedures. The 3D Mouse combines the control of six distinct, complex user movements (X, Y, Z rotations and X, Y, Z translations) into a single liquid-proof joystick, while providing the functionality of a standard 2D mouse for interaction with GUI functions. Design details map key control locations, providing positive tactile feedback that allow physicians to work without taking their eyes off the patient.

Why do I blog this? yet another curious interface with some tangible implications; I like the tactile feedback idea as well. Where can we try that?

QR code definition of Ubiquitous Computing

This is a quick definition of Ubiquitous Computing (taken here), maybe not the most accurate but as short as I needed (especially given that I cut the definition!).

Generated with the Kaywa QR code generator, which I am testing with the Kaywa reader. The QR code generator allows you to create a QR code linked to different content (opening a webpage, sending an SMS, calling someone or displaying a text message). You can do that in barely two minutes. Good job!
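For what it's worth, those four content types map to conventional strings encoded inside the QR code; a small sketch of building such payloads (the `SMSTO:`/`tel:` conventions are the commonly used ones, though readers vary in what they support):

```python
# Sketch of the payload strings behind the four QR content types
# (webpage, SMS, phone call, plain text). The strings follow the
# common reader conventions; support varies by device.

def qr_payload(kind, **kw):
    if kind == "url":
        return kw["url"]                              # opens a webpage
    if kind == "sms":
        return f"SMSTO:{kw['number']}:{kw['text']}"   # sends an SMS
    if kind == "tel":
        return f"tel:{kw['number']}"                  # calls someone
    return kw["text"]                                 # displays a text message

print(qr_payload("sms", number="1234", text="hi"))  # SMSTO:1234:hi
```

The generator then only has to encode the resulting string as a QR symbol; the content-type logic is entirely in the payload.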

Excerpts of Toshio Iwai's interview

Pixelsurgeon features a nice interview with Toshio Iwai. A Japanese media artist who builds electronic/physical instruments (and designs games such as Electroplankton), Iwai gives some hints about his activity: the importance of tangibility, the need for visual feedthrough, and a need to design for play and for everyone:

In projects like Tenori-On, how important is the physical interface - the thing you touch and hold? How does it affect the act of making music?

Any instruments are characterised by their physical interface, such as the key of a piano or the bow of a violin. And these physical interfaces give important direction to the way they are played and the sound itself. However, as long as electric instruments are concerned, this aspect is not emphasised very much. In the Tenori-On project, we started from thinking what is the reasonable interface for an electric instrument or digital instrument. (...) For the digital instrument, interface, exterior design, software, sound and so on are independent each other. I am examining the way all of them naturally unite, just like in the violin. (...) The design of the visual interface is very important. The flow of time is not visible and very difficult to handle, but by expressing it visually it can be understood and handled by everybody. Moreover, music can give different impressions when it is expressed visually. (...) Since it became possible to make sound electrically or electronically, the synthesizing of sound has been separated from the visual world. However with the senses we are borne with, we think it is more natural to experience sound and vision at the same time. (...) As everybody wants to touch instruments or toys which he or she hasn’t seen before, when I design something, I am trying to create it so that it is very attractive at first sight. And when players touch it, it can be instinctively understood and they can be pulled into it very strongly and start trying to create their own designs in many different ways.

Why do I blog this? because of current research about tangible interfaces, I am interested in Iwai's work, which I find great. Electroplankton is fantastic (easy to handle, and I discover new features every time I play). What he describes is very intriguing: how to create new musical instruments (thus new objects) with simple affordances, linking sound and visual patterns to engage people in playful activities.

See also his blog about tenori-on, a brand new musical instrument / musical interface for the 21st century which I have been developing under the collaboration with YAMAHA Corp.

Hand/finger tracking system

A curious hand/finger tracking system is described in Smart Laser Scanner for Human-Computer Interface by Alvaro Cassinelli, Stephane Perrin & Masatoshi Ishikawa:

The next logical step would be to remove the need for any (dedicated or merged) input space, as well as the need for any additional input device (stylus, data-gloves, etc). This would allow inputting data by just executing bare-handed gestures in front of a portable device - that could then be embodied in a keyboardless wrist-watch. (...) We are currently studying a simple active tracking system using a laser diode (visible or invisible light), steering mirrors, and a single non-imaging photodetector, which is capable of acquiring three dimensional coordinates in real time without the need of any image processing at all. Essentially, it is a smart rangefinder scanner that instead of continuously scanning over the full field of view, restricts its scanning area, on the basis of a real-time analysis of the backscattered signal, to a very narrow window precisely the size of the target.
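As I understand it, the trick is to scan only a narrow window around the target and re-centre that window from the backscattered signal. A toy one-dimensional sketch of the idea (numbers and thresholds are purely illustrative, not from the paper):

```python
# Toy 1-D sketch of the paper's restricted-window tracking: instead
# of scanning the full field of view, keep a narrow window centred
# on the target and re-centre it from the backscattered signal.

def track_step(window_center, window_size, backscatter):
    """backscatter: dict of scan position -> backscattered intensity."""
    hits = {p: i for p, i in backscatter.items()
            if abs(p - window_center) <= window_size / 2 and i > 0.5}
    if not hits:
        return window_center, window_size * 2   # target lost: widen search
    # re-centre on the intensity-weighted mean of the hits
    total = sum(hits.values())
    new_center = sum(p * i for p, i in hits.items()) / total
    return new_center, max(window_size / 2, 4)  # found: narrow the window

center, size = track_step(10, 8, {9: 0.9, 11: 0.8})
```

Because the window stays "precisely the size of the target", no image processing is needed — just this feedback loop on the photodetector signal.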

Lots of video demos!

Why do I blog this? working on the user experience of tangible interfaces, I am interested in such new interaction styles.

Tangible interface issues with the Wii

French game site Overgame has a pertinent interview with Roman Campos Oriola, a game designer from Ubisoft who is working on a game for the Nintendo Wii. There are some good thoughts about the game controller and the potential interactions (I roughly translated the interesting excerpts):

- We had to reinvent the control methods because there are no standards; we worked on that with Nintendo.
- Saber fights are achieved through motion detection: movements are detected and compared with a set of known movements, and if there is a match, the program triggers animations (so it's not real spatial positioning).
- The challenge was to find the most natural movements for the players. Typically, for doors, we first thought a movement like a wrist rotation would work, because that's how we are used to opening doors. We actually noticed that nobody makes the same movement to do that. In the end, the most natural way was to tell the players that they simply had to push the door, and that's the movement we kept. That's where the difficult part is: trying to know which movements will be understood by players.
- The problem is then to know what will be obvious for the player, but there are other issues too; for instance, we had to separate out the different actions: it's not possible to ask players to perform simultaneous movements with both pads as if they were playing drums.
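The matching scheme he describes — compare the detected movement to a set of known movements and trigger an animation on a match — can be sketched as nearest-template classification (illustrative data; a real controller would give 3-axis accelerometer traces):

```python
# Sketch of gesture matching as nearest-template classification:
# the detected trace is compared to known movement templates and
# an animation fires only if the best match is close enough.
# Traces here are toy 1-D sequences, not real accelerometer data.

def classify(trace, templates, threshold=1.0):
    """Return the name of the closest template, or None if no match."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    name, d = min(((n, dist(trace, t)) for n, t in templates.items()),
                  key=lambda p: p[1])
    return name if d < threshold else None

templates = {"saber_slash": [0, 2, 0, -2], "door_push": [1, 1, 1, 1]}
print(classify([0.1, 1.9, 0.0, -1.8], templates))  # saber_slash
```

This also makes the "not real spatial positioning" point concrete: the program never knows where the controller is, only which known movement the trace resembles.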

Why do I blog this? These kinds of issues are very important, and empirical testing would be great for understanding the grammar of interactions that would be affordable/understood by players using such tangible interfaces.

Wearable sensors detecting air quality

VOCquet is a project by Jennifer Kirchherr (at ITP), under the guidance of Tom Igoe.

VOCquet is a wearable sensor in the shape of a flower that detects the air quality in the immediate vicinity of the wearer. VOCquet is a playful comment on local air quality. Shaped like flowers, VOCquet opens to a full bloom when no air contaminants are detected, and wilts in the presence of contaminants. VOCquet is a lighthearted look at the quality of the air we breathe, not an empirical measurement of air contaminants. It should not be used in place of calibrated air monitoring systems, rather as a whimsical look at our invisible environment.
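The bloom/wilt behaviour could be sketched as a simple mapping from a VOC sensor reading to a flower-servo angle (thresholds and units here are my own guesses, not from the project):

```python
# Sketch of VOCquet's bloom/wilt mapping: a VOC reading drives a
# servo angle, 90 degrees = full bloom, 0 = fully wilted.
# The ppm thresholds are illustrative guesses, not project values.

def petal_angle(voc_ppm, clean_below=50, wilted_above=400):
    """Map a VOC reading to a petal angle between 0 and 90 degrees."""
    if voc_ppm <= clean_below:
        return 90                                   # clean air: full bloom
    if voc_ppm >= wilted_above:
        return 0                                    # contaminated: wilted
    span = wilted_above - clean_below
    return round(90 * (wilted_above - voc_ppm) / span)  # linear in between

print(petal_angle(30), petal_angle(225), petal_angle(500))  # 90 45 0
```

The continuous middle range is what makes it an ambient display rather than a binary alarm.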

What happens when you have real-time awareness of an invisible phenomenon such as pollution?

Toewie: Puppet game controller

Toewie by Jelle Husson (a postgraduate in eMedia in Belgium):

Toewie is about a 3d game for pre-school children. Most 3d games are being navigated by means of the arrow keys for movement, and the mouse for looking/direction. Because this is quite complicated, especially for very young children, Toewie will be controlled differently. The idea is to build a real life puppet and put some movement sensors in it. When the child interacts with the puppet, the 3d character on screen will perform a similar movement.

Why do I blog this? I have lately been following how tangible interfaces can be used as innovative game controllers; this is a relevant example.

Availabot: tangible presence awareness via USB

Availabot is a tangible presence awareness device designed by Schulze and Webb:

Availabot is a physical representation of presence in Instant Messenger applications. Availabot plugs into your computer by USB, stands to attention when your chat buddy comes online, and falls down when they go away. It’s a presence-aware, peripheral-vision USB toy… and because the puppets are made in small numbers on a rapid-prototyping machine, it can look just like you.

Look at Matt Jones' implementation :)

It's yet another tangible representation that can be used with IM (see also the Nabaztag, the wifi rabbit that can move its ears if you get a Gtalk message); here the specificities are:

Because Availabot works entirely over USB, you can plug in as many puppets as you have USB ports (or friends on your IM buddylist). (...) Availabot stores the IM details of the friend it represents in the puppet itself. (...) Each Availabot is customised
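The behaviour they describe — each puppet stores its buddy's IM details and stands or falls on presence changes — might look roughly like this (the USB command names and the `send_usb` callback are hypothetical, not Schulze and Webb's actual protocol):

```python
# Sketch of the Availabot behaviour: each USB port hosts a puppet
# bound to one IM buddy; presence changes drive stand/fall commands.
# send_usb is a hypothetical stand-in for the real USB protocol.

def on_presence_change(puppets, buddy, online, send_usb):
    """Drive every puppet bound to `buddy` when their status changes."""
    for port, stored_buddy in puppets.items():
        if stored_buddy == buddy:
            send_usb(port, "stand" if online else "fall")

sent = []
puppets = {"usb0": "matt", "usb1": "jack"}  # puppet -> stored buddy
on_presence_change(puppets, "matt", True, lambda p, c: sent.append((p, c)))
print(sent)  # [('usb0', 'stand')]
```

Note how the buddy binding living in the puppet (not in the host software) is what lets you plug any number of puppets into any machine.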

Why do I blog this? it's a good step towards tangible artifacts connected to the Net; I like the customization aspect (very important in a DIY culture) and the fact that you can plug in multiple Availabots. What is also pertinent is that they are producing prototypes to investigate what works before thinking about something larger.

The Orbital Browser: networked services management

Trevor Smith describes a new sort of user interface called the "Orbital Browser", meant to enable users to "discover networked services, select a subset of them, connect them, and finally control them in an appropriate manner". This is about "service composition", an interesting metaphor that would eventually be geared towards directing information coming from different sources, connecting those sources, and controlling them. It was designed at PARC by Nicolas Ducheneaut, Chris Beckmann, Trevor F. Smith, James "Bo" Begole, and Mark W. Newman (CHI 2006).

To test our concept with real applications and data, we built the Orbital Browser on top of Obje (...) To compose services in Obje users need to create connections between components, thereby initiating the transfer of data. This entails browsing a list of available hosts (e.g. Bob’s laptop) and, once the right one has been found, selecting the host to browse its list of attached components (e.g. Bob’s DVD player, Bob’s screen). If one of these components is an aggregate (e.g. Bob’s “music collection” directory), users will need to expand it to see the list of components it contains (...) our Orbital Browser is (loosely) based on the “ball and stick” metaphor from the world of chemistry. In our system each “molecule” is a unique host on the network, represented by a small circle. Components are represented by larger “balls” connected to their host by a “stick.” These components are all placed at an equal distance from their host, as if they were “orbiting” it (see Figure 1). In turn, all “ball and stick” compounds are arranged at an equal distance from each other. (...) users interact with the Orbital Browser using a Powermate knob that can rotate in two directions and can also be pressed for a “click” action.
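A minimal sketch of the host/component/aggregate structure and the connection step described above (the data and function names are mine, not Obje's actual API):

```python
# Sketch of Obje-style service composition: hosts expose components,
# some of which are aggregates that expand into more components, and
# a composition is a connection from a source to a sink.

hosts = {
    "Bob's laptop": {
        "DVD player": None,          # atomic component
        "screen": None,              # atomic component
        "music collection": {        # aggregate: expands to its contents
            "track1.mp3": None,
            "track2.mp3": None,
        },
    },
}

def expand(host, component):
    """List the components inside an aggregate ([] if atomic)."""
    sub = hosts[host].get(component)
    return sorted(sub) if isinstance(sub, dict) else []

connections = []

def connect(src, sink):
    """Create a connection, which initiates the data transfer."""
    connections.append((src, sink))

print(expand("Bob's laptop", "music collection"))
```

The Orbital Browser's "ball and stick" view is then just a rendering of this structure, with the Powermate knob walking the tree and `connect` wiring components together.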

Look at the demonstration video

Why do I blog this? because this is an interesting example of how very simple primitives can be used to design a service that allows one to "act" in a pervasive world full of different networked services. I like this idea of service composition and data/source/flow management, and I am curious about potential non-intended usages.

Appropriate tangible interactions

Lately, I've been thinking a lot about tangible interactions (because of the Wii and certain projects here and there). Wired News also addressed that issue, focusing on some very important questions:

But do such physical motion-sensing controllers really signal the beginning of an emerging trend?

"The big question is whether folks can design compelling games using them," said MacIntyre. "Motion-sensing controllers really capture people's imaginations, but no matter how mundane traditional game controllers are, they have the advantage of precision and lots of simultaneous channels of input, whereas the others can only sense a smaller number of relatively crude and imprecise channels. The former makes for great demos because anyone can pick it up, but the games often lack depth because it's hard to support skillful play. The latter, on the other hand, are hard to learn but support expert play really well." (...) "Look at the two tennis games -- AR Tennis and the Wii tennis game. I don't think either make for good games for folks who want to play for many hours; AR Tennis is using a tiny screen that you have to hold still and not move too fast, and the Wii game doesn't appear to let you do much more than swing. It doesn't track position, just motion, so you wouldn't be able to move your character or control things like volleys. Perhaps sports games are not the right target, since such games make people want to 'play the sport,' and require lots of input. (Dance Dance Revolution), for example, is quite good and is based on the four foot buttons. (The game's developers) manage to simulate the essence of dancing and even let people appropriate the game for 'real' dancing."

Why do I blog this? because tangible interactions still need to be explored in terms of their use and of the grammar of actions that would be appropriate for engaging users in playful and usable interactions.

Musical interface: the magic cube

The MusicCube described in this paper is an interesting example of the tangible interaction paradigm applied to listening to music. It's designed by Miguel Bruns Alonso (ID StudioLab, Delft University of Technology).

Listening to digital music on a computer has led to a loss of part of the physical experience associated with earlier media formats such as CDs and LPs. (...) To return part of this physical experience a design named the MusicCube is presented that visualises several content attributes of Mp3 formatted music and makes control access more tangible. Play lists, music visualisation, volume, and navigational feedback are communicated via multicoloured light displayed in a tangible interface. Users are able to physically interact with music collections via the MusicCube, a wireless cube-like object, using gestures to shuffle music and a rotary dial with a button for song navigation and volume control. Speech and non-speech feedback are given to communicate current mode and song title.

The usage is pretty curious.

Why do I blog this? this gives some ideas about how tangible interactions could be used; it might be useful for new projects about defining a grammar of possible interactions with objects (and hence with the Wii controller, for instance).

Is Nabaztag about calm computing?

Just had a quick chat with Frederic (one of our new lab colleagues) about the fact that the notion of the "disappearing computer" or Weiser's "calm technology" is more and more criticized. Computers were supposed to become invisible, and information would then become ambient. He took the example of the Nabaztag, the wifi rabbit, a physical ambient visualization device in the form of a bunny. Among its features are capabilities such as displaying several live datasets retrieved through a WiFi Internet connection (weather forecasts, news...) AND messages sent by people.

Approximately 50,000 Nabaztags have now been sold, according to the French newspaper Libération. The usage study of the rabbits showed interesting results: people are more interested in, and mostly used, the messaging features (sending messages, listening to certain nabcasts, that is to say podcasts for the Nabaztag) rather than the ambient information flows. Also, based on server usage, they found that people first subscribe to lots of channels and then tick them off (every interaction with the rabbit normally goes through their server, though some people can always set up their own proxy server).

What this seems to mean is that in this context (I won't generalize), interruption-based interaction worked better than fluid and calm information flows. The computer (or at least the computing artifact) does not disappear that much, and is thus disruptive. This is the function used by Fabien's group at the second Blogject workshop: a "spokespet" for blogjects, an ambient device moderating and intermediating between a user and his/her blogjects.

Why do I blog this? because I am interested (as a researcher) in the user experience of pervasive computing, and I find these usage trends interesting. Of course, one example does not mean that it's a failure, but it's an interesting debate that emerged from this user study.

SignalPlay: a Networked Gestural Sound Interface

Project Ambient is a compilation of UC Irvine grad school projects. It focuses on the design of ambient displays that would go beyond the dichotomy of peripheral and focal using the "foveal" metaphor: embedding interactions physically in space. One of the projects I like in this list is SignalPlay by Amanda Williams and Eric Kabisch:

When computation moves off the desktop, how will it transform the new spaces that it comes to occupy? How will people encounter and understand these spaces, and how will they interact with each other through the augmented capabilities of such spaces? We have been exploring these questions through a prototype system in which augmented objects are used to control a complex audio 'soundscape.' The system involves a range of objects distributed through a space, supporting simultaneous use by many participants. We have deployed this system at a number of settings in which groups of people have explored it collaboratively. Our initial explorations of the use of this system reveal a number of important considerations for how we design for the interrelationships between people, objects, and spaces.

SignalPlay is a sensor-based interactive sound environment in which familiar objects encourage exploration and discovery of sound interfaces through the process of play. Embedded wireless sensors form a network that detects gestural motion as well as environmental factors such as light and magnetic field. Human interactions with the sensors and with each other cause both immediate and systemic changes in a spatialized soundscape. Our investigation highlights the interplay between expected object-behavior associations and new modes of interaction with everyday objects

More about that in this paper: SignalPlay: Symbolic Objects in a Networked Gestural Sound Interface

Why do I blog this? because it addresses a phenomenon I am interested in as a researcher: how pervasive computation transforms spaces. It's also connected with the blogject concept.

Physical visualization bubbles

Seen this weekend in Geneva at the "Fête de la Musique" (a big gig during which there are music bands everywhere in the city). Like last year, the organizers adopted the following "bubble visualization" system to show directly on the street where things happen. Little bubbles are located in-between areas; they show directions to go where things happen (also represented on the ground with big bubbles).

Why do I blog this? I like this sort of real-space event visualizer; the design seems directly connected with trendy infoviz practices.

Picks up color readings and transmits them into the viewer's eyes

Monochromeye is a project carried out at the Smart Studio. Part of a more general project, it's actually a portable device with a fingerholder that picks up color readings and transmits them into the viewer's eyes:

Monochromeye is one of several optical machines that were built in an art driven research project about light and perception called Occular Witness. The project attempts to stake out the limits of human vision and it examines how information is malleable and how meaning is formed through image in a time when information is abundant and our culture is saturated with layers of processed imagery. (...) Monochromeye is a portable device that enhances low resolution vision. A fingerholder contains one red, one green and one blue lightsensor that read the environment as you point at it. It feeds back the color information to two tricolored (RGB) light diodes that emit two beams of light straight into the viewer's eyes. At such a low resolution, the viewer can only get color readings. They do not contain any information beyond the color that is registered at the point in space where the viewer points his finger.

Why do I blog this? because this project is appealing to me; from the user experience point of view, I like this idea of enhancing low-resolution vision. Besides, the design is nice.

The context of a display ecology

Displays in the Wild: Understanding the Dynamics and Evolution of a Display Ecology, by Elaine M. Huang, Elizabeth D. Mynatt, and Jay P. Trimble, is an in-depth field evaluation of large interactive displays; it exemplifies the "context of a display ecology".

It's a study of large interactive displays within a multi-display work environment used in the NASA Mars Exploration Rover (MER) missions: a complex and ecologically valid setting. What is interesting are the lessons learned from this experiment:

the “success” of a large interactive display within a display ecology cannot be measured by whether a steady state of use is reached. Because people appropriate these tools as necessary when tasks and collaborations require them, there may be a natural ebb and flow of use that does not correspond to success or failure, but rather to the dynamic nature of collaborative work processes. Success is therefore better evaluated by examining the ease and extent of support that such displays provide when tasks call for a shared visual display or interactive work surface. (...) Another important lesson regarding the value of large displays in work environments came from our observation of the interplay between interactive use and ambient information display. In the realm of large interactive display research, a decrease in interactivity is often viewed as a failure of the system to support workgroup practices. We observed a migration from interactive use to ambient information display, and through our interviews discovered how valuable this ambient information was. (...) in the greater context of a display ecology, it is misleading to evaluate the isolated use of a single system; the existence of other displays in the environment means that it is important to understand how the ecology functions as a whole, not just how individual displays are used.

Why do I blog this? I found this paper interesting because it describes how people made use of such displays; the highlights the researchers brought forward also point to pertinent issues in the domain of ambient/interactive furniture, which could be helpful for some of our projects at the lab.