Tangible/Intangible

Inflatable interactive games

This website is a well-documented resource about inflatable stuff. My attention was drawn to inflatable interactive games because they struck me as the most interesting use for such devices.

Why do I blog this? I find it interesting to create new types of playgrounds (maybe more temporary ones); there is potential to use this as a basis, with some technology augmentation, to create new playful experiences. They're quite basic, but I'd be interested to see an inflatable brick game. It's also funny to see how they put the emphasis on the word "interactive".

Wii glove and other craft ideas

Via zogdog, this interesting velcro glove for the Wii (mmh, why are there two controllers?). Why do I blog this? I know the console hasn't been released yet, but I am a bit disappointed that nobody has already used duct tape and wires to do something weirder with this game controller. At least with some drawings and sketches like the one above, but maybe I haven't looked enough. There is good potential to craft cool gadgets for the Wii controller (ranging from duct-taping it to your arms/legs to putting a huge ball of styrofoam around them to juggle).

Touch or don't touch?

While doing a very quick search on touch interfaces, I thought back to this very curious Nintendo U-Force controller for the NES:

According to Wikipedia:

The U-Force is an accessory for the Nintendo Entertainment System made by Brøderbund. It employed 2 large infrared sensors and a series of switches allowing the user to program it to recognize movements across the sensors as button presses and send those corresponding signals to the NES.
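Just to make that description concrete, here's a toy sketch of the idea (entirely mine, not Brøderbund's firmware): the "series of switches" lets the user bind movements detected across sensor zones to NES button signals. The zone names are hypothetical.

```python
# Hypothetical sketch of the U-Force idea: a user-programmable mapping from
# movements detected across IR sensor zones to NES button presses.
# Zone names are illustrative, not from the actual device.

NES_BUTTONS = {"A", "B", "UP", "DOWN", "LEFT", "RIGHT", "START", "SELECT"}

class UForce:
    def __init__(self):
        self.mapping = {}  # sensor zone name -> NES button

    def program(self, zone, button):
        """Bind a zone crossing to a button (the role of the switches)."""
        if button not in NES_BUTTONS:
            raise ValueError(f"unknown button: {button}")
        self.mapping[zone] = button

    def sense(self, crossed_zones):
        """Translate detected zone crossings into button-press signals."""
        return [self.mapping[z] for z in crossed_zones if z in self.mapping]

pad = UForce()
pad.program("upper-left", "A")     # e.g. a punch over the left sensor
pad.program("lower-beam", "DOWN")
print(pad.sense(["upper-left", "unmapped", "lower-beam"]))  # ['A', 'DOWN']
```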

From a print advertisement circa 1989 (emphasis left intact from the adcopy): "Introducing U-Force, The Revolutionary Controller For Your Nintendo Entertainment System. So Hot, No One Can Touch It. Now you can feel the power without touching a thing. It's U-FORCE from Broderbund--the first and only video game controller that, without touching anything, electronically sense your every move. And reacts. There's nothing to hold, nothing to jump on, nothing to wear, U-FORCE creates a power field that responds to your every command--making you the controller. It's the most amazing accessory in video game history--and it will change the way you play video games forever. It's the challenge of the future. U-FORCE. Now nothing comes between you and the game."

Why do I blog this? Well, some near-field interactions were tried on old Nintendo products, and it would be good to collect some user experience insights about this sort of device.

Ecological approach to kids' playgrounds

An Ecological Approach to Children’s Playground Props by Susanne Seitinger (Smart Cities Group / MIT Media Lab), in Proc. of IDC'06 (Tampere, Finland, June 2006). This paper describes an interesting approach to designing a new kind of kids' playground. The author tries to bring forward a "new category of space explorer [...] that interacts with children as they engage their outdoor play environment". What I liked in this project was this notion of "space explorers":

In trying to develop a prop from the suggested approach, a new category emerged called “space explorers” for preschool children, which derives from the pull-along toys many of us remember from our own childhood. What are space explorers? They are animated objects that reveal important information about outdoor play environments by adding another layer of interactions to the triangle of children ↔ objects ↔ play setting.

In literal outer-space exploration, the spherical robot plays an important role. There are several examples of inflated or solid spherical robots which have been developed for understanding distant planets. Some attempts have been made to adapt these objects for children, but they are starting from a robotics framework [18]. Adapting rolling objects for children’s play is nothing new – the ball is still one of the most common play objects. An initial prototype emerged starting from this universal spherical form and adding the idea of an exploration device. The basic scenario for such a roller would be: Children encounter the roller – or another space explorer – in an outdoor play setting where it is activated by their presence. The types of ensuing behaviors include expected and unexpected outcomes, for example: the ball may initially roll down a hill as expected only to turn around and return towards the child.

And an example of such a space explorer is the following: a "wheel space explorer placed in the snow to illustrate the powerful relationship between object and ground. Children are connected to the space directly through their presence in it and the intangible links to the object".

Why do I blog this? I see more and more occurrences of new types of playgrounds based on user-centered design, with some ubiquitous computing technologies; this seems to be an interesting topic in urban computing, at a different scale (compared to locative media stuff).

V-Migo: virtual pet with 1st world connection

V-Migo is a very curious and simple plug-and-play console: you get both a set-top box connected to your TV AND a mobile version that has a pedometer. The point is to raise a dog à la Nintendogs; and it seems that the pedometer measures the distance you travel with your dog and changes its behavior in the game accordingly.

Why do I blog this? This is important in terms of first-world (physical world) and second-world (virtual game) connections, because the actions performed in the physical environment are taken into account in the game. An additional implication is the mobile component of the console (you have both a fixed and a mobile element), but this is less new.

u-texture

In the latest issue of ACM Interactions, Lars Erik Holmquist mentions a very intriguing technology called u-texture:

Another laboratory at Keio SFC is run by Professor Hide Tokuda. This lab concentrates on the enabling technology for ubiquitous computing, such as operating systems and networks. One fascinating system is the u-Texture, a set of interlocking computational tiles that can be combined to create different applications. The tiles are roughly the size of a Tablet PC, have integrated network connections and of course RF-ID readers. They can be assembled in many different shapes and will automatically configure themselves to acknowledge the new connections. Fancy a new digital shelf, a smart table, or an electronic wall? Just put together a few u-texture blocks and you've got your new interactive furniture! I wonder if IKEA will catch on?

Some of the applications:

The AwareShelf can be created on a shelf-shaped u-Textures. When a user puts a real object such as a camera, a book, or a key on a u-Texture, it enables to browse information of the real object on the display on another u-Texture. The u-Textures have to be connected vertically to the u-Texture on that a thing is placed. (...) The Collaboration Table is a system that supports cooperative work with several participants by connecting u-Textures horizontally. Users can exchange and merge drawing data among connected u-Textures by drag-and-drop operations. (...) The ProjectionWall magnifies a connected u-Texture's small display onto a big one. It is effective for displaying a large picture that is too small to be shown on only one u-Texture. Data can be handled interactively by users with touch panels.

Ghosts of Liberty

'Ghosts of Liberty' is yet another mobile/pervasive game (played in Boston) and designed by Urban Interactive.

Players roam through the lamp-lit alleys of Boston's North End, following a trail of ghostly messages to track a mysterious enemy of the state. A cell phone weaves electronic gameplay and live action into the nocturnal ambiance, as participants race against the clock to solve riddles, discover hidden items, and interact with characters both real and digital.

(picture by Evan Richman/Globe Staff)

The Boston Globe has a piece about it, describing how the players apprehended the game:

Met by a "secret agent," Bitkower's foursome was handed a cellphone programmed with all the night's clues, an ultraviolet pen, a map of the North End, a "classified" case briefing, and a folder to open in the event of an emergency (i.e., if they became hopelessly lost).

Wolfe's wife, Nan, a kitchen designer, took over as master code-breaker, jotting down letters and numbers from bronze plaques and muttering aloud solutions. Bitkower, the group's text-message fanatic, was glued to the cellphone, tripping over cobblestones and even a small fence in his haste to relay information from digital maps, text messages from "Director Finch," and voice mails from a ghost-channeling psychic to the group. As the team raced down Salem Street past Bova's Bakery, Jim Wolfe signaled to turn left instead of right -- to throw other groups off their scent. "I feel like we're behind," said Jerry Ringuette, an information technology specialist from Quincy, before sprinting down Commercial Street in search of a woman's feather boa. Lost on Hanover Street, Bitkower slyly reached into his coat pocket for a travel map. "We brought a cheat sheet," he whispered.

Check also the players' briefing sheet.

Day of the Figurines evaluation

The latest deliverable of the IPerG project is of interest for people into pervasive gaming development/observation. IPerG is an EU-funded research consortium that looks at pervasive gaming from a multi-disciplinary angle (the consortium is composed of researchers from various disciplines). The document describes the evaluation of "a prototype public performance called Day of the Figurines, a slow pervasive game in the form of a massively-multiplayer boardgame that is played using mobile phones via the medium of text messaging".

This deliverable presents an evaluation of a first public test of this version of Day of the Figurines that took place in London in Summer 2005 and that involved 85 players over a month. This evaluation draws on multiple perspectives, including analysis of exit questionnaires from players, ethnographic study of behind-the-scenes control room activities, and descriptive statistics derived from system logs, in order to establish a rich picture of how the game was experienced from the perspectives of both players and operators.

Why do I blog this? The whole document is a great read to learn about the problems, highlights, players' reactions, and communication that occurred. It's also very good to have both the perspective of the players AND the operators. The game designer's role is even more prominent when gaming is set in physical space because there are other constraints to deal with. The title of the document is quite evocative: "The City as Theatre Evaluation", which we can read as "landscape as a game interface" or "city as a performative infrastructure".

SYS/*016.JEX*02/1SE6FX/360°

Discussing the issue of augmented playgrounds with some folks lately, I remembered one of the best pieces from the Lyon Art Biennale in 2001: "SYS/*016.JEX*02/1SE6FX/360°", a project by Mathieu Briand. It is basically a participatory interactive 360° environment made of a big trampoline on which participants hop around; they are then scanned by 75 input points, and this data is displayed on panoramic screens which encircle them, at lagging speeds.

The adding together of images with a common viewpoint creates a movement that can confuse our mind. We think that it is a camera that is turning since we have to move in order to look at an object from every angle. In this situation we are everywhere and the object is able to move.

Joseph Nechvatal describes his thoughts about it:

Briand takes participatory principles found in virtual environments (VEs – or that which is better know as VR (virtual reality) and externalises them. For example, his clearly mature participatory interactive 360° environment called "SYS/*016.JEX*02/1SE6FX/360°" manifests the principle of what I have been calling the ‘viractual’ brilliantly. The viractual is the space of connection between the computed virtual and the uncomputed corporeal world which here merge. This space can be further inscribed as the viractual span of liminality, which according to the anthropologist Arnold van Gennep (based on his anthropological studies of social rites of passage) is the condition of being on a threshold between spaces. This term (concept) of the viractual (and viractuality) is the significant connivence/complicity experienced in the show - a connivence/complicity helpful in defining the third fused inter-spatiality in which we increasingly live today as forged from the meeting of the virtual and the actual - a concept close to what the military call "augmented reality".

Why do I blog this? This example of tangible computing at a higher level is curious, especially if we think in terms of how people perceive one's activity on the panoramic display. I was also unaware of this "viractuality" concept.

Next Nabaztag version: nabaztag/tag

This sort-of-businessy presentation of Nabaztag is very interesting because the founder shows the new version: Nabaztag/tag. Among its new capabilities, it can obey voice orders ("it has a belly button and everyone knows that rabbits hear through their belly buttons"), like "weather in NY?", and its voice capabilities have been improved since it can read streams from many sources (podcasts, web radio...). My favorite part is when the rabbit smells stuff like carrots and says "I am a wifi rabbit for god's sake, I cannot eat carrots".

As described by Network World:

Version 2 will be announced, which includes speech recognition functions, to allow users to use the rabbit as an input device, or even as a push-to-talk or VoIP phone. "Everything that you can do with an audio input device you'll be able to do with V2," Haladjian says. In addition, the V2 will be able to stream audio from the Internet through the device, which allows for things like listening to podcasts or Internet radio streams. The company says V2 will launch in November and will likely cost more than the current Nabaztag, which sells for about $150.

Why do I blog this? Even though it's just a small step, this new version has slight improvements (I'd like to try the voice recognition). They also said that other devices produced by Violet will be released so that they can communicate with the rabbit: "this is leading the way for the Internet of Things", as Rafi Haladjian says.

Haptic radar for spatial awareness

Augmenting spatial awareness with Haptic Radar by Alvaro Cassinelli, Carson Reynolds, and Masatoshi Ishikawa; a paper presented at the International Symposium on Wearable Computers in Montreux, Switzerland. This paper is about a "haptic radar": a device that allows its users to perceive and respond simultaneously to multiple spatial information sources using haptic stimuli.

Each module of this wearable “haptic radar” acts as an artificial hair capable of sensing obstacles, measuring their range and transducing this information as a vibro-tactile cue on the skin directly beneath the module. Our first prototype (a headband) provides the wearer with 360 degrees of spatial awareness thanks to invisible, insect-like antennas. (...) Among the numerous potential applications of this interface are electronic travel aids and visual prosthetics for the blind, augmentation of spatial awareness in hazardous working environments, as well as enhanced obstacle awareness for motorcycle or car drivers (in this case the sensors may cover the surface of the car)
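The core transduction each module performs is simple enough to sketch; here's a quick toy version (my own reading of the idea, not the authors' code), where the parameters and ranges are made up for illustration.

```python
# A minimal sketch of the haptic radar mapping: each "artificial hair" module
# turns a measured obstacle range into a vibro-tactile intensity on the skin
# beneath it -- the closer the obstacle, the stronger the vibration.
# The 150 cm max range is an assumed value, not from the paper.

def vibration_intensity(range_cm, max_range_cm=150.0):
    """Return a 0.0-1.0 vibration level for one module."""
    if range_cm >= max_range_cm:
        return 0.0                      # nothing within sensing range
    return 1.0 - (range_cm / max_range_cm)

# A headband of modules covering 360 degrees: one range reading per direction.
readings = {0: 30.0, 90: 200.0, 180: 75.0, 270: 150.0}   # degrees -> cm
levels = {angle: round(vibration_intensity(r), 2) for angle, r in readings.items()}
print(levels)  # {0: 0.8, 90: 0.0, 180: 0.5, 270: 0.0}
```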

Avoiding an "unseen" object:

Why do I blog this? I was interested in this spatially extended skin paradigm and how it can be used with regard to spatial awareness. Slightly connected to my PhD research, this is intriguing because it relies on lower-level processes of awareness (from a cognitive science standpoint).

Tangible Play: Research and Design for Tangible and Tabletop Games

(via) Tangible Play: Research and Design for Tangible and Tabletop Games is a workshop at the 2007 Intelligent User Interfaces Conference organized by Elise van den Hoven and Ali Mazalek.

Many people of all ages play games, such as board games, PC games or console games. They like game play for a variety of reasons: as a pastime, as a personal challenge, to build skills, to interact with others, or simply for fun.

Some gamers prefer board games over newer genres, because it allows them to socialize with other players face-to-face, or because the game play can be very improvisational as players rework the rules or weave stories around an unfolding game. Conversely, other gamers prefer the benefits of digital games on PCs or consoles. These include high quality 3D graphics, the adaptive nature of game engines (e.g. increasing levels of difficulty based on player experience) and an abundance of digital game content to explore and experience.

With the increasing digitization of our everyday lives, the benefits of these separate worlds can be combined in the form of tangible games. For example, tangible games can be played on digital tabletops that provide both an embedded display and a computer to drive player interactions. Several people can thus sit around the table and play digital games together.

Some examples described on the workshop page: Weathergods (Philips Entertaible), Pente (TViews Table), Yellow Cab (Philips Entertaible), Digital Dialogues (TViews Table).

Why do I blog this? How the digital world and physical artifacts knit together is an important trend in the future of computing, especially in the context of gaming; that's a dimension I am interested in, especially from the interaction viewpoint: how would these new input/output systems allow playful activities (in context)?

Vibefones

(this one's for you, Emily): VibeFones: Socially Aware Mobile Phones by Anmol Madan and Alex "Sandy" Pentland. A paper that is going to be presented next Friday in Montreux, Switzerland, at the International Symposium on Wearable Computers.

In this paper, we describe mobile social software that uses location, proximity and tone of voice to create a sophisticated understanding of people's social lives, by mining their face-to-face and phone interactions. We describe several applications of our system - automatic characterization of social and workplace interactions, a courtesy reminder for phone conversations, and a personal trainer for dating encounters. (...) We introduce the paradigm of mobile devices as social coaches or personal trainers, for phone conversations and dating encounters. Several related issues deserve consideration - how can we improve the accuracy of our predictions, and how appropriate and useful is this type of feedback?
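To give a feel for what "non-linguistic speech" measurement means in practice, here's a tiny sketch of one classic feature such systems build on: short-time energy. This is my own illustrative simplification, not the authors' feature set or code.

```python
# A toy sketch of one non-linguistic speech feature: short-time energy,
# a crude proxy for how forcefully someone is speaking in a given window.
# The frame size and sample values are arbitrary illustration.

def short_time_energy(samples, frame_size=4):
    """Mean squared amplitude per frame of an audio signal."""
    frames = [samples[i:i + frame_size] for i in range(0, len(samples), frame_size)]
    return [sum(s * s for s in f) / len(f) for f in frames]

quiet = [0.1, -0.1, 0.05, -0.05]   # soft speech
loud = [0.9, -0.8, 0.7, -0.9]      # emphatic speech
energies = short_time_energy(quiet + loud)
print(energies)  # the second frame's energy is far higher than the first
```

A real system would of course compute this (plus pitch and prosody features) over a live microphone stream rather than hand-written sample lists.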

Why do I blog this? What is interesting here is that the phone becomes a socially aware artifact by measuring non-linguistic speech (viz. tone, prosody) and interaction metadata (e.g. physical proximity). The point is hence to use the phone as a "social prosthesis". However, I am less interested in this idea of a social coach.

Wii wheel

Ubisoft and Thrustmaster recently revealed a very intriguing Wii controller: a steering wheel. No big surprise here, but what is strikingly curious is the fact that the wheel has no physical anchor to the ground, unlike traditional steering wheels. It basically works as a shell that the Wii remote is placed within and then manipulated.

Why do I blog this? What I find interesting here is the notion of immersion with such a device; it reminds me of how kids play with plates as steering wheels in imaginary games. What about removing the wheel?

Cosmic Modelz: 3D design and print for kids

Now it seems that Cosmic Modelz has a website (I already blogged about it here); it's actually a subset of Dassault Systèmes (the French company which makes Catia, and which is also related to its big mothership Dassault, the private/military airplane company).

In partnership with ZCorp, their motto is "Create a one-of-a-kind collectible from your Cosmic Blobs 3D artwork". The point is to allow kids to design toon-like characters and 3D-print them with a device that they designed.

Why do I blog this? This is a good step towards new interactive toys, but I am wondering how they handle the interface of 3D graphics software for kids; there must be some tangible user interface around to smooth the process.

A graphic language for touch-based interactions

Straight from the Mobile HCI workshop on "Mobile Interaction with the Real World" (see the proceedings), this paper: "A graphic language for touch-based interactions" by Timo Arnall. It investigates the visual link between information and physical things, using the cell phone to interact digitally with augmented artifacts and spaces.

Timo's point is to take the counter-approach to existing practices (RFID, NFC...), which "hide" the range of possible interactions with augmented objects. He then proposes an iconography for interaction with objects, based on existing signs.

Sketching revealed five initial directions: circles, wireless, card-based, mobile-based and arrows. The icons range from being generic (abstracted circles or arrows to indicate function) to specific (mobile phones or cards touching tags).

Arrows might be suitable for specific functions or actions in combinations with other illustrative material. Icons with mobile phones or cards might be helpful in situations where basic usability for a wide range of users is required. Although the ‘wireless’ icons are often found in many current card readers, they do not successfully indicate the touch-based interactions inherent in the technology, and may be confused with WiFi or Bluetooth.

The circular icons work at the highest level, and might be most suitable for generic labelling. A simple circle was chosen for further investigation. This circle is surrounded by an ‘aura’ described by a dashed line. This communicates the near-field nature of the technology but also describes a physical object that contains something beyond its physical form. The dashed line distinguishes touch-based interactions from generic wireless interactions.

Why do I blog this? I've wanted to post about this for a while, and I took this workshop paper as an opportunity to describe what Timo does. I find this work very interesting in the sense that revealing possible interactions to the user is an important point, especially regarding touch (it's not self-revealing for lots of human activities).

Now, thinking about gaming applications, I like that Timo mentions this "could be applied to situations as diverse as a gaming sticker offering powerups". There is a lot to think about here: not only collecting objects/improving capabilities in cities, but also dropping game artifacts that could trigger specific behaviors in other players (not from your team).

Seminar at Nokia Design

Today I participated in a Nokia Design meeting, presenting some stuff at their "IN&Out speaker series" in Topanga (California). My presentation (pdf slides, 6.2Mb) was about tangible interfaces and some potential misconceptions drawn from user experience research, concepts I found pertinent, and stuff I've read. It's absolutely not academic research but more "food for thought" for designers, like what I do for video game companies. This material is meant to trigger insights and discussion about design problems/solutions and ideas. The cover picture is taken from Cronenberg's movie eXistenZ.

It was also a good opportunity to meet Jan Chipchase and talk more about what he is doing and the methods he uses.

Hand gesture interface for Google Earth

Atlas Gloves is a "DIY hand gesture interface for Google Earth":

Atlas Gloves is a DIY physical interface for controlling 3D mapping applications like Google Earth. The user interface is a pair of illuminating gloves that can be used to track intuitive hand gestures like grabbing, pulling, reaching and rotating. The Open Source Atlas Gloves application can be downloaded here and operated from home using a webcam and two self-made illuminating gloves (or flashlights).

The user stands in front of a large scale projection of the earth with a special set of illuminating gloves on their hands. By gently squeezing each glove, an LED turns on, which is translated by the computer into navigational commands. The user is then free to fly above the world, zooming in and out, tilting, rotating at their leisure.
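The computer-vision step behind this is quite approachable; here's a toy sketch of the core trick (my own illustration, not the Atlas Gloves source code): find the centroid of a bright LED blob in a webcam frame. A blob appearing at all means a glove was squeezed, and its motion between frames can then drive pan/zoom commands.

```python
# Toy sketch of LED-glove tracking: threshold a grayscale frame (here a plain
# list of lists standing in for webcam pixels) and compute the centroid of
# the bright pixels. The threshold value is an arbitrary assumption.

def led_centroid(frame, threshold=200):
    """Return the (row, col) centroid of pixels above threshold, or None."""
    bright = [(r, c) for r, row in enumerate(frame)
              for c, v in enumerate(row) if v >= threshold]
    if not bright:
        return None                     # LED off: glove not squeezed
    mean_r = sum(r for r, _ in bright) / len(bright)
    mean_c = sum(c for _, c in bright) / len(bright)
    return (mean_r, mean_c)

frame = [[0,   0,   0,   0],
         [0, 250, 255,   0],
         [0, 240, 245,   0],
         [0,   0,   0,   0]]
print(led_centroid(frame))  # (1.5, 1.5): the glove's LED sits mid-frame
```

The real application presumably does this per glove per frame and maps centroid displacement onto Google Earth's navigation controls.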

(Picture by techepics.)

Teddy: a Sketching Interface for 3D Freeform Design

Via: Teddy: A Sketching Interface for 3D Freeform Design (by Takeo Igarashi), a Java-applet drawing program that takes the 2D images you draw and renders them in 3D. The commercial version can also be found here. Video there (32Mb).

The user draws several 2D freeform strokes interactively on the screen and the system automatically constructs plausible 3D polygonal surfaces. Our system supports several modeling operations, including the operation to construct a 3D polygonal surface from a 2D silhouette drawn by the user: it inflates the region surrounded by the silhouette making wide areas fat, and narrow areas thin. Teddy, our prototype system, is implemented as a JavaTM program, and the mesh construction is done in real-time on a standard PC.
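The "wide areas fat, narrow areas thin" behavior can be approximated very crudely: give each point inside the silhouette a height proportional to its distance from the boundary. Here's a grid-based sketch of that intuition (my simplification; Igarashi's actual algorithm works on a triangulated polygon, not a pixel grid).

```python
# Crude Teddy-style inflation: multi-source BFS from everything outside the
# silhouette, so each inside cell's height is its distance to the boundary.
# Wide regions get tall centers ("fat"), narrow regions stay low ("thin").

from collections import deque

def inflate(mask):
    """mask: grid of 0/1 cells; returns per-cell height above the plane."""
    rows, cols = len(mask), len(mask[0])
    dist = [[0 if not mask[r][c] else None for c in range(cols)]
            for r in range(rows)]
    queue = deque((r, c) for r in range(rows)
                  for c in range(cols) if dist[r][c] == 0)
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r-1, c), (r+1, c), (r, c-1), (r, c+1)):
            if 0 <= nr < rows and 0 <= nc < cols and dist[nr][nc] is None:
                dist[nr][nc] = dist[r][c] + 1
                queue.append((nr, nc))
    return dist

mask = [[0, 0, 0, 0, 0],
        [0, 1, 1, 1, 0],
        [0, 1, 1, 1, 0],
        [0, 1, 1, 1, 0],
        [0, 0, 0, 0, 0]]
print(inflate(mask)[2][2])  # 2: the silhouette's center gets the most height
```

Mirroring this height field below the plane would give the closed, inflated blob the applet produces.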

Why do I blog this? Even though it's a bit old (1999), it's quite relevant to other projects today concerning straightforward 3D modeling of simple objects.