Tangible/Intangible

Gummi toys

A while ago, wandering around Zürich, I ran across some curious boxes in a Chupa Chups store; it turned out they were super-nice toys meant for creating special "dishes". For instance, take a look at the Fruchtgummi Yummi Gummi Maschine (TOGGO):

First, the name is great, and second, the object is marvelous. The "Kellogg's Cereal Bar Maker" is curious too, offering an interesting user interface:

Finally, Sweet eat (what a name!?) has this "marshmallow maker" and a "schokamell".

Why do I blog this? Because the toy industry seems to design very pertinent "user interfaces" for tangible and "creative crafting" activities (not linked to digital-world interaction, but this is not my point). Not to mention some underlying cultural assumptions at stake as well (which I find less interesting). I put "creative crafting" in quotes because it's of course limiting, but it might be fun to have this sort of user interface to create stuff in a "virtual world".

Gadget about light for kids

Digital Light Studio:

This is the digital light machine that allows children to create light sculptures by using seven freehand control knobs to manipulate 32 LEDs mounted on a spinning post under a 360° dome. In demonstration mode, the LEDs light up in different patterns, speeds, and directions to display any one of over 50 pre-programmed images, including a fountain, UFO, elephant, and pirate. A warp button spins, rotates, bounces, and disassembles images. An animation button plays up to 14 animated sequences, including a dragonfly, penguin dance, and blinking eye. Original light sculptures and animations may be created by using the knobs in tandem with normal, mirror, and kaleidoscope modes to draw, rotate, and invert images.
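
Mechanically this is a persistence-of-vision (POV) display: the spinning post sweeps its LED column around the dome, and an image emerges one angular column at a time. Here is a minimal sketch of that principle; the column count, the hardware callback and the mode remapping are my own illustrative assumptions, not the toy's actual firmware.

```python
# Minimal sketch of a POV display like the one described above: 32 LEDs
# on a spinning post light up column by column, so a 2D image emerges as
# the post rotates. The set_leds callback is hypothetical hardware glue.

NUM_LEDS = 32          # LEDs mounted on the spinning post
COLUMNS = 120          # assumed angular resolution: columns per revolution

def render_frame(image, angle_deg, set_leds):
    """Light the LED column matching the post's current angle.

    image: list of COLUMNS integers, each a 32-bit mask (1 bit per LED).
    angle_deg: current rotation angle read from the motor encoder.
    set_leds: callback that drives the physical LEDs (hypothetical).
    """
    column = int(angle_deg / 360.0 * COLUMNS) % COLUMNS
    set_leds(image[column])

# A "mirror" mode, as mentioned above, could simply remap column order:
def mirror(image):
    return image[::-1]
```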

Why do I blog this? I really like this kind of interface, very tangible with a gestural dimension; IMO this is the direction in which robots, video games and interactive toys are heading. Turning knobs to make your character roll, with a tangible instantiation in the form of light (REZ was so light-oriented that you could think about removing the display and having this sort of lightform).

Nintendo R.O.B.

Who remembers the Nintendo R.O.B. (Robotic Operating Buddy)? An accessory for the Nintendo Entertainment System, released in 1985 in Japan as the "Famicom Robot" and later that year as R.O.B. in North America.

The R.O.B. functions by receiving commands via optical flashes from a television screen. With the head pointed always at the screen, the arms move left, right, up, and down, and the hands pinch together and separate to manipulate objects on fixtures attached to the base. Gamers without experience might wonder how R.O.B. relays data back to the NES, and in fact there is no direct way to do so. In Gyromite, one of R.O.B.'s base attachments holds and pushes buttons on an ordinary controller. In Stack-Up the player is supposed to press a button on his or her own controller to indicate when R.O.B. completes a task. While the Robot Series games were among the most complex of their time, they were reliant upon the honor system.
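
A hedged sketch of how such screen-to-robot optical signalling could work: the console flashes a screen region frame by frame, and the robot's photodetector decodes the pattern back into a command. The command table and timing below are invented for illustration, not Nintendo's actual encoding.

```python
# Sketch of R.O.B.-style one-way optical signalling (illustrative only).

import time

COMMANDS = {             # hypothetical command -> flash pattern (bits)
    "ARM_UP":     [1, 0, 1, 1, 0],
    "ARM_DOWN":   [1, 0, 0, 1, 1],
    "ARMS_OPEN":  [1, 1, 0, 0, 1],
    "ARMS_CLOSE": [1, 1, 1, 0, 0],
}
FRAME = 1 / 30.0         # one flash slot per TV frame (NTSC ~30 Hz)

def send(command, flash_screen):
    """Flash the TV screen to transmit one command.

    flash_screen(on): hypothetical callback that turns a screen region
    bright (True) or dark (False) for one frame.
    """
    for bit in COMMANDS[command]:
        flash_screen(bool(bit))
        time.sleep(FRAME)
    flash_screen(False)

def decode(samples):
    """Match a pattern sampled by the robot's photodetector to a command."""
    for name, pattern in COMMANDS.items():
        if samples == pattern:
            return name
    return None
```

Note that the channel is strictly one-way, which is exactly why the games had to rely on the honor system for anything flowing back from the robot.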

An interesting video here.

Why do I blog this? Even though it was not a commercial success, the ideas developed by Nintendo are quite innovative; this buddy metaphor is interesting, and there are curious connections with tangible interfaces.

NordiCHI workshop about near-field interaction

We recently posted the 15 accepted papers for the NordiCHI workshop "Near field interactions" that Timo, Julian and I are organizing. They tackle diverse aspects of the interactions that may emerge in the context of the internet of things, with cell phones as enablers. What is tremendously interesting is the large variety of disciplines we have: designers, HCI people, architects, industrial designers. A selection of images from submitted papers:

Experientia's new blog about playful learning with tangible interfaces

Experientia's new blog about playful learning with tangible interfaces is here.

Playful & Tangible is about playful learning with new interfaces, particularly in museums and entertainment environments. It documents many inspirations and examples of playful and tangible interactions and interfaces, and has a strong interaction design focus. Most of the content is by Héctor Ouilhet and Alexander Wiethoff, who worked as Experientia interns during the summer of 2006.

Good stuff.

What about voice?

I am not following voice recognition and its potential applications closely, but today I was confronted with three papers about it in my daily scans. Even though it's still R&D-oriented, each paper delivered some promising messages about a technology that I am skeptical about (based on previous research projects and readings). First there is this ACM Queue discussion by John Canny (University of California, Berkeley), which is actually a great piece about the future of HCI. Canny quotes Jordan Cohen (formerly of VoiceSignal, now of SRI International):

"The killer application is probably going to end up being some kind of interface with search, which seems to be the very hot topic in the world today; for mobile search especially, speech is a pretty reasonable interface, at least for the input side of it,"

This "search" concept is what I ran across this morning in a Business Week article by Steve Hamm, there is a presentation fo a curious application called TellMe about voice-driven Web information:

The idea is to create mobile search services that can make it easy for those on the go to find people, businesses, and information. That goes for any phone, but especially those equipped with browsers. A tourist might bark "restaurants," "sushi," and "downtown" into his cell phone and then see listings, read online reviews, make reservations, and retrieve a map with directions. "It has taken us six years to get to this point, but now we can really start to deliver on our original mission," says McCue, TellMe's CEO. (...) Skeptics point out that despite technology advances, voice recognition still turns off many consumers, who remember past glitches. But experts say that will change when systems combine voice, text messaging, and graphic info from Web pages. Each mode will be used for what it does best. "People will be using voice to launch into their search, and they'll want to see the information on a screen," says David Albright, executive director for marketing for Cingular Wireless, which is working with TellMe.

Yes, of course these last points I quoted are recurrent, but as presented in this Speech Technology Magazine issue, there are other applications:

Use your telephone or cell phone to talk with Google—search the Web for answers to your questions, extract the information chunks you need, and listen to the results...Rather than struggling to find the answer to a specific question by chasing links across a Web site, you can simply click a button on the GUI screen and be connected to a human or artificial agent... instruct your oven through your cell phones...
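
For the record, the multimodal pattern these quotes converge on is simple to express: voice to launch the search, a screen to show the results. Here is a minimal sketch of that flow, where recognize() and web_search() are hypothetical stand-ins rather than any vendor's real API.

```python
# Sketch of the "each mode used for what it does best" pattern: speech
# for input, screen for output. Both backends are illustrative stubs.

def recognize(audio: bytes) -> str:
    """Hypothetical speech recognizer: audio in, query terms out.
    Stubbed with a canned result for illustration."""
    return "restaurants sushi downtown"

def web_search(query: str) -> list[dict]:
    """Hypothetical search backend returning listings for the query."""
    return [{"title": "Sushi Zen", "snippet": "Downtown, 4.5 stars"}]

def voice_search(audio: bytes) -> None:
    # The user barks the query; listings, reviews and maps come back
    # on the screen rather than being read out loud.
    query = recognize(audio)
    for result in web_search(query):
        print(result["title"], "-", result["snippet"])

voice_search(b"...")   # e.g. "restaurants", "sushi", "downtown"
```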

Why do I blog this? I don't know whether it's apophenia, but I ran across those three articles today. So what? I am still dubious about speech technologies, but there seems to be confidence in this avenue.

Artwork that changes to suit your mood

People from the University of Bath (UK) have developed artwork that changes to suit your mood. It's called "empathic painting"; the university webpage is more verbose about it:

"empathic painting" - an interactive painterly rendering whose appearance adapts in real time to reflect the perceived emotional state of the viewer. The empathic painting is an experiment into the feasibility of using high level control parameters (namely, emotional state) to replace the plethora of low-level constraints users must typically set to affect the output of artistic rendering algorithms. We describe a suite of Computer Vision algorithms capable of recognising users' facial expressions through the detection of facial action units derived from the FACS scheme. Action units are mapped to vectors within a continuous 2D space representing emotional state, from which we in turn derive a continuous mapping to the style parameters of a simple but fast segmentation-based painterly rendering algorithm. The result is a digital canvas capable of smoothly varying its painterly style at approximately 4 frames per second, providing a novel user interactive experience using only commodity hardware.

Why do I blog this? If the world's infrastructure reacted to my emotions it would be crazy. Imagine mellow sidewalks...

Lollipop as user-interface

Regine complemented yesterday's post about tongue-based interactions with this right-on-the-spot innovation: the lollipop as a user interface (by Lance Nishihira and Bill Scott):

Participants suck on lollipops embedded with sensors to control robotic babies in a race. (...) Sensors transmitted each sloppy stroke to a laptop that was controlling the movements of several robotic toys. "I'm trying to think which one of our properties can be driven by a lollipop," joked Scott, a member of Yahoo's platform design group. "Maybe Yahoo Games." The "Edible Interface" was one of 10 prototypes featured at Yahoo's University Design Expo, an annual event that explores how humans interact with technology.

(picture by Gary Reyes / Mercury News)
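
A hedged sketch of the control loop described above: sensors in the lollipop report each lick stroke to a laptop, which translates them into movement commands for a robotic toy. The stroke representation and the toy's command API are invented for illustration.

```python
# Sketch of the "Edible Interface" loop: lick stroke -> laptop -> robot.

class RobotBaby:
    """Stand-in for the robotic toy controlled by the laptop."""
    def move(self, speed: float) -> None:
        print(f"robot moving forward at speed {speed:.2f}")

def on_lick_stroke(strength: float, robot: RobotBaby) -> None:
    """Map one sensed lick stroke (strength in [0, 1]) to robot motion:
    the more vigorous the sucking, the faster the racer goes."""
    robot.move(min(1.0, strength * 1.5))

on_lick_stroke(0.6, RobotBaby())   # -> robot moving forward at speed 0.90
```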

Why do I blog this? A curious interface; what happens when the interface is more "invasive" than just a joypad? Would I like to control cell-phone games or billboards through this sort of interface?...

About tongue-based interactions

People interested in tongue-based interactions should have a glance at this thesis (in Japanese, though); it presents results from different tests/analyses of potential stimulus recognition (at least judging from what Babelfish managed to translate).

The next step is then to find uses, as in Nikawa's work: "Tongue-Controlled Electro-Musical Instrument", The 18th International Congress on Acoustics, Vol. III, pp. 1905–1908, April 2004.

This study aims to develop a new electronic instrument that even severely handicapped people with quadriplegia can play in order to improve their quality of life (QOL). Ordinary orchestral and percussion instruments require fine movements of the limbs and cannot be used by those with quadriplegia. In this study, we made a prototype of an electronic musical instrument that can be played by tongue movement. This instrument is composed of an operation board inside the mouth and a sound generator. The signals emitted from the operation board are transmitted to the sound generator equipped inside a personal computer. Music is generated through speakers.
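
A hedged sketch of the instrument architecture described above: an in-mouth operation board reports tongue contact positions, and a sound generator on the PC turns them into notes. The grid layout and note mapping are assumptions for illustration (loosely modelled on the nine-button retainer mentioned below), not Nikawa's actual design.

```python
# Sketch of a tongue-playable instrument: pad grid in the mouth -> notes.

# A 3x3 pad grid, each pad mapped to a MIDI note number of a C major scale.
PAD_TO_NOTE = {
    (0, 0): 60, (0, 1): 62, (0, 2): 64,   # C4 D4 E4
    (1, 0): 65, (1, 1): 67, (1, 2): 69,   # F4 G4 A4
    (2, 0): 71, (2, 1): 72, (2, 2): 74,   # B4 C5 D5
}

def on_tongue_contact(row: int, col: int, play_note) -> None:
    """Trigger a note when the tongue touches a pad.

    play_note: hypothetical callback into the PC sound generator,
    e.g. sending a MIDI note-on to a synthesizer and out the speakers.
    """
    note = PAD_TO_NOTE.get((row, col))
    if note is not None:
        play_note(note, velocity=100)

# Example: tongue touches the center pad -> G4.
on_tongue_contact(1, 1, lambda note, velocity: print("note-on", note, velocity))
```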

Another example is the tongue-controlled Nintendo GBA, a curious hack using a New Abilities TTK: a tongue-touch wireless keyboard transmitter (an orthodontic retainer with nine membrane buttons).

Others also use the tongue as a "third arm" for astronauts:

The proposed alternative hands-free computer control system ACCS - Alternative Computer Control System - (...) ACCS will provide pilots and astronauts with an additional flight control contour, which will allow for continuous computer control of the flying apparatus at max. G-force, vibration, as well as blindly due to blood surge back from retina. ACCS is placed in a person's mouth (and comprises a tongue controlled directional command module along with 12 additional commands). It does not interfere with breathing, talk and consumption of fluids.

Why do I blog this? A websurf about curious human-computer interaction systems...

HyperScan: Mattel RFID-enabled game console

It seems that Mattel is back in the video game console business with their HyperScan project. It's aimed at tweens (an 8-to-12-year-old audience) and consists of a console, a controller, a game CD, and six collector cards, each featuring a character or special power. The cards have embedded RFID chips, and acquiring new characters (= new cards) lets players get upgrades in the game. According to TG Daily:

"The black and red HyperScan console is about the size of a hard cover book when opened and can be folded up for easy carrying. There are two ports in the front for game controllers and a port in the back connects to a television. Games are started by inserting a game CD and then swiping an RFID-enabled character card over the console. (...) HyperScan is also trying to cash in on the red-hot collectible card phenomenon. Card-based games like Magic: The Gathering and Yu-Gi-Oh have millions of players and sanction tournaments with millions of dollars in prize money."

Why do I blog this? So there's going to be a new "touch" habit with this video game console: players will have to swipe a card on the console, and that's intriguing. Good stuff for Timo's project.

A table singing "hap hep hip hop"

Spanish designer Guillermo Lorenzo created this interactive installation:

Interactive audio-visual installation where players can modify sound by placing pucks on virtual tracks on two tables. One of the tables sings "hap hep hip hop" while the other serves as a mixing table. The visitors move the pucks on the table, which is divided into square regions. Each square senses the amount of light let through by the objects: the more objects, the more sound can be heard.
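
A minimal sketch of the table's sensing logic as described: each square region has a light sensor, and occlusion by pucks raises the volume of whatever sound is assigned to that square. The normalization and ranges are assumptions for illustration.

```python
# Sketch of the puck table: per-square light readings -> per-square volumes.

AMBIENT = 1.0   # normalized light level of an uncovered square

def square_volumes(light_levels: list[list[float]]) -> list[list[float]]:
    """Turn the sensor grid into per-square volumes in [0, 1].

    light_levels[r][c]: normalized reading, 1.0 = fully lit, 0.0 = dark.
    More pucks -> less light -> louder sound, as the description says.
    """
    return [[min(1.0, max(0.0, AMBIENT - level)) for level in row]
            for row in light_levels]

# Example: pucks stacked on square (0, 1) block most of the light.
grid = [[1.0, 0.2, 0.9],
        [0.95, 1.0, 0.6]]
print(square_volumes(grid))   # square (0, 1) plays loudest
```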

Dance Dance Revolution user study

Johanna Höysniemi, "International survey on the Dance Dance Revolution game", Computers in Entertainment (CIE), April–June 2006, Article No. 8. The article is an account of a study of a specific form of physically interactive game-playing: dance gaming.

An online questionnaire was used to study various factors related to Dance Dance Revolution (DDR) gaming. In total, 556 respondents from 22 countries of ages 12 to 50 filled in a questionnaire which examined the players’ gaming background, playing styles and skills, motivational and user experience factors, social issues, and physical effects of dance gaming, and taking part in dance-gaming related activities. The results show that playing DDR has a positive effect on the social life and physical health of players, as it improves endurance, muscle strength and sense of rhythm, and creates a setting where new friends can be found.

Why do I blog this? This is of interest to people (especially game designers) who will have to think about these issues with regard to new consoles such as the Wii. I keep a close eye on this sort of user study because there are not that many of them, and because they might form the grounding recommendations for some tangible game design ideas.

Bruce Sterling at Ubicomp

It seems that Bruce Sterling will give the keynote presentation at Ubicomp 2006. It's always good to have a science-fiction writer bring some fresh air into a scientific conference.

Ubicomp: Reifying the Fantastic: Suppose a world really occurs where ubiquitous computing is as common as electricity and radio are today. What would that look and feel like and how would we describe it? Bruce Sterling has been working on a science fiction novel with exactly this topic, and has some thoughts to share on all things physical, fabbable, ambient, findable, and pervasive.

This is more or less what Sterling said he was working on when he gave his talk at LIFT06: what it would be like when everyware is everywhere. We'll see what he has to say about it.

Tap dance performance

Tap-N-Bass by Lalya Gaye, Valerie Bugmann and Alexander Berman is basically an improvised tap dance performance where the sounds of wired-up tap shoes are picked up by piezo contact microphones and remixed live, resulting in drum-n-bass-inspired music.

Drum-n-bass is one of the most exhilarating music styles that have emerged during the last few years. Noticing pattern similarities between certain rhythms in drum-n-bass and in tap dancing, we decided to see what would happen if we crossed these two genres. In Tap-n-bass, we aimed at making a tap dance performance that would produce booming bass and fast syncopated rhythms reminiscent of drum-n-bass, while staying true to the tradition of tap dancing and its characteristic sound. The music is produced live from sounds picked up by contact microphones attached to the shoes. The sounds are filtered and remixed live through a mixer board and a custom-made program running on a laptop. The Tap-n-bass performance is improvised and collaborative, in terms of the dialogue established between the laptop remixer and the tap dancers.
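
A hedged sketch of what the live processing chain could look like: piezo contact mics produce sharp transients on each tap, a simple onset detector spots them, and each onset can trigger a bass hit underneath. The threshold values and the trigger callback are illustrative assumptions, not the performers' actual patch.

```python
# Sketch of tap-to-bass triggering: piezo transients -> onsets -> samples.

def detect_onsets(signal: list[float], threshold: float = 0.5,
                  refractory: int = 400) -> list[int]:
    """Return sample indices where a tap transient starts.

    refractory: samples to skip after an onset (~9 ms at 44.1 kHz),
    so one physical tap doesn't fire multiple triggers.
    """
    onsets, skip_until = [], 0
    for i, x in enumerate(signal):
        if i >= skip_until and abs(x) > threshold:
            onsets.append(i)
            skip_until = i + refractory
    return onsets

def remix(signal: list[float], trigger_bass) -> None:
    """For each detected tap, fire the drum-n-bass layer underneath."""
    for i in detect_onsets(signal):
        trigger_bass(at_sample=i)

remix([0.0, 0.9, 0.1, 0.0], lambda at_sample: print("boom @", at_sample))
```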

Digital but physical surrogates

Ambient information devices such as Monkey Business, Nabaztag or Availabot are related to the idea of embedding awareness within a tangible artifact. This was addressed by Kuzuoka, H. and Greenberg, S. in 1999 in their paper "Mediating Awareness and Communication through Digital but Physical Surrogates", ACM CHI'99 Video Proceedings and Proceedings of the ACM SIGCHI '99 Conference Extended Abstracts. There is a video here. Some excerpts:

Digital but physical surrogates are tangible representations of remote people positioned within an office and under digital control. Surrogates selectively collect and present awareness information about the people they represent. (...) Because these devices are located in the physical world, they attract one's attention through natural environmental cues (sounds, movement, etc.), are easily and naturally manipulated, and can serve as dedicated and responsive communication conduits.

Then they present examples of such surrogates:

The first class of our surrogates illustrates how activities of a remote person can be embodied within a physical surrogate located in a local office. (...) The next class of surrogates illustrates how a person can explicitly express different degrees of interest in others by manipulating a surrogate. (...) The final class of surrogates illustrates how they can be used to mediate communication.

And the authors point to relevant issues related to awareness surrogates:

First, awareness surrogates are caricatures with only limited ability to express information. Consequently, surrogates are best suited for portraying only limited notions of availability that abstract one's activity: while still providing a general sense of availability, this lessens the risk of intrusion (...) Second, surrogates are a natural way to control video and audio quality [8], which in turn preserves privacy and minimizes distraction. (...) Third, surrogates can express different levels of salience, and thus can mitigate distraction.
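
A minimal sketch in the spirit of these surrogates: abstract a remote person's activity into a coarse availability level, then express it as a physical pose rather than detailed video. The levels, the activity cues and the actuator API are my own assumptions, not the paper's implementation.

```python
# Sketch of a "digital but physical surrogate": coarse awareness -> pose.

from enum import Enum

class Availability(Enum):
    AWAY = 0
    BUSY = 1
    AVAILABLE = 2

def abstract_activity(keyboard_events: int, in_meeting: bool) -> Availability:
    """Caricature the remote person's state: coarse on purpose, so the
    surrogate conveys availability while lessening the risk of intrusion."""
    if in_meeting:
        return Availability.BUSY
    return Availability.AVAILABLE if keyboard_events > 0 else Availability.AWAY

def drive_surrogate(state: Availability, set_pose) -> None:
    """Map availability to a physical pose via a hypothetical actuator
    callback; movement in the periphery attracts attention naturally."""
    pose = {Availability.AWAY: "slumped",
            Availability.BUSY: "turned_away",
            Availability.AVAILABLE: "facing_user"}[state]
    set_pose(pose)

drive_surrogate(abstract_activity(12, False), print)   # -> facing_user
```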

Why do I blog this? It's intriguing to see how this trend emerged, evolved, and is now closer to the mass market. I am interested in how this could enhance the gaming experience.

Table tennis for three

Expected at the open sessions of Ubicomp 2006: Table Tennis for Three by Floyd Mueller, Martin R. Gibbs, Bo Kampmann Walther and Matt Adcock:

Table tennis provides a healthy exercise and is also a social pastime for players of all ages across the world. However, players have to be co-located in order to play, and only 2 or 4 players can play at the same time. We are presenting a design concept of a table tennis game playable by three players who are in three different locations, connected with a videoconference augmented with a novel game-play. It is aimed at achieving similar benefits known from co-located table tennis such as providing a health benefit and bringing people together to socialize.

Accelerometers and wearable systems

Knight, J. F., Bristow, H. W., Anastopoulou, S., Baber, C., Schwirtz, A., & Arvanitis, T. N. (2006). "Uses of Accelerometer Data Collected from a Wearable System." Personal and Ubiquitous Computing. The paper addresses the use of accelerometers in wearable systems for diverse applications.

It discusses and demonstrates how body mounted accelerometers can be used in context aware computing systems and for measuring aspects of human performance, which may be used for teaching and demonstrating skill acquisition, coaching sporting activities, sports and human movement research, and teaching subjects such as physics and physical education. (...) In particular, systems for the detection of activity status (including ambulatory mode), assessment of performance (such as match or technique analysis and studying skilled performance)
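
A hedged sketch of one use mentioned above, activity-status detection from body-mounted accelerometer data: classify windows of samples as resting, walking or running from simple signal statistics. The window size and thresholds are illustrative assumptions, not the paper's values.

```python
# Sketch of activity detection from 3-axis accelerometer windows.

import math

def magnitude(sample: tuple[float, float, float]) -> float:
    """Magnitude of one 3-axis accelerometer sample, in g."""
    x, y, z = sample
    return math.sqrt(x * x + y * y + z * z)

def activity_status(window: list[tuple[float, float, float]]) -> str:
    """Classify a window (e.g. 1 s of samples) by movement intensity."""
    mags = [magnitude(s) for s in window]
    mean = sum(mags) / len(mags)
    var = sum((m - mean) ** 2 for m in mags) / len(mags)
    if var < 0.01:
        return "resting"       # little deviation from gravity (1 g)
    elif var < 0.5:
        return "walking"
    return "running"

samples = [(0.0, 0.0, 1.0), (0.5, 0.2, 1.4), (0.1, 0.3, 0.6)]
print(activity_status(samples))   # -> walking
```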

Why do I blog this? This sort of data might be interesting to use as a new category of input in video games (to raise your farming activity in an MMORPG?).

On gestural interactions with games

An interview on Gamasutra with Katherine Isbister and Nicole Lazzaro about Intimate Relations in video games. Some excerpts I liked (though the whole interview is interesting):

G: What are your hopes for the gestural input, particularly with the Nintendo DS and the Wii remote?

KI: (...) I think the trick is to get designers thinking in new ways to take advantage of that tactile interface, and that means understanding the social component of what’s going on gesturally between people. (...) Our lab just got a grant for a motion capture system to study interpersonal gestural dynamics and I’m really hoping we can feed that back into these sort of designs. I’ve got a lot of NSF grants going towards that kind of research. Once motion capture gets to an affordable level we’re hoping we can have these dynamics boiled down to a computable level and literally create gestural interfaces. (...) NL: And I think Katherine, there’s more you could say about the psychology of touch.

KI: Well it depends on what the touch is, holding hands can mean different things in different cultures, that’s a really sensitive issue. We saw that in our workshop, we told everyone that a $100 bill was hidden on someone and people were afraid to touch each other because of the boundary we typically have between other people. It turned out it was in my back pocket. I think everyone has that hesitation, but if you can get people to do something a little risky they automatically bond and their intimacy level goes up.

Why do I blog this? I like this idea of thinking about the social aspects of gesturing; this is so important in terms of how bodies occupy space (proxemics...). How can we use this to design new interactions? Can interpersonal gestural dynamics provide a good rationale for design? Those are intriguing questions. Besides, the touch issue is also interesting (right, Timo!). Their conclusion "Constraint is design" also rings a bell.