Tangible/Intangible

Glume: a translucent modeling medium

Glume is a cool project carried out at the MIT Tangible Media Group:
Glume is a computationally enhanced translucent modeling medium which offers a generalized modular scalable platform with the physical immediacy of a soft and malleable tangible material.

The Glume system consists of soft and translucent augmented interlocking modules, each embedded with a full spectrum LED, which communicate capacitively to their neighbors to determine a network topology and are responsive to human touch.

Glume is proposed as a viable tool for the modeling, visualization and simulation of three-dimensional data sets, in which users construct and manipulate models whose morphology is determined through the distributed system. The Glume system provides a novel means for the expression and investigation of organic forms and processes not possible with existing materials, by relaxing the rigidity of structure found in previous solid building-block approaches.

Why do I blog this? From an aesthetic point of view I like glowing artefacts; besides, from an interaction design POV, this project proposes a cool tangible interaction mode.

Cubicle: cube-based interactions

Cubicle is an interesting research project carried out by the Lancaster University Computing Department in the field of tangible computing:

Cubicle is a multifaceted, multi-sensory wireless tangible input device. While its physical attributes are modular to fit user preference and ability, Cubicle functionality is established by a set of well-defined, non-verbal dynamics. Cubicle can be used to reduce the complexity of current mobile technologies and to map the most commonly used functions to non-verbal dynamics that make sense to a particular application.

For example, the most common functions of a mobile phone might be: turn on/off, make a phone call, and delete. Each of these is mapped to a physical action: turn on/off becomes squeeze; making a call becomes turn; and delete becomes shake. Key to this design and development is the knowledge that possible actions become apparent only when considering object affordance.
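To make that kind of gesture-to-function mapping concrete, here is a minimal sketch in Python; the gesture names and handler functions are hypothetical illustrations in the spirit of the design, not code from the actual Cubicle implementation.

```python
# Hypothetical sketch of mapping non-verbal gestures to phone functions,
# in the spirit of the Cubicle design (not the actual implementation).

def toggle_power():
    print("Phone toggled on/off")

def start_call():
    print("Dialing the selected contact")

def delete_item():
    print("Deleting the current item")

# Each physical action detected by the device maps to exactly one function.
GESTURE_MAP = {
    "squeeze": toggle_power,
    "turn": start_call,
    "shake": delete_item,
}

def handle_gesture(gesture: str) -> None:
    """Dispatch a recognized gesture to its mapped phone function."""
    action = GESTURE_MAP.get(gesture)
    if action is None:
        print(f"Unrecognized gesture: {gesture}")
        return
    action()

if __name__ == "__main__":
    for g in ("squeeze", "turn", "shake"):
        handle_gesture(g)
```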

Some applications here. More information in the following paper: K. Van Laerhoven, N. Villar, A. Schmidt, G. Kortuem and H.-W. Gellersen. "Using an Autonomous Cube for Basic Navigation and Input". In Proceedings of ICMI/PUI 2003. ISBN: 1-58113-621-8; ACM Press. Vancouver, Canada. 2003, pp. 203-211.

Interactive seat/Fauteuil interactif

Interactive seat/Fauteuil interactif by Guillaume Bautista and Sébastien Parayre. This interactive seat allows the seated person to control video, sounds and odors with a keyboard and a joystick.

The "Fauteuil Interactif" is an armchair that allows the person who sits in it to control video, sound and even smells with a touch-sensitive keyboard or a joystick. The goal is to give the spectator the tools needed to rediscover the senses. The aim is for the spectator to bring to life the space in which they find themselves and to create their own (visual and sonic) choreography. Through this device, the visitor takes part in the creation of the work and has the feeling of having lived a sensory experience.

A shoe that controls the amount of TV kids watch

Via the BBC, a shoe that controls the amount of TV kids watch:

The shoe - dubbed Square-eyes - has a unique insole that records the amount of exercise a child does and converts it into television watching time. One button on the shoe - the brainchild of a student at west London's Brunel University - records the amount of steps taken by the child over the day. Another transmits this information to a base station connected to the TV.
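As a rough illustration of the idea, converting a recorded step count into earned screen time might look like the sketch below; the exchange rate and daily cap are made-up assumptions, not figures from the Square-eyes prototype.

```python
# Hypothetical conversion of a day's step count into TV-watching minutes,
# loosely modelled on the Square-eyes concept. The rate and daily cap are
# invented for illustration only.

STEPS_PER_TV_MINUTE = 100   # assumed exchange rate
MAX_TV_MINUTES = 120        # assumed daily cap

def tv_minutes_earned(steps: int) -> int:
    """Convert a daily step count into allowed TV minutes."""
    if steps < 0:
        raise ValueError("step count cannot be negative")
    return min(steps // STEPS_PER_TV_MINUTE, MAX_TV_MINUTES)

if __name__ == "__main__":
    for steps in (0, 4500, 20000):
        print(f"{steps} steps -> {tv_minutes_earned(steps)} minutes of TV")
```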

Here are both the base station that converts steps to TV time and the shoe's insole. It seems that there are more and more shoe/foot/toe-based interactions lately!

Nintendo Revolution possible game controller

Via Joystiq, a picture of a rumored/possible game controller for the next Nintendo console: lots of people think it's a fake. It may be. Anyway, I found it interesting to report this design here since it makes me think of a paper by Leganchuk, Zhai and Buxton: Manual and Cognitive Benefits of Two-Handed Input: An Experimental Study (ACM Transactions on Computer-Human Interaction, 1998). In this article, the authors explore the potential benefits of such two-handed input through empirical research:

Bimanual manipulation may bring two types of advantages to human-computer interaction: manual and cognitive. Manual benefits come from increased time-motion efficiency, due to twice as many degrees of freedom being simultaneously available to the user. Cognitive benefits arise as a result of reducing the load of mentally composing and visualizing the task at an unnaturally low level, which is imposed by traditional unimanual techniques. (...) Overall, the bimanual techniques resulted in significantly faster performance than the status quo one-handed technique, and these benefits increased with the difficulty of mentally visualizing the task, supporting our bimanual cognitive advantage hypothesis. There was no significant difference between the two bimanual techniques.

This study makes two types of contributions to the literature. First, practically, we studied yet another class of transaction where significant benefits can be realized by applying bimanual techniques. Furthermore, we have done so using easily available commercial hardware, contributing to our understanding of why bimanual interaction techniques have an advantage over unimanual techniques.

This paper is a must-read for people interested in two-handed computer input; it provides a relevant literature review (not up to date, but pertinent anyway).

Technojewelry and wearable technology

IDEO's Technojewelry devices are appealing. There are the Penta Phone and the Ring Phone:

Penta Phone and Ring Phone are concepts for mobile phones. The design takes its cue from the universal gesture for using a telephone, but is feasibly grounded in the nanotechnology research emerging from start-ups and established companies. Calls can be initiated by raising the hand to the proper position, and voice-activated interaction will allow instant communication. The little-finger units will vibrate to indicate incoming calls and the thumb unit will beam the sound towards the ear when the hand is held in the listening position.

I also like the GPS Toes a lot:

Based on nano-derived technology, GPS Toes are toe rings that communicate with a GPS receiver kept in a purse or worn on a belt. Wearing one on each foot, the GPS Toes device will guide the wearer to a preset destination by vibrating and lighting up to signal upcoming direction changes. The left toe ring will indicate left turns and the right one right turns, whether driving on the highway, walking on city streets, or hiking on the mountain trail.
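The routing of navigation cues to the two rings is simple enough to sketch; the function names and cue format below are hypothetical, not IDEO's design.

```python
# Hypothetical dispatcher sending an upcoming-turn cue to the left or right
# toe ring, in the spirit of the GPS Toes concept (not IDEO's actual design).

def signal_ring(side: str, distance_m: float) -> None:
    """Vibrate and light up the ring on the given side for an upcoming turn."""
    print(f"{side} ring: vibrate + light ({distance_m:.0f} m to turn)")

def announce_turn(direction: str, distance_m: float) -> None:
    """Route a turn instruction from the GPS receiver to the matching ring."""
    if direction == "left":
        signal_ring("left", distance_m)
    elif direction == "right":
        signal_ring("right", distance_m)
    else:
        raise ValueError(f"unknown direction: {direction}")

if __name__ == "__main__":
    announce_turn("left", 50)
    announce_turn("right", 200)
```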

People interested in this kind of stuff might read The experience of enchantment in human-computer interaction by John McCarthy et al., a paper that will appear in the Personal and Ubiquitous Computing journal.

Digital tape-drawing system

An old project but still relevant: Digital Tape Drawing (.pdf) by Ravin Balakrishnan, George Fitzmaurice, Gordon Kurtenbach and William Buxton.

Tape drawing is the art of creating sketches on large scale upright surfaces using black photographic tape. Typically used in the automotive industry, it is an important part of the automotive design process that is currently not computerized. We analyze and describe the unique aspects of tape drawing, and use this knowledge to design and implement a digital tape drawing system.

A video is available here.

Great Eye-toy interaction with PS3

Brian D. Crecente describes the PS3 game demonstrations that Sony recently gave. The point I am interested in is the use of the Eye Toy to interact with the game:

The ones I found most impressive include the demonstration of real world objects being used to manipulate the game world with the help of the Eye Toy camera. In the demo, the creator of the Eye Toy held up two plastic cups and used them to move their digital equivalents on the giant screen behind him. Using the Eye Toy, he was able to scoop up water and pour it from cup to cup. He also tossed sprays of water across a tub filled with floating plastic toys.

We are getting closer and closer to a mass-market implementation of richer and more original tangible interaction.

Chainsaw game controller

Another crazy game controller: the bloody Resident Evil 4 Chainsaw Controller designed by NubyTech, in partnership with Capcom. According to Lik-sang:

Designed with an actual Chainsaw lying in the middle of the R&D lab during brainstorming sessions, this massive sculpted controller comes with a built-in sound chip, imitating the roar of the powerful weapon. The attention to detail goes as far as having gory blood marks on the blade. When not chopping Zombies into pieces, the Resident Evil 4 Controller can rest on its stylish stand, on top of your TV, well in sight for the bill collectors knocking at your door. Very impressive in size, it will bring you fear and respect at the same time; yet light and ergonomic, it stays comfortable and does not affect gameplay at all.

Well, this is just a joypad reshaped into another tool; there are (fortunately) no chainsaw-based interactions!

Pigs can play video games (snout-based joystick)

I came across this stunning research project:

Animal scientist Stan Curtis and graduate student Candace Croney have embarked on a unique research project that uses video games to gauge the mental abilities of pigs. (...) to learn more about how pigs perceive the world by using video games to measure their cognitive ability

We're now sure that pigs can play computer games. In this experiment, the animals use their snouts to move joysticks and hit targets on a computer screen with a cursor.

Croney hopes to quantify the cognitive level of pigs by encouraging them to do something that many parents wish their children wouldn't do so often–play video games. However, the pigs won't be playing arcade favorites like Mario World or Mortal Kombat, at least not at first. "We start with a very simple task," Croney says. "The computer screen has a series of different icons, or shapes, on one side and a single shape on the other. First, we try to get the pig to move the single shape across the screen to touch the one that matches it. Once the pig accomplishes that, we move on to more complex tasks. Pigs are known to be smart animals, and we expect them to do more than recognize symbols. Our tests are similar to many used in child cognitive psychology. They'll give us an idea of how advanced pigs are in mental development."
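The task described above is a classic match-to-sample setup; a toy version of the trial logic might look like the sketch below. The shapes, trial structure and simulated responses are invented for illustration, not taken from the actual study.

```python
# Toy sketch of the match-to-sample task described above: one sample shape
# on one side of the screen, several candidate shapes on the other, and the
# trial counts as correct when the moved shape touches the matching one.

import random

SHAPES = ["circle", "square", "triangle", "star"]

def run_trial(sample: str, candidates: list[str], chosen: str) -> bool:
    """Return True if the chosen candidate matches the sample shape."""
    if sample not in candidates:
        raise ValueError("the matching shape must appear among the candidates")
    return chosen == sample

if __name__ == "__main__":
    random.seed(0)
    correct, trials = 0, 10
    for _ in range(trials):
        sample = random.choice(SHAPES)
        candidates = random.sample(SHAPES, k=3)
        if sample not in candidates:
            candidates[0] = sample
        chosen = random.choice(candidates)  # stand-in for the animal's response
        correct += run_trial(sample, candidates, chosen)
    print(f"{correct}/{trials} correct trials")
```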

When it's time for a pig to play a game, the researchers position the computer monitor so that the pig can easily see it while it manipulates a joystick with its snout. "As video game enthusiasts can tell you, some joysticks aren't very durable," Croney says. "They couldn't withstand the strength of a pig. That created an unusual challenge–just how do you modify a joystick for a pig? We came up with a design that encased the shaft of a standard joystick in a steel handle, then added a device like a gearshift knob to the top of the joystick to help the pig control it."

As in many games, there are rewards:

The research team, which includes several undergraduates in animal bioscience, also had to design a special food delivery system. "Food is used as a reward to motivate the pigs to play the game," says Croney. "When the pigs correctly move the object on the screen, a bell rings, telling the pig that it's about to get a reward. Then a treat drops through a tube right into the pig's cup." The researchers also have installed a videotape system to record each experiment from four angles, which can be played back on screen simultaneously. "The videotapes help us carefully analyze the pigs' behavior while they are using the joysticks," Croney says.

I would be delighted to have a glance at the coding scheme they use to do the qualitative analysis of the participants' behavior!

Real-time, multi-site, distributed, interactive music performance

A project by the Integrated Media Systems Center (University of Southern California): Distributed Immersive Performance (DIP):

This project is working on the architecture, technology and experimental applications of a real-time, multi-site, distributed, interactive and collaborative environment called Distributed Immersive Performance (DIP). The objective of DIP is to develop the technology for live, interactive musical performances in which the participants - subsets of musicians, the conductor and the audience - are in different physical locations and are interconnected by very high fidelity multichannel audio and video links. DIP is a specific realization of broader immersive technology - the creation of the complete aural and visual ambience that places a person or a group of people in a virtual space where they can experience events occurring at a remote site or communicate naturally regardless of their location

They ran different sets of performances, described here, such as a two-way Interactive Duet or a remote master class.

Audio d-touch: tangible interface for music

Audio d-touch by Enrico Costanza:

Audio d-touch is a set of three tangible interface applications for music composition and performance: the Augmented Stave, the Tangible Drum Machine and the Physical Sequencer.

Audio d-touch uses a consumer-grade web camera and customizable block objects to provide an interactive tangible interface for a variety of time-based musical tasks such as sequencing, drum editing and collaborative composition. Three instruments are presented here. Future applications of the interface are also considered.
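As a rough illustration of how tangible blocks can drive a time-based task like drum sequencing, here is a minimal sketch; the block/step representation and instrument names are invented, not Audio d-touch's actual code.

```python
# Hypothetical sketch: camera-detected blocks placed on a grid are read as
# steps of a drum pattern, loosely in the spirit of the Tangible Drum Machine.

# Each detected block is (instrument, step index within an 8-step bar).
detected_blocks = [
    ("kick", 0), ("kick", 4),
    ("snare", 2), ("snare", 6),
    ("hihat", 0), ("hihat", 2), ("hihat", 4), ("hihat", 6),
]

def build_pattern(blocks, steps=8):
    """Turn detected block positions into a step pattern per instrument."""
    pattern = {}
    for instrument, step in blocks:
        pattern.setdefault(instrument, [False] * steps)[step % steps] = True
    return pattern

if __name__ == "__main__":
    for instrument, hits in build_pattern(detected_blocks).items():
        row = "".join("x" if hit else "." for hit in hits)
        print(f"{instrument:>6}: {row}")
```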

A paper about it is available here (.pdf).

EyeDraw: draw pictures with eye movement

After the nose-based interface previously described in this blog, here is a simpler controller: EyeDraw (by Anthony Hornof, director of the Cognitive Modeling and Eye Tracking Laboratory, and developed with Anna Cavender and Rob Hoselton):

EyeDraw is a research project at the University of Oregon that enables users to draw pictures solely with the use of their eyes. (...) An eye tracker is used to detect eye movements and that data is interpreted by the application in order to allow users to click on buttons, choose starting and ending points, and save and retrieve drawings. (...) EyeDraw is being designed for children and teenagers with severe mobility impairments. Although other software exists for them to type and read, a drawing program will be a novelty for these users.
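A central problem in gaze-based drawing is telling "looking" apart from "drawing"; a common way to do that is dwell time, sketched below. This is a generic dwell-time approach with invented thresholds, not necessarily EyeDraw's exact algorithm.

```python
# Generic dwell-time sketch: a gaze point held within a small radius for long
# enough is treated as a deliberate selection (e.g. placing a line endpoint).
# Thresholds are invented; this is not EyeDraw's actual code.

import math

DWELL_SECONDS = 0.75   # assumed time the gaze must rest to count as a "click"
DWELL_RADIUS = 30.0    # assumed pixel radius within which the gaze "rests"

def detect_dwells(samples, dwell_s=DWELL_SECONDS, radius=DWELL_RADIUS):
    """samples: time-ordered list of (timestamp_s, x, y) gaze points."""
    dwells = []
    anchor = None  # (t, x, y) where the current fixation started
    for t, x, y in samples:
        if anchor is None or math.hypot(x - anchor[1], y - anchor[2]) > radius:
            anchor = (t, x, y)
        elif t - anchor[0] >= dwell_s:
            dwells.append((anchor[1], anchor[2]))
            anchor = None  # require the gaze to move away before the next dwell
    return dwells

if __name__ == "__main__":
    gaze = [(i * 0.1, 100, 100) for i in range(10)]                 # steady fixation
    gaze += [(1.0 + i * 0.1, 100 + 40 * i, 200) for i in range(5)]  # sweep away
    print(detect_dwells(gaze))  # -> [(100, 100)]
```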

MouseField: motion controller

MouseField by Masui, T., Tsukada, K. and Siio, I.:

a new simple and versatile input device called the MouseField, which enables users to control various information appliances easily without huge amount of cost. A MouseField consists of an ID recognizer and motion sensors that can detect an object and its movement after the object is placed on it. The system can interpret the user's action as a command to control the flow of information.
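A minimal sketch of that idea, combining the recognized object ID with the detected motion to choose a command; the object IDs, motions and commands below are hypothetical illustrations, not examples from the MouseField paper.

```python
# Hypothetical sketch of the MouseField idea: the command is chosen from the
# combination of which tagged object is placed on the pad and how it is moved.

COMMANDS = {
    ("music-cd-0042", "rotate"): "adjust volume",
    ("music-cd-0042", "slide"): "skip track",
    ("photo-card-0007", "rotate"): "zoom photo",
    ("photo-card-0007", "slide"): "next photo",
}

def interpret(object_id: str, motion: str) -> str:
    """Map (placed object, detected motion) to an appliance command."""
    return COMMANDS.get((object_id, motion), "no action")

if __name__ == "__main__":
    print(interpret("music-cd-0042", "rotate"))   # adjust volume
    print(interpret("photo-card-0007", "slide"))  # next photo
```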

By placing and moving an object, the user can interact with the system. They provide an interesting example:

Teddy bear used as a remote control

I like this teddy bear used as a remote control (I am pretty much into kids' tangible interfaces lately, working on a consulting project about it). This project, carried out at IDI by Didier Hilhorst and Nicholas Zambetti, is called Quattro.

Quattro is a radio alarm clock housed in an enigmatic translucent enclosure without any markings. Its function depends on its position: orientating it on its side, upright or horizontally, it becomes (respectively) a radio, an alarm timer, or a clock. As you approach Quattro it detects your presence and reveals illuminated touch-sensitive controls relevant to its current function. This minimal object works in tandem with a cuddly, cute bear; squeezing him performs various actions including a remote ‘snooze’ operation.
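The orientation-to-function mapping is easy to sketch; the orientation labels and mode names below are just an illustration of the behaviour described above, not the project's actual firmware.

```python
# Hypothetical sketch of Quattro's orientation-dependent behaviour: the
# device's active function follows how it is placed, and squeezing the
# companion bear acts as a remote "snooze".

ORIENTATION_TO_MODE = {
    "on_side": "radio",
    "upright": "alarm timer",
    "horizontal": "clock",
}

def current_mode(orientation: str) -> str:
    """Return the active function for the detected orientation."""
    return ORIENTATION_TO_MODE.get(orientation, "standby")

def on_bear_squeezed(mode: str) -> str:
    """The cuddly bear works as a remote; squeezing it snoozes the alarm."""
    return "snooze" if mode == "alarm timer" else "no-op"

if __name__ == "__main__":
    for o in ("on_side", "upright", "horizontal"):
        mode = current_mode(o)
        print(o, "->", mode, "| squeeze bear:", on_bear_squeezed(mode))
```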

Convergence between interaction design and robot design

Interestingly, we see more and more mentions of research about robots with regard to interaction design and user experience concerns. This is a good move (in the sense that if robot designers want their products to be bought or "consumed", they need to pay attention to what users may do/want/feel with robots). Maybe this is due to improvements in robot interfaces and the fact that the field is now growing from prototypes to real (and affordable) products. There is a relevant paper about it in the latest issue of Interactions. In his column, Lars Erik Holmquist addresses this issue. First he states that tangible computing is thriving and may be a good way to interact with robots:

Consider that one of the most influential recent developments in human-computer interaction has been tangible interfaces—that is, computer input and output that is not just based on the standard screen, mouse and keyboard setup but instead involves a variety of physical and tangible input and output devices. If these interfaces are to support truly tangible output, they must be able to move or otherwise affect the world around them. And what else could we call a physically actuated, computer-controlled entity but a robot?

Then he introduces the convergence between robots and interaction design:

Basic research in robot communication is quite different from designing robotic products for real users. Fortunately, some interaction designers are already taking an interest in robots. For instance, in the People and Robots project at Carnegie Mellon University, interaction designers and robot researchers are exploring "robotic products," which they hope will be "intelligent, social, and able to assist us in our day-to-day needs." Carl diSalvo, a CMU Ph.D. student, has done some interesting work where ethnography and interaction design concepts were applied to robots. (...) At the Viktoria Institute we recently organized the workshop Designing Robot Applications for Everyday Use. Approximately 15 participants were an interesting mix of robot researchers and interaction designers who came from both industry and academia. The event served to spotlight the imminent convergence of robotics and other areas, including interaction design.

So now let's move forward and...

Meetings between robots and interaction designers will become even more frequent in the future. While Web pages and GUIs will always be an important part of the profession, designers who are aware of the potential of physically actuated products—call them robots or something else—will have a clear advantage over those who stay with the purely visual modes of interaction

DIY mobile projector

The Pooch (a team from Lancaster University, UK) has an awesome project: a DIY mobile projector:

Due to the lack of suitable commercial projectors of this type, .:thePooch:. set about creating just such a device. Using simple and cheap off-the-shelf components, it is possible to achieve unexpectedly good results. There will obviously be resolution and brightness issues, as one would expect with any low power device. However the device developed is practical for a wide range of applications requiring small scale projections in suitably dark environments.

In this project (very kindly described on their website), they turn a handheld TV set (plus a magnifying glass and a gun-like spotlight) into a mobile projector, with a great design:

The MouseHaus Table

Thanks to Regine: MouseHaus Table, a new interactive table:

MouseHaus Table provides a computationally enhanced physical environment to support discussion and decision making about urban design. MouseHaus Table provides a physical interface that enables participants who have no previous computer experience to interact with a pedestrian simulation program.

Unlike other tangible interaction projects, MouseHaus enables users to employ ordinary materials in the interface. Users register objects to represent urban design elements by showing them under the camera, then they use the objects to construct a street layout for simulation.
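A toy sketch of that registration step, where an arbitrary object shown to the camera is bound to an urban design element and then placed to build the street layout; the element names and grid representation are invented for illustration, not MouseHaus Table's actual code.

```python
# Toy sketch of MouseHaus-style object registration: any physical object shown
# to the camera can be bound to an urban design element, then placed on a grid
# that stands in for the street layout fed to the pedestrian simulation.

class TableSession:
    def __init__(self, width=10, height=6):
        self.bindings = {}                               # object id -> element type
        self.layout = [["empty"] * width for _ in range(height)]

    def register(self, object_id: str, element: str) -> None:
        """Bind a camera-recognized object to an urban design element."""
        self.bindings[object_id] = element

    def place(self, object_id: str, x: int, y: int) -> None:
        """Place the bound element at a grid cell of the street layout."""
        self.layout[y][x] = self.bindings.get(object_id, "unknown")

if __name__ == "__main__":
    session = TableSession()
    session.register("red-block", "building")
    session.register("paper-strip", "street")
    session.place("paper-strip", 3, 2)
    session.place("red-block", 4, 2)
    print(session.layout[2][3], session.layout[2][4])  # street building
```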

The website provides various footage about it. They used it to study pedestrian behavior patterns: "users discuss and attempt to arrange urban design elements to produce a preferred movement pattern and density level" (hey Jamie, have a look!).

SonicForms: tangible interfaces for audio visual environments

Chris sent me his new project: Sonicforms, which is a must-see for tangible interface fans like me!

Sonicforms is an open source research platform for developing tangible interfaces for audio-visual environments. The aim of the project is to improve this area of musical interaction by creating a community knowledge base and open tools for production. By decentralising the technology and providing an easier entry point, artists and musicians can focus on creating engaging works, rather than starting from the ground up. Sonicforms exists as:

  1. a central repository for others to learn how to make their own interfaces and share their experiences.
  2. a set of tools and strategies for extending open source software to create these projects.
  3. a physical installation that will be exhibited, showing other artists' creative content through online submission.

The way the project reshapes the relationship between consumer and producer is great! More information about how it works is available here.