Tangible/Intangible

Tangible Acoustic Interfaces for Computer-Human Interaction (Tai-Chi)

Today I had lunch with Alain Guisan/Crevoisier, artistic director of the art company b-polar, and Myiuki Warabiushi, his girlfriend, a Japanese dancer. What he does in his art company seems really appealing; I will certainly visit their workspace. Besides, Alain is working on a European project: Tangible Acoustic Interfaces for Computer-Human Interaction (Tai-Chi):

This project explores how physical objects, augmented surfaces and spaces can be transformed into tangible-acoustic embodiments of natural seamless unrestricted interfaces. The ultimate goal is to design Tangible Acoustic Interfaces (TAI) that employ physical objects and space as media to bridge the gap between the virtual and physical worlds and to make information accessible through large size touchable objects as well as through ambient media.

The method that will be developed is based on the principle that interacting with a physical object modifies its surface acoustic patterns, due for instance to the generation of acoustic vibrations (passive method) or the absorption of acoustic energy (active method) at the points of contact. By visualising and characterising such acoustic patterns, it will be possible to transform almost anything (for example, a wall, window, table top or arbitrary 3D object) into an interactive interface (a giant flat or 3D touch screen), opening up new modes of computer-user interaction for responsive environments. Because of their numerous advantages over other methods, including the spatial freedom they provide to the user, the robustness with which they can be constructed and the ease of accommodating multiple users simultaneously, acoustics-based interfaces will become a major sensing paradigm in the future, implying enormous potential for the whole computer and information industry.
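To make the passive method concrete, here is a minimal sketch (my own illustration, not Tai-Chi code) of how a tap on a surface could be located from the time-difference-of-arrival (TDOA) of the acoustic wave at a few contact sensors. The sensor layout, wave speed and brute-force grid-search decoder are all made-up assumptions:

```python
import math

# Assumed illustration values: in-plane wave speed and three contact mics
# on a 1 m x 1 m table top.
SPEED = 500.0  # m/s
SENSORS = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]  # mic positions (m)

def arrival_times(tap):
    """Time of flight of the surface wave from the tap point to each sensor."""
    return [math.dist(tap, s) / SPEED for s in SENSORS]

def locate(tdoas, step=0.005):
    """Grid search: find the point whose predicted time differences
    (relative to sensor 0) best match the measured ones."""
    best, best_err = None, float("inf")
    n = int(1.0 / step)
    for i in range(n + 1):
        for j in range(n + 1):
            p = (i * step, j * step)
            t = arrival_times(p)
            pred = [ti - t[0] for ti in t[1:]]
            err = sum((a - b) ** 2 for a, b in zip(pred, tdoas))
            if err < best_err:
                best, best_err = p, err
    return best

# Simulate a tap at (0.3, 0.7) and recover it from the time differences alone.
t = arrival_times((0.3, 0.7))
measured_tdoas = [ti - t[0] for ti in t[1:]]
print(locate(measured_tdoas))  # close to (0.3, 0.7)
```

Real systems would of course estimate the TDOAs by cross-correlating noisy sensor signals rather than reading them off directly, but the localization idea is the same.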

Why do I blog this? I still have to see what emerges from these tangible interaction projects; they seem very compelling. On a different note, we exchanged interesting ideas with Alain about a potential not-for-profit research lab (about user experience and interactive stuff), which is a recurrent idea lately with some folks around Geneva :)


Augmented reality for poultry trimmers

Interestingly, poultry trimmers can now use augmented reality, as explained in this food news article:

Currently, poultry processors use human screeners to inspect the carcasses. The screeners communicate instructions to trimmers using gestures when they find a bird with undesirable parts.

Now researchers at Georgia Tech Research Institute (GTRI) claim they have designed a computer system that automates the inspection process, making it faster and more efficient. The system eliminates the need for the human screeners and is being field tested prior to being commercialised for use.

The first communications system uses a see-through, head-mounted display worn by a trimmer. It directly overlays graphical instructions on a trimmer’s view of the birds as they pass him on the line.

The second system uses a laser scanner, mounted in a fixed location near the processing line, to project graphical instructions directly onto each bird that requires some action, such as trimming. The system tracks the carcass and beams the product onto it.

“It’s easy to see this technology working in a poultry plant,” said Blair Macintyre, an assistant professor in the Georgia Tech College of Computing and an augmented reality expert. “The question is, ‘What is the best implementation of the technology to satisfy the environmental constraints?’”

Connected pasta: I already posted something about human-poultry interaction with augmented reality.

Foot-based controller without external sensors

Thanks Chris for pointing me to this Sensing Gamepad developed at the Interaction Laboratory (Sony CSL) by Jun Rekimoto: Electrostatic Potential Sensing for Enhancing Entertainment Oriented Interactions

This project introduces a novel way to enhance input devices to sense a user's foot motion. By measuring the electrostatic potential of a user, this device can sense the user's footsteps and jumps without requiring any external sensors such as a floor mat or sensors embedded in shoes. We apply this sensing principle to the gamepad to explore a new class of game interactions that combine the player's physical motion with gamepad manipulations. We are also investigating other possible input devices that can be enhanced by the proposed sensing architecture such as a portable music player that can sense foot motion through the headphone and musical instruments that can be affected by the players' motion.

Why do I blog this? It's great to have such an innovation, allowing you to control games with your feet, without a foot sensor.

IHT about WiFi rabbit Nabaztag

The IHT has an article about Nabaztag:

''This rabbit is not beautiful, it is not smart, and it is not that useful, but this first generation has already sold out,'' said Haladjian [CEO of Violet, the company that makes Nabaztag -nicolas]. ''Wireless-linked devices will soon be everywhere, and we are now taking the first steps using Wi-Fi.''

(...) For now, the rabbit remains a basic communications device that uses lights, sounds and movements of its ears to discreetly pass on messages to anyone nearby. Sounds can include MP3 files of music, voice or noises, and any combination of colored lights and patterns can be used to signal specific information. It costs €95, or $115, plus a €3.90 monthly subscription fee. Some of the functions that are available include a shining yellow light to indicate that the weather will be sunny; a rising or falling stock price shown by a pattern of lights; or the twisting of an ear when someone wants to get in touch without interrupting a meeting with a phone call. By far the most popular application among the initial users, however, is the ability to send an SMS, or short messaging system, message to the device to make it throb red, telling a loved one that they are being thought about.

(...) ''Your alarm clock, coffee maker and heater should all adjust in a synchronized manner to the time at which you want to get up,'' Haladjian said. ''The ultimate goal is to link all devices within a home and even a city for your convenience.''

(...) Some of the things he is working on include an announcement by the rabbit when a specific bus nears the neighborhood in the morning; a teddy bear that can teach a child a language; an iPod-like device that receives TV broadcasts across the network; and video games that mix reality on the streets of Paris with the action on the screen. ''Believe me, I am not taking the trouble to build this network to help people download e-mail in a café,'' Haladjian said. ''Our success will depend on getting people to use the rabbit and other devices that rely on a pervasive high-speed wireless network.''

All of this makes a lot of sense if you consider that Haladjian's company, Violet, "is paired up with another company he founded, Ozone, which is building a Wi-Fi network to cover Paris". Why do I blog this? I like what they do at Violet and I strongly believe in their future products, which rely on tangible computing and wireless technologies. Connected pasta: other posts about Nabaztag on this blog are here.

From remote control to dialogue systems

Via lau:

Philips wants to ditch the remote control. Instead, you have to talk to a dialogue system, or Smart Companion, as the Dutch consumer electronics giant calls its newest invention. The Smart Companion will act as a friend in the home and, according to Philips, provide an easy-to-use interface to the digital world. (...) One, Dimi, looks like a modern lamp with a rotating head. The other, iCat, has all the characteristics of a Japanese robot toy, and comes equipped with 13 servos that control different parts of the face, such as the eyebrows, eyes, eyelids, mouth and head position. iCat can look happy, surprised, angry and sad.

Why do I blog this? Even though remote control interfaces today are horrible, I am still dubious about such new kinds of controllers. I am curious to see how they will be appropriated by users.

Workshops on Mobile Music Technology follow-up

Lalya brought to my attention that the websites of the 1st and 2nd international workshops on Mobile Music Technology have been updated and now contain paper and poster proceedings, PowerPoint presentations and pictures from the workshops, plus, soon, on-line reports about the two workshops and possibly notes taken by participants (if they agree to have them on-line):

Why do I blog this? I am not into the field of music technology; however, I sometimes have a glance at what they do because there could be relevant connections between locative media, video games and their work. Besides, it's cool to have follow-ups to workshops, especially for people who could not attend ;) Also very interesting is the Ubiquitous Music panel at Siggraph in LA on August 1st. The panel will be moderated by Lars Erik and Atau, and will include Arianna Bassoli, Gideon D'arcangelo, Akseli Anttila, and Lalya Gaye.

Fursr work: Mr Punch and the Musical Particle Accelerator

New projects from Fursr (the guys who did the PainStation). Mr Punch is a very intriguing game:

Disguised as a Punch and Judy Show, Mr. Punch emerges as a crossover between an early automaton from the 30ties and a modern videogame: 2 players grab control of mechanical puppets and – by means of lever action – try to knock each other's wooden heads off the iron shoulders. The childish nature of the automaton provokes a likewise infantile but also aggressive behaviour in the players, and when the curtain closes the spectators have had their fun, too!

The interface is pretty neat:

Another project I like is the Ohrwurmbeschleuniger (Musical Particle Accelerator):

The German word „Ohrwurm“ stands for small insects. People believe that they crawl into human ears and start to live there ("earwigs"). But Ohrwurm also stands for a piece of music that you hear and can't forget anymore. Melodies that come back into your consciousness when you least expect it. And you can't let them go. They are strongly connected with your emotional state and flow of associations.

With the Ohrwurmbeschleuniger you can accelerate the development process of Earworms. You can operate the object like a commercial microwave oven. But instead of putting food into the oven, you have to focus your attention on the „Accelerator-Tube“ inside. It is mounted in place of the rotary plate. Using the „Ohrwurmpipette“ (Earworm Pipette) to apply the Earworms to your ear, you can listen to the available Earworms. For this, the Pipette must be very close to the ear. Two of the Earworms must be chosen for the process of acceleration (also referred to as “collision process”). On the outside of the Accelerator-Tube there are microwave sensors which continuously measure the microwave field. During the process the Accelerator-Tube rotates according to the cooking program chosen. The cooking program also influences the microwave field inside the oven. If you choose, for example, „defrost“, the Tube rotates slowly and the field is rather weak. If you choose the highest cooking program, the Accelerator-Tube rotates very fast and a stable 700-watt microwave field is generated inside the oven.

A good video about it here. Why do I blog this? Those are 2 interesting examples of how to renew 'interactions' with existing technologies. Connected pasta: I already mentioned projects with microwave ovens here (grape racing) and here (weird stuff to do with your microwave).

Research on Interactive Table at CRAFT

JB, our new research associate, just finished the description of his project related to interactive tables. It's all described here. The purpose of the project is to design, build and experiment with furniture with embedded technology to support casual collaborative learning. Currently there are 3 designs he is working on:

Docking table: Arriving at the table with, or without, your laptop and directly accessing your account, a shared space, the project portal, ...
Noise-sensitive table: Using light patterns reacting to the noise level to provide feedback on the conversation dynamics.
Social maps: Using the information collected by the tables to show, on the wall or on the web, maps of: activity; work areas (social places or silent work areas); communities (1st year, 2nd year).

Why do I blog this? I am looking forward to seeing what will emerge from this. My involvement in this project is related to our CSCW course; as teaching assistants we may have to test it with our students.

Signs of Artificial Life along the Seine in Paris

Carol-Ann Braun. "April in Paris: Signs of Artificial Life along the Seine," IEEE MultiMedia, vol. 12, no. 3, pp. 14-18, July-September 2005.

Abstract: We barely notice the scores of video cameras capturing our movements as we go about our business in the city. But what if every public space and surface were transformed into an interactive experience, capturing our movements, gestures, sounds, and actions and using these to produce new sounds, images, text, and actions? Bus stops could come alive and talk back, advertisements could look back at you and talk to you, and public park robotic devices could play games with you. Will public places eventually become our social barometer as well as director? Will billboards display the populace's discourse? Carol-Ann Braun brings us to some special streets in France where the city is literally alive with art and explores what it might mean for us to live in a world of artistic interactivity.

Why do I blog this? It's an interesting account about interactive art.

Tangible interactions and kids

The effect of tangible interfaces on children's collaborative behaviour (pdf). Stanton, D., Bayon, B., Abnett, C., Cobb, S. and O'Malley, C. In Proceedings of Human Factors in Computing Systems (CHI 2002), ACM Press, p. 820.

The physical nature of the classroom means that children are continually divided into small groups. The present study examined collaboration on a story creation task using technologies believed to encourage and support collaborative behaviour. Four children used tangible technologies over three sessions. The technology consisted of a large visual display into which they could input content (using Personal Digital Assistants (PDAs) and a scanner), record sounds (using RF-ID tags) and navigate around the environment using an arrangement of sensors called 'the magic carpet'. The children could then retell their story using bar-coded images and sounds. The three sessions were video recorded and analysed. Results indicate the importance of immediate feedback and visibility of action for effective collaboration to take place.

Why do I blog this? Even though the paper is short, it's interesting to see what can be extracted from the analysis of kids using tangible devices. For instance, I like this result: "Children were often observed to collaborate without verbal communication indicating that the design of the technology encouraged collaborative behaviour." and would love to see how it works. Awareness of others' actions (and hence feedback/feedthrough) seemed to be very important for that matter.

The Skybluepink Interactive Box

Skybluepink Interactive Box by Amy Branton and Sarah Morris (Skybluepink)

The Skybluepink Interactive Box teaches French to 4-5 year-olds through play. Children interact with the magic box and its animated characters, using cards embedded with radio-frequency ID tags to teach basic colours and shapes. Teachers are able to control which cards the children use, thus supporting both linear and non-linear learning. (...) How can we support young children to learn a new language through playful interactions with real and digital objects? This project aims to explore this question through the Skybluepink Box, a ‘magical box’ to help French language learning through play for 4 and 5 year-olds. By placing magical cards (RFID-tagged cards) into the magical box (a tag reader) in particular sequences, children trigger a range of different animations and representations on a computer screen and hear and see French vocabulary. The cards and box offer a tangible interface through which to engage children with language through colours, shapes, rhyme and song on a computer screen. This interaction forms the basis for a series of play activities in French, including colouring in; naming and classification; pattern making; shape building; picture/narrative creation; music making; magical transformation.

More about it here.

Solar-powered Electronic Lifeforms

(Via), elf (i.e. electronic lifeforms), a project by Pascal Glissmann and Martina Höfflin:

'Elfs' are small mechanical systems powered by solar energy that behave as natural living systems in many aspects. (...) 'Elf' is a two-part installation developed in the context of the research project 'electronic-life-forms' by Pascal Glissmann and Martina Höfflin. On one hand, the 'elfs' are documented in their natural habitat, and the fading contrast of electronics and nature gives the scenario a surprising common impression. On the other hand, the imprisonment of these life forms in Weck-Preserving-Glasses reminds one of childhood adventures, exploring and discovering the world around us. The light-sensitive 'elfs' desperately use their chaotic sounds and noisy movements to call the attention of the outside world.

Why do I blog this? I found it amazingly cute + interesting. There is also a book about it that is going to be released soon.

Head Mounted Display immersed within the aquarium environment

PANAquarium is a project carried out at University of Southern California:

This collaboration between IMSC and the USC School of Fine Art involved building a circular fish tank around a panoramic camera, with live tropical fish swimming within and a coral reef photo serving as background on the outermost tank wall. The users wore a Head Mounted Display, immersing them within the aquarium environment.

Following the aquarium immersion, the coral reef photo background was manually removed to reveal the background activity through the glass, which then "breaks the illusion" for the user. This application serves as a test for a future version in which the panoramic camera will be placed within a sealed plexiglass tube and lowered into a very large commercial aquarium exhibit.

Connected pasta: there are these Swim goggles with heads-up display that seem to be way better!

A book you can literally walk into

Walk-In Comix by Maribeth Back, Dale MacDonald, Mark Meadows, Scott Minneman, Beatrice Gallay, A. Balsamo, and Maureen Stone (the reading lab):

a graphic novel you can literally walk into -- it's printed on the walls, floor, and ceiling of a small set of labyrinthine rooms we built at the Tech. Talk about getting immersed in a book...

The story tells the adventures of five teenagers who literally get lost in a world of text and can only find their way out by learning to read it. The exhibit's maze-like structure reflects the story's twists and turns.

Why do I blog this? I like this idea of embedding interaction into a physical activity.

Music Insects Performance

Music Insects by Toshio Iwai (1992). Permanent collection at the Exploratorium, San Francisco, U.S.A.

These "music insects" "react" to color dots on the screen. When they pass over such dots, they trigger musical scales, sounds, and different light patterns. The user selects colors from a palette with a track ball and paints in their path, and the insects perform the colors when they pass over them. The insects' direction can be changed with certain colors, and colors can be painted to achieve less random musical "performances." This piece is a sort of tool for visual music performance.
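The rule set described above is simple enough to sketch in a few lines. This toy simulation is my own guess at the mechanics, not Iwai's actual code: an "insect" walks across a painted grid, a coloured cell triggers a note, and one special colour turns it 90 degrees. Grid, colours and note mapping are all illustrative assumptions.

```python
# Toy grid: '.' = empty, 'R'/'G' = painted colour dots, 'T' = turn colour.
GRID = [
    ".....",
    ".R.G.",
    ".....",
    ".T...",
    ".....",
]
NOTES = {"R": "C4", "G": "E4"}  # colour -> note (made-up mapping)

def run(steps, pos=(1, 0), heading=(0, 1)):
    """Walk the insect across the grid, wrapping at the edges;
    return the sequence of notes it 'performs'."""
    played = []
    x, y = pos
    for _ in range(steps):
        cell = GRID[x][y]
        if cell in NOTES:
            played.append(NOTES[cell])       # colour dot: play its note
        elif cell == "T":
            heading = (heading[1], -heading[0])  # turn colour: rotate 90 degrees
        dx, dy = heading
        x, y = (x + dx) % len(GRID), (y + dy) % len(GRID[0])
    return played

print(run(10))  # ['C4', 'E4', 'C4', 'E4']
```

Painting a different pattern of dots changes the "score", which is exactly what makes the original piece a visual music instrument.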

Wearable Body Organs: Scream Body

Just stumbled across this Scream Body project, carried out by Kelly Dobson between 1998 and 2004.
ScreamBody is a portable space for screaming. When a user needs to scream but is in any number of situations where it is just not permitted, ScreamBody silences the user’s screams so they may feel free to vocalize without fear of environmental retaliation, and at the same time records the scream for later release where, when, and how the user chooses.

Why do I blog this? Being myself a screamer when I need to release a bit of pressure, I find it curious.

Jacking into brains and extracting video

(via), an intriguing study from an article published in the Journal of Neuroscience in 1999:

Dr. Stanley is Assistant Professor of Biomedical Engineering in the Division of Engineering and Applied Sciences at Harvard University. He is the ultimate voyeur. He jacks into brains and extracts video.

Using cats selected for their sharp vision, in 1999 Garret Stanley and his team recorded signals from a total of 177 cells in the lateral geniculate nucleus - a part of the brain's thalamus [the thalamus integrates all of the brain's sensory input and forms the base of the seven-layered thalamocortical loop with the six-layered neocortex] - as they played 16-second digitized (64 by 64 pixels) movies of indoor and outdoor scenes. Using simple mathematical filters, Stanley and his colleagues decoded the signals to generate movies of what the cats actually saw. Though the reconstructed movies lacked color and resolution and could not be recorded in real time [the experimenters could only record from 10 neurons at a time and thus had to make several different recording runs, showing the same video], they turned out to be amazingly faithful to the original.
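The quote mentions "simple mathematical filters". As a rough illustration of that family of techniques (not Stanley's actual method or data), here is a sketch of linear decoding: fit ridge-regression weights that map synthetic "neural responses" back to "pixel" values, then reconstruct the stimulus. All dimensions and signals are toy values; the real experiment had far fewer simultaneously recorded cells than pixels, hence the blurry reconstructions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_frames, n_pixels, n_cells = 200, 16, 20  # toy sizes, not the real 64x64 movies

# Synthetic "movie" frames and a made-up linear encoding into cell responses.
stimulus = rng.random((n_frames, n_pixels))
encoder = rng.normal(size=(n_pixels, n_cells))
responses = stimulus @ encoder + 0.01 * rng.normal(size=(n_frames, n_cells))

# Fit decoder W so that responses @ W approximates the stimulus
# (ridge-regularised least squares).
lam = 1e-3
W = np.linalg.solve(responses.T @ responses + lam * np.eye(n_cells),
                    responses.T @ stimulus)

reconstruction = responses @ W
err = np.mean((reconstruction - stimulus) ** 2)
print(f"mean squared reconstruction error: {err:.5f}")
```

With more cells than pixels, as here, the reconstruction is nearly perfect; shrink `n_cells` well below `n_pixels` and the reconstruction degrades gracefully, which is the regime the cat experiment was in.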

The picture shows an example of a comparison between the actual and the reconstructed images. Why do I blog this? This is definitely amazing, and very promising in terms of human-machine interaction. Besides, if you're into brain/mind/cognition stuff, this blog is great. Connected pasta: I already blogged about using brain waves as game controllers.

Venting machines instead of vending machines

I just wanted to share the "stuff" we designed with my working group at the CAIF workshop. Along with regine, chris, sara, fabien, peggy and kevin, we did a pretty good job. The evaluation of the projects was conducted with a sonometer that measured the noise level due to people knocking on the table, and we ranked #2 :) The goal of the workshop was to envision new artifacts that may have a role in the future EPFL learning center. Our group focused on 3rd places (cafés, corridors, entrances... social spaces).

We worked on the concept of "vending machines" that may be turned into "venting machines". The point is to take advantage of an existing habit (the fact that people go to vending machines) to offer new perspectives:

  • Recharge thanks to a "PULL MACHINE" to improve your brainpower: through food (coffee, nuts, guarana), music (mp3, ringtones) or "power naps" in a "specific sleeping facility"
  • Release your stress with a "PUSH MACHINE" that you can hit, shake, kick, punch, tickle and also shout into. After doing that, you get a pill (a placebo :) ). At the end of the day, the energy put into the machine is released in a huge fountain or water stream (like Geneva's jet d'eau).
  • Support social exchange with a "PULL-PUSH MACHINE", which is somehow an interactive noticeboard

The presentation of our project is here (ppt, 12Mb). Thanks Chris for the drawings!