Tangible/Intangible

A telepresence garment

Skimming through Eduardo Kac's "Telepresence and Bio Art: Networking Humans, Rabbits and Robots (Studies in Literature and Science)", I ran across his ten-year-old project called The Telepresence Garment and found it of particular interest nowadays:

I first conceived the Telepresence Garment in 1995 to investigate the notion of the mediascape as an expanded cloth; i.e., to consider wireless networking as a new fabric that envelops the body. The Garment, which I finished in 1996, gives continuation to my development of telepresence art. This time, however, instead of a robot hosting a human, we find the roboticized human body itself converted into a host. The Garment was designed as an interactive piece to be worn by any local participant willing to allow his or her body to be engaged by others remotely.

A key issue I have been exploring in my work as a whole is the chasm between opticality and cognizance, i.e., the oscillation between the immediate perceptual field, dominated by the surrounding environment, and what is not physically present but nonetheless still directly affects us in many ways. The Telepresence Garment creates a situation in which the person wearing it is not in control of what is seen, because he or she cannot see anything through the completely opaque hood. The person wearing the Garment can make sounds, but can't produce intelligible speech because the hood is tied very tightly against the wearer's face. An elastic and synthetic dark material covers the nose, the only portion of flesh that otherwise would be exposed. Breathing is not easy. Walking is impossible, since a knot at the bottom of the Garment forces the wearer to be on all fours and to move sluggishly.

Why do I blog this? this nicely expresses how clothing is changing (will change), reshaped by emerging technologies such as ubiquitous/pervasive computing.

Visualization and Immersion of Life Sciences Data

Seeing is Believing is a very interesting article in The Scientist about information visualization. It tackles the fact that life scientists have to deal with a huge amount of information. The challenge is to develop relevant visual techniques.

Computers do a great job of finding patterns in data when they're programmed to look for them, notes Jim Thomas, who heads the National Visualization and Analytics Center at Pacific Northwest National Laboratory (PNNL) in Richland, Wash., "but many times, you are discovering what questions to ask. Only the human mind has the ability to reason with what is seen, apply other human knowledge, and develop a hypothesis or question." High-end visualization tools have been long used in applications such as the study of jet turbulence and by security experts looking for "chatter" in reams of telephone calls and transmissions, but only now are such tools being used in the life sciences, says H. Steven Wiley, director of the Biomolecular Systems Initiative at PNNL.

What is also intriguing is this sentence: "Without them, more data won't necessarily translate into better science", a nice evocation of Latour's inscription theory.

For that matter, it seems that VR is still around:

A next generation of visualization software may strive not just to offer a view, but allow the viewer to enter the data. This total immersion concept is the idea behind Delaware Biotechnology Institute's "cave," a Visualization Studio that Silicon Graphics developed, which allows users to literally immerse themselves in the data, both visually and physically. (...) One of the great benefits of the immersive system, Steiner says, is that scientists can "walk around" the data and peer at it from every angle, and do so collaboratively, either remotely or from the same room. And that, Steiner adds, is the great benefit of visualization in general: It can foster interdisciplinary collaboration by helping scientists from a variety of backgrounds understand a problem in order to solve it in a more effective manner.

(image taken from the Delaware Biotechnology Institute)

Why do I blog this? it's interesting to see that VR is still relevant in data manipulation.

Telebeads: Social Network Mnemonics for Teenagers

I've recently read j-dash-bi's latest paper and it's very nifty: Telebeads: Social Network Mnemonics for Teenagers by Jean-Baptiste Labrune and Wendy Mackay (IDC2006). It's actually a participatory design paper that describes how they designed a curious artifact:

This article presents the design of Telebeads, a conceptual exploration of mobile mnemonic artefacts. Developed together with five 10-14 year olds across two participatory design sessions, we address the problem of social network massification by allowing teenagers to link individuals or groups with wearable objects such as handmade jewelery. We propose different concepts and scenarios using mixed-reality mobile interactions to augment crafted artefacts and describe a working prototype of a bluetooth luminous ring. We also discuss what such communication appliances may offer in the future with respect to interperception, experience networks and creativity analysis.

The ring addresses two primary functions requested by the teens: providing a physical instantiation of a particular person in a wearable object and allowing direct communication with that person. (...) We have just completed an ejabberd server, running on Linux on a PDA, which will serve as a smaller, but more powerful telebead interface

See the bluetooth telebead ring and how to associate the ring and a contact image:

Why do I blog this? I like this idea of "mobile mnemonic artefacts" as part of a situated cognition framework: that's an interesting instantiation of communicating objects. Besides, the paper is full of good references about such devices.

Nabaztag + Everyware

In his book "Everyware: The Dawning Age of Ubiquitous Computing", Adam Greenfield says that:

I've never actually met someone who owns one of the "ambient devices" supposed to represent the first wave of calm technology for the home. There seems to be little interest in the various "digital home" scenarios, even among the cohort of consumers who could afford such things and have been comparatively enthusiastic about high-end home theater. (p91)

The Nabaztag wifi rabbit created by the French company Violet tries to go against this stance. Actually, and to be fair to Adam, what he is criticizing in his book is rather the very complicated technologies that were supposed to be "calm", "intelligent" and "ambient" in the digital home of the future imagined a few decades ago.

Why do I blog this? it's funny that I received my Nabaztag and Adam's book the same morning. I fully agree with lots of Everyware's claims; I'll post more about it when I've read it.

A MAZEing MOON - Digital Experimentation Scenarios for Science Learning

A MAZEing MOON (by Marc Jansen, Maria Oelinger, Kay Hoeksema, Ulrich Hoppe) is a nice example of an educational application that combines handhelds (PDAs) and programmable Lego bricks in a classroom scenario that deals with the problem of letting a robot escape from a maze.

It is specific to our setting that the problem can be solved both in the physical world by steering a Lego robot and in a simulated software environment on a PDA or on a PC. This approach enables the students to generate successful sets of rules in the simulation and to test these sets of rules later in physical mazes, or to create new types of mazes as challenges for known rule sets
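The rule-set idea above can be sketched in a few lines. The following is my own toy simulation (not the actual MAZEing MOON software), where the classic right-hand wall-follower rule steers a robot through a grid maze — the kind of rule set students could first test in simulation before trying it on the Lego robot:

```python
# Toy grid maze: 1 = wall, 0 = free. The layout, start and exit
# positions are invented for illustration.
MAZE = [
    [1, 1, 1, 1, 1],
    [1, 0, 0, 0, 1],
    [1, 1, 1, 0, 1],
    [1, 0, 0, 0, 1],
    [1, 1, 1, 0, 1],   # exit at the bottom edge
]
EXIT = (4, 3)
DIRS = [(-1, 0), (0, 1), (1, 0), (0, -1)]   # up, right, down, left

def free(pos):
    r, c = pos
    return 0 <= r < len(MAZE) and 0 <= c < len(MAZE[0]) and MAZE[r][c] == 0

def step(pos, heading):
    # rule set: prefer turning right, else straight, else left, else back
    for turn in (1, 0, 3, 2):
        h = (heading + turn) % 4
        nxt = (pos[0] + DIRS[h][0], pos[1] + DIRS[h][1])
        if free(nxt):
            return nxt, h
    return pos, heading

def escape(start=(1, 1), heading=1, limit=50):
    pos = start
    for _ in range(limit):
        if pos == EXIT:
            return True
        pos, heading = step(pos, heading)
    return pos == EXIT

escaped = escape()
```

The same rule table (`right, straight, left, back`) is what would be downloaded onto the physical robot, with the grid lookup replaced by its bump or light sensors.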

Why do I blog this? well, apart from the learning scenario, which is interesting (embedding problem solving into the control of a concrete, tangible device), I love this device:

Protospace: augmented CAD

Protospace Demo 1.2 by the Hyperbody Research Group at TU Delft.

Protospace Demo 1.2, successor of Protospace Demo 1.1, explores 1) the appliance of swarm behavior in an early stage of a building project and 2) the use of experimental user interfaces with motion tracking, wireless controllers and speech recognition. It is a tool for designing diagrammatic layouts, in 3D and with dynamic (as opposed to static) elements. (...) Protospace is as much the intelligent design tools it provides as the user interface(s) for interacting with them. We believe that C(A)AD systems will benefit from more natural interfaces than the classic mouse, keyboard and small computer screen. In Protospace Demo 1.2 we experimented with wireless controllers, motion tracking, speech recognition and sensorial 'playing' field.

Why do I blog this? I like this kind of embedded way of interacting.

Telephoning has lost its physicality

Via news.3yen: Telephoneboxing is an art installation with very clear aims:

Telephoning has lost its physicality; it has literary become weightless. The smaller the telephone gets, the easier it is to communicate, anytime, anywhere, with anyone. (...) What would communication mean if a phone call would become an extremely physical action? When dialing a number requires a lot of concentration and words need to be exclaimed?

"Telephoneboxing" is an installation which explores the borders of communication. In a 20ft container, 10 buttons are attached to the walls. The buttons look like boxing balls and that is exactly what they are. In order to make an international phone call, one puts on boxing gloves and hits the buttons to dial a number. When a connection is made, one has to stand in one specific spot and speak loudly in order to be heard. The answer can be heard on a spot a few meters further into the container. The calling person will automatically adjust the level of communication to his or her eagerness to talk and/or to his or her physical condition.

Why do I blog this? I like this idea of re-introducing physicality in phone communication using tangibility.

msdm: mobile strategies of display and mediation.

msdm:

a research-practice dedicated to mobile strategies of display and mediation. msdm projects explore media in context, including electronic tagging, locative media, games, bots, radio fm, para-architecture and urban screens, with an attention to collaborative experiments in free culture, and open source

Their flickr account is full of weird pictures that concern the Internet of Things:

And this is almost a thinglink

Why do I blog this? I like this idea of having "mobile strategies of display and mediation", it is intriguing; and I find the mediation concept pertinent: in a world of interconnected things, some mediation occurs, but to do what?

More about their work can be found at turbulence.

Kid drawing tablet on TV

V-Tech has an interesting tablet: V.Smile Art Studio

this art studio will help your child become the next budding Picasso! This interactive, creative studio provides opportunities to unleash your child’s imagination! The touch-sensitive drawing pad, which looks like an artist’s palette, and interactive stylus allow children to scribble, draw and learn while seeing their masterpieces appear on the television screen. With over 12 activities included, children can learn to draw lines and shapes, create pictures, color objects, experiment with mixing colors, and tap into their own creativity by drawing their own, unique masterpieces. With a save function included, young artists can save up to five pictures, add animations into their drawings, and then prepare a slideshow for viewing on the television screen. Plus, the Art Studio features fun games such as making toys, rainbow chase, and animation maker to keep children engaged.

There is an article in the NYT about it:

The role of the television screen continues to expand with the V.Smile Art Studio, a large battery-operated children's drawing tablet. The $30 device, made to work with the V.Smile TV Learning System, sold separately for $50, also includes one software "smartridge." Plug everything in, and your TV screen turns into a large blank easel surrounded by 15 color selections, plus icons for basic drawing functions like erase, fill, cut and paste.

The tethered stylus combines a pressure point with a magnetic tip that makes it possible to select or unselect screen items, essential for playing the included sorting games. There are also direction keys and a large "Enter" button, offering several ways to do the same thing. Children are also helped by the oral labels, activated by merely moving the cursor over any screen item. As many as five pictures can be saved in memory and turned into a simple screen saver.

While the resolution is crude, the device encourages children to create screen content rather than just watch it.

Why do I blog this? I like this idea of enhancing TV interaction with a tablet (would it be possible to use it on movies/ads, like drawing mustaches on TV news presenters?).

Gispen XS: table to support collaboration

Dutch designer Emilie Tromp pointed me to her Gispen XS table, which is aimed at facilitating creative collaboration (this is also the subject of her Master's thesis at Industrial Design Engineering of Delft University of Technology).

Table for creative collaboration and informal meeting in offices. High table with integrated computer that supports a dynamic way of meeting. By reducing the width and depth of the table the interaction is less formal and hierarchic. The table supports the creation of shared understanding by focussing on human-human interaction instead of human-computer interaction.

Meant to support creative collaboration and informal meeting in offices. The scope was to improve the creation of shared understanding by creating an informal atmosphere (by reducing the depth and width of the table, since distances are related to the level of formality of the conversation) and stimulating a positive body language with the collaborators (by adding tactile interesting aspects to the table).

Why do I blog this? I like the idea of playing on specific features to raise the informality. Besides, I am curious about the articulation of "the creation of shared understanding" and "creating an informal atmosphere".

Bits and pieces from the CrystalPunk Manifesto

I am actually in Utrecht, in the former utility area of a vacant 13-floor office building (for the "Crystalpunk Workshop for Soft Architecture"). Being rooted in self-education, DIY and drawing "connections between disconnected fields of knowledge" is the motto. Reading their description, I ran across this part that I find fundamental:

Now that we have found data, what are we going to do with it?!

Technologists have for decades been playing with the idea of the supposedly smart home: the entire house adaptive and responsive and proactive, providing conveniences like that resurfacing dystopian killer-app: the refrigerator that makes sure the milk never runs out. No matter how device-centric and profit-inspired these efforts are, and as such divided by a royal mile from the super-serendipity of Crystalpunk roomology, this workshop is moving in the same problem-space of obvious possibilities and unresolved puzzles of making sense from the surplus of automated data production. Everybody can generate a source of water by opening the tap, few are given to come up with conceptually stimulating ways to process the output.

Archikluge: Genetic Algorithm that evolves architectural diagrams

Just met Pablo Miranda Carranza at the Crystalpunk Workshop in Utrecht. One of Pablo's projects seems tremendously interesting: ArchiKluge

ArchiKluge is the first of a series of small experiments written in Java which explore ‘artificial creativity’, automatic design and generative approaches in architecture. ArchiKluge is a simple Genetic Algorithm that evolves architectural diagrams. It explores the qualities of design made by machines, devoid of any intention, assumptions or prejudices, and which often display a very peculiar form of mindlessly but relentlessly pounding against obstacles and problems until overcoming them, a manner of acting nature and machines commonly exhibit. (...) ArchiKluge implements a Steady State Genetic Algorithm with Tournament selection. (...) The fitness function consist of the addition of each cell’s added ‘shortest paths’, a measure often used in network analysis (and in space analysis such as Bill Hillier’s Space Syntax). (...) For illustrating the resulting circulations through the evolved layouts, the paths left by random walkers, or agents that move randomly through the lattice have been used.
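To make the mechanics concrete, here is a minimal steady-state GA with tournament selection in the spirit of the description above. This is my own toy sketch: the bit-string genome and the adjacency-counting fitness are invented stand-ins for ArchiKluge's lattice layouts and its shortest-path ("space syntax"-like) measure.

```python
import random

SIZE = 16   # cells per layout (genome length)
POP = 20    # population size

def fitness(genome):
    # crude proxy for circulation: reward adjacent pairs of open cells
    return sum(1 for a, b in zip(genome, genome[1:]) if a == b == 1)

def tournament(pop, k=3):
    # pick the fittest of k randomly sampled individuals
    return max(random.sample(pop, k), key=fitness)

def crossover(a, b):
    cut = random.randrange(1, SIZE)
    return a[:cut] + b[cut:]

def mutate(genome, rate=0.05):
    return [g ^ 1 if random.random() < rate else g for g in genome]

def evolve(steps=500):
    pop = [[random.randint(0, 1) for _ in range(SIZE)] for _ in range(POP)]
    for _ in range(steps):
        child = mutate(crossover(tournament(pop), tournament(pop)))
        # steady state: the child replaces the current worst individual,
        # rather than building a whole new generation at once
        worst = min(range(POP), key=lambda i: fitness(pop[i]))
        pop[worst] = child
    return max(pop, key=fitness)

best = evolve()
```

The "mindless but relentless pounding" quality Pablo mentions is visible here: nothing in the loop knows what a good layout is, yet the population drifts toward high-fitness genomes anyway.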

Why do I blog this? I like this idea of evolving architecture and would be happy to see it as tangible as this project of self-replicating robots! It also reminds me of this dog that reconfigures itself into a couch (or check this).

Additionally, related to the blogject or fabject concepts, it would be curious to integrate genetic algorithms in spimes and let them evolve on their own... would a spimey couch work?

Collaborative WiFi-drinking interface

Lover's Cups, a MIT Medialab project by Jackie Lee and Hyemin Chung:

Lover's Cups explore the idea of sharing feelings of drinking between two people in different places by using cups as communication interfaces of drinking. Two cups are wireless connected to each other with sip sensors and LED illumination. The Lover's cups will glow when your lover is drinking. When both of you are drinking at the same time, both of the Lover's Cups glow and celebrate this virtual kiss.

The idea is to show how computer interfaces can enhance common activities and use them as a communication method between people: the act of drinking is used as an input for remote communication with the support of computer interfaces.
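The cup-to-cup logic can be captured in a few lines; this is my guess at the interaction rules from the description, not the authors' actual code:

```python
# Each cup reports whether its sip sensor is active; the network link
# shares that reading with the other cup, and each cup's LED behaviour
# depends on both readings. State names are invented for illustration.

def led_state(local_sipping, remote_sipping):
    """Return the LED behaviour for one cup."""
    if local_sipping and remote_sipping:
        return "celebrate"   # both drinking at once: the "virtual kiss"
    if remote_sipping:
        return "glow"        # your lover is drinking
    return "off"             # nothing to signal on this cup

state = led_state(local_sipping=False, remote_sipping=True)
```

Note the asymmetry: your own cup stays dark when only you drink — the glow is always information about the other person.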

Why do I blog this? well sometimes awareness tools are utterly crazy!

More about it: the authors wrote a paper for CHI, check the pdf.

Shoes interface which enables users to interact with real world objects

Tap World: Shoes interface for real world interaction (developed here):

"Tap World" is a pair of shoes interface which enables users to interact with real world objects. (...) "Smart Tap Shoes", which enables us to manipulate various real world apparatuses by using shoes when user's hands are occupied with another task. "Smart Tap Shoes" is composed of several sensors, Laptop PC, and infrared ray transmitters. (...) Smart Tap Shoes has tap switches behind the shoes. So user can control the objects only tap a floor with Smart Tap Shoes. User also can operate by turning shoe around the heel by rotate sensors in the heel. If he wants to turn up/down TV's volume, it would be the way of easy to use. When user did the action, Smart Tap Shoes transmitted infrared ray to control the objects. In this picture, he switched on the light by tap a floor.
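The mapping described is simple: a floor tap toggles an appliance, heel rotation adjusts the TV volume, and each gesture is translated into an infrared code to transmit. A hypothetical sketch (the codes and names are invented, not from the project):

```python
# Invented IR command codes for illustration
IR_CODES = {
    "light_toggle": 0x10,
    "tv_volume_up": 0x21,
    "tv_volume_down": 0x22,
}

def handle_event(event, angle=0):
    """Translate a sensed shoe event into an IR code to transmit."""
    if event == "tap":
        return IR_CODES["light_toggle"]
    if event == "rotate":
        # clockwise heel rotation raises the volume, counter-clockwise lowers it
        return IR_CODES["tv_volume_up"] if angle > 0 else IR_CODES["tv_volume_down"]
    return None   # unrecognized gesture: transmit nothing

# e.g. a tap on the floor switches the light
code = handle_event("tap")
```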

Why do I blog this? this project is quite old but curious enough to land here; we'll have to pay attention to where our shoes are (by googling them) before tuning them to control our set-top boxes!

The "breath mouse": a breath-controlled device

Sometimes when I'm looking at weird game/computer controllers, I run across good things. Tonight I found this breath-based controller by David MacKay; it's still a bit rough but it exemplifies the idea. It actually connects lung volume to the mouse y-coordinate. More about it in the following paper: Efficient communication by breathing by Tom H. Shorrock, David J.C. MacKay, and Chris J. Ball.

The arithmetic-coding-based communication system, Dasher, can be driven by a one-dimensional continuous signal. A belt-mounted breath-mouse, delivering a signal related to lung volume, enables a user to communicate by breath alone. With practice, an expert user can write English at 15 words per minute. (...) first breath mouse, made from an optical mouse, a belt, and a piece of elastic. The mouse is fixed to a piece of wood, to which a belt is also attached. Two inches of the belt are replaced by elastic, so that changes in the waist circumference produce motion of the belt underneath the eye of the mouse. This sensor measures breathing if the user breathes using their diaphragm (rather than their rib cage). We oriented the mouse so that breathing in moves the on-screen mouse up and rotates the pointer anti-clockwise along the curve; and breathing out moves the on-screen mouse down and rotates the pointer clockwise. The sensor also responds to clenching of the stomach muscles, but we encourage the user to navigate by breathing normally.
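My reading of that signal chain, as a sketch: the optical mouse under the elastic belt reports vertical motion as the waist expands, and the accumulated displacement (a proxy for lung volume) becomes the one-dimensional signal that steers the pointer. The sensor feed, calibration range and screen mapping below are invented; the real system drives Dasher's arithmetic-coding interface rather than a plain cursor.

```python
SCREEN_H = 600   # assumed screen height in pixels

class BreathPointer:
    def __init__(self):
        self.volume = 0.0   # accumulated belt displacement, 0..100

    def feed(self, mouse_dy):
        # breathing in moves the belt (and the mouse) upward; most mouse
        # APIs report upward motion as dy < 0, so invert the sign so that
        # inhaling increases the volume estimate
        self.volume += -mouse_dy
        # clamp to the calibrated breathing range
        self.volume = max(0.0, min(100.0, self.volume))

    def pointer_y(self):
        # full lungs -> top of screen, empty lungs -> bottom,
        # matching the paper's "breathing in moves the pointer up"
        return int(SCREEN_H * (1.0 - self.volume / 100.0))

p = BreathPointer()
for dy in [-5, -5, -10]:   # a slow inhale as reported by the mouse
    p.feed(dy)
```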

And yes, this is a breath mouse:

Why do I blog this? even though it seems funny, sometimes weird controllers (in this context, the point was engineering-based rather than creating a new product) end up as nothing but give some ideas about the future of interactions.

Ambient Information Visualization thesis

If you're into information visualization, the Licentiate thesis of Tobias Skog (Future Applications Lab, Göteborg) is very appealing. It's called "Ambient Information Visualization" (1.7Mb pdf here) and it deals with various issues regarding informative art, everyday displays as well as their utility and evaluations.

This thesis investigates the concept of ambient information visualization. It has its background in the research fields of ubiquitous computing and information visualization (...) The term ambient information visualization distinguishes an area where these two research fields merge, and can be defined as the use of visual representations of digital data to enhance a physical location. These visualizations are typically displayed using flat-panel displays or projectors and ideally act both as information displays and decorative elements in the interiors where they are placed.

The thesis describes a suite of design examples, where the first ones explicitly address the issue of creating a decorative surface by using the styles of famous artists as inspiration for the appearance of the visualizations, creating so-called informative art. Subsequent designs are developed under the superordinate term ambient information visualization and strive to find generic, inherent properties of peripheral information displays and how these properties come to affect design requirements. As a way of informing the design process, visualizations have continually been tested with users in different environments, including exhibition settings with large amounts of visitors as well as long-term studies of use in office settings with smaller user groups. The knowledge gained from the design and study of these examples is analyzed and the results highlight issues that are of central importance when designing a visualization. These issues are divided into three categories that concern the information source, the mapping from data to visual structures and the use of the visualization.

Among the examples, my favorite is certainly the one using Mondrian compositions as inspiration to show information about e-mail traffic:
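As a toy illustration of that kind of data-to-composition mapping (my own simplification, not Skog's implementation): each sender becomes a primary-colored rectangle in a Mondrian-like row, with its width growing with that person's e-mail traffic.

```python
PALETTE = ["red", "blue", "yellow"]  # the classic Mondrian primaries

def layout(email_counts, row_width=300):
    """Map per-sender e-mail counts to rectangle widths on one row."""
    total = sum(email_counts.values()) or 1   # avoid division by zero
    rects = []
    for i, (sender, count) in enumerate(sorted(email_counts.items())):
        rects.append({
            "sender": sender,
            "color": PALETTE[i % len(PALETTE)],
            # width proportional to this sender's share of the traffic
            "width": round(row_width * count / total),
        })
    return rects

rects = layout({"ana": 6, "bob": 3, "eve": 1})
```

The point of informative art, as the thesis argues, is that such a display reads as decoration to a visitor but as a glanceable data display to its owner.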

Self-Replication of a LEGO station by a robot

Self-replication robotics is a curious domain. Unlike self-reconfigurable robotics, the idea is to utilize an original unit to actively assemble an exact copy of itself from passive components. Greg Chirikjian of Johns Hopkins University created a self-replicating robot capable of driving around a track and assembling four modules into a robot identical to the original.

Prototype 1 is a remote-controlled robot, consisting of seven subsystems: the left motor, right motor, left wheel, right wheel, micro-controller receiver, manipulator wrist, and passive gripper. This particular implementation is not autonomous. We built it to demonstrate that it is mechanically feasible for one robot to produce a copy of itself. The prototype was made of LEGO parts from LEGO Mindstorm kits.

Why do I blog this? well, would the interactive toys of the future be like that?

Some Xslab projects

At XSlabs they seem to do interesting things: soundSleeves is a project by Vincent Leclerc & Joey Berzowska:

These sleeves are sensitive to physical contacts. When users flex or cross their arms, a sound is synthesized within the sleeves and output through miniature flat speakers. The idea is pretty straightforward: using very simple elements (metallic organza and conductive yarns) we created a flex and touch sensor made of hundreds of switches.

And of course, related to the blogject idea (and one of the project we discussed during the workshop) there is this Memory Rich Clothing: Garments that Display their History of Use.

Physical objects become worn over time. A worn object carries the evidence of our identity and our history. Digital technologies allow us to shape and edit that evidence to reflect more subtle, or more poetic, aspects of our identity and our history. This project focuses on the research and development of reactive garments that will display their history of use. We will employ a variety of input and output methodologies to sense and display traces of physical memory on clothing. (...) [Example: ]An Intimate Memory shirt with a very sensitive microphone in the collar and a series of light points in a flower pattern incorporated into the front of the shirt. When a friend or partner whispers something into your ear, the microphone will record this event and the lights will light up, showing that an intimate event has occurred. The number of lights indicates the intensity of the intimacy event. Over time, the lights turn off, one by one, to show how long it has been since the intimate event took place.

Why do I blog this? these tangible interfaces are curious and interesting; I like the way technology is embedded into these common objects.

HP and its "misto" interactive table

CNET reports that HP Labs celebrated its 40th anniversary this week with an open house in Palo Alto, Calif., in which several of its consumer-oriented projects were on display, including a coffee table featuring a touch-screen display that could be used for sharing pictures, playing board games, or looking at a map. It's called Misto, a hybrid of coffee table and tablet PC.

Why do I blog this? yet another interactive table that goes straight into my list