Tangible/Intangible

Future/past of entertainment?

Force feedback device #4

Folks from our lab visited colleagues next door today to see the current projects they're working on: some sort of low-cost CAVE and haptic interfaces. Very instructive to try them out live. Quite big pieces of machinery anyway.

Lab material

In the past twenty years, what were the improvements in haptics? What were the main lessons? What were the successes and failures? Does the Wii count here?

Is that the future or the past of entertainment? Why?

Tangible interfaces: Collecting gestural and touch patterns

This transcript of an interview with Dan Saffer about his manifesto for gestural patterns for touch interaction is very pertinent. It's mostly about this wiki resource, which aims at collecting and disseminating gestural-interface information and patterns, such as those found on devices like the iPhone and Wii (following a discussion on Adaptive Path's blog). Some excerpts from this interview:

"How do you document this gesture where I’m sweeping my hand across the screen?” (...) This is our generation’s drag and drop.” (...) I felt it was a really important thing for interaction designers to be doing because, otherwise, we’re going to start to end up with a thousand different ways of turning on my TV where it’s like, “Is this the Microsoft TV where I have to snap my fingers three times or is it the Apple one where I twirl around in a circle?” (...) one of the nice things about having it be in a completely digital medium is that one of the problems with gestures is certainly documenting them. How do you describe something that’s not very ambiguous? It’s awfully difficult with words to describe gestures or even in diagrams to describe gestures. So having the ability to eventually put up movie clips showing this as a pattern with people moving their forefinger and thumb apart, for instance, having that kind of rich experience would be really nice on the website."

Wii usability testing (picture taken from a Wii game usability test I ran a few months ago)

The examples he gives revolve around the Wii or the iPhone:

"The Wii certainly is very much about sort of movement in space. You’re not really touching anything except the controller. You’re kind of indirectly using a gesture. With the touch screen on the iPhone and other things, your fingertip is actually touching the device that you’re manipulating. So there is this gradation there."

Why do I blog this? This is indeed an interesting issue: how do you describe these movements? Can we have a grammar (i.e. a set of patterns)? This has some tight connections with a project I am involved in that tries to map the Wiimote and Nunchuk movements of existing games into a database, which will then allow us to analyze them and document their relative importance.
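As a rough illustration of what such a gesture database could look like (the table layout, gesture labels and game names below are invented for the sketch, not the actual project's schema):

```python
# Hypothetical sketch: logging controller gestures per game into SQLite so
# their relative importance can be computed afterwards. All names invented.
import sqlite3

def make_db(path=":memory:"):
    conn = sqlite3.connect(path)
    conn.execute("""CREATE TABLE moves (
        game TEXT, controller TEXT, gesture TEXT, duration_ms INTEGER)""")
    return conn

def log_move(conn, game, controller, gesture, duration_ms):
    conn.execute("INSERT INTO moves VALUES (?, ?, ?, ?)",
                 (game, controller, gesture, duration_ms))

def gesture_frequencies(conn, game):
    # "Relative importance" here = each gesture's share of all logged moves.
    rows = conn.execute("""SELECT gesture, COUNT(*) FROM moves
                           WHERE game = ? GROUP BY gesture""",
                        (game,)).fetchall()
    total = sum(n for _, n in rows)
    return {gesture: n / total for gesture, n in rows}

conn = make_db()
log_move(conn, "Wii Sports Bowling", "wiimote", "swing-release", 900)
log_move(conn, "Wii Sports Bowling", "wiimote", "swing-release", 850)
log_move(conn, "Wii Sports Bowling", "nunchuk", "tilt", 400)
print(gesture_frequencies(conn, "Wii Sports Bowling"))
```

Once such a table is populated, comparing gesture distributions across games is a one-query operation, which is the whole point of putting the movements in a database rather than in video annotations.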

Game vest to simulate impacts on torso

There is an article in Technology Review by Erica Naone about tangible interfaces for video games. It's basically about a vest (called 3rd Space) that aims at bringing more realism to the game experience by simulating impacts. It's based on pneumatic cells which produce impacts of varying strength at different locations on the player's torso.

The article gives a brief overview of user experience issues:

"Force feedback devices are already popular among gamers, and Ombrellaro says that his vest promises an even more realistic experience than today's vibrating controllers. "The drama moment with this is getting shot in the back in a first-person game," he says. In market tests for the vest, he says, people would turn around in surprise when they felt the impact in the back, even though they knew intellectually to expect it. Based on feedback from its tests, the company chose a standard strength of impact, which is palpable but not bruising. "We're pushing the edge," he says. "We're still keeping it very fun but, at the same time, giving you tactile cues that are important. There's even subtly a message--that there are consequences to shooting people." Ombrellaro says that he also plans to ship vests with a more powerful compressor for a subset of gamers who want to feel stronger impacts and for use in military and police training."

Why do I blog this? Video games (as well as lots of digital environments) engage people in immersive experiences, but the body is often less involved (although the Wii suffers less from that issue...). In this case, even though the player cannot be hurt, the proprioceptive sense is mobilized in an interesting way.

Experiencing NFC in mobile gaming

"Experiencing ‘Touch’ in Mobile Mixed Reality Games" by Paul Coulton, Omer Rashid and Will Bamford is one of those papers that has been sitting on my desktop for ages, waiting to be parsed and analyzed (among lots of others). Found time to read it today in the train (heading back to Geneva from a three-day meeting series in Paris), possibly prompted by a meeting with Rafi Haladjian at Violet yesterday. The paper describes the user experience of mobile phones equipped with RFID/NFC used to play different games that involve RFID-tagged objects. NFC stands for "Near Field Communication" and is an interface and protocol built on top of RFID. The games described in this paper are PAC-LAN (a Pacman-like game in physical space), Mobspray (a virtual graffiti system) and MobHunt (a treasure hunt game).

The most interesting part of the paper (wrt my research) concerns the results from the user trials. They found that the system's usability (touching tags) was efficient and not prone to social acceptability issues. Excerpts from the results:

The users found the objects very useful compared with just placing an RFID tag at a location as they found it much easier to see and felt it added to the immersion within the game play. (...) Another aspect of the objects was that for PAC-LAN, which was played at a much faster pace than the other two games, the players felt that the game disks were an important element of the game experience and minimized the time they had to spend checking their position on the mobile phone screen. Having played many location based games that rely on purely virtual objects we observed that players often become completely focused on the screen to guide them and often become oblivious to their environment which both defeats the premise of mixed reality gaming and can also be very dangerous. (...) One of the other aspects we experimented with was related to giving the user feedback after they have successfully read or written from or to a tag. For PAC-LAN we initially created a version that had either visual feedback, through a pop-up note, or audio feedback, by playing a short tune. The audio feedback was unanimously preferred as players were often running at speed and the audio feedback was perceived as much less intrusive on the game and harder to miss.
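The feedback choice described in that last excerpt can be sketched roughly as follows; the handler and the message strings are my own invented illustration, not code from PAC-LAN:

```python
# Hypothetical sketch of the tag-read feedback the authors compared: visual
# pop-ups were easy to miss while running, so audio was unanimously preferred.

def handle_tag_read(tag_id, feedback="audio", notify=print):
    """React to a successful NFC/RFID tag read with the chosen feedback mode."""
    if feedback == "audio":
        notify(f"*short tune* tag {tag_id} captured")  # hard to miss at speed
    elif feedback == "visual":
        notify(f"[pop-up] tag {tag_id} captured")      # intrusive, easy to miss
    else:
        raise ValueError("feedback must be 'audio' or 'visual'")
    return tag_id

handle_tag_read("game-disk-07")
```

The point is less the code than the design lesson: the feedback channel has to match the player's attentional situation (running at speed, eyes off the screen), which is why audio wins here.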

Why do I blog this? After a discussion yesterday about gaming, RFID and social computing, it was funny to get back to this paper. Some curious things to draw from here about feedback and immersion, quite important factors when designing gaming systems.

Coulton, P., Rashid, O., and Bamford, W., “Experiencing ‘Touch’ in Mobile Mixed Reality Games”, Proceedings of The Fourth Annual International Conference in Computer Game Design and Technology, Liverpool, 15th – 16th November 2006, pp 68-7

Questioning the unfolding of technology in Ubicomp

Read "Questioning Ubiquitous Computing" by Araya this morning on the train. Although the paper dates from 1995, it's still highly relevant considering how it gives a critical analysis of the technological proposals of ubicomp. The author aimed at criticizing the "technical thinking", i.e. the kind of assumptions, justifications and modes of reasoning that underlie Ubiquitous Computing. It's important to keep in mind though that what the author judges here is rather the description of ubicomp based on Mark Weiser's papers and less the concrete instantiations that have been designed afterwards. Araya's claim is that ubicomp leads to "displacement, transformation, substitution, or loss of fundamental properties of aspects of the “world” in such a way that its otherness is increasingly eliminated". The world then becomes "a subservient artifact". Some excerpts that I found interesting:

"What is striking about most of these scenarios is the marginal and irrelevant character of the needs referred to in them and of the envisioned enhancements of the activities (e.g., elevators stop at the right floor, rooms greet people by name, secretaries instantly know the location of employees). Although it is tempting to discard this marginality as if it were only an impression produced by the chosen scenarios, we believe that it has a more fundamental character. (...) Even more striking is the stark contrast between the marginality of the enhancements and the complexity of the computing infrastructure required to achieve them. (...) The question then becomes, if not driven by the purpose of satisfying significant human needs how does Ubiquitous Computing justify itself?"

His answer to this question is: technology. Although he acknowledges that human needs may sometimes have been historically generated by technologies, Araya points to the fact that the scale and scope of the new needs to be satisfied by ubicomp are unprecedented. He then worries about this technological absolutism in which technological thinking is never called into question ("the primacy of the unfolding of technology over the satisfaction of human needs, and the self-sufficiency of this unfolding are taken as absolute givens."). Down the road, this leads to a situation in which technology does not require any justification outside of itself.

Why do I blog this? Although a bit overlooked (I haven't seen lots of citations), it's refreshing to run across these critiques. Especially the discussion about the gap between so-called needs and the infrastructures to be put into place to meet them.

One of the examples he takes is very close to things I'm interested in, namely the representation of physical space through digital means:

"By disseminating digital surrogates of the world, that is, digital representations of partial aspects of the world which have been subject to more or less intense pre-processing. As the following scenario illustrates, the utility of these surrogates is not confined to office or working situations, but could also have certain uses at home: “Sal looks...at her neighborhood...through...electronic trails that have been kept for her of neighbors coming and going through the early morning... Time markers and electronic tracks on the neighborhood map let Sal feel cozy in her street (...) What does the scenario in which “Sal looks at electronic tracks of neighbors coming and going in the morning” tell us? The “need for social interaction” has been anticipated in the responsive environment and elaborate surrogates of relevant aspects of the world have been prepared. The street, the morning, the neighbors and their encounters have been displaced in time and space and replaced by surrogates, suffering a deep transformation in the process. Entire aspects of the situation have been filtered away and they can no longer surprise us. The electronic surrogates of the street situation live now in a different world, a world in which surrogates of the past can be replayed at any time, replicated, and distributed at will."

Araya, A. A. (1995). Questioning Ubiquitous Computing. In ACM Conference on Computer Science, pages 230–237.

Physical instantiations of "Processing"

Concrete is a quite trendy store in Amsterdam, NL that sells clothes, toyz as well as designers' accessories. Yesterday it was showing work by Casey Reas (viz/image design) and Cait Reas (dresses). Processing shop

Why do I blog this? What was intriguing there was the presence of the book "Processing: A Programming Handbook for Visual Designers and Artists" (Casey Reas, Ben Fry), which is essentially the bible for this programming language, the one used to design the viz on the wall and dresses such as the one represented in the picture above. Physical-to-digital-to-physical translation.

Audio interactions in Nintendo DS games

Beyond blowing at your DS to inflate bubbles in Nintendogs, other games make interesting use of the microphone. Spectrobes:

"dark energy creatures called the Krawl, and they're now invading your system. The only way to defeat them is to excavate and reawaken ancient creatures that are buried deep underground, called Spectrobes. (...) minigame and involves making a certain level of noise, with the tone and pitch of that noise playing a part in deciding what kind of Spectrobe you will get once the process is complete."

Dragon Tamer Sound Spirit:

"Dragon Tamer: Sound Spirit is basically your standard Pokemon monster battling game, but in order to get new dragons, you record sounds from different instruments and sources with the DS mic.

This is kind of like Monster Rancher for the Playstation, where different random CDs would generate monsters with different statistics and abilities"

Why do I blog this? Interestingly enough, the mobile game industry, which has the perfect affordances and usage habits for controlling things with the voice (i.e. a cell phone...), has never released something similar on the mass market (although I've seen some prototypes). Interesting HCI anyway... and on the NDS, as usual.

Ball-Shaped camera and tangible interactions

Fabien, who is at Ubicomp 2007, just sent me this crazy project: TosPom, a ball-shaped camera that takes pictures while playing catch, by Izumi Yagi, Mitsuyoshi Kimura, Makiko Nagao and Naohito Okude:

"TosPom is a ball-shaped camera that takes pictures while playing catch. When the photographer throws TosPom to the object, the object’s face will be taken automatically as the object catches it, and the picture will be shown on the display. With TosPom, the act of taking pictures becomes a mutual, interactive activity that involves both the photographer and the object while both parties engage in a fun activity of playing catch. Moreover, the photographer can draw out a more natural and relaxed expression from the object."

Why do I blog this? Catch, camera, ball: a nice combination of keywords. Beyond that, the sort of data they can get out of this might be useful for designing playful activities. The device is playful but at the same time it records interactions with itself and its surroundings. It's also interesting to think about other affordances of the ball... using the rolling capacity of the device to take pictures of certain places, etc.

Ubiquitous computing normative future and sci-fi

Stone, A.R. (1991). Will the Real Body Please Stand Up? In Cyberspace: First Steps, ed. Michael Benedikt (Cambridge: MIT Press, 1991): 81-118. An excerpt I like from this paper:

"Neuromancer reached the hackers who had been radicalized by George Lucas's powerful cinematic evocation of humanity and technology infinitely extended, and it reached the technologically literate and socially disaffected who were searching for social forms that could transform the fragmented anomie that characterized life in Silicon Valley and all electronic industrial ghettos. In a single stroke, Gibson's powerful vision provided for them the imaginal public sphere and refigured discursive community that established the grounding for the possibility of a new kind of social interaction. As with Paul and Virginia in the time of Napoleon and Dupont de Nemours, Neuromancer in the time of Reagan and DARPA is a massive intertextual presence not only in other literary productions of the 1980s, but in technical publications, conference topics, hardware design, and scientific and technological discourses in the large."

Why do I blog this? Avidly reading some material about the relationship between media/culture and their possible influence on technological development. In my talk about the user experience of ubiquitous computing (and how it fails most of the time), I often mention the problem of how sci-fi has created a normative future of what the tech future should be. This quote nicely exemplifies this issue by describing how a novel such as Neuromancer can be thought of as a common ground for engineers and designers. One can see these novels as a sort of anchor pointing to what the future will be.

Reactrix's game

Visiting the COEX mall in Seoul yesterday, I ran across several interactive media displays designed by Reactrix. Although the device is oriented towards promotion and branding, I was more curious about people's reactions. Stood there for a while with Laurent to see what happens around these floor displays. It's basically a beamer which projects interactive scenes on the floor; walking across or gesturing triggers reactions. There are different minigames like two-player soccer games, whack-a-mole bits and other instantiations such as the one below: Reactrix Tangible Game in COEX center

People's reactions range from zero attention (those people never look at their feet or simply do not care) to short play and long play. The only thing is that the minigames are so short that people seem to be fed up waiting for the bloody soccer game to come back. Also of interest is the fact that a minority of users try to understand the infrastructure, looking up at the beamer or opening an umbrella above the floor.

Anyhow, the system is not really about gaming but rather about enabling brands to be recognized, which obviously failed with me because I am incapable of remembering what ads I surely saw while standing around these displays.

Good reads on Ubiquitous Computing

A reader of this blog recently asked me if I had tips about relevant papers concerning Ubiquitous Computing released in the last two years... I made a quick list of the ones I found really interesting lately and that I rely on when doing presentations giving critical overviews of that topic. One might wonder why they all have similar authors... it's definitely that there are some coherent thoughts in Paul Dourish's writings that echo my feelings. And of course, it's only 4-5 papers among an ocean of thoughts concerning ubicomp, but those are the ones that I liked lately. No exhaustivity here.

Greenfield, A. (2006). Everyware: The dawning age of ubiquitous computing. Adam's book is a good overview of issues regarding the user experience of ubicomp, plus it gives a good primer that can lead to lots of papers on the topic. Have a look at the bibliography.

Bell, G. & Dourish, P. (2007). Yesterday’s tomorrows: notes on ubiquitous computing’s dominant vision, Personal and Ubiquitous Computing 11:133-143. This one gives a good critical vision of how the "ubicomp of the future" is yet to be seen (because of issues such as the difficulty of building seamless infrastructures) and why a "ubicomp of the present" vision should be promoted (for example by looking at Korean broadband infrastructures/practices or the highway system in Singapore).

Dourish, P. & Bell, G. (2007). The Infrastructure of Experience and the Experience of Infrastructure: Meaning and Structure in Everyday Encounters with Space. Environment and Planning B. Great food for thought about how infrastructures matter in ubicomp and how things are not simple when we think about space and ubicomp.

Williams, A., Kabisch, E., and Dourish, P. (2005). From Interaction to Participation: Configuring Space through Embodied Interaction. In Proceedings of the International Conference on Ubiquitous Computing (Ubicomp 2005) (Tokyo, Japan, September 11-14), 287-304. What I liked in this paper is that the authors showed how space is not as smooth as engineers and designers expect ("space is not a container"), demonstrating how history and culture can shape our environment. Projects and applications indeed rely on a narrow vision of the city, mobility or spatial issues that takes space as a generic concept.

Dourish, P., Anderson, K., & Nafus, D. (2007). Cultural Mobilities: Diversity and Agency in Urban Computing, Proc. IFIP Conf. Human-Computer Interaction INTERACT 2007 (Rio de Janeiro, Brazil). Here the authors argue for investigating people’s practices, which can help in understanding the complexity of how space is experienced, how mobility takes many forms, and how movement in space is not only going from A to B.

Chalmers, M. and Galani, A. (2004). Seamful interweaving: heterogeneity in the theory and design of interactive systems, Proceedings of the 2004 Conference on Designing Interactive Systems (DIS 2004), Cambridge, USA, pp. 243-252. A paper about seamful design, i.e. how environmental and technical seams can be used as design opportunities and reflected back to the users.

The evolution of objects through ubicomp

Bits from "Appliances Evolve" by Mike Kuniavsky (Receiver), which describes the advent of ubiquitous computing applications:

"We are on the cusp of another profound change akin to that seen by the Baby Boomers. Ubiquitous computing appliances will change the fundamental nature of the home and our experience of it. The house of 2047 will likely not be filled with robotic humanoid servants, be an automated factory of leisure or resemble any of the other images that current domestic technology programs envision. It will be something different, and it will change imperceptibly, appliance-by-appliance, upgrade-by-upgrade, shift-by-shift, year-by-year. Our understanding of what constitutes an object will change: is an ATM a single device, an outpost of a system, or the physical manifestation of a service? Is a phone? Is your bed? And as we use and change these appliances, they will change us, too, as every great shift in the capabilities of our tools has in the past."

Why do I blog this? Some good starting thoughts here about upcoming things: will objects still be objects? What will count, the physical object or the ecosystem of services around it?

Gestural interface for TV

The rush towards gesture-based interfaces seems to be a new trend, as shown by this gesture control for a regular TV designed by Australian engineers Dr Prashan Premaratne and Quang Nguye.

What seems to change here is the fact that the concurrency problem is taken into account: "Crucially for anyone with small children, pets or gesticulating family members, the software can distinguish between real commands and unintentional gestures". The good integration with a wide range of devices is also new ("television, video recorder, DVD player, hi-fi and digital set-top box"), acting as a universal remote control. In addition, the very basic gestural grammar designed here seems simple enough.
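The article doesn't detail how accidental gestures are filtered out. One common, simple approach, offered here purely as an assumption-laden illustration (not the published algorithm), is to require a recognized pose to be held for a dwell time before it counts as a command:

```python
# A minimal sketch of intentionality filtering via dwell time: a pose must be
# held for enough consecutive frames before it fires as a command, so a wave
# in passing is ignored. Pose labels and the threshold are invented.

DWELL_FRAMES = 10  # ~0.3 s at 30 fps; an accidental wave rarely lasts this long

def filter_commands(frames, dwell=DWELL_FRAMES):
    """frames: sequence of recognized poses per video frame (None = no pose).
    Returns the commands held long enough to count as intentional."""
    commands, current, run = [], None, 0
    for pose in list(frames) + [None]:      # sentinel flushes the final run
        if pose == current and pose is not None:
            run += 1
        else:
            if current is not None and run >= dwell:
                commands.append(current)
            current, run = pose, (1 if pose is not None else 0)
    return commands

# A brief accidental wave is dropped; a held "volume_up" pose gets through.
frames = ["wave"] * 3 + [None] * 5 + ["volume_up"] * 12
print(filter_commands(frames))  # ['volume_up']
```

Dwell time is only one of several tricks (confirmation gestures and activation zones are others), but it shows why "gesticulating family members" need not trigger the TV.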

Why do I blog this? What is intriguing is the way it is referred to as "Wii-style". This type of system has received a lot of interest in the last 20 years, and lots of patents have been filed in the area. Maybe the Minority-Report-like UI as well as the frenzy around multi-touch interfaces has led to a situation where people are expecting this sort of thing to happen soon (a normative future shaped by cultural artifacts). The arrival of the Wii, which can be seen as the "Steve Jobs of gestural interfaces", is also an important milestone. Will this pervade multimedia system controllers? Time will tell, and it would be good to understand what works and what does not with the Wii in terms of user acceptance.

Augmented tabletop with RFID

Browsing some PDFs I have left on my desktop, I ran across this paper by Steve Hinske and Marc Langheinrich entitled An RFID-based Infrastructure for Automatically Determining the Position and Orientation of Game Objects in Tabletop Games (presented at Pergames 2007). It interestingly describes how RFID technology could be used in a tabletop gaming context, making it possible to identify objects in an "augmented miniature wargame" (a la Warhammer 40K). The section about why augment such games has some good points:

"Popular miniature war games like “Warhammer”, “Warhammer 40k”, and “The Lord of the Rings”2 are excellent examples of games that continuously require precise information about the location and orientation of all game objects. (...) Besides measuring distances and angles, the players must consider the individual features and weapons of each game object. (...) such games can quickly become incredibly complex: Tens or hundreds of different game objects with distinct characteristics and equipment turn the game into an intricate and laborious episode of managing charts, sheets of paper, and measuring equipment. Therefore, the goal is to take the burden off the player by generally displaying static, but essential information about individual game objects (e.g., individual firepower, life points, etc.) on the one hand, and, depending on the current context, by providing them with dynamic real-time information regarding the location and orientation (e.g., unit A is 12 centimetres away from unit B), on the other hand. "
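The dynamic real-time information the authors mention ("unit A is 12 centimetres away from unit B") boils down to simple geometry once the RFID infrastructure yields each miniature's tabletop position. A hedged sketch, with invented unit positions and weapon ranges:

```python
# Hypothetical sketch of range queries over RFID-derived miniature positions.
# Positions are (x, y) in centimetres; units and ranges are invented examples.
import math

def distance_cm(a, b):
    """Euclidean distance between two (x, y) tabletop positions, in cm."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def in_range(attacker_pos, target_pos, weapon_range_cm):
    """Is the target within the attacker's weapon range?"""
    return distance_cm(attacker_pos, target_pos) <= weapon_range_cm

unit_a = (0.0, 0.0)
unit_b = (12.0, 0.0)   # "unit A is 12 centimetres away from unit B"
print(distance_cm(unit_a, unit_b))     # 12.0
print(in_range(unit_a, unit_b, 10.0))  # False: out of weapon range
```

Trivial as computation, but it is exactly the "managing charts, sheets of paper, and measuring equipment" burden the system takes off the players.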

The test environment is very intriguing (using a LEGO Mindstorms robot) and provided a relevant platform to examine whether taking some of the information-management burden off players could change the gaming experience. I'd be curious to see how this is apprehended by players. Why do I blog this? Having played such games a few years ago, I find it interesting to analyze the player's experience when supported by technologies such as RFID: how does that change the way people negotiate rules and situations (something that always happens in this sort of context)? Are there other values (or downsides) brought by the inclusion of technology? How does it change the confrontation?

And in the end, what does that tell us about interactive furniture and the activities it could be used for? (Surely relevant for things such as Philips' Entertaible.)

Digital/Physical fusing

Mark Baard in Boston.com wrote a short piece about how certain artifacts aim at fusing digital environments and physical activities. It basically gives an account of the "Virtual Worlds: Where Business, Society, Technology & Policy Converge" conference.

"Second Lifers wearing the gadgets will be able to attend "in-world" parties and gallery openings, whether they are sucking down beers at Cornwall's or stuck in Fenway traffic. Motion detectors and other sensors in the devices will also show your virtual mates what you are up to in the real world. (...) Linden Lab vice president Joe Miller described one of the early products that will bridge the two worlds as a wearable box that creates a "3D sound field" that allows the wearer to hear voices from his virtual world without completely shutting out the real people around him. (...) It will take some retooling before virtual worlds can accommodate all of the data streaming from ubiquitous sensors."

Also curious is the fact that companies such as Linden Lab or Blizzard Entertainment are hiring developers with experience in mobile systems (Symbian and Adobe Flash Lite). Why do I blog this? The examples are quite classic but it seems that the meme has started to spread.

A return to the earlier mechanical era, with improvements

Following on from his earlier column about the command line as the future of user interfaces, Donald Norman now describes physicality in the latest issue of ACM interactions as another important direction ("the return to physical controls and devices"). As he says, "Physical devices, what a breakthrough! But wait a minute, isn't this where the machine age started, with mechanical devices and controls?" This is some sort of throwback to earlier times, "with improvements" though.

"Physical devices have immediate design virtues, but they require new rules of engagement (...) Designers have to learn how to translate the mechanical actions and directness into control of the task. (...) As we switch to tangible objects and physical controls, new principles of interaction have to be learned, old ones discarded. With the Wii, developers discovered that former methods didn't always apply. Thus, in traditional game hardware, when one wants an action to take place, the player pushes a button. With the Wii, the action depends upon the situation. To release a bowling ball, for example, one releases the button push. It makes sense when I write it, but I suspect the bowling-game designers discovered this through trial and error, plus a flash of insight. Not all of the games for Wii have yet incorporated the new principles. This will provide fertile ground for researchers in HCI."
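Norman's bowling example (the action fires on button release, not on press) can be written as a tiny event-handling rule; the event representation below is my own invention for illustration, not game code:

```python
# Sketch of the Wii bowling rule Norman describes: the ball is gripped while
# the button is held and thrown at the moment of *release*, mirroring how a
# real bowler lets go of the ball. Event tuples are an invented convention.

def bowling_controller(events):
    """events: sequence of ('press',) or ('release', swing_speed) tuples.
    Returns the speeds at which balls were actually thrown."""
    thrown = []
    holding = False
    for event in events:
        if event[0] == "press":
            holding = True                # grip the ball
        elif event[0] == "release" and holding:
            thrown.append(event[1])       # ball leaves the hand at this speed
            holding = False
    return thrown

# Press-then-release is one throw; a stray release with no press does nothing.
print(bowling_controller([("press",), ("release", 4.2), ("release", 9.9)]))
```

In traditional pad-based games the action would fire on the press event; flipping it to the release event is exactly the kind of "new principle of interaction" Norman says developers had to discover.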

He also points out intriguing issues, such as how the movement towards physical interfaces would lead HCI to "move from computer science back to mechanical engineering (which is really where it started many years ago)". So he advocates an HCI that would take advantage of both mechatronics and UX: "If the future is a return to mechanical systems, mechatronics is one of the key technological underpinnings of their operation. Mechatronics taught with an understanding of how people will interact with the resulting devices"... wondering where this would happen.

Why do I blog this? It's now well established that "new rules" should be written. New games are being designed, new guidelines described, new approaches required (like gestural-language annotation, for example), but the final part (about the need for more mechatronics plus a user-centered approach) is less common in papers about tangible interfaces. It's curious to see how things will unfold in that direction (yes, I assume that it's a correct direction).

The tent as HCI

Camping in the digital wilderness: tents and flashlights as interfaces to virtual worlds by Jonathan Green, Holger Schnädelbach, Boriana Koleva, Steve Benford, Tony Pridmore and Karen Medina (CHI 2002). The paper describes a very curious project that proposes the use of a projection screen in the shape of a tent in order to immerse users in a virtual world (based, of course, on the metaphor of camping):

"RFID aerials at its entrances sense tagged children and objects as they enter and leave. Video tracking allows multiple flashlights to be used as pointing devices. The tent is an example of a traversable interface, designed for deployment in public spaces such as museums, galleries and classrooms. (...) on interactions that fit naturally with the tent metaphor."

Why do I blog this? What I find intriguing is the discussion about why a tent is an interesting interface:

"As an interface, the tent reflects several current concerns within HCI. First, it represents an example of a traversable interface that provides the illusion of crossing into and out of a virtual world (...) our design tries to meet some of the challenges of designing interfaces for public spaces. For example, studies of interactive exhibits in museums show how passers-by learn by watching others interact. The two-sided nature of the tent provides those outside with a public rendition of the activity that is happening inside, but at the same time maintains a relatively protected and isolated environment for those inside."

Mike Kuniavsky on ubicomp

Some snippets from an interview with Mike Kuniavsky (by Tamara Adlin) on UX Pioneers:

"TA: Were there products that came out during that time that you thought were especially cool or especially bad?

MK: There were a ton of bad products. There were refrigerators with built in tablet PCs, which are totally useless. At this point all of the internet appliances that had come out — which were essentially dedicated web browsers in a box — and the uselessness of all of those things — was an important lesson. There were all of these different things people were trying. Then there were things that were interesting. Ambient devices like the ambient orb came out around the time I started looking at all of this stuff. That was a very interesting device.

TA: I’ve taken a lot of your time, but I have one more question for you. What really fascinates you the most now? What do you think is going to drive your next five years?

MK: The fact that information processing is dying to be treated by product designers and industrial designers as a kind of "material," and that these people are including it into their devices as a kind of material. What used to be robotics is now showing up as a line item in a design object, like rubber. That is a profound shift in peoples’ relationship to what computers are and what they can do and where they can do it. "

Why do I blog this? Interesting content there about ideas that I share (...it's always refreshing to see some resonance elsewhere!).

Robot-Ubiquitous Computing convergence/boundary objects

Talking about the convergence between robots and ubiquitous computing artifacts, I started to list some of the projects that reflect this trend. I know certain aspects of some of these are neither robots nor ubicomp, but still. Maybe they are a sort of boundary object that we don't have a name for yet. Nabaztag by Violet, the famous Wi-Fi enabled rabbit that can "connect to the Internet, send and receive MP3s and messages that are read out loud as well as perform the following services (by either speaking the information out loud or using indicative lights): weather forecast, stock market report, news headlines, alarm clock, e-mail alerts, and others" (Source: Wikipedia).

Chapit by Raytron: "a small new robot named Chapit which is an "intelligent" companion helping you for some basic tasks like turning the light on or turning on electric or electronic devices (...) one of the biggest advantages of the Chapit is the capacity to recognize a man, woman or child without any programmation. The base model comes with a vocabulary of about 100 words only but it is possible to teach it up to 10.000. It also features an internet connection allowing distant control"

Netoy by izi robotics: "Netoy is an 802.11g-compatible device with a small 1.8-inch screen. He can display music track information, read news, weather and e-books, and be an overall general nuisance, all while flailing its arms"

Chumby by chumby industries: " a compact device that displays useful and entertaining information from the web: news, photos, music, celebrity gossip, weather, box scores, blogs — using your wireless internet connection".

Stock Puppets by Mike Kuniavsky: "The G-7 Stock Puppets are an Internet-driven kinetic installation that tracks the movements of global stock markets with seven larger-than-life marionette puppets".

(to be completed!)