Tangible/Intangible

Wearable gaze detector in the form of headphones

Via, this "Full-time wearable headphone gaze detector" by NTT DoCoMo seems curious (ACM subscription required). The paper, by Hiroyuki Manabe and Masaaki Fukumoto, was presented at CHI 2006. It describes a full-time wearable gaze detector, built into a pair of headphones, that does not obscure the user's view.

Full-time wearable devices are daily commodities, in which we wear wrist watches and bear audio players and cellular phones for example. The wearable interface suits these devices due to its features; the user can access the interface immediately, anywhere desired. For full-time wearable devices, the interface should be easy to wear, easy to use and not obstruct daily life. In this article, the “full-time wearable interface” is defined as an interface that the user can wear continuously without obstructing daily life and can use easily and immediately whenever desired.

What is interesting to me are the potential applications:

This system can be used as a simple controller for many daily use devices or applications, such as audio players. It can also be used as a selector that allows the user to choose surrounding objects. When the gaze detector is supplemented with a video camera and a wireless communication device and the surrounding objects have identifying tags like QR codes, the user can get information about the object of interest simply by gazing it.
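To make that last scenario concrete, here is a minimal sketch of the gaze-plus-tag idea; it is entirely mine, not from the paper. A frame from the supplementary camera is scanned for a QR code around the estimated gaze point, and the decoded payload is used to look something up (the lookup stub stands in for the wireless query the authors mention).

    # A hypothetical sketch of "get information by gazing at a tagged object";
    # none of this is from the actual DoCoMo prototype.
    import cv2  # assumes OpenCV 4+ and its built-in QR decoder

    def lookup(payload: str) -> str:
        # Stub: a real system would query a server over the wireless link.
        return "info about " + payload

    def on_gaze_fixation(frame, gaze_xy, radius=150):
        """When the gaze detector reports a fixation, look for a QR code
        near the gaze point and fetch information about the object."""
        x, y = gaze_xy
        roi = frame[max(0, y - radius):y + radius, max(0, x - radius):x + radius]
        payload, corners, _ = cv2.QRCodeDetector().detectAndDecode(roi)
        return lookup(payload) if payload else None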

Why do I blog this? I was just intrigued by this sort of interface, especially from the cognitive standpoint: how would this impact our practices, and how would people cope with the cognitive load it generates?

Codechecking products

Via [telecom-cities], codecheck.ch is described by Ars Electronica as:

The Codecheck project is an effort to create an informed “community” of consumers who are able to critically assess products prior to reaching their purchasing decisions. Whereas certain initiatives pursue this aim primarily by condemning retail offerings that are potential health hazards, Codecheck takes a different approach: it helps consumers decipher the product’s barcode. The way this works is as simple as can be. A potential buyer uses his/her PC to enter the product’s numerical code and sends it via Internet to codecheck.ch; what immediately comes back are comprehensive definitions and information from experts about ingredients like sodium lauryl sulfate and E250. The result is the creation of a reference work that is constantly being expanded and updated with contributions from manufacturers, wholesale distributors, specialized labs, consumer organizations and individual consumers. Potential purchasers thus have access to a wide variety of information, opinions and reports, a body of knowledge that constitutes a solid basis on which to form an opinion about a particular product.

Plans are currently in the works to enhance this system by building in mobility. For example, a shopper in a supermarket could use his/her cell phone’s camera to photograph a product’s barcode and then send this image as an MMS to codecheck.ch, and the relevant information would immediately be transmitted back. By linking up diverse technologies (photography, Internet, telecommunications) in this way, Codecheck represents a step in the direction of well-informed consumers.
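The round trip is simple enough to sketch. Purely for illustration (codecheck.ch's real interface is a web form; the endpoint and response format below are invented, and the MMS variant would add a barcode-decoding step on the server):

    # Illustrative only: the URL is a made-up placeholder, not codecheck.ch's
    # actual interface.
    import urllib.parse
    import urllib.request

    def codecheck(ean: str) -> str:
        """Send a product's EAN barcode number, get ingredient info back."""
        url = "https://codecheck.example/lookup?" + urllib.parse.urlencode({"ean": ean})
        with urllib.request.urlopen(url) as resp:
            return resp.read().decode("utf-8")

    # e.g. print(codecheck("7610000000000"))  # a made-up EAN-13 number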

Why do I blog this? I am less interested in this as a way to better inform consumers than in the usage it creates: "checking objects". It adds to a kind of interaction people perform more and more in public places: pointing a device at an object. First it was to take pictures (lots of pictures: moblogging, photos sent straight from the cell phone to Flickr); now it's codechecking (not really pointing, though...). What's next: touching an object to run the codecheck? The "wand" metaphor is more and more relevant.

Finding a location for a pervasive game

Kuan Huang sent me one of his pieces, which seems quite intriguing: a project entitled "Space Invaders 2006" (done between the Computer Science Department and the Interactive Telecommunications Program). The project page is informative and explains the whole process (I like it when people explain how they do what they are doing, e.g. "Since it's a thesis project, the most critical thing is that I need to have a working demo to present in the last week of school. So finding a location is the first step.")

In the past year, some tests and experiments were conducted on the NYU campus. For our thesis projects, we decided to put together all the experience and lessons that we learned from previous tests and make an outdoor playable video game in three months.

Space Invaders 2006 is an outdoor video game that takes advantage of real world architecture spaces and transforms them into a game playground. Basically, the video game is projected onto a building. The player has to move left or right to control the motion of the aircraft. Whenever the player jumps, the aircraft shoots out a bullet.
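The control mapping is simple enough to caricature in a few lines. Here is my guess at the logic; the actual project presumably tracked the player with a camera, which is stubbed out, and everything below is invented:

    # An invented caricature of the control scheme: the ship follows the
    # player's horizontal position, and a jump fires a bullet.
    class Ship:
        def __init__(self):
            self.x = 0
        def fire(self):
            print("bullet fired at x =", self.x)

    def update(ship, player_x, player_y, ground_y, jump_threshold=30):
        ship.x = player_x  # the player walks left/right along the building
        # In image coordinates y grows downward, so a jump shows up as the
        # tracked body position rising well above the ground line.
        if ground_y - player_y > jump_threshold:
            ship.fire()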

Why do I blog this? yet another example of using the real world as the interface. Of course, the analysis is a bit rough (testing... surveys...), but it's interesting to read how they thought about it. I am curious about this location question: what makes a good location for a pervasive game, and what constraints can designers think about? What about the spatial topology? Look at what Kuan highlighted as constraints:

here are some technical issues that I can't solve in a short time:

  • I am not allowed to climb high to mount a camera onto one of the light stands in the park.
  • I need an at least 30-meter-long power strip to get power from a building across the street.
  • There are some drug dealers hanging around the park after 9PM. It is kind of scary if I carry a laptop, a projector and a video camera at that time.
  • There is too much ambient light in that space, which is bad for large-scale projection.

Turning vacuuming robots into pets

Via THE PRESENCE-L LISTSERV, it seems that Roomba vacuum robots are getting more and more elaborate: myRoomBud lets owners personalize the iRobot Roomba vacuuming robot.

Since 2005, myRoomBud™ has been selling RoomBud™ costume covers to the owners of the 2 million Roomba robots and turning their vacuuming robots into pets. Now, the RoomBuds have been given (multiple) personalities. RoomBud Personalities enhance the Roomba pet experience by "teaching" your Roomba to act like the pet or character trapped deep inside it. Roobit the Frog hops around, Roor the Tiger growls then pounces, and RoomBette La French Maid wiggles its behind at you before vacuuming your room.

Why do I blog this? even though this is a simple step, it's interesting to see how small organisations participate in this exploration of the new affordances of things.

Cap Mounted Display

People into baseball caps, like me, could be interested in a cap-mounted display such as the one designed by Lars Johansson and Niklas Andersson. One of their MSc students (Fredrik Nilbrink) built a prototype:

This project’s purpose was to investigate the truck operators needs and to see how modern digital technology can help to reduce the paper work and increase the productivity and make the operator’s working situation better.

Concept I: Cap Mounted Display
A monocular display unit is mounted on an ordinary cap. The unit also contains microphone, earphones, camera and Bluetooth units. The device is voice activated. (...) He [Fredrik] took apart a pair of Sony Glasstron VGA-glasses to get the monocular Head Up Display we wanted for this project. On top of the cap a web camera was mounted.

Why do I blog this? a cool hack, but I am wondering about its usage in a real-world setting.

Phoxelspace: tangible exploration of voxel data

Phoxelspace is a project by Dr. Carlo Ratti, Ben Piper, Yao Wang, and Professor Hiroshi Ishii from the Tangible Media Group at MIT.

Phoxel-Space is an interface to enable the exploration of voxel data through the use of physical models and materials. Our goal is to improve the means to intuitively navigate and understand complex 3-dimensional datasets. The system works by allowing the user to define a free form geometry that can be utilized as a cutting surface with which to intersect a voxel dataset. The intersected voxel values are projected back onto the surface of the physical material. The paper describes how the interface approach builds on previous graphical, virtual and tangible interface approaches and how Phoxel-Space can be used as a representational aid in the example application domains of biomedicine, geophysics and fluid dynamics simulation.
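The core move, intersecting a voxel volume with a scanned free-form surface and projecting the values back, reduces to a few lines of array indexing. A toy sketch with NumPy (my own, not the authors' code), assuming the scanner yields a height field in voxel units:

    # Toy version of the Phoxel-Space idea, not the authors' code: sample a
    # voxel volume where a scanned free-form surface cuts through it, giving
    # an image to project back onto the physical material.
    import numpy as np

    def cut_surface_image(volume: np.ndarray, height_field: np.ndarray) -> np.ndarray:
        """volume: (Z, Y, X) voxel data, e.g. an MRI stack.
        height_field: (Y, X) scanned surface height, in voxel units."""
        ys, xs = np.indices(height_field.shape)
        zs = np.clip(height_field.astype(int), 0, volume.shape[0] - 1)
        return volume[zs, ys, xs]  # voxel values along the cutting surface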

Why do I blog this? one of the curious projects I ran across while scouting for projects about tangible interactions and information retrieval/data manipulation.

Ambient displays in the Googleplex

After a quick search on Flickr, I ran across some of the ambient displays used at the Google headquarters to show real-time queries (query content + geographical location):

Pictures courtesy of yoz. Why do I blog this? As I said in the previous post, we're interested in information retrieval/visualization and ambient displays, so I am just scouting. It's clear that they should have something more elaborate somewhere else; any thoughts about that?

Embedding information retrieval into tangible interactions

Tangible Interface for Collaborative Information Retrieval, by Alan F. Blackwell, Mark Stringer, Eleanor F. Toye and Jennifer A. Rode:

Most information retrieval (IR) interfaces are designed for a single user working with a dedicated interface. We present a system in which the IR interface has been fully integrated into a collaborative context of discussion or debate relating to the query topic. By using a tangible user interface, we support multiple users interacting simultaneously to refine the query. Integration with more powerful back-end query processing is still in progress, but we have already been able to evaluate the prototype interface in a real context of use, and confirmed that it can improve relevance rankings compared to single-user dedicated search engines such as Google.
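The abstract doesn't say how queries are physically composed, so here is only a guess at what the refinement step could look like: each physical token contributes a term, and piling on tokens boosts that term. The Lucene-style boost syntax is my choice for the sketch, not necessarily theirs.

    # A guess at collaborative tangible query refinement; the actual
    # mechanism of the Blackwell et al. system may be quite different.
    from collections import Counter

    def build_query(tokens_on_table):
        """tokens_on_table: list of terms, one entry per physical token;
        several users can add or remove tokens at the same time."""
        counts = Counter(tokens_on_table)
        # Lucene-style syntax: repeated tokens boost their term's weight.
        return " ".join(f"{term}^{n}" for term, n in counts.most_common())

    # e.g. build_query(["tangible", "tangible", "interface"]) -> "tangible^2 interface^1"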

Why do I blog this? because (at the lab) we're discussing projects about interactive tables that embed queries/information retrieval into tangible interactions. This project, however, is more about query construction in a collaborative setting.

Blogject Presentation at Reboot 8

At noon, Julian (aka "bleecks") and I gave our talk at Reboot 8. The title was "Networked objects and the new renaissance of things", in which we elaborated on the blogject concept (describing its main characteristics such as geospatial traces, history and agency) and of course highlighted what is at stake and why this would be important. Here is the teaser:

The Internet of Things is the underpinnings for a new kind of digital, networked ecology in which objects become collaborators in helping us shape our individual social practices towards the goal of creating a more livable, habitable and sustainable world. "Blogjects" — or objects that blog — captures the potential of networked Things to inform us, create visualizations, represent to us aspects of our world that were previously illegible or only accessible by specialists. In the era of Blogjects, knowing how even our routine social practices reflect upon our tenancy can have radical potential for impactful, worldly change. Nowadays, the duality between social beings and instrumental inert objects is suspicious. In this epoch, a renaissance in which imbroglios of networks, sensors and social beings are knit together, everyone and everything must cooperate to mitigate against world-wide catastrophic system failure.

Slides can be found here (pdf, 4.5Mb), but it's mostly pictures and no text.

So, maybe there needs to be more room to explain why this blogject concept is important (and why we're running this workshop series about it). Here are a few reasons we discussed (these are notes Julian and I discussed on the plane):

We're now moving from Web 2.0 to the so-called Internet of Things (some would talk about the "web of things"). And if Web 2.0 was a place where social beings can aspire to first-class citizenry, what happens in a digitally networked world in which objects can also participate in the creation of meaning? Should they be passive, pure instrumentalities, as objects have been since Descartes? Or should we consider ways to integrate them to help us make meaning, and meaning beyond just that dictated by conventional, rational business-efficiency practices? We should definitely care about networked objects because of the possibilities for a potentially richer mechanism for knitting together human and non-human social networks in impactful, world-changing ways.

In addition, this relates to a multidisciplinary trend: objects and context matter for human activities, in cognition (situated cognition, distributed cognition, Vygotsky), in sociology (Latour's actor-network theory: objects are actors) and in ubiquitous computing (from the desktop to "smart" objects). It's about human and social agency; computation also lies in artifacts.

Moreover, the information brought by blogjects can be meant to raise awareness about phenomena we should be concerned about: what happens when a society gets an accurate mirror of its own activities and production? (Anne would wonder why we always have to raise awareness about bad or missing phenomena.) It also brings more transparency to human practices, which may eventually lead to a "renaissance" of public concern about human activities.

This would then impact industrial design and marketing: production reshaped by a tremendous new amount of information about how the objects produced are actually used, fed back into marketing and production. There are going to be tough issues to think about (privacy, control over data). The question then is: how does an object that has the capacity to report on itself modify communication and relationships between companies and individuals? Blogjects could be seen as communication channels between customers and companies. How do we design to accommodate two oftentimes antagonistic practices? How would people design objects that customers can keep trusting? If something can blog about you, you are concerned by who is reading it: who has access to that RSS feed, and what goes into it? Therefore, ethical concerns are very important to take into account.

(more to come)

Why do I blog this? It was a very good exercise for us to do this, right after the second workshop, and lots of relevant people were there to comment on it. We tried to show that there's an increasing concern about Things and stuff, and possible connections, for instance, with Ulla-Maaria Mutanen's Thinglink or Bruce Sterling's spimes.

Carpet fighting

An interesting project from the //////////fur//// workshop @ ECAL, 2004:

CARPET FIGHTING by Patricia Armada / Pierre-Abraham Rochat / Gabriel Walt / Mathias Forbach
Compete on the keyboard against a player in the real space in this multiple-reality-crossing, tic-tac-toe-like game. (PC laptop, EZIO interface board, carpet, electronics)

Why do I blog this? I like the reality-crossing idea and the messiness of such technology, with its wires.

Pervasive gaming workshop papers

The papers from the pervasive gaming workshop held during the Pervasive 2006 conference have been released.

The PerGames series of international workshops addresses the design and technical issues of bringing computer entertainment back to the real world with pervasive games. The previous PerGames events were held in Vienna (2004) and Munich (2005) and attracted researchers and practitioners from all over the world.

So there are papers about AR gaming, smart RFID cards, the use of seams, cross-media gaming, the user experience of flow, the use of haptic feedback...

Why do I blog this? lots of stuff to parse about the future of gaming, using new paradigms such as tangible interactions, AR or haptic feedback (or new tech like RFID...).

Chocolate Experience for Cadbury

Chocolate Infinity is a project from the HMC MediaLab for Cadbury's chocolate factory, carried out by Adam Montandon and HMC members. It interestingly uses a shock-sensitive floor and a series of motion sensors to immerse people in an intriguing interaction (to improve the visitors' experience):

As you enter the infinity room a giant chocolate bar melts into gloopy puddles beneath you and, when you jump in them, chocolate splashes all over the floor. Then a sprinkling of individual Roses chocolates appear beneath your feet. You won't believe your eyes when they unwrap as you tread on them – but as you step off they wrap back up.

Chunks of chocolate then fill the floor and when you stamp on them they break open showing gooey caramel, squidgy turkish delight, chunks of mint, orange or Cadbury's Crunchie inside. Finally you get to chase three Creme Eggs across the floor but don't stand still because they'll pop their tops and taunt you until the game is on again.

Why do I blog this? this is one of the trends in roomware: using floor/sensor-based interactions to trigger specific behaviors. I see more and more projects about this, and I am wondering about potential new places that would let people play games in such settings (an arcade revival?).

Submerging technologies

Mitsubishi Electric Research Laboratories (MERL), and Paul Dietz in particular, seems to be working on something called Submerging Technologies (as attested by this SIGGRAPH presentation):

Goal: To show, somewhat whimsically, how emerging sensing technologies can be applied in unusual ways. Three interactive water displays: a tantalizing fountain that withdraws when a hand comes near, a musical harp with water "strings," and a liquid touchscreen.

The displays apply emerging sensing technologies to the medium of water. In each case, the electro-optic properties of the water itself are exploited to make the water a fundamental element of the sensing system.

While there are serious industrial applications in coating, painting, and soldering for these sensing technologies, this project focuses on human interaction. The larger point is that as new sensing technologies become available, they can and will be used in very surprising ways to change how we interact with our world.

Why do I blog this? that's a curious context with challenging issues in terms of user experience and human-artifact interactions.

No buttons to press, just gesture

Time has an article about Nintendo's strategy. There is a relevant point there:

Nintendo can reinvent gaming and in the process turn nongamers into gamers. (...) "Why do people who don't play video games not play them?" Iwata has been asking himself, and his employees, that question for the past five years. And what Iwata has noticed is something that most gamers have long ago forgotten: to nongamers, video games are really hard (...) The learning curve is steep.

That presents a problem of what engineers call interface design: How do you make it easier for players to tell the machine what they want it to do? "During the past five years, we were always telling them we have to do something new, something very different," Miyamoto says (like Iwata, he speaks through an interpreter). "And the game interface has to be the key. Without changing the interface we could not attract nongamers." So they changed it. (...) Of course, hardware is only half the picture. The other half is the games themselves. "We created a task force internally at Nintendo," Iwata says, "whose objective was to come up with games that would attract people who don't play games."

And this seems to attract game designers:

John Schappert, a senior vice president at Electronic Arts, is overseeing a version of the venerable Madden football series for Nintendo's new hardware. He sees the controller from the auteur's perspective, as an opportunity but also a huge challenge. "Our engineers now have to decipher what the user is doing," he says. "'Is that a throw gesture? Is it a juke? A stiff arm?' Everyone knows how to make a throwing motion, but we all have our own unique way of throwing." But consider the upside: you're basically playing football in your living room.

"No buttons to press, just gesture": the essence of tangible interactions!

In addition, in terms of innovation, the article highlights a few important concerns:

Nintendo has grasped two important notions that have eluded its competitors. The first is, Don't listen to your customers. The hard-core gaming community is extremely vocal--they blog a lot--but if Nintendo kept listening to them, hard-core gamers would be the only audience it ever had. (...) Cutting-edge design has become more important than cutting-edge technology. There is a persistent belief among engineers that consumers want more power and more features. That is incorrect. (...) Nintendo is the Apple of the gaming world, and it's betting its future on the same wisdom. The race is not to him who hulas fastest, it's to him who looks hottest doing it.

Why do I blog this? My interest in this console (and hence this article) is threefold: (1) I am curious to try it out; (2) it's a good step towards the use of tangible computing metaphors; (3) the innovation model of Nintendo is interesting.

Locomotion interface: Powered Shoes

I recently ran across this (I don't know where, maybe at WMMNA): Powered Shoes, a project carried out by Hiroo Iwata. It's basically a "wearable locomotion interface that enables omni-directional walking while maintaining the user's position".

A locomotion interface using roller skates actuated by two motors with flexible shafts. The device enables users to walk in arbitrary directions in virtual environments while maintaining their positions.

It has often been suggested that the best locomotion mechanism for virtual worlds would be walking, and it is well known that the sense of distance or orientation while walking is much better than while riding in a vehicle. However, the proprioceptive feedback of walking is not provided in most virtual environments. Powered Shoes is a revolutionary advance for entertainment and simulation applications, because it provides this proprioceptive feedback.

Why do I blog this? It reminds me of something discussed with Julian about a walking-based interface. Lots of interesting mixed-reality applications could be built with this sort of device: not the usual "mixed" systems that already exist (bringing virtual-world features into the real world through glasses), but rather allowing tangible interactions to control things that happen in the virtual world.

The importance of the "body" (the why of tangible computing?)

I am sure this paper is of interest for Adam Greenfield's next book ("The city is here for you to use"): How Bodies Matter: Five Themes for Interaction Design by Scott R. Klemmer, Bjoern Hartmann, and Leila Takayama, for DIS 2006:

It discusses how "our physical bodies play a central role in shaping human experience in the world, understanding of the world, and interactions in the world", drawing on various theories of embodiment in the fields of psychology, sociology and philosophy.

What is interesting is that the article presents some relevant arguments and examples that show the importance of the body. It puts the emphasis on embodiment; among the themes, I picked up those I was interested in:

  • Learning through doing: physical interaction in the world facilitates cognitive development (Piaget, Montessori)
  • Gesture is important in terms of cognition and fully linguistic communication for adults (to conceptually plan speech production and to communicate thoughts that are not easily verbalized)
  • Epistemic actions: manipulating artifacts to better understand the task’s context
  • Thinking through prototyping
  • Tangibility of representations: The representation of a task can radically affect our reasoning abilities and performance.
  • The tacit knowledge that many physical situations afford plays an important role in expert behavior.
  • Hands are simultaneously a means for complex expression and sensation: they allow for complicated movement
  • Kinesthetic memory is important for knowing how to interact with objects (riding a bicycle, swimming)
  • Reflective reasoning is too slow to stay in the loop
  • Learning is situated in space
  • Visibility Facilitates Coordination
  • Physical Action is characterized by Risk: bodies can suffer harm if one chooses the wrong course of action
  • Personal responsibility: Making the consequences of decisions more directly visible to people alters the outcome of the decision-making process.

Why do I blog this? This echoes the literature review I did about how space/place affords socio-cognitive interactions. Embodiment is certainly one of the most interesting components of this relationship.

I also think one of the most important dimensions is the inherent risk of physical actions: nobody gets physically hurt in virtual worlds, but what happens while playing augmented-reality Quake?

Of course, this is meant to support the "why" question of tangible computing.

Interactive tables studies at COOP2006

One of the papers that struck me as interesting (and related to our lab's research) today at COOP 2006 was "Evaluating Interactive Workspaces as CSCW" by Maria Croné (Stockholm University, KTH). It was basically about three user studies, involving small groups of students (3-6 persons, synchronous and co-located) who did their own tasks (collaborative course projects, design of a multimedia application, brainstorming sessions...). The needs for this kind of collaborative activity are simple: a shared surface (visible to all) plus private surfaces (paper or laptops). The problems: moving content from shared to private surfaces, and moving content between laptops.

An interactive workspace is defined as a combination of one or more large displays (shared surfaces), tools for moving data (dragging file icons onto a "teamspace" window, a list of people to send the document to) and tools for coordinating interactions between the different surfaces (moving the computer cursors across the different surfaces, not allowing simultaneous typing). They conducted three studies (iLounge study 1, iLounge study 2, Teamspace) that differ in the combination of large displays (screens) and smaller ones (laptops).

The research questions they addressed:

  • how and for what activities are the different work surfaces used?
  • how is the interaction with different work surfaces coordinated?
  • how is the information transferred between work surfaces?
  • what tools do groups use for their collaborative work? do they need a shared work surface, and how do they achieve that? how do they transfer information between laptops, and between laptops and other work surfaces?

Some results:

  • it is good to have shared work surfaces that all group members can interact with
  • there is a need for individual input devices (so that you don't have a situation in which one student does all the typing)
  • there is a need for private work surfaces
  • shifting between collaborative and individual work surfaces becomes more frequent when you provide more private surfaces

Plus I like this piece:

The collaborative work of the groups consisted of a more frequent shifting between the different displays, which lead to an increased need for sending data between the different displays. This is also in line with the thoughts of Fisher and Dourish, that most everyday work is carried out using single-user applications for collaborative work, and that the best support would be to offer coordination tools instead of providing CSCW applications.

The main conclusion here was that the most efficient design provides a good combination of laptop computers and large interactive shared displays, because of the flexibility it offers.

Why do I blog this? this connects to research conducted at the lab about interactive table usage, as well as the project we did as part of my teaching assistant duties. This study tends to go further than what we did on the user experience of augmented furniture.

Stuffed-doll that reads emails

Regine pointed me to Ubi.ach, by Min Lee, Gilad Lotan and Chunxi Jiang. Close to the Nabaztag, it's a "ubiquitous, personalizable stuffed-doll that is able to read out your emails wirelessly and transmit voice messages", as the designers put it.

In search of using calm technology in our project, we have come up with a friendly-looking stuffed-rabbit that speaks out your gmail, according to your preset preferences on the web. This way, you do not have to solely rely on your personal computer to retrieve your emails. The user has the freedom to preset the importance of his emails, and categorize them as well as be alerted when a new email is received. They can also have personal messages recorded, allowing for the voice to be transmitted. Essentially, we have chosen to use RF (Radio Frequency) as a method to transmit and receive data between the doll and the internet, and a set of walkie talkies to output the emails using Text-to-speech technology, while also allowing for the use of personal speech. Radio Frequency can travel up to 125ft and the walkie talkies transmit and receive up to a distance of 5 miles.

When an email is sent to ubiach@gmail.com with the word "alert" in the subject, the bunny will read out the subject of that email. A user can also record personal messages for the bunny to speak.
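The email-to-speech loop is easy to picture. A desktop-side sketch, purely my own reconstruction rather than the students' code, using IMAP polling and the espeak command-line synthesizer:

    # My own reconstruction of the desktop side, not the students' code:
    # poll the inbox over IMAP and hand "alert" subjects to text-to-speech.
    import email
    import imaplib
    import subprocess

    def poll_and_speak(user, password):
        box = imaplib.IMAP4_SSL("imap.gmail.com")
        box.login(user, password)
        box.select("INBOX")
        _, data = box.search(None, '(UNSEEN SUBJECT "alert")')
        for num in data[0].split():
            _, msg = box.fetch(num, "(RFC822)")
            subject = email.message_from_bytes(msg[0][1])["Subject"]
            subprocess.run(["espeak", subject])  # any TTS tool would do
        box.logout()

    # poll_and_speak("ubiach@gmail.com", "...")  # then sleep and repeat

In the actual project this output travels over the walkie-talkie link to the rabbit's speaker rather than a local sound card.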

The ubi.ach is a hacked mechanical rabbit that dances around. Inside, there is a board with a microcontroller, radio frequency, LEDs and switches. There is also a walkie-talkie that speaks out the emails. On the computer side is the receiver, a toy attached to the computer with a similar board inside.

The project is described in more detail here; the video is fun to watch.

Why do I blog this? a very simple object (one feature = reading email), but it's interesting to see that there is more and more design work around this issue of embedding interactions in a tangible device. The next step is to use this device as an input interaction device too, a dimension which is somewhat lacking even in the Nabaztag.

Tangible Flags: collaborative field trip for kids

A case study of Tangible Flags: A collaborative technology to enhance field trips, by Gene Chipman, Allison Druin, Dianne Beer, Jerry Alan Fails, Mona Leigh Guha and Sante Simms, is a paper that will be presented at IDC 2006. It describes the participatory design of "Tangible Flags" technology to support children (grades K-4) in collaborative artifact creation during field trips:

We worked with two teams of children in developing Tangible Flags; a group of 6 children, age 6-10, who joined us in our lab after school twice a week and a class of kindergarteners at the Center for Young Children, University of Maryland’s on campus research pre-school. We made observations of the kindergarten classroom’s actual field trips, and both teams participated in mock field trips. We experimented with marking the environment using flags consisting of a pipe cleaner attached to a popsicle stick. We named these Tangible Flags because the children planted them like flags and used them as a mock tangible interface for accessing digital artifacts. Our goal was to see the impact of the Tangible Flags concept on children’s collaborative effort and ability to re-locate or elaborate on their findings. These initial flags were not computationally enhanced, so adult researchers helped the children correlate Tangible Flags with various media, such as notes taken or pictures drawn by the children, or audio and video recordings created by the children.

Why do I blog this? this is a relevant example of physically connecting to digital information through tangible interactions. The activity study is very insightful with regard to children's appropriation of the technology.