Tangible/Intangible

Using tangible interactions to teach astronomical phenomena

One of the projects I bumped into this afternoon: Smart Planets aims at enhancing the teaching of astronomical phenomena like eclipses and planetary motions using tangible interactions. It's a project carried out by Bjorn Eisen, Marc Jansen and Ulrich Hoppe.

Usually, the teacher takes different objects and whirls around with these. We provide Smart Planets, which consists of a polyfoam ball and an RF-ID Tag, which looks like a planet, but they can be used as an entrance into the virtual world. In virtuality, the Smart Planets can be animated with two different animation models according to the helio- and geo-centric view of universe. The shadows are also shown to allow for the examination of eclipse phenomena. With the help of Smart Planets the students can design their universe and can have an enriched view of it.

Why do I blog this? I like the project (not because I am crazy about poly/styrofoam) but because I find it interesting to embed such interactions in physical objects.

Play Counter-Strike while running

GameRunner = playing a first-person shooter while running:

The GameRunner (patent pending) is an innovative new game controller that fuses the technologies of exercise and video gaming.

There has never been a better way to play First Person Shooters like there is with the GameRunner. The GameRunner brings players one step closer to actually being inside the game. We've worked hard to design a controller that is not only intuitive and easy to learn but is also capable of delivering a fresh breath of life into the overall immersiveness of first person shooters. Naturally this makes playing games on the GameRunner beneficial to your health by burning calories and keeping your heart rate up.

When you get on the GameRunner you won't want to get off (unless you need to catch your breath!). Playing on the GameRunner allows you to exercise while having all the fun of playing your favorite FPS. Exercising has never been more fun!

duuh?! what a weird reason to play... The interface is pretty intriguing:

The GameRunner plugs in to your USB port and functions as a keyboard and mouse. The entire unit draws all the power it needs through USB. Aiming with the handlebars controls the mouse and walking on the treadmill controls the keyboard. Right-hand controller buttons are designated as mouse buttons and left-hand controller buttons are designated as keyboard buttons. Treadmill motion is measured optically, and movement keys follow the internal 'steps' that the GameRunner detects in the treadmill motion, so that in-game movement speed corresponds with treadmill speed.
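To make that mapping concrete, here is a rough sketch of how optically-detected treadmill step pulses could be turned into movement-key states. The window size and rate thresholds are my own invented parameters, not GameRunner's actual firmware:

```python
# Sketch: convert treadmill "step" pulses into a movement-key state,
# so in-game speed tracks treadmill speed. All thresholds are made up.

def steps_to_key_state(step_timestamps, window=1.0, walk_rate=1.0, run_rate=3.0):
    """step_timestamps: seconds at which a step pulse was seen, ascending.
    Returns 'idle', 'walk' (hold the forward key) or 'run' (forward + sprint),
    based on the step rate over the trailing `window` seconds."""
    if not step_timestamps:
        return "idle"
    now = step_timestamps[-1]
    # count pulses inside the trailing window and convert to steps/second
    rate = sum(1 for t in step_timestamps if now - t <= window) / window
    if rate >= run_rate:
        return "run"
    if rate >= walk_rate:
        return "walk"
    return "idle"
```

A real controller would then hold or release the corresponding keyboard scancodes over USB HID.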

Shown at Next2005.

Othello played using biology signals

Keisuke Shima, Nan Bu, Masaru Okamoto, Toshio Tsuji: A Universal Interface for Video Game Machines Using Biological Signals. ICEC 2005: 88-98

Abstract: This paper proposes a universal entertainment interface for operation of amusement machines, such as video game machines and radio control toys. In the proposed interface system, biological signals are used as input, where users can choose some specific biological signal and configuration of signal measurement in accordance with their preference, physical condition (disabled or not), and degree of the disability. From the input signals, users’ intention of operation can be estimated with a probabilistic neural network (PNN), and then, control commands can be determined accordingly. With the proposed interface, people, even those with severe physical disabilities, are able to operate amusement machines. To verify validity of the proposed method, experiments were conducted with a video game machine.

It's actually an innovative entertainment interface using biological signals. With the prototype system they constructed, a variety of biological signals can be used as input, and users can choose input signals with respect to their conditions. They tried experiments with an Othello game. This is interesting:

"the users’ intention of operation is estimated from the input signals using a PNN. Due to the adaptive learning capability of neural networks, a high level of discrimination accuracy is achieved. Moreover, video game machines can be easily changed, so that various amusement machines can be incorporated into the proposed system."

Thinglink: connecting information and artifacts

An intriguing post by Ulla-Maaria Mutanen about thinglink, a concept I was not aware of:

A thinglink is a free unique identifier that anybody can use for making the finding and recommendation of particular things easier in the Internet.

A thinglink identifier is based on the idea that many of the things we use in our daily life are quite particular. Perhaps we know their origin (who has made them, when and how) and something about their history or previous use (like with furniture and cars). Some things have more meaning to us than others. (...) Thinglinks are unique, 8-digit identifiers that anybody can use for connecting physical or virtual objects to any online information about them. A thinglink on an object is an indication that there is some information about the object online—perhaps a blog post, some flickr photos, a manufacturer’s website, a wikipedia article, or just some quick comments on a discussion site.

The purpose of thinglink.org is to offer an easy way to learn about products and artifacts in their various contexts of production and use. Small-scale producers such as artists, designers, and crafters can use thinglinks to bring their products to the emerging recommendation-based market in the Internet.
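The core mechanism is simple enough to sketch: an identifier printed on an object resolves to whatever online information people have attached to it. A toy registry might look like this (the numeric-only format check is my assumption; the post only says identifiers are 8 digits):

```python
import re

# Hypothetical sketch of a thinglink-style registry: an 8-digit identifier
# on a physical object points at online information about it.

class ThingRegistry:
    ID_PATTERN = re.compile(r"^\d{8}$")  # assumption: purely numeric IDs

    def __init__(self):
        self._links = {}  # identifier -> list of URLs

    def register(self, identifier, url):
        """Attach a piece of online information (a URL) to an identifier."""
        if not self.ID_PATTERN.match(identifier):
            raise ValueError("thinglink identifiers are 8 digits")
        self._links.setdefault(identifier, []).append(url)

    def lookup(self, identifier):
        """Everything that has been linked to this object so far."""
        return self._links.get(identifier, [])
```

The interesting part is social, not technical: anybody can attach a blog post, flickr photos or a comment thread to the same identifier.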

Why do I blog this? Well, this is very close to the ID specifications of Bruce Sterling's concept of spimes! There is a website coming out about this: Thinglink. Besides, it's closely related to our discussion about blogjects with Julian. This 'thinglink' idea could be seen as a way of implementing the blogject concept since it's able to connect information (on the web) and artifacts. So Julian, what do you think? Let's all meet and talk about it. Isn't there a workshop scheduled about this at the 2006 conference? Well, let's have a workshop about blogjects at LIFT then (something like the day before)!

DIY Bluetooth Glove

Farting around the web, looking for Bluetooth clothes and gear, I ran across what this Jason Bradbury guy did. He seems to be one of these cool DIY hackers who design intriguing things. His Bluetooth glove is pretty neat:

Made from a reconstituted Bluetooth headset and a gentleman's driving glove (well it had to be didn't it?) my phone glove will connect to any mobile phone that has Bluetooth. No wires, no plugs. With your phone stashed out of sight you can make and receive calls with your thumb as the speaker and little finger as mic.

I sandwiched the switch that came with the original Bluetooth headset between two small pieces of transparent plastic and heat sealed them together. Placed inside the glove's open knuckle the switch is activated by me flexing my wrist. With a flick of the wrist I can answer an incoming call; with two flicks re-dial the last number. And by holding it open (a mere extended kink of the wrist) I can voice dial.

<img src="http://www.jasonbradbury.com/jason_bradbury/images/foryou.JPG" width="180" />

An esperanto for toasters to do what?

Speaking about the 'Internet of Things' the other day, this Technology Review article also makes sense to read. It's about an "esperanto for toasters" concept: ZigBee, namely a wireless standard that could be used as a "common language" for lights, appliances, doors or cell phones. Specifications can be downloaded here.

In the not-too-distant future, your cell phone might become the key to your home. By transmitting a signal to a sensor, your phone will announce your arrival and the front door will unlock. (...) But before a swarm of sensors can turn into an intelligent network, though, they need a way to communicate with each other. Enter ZigBee. Based on an IEEE radio standard called 802.15.4, it allows digital transmissions of up to 1Mbps in one of two frequency ranges, 2.4GHz or 915MHz (in the Americas).

That is surely another step towards integrated computation. Besides, some big players are behind this, so interoperability can work out:

More than 150 member companies already belong to the ZigBee Alliance, including such electronics heavyweights as Honeywell, Motorola, Philips, and Samsung. Alliance chairman Bob Heile claims that ZigBee will enable any compatible device -- regardless of the manufacturer -- to communicate with any other ZigBee device, right out of the box. What's more, the specification allows ZigBee devices to form mesh or cluster networks spontaneously, without any intervention from end users, installers, or (gulp) system administrators.

Also, after reading about ZigBee, I stumbled across this cool article about ambient findability by Peter Morville. The author addresses this notion of being able "to find anyone or anything from anywhere at any time". The article also tackles various location-based services which are directed towards this goal.

A clear sign of progress is the emergence of ubiquitous findable objects (UFOs). GPS, RFID, UWB, and cellular triangulation enable us, for the first time in history, to tag and track products, possessions, pets, and people as they wander through space and time

Today, a UFO is no longer a UFO. Of course, this definitely echoes Bruce Sterling's latest book and the very concept of the spime (one of the characteristics of a spime is to be a ubiquitous findable object). They are "searchable, like Google. You can think of Spimes as being auto-Googling objects" as he says (more about it here).

On a different note, I find it interesting what Peter Morville points out about the social control created by these location-based services:

On a voyeuristic note, we'll all be secretly interested in collision detection. Most likely, Google Alerts will notify us of brief or sustained meetups between two or more individuals from within our social networks. In fact, we'll all come to rely on anomaly detection to highlight meaningful deviations in individual habits or in the flocking behavior of crowds.

Proximity Lab: the implications of physical proximity in social communication

Via Timo's excellent del.icio.us, Proximity Lab is an interesting participatory installation and experimental interface platform. It's meant to visualize relationships between users and mediated spaces.

Built on the premise that physical proximity is a basic unit of social communication, this study seeks to examine the role of physical interaction in social communication. (...) The purpose of this study is to examine the implications of physical proximity in social communication. The study seeks to stimulate inquiry on this topic through facilitated experiences where algorithmic logic, system observation of user behavior, and dynamic role assignment are central elements made accessible to participants for contemplation and discussion.

User perception and response to the overlay of information drawn from the fusion of user actions and system rules is also central to this study. (...) The platform is an 8-foot by 16-foot walkable surface fitted with radio frequency ID (RFID) technology. Participants wear shoes fitted with RFID tags, enabling the system to track and record their positions in real-time. Images projected directly onto the floor are accompanied by stereo sound as a continuous response to the actions and interactions of participants.

The website describes the test platform and shows nice visualizations:

If the changing positions of participants are the primary input for the system, then the visual material projected onto the platform floor is the primary means of output utilized by the system.

Toy trends

The Christian Science Monitor has a good piece about adult technology mimicked by toy manufacturers. Some excerpts I found relevant:

This Christmas, tech-peddlers are turning their gaze toward kids, with new lines of grown-up gadgets built for tiny hands.(...) "There's a shift in need in terms of what a child finds fun and entertaining," says Jim Silver, editor of the toy trade publication Toy Wishes. "A lot of that has to do with the computer age. If a 3-year-old is entertained by software, the toys that might normally entertain him might not have the same value." (...) "Toys mimic what children see in real life. As we look around the house, everything is getting consistently tech-driven." (...) While traditional adult gadgets are fertile ground for "juvenilization," you're still unlikely to see "Baby's First Spreadsheet Application" on store shelves. (...) But despite the introduction of some interesting and inspiring new electronic playthings, some parents and child psychologists question the wisdom behind high-tech play.

"A growing concern of the preschool teachers that I'm talking to is that children are coming to preschool not even knowing how to play," says Susan Linn, a psychiatry instructor at Harvard Medical School (...) Another pitfall for toy companies that "juvenilize" products: Adult products will soon be cheap enough to give to a child instead of a toy. Hasbro's "ZoomBox" is essentially a cheap video projector that is made and marketed for children. This $299 "toy" lets kids project and play video games on blank walls. Another example is Hasbro's $99 imitation cellular phone called "Chat Now." Despite its flip-phone fanciness and built-in black-and-white camera, Chat Now cannot actually place a telephone call.

I like their list of hot toys and in particular:

iZ (Zizzle, 5 years & up, $39.99) By plugging a music player into this alien creature, children can manipulate the sound by turning its ears and flicking its antennae.

Shell Shocker (Tyco, 8 years & up, $79.99) This high-powered vehicle can morph into a "cyberball" or a "cyberbeast" on the fly.

Mascarillons: flying swarm intelligence for scientific, architectural and artistic research

Patrick pointed me to this cool project: Mascarillons, carried out - partly - at our school. Their tagline is very appealing: "Flying swarm intelligence for scientific, architectural and artistic research", and those flying cubes with an exo-skeleton all around them are terrific! They fly/float in the air and self-assemble.

The Mascarillons are the first rigid aerobots developed for the [ SAILS ] project. They are flying cubic automata able to develop collective behaviors and assemblages through swarm-intelligence protocols. Standing at a crossroad between Art and Science, the [ SAILS ] project aims to bring together researchers both in artistic and scientific domain to collaborate towards the production of a robotic environment dedicated to architectural, technological and scientific research, with a major potential for multi-media performance. The final result will see a flock of 12 to 20 Mascarillons evolving within a spherical inflatable dome equipped with panoscopic projectors.

More about it in this paper: Nembrini, J., Reeves, N., Poncet, E., Martinoli, A. and Winfield, A. "Flying Swarm Intelligence for Architectural Research" Communication at the 2005 IEEE Swarm Intelligence Symposium, Pasadena, June 2005.

Plus, there is a video about it on Discovery Channel.

ITU report about the Internet of Things

The ITU (International Telecommunication Union, that big building in my neighbourhood) is finally releasing a report about the so-called Internet of Things:

The report takes a look at the next step in "always on" communications, in which new technologies like RFID and smart computing promise a world of networked and interconnected devices that provide relevant content and information whatever the location of the user. Everything from tires to toothbrushes will be in communications range, heralding the dawn of a new era, one in which today’s Internet (of data and people) gives way to tomorrow’s Internet of Things. (...) With continuing developments in miniaturization and declining costs, it is becoming not only technologically possible but also economically feasible to make everyday objects smarter, and to connect the world of people with the world of things. Building this new environment however, will pose a number of challenges. Technological standardization in most areas is still in its infancy, or remains fragmented. Not surprisingly, managing and fostering rapid technological innovation will be a challenge for governments and industry alike. But perhaps one of the most important challenges is convincing users to adopt emerging technologies like RFID. Concerns over privacy and data protection are widespread, particularly as sensors and smart tags can track a user’s movements, habits and preferences on a perpetual basis. Fears related to nanotechnology range from bio-medical hazards to robotic control. But whatever the concern, one thing remains clear: scientific and technological advances in these fields continue to move ahead at breakneck speed. It is only through awareness of such advances, and the challenges they present, that we can reap the future benefits of a fair, user-centric and global Internet of Things.

Why do I blog this? The report would certainly be interesting to skim through since it can give insights about how an international organisation thinks about this topic. My biggest concern is that the summary acknowledges users' fears and privacy concerns but then just puts the emphasis on techno-push development in the field. Those guys might benefit from reading Bruce Sterling's latest book: "Shaping Things (Mediaworks Pamphlets)" (Bruce Sterling). However, the value of the ITU's report (as well as other reports coming from international organisations like this) is the presence of relevant statistics. Well... it's going to cost CHF 100.- (available November 17th).

Bob the blimp

No, it's not SpongeBob, nor CatchBob!, but Bob the blimp, a project by Karl Forsberg from the Umeå Institute of Design:

This product is a robot that can serve as a friend for children and that can help them learn new things by answering their questions. Furthermore, it can act as an assistant in public places, such as hospitals and airports. Thirdly, it can be used remotely, for example, giving museums the possibility to let audiences view their exhibitions via the internet. The robot possesses a superior functionality in terms of mobility and provides new possibilities for other useful fields of application.

A Tree Gaming Controller Unit

Via Gamasutra's Esoteric Beat column: this "Lumberjacked!" game controller is amazing!!!

Lumberjacked! is a computer game that is played with trees. Using the 'Tree Gaming Controller Unit' the human player takes the role of a lumberjack while your tree friend can play the part of the super tree.

You are armed with an axe but it is your choice if you use it. Your tree has control over the super tree, which can move around the plantation avoiding you and your axe and giving birth to new saplings.

Simply attach the T.G.C.U. to your tree of choice and start to chop, or if you are a tree lover just have a wander around your plantation, enjoy the weather and let your tree live and produce generations of trees.

The most interesting part of it is the tree controller unit:

This piece of equipment makes it possible to play Lumberjacked! against a tree. The ivy leaves, pictured below, are attached to the ends of branches. Each leaf hides a vibration sensor which picks up movement in the tree and translates that movement to the game through an adapted game controller concealed within the pot.
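As I understand the description, the controller boils down to reading vibration amplitudes from the leaf sensors and mapping them to game input. A naive sketch, where the sensor layout and noise threshold are guesses on my part:

```python
# Hypothetical mapping from ivy-leaf vibration sensors to a game input.
# Assumes one sensor per direction and an arbitrary noise floor.

def tree_input(readings, threshold=0.2):
    """readings: non-empty dict mapping a direction ('north', ...) to the
    vibration amplitude of the sensor on that side of the tree.
    Returns the direction whose branch vibrates most, or None if the
    tree is essentially still (all sensors below the noise floor)."""
    direction, amplitude = max(readings.items(), key=lambda kv: kv[1])
    return direction if amplitude >= threshold else None
```

The adapted game controller in the pot would then press the corresponding direction button for the super tree.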

Why do I blog this? I really like this sense of 'intangible interaction', vibration-controlled. Is this 'calm technology'? ;)

Objects that blog!

Tonight I had an interesting debate with Julian about the notion of 'objects that blog' (he calls them blogjects), that is to say artifacts that would upload their story to the web. They would report the history of interactions the object had with people. It's something very intriguing and close to Bruce Sterling's idea of the spime. Julian wrote an insightful post about it. This is part of a project he's carrying out for his seminar on Location-Based Mobile Media. The only example I found is not really a blog, but it's a lamp which can show a history of the persons who have entered a specific room; this history can be queried from the lamp's web page. It's called the Aula lamp and the description can be found in this document about the whole Aula Cooltown project.
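In code terms, the minimal blogject is just an artifact that timestamps its interactions and exposes them for querying, the way the Aula lamp exposes its room history on a web page. A toy sketch (all names invented):

```python
import time

# Toy sketch of a "blogject": an artifact that records its interactions
# so its history can be queried later (e.g. from a web page).

class Blogject:
    def __init__(self, name):
        self.name = name
        self.history = []  # list of (timestamp, event) entries

    def log(self, event, timestamp=None):
        """Record one interaction, e.g. 'Alice entered the room'."""
        self.history.append(
            (timestamp if timestamp is not None else time.time(), event))

    def recent(self, n=5):
        """Latest n events, newest first -- what a status page would render."""
        return [event for _, event in sorted(self.history, reverse=True)[:n]]
```

A real blogject would of course publish these entries to the web (RSS, a blog post, etc.) rather than keep them in memory.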

Talking about it with Alex Pang from the IFTF, he advised me to check the work of Phoebe Sengers from Cornell University. For instance, this project is somehow close to the idea of an 'object that thinks' in the sense that it's a ubiquitous computing system that monitors a home's emotional climate and provides open-ended feedback about it to users:

The Home Health system, a collaboration with Bill Gaver and Michael Golembewski at the Royal College of Art, London, will be a ubiquitous computing system that monitors a home's emotional climate and provides open-ended feedback about it to users. Everyday household objects are wired with sensors. The resulting sensor data is used to develop a model of the current emotional climate of the people living in the home. Once a day, the user receives a list of suggestions from the system of emotional issues that s/he might wish to consider. (...) being open to interpretation and also reflecting accurately the current emotional state of the home as represented by the sensors.

Why do I blog this? To keep track of these thoughts, since I find this blogject idea tremendously exciting!

The use of video to study children's interaction with tangible devices

Studying children’s interactions with tangible devices: How will the video help? by Emanuela Mazzone, Rebecca Kelly and Diana Xu, Paper for an Interact 2005 workshop "Child Computer Interaction: Methodological Research":

Abstract: In this paper we describe a pilot test on how to gather requirements for children’s technologies. An activity was planned to explore the potential of tangible devices in children’s learning. The main aim of the pilot test was to understand, by observing and analysing children’s interaction with the objects, if the activity planned was effective for requirements gathering. The activity was observed by the researchers and video recorded. The analysis of the video was conducted by looking at verbal records, gestures and body language and the general interaction of the children with the objects, the researchers and each other. Outputs of what can be elicited before and after the video analysis were compared in order to see what more could be drawn out from a video analysis. It was concluded that the use of multiple analytical methods was essential to provide useful output to inform the design process.

Conclusion: We conclude that observation in the field is necessary to have the overall perception of the activity in the context but needs other methods to support it, especially in a situated activity where lots of elements are involved at the same time.

The analysis of the video adds a lot of useful information that is not possible to get in other ways otherwise but it can also give an incomplete picture of the research. The physical environment and all its elements may not be captured effectively, the possibility of technical faults makes it risky to rely solely on video, and the use of video may affect the behaviour of the participants and therefore bias any results.

Why do I blog this? The conclusions are interesting but I don't understand this assertion:

"With no existing structured methodology for video analysis, the researchers agreed upon which aspects of the video to focus upon. This was based on the aim of the analysis: which was to examine what added value could be gained from video compared to what had already emerged from the observation and data analysis. "

Of course it's useful to examine what the added value of using video would be (we do that on some projects involving kids testing tangible devices), but it's wrong to claim that there is no structured methodology for video analysis (see what psychology or ethnography do with video; both fields offer plenty of methodologies to this end).

Expected generalist book about Ubicomp

I am looking forward to reading Everyware: The Dawning Age of Ubiquitous Computing by Adam Greenfield (released in February 2006). Judging from what I've read on the author's blog/website, there seem to be compelling concepts like ubicomp as "information processing dissolving in behavior":

Some of what you'll find inside is a discussion of what we mean when we say "ubiquitous computing," including my definition of the field, at its most robust, as "information processing dissolving in behavior"; whether it's truly an immediate concern or a "hundred-year problem"; what different sorts of everyware might emerge in differing cultures; and, of course, an extended exploration of the social and ethical implications of this most insinuative of technologies.

Everyware is pitched to the smart generalist

I think the ubicomp field is still emerging and this book might be one of the first generalist works to tackle this topic with a user focus. I hope it will address issues like people's mental models of things (not only computers but artifacts), the notion of affordance in context, the difference between mental models of things and of computers, technical uncertainties and how people cope with them (which is an incredible issue in our research about location-based services), emergent usage of ubicomp technologies and so forth.

Architecture, information visualization and Bloomberg's new building

Two good articles in Metropolis about architecture, information visualization and Bloomberg's new building: Brand Central Station (by Alexandra Lange) and By The Numbers (by Peter Hall). Bloomberg's new offices indeed weave information, technology, and space into a seamless display of interior urban planning, as the first article reports. The building architecture is meant to support the omnipresent flow of information (goods, people, and data).

I really like this phenomenon:

Employees rising on the escalator from the fifth floor even appear to move at exactly the same speed as the news and information graphics speeding about the screens. (...) "It wasn't just the video on the screen, it was the numbering system of the wayfinding--it's all tied in,"

The second article, By The Numbers, is a compelling account of how the designers worked on this information art project.

Finally, the conclusion is relevant:

beyond the well-seized opportunity to make large graphic and architectural gestures, the treatment of information in the Bloomberg headquarters signals a shift in the way we perceive information. The data on ceiling-mounted screens caters to each department (sales figures for sales staff, network operations for the research and development people), and even the big-bellied numbers that fly across the larger screens are not abstracted but graphically contextualized and explained with accompanying text. The design conceit is that the flying data is actually useful. If the dawning of the Internet and the network society were greeted by design fetishizing information and reveling in that very 1990s trope of information overload, the 2000s have been marked by a desire to filter, parse, and deliver data in accessible form.

Robotic jacket for rehabilitation

Via the supa-cool blog 3Yen, an amazing robotic jacket aimed at giving stroke victims a hand in rehab. A project carried out at Kobe Gakuin University in conjunction with Osaka University and Activelink Co., a subsidiary of Matsushita Electric Industrial Co. in Seika, Kyoto Prefecture.

More about this in the Asahi:

The device-essentially a mesh jacket in form-uses sensors to detect the muscle movements in the patient's healthy arm and wrist, then uses artificial muscles to stimulate that same movement on the damaged side of the body. Researchers hope repeated therapy will bring back the regular functioning of the damaged limb.

"If (the use of) this jacket spreads, it will be possible to provide long-term support for patients at home," said Akio Nakagawa, professor of occupational therapy at . He was part of a robotics research team at the university that worked (...) the team hopes to have the product, first developed in 2004, on the market in spring 2008, selling for about 250,000 yen each.

Tremendous switches, toys and tangible interactions

While working on a project about kids' joysticks/pads and remote controls a few months back, I would certainly have been happy to run across these switches made by an Australian company called Technical Solutions (found on this great blog about 'making toys'). They are amazing! I am a great fan of those, especially the floating pillow (on the left below) and the textured roller switch (on the right):

Why do I blog this? It seems that the toy industry already has good insights to offer for people interested in tangible interactions with crazy devices like those above.

This led me to an article written by Edith Ackermann for the IDC 2005 conference: Playthings That Do Things: A Young Kid's "Incredibles"!. The paper raises relevant issues about kids, toys, autonomy and materials. It's more focused on animated toys but it also notes that "many no-tech or low-tech toys exist, which afford the thrill of controlling things at a distance". More about this here when I get time to peruse the paper more properly. Just one quote:

In interacting with artificial cyber-creatures, the question of significance is not so much how does it work but rather what does it achieve (on its own), and how should it be treated (manipulated or controlled) so that it responds (to one’s solicitations) in interesting ways. In other words, taking a cyber-creature apart (for the sake of transparency) is an awkward thing to do (unless you are a programmer).

Yet another plush-based remote control

This is another plush-based remote control: Plush Toy Gestural Interaction: Using Everyday Objects for Remote Control (by Yoshinori Kawasaki):

We plan to develop a system that allows users to control information appliances remotely in living environments using everyday objects such as plush toys. Considering the increasing number of computers embedded in living space, interaction techniques with such devices have been becoming more and more important. What is missing currently is design principles to make such information appliances usable for ordinary, non-technology oriented people in easy, joyful and attractive way. In this project, we focus on interaction technique for controlling appliances remotely. We propose gestural interaction using everyday objects. These everyday objects are familiar and attractive for ordinary people compared to special purpose devices such as remote controllers.
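Gestural interaction with a plush toy usually comes down to classifying accelerometer traces. A naive shake detector, where the threshold and peak count are arbitrary assumptions of mine (not Kawasaki's method), might be:

```python
# Hypothetical shake detection for a plush-toy remote control: a gesture is
# recognized when acceleration magnitude crosses a threshold often enough
# within a window of samples. Threshold and peak count are made up.

def is_shake(samples, threshold=1.5, min_peaks=3):
    """samples: list of (ax, ay, az) accelerometer readings in g.
    Returns True if at least `min_peaks` samples exceed the magnitude
    threshold, i.e. the toy was shaken rather than merely nudged."""
    peaks = sum(1 for ax, ay, az in samples
                if (ax * ax + ay * ay + az * az) ** 0.5 > threshold)
    return peaks >= min_peaks
```

A recognized shake would then be translated into an appliance command (channel up, volume down, and so on).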

Why do I blog this? This seems to be a recurrent trend in HCI/remote control, as I mentioned here or here. Check their video there. I like the way he shakes the plush (anyway, in terms of HCI this is no big deal, this kind of controller has existed for a while, but the implementation is always cool to watch).

Noise sensitive project at the lab

At the lab, JB Haué and Guillaume Raymondon are working on a very interesting project: a noise-sensitive table. There is a blog about the project there. The table is meant to perceive nearby users' noise and display various things based on it (via LEDs or a beamer). Currently the prototypes are really ROUGH (but nice from my point of view, maybe it's because I like this sort of 'bricolage' picture): see the photos of prototypes 1 through 4.
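My guess at the basic pipeline (not the actual lab code): compute the RMS level of the microphone signal and scale it to an LED brightness:

```python
# Hypothetical noise-to-display mapping for a noise-sensitive table:
# RMS level of microphone samples scaled to an 8-bit LED brightness.

def noise_to_brightness(samples, full_scale=1.0):
    """samples: audio amplitudes in [-full_scale, full_scale].
    Returns an LED brightness in 0..255 proportional to the RMS level."""
    if not samples:
        return 0
    rms = (sum(s * s for s in samples) / len(samples)) ** 0.5
    return min(255, int(255 * rms / full_scale))
```

A beamer version would feed the same level into a visualization instead of a PWM duty cycle.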

Why do I blog this? We will ask students of our Computer Supported Cooperative Work course to test various configurations with it, that's why I follow the project closely. I am looking forward to seeing what will happen.