Tangible/Intangible

Topology of dining

Yet another tableware project today (it's funny that this week has been filled with discussion about tables here at the lab): Topoware by Alexandra Deschamps-Sonsino and Karola Torkos. The point of the project is to "question the landscape of dining": is territory an adequate notion during a meal? Does observing dining allow us to make assumptions about eating behaviors? What about the way people occupy space?

"By looking at places, maps and especially contour lines, which define a landscape two dimensionally we decided to in turn "outline" the dining experience. This can also be interpreted as "zooming in" from the whole to the single item, from the tablecloth to the placemat down to the utensils.

The lines decorating the tablecloth are mapping the table, defining the space where people sit and interact at the dinner table. The closer to the person's designated space and area of intense interaction, the darker the lines become. The placemat helps keep the experience of complex dining simple or makes the simple dining experience feel special, each layer defining what comes first and where cutlery and tableware should be placed.

In a playful way the lines reappear on the tableware itself, be it plates, bowls and cups to illustrate, label and determine your dining habits."

Why do I blog this? I quite like the "With the Topoware collection, you are how you eat" motto. To me it's a very pertinent way to make explicit invisible (or implicit) phenomena and behaviors, especially in an unexplored field such as dining.

Tangible table

The Tangible Table is a new table platform by Manuel Hollert and Daniel Guse:

"Our goal was to build a working prototype of a tangible table-based user interface. In contrast to a simulation, this environment facilitates the evaluation and testing of user interactions. That’s why the visual components on the table surface (such as scales) are quite basic and rough. The principles of interaction and graphical behavior had higher priority."

The technical implementation is described here, including how they used fiducial markers. Also, check the video.
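
Fiducial tracking typically yields, for each tagged object on the table, an id, a position and a rotation. A minimal sketch of how such readings could drive table controls (the mapping and all names below are my own assumptions, not taken from the Tangible Table project):

```python
import math

def marker_to_control(marker_id, x, y, angle, controls):
    """Map one tracked fiducial to a named control value.

    x, y are normalized table coordinates in [0, 1]; angle is radians.
    `controls` maps marker ids to control names.
    """
    name = controls.get(marker_id)
    if name is None:
        return None  # unknown marker: ignore it
    # Rotation sets the value; position selects a channel (left/right half).
    value = (angle % (2 * math.pi)) / (2 * math.pi)
    channel = "left" if x < 0.5 else "right"
    return {"control": name, "channel": channel, "value": round(value, 3)}
```

Turning a half-rotation of a puck into a 0.5 scale value is the sort of "basic and rough" visual/interaction mapping the authors mention prioritizing.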

VR2.0 through gesture recognition?

In the last issue of BW, there is an article about motion capture and gestural interaction. So, this seems to be the new revolution; the article traces the trend back to the VR attempts of the 90s, Nintendo Power Gloves and other stuff. Then an Intel Chief Technology Officer claims that within five years we "could use gesture recognition to get rid of the remote control" and that it will eventually "drive demand for its important new generation of semiconductors, the superprocessors known as teraflop chips, which Intel previewed in February" (I won't comment on this but... mmhmm... mentioning the superprocessor issue when it comes to human-computer interaction seems not very apropos here). But why would it work this time?

virtual reality 1.0 was a bust. The hype was too loud, computers were too slow, networking was too complicated, and because of motion-sickness issues that were never quite resolved, the whole VR experience was, frankly, somewhat nauseating. (...) VR 2.0, enhanced by motion capture, is different in many critical ways. Most important, the first batch of applications, such as the Wii, while still primitive, are easy to use, inexpensive, and hard to crash. You don't get anything close to a fully sense-surround experience, but neither do you feel sick after you put down the wand. The games are simple and intuitive (...) system enables a presenter to take audiences on a tour of a 3D architectural design or on a fly-through of a model city. And the presenter's measured theatrics make a big impression. "Everyone's looking for the new, sexy way to communicate with their employees and their clients. We're selling their ability to sell,"

Why do I blog this? Well, I am not sure VR failed for the reasons mentioned; they were surely part of the problem, but there is still a misunderstanding about interaction in VR and the notion of 3D. There is still this belief that replicating reality in a 3D digital space is the must, and that gestural interfaces are then the solution because they are more natural (given the direct mapping).

Back to gestures, some excerpts that I liked in the BW article though:

"Any company that creates a product used by people needs to understand how the human body moves," (...) Aeronautics veterans who hear about this program are sometimes skeptical. "When people cannot touch a prototype, it's always a hard sell "It's early, but such simulations could be one of the most profitable areas in the future," (...) "The Wii is helping debug this question about how you move in virtual ways," says Jaron Lanier. After a year with the Wii, society "will be better educated about the overlap of the virtual and the real world," he says.

Dog and augmented reality soccer

(via Fabien) For intrepid readers only: an intriguing video of a dog playing augmented reality football on a Reactrix setting.

Why do I blog this? Food for thought for a Near Future Laboratory project about "new interaction partners", or how pets can be partners in technologically mediated interactions. This seems to be a pertinent example, and the immersion appears to be working (note especially the dog owner encouraging the animal). I don't know about the animal's frustration, or whether this is good or bad.

Critical issues about EEG in gaming

An article in The Economist about brain-controlled devices and games. What is good here is that it takes a critical viewpoint on a topic that is not so easy. It's basically about Emotiv Systems and NeuroSky, two California-based companies which aim at measuring brain-wave activity and turning it into actions in a computer game (using a technique called electroencephalography: EEG). Both seem to have gotten rid of existing problems (a lower number of electrodes, no use of gel) and they claim that they can mimic facial expressions. For anyone who has ever put electrodes on someone's head, this seems to be an achievement; ten years ago it was really a pain to put that dirty gel in people's hair, and the possible actions were quite limited. NeuroSky even wants to use only one electrode!

So what's the connection with games? It might be close to the current market:

"According to Nam Do, Emotiv's boss, those applications are most likely to be single-player computer games running on machines such as Microsoft's Xbox 360 and Sony's PlayStation 3. In the longer term, though, he thinks the system will be ideal for controlling avatars (the visual representations of players) in multiplayer virtual worlds such as Second Life."

More interesting are the problems that prevent designers and developers from creating such systems:

"First, although human brains are similar to one another in general, they are different in detail, so a mass-produced headset with the electrodes in standard locations may not work for everyone. Second, about one-third of the population is considered “illiterate”, meaning in this context that not even a full-fledged medical EEG can convert their brain activities into actions. Third, electrical signals generated by muscular activity such as blinking are easily confused with actual brain-wave readings. Wink at a fellow player at the wrong moment, then, and you might end up dropping that sarsen you have lifted so triumphantly from the fields of Salisbury Plain on the toes of your avatar's foot."

Why do I blog this? Interesting material about progress in the use of EEG in HCI and gaming; there are lots of projects in the field (e.g. targeting "augmented cognition"), though things evolve slowly. In addition, this brings me back to my cognitive/neuroscience studies, when I played with this sort of material.

I/O Wall

The I/O Wall is a project carried out by David Gerber, Mark Meagher and Gerber's students from SCI-Arc.

" The goal of the project has been to design a new room-scale interface to computer functionality and data: the wall will keep track of the objects stored on its shelves using RFID readers, and will provide an interface for searching the stored objects. Proximity sensors will provide some additional data on patterns of use in relation to the presence or absence of specific objects on the shelves. (...) One of the research questions we’re addressing is how the digital affordances of the wall can be expressed through design (...) We’re finding that the design of the nodes containing the sensors is a critical to the success of the wall project: both because the node design has a direct impact on the functionality of the sensors, but also because the design of the nodes (form, materiality, tectonics) is the primary means we have for communicating the functionality of the wall, and the range of interaction that it affords."

(Image courtesy Jun Yu, David Gerber)

Why do I blog this? My interest in tangible interfaces explains why I am curious about this project; the dimension I find pertinent is the expression of certain technological aspects. How would this be reflected in the design per se? Maybe the answer lies in the project title.

The tongue becomes a surrogate eye

More about tongue-based interfaces. This is a bit old but I ran across it yesterday: using the tongue as a "surrogate eye" (News from 2001).

Researchers at the University of Wisconsin Madison are developing this tongue-stimulating system, which translates images detected by a camera into a pattern of electric pulses that trigger touch receptors. The scientists say that volunteers testing the prototype soon lose awareness of on-the-tongue sensations. They then perceive the stimulation as shapes and features in space. (...) The Wisconsin researchers say that the whole apparatus could shrink dramatically, becoming both hidden and easily portable. The camera would vanish into an eyeglass frame. From there, it would wirelessly transmit visual data to a dental retainer in the mouth that would house the signal-translating electronics. The retainer would also hold the electrode against the tongue.
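
The core translation step can be sketched roughly: downsample a grayscale camera frame to a small electrode grid, so each electrode's pulse intensity tracks the average brightness of its image region. The grid size here is illustrative; the actual device used a much denser electrode array:

```python
def frame_to_electrodes(frame, grid=4):
    """Average an NxN pixel frame (0-255) into a grid x grid intensity map."""
    n = len(frame)
    block = n // grid  # pixels per electrode, per axis
    out = []
    for gy in range(grid):
        row = []
        for gx in range(grid):
            pixels = [frame[y][x]
                      for y in range(gy * block, (gy + 1) * block)
                      for x in range(gx * block, (gx + 1) * block)]
            row.append(sum(pixels) // len(pixels))  # mean brightness -> pulse strength
        out.append(row)
    return out
```

The striking part is on the other side of this pipeline: users stop feeling the pulses as tongue sensations and start perceiving them as shapes in space.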

(Picture K. Kamm/U. Wis.-Milwaukee)

Why do I blog this? Though this is designed for blind people, there are some intriguing potentialities in terms of human-computer input!

How to write gestures and movements

The coming of gestural interactions on mass-market products such as the Wii brings lots of questions about how to design movements, how to express them and how to discuss their relevance. This question is of particular importance in the video game industry, and there is currently lots of discussion about how to create gestural grammars/vocabularies. I've attended seminars where people tried to describe movements (both the physical movements and their translation into the virtual counterpart), and no satisfactory solution has emerged. Reading a newspaper, I stumbled across an exhibit called "Les écritures du mouvements" (i.e. The writings of movements) in Paris that presents the different notation systems used in dance, and it seems strikingly pertinent for explaining movements. As described on this website about the show, each notation system attests to a particular way of perceiving movement, which also depends on the historical, scientific and cultural context of the society in which the system arose. These systems are used as mnemonic aids, but also as a way to train people or even to create. Historically, there have been lots of different systems, such as the ones represented below (left: by Bagouet, right: by Zorn):

The most common today are Laban's and Benesh's systems. Below is an example of Laban:

Of course, there are tools for working with these notations: see for example the Benesh Notation Editor or Credo.
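
To see why such notations appeal for gestural interaction design, here is a toy sketch of a machine-readable movement score, loosely inspired by Laban's beat/body-part/direction/level decomposition (the vocabulary below is my own simplification, not actual Labanotation):

```python
PHRASE = [
    # (beat, body part, direction, level)
    (1, "right_arm", "forward", "high"),
    (2, "right_arm", "side", "middle"),
    (3, "left_leg", "forward", "low"),
]

def describe(phrase):
    """Render a score as human-readable lines, one per beat."""
    return ["beat {}: {} moves {} ({})".format(b, part, direction, level)
            for b, part, direction, level in phrase]
```

A structure like this is exactly what a game design document could version, diff and discuss, which hand-waved gesture descriptions cannot.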

Why do I blog this? This sort of notation system seems interesting and pertinent for describing gestural interactions. I might have to dig into this more deeply. Will we see superb game design documents with pages showing this sort of depiction?

Ubiquitous computing and foresight

The Bell & Dourish paper I blogged about last week is still sparking some interesting discussions (interestingly, not only among ubicomp researchers but also architects). What is interesting to me is how this discussion about focusing on the ubicomp of today, and less on a proximal future, connects with the discussions I had with Bill after the LIFT07 foresight workshop. The "here today" versus "could be tomorrow" argument is indeed one of the underlying questions of foresight versus design practice. In the Bell and Dourish article, the authors critique these earlier visions of a proximal future not to complain about past visions, nor to understand why we haven't gotten there, but rather because it allows them to question an important assumption made by ubicomp researchers: the coming of a so-called seamless world with no bugs and a perfect cloud of connectivity (which does not hold true, as Fabien described at LIFT07).

So the point here is the importance of the "why question", the crux issue that the LIFT07 workshop addressed; critical foresight is about asking why something worked, why someone would want the future you propose, or why the path proposed is possible. In the context of this ubicomp paper, an additional question about the future of ubiquitous computing can be asked: what do we want, a short-term vision of the next incremental ubicomp 'project', or a new strong vision (as Weiser's calm computing was)? But what might be needed for this strong vision is a clear and lucid description of the why, one that eventually leads to a point people can aim at.

So there could be an interesting exercise here: thinking through why the intelligent fridge, CAVEs, intelligent assistants and other ubicomp dreams failed. That could be a good agenda for a possible workshop at some point.

Music production through haptic interface

Amebeats is a project by Melissa Quintanilha that allows "people to mix sounds by manipulating physical objects instead of twisting knobs or clicking on a music production software".

As Melissa states:

The amoeba shaped board has little boxes in its center that when moved to the arms, activate different sounds. My interest in music and design merged to create a haptic interface (based on touch) that allows people to use gesture to mix sounds with their hands. My inspiration for this robotic installation came from going to parties and seeing DJs create the music on their tables, but no one knowing what they do to make the sounds. Generating music using gesture allows for a much more expressive way of creation.
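
A guess at the underlying logic: each box's position is tested against the board's "arm" zones, and a box entering an arm activates that arm's sound. The zone shapes and sound names below are purely illustrative, not from Amebeats:

```python
def active_sounds(boxes, arms):
    """Return the set of sounds whose arm zone contains at least one box.

    boxes: {box_id: (x, y)}; arms: {sound_name: (x0, y0, x1, y1) bounding box}.
    """
    playing = set()
    for x, y in boxes.values():
        for sound, (x0, y0, x1, y1) in arms.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                playing.add(sound)
    return playing
```

What makes this interesting compared to a DJ's knobs is that the mapping is spatial and visible: the audience can see which box sits on which arm.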

Why do I blog this? Yet another interesting device to be added to the list of interactive tables.

Infrastructure for calm computing?

Source of power. Simply put, this is the sort of infrastructure that gives birth to ubiquitous computing; at some point people have to supply power to the devices that let them access the information superhighways or activate their second lives. And that power is brought to the networked cities of the globe through lines like these.

Maybe this is what calm computing really is. You hike in the mountains, sit under one of those big power lines, and listen to the vibes.

The ubiquitous computing of today

Finally, after LIFT I managed to find more time for reading good papers, such as Yesterday's tomorrows: notes on ubiquitous computing’s dominant vision by Genevieve Bell and Paul Dourish (Personal and Ubiquitous Computing, 2006). The paper deeply discusses Mark Weiser's vision of ubiquitous computing, especially with regard to how it was envisioned 10 years ago versus the current discourse about it. In the end, they criticize the persistence of Weiser's vision (and wording!). To do so, they describe two cases of possible ubicomp alternatives already in place: Singapore (an example of collective use of computational devices and sensors) and South Korea (infrastructural ubiquity, public/private partnerships).

Their discussion revolves around two issues. On one hand, the ubicomp literature keeps placing its achievements out of reach by framing them in a "proximal future" instead of looking at what is happening around the corner. Such a proximal future would eventually (for lots of ubicomp researchers, but also journalists and writers) lead to a "seamlessly interconnected world". The authors then raise the possibility that this will never happen ("the proximate future is a future infinitely postponed") OR, more interestingly, that ubiquitous computing has already come to pass, but in a different form.

On the other hand, ubicomp research is very often about the implementation of applications/services, assuming that the inherent problems will vanish (think about privacy!).

Therefore, what they suggest to the research community is to stop talking about the "ubiquitous computing of tomorrow" and rather look at the "ubiquitous computing of the present": "Having now entered the twenty-first century that means that what we should perhaps attend to is ‘‘the computer of now.’’" In doing so, they advocate getting out of the lab and looking "at ubiquitous computing as it is currently developing rather than it might be imagined to look in the future". And of course, they then point to an alternate vision that Fabien discussed last week at LIFT07:

the real world of ubiquitous computing, then, is that we will always be assembling heterogeneous technologies to achieve individual and collective effects. (...) Our suggestion that ubiquitous computing is already here, in the form of densely available computational and communication resources, is sometimes met with an objection that these technologies remain less than ubiquitous in the sense that Weiser suggested. (...) But postulating a seamless infrastructure is a strategy whereby the messy present can be ignored, although infrastructure is always unevenly distributed, always messy. An indefinitely postponed ubicomp future is one that need never take account of this complexity.

So what's the agenda? Building on William Gibson's famous quote about the future being already here, just not evenly distributed, they argue that:

If ubiquitous computing is already here, then we need to pay considerably more attention to just what it is being used to do and its effects. (...) by surprising appropriations of technology for purposes never imagined by their inventors and often radically opposed to them; by widely different social, cultural and legislative interpretations of the goals of technology; by flex, slop, and play. We do not take this to be a depressing conclusion. Instead, we take the fact that we already live in a world of ubiquitous computing to be a rather wonderful thing. The challenge, now, is to understand it.

Why do I blog this? Best paper in weeks. This particularly resonates with the way I think about ubicomp... meaning that no, the recurrent intelligent fridge some dreamed of 10 years ago is not the "fin de l'Histoire" (end of History). I really like it when Bell and Dourish point out that ubicomp is better exemplified by Cairo, with its freshly deployed WiFi network set to connect all the local mosques into a single city-wide call to prayer, than by a buddy-finder locator.

Moreover, the authors express their surprise at the fact that researchers are still positing much the same vision as years ago. This reminds me of the ever-shrinking time-frame futurists used to predict: the year 2000 was really the end point, and predictions were always targeted at that period. Now that we're in the (so-called?) 21st century, it's as if there could be no other future.

Anyway, that's a call to go into the field and see what's happening, and what the effects of technologies are.

Haptic interfaces

Acroe is a company that makes haptic interfaces such as the following:

Since the first haptic device designed in 1976 by the ACROE laboratory, the first prototype of ERGOS was designed and built in 1988, and was mainly dedicated to artistic applications. This third version of ERGOS, using an innovative actuators design, opens a new dimension in your experience of haptics.

ERGOS is a top-of-the-range technology, designed to provide you a crisp sensation of your virtual models, and to enact them at best. The electromagnetic technology is currently the best for haptic devices requiring high spatial resolution, high dynamics, and a very large force amplitude vs. maximum force ratio. It provides a powerful solution to applications requiring dexterous gesture skills, high precision, and a crisp sensation of the manipulated model. This is a compact solution for the use of a high quality haptic device system in a small environment.

Why do I blog this? What can be designed using a 6D joystick? Well, I am not that into haptics, but rather interested in the user experience of gestures to control digital environments.

Mobzombies

Julian kept talking to me about this MobZombies project (by William Carter, Aaron Meyers, William Bredbeck):

MobZombies explores a new dimension of handheld gaming by adding motion awareness to classic arcade style gameplay. Using a handheld device, and a custom motion sensor, players enter a virtual world infested with pixel-art zombies (a homage to vintage 8-bit console games). The goal of the game is to stay alive, running away from or planting bombs to destroy the ever-encroaching zombies. The twist is that a player's physical position controls the position of their zombie-world avatar, forcing the player to actually move around the real world to succeed in the game.

The virtual zombie-world is a simple environment -- the game's complexity comes from players having to negotiate real-world objects in order to avoid the zombies and stay alive. The scoring system is simple: the longer you can stay alive, the higher your score. Of course, the longer you stick around, the more zombies you'll encounter.
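
The core twist can be sketched in a few lines: physical displacement, as reported by the motion sensor, is accumulated into the avatar's position in the zombie world. The step-event interface and scale factor below are my own assumptions:

```python
class Avatar:
    def __init__(self, scale=1.0):
        self.x = self.y = 0.0
        self.scale = scale  # game-world units per meter walked

    def on_step(self, dx, dy):
        """Apply one displacement reading (meters) from the motion sensor."""
        self.x += dx * self.scale
        self.y += dy * self.scale

    def position(self):
        return (self.x, self.y)
```

The game's difficulty then comes for free: every virtual dodge costs a real-world dodge around real-world obstacles.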

Why do I blog this? That's a good way to connect the materiality of 1st life (with tangible interactions) to a second-life instance. Since I am interested in the gestural grammar of interactions, this seems to be a relevant platform to explore.

Ubicomp and user experience at LIFT07

Some not very well structured thoughts on the LIFT07 talks about ubiquitous computing. There was a dedicated session on it with Julian Bleecker, Ben Cerveny and Adam Greenfield, but some other talks can also be considered part of that topic (Frédéric Kaplan, Fabien Girardin, Jan Chipchase).

Adam Greenfield thoughtfully got back to the definition of ubiquitous computing, starting by explaining the horrible terms (pervasive computing, ambient intelligence...) that led to the neologism Everyware. Adam's talk was a must-see/read/listen about the user experience of Everyware ("information processing invested in the objects and surfaces of the everyday").

Adam gave some examples and then discussed the upsides and downsides of people's experience of these technologies. As opposed to his book, which starts from the more positive aspects, Adam started here with the drawbacks, using Jeremy Bentham's panopticon to show the Foucauldian consequences of ubiquitous computing. He exemplified how the data streams produced by our interactions with those systems are colonizing our everyday life, and how there are risks about who controls them. From the designer's point of view, he showed three aspects that are important to consider: people make mistakes (pressing wrong buttons...), inadvertent or unwilling use happens, and there are concurrency issues (when technology is everywhere, systems interact with each other: "the whole is more than the sum of the parts"). The upside Adam described concerns how these technologies can dissolve into behavior and become transparent (especially as physical manifestations). So, why is this important? I really like Adam's perspective on ubicomp; it's very balanced, and the way he discussed the advantages and drawbacks resonates. I was very interested in his discussion of how machines can derive knowledge from inference, and how users can determine that these inferences have been made or whether they are invalid. That's a topic I find important in terms of the research I did on automating location-awareness.

Ben Cerveny then gave a metaphorical talk called "The Luminous Bath: our new volumetric medium" in which he described the user experience of ubiquitous computing. Ben showed how we live in a luminous bath: the spilling out of information onto physical space, which is very attention-demanding. His talk used this metaphor to describe the characteristics of ubiquitous computing.

The first one that struck my mind was "memotaxis" (maybe because my previous background is in biology), referring to the process of self-organization enabled by the fact that objects gather meta-data. The aggregation of those "morphologies" (which others describe as "mash-ups") then makes them intelligible. He also used the notions of accretion (a continuum between an object and a medium), signalling (flows of data are produced not only by mobile objects and ambient displays but by any object), schooling ("a fish does not know what the school looks like...", meaning that the organization at the group level is not comprehensible to its members), decanting (distilling information into something less fluid), crystallizing (the creation of temporary structures of information) and acculturation (the emergence of practices from being immersed in the environment). So, why is this important? Ben's talks are always high-level (with super-nice slides) and this one was in the same vein. Such metaphors are pertinent in the sense that they move the ubicomp problem (mostly context-aware computing, as described in this IFTF report) into a different semantic description. This allows us to rethink the issues and provides food for thought for designers. Tom Hume has a good take on the topic (see his blogpost), explaining that people attach meaning to artifacts, categorize them and record this digitally. The categories created can be aggregated, and the information then blends into the environment. In the end, users cannot see this from the inside but can only get the meaning from the interactions.

The day before, Frédéric Kaplan presented his "Beyond robotics" talk, which addressed the notion of ubiquitous computing from the robot side [yes, I include robotics in ubicomp because in the end there is more and more convergence between communicating objects and robots].

Frédéric explained different ideas for going beyond current robotics. First, he showed how his former team and he improved the learning capabilities of the Sony AIBO by implementing a "curiosity algorithm" that allowed the robot to learn how to interact in various environments (walking, swimming...). Second, the discussion of artifacts adapted to the robot's morphology (and not linked to a specific usage) was a way to innovate: bikes, water suits. Third, and maybe more interestingly, Frédéric posited that the crux was to use the history of the robot's interactions with its owners and the environment [very much in the blogject line, I fully concur!]. This connects to what Adam described about predictions that can be made from data collected by the artifact (a topic also addressed by Nathan Eagle in his presentation). According to Frédéric, the point is to use this history of interaction to build predictions, something artificial intelligence has known how to do for ages. So, why is this important? In the end, one of the things he brought to the audience was an open question about what one should build using these ideas. He actually questioned the "calm computing" paradigm, proposing instead the idea of "chili computing": systems that surprise and stun users by providing disruptions in context. This is close to the rude tutor idea I described last week; I really enjoy it when my Nabaztag starts being rude, saying that the party sucks or that we should really go have lunch.
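
A very simplified sketch of a curiosity policy in the spirit of that work: keep a prediction-error history per activity and prefer the activity whose error is dropping fastest (maximal learning progress), rather than the easiest or hardest one. The window size and data shapes are my own simplifications of the actual algorithm:

```python
def learning_progress(errors):
    """Progress = older mean error minus recent mean error (2-sample windows)."""
    if len(errors) < 4:
        return 0.0
    old = sum(errors[-4:-2]) / 2.0
    recent = sum(errors[-2:]) / 2.0
    return old - recent

def pick_activity(error_histories):
    """Choose the activity with the highest learning progress."""
    return max(error_histories,
               key=lambda a: learning_progress(error_histories[a]))
```

An activity that is already mastered (flat low error) and one that is hopeless (flat high error) both show zero progress, so the robot gravitates to whatever it is currently learnable, which is the charm of the approach.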

In his presentation entitled "What Happens When 1st Life Meets 2nd Life? How To Live In A Pervasively Networked World", Julian Bleecker described the bridges between "first life" (aka the physical world) and "second lives" (i.e. digital environments ranging from MMORPGs to blogs, IM, etc.).

His point was that we should be mindful of the material character of what happens digitally: 2nd-life worlds have a material basis (just as Amazon has huge facilities from which it ships books), and first-life resources support and maintain the digital second lives. Julian additionally brought forward the important notion of embodiment (as opposed to the sedentary attitude of sitting on couches in front of computer screens). Then he described the more pragmatic implications of these statements: the physical environment is important for different reasons. As opposed to digital environments, there can't be any reboot or server update/scale-up; once our health/body is harmed, we can't create a new one; there is only one possible world that we can inhabit; etc. Julian's stance was therefore that the development of second worlds/digital environments should take material contingencies seriously. He exemplified this through three elements that can be used to bridge 1st/2nd lives: motion (the Wii controller is a physical experience), time (in Animal Crossing, the environment is different depending on the season) and distance (in Teku Teku Angel, a pedometer uses the player's movement in space to control a tamagotchi-like creature). So, why is this important? First, bridging 1st and 2nd lives is a powerful way to think about innovative applications. Second, and more importantly, there is really an interesting paradigm shift here. If you think about Metaverse-like digital worlds (read "Snow Crash" for that matter), Stephenson described a clear model of separated environments, whereas in what Julian highlighted there are interrelations and cross-pollination between them. I quite like this approach, and the underlying reasons to adopt it are very valuable and pertinent. You can read more about this on Julian's blog and in the upcoming work of the Near-Future Laboratory.

During the open stage, Fabien presented an insightful account of how the technological world is messy: "Embracing the real world's messiness". Like Frédéric with his idea of chili computing, Fabien questioned the calm computing paradigm and discussed how ubicomp can cope with the inherent messiness of our physical world.

Nurtured by tons of examples in the form of pictures taken with his cameraphone, he showed what happens: infrastructures break down, standards differ even for things as basic as plugs, competing technologies co-exist, ownership of enabling technologies is fragmented, biases are cultural and contexts are unpredictable. To bridge this world with ubicomp, Fabien presented Matthew Chalmers' idea of "seamful computing" (revealing limits, inaccuracies, seams and boundaries so that people can adapt) and how to design for users' appropriation (see what I posted about it here). So, why is this important? Working closely with Fabien, this is really an ongoing discussion; I am also convinced that the world is messy and that design should take this into account. What I would add to this is the notion of aging and dirtiness, which could be layered on top of the problems of technologies. A last thing I found nice in Fabien's talk was that he described the seams using photos taken with a cameraphone... which of course are not that nice and fluid... because they exemplify the reality of technology. This reminds me of an excerpt from Sherry Turkle's book:

I took my seven-year-old daughter, Rebecca, on a vacation in Italy where we went on a boat ride in the Mediterranean; it could have been a simulation because it looked like a post card. She saw a creature in the water, pointed to it and said, "Look Mommy, a jellyfish! It looks so realistic!" (...) I told this story about Rebecca and the realistic jellyfish to my friend Danny Hillis, who is a Disney Fellow. He responded to this story by describing what happened when Animal Kingdom, the new branch of the Disney theme parks, opened in Orlando. The animals are real; they are the ones that bleed. So he said that right after it opened, the visitors to the park were asked during a debriefing, "What did you enjoy?" The visitors complained that the animals weren't realistic enough -- the animals across the street in Disney World were much more realistic.

Finally, a relevant way to design with this in mind is to follow what Jan Chipchase is doing with his user experience research (field research using ethnographic methods). Though what he presented was not about ubiquitous computing, it's very relevant anyway. Jan described his research about illiterate users and showed how reality is a complex system in which even illiterate people manage to carry out difficult activities. So, why is this important? A common link between Jan's talk and what has been discussed about ubiquitous computing is the idea of delegation. AI and ubicomp research indeed deploy technologies that aim at assisting users or automating certain processes. What Jan discussed in his presentation was that illiterate users are obliged to delegate things and tasks. He questioned whether we can delegate to technology rather than to people, a very compelling topic to me.

Doraemon helicopter hairband

Via: an impressive Doraemon game for kids:

Doraemon has a cool gadget, the Yojigen Pocket (4th dimension pocket), in which he stores just about anything from cars to houses and planes. He also has a helicopter hairband that his friends can use for flying. Epoch has released a video game based on this helicopter hairband! Inspired by the Wii remote controller, the player uses a hairband to control his/her character by moving his/her head in all directions.

Why do I blog this? That's an impressive game controller: it allows users to control Doraemon as the character flies through the air using a helicopter strapped onto the head (lean the head forward to go forward, lean the head back to move back, and tilt the head to go sideways).
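The control scheme described above is essentially a mapping from head orientation to a 2D movement command. A minimal sketch of that mapping (all names and the dead-zone value are my own assumptions, not anything from the actual game):

```python
def head_tilt_to_motion(pitch_deg, roll_deg, dead_zone=10.0):
    """Map head pitch/roll (in degrees) to a (dx, dy) movement command.

    Hypothetical sketch of the hairband scheme: lean forward/back to
    move forward/back, tilt the head sideways to move sideways.
    Angles within the dead zone produce no movement.
    """
    dx = dy = 0
    if pitch_deg > dead_zone:      # head leaned forward -> move forward
        dy = 1
    elif pitch_deg < -dead_zone:   # head leaned back -> move back
        dy = -1
    if roll_deg > dead_zone:       # head tilted right -> move right
        dx = 1
    elif roll_deg < -dead_zone:    # head tilted left -> move left
        dx = -1
    return dx, dy
```

The dead zone keeps small involuntary head movements from jittering the character, a common trick in motion controllers.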

Brain imagery navigation with a wiimote

Navigating a 3D structure as complex as the brain is always painful. Therefore, some smart researchers chose to use the wiimote to do so. Check this YouTube video:

Second example, better quality... The Wii remote interfaced with GlovePIE gives the research community a new type of human interface device. Interfacing this awesome object with intelligent key bindings, you can think about a new way to report, to interact with images, to explore the body volume. In a not-so-far future, we can imagine reporting radiology images in a "Minority Report" way. This is a simple example of me interfacing window/level, zoom, pan & slice scrolling features on a CT set of images. I'm open to suggestions!
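The key-binding approach the author mentions boils down to a dispatch table from controller events to viewer commands. A hypothetical sketch of what such bindings might look like for the features he lists (window/level, zoom, slice scrolling); the event names and values are illustrative assumptions, not GlovePIE syntax:

```python
class CTViewer:
    """Minimal stand-in for a CT image stack viewer."""

    def __init__(self, n_slices):
        self.n_slices = n_slices
        self.slice = 0
        self.zoom = 1.0
        self.window, self.level = 400, 40  # display window/level

    def scroll(self, delta):
        # Clamp the slice index to the bounds of the stack.
        self.slice = max(0, min(self.n_slices - 1, self.slice + delta))

    def adjust_window_level(self, dw, dl):
        self.window = max(1, self.window + dw)
        self.level += dl

    def set_zoom(self, factor):
        self.zoom = max(0.1, self.zoom * factor)


# Hypothetical event names standing in for wiimote buttons/gestures.
BINDINGS = {
    "dpad_up":    lambda v: v.scroll(+1),
    "dpad_down":  lambda v: v.scroll(-1),
    "btn_plus":   lambda v: v.set_zoom(1.25),
    "btn_minus":  lambda v: v.set_zoom(0.8),
    "tilt_right": lambda v: v.adjust_window_level(+10, 0),
    "tilt_up":    lambda v: v.adjust_window_level(0, +5),
}


def handle(viewer, event):
    """Dispatch one controller event to its viewer command, if bound."""
    action = BINDINGS.get(event)
    if action:
        action(viewer)
```

The point of the table is that rebinding the controller is just editing a dictionary, which is presumably why tools like GlovePIE make this kind of experimentation so quick.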

Why do I blog this? Simply because I remember my courses in cognitive sciences and how hard it was to look at brain structures and PET visualizations. I haven't tried that system and do not know whether it's as compelling as the video shows, but it's definitely intriguing.

The work of Athanasius Kircher

Athanasius Kircher was a 17th-century German Jesuit scholar who invented a large number of interesting artifacts, ranging from speaking tubes and perpetual motion machines to cat pianos. Among the devices this Renaissance man created, there are two that I find amazingly intriguing in terms of ubiquitous computing. The first is a projector that used candlepower to cast images from glass plates onto a wall (as explained here):

By the flickering light of an oil lamp, Athanasius Kircher projected a series of images engraved on glass onto a wall. He could use his projector to illustrate lectures or simply to amuse his visitors.

And more interestingly, the following picture (from Musurgia Universalis (1650)) has been recognized as a very important step in the history of acoustic theory. This work shows how echoes and reverberations can be bounced for long periods of time in complex wall structures. It's basically a "piazza-listening device". As described in this paper, "the voices from the piazza are taken by the horn up through the mouth of the statue in the room on the piano nobile above, allowing both espionage and the appearance of a miraculous event".

Why do I blog this? I heard about that incredible man the other day during Jef Huang's presentation at the "classroom of the future" workshop and thought it was a good time to dig more into his work. This sort of design research is impressive and strikingly relevant today when thinking about roomware.

3D printing in 2007?

Some like-minded people give a 2007 forecast on CNN Money; among the predictions, I found this one most interesting:

Paola Antonelli (MOMA design and architecture curator) I'm looking forward to the next steps in 3-D printing, where a laser beam solidifies a liquid or a powder to form intricate solid shapes. It currently takes seven days to make a chair from scratch, but soon enough it'll take seven hours, and then seven minutes. You'll be able to inject different colors and textures. People will be able to design their own objects, and 3-D objects will at last join the open-source movement. There will be 3-D printing stations in all neighborhoods all over the world. That will save energy because there will be no need for warehouses and trucks. And the process will use just as much material as is necessary - no waste. This, of course, is a few years from now, but the beginning of the future is today.

Why do I blog this? I don't know whether this should be expected for 2007, nor whether all the issues mentioned will be solved by 3D printing, but it's an intriguing trend that can have some good implications (for DIY gear and new consumer practices).