Most of the attention in this domain grew out of Augmented Reality research in the late nineties, which led to lots of prototypes allowing users to access digital textual or audio information that “augments” physical artifacts and places by showing additional information, hidden objects or navigation aids (Feiner, 2002). Most of the time, this information is displayed on eyeglasses or helmet-mounted displays, or delivered through headsets when the output is audio. Lots of projects in this field have put the emphasis on designing compelling visualizations, such as 3D models, to provide people with a tangible way to manipulate complex information. As of 2005, this technology is even available on cell phones, where computer programs use the built-in digital cameras to visualize virtual elements on top of the physical space. In the case of Augmented Reality, the locus of the output is the same as that of the input: digital elements appear on top of the corresponding physical elements that trigger their creation. This is why researchers speak of an “overlay of information”.

However, the digital-physical convergence does not imply that this locus is always the same. Advances in location-based applications also make it possible to receive digital information on portable devices such as PDAs or cell phones (Benford, 2005). In these cases, the digital information is generally not overlaid on a representation of the physical environment but rather delivered as a textual or audio message. Another difference lies in the event that triggers the exchange of such a message. In augmented reality, it is generally the recognition of a certain visual marker that allows the system to replace it on the display with a computer-generated graphic. In location-based applications, the information is sent to users when they are located in a specific place, in the vicinity of a certain person or close to an artifact.
Why do I blog this? I was gathering some examples for upcoming talks.