The complexity of GPS accuracy

GPS located on the right

Writing a chapter about geolocation history, I am digging into the issue of GPS accuracy, as it is often a "pain point" in the user (driver) experience. The Road Measurement Data Acquisition System has an interesting paper about it, by Chuck Gilbert.

Gilbert shows how complex the problem of GPS accuracy is and how misleading advertisements are, as they do not convey an intelligible vision of the topic. In general, the admitted accuracy (if there were such a thing as admitted accuracy) is between 15 and 100 meters. But what does that range correspond to? Is it achieved under optimal conditions? Under difficult or extreme circumstances? Accuracy values are therefore represented statistically with different means, but there is never enough room in an ad to depict this complexity; Gilbert finally recommends not using advertisements as an evaluation of GPS accuracy.

The factors that should be listed are the following:

"Required occupation time Type of data recorded (phase or pseudorange) Type of processing (phase or pseudorange) Environmental conditions Maximum allowable PDOP Minimum allowable signal strength Maximum allowable distance between base and rover receivers Horizontal accuracy versus vertical accuracy"

Why do I blog this? although I often focus on the environmental limitations (e.g. narrow streets), the situation is far more complex and it's interesting to pinpoint the different factors that can make a GPS device inaccurate. How can design take this into account?
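As a rough illustration (this is a common GPS rule of thumb, not something taken from Gilbert's paper), expected position error can be sketched as the dilution of precision (the PDOP listed among the factors above) multiplied by the user-equivalent range error (UERE); the numbers below are illustrative assumptions, not measured values:

```python
def estimated_error_m(pdop: float, uere_m: float) -> float:
    """Rough position error estimate: dilution of precision
    times the user-equivalent range error."""
    return pdop * uere_m

# Illustrative values only: open sky vs. a narrow street
# ("urban canyon"), assuming a typical consumer-receiver
# UERE of about 5 m.
open_sky = estimated_error_m(2.0, 5.0)      # ~10 m
urban_canyon = estimated_error_m(8.0, 5.0)  # ~40 m
```

This is exactly why a single advertised accuracy figure is misleading: the same receiver spans a wide error range depending on satellite geometry alone.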

The Economist on touch interfaces

Playing with a touch-screen

Although I don't share the optimism of this article about touch interfaces (in the insightful Technology Quarterly in The Economist), there are some good elements discussed there. I recommend reading it in conjunction with Bill Buxton's perspectives on that very topic.

The article in The Economist gives an overview of touch interfaces (tabletops, mobiles, etc.), showing how they have been around for quite a while, along with interesting quick descriptions of the available technologies. As with other technologies, I am less interested in the interface per se than in how it evolved over time. See for example this description of the limiting factors:

"If touch screens have been around for so long, why did they not take off sooner? The answer is that they did take off, at least in some markets, such as point-of-sale equipment, public kiosks, and so on. In these situations, touch screens have many advantages over other input methods. That they do not allow rapid typing does not matter; it is more important that they are hard-wearing, weatherproof and simple to use. (...) But breaking into the consumer market was a different matter entirely. Some personal digital assistants, or PDAs, such as the Palm Pilot, had touch screens. But they had little appeal beyond a dedicated band of early adopters, and the PDA market has since been overshadowed by the rise of advanced mobile phones that offer similar functions, combined with communications. Furthermore, early PDAs did not make elegant use of the touch-screen interface, says Dr Buxton. “When there was a touch interaction, it wasn’t beautiful,” he says. (...) That is why the iPhone matters: its use of the touch screen is seamless, intuitive and visually appealing. (...) Another factor that has held back touch screens is a lack of support for the technology in operating systems. This is a particular problem for multi-touch interfaces. "

Furthermore, the article also deals with a topic I am researching (mostly with the Nintendo Wii and DS): that of gestural languages for tangible interfaces:

"Microsoft is also developing gestures, and Apple has already introduced several of its own (...) The danger is that a plethora of different standards will emerge, and that particular gestures will mean different things to different devices. Ultimately, however, some common rules will probably emerge, as happened with mouse-based interfaces.

The double click does not translate terribly well to touch screens, however. This has led some researchers to look for alternatives."

Why do I blog this? some interesting elements here about the evolution of technologies, especially showing how long such an interface (almost 20-25 years old) takes to find its niche.

Urban pranks

plasticninja (a plastic ninja seen in Rome)

Being a great fan of random acts (and André Gide's acte gratuit), it's always interesting to read what the mainstream press has to say about them. So when the WSJ features something about this, there are sometimes some good excerpts, such as:

"The latest pranksters are "urban alchemists," akin to so-called guerrilla gardeners who cram plantings into sidewalk cracks, or people who create "found art" made from random items plucked from the streets, according to Jonathan Wynn, a sociologist at Smith College in Northampton, Mass.

"These are people in cities who take the public spaces and everyday life and make something kind of magical about it," he says."

Why do I blog this? beyond the fun part of pranks, they're definitely interesting as signals which reveal the need for meaning-making in contemporary cities/societies.

Umbrella hack

Umbrella-ed window

An intriguing use of umbrellas, seen both in Seoul (above) and Geneva (below). Protect your place with what you have up your sleeve!

Nice hack

Why do I blog this? fascination towards mundane creativity, or how people use what they have to repair stuff; here it goes beyond cardboard or duct tape.

Explore and produce provocative designs for automated journeys

Buildings Issues in Korea

If you're (still) around Seoul, which I am not, there is this awesomely intriguing "action-packed" workshop next week called Automated Journeys (as part of the Ubicomp 2008 conference):

"Computing technology now pervades those moments of our day when we move through our cities. Mobile phones, music players, vending machines, contact-less payment systems and RFID-enabled turnstiles are de rigueur on our daily journeys. This workshop aims to examine these augmented journeys, to reflect on the public, semi-public and private technologies available to us in them, and to speculate on what innovations might be to come. Taking as our starting point cities such as Seoul, we aim to take seriously the developments in mobile technology as well as the advancements in autonomous machinery and how these mesh with our urban journeys.

Through collaborative fieldwork, group discussion and a hands-on design brainstorming session, the workshop's empirical focus will be directed towards producing 4 envisagements that either speculate and/or critically reflect on technological futures."

Why do I blog this? interest in the automatic city + active workshop with participant engagement + already attended another workshop organized by this team. Seoul's a perfect place to investigate such a topic (on par with Rotterdam).

Designing interactions, designing conversations

Morning read in the train: Uncertain futures: A Conversation with Professor Anthony Dunne by David Womack. Yet another insightful short article on the Adobe Design Center think tank website. Womack starts off by describing how Dunne+Raby's work is meant to reclaim the original meaning of interaction design: generate particular types of conversations, usually about technology or an aspect of the future. Some excerpts I found relevant:

"With classic design, the idea is generally to solve the problem or cure the ailment. If you’re getting wet, you make a shelter. Placebo projects we see more as a way of negotiating a relationship to something. It’s not solving a problem. You’re setting up a situation that facilitates a discussion. (...) it stops students thinking in terms of, “Here’s a problem, now I’m going to solve it.” We want to think about people in a complex way that isn’t neat or containable.

For example, if nanotechnology is on its way in its various manifestations, which of these manifestations seem acceptable and which seem scary? And why? Design can be a medium for exploration and a place for experimenting and engaging people in dialogue. We think design can provide a very concrete and down to earth language for exploring the implications of technology.

I would never describe designers as problem solvers. I might describe them as meaning makers."

Why do I blog this? preparing a presentation for a design conference, I am cobbling together some notes about utilitarian versus critical design. What I find of particular interest in what Dunne is claiming here is the importance of this approach. As he wrote with Fiona Raby in Design Noir, "Beneath the glossy surface of official design lurks a dark and strange world driven by real human needs". A quote I really enjoy and often use even in less critical-design-prone domains (e.g. with business execs wondering about the "added value" of the weird stuff I bring up when interacting with them). Why is it pertinent IMO? Because it's about asking questions, uncovering new meanings and desires, and not about doing new product development by adding the word "intelligent" as a creative way to design the future.

Anticipatory or representational visions of ubiquitous computing

Catching up with accumulated RSS feeds, I read with great pleasure the slides from Sam Kinsley's presentation at the RGS-IBG annual international conference.

Kinsley interestingly addresses the vision of ubiquitous computing and how it is employed in the domain of corporate R&D. He takes the example of HP's Cooltown project and the "stories" that were set up to define the project and the vision. Of course, there were some issues with the large quantity of material produced in the Cooltown project. Some excerpts I enjoyed from Kinsley's notes:

"After CEO prominence came, some HP managers went to this producer to create a ‘vision’ video for CoolTown. From a corporate ‘vision’ perspective: the video was a very compact articulation of a lot of things CoolTown as a research project was trying to say about the type of world being created by these types of technologies. From the technology research scientist standpoint - there were things about the video they liked, but many things that made them cringe and say 'we didn't say it would work like that'. As some of the researchers saw it, the producer wasn't very ‘tech savvy’.

"The video became an interesting double-edged sword. It had a particular effect on how CoolTown was received. It wasn't accurate to the technological development that ensued but represented a ‘vision’. The researchers felt that the overly emotive and simplistic corporate vision elided some of the interesting and important things they were trying to achieve to make the world better. (...) whilst visions are not necessarily realised, nor likely to be, they are productive of particular types of relation between researchers, business managers, clients and various places and things. (...) Vision texts and videos are, in most cases, certainly not glimpses of a future. Rather, they are representational constructs born of anticipatory impetus."

Why do I blog this? I often find it interesting when this sort of gap is revealed, as it shows the importance of culture and imaginary expectations in technological developments. The notion of "visions" as less teleological and more representational is also important here, as it shows that reality is more complex than presented in the pop press/PR communication.

Take-aways from LIFT Asia

Some notes Laurent and I prepared for the wrap-up, insisting on the following take-home issues, the image that takes shape once the conference is done:

  1. experience of space: physical space changes; the way we perceive and interact in space is modified. Christian Lindholm talked about wifi places as oases, Adam Greenfield talked about the new ways we will experience places, etc.
  2. currency & business models redefined, as shown by David Birch's talk; Joonmo Kwon described new business models such as co-promotion, or the fact that game money is controlled by game designers (they even control the inflation rate)
  3. service convergence, as described by Joonmo Kwon and Takeshi Natsuno
  4. cities as interfaces: Jury Hahn, Jeffrey Huang, Soo-In Yang and Adam Greenfield gave us different propositions
  5. technological relativism: some countries are more advanced than others, the speed of change is increasing (as shown by Jan Chipchase), it's never black or white, and the notion of "uses" is also different.
  6. the real world is a limit: the limit is often simple, the physical reality: battery life (though there could be solutions, as exemplified by Raphael Grignani), machines that crash, etc.

Kamsahamnida!

Robot session @ LIFT Asia

Saturday morning at LIFT Asia 2008, quick notes. Frederic Kaplan began his talk by stating that the number of objects we have at home is huge (nearly 3500), and all of them have different "value profiles". He showed curves that capture the evolution of the experienced value of an object. See the curve below. A Roomba, for example, follows a curve like a corkscrew's (c), whereas an Aibo, an entertainment robot, follows more of a "notebook" curve, where value augments over time through the relationship with the owner(s).

Frederic stated that we know how to deal with the mid-to-end part of the curve but not the beginning, namely how to create the first part of the robot-owner relationship, which is a crucial question in general for designers of robots and communicating objects. There are many reasons for that: in the West, it's not easy, culturally, to "raise" and talk to a robot; most people try but stop, and show it only when friends come to visit. So the robot is a pretty expensive gadget.

After moving from Sony to the CRAFT laboratory, Frederic started moving from robots to interactive furniture and became interested in how objects can be "robotized", and in the fact that perhaps robots should not always look like robots. Since 1984, computers have not changed much (shapes and icons have been modified, but it's still the same story). We changed the way we use computers (listening to music, watching photos, getting the news: not what computers were intended for) but they did not change, so his team thought it would be a good idea to build a robotic computer as in the old Apple commercial. They therefore designed the Wizkid, an "expressive computer" which recognizes people and gestures, proposing a new sort of interactivity. To some extent, he showed how you can have expressivity without an anthropomorphic robot (unlike the demo we had of the Speecys robot).

Some use cases:

- In the living room, the Wizkid can act as a central interface to the media players: showing a CD makes the robot play it. It can also take pictures autonomously and create a visual summary of an event that can be sent to guests afterwards. It's like an automatic logging system that remembers and uses that information.
- In the kitchen, the Wizkid can help you cook and shop. When the owner prepares a recipe, the Wizkid helps follow it step by step, tracking faces and gestures (and also making some suggestions). It would be possible to show an item and have the Wizkid add it to the shopping list.
- Games are also an interesting field: you can play augmented reality games with the Wizkid: you look at yourself on the screen and see yourself in imaginary worlds.

As a conclusion, Frederic said that most people think robots will look like robots, but he claims that everyday objects can become robots and that the next generation of computer interfaces will be robotic. People used to go to the machine to interact, but now interactivity comes to you. Computers used to live in their own world; now they live in yours.

Then Bruno Bonnell, in his "from robota to homo robotus: revisiting Asimov's laws of robotics", took the floor and gave an insightful presentation about how robot designers should revisit the definition of "robots" (and therefore Asimov's laws). To him, there is a vocabulary problem when it comes to robots.

In Czech, "robota" means "forced labor", and it pervaded our representation of what a robot is, that is to say, a mechanical slave. Hence Asimov's laws of robotics. These laws work well for military or industrial robots, but what about leisure robots such as the Aibo, the Roomba or the iRobiQ? We had the same problem with the word "computer": it's only since World War II that the word "computer" (from Latin computare, "to reckon," "sum up") has been applied to machines. The Oxford English Dictionary still describes a computer as "a person employed to make calculations in an observatory, in surveying, etc.". We moved that into machines, and computers took over successive activities: systematic tasks, creation-support tools, an artistic medium and finally an amplifier of imagination. And it's the same with animals: they used to be food, then working forces, companions and finally friends. In addition, we don't talk just about "animals": there are ponies, dogs, etc., with a classification: animals, mammals, equids, horses. It would be possible to classify computers along the same lines: order/family/genus/species.

So, what about robots? Are all these very different robots the same? Couldn't we sort them into a classification: a family of static robots, a family of moving robots, etc.? So now, it's no longer "robot, robot, robot" but "Robots, Mover, Humanoide, IrobiQ". What is important here is that all the robots in the classification are recognized as having different features and characteristics. We start recognizing that they are not all the same species. By classifying (giving a name), you generate different applications and can improve the quality of the product you are designing. Putting names on things helps create them. It allows us to go beyond the limits of the robot vision, and it allows us to reconcile having both an anthropomorphic robot (like Speecys' robot, which we saw first) and a different one (like Frederic's Wizkid), since they are from two different "species".

After this classification, we can look at the evolution, how to branch out the future of robots. There could be the following path: mechanical slave, alternative to human actions, substitute for human care, companion, and finally amplifier of human body and mind. Is it sci-fi or reality? Today or tomorrow? Is it possible technically? We don't know, but what is important is to start today and look ahead.

An interesting path to do so is to move away from practical robots and investigate useless robots, as well as not being afraid of technical limitations (think about the guys who designed Pong at Atari). To the question "what does the robot do?", the answer is simple: create an emotional bond with humans (that would be the recipe for a robot's success). The important characteristics are therefore: fun, thrilling, etc. Which is very close to what video games do: they create an emotional bond with the players because they are faithful to a reality; they are reliable, available, adaptable, and above all TRUSTFUL. In the same fashion, robots should be trustful. The bottom line is thus that we should forget Asimov's laws and invent a Tao of robotics where the "gameplay" is the key to accepting robots as part of our reality.

Also, the funny part of the session was the first talk, Tomoaki Kasuga's demonstration of his robot, whose "charm point" is the hips (or something else, as attested by the picture below), especially when dancing on stage. What Tomoaki showed is that expressivity (through dance, movement, the quality of the pieces) is very important for human-computer interaction.

Jan Chipchase @ LIFT Asia 2008

In this session, focused on mobile technologies, the first speaker was Jan Chipchase. His "Future Social" presentation relied on examples of technology-use behaviors to show trends that both disrupt these behaviors and generate new social practices. He basically used cases from his field studies and personal experience. Example 1: in a study about how people react to head-mounted displays in Tokyo and New York, they hired actors to simulate various use cases to test their social feasibility.

Example 2 (co-presence): people sit in a café with an open clamshell cell phone next to the tea mug, to check updates (messages, IM); for women sitting alone, it is also a way of sending a social signal to others that they are currently occupied.

Example 3: a mobile phone headset held in the hand for two purposes: cutting the microphone off from ambient noise and telling other people that the person wants quiet.

Example 4: in a UK café, the manager did not want people to use laptops; when they do, the manager has different strategies to encourage them to stop (cleaning next to the customer...). When people do lots of things that are not appropriate, a lot of signage appears. Signage is interesting because it shows where society wants to go and who defines authority.

Example 5: secret use: it's common in Korea to see school kids secretly watching mobile TV, for example with the phone in a case on their desk.

New trends based on Jan's (and his team's) work:

- More and more of what we use is "pocketable" (fits in the pocket), carried into contexts where people do not necessarily anticipate its use. This provokes behavior leakage from one context to another, and leads designers at Nokia to ask within what time frame what stuff becomes pocketable and what services can be accessible from that device.
- Serial-solitarity: it's always easier to design something for sole use rather than shared use (although there is a big buzz about YouTube, etc.). What this means is that we see more and more people in the same place, doing the same things, but apart.
- Real-time associations: technologies make it easier to answer the questions one has, to make what Jan referred to as "real-time associations" of people, things, what people do, etc.
- Tech literacy/age: technology is used more and more at a younger age.
- Boundaries between work and other things are blurring. It takes a lot of discipline to maintain these boundaries.
- Speed of change: adoption of services/devices in lots of countries, volume of devices created, etc.
- Invisible technologies: pocketable is a step towards more significant miniaturization: we're not going to see a lot of technologies, because they disappear into the infrastructure. And when technologies disappear, the emphasis on social cues to make them explicit is even more important.

And the conclusion of his talk was, simply: "I have way more questions than answers; that's what we do".

"Networked cities" session at LIFT Asia 2008

(Special fav session at LIFT Asia 2008 this morning since this topic is linked to my own research; my quick notes.) Adam Greenfield's talk "The Long Here, the Big Now... and other tales of the networked city" was the follow-up to his "The City is Here for You to Use". Adam's approach here was "not a technical talk but an affective one", about what it feels like to live in networked cities, and less about the technologies that would support them. The central idea of ubicomp: a world in which all the objects and surfaces of everyday life are able to sense, process, receive, display, store, transmit and take physical action upon information. Very common in Korea, where it's called "ubiquitous" or just "u-", as in u-Cheonggyecheong or New Songdo. However, this approach often starts from technology and not from human desire.

Adam is more interested in what it really feels like to live your life in such a place, or how we can get a truer understanding of how people will experience the ubiquitous city. He claims that we can begin to get an idea by looking at the ways people use their mobile devices and other contemporary digital artifacts. Hence his job as Design Director at Nokia.

For example: a woman talking on a mobile phone while walking around a mall in Singapore, no longer responding to the architecture around her but having a sort of "schizogeographic" walk (as formulated by Mark Shepard). There is hence "no sovereignty of the physical". Same with people in the Tokyo or Seoul metro: physically there but on the phone; they're here physically, but their commitment is in the virtual.

(Oakland Crimespotting by Stamen Design)

Adam thinks that what primarily conditions choice and action in the city is no longer physical but resides in the invisible and intangible overlay of networked information that enfolds it. The potential here is the following:

- The Long Here (named in reference to Brian Eno and Stewart Brand's "Long Now"): layering a persistent and retrievable history of the things that are done and witnessed there over any place on Earth that can be specified with machine-readable coordinates. Examples of such layering are the Oakland Crimespotting map or the practice of geotagging pictures on Flickr.
- The Big Now: making the total real-time option space of the city a present and tangible reality locally AND, globally, enhancing and deepening our sense of the world’s massive parallelism. For instance, with Twitter one can get a sense of what happens locally in parallel, and also globally. You see the world as a parallel ongoing experiment. A more complex example is to use Twitter not only for people but also for objects; see for instance Tom Armitage's "Making bridges talk" (Tower Bridge twitters when it is opening and closing, captured through sensors and updated on Twitter). At the MIT SENSEable City Lab, there is also a project called "Talk Exchange" which depicts the connections between countries based on phone calls.
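The "Twitter for objects" idea (a bridge announcing when it opens and closes) can be sketched minimally; this is my own hypothetical illustration, not how Tom Armitage's actual bot works:

```python
def bridge_status_updates(sensor_readings):
    """Yield a short status message each time the sensed
    bridge state changes (e.g. from 'closed' to 'open')."""
    last = None
    for state in sensor_readings:
        if state != last:
            yield f"Tower Bridge is now {state}"
            last = state

# Repeated readings produce no message; only transitions do.
updates = list(bridge_status_updates(
    ["closed", "closed", "open", "open", "closed"]))
# three messages: closed, open, closed
```

The design point is exactly the one Adam makes: the object only speaks when its state changes, turning a physical event into a networked, subscribable one.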

Of course, there are less happy consequences; these technologies can be used to exclude, what Adam calls "The Soft Wall": networked mechanisms intended to actively deny, delay or degrade the free use of space. Defensible space is definitely part of it, as Adam points out with Steven Flusty's categories describing how spaces become "stealthy, slippery, crusty, prickly, jittery and foggy". The result is simply differential permissioning without effective recourse: some people have the right of access to certain places and others don't. When a networked device does that, you have less recourse than when it's a human with whom you can argue, talk, fight, etc. Effective recourse is something we take for granted that may disappear.

We'll see profound new patterns of interactions in the city:

  1. Information about cities and patterns of their use, visualized in new ways. But this information can also be made available on mobile devices locally, on demand, and in a way that it can be acted upon.
  2. Transition from passive facades (such as huge urban displays) to addressable, scriptable and queryable surfaces. See for example the Galleria West by UNStudio and Arup Engineering, or Pervasive Times Square (by Matt Worsnick and Evan Allen), which show how it may look.
  3. A signature interaction style: information processing dissolving into behavior (simple behavior, no external token of the transaction left)

The take-away of this presentation is that networked cities will respond to the behavior of their residents and other users, in something like real time, underwriting the transition from browse urbanism to search urbanism. And Adam's final word is that the networked city's future is up to us, that is to say designers, consumers, and citizens.

Jef Huang's "Interactive Cities" then built on Adam's presentation by showing projects. To him, a fundamental design question is "How to fuse digital technologies into our cities to foster better communities?". Jef wants to focus on how digital technology can augment physical architecture to do so. The premise is that the basic technology is really mature, or has reached a certain stage of maturity: mobile technology, facade tech, LEDs, etc. What is lacking is the way these technologies have been applied in the city. For instance, if you take a walk in any major city, the most obvious appearances of ubiquitous tech are surveillance cameras and media facades (that bombard citizens with ads). You can compare them to physical spam, but there's no spam filter: you can either go around them, close your eyes or wear sunglasses. You can compare the situation to the early days of the Web.

When designing networked cities, the point is to push the city along the same path: more empowered and more social platforms. Jef then showed some projects along that line: Listening Walls (Carpenter Center, Cambridge, USA), the now famous Swisshouse physical/virtual wall project, and Beijing Newscocoons (National Arts Museum of China, Beijing), which gives digital information, such as news or blogposts, a sense of physicality through inflatable cocoons. Jef also showed a project he did for Madrid's Olympic bid for 2012: real-time/real-scale urban traffic nodes. Another intriguing project is "Seesaw connectivity", which lets people learn a new language in airports through a shared seesaw (one part in one airport and the other in another).

The bottom line of Jef's talk is that fusing digital technologies into our cities to foster better communities should go beyond media façades and surveillance cams, allow empowerment (from passive to co-creator), and enable social, interactive, tactile dimensions. Of course, it leads to some issues, such as the status of the architecture (public? private?) and sustainability questions.

The final presentation, by Soo-In Yang, called "Living City", is about the fact that buildings have the capability to talk to one another. Sensors are now disappearing into the woodwork and all kinds of data are transferred instantly and wirelessly: buildings will communicate information about their local conditions to a network of other buildings. His project is an ecology of facades where individual buildings collect data, share it with others in their "social network" and sometimes take "collective action".

What he showed is a prototype facade that breathes in response to pollution, what he called "a full-scale building skin designed to open and close its gills in response to air quality". The platform allows buildings to communicate with cities, with organizations, and with individuals about any topic related to the data collected by sensors. He explained how this project enabled them to explore air "as public space and building facades as public space".

Yang's work is very interesting because they design proofs of concept: they don't want to rely only on virtual renderings and abstract ideas, so they installed different sensors on buildings in NYC. They could then collect and share the data from each wireless sensor network, allowing any participating building (the Empire State Building and the Van Alen Institute building) to talk to the others and take action in response. In a sense, they use the "city as a research lab".

Eric Rodenbeck at LIFT Asia 2008

Eric Rodenbeck (Stamen, a design studio in SF) just gave a nice presentation in the "Beyond the Web we know" session. He showed a less known part of the Web, in the shadow of the social media frenzy: rich data visualization. At his studio, Eric and his team work with flows of data (from the internet and the real world) and find ways to represent that data so that people engage with it better. I actually saw only one part of his talk at O'Reilly ETech 2008 and thought it would be great to bring him to LIFT Asia. (Picture of Stamen's Digg swarm visualization)

Eric started with the work of Étienne-Jules Marey, a French hat-loving physiologist who studied movement (heartbeats, human walking and animal movement). His talk basically showed how Marey's work could be turned into design principles for data visualization. For example, Eric showed how Marey demonstrated that the flight of a bird is different from the flight of an insect by using representations of movements. Marey also designed hardware to represent different movements. What Stamen is doing, in my opinion, is taking the same approach with today's tools (software in the present case): taking existing flows of data (from databases, for example, or GPS sensors) or capturing them, and using Web technologies to represent/display them.

IMO the take-aways of his talk are the following points:
- Repetition and measurement allow one to better understand how a system works, as they can reveal phenomena hidden from the observer.
- Visualization can be very effective at telling stories: showing patterns, for example, in the Digg swarm project.
- Use your eyes instead of your brain.
- Visualizations are not always meant to find answers; they also help to generate new questions. For example, in Cabspotting (see below), the white lines represent taxis. As one can see, there are taxis moving close to the Bay Bridge in San Francisco, but they are obviously over the water... how can this be possible? The thing was that GPS worked fine on the upper deck of the Bay Bridge but not on the lower deck (since the upper deck blocks the GPS signal): the lines next to the bridge therefore reflect cars whose GPS was not working.

(Picture of Stamen's Cabspotting)

Intentionally slow

Slow mode to take advantage of the view An interesting encounter yesterday at the conference center in Jeju was this sign, which reveals to the user why the elevator is so slow: it has been designed that way on purpose. Sort of the equivalent of "slow food" (http://en.wikipedia.org/wiki/Slow_food) for micro-mobility transportation systems.

Why do I blog this? I find this sort of design (and signage) interesting as it shows how transportation systems are more than mere tubes to bring people from A to B as fast as possible. If quantity management was the paradigm of transportation systems in the past, there is now a clear trend towards a more qualitative experience/service. That was a topic often addressed by Georges Amar, head of the foresight group at RATP; see for example my notes from one of his talks. Hence the need to slow down sometimes to appreciate the landscape (in this case), or also sociability, the passage of time, etc.

Diagrams and visuals in anthropology

("Tuamotuan Conception of the Cosmos", by Paiore, 1820)

Having recently been looking at how to shape ethnographic results into an adequate form for designers, I found Dori Tunstall's post about how "anthropology has always been visual" very relevant. She points to this Flickr pool entitled "Great Diagrams in Anthropology, Linguistics, & Social Theory". As she says:

"I have always bristled at the notion that anthropologists are more textually-oriented than visual, that somehow there is no culture of the visual in the field. Having misspent my youth trying to figure out the subtleties of kinship diagrams, mastering the art of reading archaeological site maps, and illustrating the distinct morphology of early hominids (pre-humans), I knew that to be empirically untrue. So I am happy to have the vindication through visual documentation that Anthropology has always been visual."

Why do I blog this? I am currently looking, out of personal interest (i.e. not linked to a specific project so far), at the diversity of material which can be generated by ethnographic work, and sort of thinking about how to use more visual representations (as opposed to the textual format). Of particular interest to me are these sorts of spatial diagrams (they would have been helpful in the home ethnography project I did in July):

(Kabyle House or The World Reversed - Bourdieu, 1972)

From "force" to "touch"

haptic Some ads for "Haptics UI" mobile phones in full swing in South Korea (Gimpo airport above and the COEX center in Seoul below). The semantics of that word may be mysterious (it comes from a Greek word meaning "contact" or "touch"), but it's definitely interesting to see it applied here.

Haptic user interface

In the 90s, the term was often employed for the future of input/output interfaces in virtual reality, especially with a focus on force feedback. Now the emphasis is a bit more subtle and seems to be definitely on "touch": somehow the word made it by losing its "force" characteristic and turning into something more Weiser-ian: a calm "touch" computing paradigm.