Future

Live the future yesterday to invent it

Reading "Sketching User Experiences: Getting the Design Right and the Right Design" by Bill Buxton, I ran across this part that I found relevant:

"in order to design a tool, we must make our best efforts to understand the larger social and physical context within which it is intended to function. Hutchins refers to such situated activities as "in the wild" in order to distinguish their real-world embodiment from some abstract laboratory manifestation that is as idealized as it is un-realistic. I call this process that expressly takes these types of considerations into account "design for the wild". (...) The only way to engineer the future tomorrow is to have lived in it yesterday

To adequately take the social and physical context into account in pursuing a design, we must experience some manifestation of it in those contexts (the wild) while still in the design cycle"

Why do I blog this? Preparing a proposal, I found this point very nicely expressed and may quote it. Buxton's phrasing neatly exemplifies an important point. Besides, I quite like the Gibsonesque "the future is already here, it's just not evenly distributed" approach: situated in time, and mostly in the past.

Letters liberated by the computer keyboard

Read in "Shampoo Planet" (Douglas Coupland):

"We discuss the future a lot, my friends and me. My best friend, Harmony, (...) thinks the future will be like rap music and computer codes, filled with Xs, Qs, and Zs: "Letters liberated by the computer keyboard."

Why do I blog this? I often find it funny how sci-fi pieces (novels, manga, comics, anime, movies) use Xs, Ys and Zs, as well as crazy numbers or even Greek letters, to talk about devices of the future. Which is why I like that quote.

[near] futures of digital entertainment

Yesterday I gave a talk in Lyon for the video game/mobile game industry about the "near futures of digital entertainment". Slides are available here (English) and ici (en français).

The talk started with a quick overview of research projects about mobile/pervasive gaming (location-based games, mobile tagging, etc.), showing how difficult these are to bring to market (hardware/software issues plus infrastructure problems...). I then tried to give some hints about what can be done, with examples that I find interesting and very down-to-earth: using the phone microphone, TV-phone tie-ins, etc. The point was to show people from the industry that they can do something almost overnight, without resorting to ultra-tech fancy GPS solutions and the like.

I concluded the talk with a mapping of the possibilities (in the form of an uncertainty cone):

The horizontal line represents the consumer market. When circles/things are close to the edges, it means they are unlikely to be around for a few years. The problem is to find what could turn them into more market-oriented products. For example, a way to bring location-based games closer to the market would be to forget about GPS and instead let people self-disclose their locations.

William Gibson's interview

Some excerpts from an interview with William Gibson that I found relevant:

"trying to get a handle on our sort of increasingly confused and confusing present. (...) when I started, one of the assumptions that I had was that science fiction is necessarily always about the day in which it was written. And that was my conviction from having read a lot of old science fiction. 19th century science fiction obviously expresses all of the concerns and the neuroses of the 19th century and science fiction from the 1940's is the 1940's. George Orwell's 1984 is really 1948, the year in which he wrote (...) There's a character in my previous novel, Pattern Recognition , who argues that we can't culturally have futures the way that we used to have futures because we don't have a present in the sense that we used to have a present. Things are moving too quickly for us to have a present to stand on from which we can say, "oh, the future, it's over there and it looks like this." (...) I would find that spookier if I had been believing all along that those sort of dystopian themes in science fiction were about some sort of vision of the future. I think they were actually like being perceived in the past when that stuff was being written. 1984 is a powerful book precisely because Orwell didn't have to make a lot of shit up. He had Nazi Germany and the Soviet Union under Stalin as models for what he was doing. He only had to dress it up a little bit, sort of pile it up in a certain way to say, "this is the future." But the reason it's powerful is that it resonates of history. It doesn't resonate back from the future, it resonates out of modern history. And the power with which it resonates is directly contingent on the sort of point-for-point mimesis, like sort of point-for-point realism, in terms of what we know happened. (...) When you're writing about a present, whether it's imaginary or not, and there's some major imaginary elements in Spook Country , the rules are different. It isn't the same. 
I have to come up with something that allows me to suspend my disbelief in my fantastic narrative and which I hope will allow the reader to suspend their disbelief. So actually, it is more work. It requires a different sort of examination of my own sense of the world outside myself."

Why do I blog this? Some good points there (especially regarding the process of writing about the present/future), although I am curious about the idea that "things are moving too quickly to have a present".

Paul Saffo's tools and hints about forecast

The last issue of Harvard Business Review features an insightful article by Paul Saffo about effective forecasting. Although the title refers to "6 rules" for effective forecasting, the article actually provides the reader with two "thinking tools" and a set of highly relevant heuristics about how to do forecasting. The premise is that forecasting is in no way about predicting the future; it is instead about identifying the full range of possibilities, so as to take meaningful action in the present. The two tools described by Saffo are the "cone of uncertainty" and "S-curves".

A cone of uncertainty is a tool meant to help decision makers exercise strategic judgment by delineating the possibilities that extend out from a particular moment or event. Defining the cone starts with setting its breadth: the measure of overall uncertainty. Saffo argues it is important to define it broadly at the start to "maximize your capacity to generate hypotheses about outcomes and eventual responses". Defining the edge is also worthwhile, since it enables you to distinguish "between the highly improbable and the wildly impossible". The next step is to fill the cone with external factors to consider: inside the cone go factors such as the possible emergence of competing technologies or consumer characteristics (preferences); at the edge go wild cards (surprising events such as war or a terrorist attack), which are what define that edge. While the neck of the cone depicts the key speculation, the wide end shows the possible outcomes.
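To make the structure concrete, here is a toy sketch of my own (not from Saffo's article; the factor names, probabilities and threshold are all made up for illustration). Plausible factors sit inside the cone, while low-probability wild cards define its edge:

```python
# Toy illustration of Saffo's cone of uncertainty (my own sketch, not Saffo's).
# Each factor gets a rough probability estimate; very unlikely events are
# treated as wild cards sitting at the edge of the cone.

def place_in_cone(factors, wildcard_threshold=0.05):
    """Split (name, probability) pairs into cone interior vs. cone edge."""
    inside = [name for name, p in factors if p >= wildcard_threshold]
    edge = [name for name, p in factors if p < wildcard_threshold]
    return inside, edge

factors = [
    ("competing technology emerges", 0.40),
    ("consumer preferences shift", 0.60),
    ("major war disrupts supply chains", 0.02),  # wild card
]

inside, edge = place_in_cone(factors)
print("inside the cone:", inside)
print("at the edge:", edge)
```

Widening the `wildcard_threshold` is the code analogue of Saffo's advice to define the cone broadly at first: more factors count as worth generating hypotheses about.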

See the example Saffo gave (about robots):

(Cone as defined by Paul Saffo / HBR)

The other thinking tool presented here is the S-curve (as exemplified by Moore's law). As described previously in this blog, S-curves (logistic curves) are meant to model the way change happens: it starts slowly and incrementally until an inflection point where it explodes, eventually reaching a plateau. Forecasting is about finding S-curved patterns before the inflection point (the left of the curve). Moreover, Saffo highlights the fractal nature of S-curves: they are composed of smaller S-curves, which means that finding one small S-curve can lead you to suspect a larger/more important one in the background. He also gives a few hints:

" the left-hand part of the S curve is much longer than most people imagine (Television took 20 years, plus time out for a war, to go from invention in the 1930s to takeoff in the early 1950s) (...) having identified the origins and shape of the left-hand side of the S curve, you are always safer betting that events will unfold slowly than concluding that a sudden shift is in the wind. (...) Once an inflection point arrives, people commonly underestimate the speed with which change will occur. (...) expect the opportunities to be very different from those the majority predicts, for even the most expected futures tend to arrive in utterly unexpected ways (...) The leading-edge line of an emerging S curve is like a string hanging down from the future, and the odd event you can’t get out of your mind could be a weak signal of a distant industry-disrupting S curve just starting to gain momentum. (...) The best way for forecasters to spot an emerging S curve is to become attuned to things that don’t fit, things people can’t classify or will even reject."

Saffo's four other rules then give a compelling list of heuristics, which I will simply quote below:

" just as we dislike uncertainty, we shy away from failures and anomalies. But if you want to look for the thing that’s going to come whistling in out of nowhere in the next years and change your business, look for interesting failures—smart ideas that seem to have gone nowhere. (...) One of the biggest mistakes a forecaster—or a decision maker—can make is to overrely on one piece of seemingly strong information because it happens to reinforce the conclusion he or she has already reached. (...) lots of interlocking weak information is vastly more trustworthy than a point or two of strong information. (...) Good forecasting is a process of strong opinions, weakly held. If you must forecast, then forecast often—and be the first one to prove yourself wrong. (...) our historical rearview mirror is an extraordinarily powerful forecasting tool. (...) The problem with history is that our love of certainty and continuity often causes us to draw the wrong conclusions. The recent past is rarely a reliable indicator of the future (...) You must look for the turns, not the straightaways, and thus you must peer far enough into the past to identify patterns. It’s been written that “history doesn’t repeat itself, but sometimes it rhymes.” The effective forecaster looks to history to find the rhymes, not the identical events. (...) [look for] deep, unchanging consumer desires and ultimately, to the sorrow of many a start-up, unchanging laws of economics. (...) Be skeptical about apparent changes, and avoid making an immediate forecast—or at least don’t take any one forecast too seriously. The incoming future will wash up plenty more indicators on your beach, sooner than you think. "

Why do I blog this? I quite like articles like this because they give a lot of hints that resonate with past readings/meetings (see here and there). Currently trying to integrate all these tools and approaches into a more personal approach, I am particularly interested in the "find what doesn't fit" and "spot the rhymes" hints. I wish he had insisted more on the outcome of such work (story, scenario, decision) and - above all - on what is difficult: how to find variables? How to find the underlying issues? The part I am most interested in concerns the "data" that could be employed to describe cones or S-curves.

Last, some less organized quotes that I liked:

"There is a tendency to overestimate the short term and to underestimate the long term" (Roy Amara)

"Whether a specific forecast actually turns out to be accurate is only part of the picture - even a broken clock is right twice a day" (Paul Saffo)

"Son, never mistake a clear view for a short distance"

"The future's already arrived, it's just not evenly distributed" (William Gibson)

"Forecasting is nothing more (nor less) than the systematic and disciplined application of common sense"

Intriguing spam

Spam received 5 minutes ago:

"You're just too ignorant to see the hundreds of explanations for why it's not that simple."

This is a nice quote; maybe the remark is even true, given the complexity of the problems to be solved.

Out from 2007 to 2020?

Email received yesterday:"I will be out of the office starting 03.05.2007 and will not return until 01.01.2020.

Please note that I am not working at XXXXX since the 3rd of May 2007. You can reach me at : xxxx@newcompany.com"

Path to the future

A timely and relevant definition of foresight (by Bill Cockayne):

"Foresight is the practice of exploring the long-term future. The goal of foresight is to help individuals and organizations to better prepare for long-term opportunities or problems. Unlike forecasting methods, foresight does not attempt to define specific events or trends in the future. Instead, foresight methods and tools support the development and exploration of a multiple possible futures, none of which is expected to exist. Foresight, like thought (Gendanken) experiments performed by physicists and philosophers, help the practitioner to design and analyze a hypothetical experiment that, while possible, is not likely to be pursued."

(The picture was taken by myself at my school; it seemed appropriate in this context)

Science fiction and predictions

"Corporate research evolved in the 1990s from the invention of science fiction to creating scientific and technological fact (...) the wonders science fiction authors predicted for around the year 2000 - such as mass transportation to the moon and glass-doomed cities on the ocean floor - are within the grasp of modern technology. However, he adds, they are not about to come to pass. The reason: "[writers] forgot the marketing dimension. Nobody is out there that is willing or capable of paying for that"

Wolf-Ekkehard Blanz, quoted by Robert Buderi in "Engines of Tomorrow: How the World's Best Companies are Using Their Research Labs to Win the Future". That is why some sci-fi authors prefer to write about things that CAN happen: problems that could occur given the current state of R&D prototypes.

Two or three buttons, but will not carry more than four

Read in the NYT (1996):

"On the eve of the Wright brothers' historic first flights in 1903, Simon Newcomb, an eminent United States scientist, predicted that the first successful flying machine would be the handiwork of a watchmaker and would carry nothing heavier than an insect. Later he increased the payload: 'It may carry two or three buttons, but will not carry more than four.' And so it goes."

More in "The Experts Speak : The Definitive Compendium of Authoritative Misinformation" (Christopher Cerf, Victor S. Navasky)

Challenging the NBIC convergence?

Converging science and technology, firm knowledge base and innovation: The case of nanotechnologies in the early 2000's, by Eric Avenel, Anne-Violaine Favier, Simon Ma, Vincent Mangematin and Carole Rieu, is a very interesting paper that examines nanotechnologies and their convergence potential, i.e. whether they "enhance hybridisation amongst technologies". I am not that interested in nanotech per se, but rather in the NBIC concept, given that nanosciences are considered an emerging technology based on the convergence of different existing scientific fields: biotechnology, information technology and cognitive sciences (see for example this NSF report).

"Based on the worldwide database of nanofirms, the paper examines the development pattern of Nano S&T. It argues that firms integrate nanoS&T by juxtaposition of new projects around the existing ones and by hybridisation of new technologies with existing technologies within the firm. Large firms mainly follow the first path while small and specialised ones (Nanodedicated firms) develop new projects through hybridising with existing knowledge base. NanoS&T appear to be less transverse and competence enhancing than specialised and competence destroying as new competencies replace the existing ones within firm knowledge. The firm knowledge base is not the locus of the convergence as expected."

Why do I blog this? The question of NBIC convergence is trendy lately, and this article challenges the convergence between all these fields. This is of interest to me because my background is in IT and cogsci.

HCI in science-fiction

(via Mr. Hand) This compelling paper about Human Computer Interaction in Science Fiction Movies by Michael Schmitz has made my day. It essentially surveys different kinds of interaction designs in sci-fi movies ("neuro technology", identification, speech recognition, intelligent assistants/avatars, displays and other I/O technologies) and shows how they relate to existing technologies. Go read the paper to look at the examples (with pictures). What I found interesting boils down to the implications, basically the "key factors that determine or influence the design of HCI in movies":

"The probably most important aspect is the availability of special effects technologies - including the budget of a production to use those. (...) Current trends in IT research and products have of course as well an impact on the movie, since this will probably be the director’s background where his ideas will evolve from (...) the importance of the interaction technique or the device itself for the movie as a whole. The technology could be totally unimportant or play an important role for the plot (so called “plot device”), but most of the times technology is found inbetween and has to support the overall authenticity of the vision of a future world. (...) only more recent movies show attempts to design their HCI more carefully. (...) Others try to adapt technologies that were already available and improve them, but concepts of HCI research are normally not addressed. (...) The main reason might be that HCI is still a relatively young research area and slowly becoming more popular during the past decade. Another reason could also be that human centred, pervasive or ubiquitous computing could look very inconspicuous, whereas high-tech in movies should preferably appear more spectacular. "

(Picture from "Johnny Mnemonic")

Why do I blog this? Because the intertwining relationships between HCI/ubicomp and sci-fi are of tremendous interest. The normative proximal future seems to be "a tendency towards conversational speech as an interface and 3 dimensional displays that work without head-mounted device". In the end, this might account for the audience's fascination with the idea that the future really lies in this sort of stuff.

Also fascinating is the bolded quote in the blockquote above: the importance of spectacular interfaces in movies. How does this translate to design? Is it really an important criterion? (Think about the discreet SMS versus the spectacular AIBO... maybe I am a party pooper.)

Influences of forecasts

"While think tanks play many roles, an example that brings home their importance now and in the future is the increasing interest in long-range forecasting and thinking about the future. (...) What we must realize now is that as institutions assume the formal role of casting about in the future, they dramatically increase their influence on that future. Simply put, if a think tanks tells its sponsors and others willing to listen that X, Y and Z will occur by the year 2000, then X, Y, and Z are more likely to occur as policy and technological goals adapt to those predictions"

Paul Dickson, Think Tanks, Ballantine Books, 1972.

Future of books according to the E

An article from last week's edition of The Economist deals with the future of reading and books. Some excerpts I found pertinent:

"So who is going to read the millions of pages that Google and its colleagues are so busy digitising? Some people will read them on-screen, some will use Google as a taster for books they will then buy in paper form or borrow from a library, and still more will use it to look for specific snippets that interest them. (...) So books that people would not traditionally read in their entirety, or that require frequent updating, are likely to migrate online and perhaps to cease being books at all. Telephone directories and dictionaries, and probably cookbooks and textbooks, will all fall into this category.

With non-fiction the situation is more nuanced. (...) What about all the genres of books that fill a different human need? Certainly, some types of fiction—novels as well as novellas—are also likely to migrate online and to cease being books. Many fantasy fans, for example, have already put aside books and logged on to “virtual worlds” such as “World of Warcraft” (...) Most stories, however, will never find a better medium than the paper-bound novel. That is because readers immersed in a storyline want above all not to be interrupted, and all online media teem with distractions (even a hyperlink is an interruption)"

Why do I blog this? Some intriguing ideas here; the underlying variable the journalists use to express what can be technologically mediated and what cannot is relevant. I also find it funny that people may think MMORPGs can serve as an alternative to the immersion of novels.

Scifi writers and foresight

A good read in Information Week about science fiction and technology. It's essentially about John de Lancie ('Q' in Star Trek: The Next Generation, Star Trek: Deep Space Nine, and Star Trek: Voyager), who gave the keynote address at the InfoSec World Conference in Orlando, talking about how "today's technology, whether it's cell phones or Second Life, is feeding off the fictional technology dreamed up by science fiction writers years ago".

"Science fiction is a place where people can talk about things," said de Lancie (...) "I began reading more books and they all seemed to be the same sort of guys," he said. "They knew things and they knew how to use things and they made things better for themselves." That, he added, sounds an awful lot like high-tech professionals. (...) What's amazing is they are the ones who can put this all together. That's science fiction becoming science fact. It invites people to think outside the box and be bold and fearless and be explorers and get to the other side. What's exciting is the desire to explore."

Besides, the conclusion is intriguing too:

He was quick to add that not everything about Star Trek's fictional advances was a real plus. "I have to say, though, that I never saw them have a really good meal," he said, laughing. "And I hated the colors. It all looked like a Holiday Inn. It looked like everyone was living in a hotel somewhere eating bad hotel food. There are a lot of things that are really wonderful the way we have them and that don't need to be changed."

Playground (Picture taken by myself, in Geneva)

Why do I blog this? The relationship between sci-fi writing and innovation, NPD, tech development and the diffusion of innovation has always been of interest to me. As a matter of fact, I really prefer reading sci-fi to so-called "futurists".

Why? For several reasons: (a) narratives are a good way to give a flavor of the future, of things to come; (b) sci-fi folks write about problems: why things work, do not work, lead to crises, create social issues (or about social issues that create innovation); (c) they put things in context, and when talking about design and NPD, context is one of the crux issues that is too often not taken into account; (d) they have their own rules. To some extent, reading sci-fi is somewhat like opposing "critical foresight" to "futurism".

The picture above shows a playground in Geneva; it's only meant to suggest how sci-fi can be a creative playground in foresight, bringing together a particular kind of "data".

How I use s-curves

A definition of technology s-curves drawn from Clayton Christensen (in this paper):

"The technology S-curve has become a centerpiece in thinking about technology strategy. It represents an inductively derived theory of the potential for technological improvement, which suggests that the magnitude of improvement in the performance of a product or process occurring in a given period of time or resulting from a given amount of engineering effort differs as technologies become more mature. (...) It states that in a technology’s early stages, the rate of progress in performance is relatively slow. As the technology becomes better understood, controlled, and diffused, the rate of technological improvement increases . But the theory posits that in its mature stages, the technology will asymptotically approach a natural or physical limit, which requires that ever greater periods of time or inputs of engineering effort be expended to achieve increments of performance improvement. "

Why do I blog this? Given that I use this tool more and more often in talks, workshops and work, it's good to get back to the literature and understand it more thoroughly. In recent work I have mostly used it to describe the evolution of certain technologies, such as location-aware systems, 3D virtual worlds or mobile gaming. Generally, the point is to describe a succession of waves starting from an idea, as shown in the picture below. For instance, with the "location-awareness" idea, the first wave of mature products was navigation systems (quite often found in cars, with Garmin and TomTom devices); a second wave concerns place-based annotation systems or people finders (where nothing is really mature in the same sense as the first wave). Besides, I am well aware of the limits of such curves, but they offer a relevant way to discuss the diffusion of innovation.
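The "succession of waves" reading can be sketched as a sum of logistic curves, one per wave. This is a toy model of my own: the midpoints, rates and ceilings below are invented purely to illustrate a mature first wave (navigation systems) followed by an immature second one (place-based annotations / people finders):

```python
import math

def logistic(t, ceiling, midpoint, rate):
    """Standard logistic S-curve."""
    return ceiling / (1.0 + math.exp(-rate * (t - midpoint)))

def adoption(t):
    """Two successive waves of one idea. All parameters are made up:
    wave 1 peaks early (e.g. car navigation), wave 2 much later
    (e.g. place-based annotations, not yet mature)."""
    wave1 = logistic(t, ceiling=1.0, midpoint=5.0, rate=1.0)
    wave2 = logistic(t, ceiling=1.0, midpoint=15.0, rate=1.0)
    return wave1 + wave2

# At t=10 the first wave has plateaued while the second has barely started;
# by t=20 both waves have matured:
print(f"t=10: {adoption(10):.2f}")
print(f"t=20: {adoption(20):.2f}")
```

The useful property for discussion is the flat stretch between the waves: a technology can look "done" precisely when the next wave is still on the long left-hand side of its own S-curve.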

Mistakes in foresight

Reading "Manuel de prospective stratégique, tome 1 : Une indiscipline intellectuelle" (Michel Godet), there was an interesting chapter about the most frequent error when doing foresight. General causes are: 1) Forgetting change (over-estimation) and inertia (under-estimation). 2) "Announcement effect": some predictions only aim at influence the evolution of the phenomenon and then contribute to its realization 3) Too much information (noise), few strategic information 4) Inaccuracy of data and instability of models (one should always ask whether a small modification in input data will change the output) 5) Error of intrepretation 6) Epistemological obstacles (looking at the tip of the iceberg / or where the light is)

Specific causes:
1) Incomplete vision (leaving out other variables, disruptions, new trends...).
2) Excluding qualitative variables (those that cannot be quantified).
3) Assuming variables have static relationships.
4) Explaining everything by looking at the past.
5) Assuming a single future.
6) Excessive use of mathematical models (mathematical charlatanry).
7) Conformism to gurus.

Foresight at Design2.0

To complete my notes on the LIFT07 workshop about foresight, there is a very dense and insightful podcast of Bill Cockayne's talk at Design2.0 (mp3, 15.41 Mb). In this talk, Bill explains that one of the challenges for tech companies and for students in engineering schools is to get people to understand the bigger context, complexity and big systems. Bill started as a technologist and migrated into a technology-foresight/strategy role. His point is to ask questions such as "where is it going?" and "why is it going there?". This is not a matter of being a futurist, not about predicting anything, but rather about working on "how do you think about this coming technology?" and "how do you think about this coming social change?". Does technology drive social change? Sometimes it does, sometimes it doesn't, but the question is: how do you know when? It's not predictions; it's something that comes out of knowing where information comes from.

Beyond the tools for designing for today's future (ethnography, brainstorming, prototypes) and those for going a little further out (scenario planning), the point is to go much further: how do you critically assess assumptions and build models? Oddly enough this stuff is simple, using the 3 tools he describes. My raw notes below:

3 tools: point-of-view questions, X-Y graphs, and scenarios (getting out there by telling stories, looking for triggers of change, thinking about how to get there once we know what we want to be; being normative (designing a better future) or defensive (designing for a future that is coming but that we don't like), and how to prepare for that kind of thing).

1) Simple rule: You won't get there from here

Let's say you design a toothbrush: you observe current users, so you are designing today for a year from now. But as you get out past 7 years, that does not work anymore: who are you going to observe? This is all a point of view (POV): get out there with...

2) The X-Y graph: a structured brainstorming tool. Issues: A versus B. Discuss with your team: what would be the 2 most salient issues, one on X and one on Y? You then have 4 endpoints. What would be the salient issues affecting the question we're asking in 20 years? Is it going to be perception? A social issue? No tech change? Older people (with experience) are better at this because they've seen change (they know what 5 years feels like). After a whole day, you may have 10 good X-Y graphs. Good = something you learn over time and something you feel intuitively. Tell stories while you're doing it: catchphrases, funny stories...

What you want to look for is white spots = possibilities where you can make a difference: either no one is going there because it is difficult, or it is an opportunity. How might we put something there? A toy, a computer, a social change...
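A rough sketch of the white-spot idea in code (my own toy illustration, not Bill's): score each known idea on the two salient issues, bucket them into the four quadrants, and flag the quadrants nobody occupies:

```python
# Toy sketch of the X-Y brainstorming grid (my own illustration, not Bill's).
# Each idea gets a position on the two salient issues, scored from -1 to 1;
# quadrants with no ideas in them are the "white spots".

def find_white_spots(ideas):
    """ideas: dict of name -> (x, y) scores in [-1, 1]. Returns the
    empty quadrants, encoded as sign pairs like '+-' (x >= 0, y < 0)."""
    quadrants = {"++", "+-", "-+", "--"}
    occupied = {
        ("+" if x >= 0 else "-") + ("+" if y >= 0 else "-")
        for x, y in ideas.values()
    }
    return sorted(quadrants - occupied)

# Hypothetical mobile-game ideas placed on two made-up axes:
ideas = {
    "GPS treasure hunt": (0.8, 0.6),
    "SMS guessing game": (-0.4, 0.7),
    "camera-phone tagging": (0.5, -0.3),
}
print("white spots:", find_white_spots(ideas))
```

Real X-Y sessions are of course about discussion rather than scoring, but the mechanical version makes the point: a white spot is simply a region of the graph where the team has placed nothing.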

3) Then you start building scenarios, like design but way far out, 20 years ahead. What if you have 3 to 5 stories? What would the world be like out there? The most important thing about these storylines: no changes, lots of changes, one big sweeping change...

Tell a story in 5 minutes and then spend the rest of the afternoon going backwards: tell me what had to happen all along the way, tell me when it had to happen, give me a timeframe, the trigger, the driver. When something has to happen is very critical: as you begin to go backward, you realize what has to happen (before a product can occur, another technology is needed, so another guy has to invent that tech).

As you begin to go further out in time, you have a much harder time saying how close your change drivers are going to be. There, assume that all the calls you make are too pessimistic and too far out. In the near term, assume that everything you say is too slow.

Long-term changes tend to have a trigger that is not necessarily at the center of where the change is occurring. When economics are changing, it's not because a person stood up and said "Wall Street is going in that direction"; it's more that you watch the housing data, you watch how many kids are being born, breastfed... and then you ask where another change is coming from further off, and what its impacts are going to be.

The questions were quite interesting. One person asked what the biggest mistake made by companies is. Bill argues that most big companies have forgotten that research exists for two reasons: one is to invent new things and spend a lot of money obtaining patents; the other is to have a bunch of guys who sit around doing the kind of things he presented in the afternoon, drinking their coffee. Another issue is that none of us read enough, and none of us talk to smart people enough.

"Read methodologies and then read WSJ, E, NYT, CSM... daily because you needs to start getting a feel of where data comes from. You may be watching very closely where your products are going to be but something is changing in an area you never even thought but that could infect it, that could be an opportunity. READ MORE

These publications have the broadest range of readers and they have op-eds. Get a broad view of business, social, economic and random technology stuff. Take the biggest daily newspapers that don't focus on news; more like The Economist, look for the analysis, the context, why this happened, why A did X... Over time you build up an ability, looking for different views; it's not a bias you're looking for, but a different viewpoint."

Why do I blog this? Great food for thought: methods and ideas about how to structure what I am doing into something more formalized. Besides, the question of "data" in foresight, addressed in the talk, is of great interest to me.

Building a discourse about design and foresight

Currently completing my PhD program (thesis defense is next week), I have had the occasion to look back and think about what interests me. My original background is cognitive science (with a strong emphasis on psychology, psycholinguistics and what the French call ergonomie), and the PhD will be in computer science/human-computer interaction. In most of my work, I have been confronted with multidisciplinarity/interdisciplinarity (even in my undergraduate studies).

It took me a while to understand that my interest lay less in pure cognitive science research (for example the investigation of processes such as intersubjectivity and its relation to technologies) than in the potential effects of technologies on human behavior and cognitive processes. In a sense this is a more applied goal, and it led me to take diverse theories and methods into account. Of course, this is challenging, since mixing oil and water is often troublesome in academia. Given that my research object is embedded in space (technology goes out of the box with ubicomp) and social (technology is deployed in multi-user applications), there was indeed a need to expand beyond pure cogsci methods and include methods and theories from other disciplines. The most important issues for my work in that regard were the never-ending qualitative-versus-quantitative methods confrontation (I stand in between, using a combination of both depending on the purpose) AND the situated-versus-mentalist approach (to put it shortly: is cognition about the mind's representations, or is it situated in context?). So this was a kind of struggle in my PhD research.

However, things do not end here. Working in parallel with my PhD as a consultant/user experience researcher for some companies (IT, video games), I had to keep up with demands/expectations that are often much more applied... and bound to how this research would affect NPD/design or foresight (the sort of projects I work on). Hence, there was a need to have a discourse about these two issues: design and foresight. Although I was interested in both, it was not that easy to understand how research results/methods can be turned into material for designers or foresight scenarios. So, three years of talking with designers and developers, and of organizing design/foresight workshops and conferences, helped a bit, but I am still not clear about it (I mean I don't even know how to draw something on paper).

Recently, I tried to clear up my mind about this, and the crux of the issue is the constant shifting between research and design (or foresight; sorry for putting both in the same bag here, but it applies to both). The balance between research, which can be reductionist (a very focused problem studied, limits to generalizing, time-consuming), and design, which needs a global perspective, is fundamental. The other day, I had a fruitful discussion with a friend working on consumer insight projects for a big company. Since this friend also comes from a cognitive science background, I was interested in his thoughts on how he shifted from psychology to the management of innovation/design of near-future products/strategy.

I asked him about "turning points" or moments that changed his perspective. He mentioned two highlights. The first one was the paradigm shift in cognitive science in the late 80s, when the notion of distributed cognition (Dcog) appeared. Dcog basically posited that cognition is a systemic phenomenon that concerns individuals, objects and the environment, and not only the individual's brain with its mental representations. To him, this is an important shift because once we accept the idea that cognition/problem solving/decision making is not an individual process, it's easier to bring social, cultural and organizational issues to the table.

The second highlight he described to me came from when he used to work for a user experience company that conducted international studies: he figured out that the added value lay not only in those studies but also in the cumulative knowledge they could draw out of them: the trends that emerged, the intrinsic motivations people had for using certain technologies, the moments innovations appeared. This helped him change the way he apprehended the evolution of innovations and made him question the assumption that they follow long S-curves.

material to design the future

Why do I blog this? Random thoughts on a rainy Sunday afternoon about what I am doing. This is not very structured, but I am still trying to organize my thoughts about UX/design/foresight and how I handle that. I guess this is a complex problem that can be addressed by talking with people working on design/foresight/innovation. What impresses me is observing how an individual's history helps to understand how the elements they encounter shape their perspective.

The picture simply exemplifies the idea that conducting design/foresight projects requires a constant change of focus between micro and macro perspectives. This reflects the sort of concern I am interested in: taking into account both very focused perspectives (user interface, user experience, cognitive processes) and broader issues (socio-cultural elements, organizational constraints...).