VideoGames

Two examples of objects/MMO cross-over

First, a report on this interesting cross-over: LEGO Mindstorms (the line of robotics components that lets you build interactive objects) will have a new line called "NXT" that is going to be previewed in Second Life. Also, the insightful Amy-Jo Kim pointed to this article: Habbo China to Match Real and Virtual Purchases:

Habbo Hotel in China, developed by Sulake and apparently operated by Netease, is now allowing online purchases of virtual items that are paired with real-world sales. Flowers, clothes, and movie tickets can be purchased online through Habbochina and the matching real items will be delivered to the purchaser the next day.

Why do I blog this? in terms of video game foresight, this is interesting because it shows (1) that video games are much more than games (a social platform, a way to showcase new things...) and (2) how old media and new media are more and more intertwined, building a new ecosystem of playful objects (virtual or not).

Playdocam: games with your webcam

Via internet actu, the Playdocam seems to be an interesting device:

PlaydoCAM™ transforms your ordinary web camera into a motion-tracking gaming device and places you at the centre of a unique online gaming experience. Many PlaydoCAM™ games are under development for both single and multiplayer action.

For the widest audience possible PlaydoCAM™ is based on standard Flash and Shockwave technology and can be played directly in your web browser without the need of extra plug-ins or installations. playdoCAM™ is available for custom made entertainment and can also be used offline for a high quality fullscreen experience suitable for exhibitions, display window advertising and more.

Eyekanoid and Playdojam are pertinent examples of new game interactions (à la EyeToy).
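
To make the "webcam as motion-tracking gaming device" idea more concrete, here is a minimal sketch of the frame-differencing approach such games typically rely on. This is an illustration only, not Playdo's actual implementation; it assumes OpenCV and a default webcam.

```python
# Minimal sketch of webcam motion tracking, in the spirit of EyeToy/PlaydoCAM-style
# games. Illustration only, not Playdo's actual implementation.
import cv2

cap = cv2.VideoCapture(0)                  # default webcam
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Differencing consecutive frames highlights the moving regions
    diff = cv2.absdiff(gray, prev_gray)
    _, motion = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    # A game would test here whether the motion overlaps an on-screen target
    cv2.imshow("motion", motion)
    prev_gray = gray
    if cv2.waitKey(30) & 0xFF == 27:       # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```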

Why do I blog this? I find it interesting that innovation in the video game industry is now about more than just the games themselves. Using web-based/web-like applications (Shockwave/Flash) is easier and cheaper than buying a development kit, and distributing on the web is far cheaper than going through a publisher... And the focus on tangible interaction is more and more present.

Ink prints of SL

Recursive Instruments has superb pictures of what they call "Ink printing the primitive Metaverse" (prints from their current show at the Aho Museum in Second Life):

Entitled Recusions, the works depict situations you will find every day in Second Life, from play to punishment. The trappings of technology are both removed and exploited to examine the evolution of media’s effect on the evolution of self. Snapshots from Second Life are digitally altered to allow a Computerized Numerical Control (CNC) mill to sculpt their contours. The result is a woodblock used for traditional ink printing

Collectic: collect access points and combine them in a puzzle

Thanks Cyril for pointing me to CollecTic: it was developed by Jonas Hielscher as part of a graduation project for the Masters program Media Technology at Leiden University in 2006. I met Jonas in Utrecht a few months ago (are you in Basel now? still into game stuff, as I see) and I am always intrigued by what this guy is doing.

The game is developed for the Sony PSP and uses the standard features of the console, especially scanning for wireless access points to the Internet.

CollecTic can be played anywhere where WLAN access points can be found by a PSP. The objective of the game is to search for different access points, to collect them and to combine them in a puzzle in order to get points. In the game, the player has to move around in her/his local surroundings, using her/his PSP as a sensor device in order to find access points. By doing this, the player is able to discover the hidden infrastructure of wireless network coverage through auditive and visual feedback. The game is designed as a single-player game, but it can easily be played competitively, one after the other or at the same time with two PSPs.

A video here.
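
For illustration, here is a toy sketch of the game loop as I understand it from the description above: collect unique access points, then combine them for points. The SSID list and the scoring rule are invented; the real game uses the PSP's own WLAN scanning.

```python
# Toy sketch of CollecTic's core loop as described above: collect unique access
# points, then combine them in a puzzle for points. Hypothetical illustration;
# the real game relies on the PSP's own WLAN scan.
import random

def scan_access_points():
    """Stand-in for a platform WLAN scan; returns SSIDs currently 'visible'."""
    nearby = ["cafe-wifi", "linksys", "eduroam", "home-42", "freebox"]
    return random.sample(nearby, k=random.randint(0, 3))

collected = set()
score = 0

for step in range(10):                      # the player walks around
    for ssid in scan_access_points():
        if ssid not in collected:
            collected.add(ssid)
            print(f"new access point found: {ssid}")

# "Combining" collected points: here, simply pairs of distinct APs score
pairs = len(collected) // 2
score += pairs * 10
print(f"collected {len(collected)} access points -> {score} points")
```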

Why do I blog this? I like this idea of a game played with regular console features enhanced by some software components. Besides, the game concept is quite simple and fun, and discovering the network infrastructure that way seems to be a cool experience. I am looking forward to testing this!

Mario Soup by Ben Fry

Mario Soup is an information visualization project by Ben Fry based on "the unpacking of a Nintendo game cartridge, decoding the program as a four-color image, revealing a beautiful soup of the thousands of individual elements that make up the game screen".

Any piece of executable code is also commingled with data, ranging from simple sentences of text for error messages to entire sets of graphics for the application. In older cartridge-based console games, the images for each of the small on-screen images (the "sprites") were often stored as raw data embedded after the actual program's instructions. (...) The images are a long series of 8x8 pixel "tiles". Looking at the cartridge memory directly (with a black pixel for an "on" bit, and a white pixel for an "off") reveals the sequence of black and white (one bit) 8x8 images. Each pair of images is mixed together to produce a two bit (four-color) image. The blue represents the first sequence of image data, the red layer is the second set of data that is read, and seeing them together produces the proper mixed-color image depicting the actual image data
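
The decoding idea is close to the NES's two-bits-per-pixel tile format and can be sketched in a few lines. This is an illustration of the technique described above, not Ben Fry's actual code, and the ROM filename is hypothetical.

```python
# Rough sketch of the decoding idea behind Mario Soup: read cartridge bytes as
# 8x8 one-bit tiles, then combine pairs of tiles into 2-bit (four-colour) tiles.
# Illustration only, not Ben Fry's code.

def one_bit_tile(data, offset):
    """Decode 8 bytes starting at offset into an 8x8 grid of 0/1 pixels."""
    tile = []
    for row in range(8):
        byte = data[offset + row]
        tile.append([(byte >> (7 - col)) & 1 for col in range(8)])
    return tile

def four_color_tile(data, offset):
    """Combine two consecutive one-bit tiles into one four-colour tile."""
    plane0 = one_bit_tile(data, offset)       # the "blue" layer in Fry's image
    plane1 = one_bit_tile(data, offset + 8)   # the "red" layer
    return [[plane0[y][x] + 2 * plane1[y][x] for x in range(8)] for y in range(8)]

if __name__ == "__main__":
    with open("cartridge.bin", "rb") as f:    # hypothetical ROM dump
        rom = f.read()
    tiles = [four_color_tile(rom, o) for o in range(0, len(rom) - 15, 16)]
    print(f"decoded {len(tiles)} four-colour tiles")
```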

Why do I blog this? I like this idea of "soup" and intertwined individual elements that eventually constitute a game screen: destructuring the game display.

Agent V on Nokia 3230

One of the most intriguing games I have played lately is "Agent V" on the Nokia 3230 cell phone. This is an augmented reality game that puts the player in the middle of the action: using the camera viewer and the motion sensor in the phone, you move the phone to catch or shoot at virtual artifacts which appear on the screen, overlaid on the environment. As seen on this picture.

Though there are also other games like that (mosquito or CamBlaster for example), this one is pretty impressive. I found myself playing in the train, making crazy movements to catch energy cells here and there, disturbing the other passengers. Quite addictive and fun, but in the long run it's not that playable. Why do I blog this? because these augmented reality games are really simple (at least you don't have to wear huge eyeglasses) and enable a curious gaming experience that involves both virtual elements and the physical world.

Excerpts of Toshio Iwai's interview

Pixelsurgeon features a nice interview with Toshio Iwai. A Japanese media artist who builds electronic/physical instruments (and designs games such as Electroplankton), Iwai gives some hints about his activity: the importance of tangibility, the need for visual feedthrough, and the need to design for play and for everyone:

In projects like Tenori-On, how important is the physical interface - the thing you touch and hold? How does it affect the act of making music?

Any instruments are characterised by their physical interface, such as the key of a piano or the bow of a violin. And these physical interfaces give important direction to the way they are played and the sound itself. However, as long as electric instruments are concerned, this aspect is not emphasised very much. In the Tenori-On project, we started from thinking what is the reasonable interface for an electric instrument or digital instrument. (...) For the digital instrument, interface, exterior design, software, sound and so on are independent each other. I am examining the way all of them naturally unite, just like in the violin. (...) The design of the visual interface is very important. The flow of time is not visible and very difficult to handle, but by expressing it visually it can be understood and handled by everybody. Moreover, music can give different impressions when it is expressed visually. (...) Since it became possible to make sound electrically or electronically, the synthesizing of sound has been separated from the visual world. However with the senses we are borne with, we think it is more natural to experience sound and vision at the same time. (...) As everybody wants to touch instruments or toys which he or she hasn’t seen before, when I design something, I am trying to create it so that it is very attractive at first sight. And when players touch it, it can be instinctively understood and they can be pulled into it very strongly and start trying to create their own designs in many different ways.

Why do I blog this? because of my current research about tangible interfaces I am interested in Iwai's work, which I find great. Electroplankton is fantastic (easy to handle, and I discover new features every time I play). What he describes is very intriguing: how to create new musical instruments (new objects, then) with simple affordances, linking sound and visual patterns to engage people in playful activities.

See also his blog about Tenori-On, "a brand new musical instrument / musical interface for the 21st century which I have been developing under the collaboration with YAMAHA Corp."

Animal Controlled Computer Games

I stumbled upon an interesting project called "Animal Controlled Computer Games" on pixelsix:

Animal Controlled Computer Games is the graduation project of Wim van Eck (...) In his project he built a Pacman game in which the player can play Pacman against real crickets that control the ghosts in the Pacman maze. By doing this he analyzes the advantages and disadvantages of the real-time behaviour of live animals in comparison to behavior-generating code in computer games.

A paper about it will be presented at ICEC 2006.

Alice Taylor on game foresight

Alice Taylor's talk at Aula is very relevant to my research and foresight activities related to video games. She pointed out a few "trends" or pertinent directions with regard to the socio-cultural practices of video games in her slides:

/broadcasting into gameworlds, which she calls "pipe in"

In-game broadcasting: something we see coming as soon as Microsoft build streaming into the Xbox 360; already possible with PC games, PS3 presumably will accept streaming. After that it’s just host server, streams & bandwidth costs. Basic broadcasting, in other words.

In-world realism: 1Xtra (UK radio station for urban music) & Xbox 360: Atari demo of ingame broadcasting. Ignore the graphics: the video demonstrates the driver switching on the in-car radio and 1Xtra live broadcasting into the game. Listen to the radio while you race your buddies….

Extending reach: Radio 1’s Big Weekend: live in Dundee, Scotland but global in Second Life: Live streaming audio and video of the festival’s events; special items for the global audience in the virtual field, including mini radios and radio1 teeshirts. New bands and unsigned talent can perform here: more experiments coming.

/gameworlds as narrative environments, which she refers to as "3D drama": The BBC has a long history of producing rich dramas; the future will involve gaming technology, as it becomes par for the course for certain types of audiences who expect and demand a gameworld to complement the linear narrative (or vice versa).

  • More brand extension, lots more
  • End-games / end-dates?
  • Episodic gaming: in sync with TV?
  • Gestural, physical: Wii, etc.
  • Lots more MMOGs (E3 announced 30)

End dates: games/shows/hybrids with a set delivery period. No more MMOGs that just fade out when everyone’s bored, but go out with a bang on a high note. Essential drama!

And her final word was about "the biggest thing on my radar today = Google Earth Goes Gaming" (which she explains further in this blogpost).

Why do I blog this? I find the trends she discussed interesting (I was not there, I just saw the slides and inferred from what I watch too). Even though I am quite familiar with the trends related to new forms of interaction (tangible interfaces, Google Earth-like game platforms), I find it great to have a different perspective connected with the broadcast reality: the ideas of end dates/games and in-game broadcasting are very intriguing avenues.

Toewie: Puppet game controller

Toewie by Jelle Husson (a postgraduate in eMedia in Belgium):

Toewie is a 3D game for pre-school children. Most 3D games are navigated by means of the arrow keys for movement and the mouse for looking/direction. Because this is quite complicated, especially for very young children, Toewie will be controlled differently. The idea is to build a real-life puppet and put some movement sensors in it. When the child interacts with the puppet, the 3D character on screen will perform a similar movement.
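
A rough sketch of the mapping this implies, with invented sensor names and thresholds (the actual sensors and engine used by Toewie are not described in detail, so this is hypothetical):

```python
# Hypothetical sketch of the Toewie idea: movement sensors inside a puppet are
# polled and mapped onto the on-screen character's actions. Sensor names and
# thresholds are invented for illustration.

def read_sensors():
    """Stand-in for reading the puppet's motion sensors (tilt/shake values)."""
    return {"tilt_x": 0.1, "tilt_y": -0.6, "shake": 0.8}

def puppet_to_character(sensors):
    """Translate raw puppet movement into simple character actions."""
    if sensors["shake"] > 0.5:
        return "jump"
    if sensors["tilt_y"] < -0.3:
        return "lean_forward"
    if abs(sensors["tilt_x"]) > 0.3:
        return "turn_left" if sensors["tilt_x"] < 0 else "turn_right"
    return "idle"

print(puppet_to_character(read_sensors()))   # -> "jump"
```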

Why do I blog this? I have lately been following how tangible interfaces can be used as innovative game controllers; this is a relevant example.

Experientia report about the new ecology of play

One of my favorite design/foresight/scouting companies, Experientia, recently produced an insightful report about the latest trends in electronic toys and games. It's called "Play Today" (pdf, 4.7 mb, 71 pages) and is definitely a must-read for people like me in game research and the game industry. It's written by Myriel Milicevic with editors Jan-Christoph Zoels and Mark Vanderbeeken (both Experientia partners).

They present examples of board games, controller toys, electronic friends, educative missions and DIY worlds, location-based games, game activism and romantic encounters.

What is important to me is the underlying rhetoric behind it: (1) due to recent and expected technological advances, the boundaries between the game and toy industries are going to fade, so joint projects and complementarities will become possible; (2) the game paradigm per se is more than the individual/system interaction and can be used for different purposes (learning, encounters, urban discoveries...).

Would this be enough to address the slumping sales problem?

Why do I blog this? What I really like in this report, and it's an approach I always mention when I give seminars about game/toy trends, is the convergence between different industries/domains: game companies (publishers and development studios) and toy companies. That is why I like the fact that the report addresses this issue with no boundary between video games, game controllers, electronic toys and so forth. As they say, it's about "mixing media, mixing worlds".

This is also interesting from the cultural anthropology viewpoint and it makes me think about the work of Mizuko Ito: see for instance her paper about kids' participation in new media: a tremendously lively ecology of "media culture" is nascent, based on media convergence (video games and trading cards in her case), personalization and remix, as well as the hypersociality of exchange. This Experientia report is really about this new ecology of play, which has less distinct boundaries than previously thought.


User-generated content matters in the video game industry

(via wonderland) Edery has a good list about why user-generated content matters in the video game industry:

  • It can extend the life and drive the sales of existing games. (Example 1, Example 2)
  • It can lead to entirely new games, not just new styles of play. (Example 1)
  • It can drive virtual businesses (and whole economies) which generate profit for developer and customer alike. (Example 1, Example 2)
  • It can reduce the cost of populating games with content, and make those games more dynamic and interesting. (Example 1, Example 2, Example 3)
  • It can take advantage of non-game-based UGC, such as Flickr and YouTube content, using it (in some cases) to blur the lines between the real and virtual world. (Example 1, Example 2)
  • It can help game developers identify potential employees. (Example 1)
  • Machinima! (Example 1, Example 2, Example 3)
  • It can lead to novel uses for game engines, other than machinima. (Example 1, Example 2)
  • It could help reduce piracy. (Example 1)
  • It could inform businesses and lead to the development of real-world products. (Example 1)
  • It can become viral marketing for real-world businesses. (Example 1, Example 2)

Why do I blog this? because it's a good list (even though I don't always agree on every issue). I am interested in this because of some foresight work in video-games.

Information Visualization of 3D virtual worlds

Just read this paper (via Infosthetics): Katy Börner and Shashikant Penumarthy, "Social diffusion patterns in three-dimensional virtual worlds", Information Visualization 2(3): 182-198 (2003).

The paper is about a visualization tool set that can be used to visualize the evolution of three-dimensional (3D) virtual environments, the distribution of their virtual inhabitants over time and space, the formation and diffusion of groups, the influence of group leaders, and the environmental and social influences on chat and diffusion patterns for small (1 – 100 participants) but also rather large user groups (more than 100 participants).

Resulting visualizations can and have been used to ease social navigation in 3D virtual worlds, help evaluate and optimize the design of virtual worlds, and provide a means to study the communities evolving in virtual worlds. The visualizations are particularly valuable for analyzing events that are spread out in time and/or space or events that involve a very large number of participants.
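
As a rough illustration of the kind of spatio-temporal visualization involved, here is a minimal sketch that plots avatar movement traces from a position log. The log format (user, time, x, y) and the values are assumed for illustration, not the authors' actual data format.

```python
# Minimal sketch of a spatio-temporal visualisation of avatar diffusion:
# one movement trail per avatar, drawn from a hypothetical position log.
import matplotlib.pyplot as plt
from collections import defaultdict

log = [                       # hypothetical (user, t, x, y) records
    ("ann", 0, 1.0, 1.0), ("ann", 1, 2.0, 1.5), ("ann", 2, 3.5, 2.0),
    ("bob", 0, 5.0, 5.0), ("bob", 1, 4.0, 4.5), ("bob", 2, 3.6, 2.2),
]

tracks = defaultdict(list)
for user, t, x, y in sorted(log, key=lambda r: r[1]):
    tracks[user].append((x, y))

for user, points in tracks.items():
    xs, ys = zip(*points)
    plt.plot(xs, ys, marker="o", label=user)   # one trail per avatar

plt.legend()
plt.title("Avatar movement traces")
plt.show()
```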

This sort of work is very important for studying users/players' behavior in virtual space:

extending the work on the spatial-temporal diffusion of groups presented in the section on spatio-temporal diffusion of social groups, user studies will be conducted to examine the influence of spatial, semantic, and social factors on dynamic group behavior. While spatial maps can be used to depict the influence of a world layout and positions of other users on the diffusion of users, they may also help to visualize the influence of spatially referenced semantic information, information access points, on the pathways users take.

The visualizations presented in the paper have three main user groups: users in need of social navigation support, designers concerned with evaluating and optimizing their worlds, and researchers. Even though this is more related to space/time visualization, it reminds me of the work of Nicolas Ducheneaut that I saw when I visited him, in the sense that he and his colleagues are interested in "social dashboards" (information visualization tools for community managers of MMORPGs). They are more concerned with social aspects (for people who are interested in what they do, the patent is not publicly disclosed yet).

Why do I blog this? clearly this is connected to my research about studying people's behavior (how they communicate, move, act together) in certain technological settings (virtual worlds but also pervasive computing). For that matter, and as in my masters and PhD projects, the use of such visualizations (and replay features) is interesting. I also use this sort of visualization (mostly to help me analyze how people behave in space), for instance in CatchBob:

In addition, since I work with game designers, trying to show them new tools and ideas to improve their work, this sort of research is important because level modeling and gameplay could be modified accordingly.

In addition, I am intrigued by this idea of feeding visualizations back to the user (in this paper, social visualizations): how could players benefit from them? Would it be valuable to give them a social mirror of their activities (individual or group ones)? That's also something I discussed with Nicolas at PARC when I saw his visualization of WoW: how could information about a guild (like the evolution of guild participation or the size of the network over time) be fed back to the users? Would it be valuable for them? How would that change the socio-cognitive processes accordingly?

Mmh, a good research topic to investigate for my post-PhD life.

Google earth + sketch-up (2)

Tim O'Reilly posted his thoughts about the added value for Google of having bought Sketch-Up (the 3D modeling tool):

Google Maps has more public reach, but it seems to me that Google Earth will ultimately emerge as the real platform play. What's particularly interesting is how much activity there is in adding user-generated data. Especially interesting is the way that Google is trying to get users to build 3D models of buildings with sketchup. (...) It becomes clear that Google Earth is not just a data visualization platform. It's a framework on which hundreds of different data layers can be anchored. It's also clear that Google Earth is entering into the same territory as Second Life. It's so easy to imagine all of the alpha geek behavior on Second Life hitting the mainstream via people building real-world equivalents on Google Earth. And it's easy to imagine interoperability, with virtual worlds adopting KML, so that first and second life become interoperable and connected. (I was going to ask about the Google Earth/Second Life connection with sketchup as the connector, since it seems so obvious to me, but the first question from the audience beat me to it. It's impossible to miss this idea.)

Why do I blog this? because I am interested in foresight about digital entertainment and video game usage. What O'Reilly describes here is the very cutting-edge trend we discussed when I was at the Annenberg Center for Communication in April: the potential of using Google Earth and SketchUp for playful activities, which is obviously connected to social MMORPGs like SL. What would be the next practice (beyond modeling your/a house and putting it there)? Creating alternative versions of the Earth? Modeling MMORPG environments in KML and then playing in them (a sort of DIY MMORPG level modeling)? Using this as the new interface for a Sims-like game? Trading KML files on eBay?
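
As a back-of-the-envelope illustration of what "modeling game content in KML" could look like at its simplest, here is a sketch that emits placemarks Google Earth can load. The names and coordinates are made up, and a real game world would need actual 3D geometry (e.g. COLLADA models referenced from KML) rather than simple points.

```python
# Sketch of minimal KML output for hypothetical game objects. Names and
# coordinates are invented; real worlds would use 3D models, not just points.

def placemark(name, lon, lat, alt=0.0):
    return (f"    <Placemark>\n"
            f"      <name>{name}</name>\n"
            f"      <Point><coordinates>{lon},{lat},{alt}</coordinates></Point>\n"
            f"    </Placemark>")

spawn = placemark("player spawn point", 6.5668, 46.5191)
chest = placemark("treasure chest", 6.5702, 46.5200)

kml = f"""<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <Document>
{spawn}
{chest}
  </Document>
</kml>
"""

with open("toy_world.kml", "w") as f:      # load this file in Google Earth
    f.write(kml)
```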

Gestural behavior in virtual reality and physical space

With our online personas now overlapping with our presence in the physical world, lots of questions concerning the connections between both worlds remain unanswered. This is the research issue addressed by the Virtual Human Interaction Laboratory at Stanford University. DevSource has a good overview of it (via the Presence mailing list), starting with the questionable motto: "How does the world change when you have five arms?".

Researchers have learned that, when we build digital versions of one another, people tend to behave the same in virtual reality (VR) as they do in physical space, at least at the gestural level. His team has studied online communities and avatar-based games, analyzing patterns of interaction and comparing how they relate to the social world. With avatars, he says, the norms of conversation and nonverbal behavior are modeled on how people behave in physical space. But there's one interesting exception: "In games, taller and more beautiful avatars actually perform better."

Why do I blog this? Since I am interested in the relationships between spatial features and behavior, this is relevant; see for instance what Philip wrote about how proxemics is still pertinent in virtual space: Jeffrey, P. and Mark, G. (1998). Constructing Social Spaces in Virtual Environments: A Study of Navigation and Interaction. In: Höök, K.; Munro, A.; Benyon, D. (eds.): Workshop on Personalised and Social Navigation in Information Space, March 16-17, 1998, Stockholm (SICS Technical Report T98:02), Stockholm: Swedish Institute of Computer Science (SICS), pp. 24-38.

But there is more:

Bailenson [the lab director] offers one bit of practical advice for software developers who build "social" user interfaces. Anytime you have a UI that guides a person, especially with a human face, people tend to make the agent look more realistic than it behaves. And that, he says, causes problems in user expectations.

Spectators for video games?

NS about the notion of spectator in gaming:

The US professional computer gaming league has just signed a TV rights deal with cable company USA Network. Maybe it could be on the way to becoming as popular a spectator sport as football and basketball in the US.

Why do I blog this? this is connected to my interest in the user experience of video games: there is really a trend toward gaming being more than just interacting with a box. With replay (in sports games, for instance, you can replay your own game) and now with this deal, it goes further: showing people's games to others. I'd be interested to know more about who watches this, what they get out of it... and the practices related to that.

Awareness and Accountability in MMORPG

A very good read yesterday in the train: Moore, Robert J., Nicolas Ducheneaut, and Eric Nickell (2006): "Doing Virtually Nothing: Awareness and Accountability in Massively Multiplayer Online Worlds", Computer Supported Cooperative Work (ISSN 1573-7551).

The paper acknowledges the fact that "despite their ever-increasing visual realism, today’s virtual game worlds are much less advanced in terms of their interactional sophistication". Through diverse investigations of MMORPGs using video-based conversation analysis (grounded in virtual ethnography), they look at the social interaction systems in massively multiplayer virtual worlds and then propose guidelines for increasing their effectiveness.

Starting from the face-to-face metaphor (the richest situation in terms of social interaction, as opposed to geographically dispersed settings), they state that participants are able to access certain observational information about what others are doing in order to interpret others' actions and design appropriate responses. This leads to coordination (I personally use a different framework to talk about that, for instance Herbert Clark's theory of coordination). In a face-to-face context, three important types of cues are available: "(1) the real-time unfolding of turns-at-talk; (2) the observability of embodied activities; and (3) the direction of eye gaze for the purpose of gesturing".

They then build their investigations around those three kinds of cues, which are less available in virtual worlds. This can be connected to the work of Toni Manninen (like The Hunt for Collaborative War Gaming - CASE: Battlefield 1942). It also makes me think of the seminal paper by Clark and Brennan about how different media modify the grounding process (the establishment of a shared understanding of the situation).

Clark, H. H., and Brennan, S. A. (1991). Grounding in communication. In L. B. Resnick, J. M. Levine, & S. D. Teasley (Eds.), Perspectives on socially shared cognition. Washington: APA Books.

Why do I blog this? I still have to go further into the details of each of these investigations, but I was very interested in their work because:
- the methodology is complementary to what I am doing in CatchBob to investigate mutual awareness and players' anticipation of their partners' actions. The interactionist approach here could be very valuable to apply in my context. I am thinking about deepening the analysis of the messages exchanged by players (the map annotations) to see how accountability is conveyed through the players' drawings.
- they translate results from empirical studies into concrete and relevant design recommendations (for instance: "other game companies should probably follow There’s lead and implement word-by-word (or even character-by-character) posting of chat messages. Such systems produce a turn-taking system that is more like that in face-to-face, and they better facilitate the coordination of turns-at-chat with each other and with other joint game activities.")
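
To make that last recommendation concrete, here is a small sketch of character-by-character chat posting: each keystroke is broadcast so other players can see turns-at-chat unfold in real time. The broadcast function is a stand-in for whatever a game's network layer would provide; this is not There's actual implementation.

```python
# Sketch of character-by-character chat posting, as recommended in the paper:
# broadcast each keystroke instead of waiting for the player to hit enter.
# The broadcast() function is a hypothetical stand-in for a game's network layer.
import time

def broadcast(player, partial_message):
    """Stand-in for sending the current state of the message to nearby players."""
    print(f"\r{player}: {partial_message}", end="", flush=True)

def type_message(player, message, delay=0.15):
    typed = ""
    for char in message:
        typed += char
        broadcast(player, typed)      # others see the message as it is produced
        time.sleep(delay)
    print()                           # message complete

type_message("Thrain", "wait, pulling the boss in 10s")
```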

Video game controller reconfigurability

In "The VoodooIO Gaming Kit: A real-time adaptable gaming controller" by Nicolas Villar, Kiel Mark Gilleade, Devina Ramduny-Ellis and Hans Gellersen (Proceedings of ACE 2006), the authors propose an interesting direction for innovation in game controllers:

Existing gaming controllers are limited in their end-user configurability. As a complement to current game control technology, we present the VoodooIO Gaming Kit, a real-time adaptable gaming controller. We introduce the concept of appropriable gaming devices, which allow players to define and actively reconfigure their gaming space, making it appropriate to their personal preference and gaming needs. (...) Ad hoc controller adaptation during game-play is the pinnacle of physical configuration in game controllers. Not only can the game controller be configured to suit a particular task for a given user but it can also be reconfigured while the user is still playing to meet any changes in task demand. (...) VoodooIO is a malleable platform for physical interaction, which allows users to construct and actively adapt the composition of their physical interface. Rather than being an interface construction kit for users, the platform is concerned with enabling and exploring the ability of the physical interface to be customized and reconfigured after its deployment into use.

A pertinent affordance of real-time modification of the game controller is that controls can be arranged to depict the intended use-sequence.
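
Stripped of the physical hardware, the general idea is that the mapping from controls to game actions is plain data that can be edited while the game runs. A hypothetical software-only sketch (control and action names are invented; VoodooIO itself is a physical construction kit):

```python
# Sketch of a run-time reconfigurable controller: the binding from physical
# controls to game actions is data that the player can change mid-game.
# Control and action names are invented for illustration.

class AdaptableController:
    def __init__(self):
        self.bindings = {}                      # physical control -> game action

    def attach(self, control, action):
        """Player adds or rebinds a control while playing."""
        self.bindings[control] = action

    def detach(self, control):
        self.bindings.pop(control, None)

    def handle(self, control, value):
        action = self.bindings.get(control)
        if action:
            print(f"{action} <- {value}")       # forward to the game

pad = AdaptableController()
pad.attach("slider_1", "zoom_map")
pad.attach("button_A", "cast_heal")
pad.handle("button_A", 1)                       # cast_heal <- 1
pad.attach("button_A", "cast_shield")           # rebinding during play
pad.handle("button_A", 1)                       # cast_shield <- 1
```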

Why do I blog this? this is a very innovative way of expanding the idea of game controllers toward something more user-centered. Besides, the paper is very complete and shows a proof of concept using World of Warcraft; the usage study is also welcome! I like this idea of a DIY gamepad; it's really part of the broader trend (DIY games, players' participation in the design process...).

Katamari Damacy affordances

Angel Inokon has a good blogpost about the affordances of Katamari Damacy (the PS2 game in which you have to roll a ball around to collect items located everywhere):

Three Design Principles Katamari Damacy gets right:
- Affordances – affordances enable designers to create gameplay that leverages the natural limitations and features of an object. One of the clear affordances of a ball is that it rolls. Everyone, regardless of age, recognizes a ball and can easily conceive its primary function. (...) Users can quickly get immersed because the rolling action is consistent with the simple affordances of a ball.
- Visibility – gamers need awareness of the mechanics of gameplay through visual and audio feedback. Two feedback mechanisms built into the game include a progress icon and sounds. The player is given a simple icon in the corner of her screen that shows the size of the katamari. (...) Gamers need lots of information. Integrating visibility principles allows designers to keep pumping the right information when they need it.
- Constraints – constraints prevent gamers from making errors that could decrease enjoyment of the game. Katamari Damacy centers around a single rule – players can’t roll up something that is bigger than their ball. If the player got lost in an area with many big objects, she could get frustrated. So the game blocks the paths to larger objects until her katamari is large enough to roll over the barrier. It makes the game easier to explore and less overwhelming by essentially modularizing the levels (174). Failure is a critical aspect of gameplay, however good designers know how to constrain the environment so players stay immersed in the game.
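
The constraint principle in particular boils down to a very simple rule, which could be sketched like this (sizes and the growth rule are invented for illustration, not Namco's actual values):

```python
# Tiny sketch of the "constraint" principle described above: an object can only
# be rolled up if it is smaller than the katamari, and picking it up makes the
# ball grow. Sizes and the growth factor are invented.

def try_roll_up(katamari_size, object_size, growth_factor=0.1):
    """Return the new katamari size, unchanged if the object is too big."""
    if object_size >= katamari_size:
        return katamari_size              # bounce off: the constraint prevents the error
    return katamari_size + object_size * growth_factor

size = 1.0
for obj in [0.2, 0.5, 3.0, 0.9, 5.0]:     # thumbtacks, mice, cows...
    new_size = try_roll_up(size, obj)
    print(f"object {obj}: {'rolled up' if new_size > size else 'too big'}")
    size = new_size
```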

Why do I blog this? because I like Katamari and agree with these principles, which connect human-computer interaction à la Don Norman to efficient video game design.

Portable consoles, network and turbulences

On the 802.11 Turbulence of Nintendo DS and Sony PSP Hand-held Network Games is a paper by Mark Claypool that analyses the traffic characteristics of IEEE 802.11 network games on the Nintendo DS and the Sony PSP. Here are the questions they try to answer:

What is the network turbulence for hand-held network games?

Does the network turbulence for different hand-helds (such as the PSP and the DS) differ from each other?

Does the network turbulence for different games (such as Ridge Racer and Super Mario on the same handheld) differ from each other?

Does the network turbulence for hand-held games differ from PC games?

Does hand-held game traffic interfere with traditional Internet traffic on the same wireless channel?
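
To make "turbulence" a bit more concrete, here is a sketch of the kind of traffic measures such a study relies on, computed from a packet trace. The trace format (timestamp in seconds, payload size in bytes) and the values are assumed for illustration.

```python
# Sketch of basic traffic-characterisation measures: packet sizes,
# inter-arrival times and an approximate bitrate, from a hypothetical trace.
from statistics import mean, stdev

trace = [(0.000, 48), (0.021, 48), (0.055, 120), (0.061, 48), (0.102, 64)]

sizes = [size for _, size in trace]
gaps = [t2 - t1 for (t1, _), (t2, _) in zip(trace, trace[1:])]

print(f"mean packet size:   {mean(sizes):.1f} bytes (sd {stdev(sizes):.1f})")
print(f"mean inter-arrival: {1000 * mean(gaps):.1f} ms (sd {1000 * stdev(gaps):.1f})")
print(f"approx. bitrate:    {8 * sum(sizes) / trace[-1][0] / 1000:.1f} kbit/s")
```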

Why do I blog this? the paper is quite technical, so it's less my focus, but the questions they ask are interesting from the user experience point of view. This might be close to Fabien's research. How do people cope with those turbulences? Are there any in-game compensations or strategies to handle them?