Visual Patterns and Communication for Robots

The Future Applications Lab in Göteborg, Sweden is involved in a very interesting project (from my point of view) called ECAgents (short for Embodied and Communicating Agents).

The project will investigate basic properties of different communication systems, from simple communication systems in animals to human language and technology-supported human communication, in order to clarify the nature of existing communication systems and to provide ideas for designing new technologies based on collections of embodied and communicating devices.

The project is a huge EU thing, but what the FAL is focusing on is investigating how such mobile communicating agents could become a natural part of our everyday environment. One of their master's thesis proposals describes what they're up to:

We have previously developed a number of ideas for possible applications in the form of personas.

This thesis proposal is inspired by the persona Nadim. It is about developing a language for visual patterns using, e.g., genetic programming, cellular automata, boids, reaction-diffusion, the naming game, or any combination of these to visualize patterns on a small e-Puck robot. The robots should be able to develop as well as communicate such patterns through the language, so that new and interesting patterns emerge from their perception of their environment and their interaction with each other. The goal of the thesis is either to build a real demonstrator on the suggested platform (which requires some previous knowledge of software implementation on embedded systems) or to build a simulated demonstrator based on the same prerequisites.
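To give a concrete feel for one of the techniques the proposal mentions, here is a minimal sketch of the naming game in Python. This is my own illustration, not code from the project: the agent count, the invented "pattern-N" labels, and the consensus check are all hypothetical choices, and a real e-Puck demonstrator would have to ground the names in patterns the robots actually perceive rather than in random strings.

```python
import random

def naming_game(n_agents=10, max_rounds=10_000, seed=42):
    """Minimal naming game: agents converge on a shared name
    for a single 'pattern' through repeated pairwise interactions."""
    rng = random.Random(seed)
    inventories = [set() for _ in range(n_agents)]  # each agent's candidate names

    for round_no in range(1, max_rounds + 1):
        speaker, hearer = rng.sample(range(n_agents), 2)
        if not inventories[speaker]:
            # Speaker has no name yet: invent one for the pattern
            inventories[speaker].add(f"pattern-{rng.randrange(10_000)}")
        name = rng.choice(sorted(inventories[speaker]))
        if name in inventories[hearer]:
            # Success: both agents drop all competing names
            inventories[speaker] = {name}
            inventories[hearer] = {name}
        else:
            # Failure: hearer adds the unknown name to its inventory
            inventories[hearer].add(name)
        # Consensus: every agent holds exactly the same single name
        if all(inv == inventories[0] and len(inv) == 1 for inv in inventories):
            return round_no, inventories[0].pop()
    return max_rounds, None

if __name__ == "__main__":
    rounds, name = naming_game()
    print(f"Consensus on {name!r} after {rounds} interactions")
```

The property that seems relevant to the thesis is that a shared vocabulary emerges from purely local pairwise exchanges, with no central coordination, which is exactly the kind of bottom-up convergence the proposal appears to be after.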

Why do I blog this? I am less interested in the implementation and technical aspects than in the situatedness (or lack thereof) of communication between future robots/artifacts and human users. What happens during the interaction? What inferences do individuals make about objects, and vice versa? How could this be improved by creating new affordances?