Brain-computer interaction is a pet topic I have been investigating on the side for quite some time, actually since I was an undergrad in Cognitive Science fifteen years ago. Over the last ten years, we have seen an interesting evolution in hardware possibilities with the advent of consumer headcaps... this led to a novel situation where prototyping interactions is less cumbersome than it used to be. In addition, the availability of software (games, relaxation apps, etc.) makes it possible to conduct tests and observe the usage of such devices outside the lab. This is a dimension I'm interested in, as wearing these devices in public is not neutral (even more so than Google Glass?) and leads to odd technical problems (signal noise) as well as interaction questions (why would I need such a device while waiting for my bus?).
This kind of down-to-earth/blue-collar-design perspective was actually the topic of my talk at the recent 2013 American Association for the Advancement of Science (AAAS) meeting, in a session called "Advances in Brain-Machine Interfaces: Applications and Implications", along with Miguel Nicolelis, Todd Coleman, Martha J. Farah and Brent Waters.
[slideshare id=16969260&doc=2013-aaas-brain-130306035111-phpapp02]
Why do I blog this? To move forward, I'm thinking about a new teaching workshop on this topic next year. The panel, as well as the discussions with the experts there, was quite intriguing and led me to think there's a good opportunity in these topics when it comes to design/foresight.