One of the talks at the Robolift11 conference that I found highly inspiring was the one by Pierre-Yves Oudeyer. In his presentation, he addressed several projects he has conducted and topics he and his research team focus on. Among the material he showed, he described an interesting experiment about how robot users are provided with different kinds of feedback about what the robot is perceiving. Results from this study can be found in a paper called "A Robotic Game to Evaluate Interfaces used to Show and Teach Visual Objects to a Robot in Real World Condition". Their investigation concerns how showing users what a robot perceives affects the teaching of visual objects (to the robot) and the usability of human-robot interaction. Their research showed that providing non-expert users with feedback about what the robot is perceiving is necessary for robust interaction:
"as naive participants seem to have strong wrong assumptions about humanoids visual apparatus, we argue that the design of the interface should not only help users to better under- stand what the robot perceive but should also drive them to pay attention to the learning examples they are collecting. (...) the interface naturally force them to monitor the quality of the examples they collected."
Why do I blog this? Reading Matt Jones' blogpost about sensors the other day made me think about this talk at Robolift. The notion of a "robot-readable world" mentioned in his post is curious, and it's interesting to think about how this machine perception can be reflected back to human users.