Disinhibition with virtual partners, chatbots, and robots

Given that we spend more and more time communicating with non-humans, the topic of politeness with bots, robots, virtual characters, and non-playable characters in video games has always struck me as intriguing. A few months back, I mentioned chatbot technology in call centres, where "it will also be necessary to program chatbots to deal with verbal abuse". The new issue of Interacting with Computers is devoted to this topic: "Abuse and Misuse of Social Agents". Two papers I found interesting in that issue are:

"I Hate You: Disinhibition with Virtual Partners" by Sheryl Brahnam:

"This paper presents a descriptive lexical analysis of spontaneous conversations between users and the 2005 Loebner prize winning chatterbot, Jabberwacky. The study was motivated in part by the suspicion that evidence in support of the Media Equation, especially in the field of conversational agents, was supported by incomplete data; too often omitted in its purview is the occurrence of unsavoury user responses. Our study shows that conversations with Jabberwacky often bring about the expression of negative verbal disinhibition. We discovered that 10% of the total stems in the corpus reflected abusive language, and approximately 11% of the sample addressed hard-core sex. Users were often rude and violated the conversation maxims of manner, quantity, and relevance. Also particularly pronounced in the conversations was a persistent need of the user to define the speakers' identities (human vs. machine). Users were also curious to understand and test the cognitive capabilities of the chatterbot. Our analysis indicates that the Media Equation may need qualifying, that users treat computers that talk, less as they do people and more as they might treat something not quite an object yet not quite human."

and "Sometimes it's hard to be a robot: A call for action on the ethics of abusing artificial agents" by Blay Whitby:

"This is a call for informed debate on the ethical issues raised by the forthcoming widespread use of robots, particularly in domestic settings. Research shows that humans can sometimes become very abusive towards computers and robots particularly when they are seen as human-like and this raises important ethical issues. The designers of robotic systems need to take an ethical stance on at least three specific questions. Firstly is it acceptable to treat artefacts - particularly human-like artefacts - in ways that we would consider it morally unacceptable to treat humans? Second, if so, just how much sexual or violent 'abuse' of an artificial agent should we allow before we censure the behaviour of the abuser? Thirdly is it ethical for designers to attempt to 'design out' abusive behaviour by users? Conclusions on these and related issues should be used to modify professional codes as a matter of urgency."

Why do I blog this? This is not a research topic I investigate beyond reading a few papers once in a while, but I am quite fascinated by this sort of behavior and its design implications. Perhaps it's linked to my interest in the user experience of automation: when human agents are replaced by robots or chatbots, one can observe intriguing issues at stake for "users".

The next step, something that I would be very interested in observing, is to study when people kick, punch or break physical objects such as Roombas, robots or vending machines. There is definitely something here that I'd be happy to investigate more deeply: figuring out the reasons (role of context? bystanders? mood?), the consequences (broken? not yet?), the need to fix the device (oneself? with others?), the justification given to others, etc.

I mean, EVERY device can be the target of this sort of behavior. I remember having a Commodore Amiga 500 which used to go "green screen" (the equivalent of Windows' blue screen of death); to make it reboot, I used to lift it about 5 centimeters off the table and drop it. It worked pretty well for me (it let me blow off some nervous steam) and for the Amiga as well. I learnt recently that the jolt allowed one of the chips, which was not well seated, to be reseated properly.