While reading about technological failures, I came across an interesting reference by Luke Swartz called Why People Hate the Paperclip: Labels, Appearance, Behavior, and Social Responses to User Interface Agents. This dissertation deals with Office assistants on computers, which seem to be a big pain for lots of people. The document provides an interesting contextual history of such user interface agents, and the author tackles the user experience angle through theoretical, qualitative, and quantitative studies.
Some of the results:
"Among the findings were that labels—whether internal cognitive labels or explicit system-provided labels—of user interface agents can influence users’ perceptions of those agents. Similarly, specific agent appearance (for example, whether the agent is depicted as a character or not) and behavior (for example, if it obeys standards of social etiquette, or if it tells jokes) can affect users’ responses, especially in interaction with labels."
But my favorite part is certainly the one about mental models of Clippy:
"Two interesting points present themselves here: First, beginners—the people who are supposed to be helped the most by the Office Assistant—are at least somewhat confused about what it is supposed to do. Especially given that beginners won’t naturally turn to the computer for help (as they seek out people instead), it may be especially important to introduce such users to what the Assistant does and how to use it effectively.
Second, that even relatively experienced users attribute a number of actions (such as automatic formatting) to the Office Assistant suggests that users are so used to the direct-manipulation application-as-tool metaphor, that any amount of independent action will be ascribed to the agent. For these users, the agent has taken on agency for the program itself!"
Why do I blog this? As I collect material about technological failures and their user experience, this research piece is interesting both in terms of content and methodology. Besides, I was intrigued by the discussion of mental models and how people understand how things work (or don't work). There are some interesting parallels with the CatchBob results, especially in how we differentiated several clusters of users depending on how they understood flaws and problems.