Intelligent tutoring systems are not a domain I am interested in, but there are sometimes good elements to draw from other fields, especially when it comes to the behavior of artificial agents (like tutors). As opposed to research asserting the need for a polite intelligent tutor to achieve effective learning gains, some researchers came up with the notion of a "rude tutor" (Natalie Person in her talk "Understanding User Emotion and Answer Quality In Dialogue Systems"):
Why Build a Rude Tutor?
1) Human tutors adhere to the cooperative principle and are polite: they attend to students' needs, minimize imposition on the student, and speak off-record.
2) Tutoring is face-threatening: social distance is great, there is a power differential between tutor and student, and the tutor imposes on the student to provide information.
3) Being too polite can interfere with effective tutoring.
4) Some students may like it better.
Some examples of the RT (Rude Tutor):
RT: I know that you know more than that. Say more.
RT: Did you pay attention in class? How does Newton's third law of motion apply to this situation?
RT: No. Go back and answer the question completely. Can't you add anything?
In one of her experiments (see the PowerPoint slides), she showed the differences between a polite and a rude tutor in terms of various factors (learning gains, user acceptance...). It's difficult to generalize, though. Why do I blog this? I am impressed by this notion of impolite agents (or objects). Playing with bots or Nabaztags, it's very intriguing to see the disruptions that can be created by the utterances (or behaviors) these technological artifacts can have (or are programmed to have). There is a lot to do here.