Photo Credit: Sybren A. Stüvel
The University of Notre Dame has produced the next phase of the robot take-over: now they’re teaching us. And they’re teaching us almost better than we can teach ourselves. Next on the robo-invasion master plan: indoctrinate the young.
“AutoTutor” and “Affective AutoTutor” are new technologies that can read moods and adjust accordingly. By analysing a student’s facial expressions and body posture, the techno-tutor can tell whether the student is bored or frustrated by the questions it’s asking, and dynamically change its strategies to aid the student’s learning.
The technology essentially elevates the social skills of a computer. Human interaction depends on reading body language, intonation, and plenty of other nonverbal cues, which together create a rich understanding of the true meaning of the words coming out of our mouths. Now AutoTutor, an “Intelligent Tutoring System” (ITS), can understand us beyond the words we type on the keyboard or the area of the screen our mouse is sitting on.
The system has been shown to improve learning performance by approximately one letter grade – outperforming inexperienced human tutors and approaching the standard of expert human tutors. It can teach Newtonian physics, and it even uses images and animations to make learning more fun.
Will we soon simply be shepherding students from room to room, with teachers acting merely as facilitators and emergency replacements? Will children sit at rows of computers receiving a personalised education, free of the distracting classroom atmosphere, since “considerable empirical evidence has shown that one-on-one human tutoring is extremely effective when compared to typical classroom environments”?
Where could this technology lead? Would you leave your child at home with a cyber-nanny that adjusts its behaviour according to your child’s? Wouldn’t that be safer than leaving them with a stranger who could be a paedophile or serial killer?
Is handing teaching roles over to machines a slippery slope, or is it simply technology augmenting learning to make it more efficient and effective? How far are we willing to trust AI?
[Via Science Daily]