Human-like virtual assistants, especially those embedded in online learning platforms, may actually deter some people from seeking help on tasks meant to measure achievement, making them feel incompetent, researchers say. Previous research has shown that people are inclined to treat computerised systems as social beings when the systems display even a few social cues. This social dynamic can make the systems seem less intimidating and more user-friendly. The researchers wondered whether the same would hold in a context where performance matters, such as online learning platforms. The results revealed that participants who saw intelligence as fixed were less likely to seek help from online assistants, even at the cost of lower performance.
“We demonstrate that anthropomorphic (having human characteristics) features may not prove beneficial in online learning settings, especially among individuals who believe their abilities are fixed and worry about presenting themselves as incompetent to others,” said study author Daeun Park of Chungbuk National University in South Korea. To reach this conclusion, Park and co-authors Sara Kim and Ke Zhang conducted two experiments. In the first, an online study, the researchers had 187 participants complete a task that supposedly measured intelligence.
On difficult problems, they automatically received a hint from an onscreen computer icon — some participants saw a computer “helper” with human-like features, including a face and a speech bubble, whereas others saw a helper that looked like a regular computer. Participants reported greater embarrassment and concern about their self-image when seeking help from the anthropomorphised computer than from the regular one. In the second experiment, 171 university students completed the same word-problem task as in the first study, but this time they freely chose whether to receive a hint from the computer “helper”.
The results showed that students who were led to think of intelligence as fixed were less likely to use the hints when the helper had human-like features than when it did not. “More importantly, they also answered more questions incorrectly. Those who were led to think about intelligence as a malleable trait showed no differences,” the researchers noted in a paper published in the journal Psychological Science. These findings could have implications for how well we perform when using online learning platforms. “When purchasing educational software, we recommend parents review not only the contents but also the way the content is delivered,” Park said.