Do Robots Make Better Psychologists?

Thursday, 14 August 2014 - 12:09PM
Technology

Are people more likely to bare their souls to a robot? Researchers at the University of Southern California's Institute for Creative Technologies have demonstrated that people are more willing to share their feelings and other personal information with a robot than with a human.

The research team, led by Jonathan Gratch, built an artificial intelligence named Ellie, programmed to behave much like a real psychologist. Presented as an avatar on a computer screen, she first asks friendly questions meant to build rapport with the patient, such as "Where are you from?" She then eases the patient into confiding more personal information by asking about clinical symptoms of psychological disorders, such as aberrant sleeping patterns. Only then does she move on to the difficult, deeply personal prompts, such as "How close are you to your family?" and "Tell me about the last time you felt really happy." In true psychologist fashion, she tries to elicit more detailed answers to the sensitive questions by asking, "Can you tell me more about that?" and, of course, nodding. Because she can understand what patients are saying, she knows when to ask follow-up questions and can participate realistically in the conversation; she also performs sophisticated facial expression analysis.
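The staged flow is easy to picture in code. The sketch below is a minimal, hypothetical illustration of such a tiered interview, not the actual ICT system: the tier names, question lists, and the crude length-based follow-up rule are all assumptions standing in for Ellie's real language understanding.

```python
# Hypothetical sketch of a tiered interview flow in the spirit of Ellie.
# Tiers, questions, and the follow-up heuristic are illustrative only.

INTERVIEW_TIERS = [
    ("rapport", [
        "Where are you from?",
    ]),
    ("clinical", [
        "How have you been sleeping lately?",
    ]),
    ("sensitive", [
        "How close are you to your family?",
        "Tell me about the last time you felt really happy.",
    ]),
]

def needs_follow_up(answer: str) -> bool:
    """Crude stand-in for Ellie's language understanding:
    probe further when a sensitive answer is very short."""
    return len(answer.split()) < 10

def run_interview(ask):
    """Walk the tiers in order, easing from friendly to personal.
    `ask` poses a question and returns the patient's answer."""
    for tier, questions in INTERVIEW_TIERS:
        for question in questions:
            answer = ask(question)
            if tier == "sensitive" and needs_follow_up(answer):
                ask("Can you tell me more about that?")

if __name__ == "__main__":
    # Console I/O stands in for the on-screen avatar.
    run_interview(lambda q: input(q + " "))
```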

For this study, Ellie conducted "sessions" with 239 people, half of whom were told that she was an autonomous artificial intelligence and half of whom were falsely told that she was remotely operated by a human. Her facial expression analysis software rated each participant's indicators of sadness on a scale from zero (not sad) to one (extremely sad). Three human psychologists also analyzed the transcripts to rate each participant's willingness to disclose information during the sensitive part of the interview on a scale from -3 to 3, and all the participants filled out a questionnaire.

Dr. Gratch and his team found that people were significantly more likely to disclose sensitive information when they believed they were interacting with an autonomous avatar. The group that correctly believed Ellie was free of human influence was less likely to self-censor or closely monitor its behavior, scoring an average of 1.11 in willingness to disclose information, while the group that believed there was a human operator scored an average of 0.56. The group that knew Ellie was an artificial intelligence was also more open about its sadness, scoring an average of 0.12 on the expressions-of-sadness scale versus the other group's 0.08.

From the paper: "As predicted, compared to those who believed they were interacting with a human operator, participants who believed they were interacting with a computer reported lower fear of self-disclosure, lower impression management, displayed their sadness more intensely, and were rated by observers as more willing to disclose. These results suggest that automated [virtual humans] can help overcome a significant barrier to obtaining truthful patient information."

This study seems to confirm the results of last month's study, which found that soldiers were more likely to be truthful about past indiscretions during security clearance screenings when talking to a robot than to a human being. Together, the two studies suggest that people feel more comfortable disclosing information to an entity that does not pass value judgments; there is less embarrassment or self-consciousness involved if the "person" is unable to feel anything about sensitive disclosures. This is not at all a new concept: psychologists are taught to seem objective and non-judgmental for this very reason, and when psychoanalysis was all the rage, patients would lie on a couch, stare at the ceiling, and talk for nearly the entire session without interruption, essentially so they could forget that a human being was listening to them.

That being said, effectively eliciting personal information is only one function of a psychologist. A human would still be needed to interpret the information gleaned from the interview, as this study itself showed: human psychologists rated the transcripts. Furthermore, while psychologists are taught to appear objective, they are also taught the importance of developing a rapport with the patient, which may be difficult, if not impossible, for an avatar to accomplish.
