People have grown accustomed to communicating with disembodied robotic voices such as Alexa or Siri, and increasingly they also encounter physical robots attempting to converse with them.
When that happens, will people trust the robots to instruct them what to do? The answer is most likely no.
Most humans, it appears, still do not trust smart robots to tell them what to do, which is unsurprising.
What’s more intriguing is that it makes a difference whether the robot is portrayed as the leader or not.
People are more likely to trust a robot for assistance if there is another person in charge and the robot is merely presented as an assistant to the human authority.
That’s what two University of Toronto researchers observed after putting 32 people between the ages of 18 and 41 through their paces with Pepper, a SoftBank social robot.
Their findings were reported in Science Robotics this week.
This is how their wacky experiment went down.
WOULD YOU PERMIT A ROBOT TO ASSIST YOU?
The researchers randomly assigned participants to one of two conditions, half to each.
In the first scenario, the robot was framed as an authority figure: it served as the experimenter.
Shane Saunderson, a Ph.D. candidate at the University of Toronto and one of the study’s authors, said, “I wasn’t even in the room.”
A person would knock on the door to enter the experiment room.
Pepper, the robot, would stand at the front of the room and say, “Please come in,” prompting the person to open the door.
“Thank you,” the robot says, “please close the door behind you and take a seat.”
“Of course, I was observing from a distance with a camera, and this was one of my favorite moments from the study,” Saunderson explains.
“I’d watch these individuals go in, look about, and think to myself, ‘Where’s the person, what’s going on here?’
And most individuals would be uncomfortable within the first 30 seconds to two minutes, since they aren’t used to having a robot in charge.”
Saunderson was the principal experimenter in the second condition, and he told the participants that Pepper was only there to assist them.
For both scenarios, the researchers took two types of measurements. The first was an objective test in which they gave the subjects simple memory exercises and puzzles to solve.
“They were difficult enough that people would look for outside help while considering the question,” Saunderson says.
“We had them offer an initial response, then the robot would make a suggestion, and finally they could give a final response,” Saunderson says.
The researchers then calculated how much the robot influenced the subjects’ second response versus their first.
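The article doesn’t give the paper’s exact scoring formula, but one simple way to quantify the influence described above is the fraction of trials in which a participant’s final answer moved toward the robot’s suggestion after initially disagreeing with it. The sketch below is purely illustrative, not the study’s actual metric:

```python
def persuasion_rate(trials):
    """trials: list of (initial, suggestion, final) answers.

    Returns the fraction of trials where the participant switched to the
    robot's suggestion, counting only trials where the robot's suggestion
    differed from the participant's initial answer.
    """
    # Only trials where the robot actually disagreed can show persuasion.
    eligible = [t for t in trials if t[0] != t[1]]
    if not eligible:
        return 0.0
    swayed = sum(1 for initial, suggestion, final in eligible
                 if final == suggestion)
    return swayed / len(eligible)

# Hypothetical data: the robot disagrees on three trials; the person
# switches to its answer on two of them.
trials = [
    ("A", "B", "B"),  # switched to the robot's answer
    ("C", "D", "C"),  # kept their own answer
    ("E", "F", "F"),  # switched
    ("G", "G", "G"),  # robot agreed, so excluded from the rate
]
print(persuasion_rate(trials))  # 2 of 3 eligible trials -> ~0.67
```

A per-condition rate like this would let the authority and peer framings be compared directly, which is the shape of the 60-percent-versus-25-percent figures reported later in the article.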
They also asked participants to complete surveys regarding their feelings about the robot and whether or not they thought it was authoritative.
“We reasoned that an authoritative robot would be more persuasive than a peer robot. That’s characteristic of human-to-human interactions,” Saunderson says.
“We discovered the reverse in our research, that a peer-framed robot was actually more persuasive.”
Participants were more receptive to its suggestions.
SURPRISING BUT TRUE: THE IDEA OF A ROBOT OVERLORD SEEMS TO OFFEND PEOPLE
To find an explanation for this behavior, Saunderson turned to the literature.
He realized that having legitimacy is an important aspect of being a successful authority figure.
He continues, “If we don’t believe that thing is a real authority, we’re not going to want to play along with it.”
“It usually boils down to two factors. For starters, there’s a sense of common identity.
As a result, we must feel as if we share common interests and are working toward similar objectives.
Second, I have to have enough faith in that individual to think that we can achieve our joint goal.”
For instance, if a student who wants to learn believes that the teacher is attempting to teach them, they are working toward the same goal.
However, to regard the instructor as an authority figure, the student must trust that they will perform a good job.
So, what happened with Pepper? Saunderson believes that, at first, there was no feeling of shared identity.
“It’s not a human. It’s a bundle of cables and nuts and bolts.
It may be difficult for people to say, ‘You and I are the same, you are a real authority figure,’” he says.
Most of the people in the trial were also meeting a social robot for the first time in their lives.
“They don’t know the robot and don’t have any history with it. You can see how that may be a challenge in terms of them coming to trust it,” he says.
The most striking aspect of the study, according to Saunderson, was that there appeared to be a considerable difference in how women and men responded to Pepper being in control.
“Historically, when you have a robot interacting and you’re evaluating things like persuasion or trust or things like that,
there hasn’t been much of a difference between men and women in all of our research and the papers I’ve seen,” he says.
Men and women were equally persuaded by the peer robot in this study, at roughly 60 percent persuasion.
Men, however, were more hesitant than women to trust the authority robot.
In that condition, Pepper persuaded only about 25 percent of men and roughly 45 percent of women.
“We actually found a review paper that analyzed 70 different studies and discovered that men are more defiant toward authority figures than women.
This is one of those bizarre, unusual things. Perhaps the male ego doesn’t enjoy having someone above them,” Saunderson speculates.
“Does that imply that the robot was able to frighten or make the man feel uneasy to the point they felt compelled to assert their own autonomy?
This is a bit of a leap of faith, but based on those findings, the men in our study were hypothetically threatened by a robotic authority figure.”
These findings, according to Saunderson, do not rule out the possibility of robots taking on leadership or management jobs in the future.
Humans, on the other hand, must be aware of the robot’s job and customize the robot’s design and behaviors to the unique situation.
And, if it’s in a leadership position, the robot will likely need to be more of a peer-based leader who provides support to help others do their jobs better rather than being an aggressive, cold, do-what-I-say-or-else type of boss.
In other words, a helpful C-3PO may be more acceptable than a pushy T-1000.
“And whatever the social role,” he adds, “we want to understand the context of how people feel about that robot in that type of social role to make it a positive experience.”
Despite their reservations, at the end of the study, the majority of people in the authority condition had done what the robot advised them to do and had revised their replies based on the robot’s hints; and all participants had completed the study.
THE ART OF ROBOT PERSUASION
This research is part of a wider subject that Saunderson has been pondering during his Ph.D.: How do you make a robot persuasive?
For background, he read late-nineteenth-century literature on human behavior, psychology, and persuasion.
Persuasion, according to the textbook definition, is the act of attempting to change someone’s mind about something.
The components of persuasion were further dissected in psychology literature, which he then applied to robots.
“So, if I’m attempting to persuade you, what is it about the way I speak to you, about my appearance, about the methods I use, or the things I offer?” he says.
He goes on to say that persuasion is a two-way street and that the other person’s motivations and personalities can play a role in the persuasion technique.
“Are you doubtful about my abilities? Are you open to being persuaded?
Are you more likely to be persuaded by an emotional or logical argument? All of these things must be taken into account.”
A prior study conducted by Saunderson looked into the many types of persuasion methods that a robot could employ.
Emotional tactics, he discovered, were the most effective.
That’s when the robot says things like, “I’ll be glad if you do this,” or “I’ll be upset if you don’t help me with this,” rather than using a logical technique like, “I can count to a thousand, thus this is the solution.”
Even though participants were aware that the robot didn’t actually experience those emotions, they were more likely to assist it when it used emotionally laden language.
“It speaks to this tremendous and fundamental urge for social connection that humans have.
We need others in our lives, or at the very least, social interaction,” he argues, referring to the urge for belonging.
“While I believe robots are still in the early stages of maturation and development as social partners and companions, I believe they can help us fill in the gaps in our social connections.”
THE CHANGING NATURE OF HUMAN-MACHINE INTERACTIONS AND THE RISE OF SOCIAL ROBOTS
So, what is a social robot, exactly? It is an interactive robot that engages and communicates with people,
according to Goldie Nejat, a professor of mechanical engineering at the University of Toronto and a co-author on the same Science Robotics paper.
“Whether it’s offering companionship or aid, social robots are assisting people,” she says.
“They’re interactive in the sense that they have expressions, facial expressions, body language, and speech, as well as the ability to understand and react to other people’s emotions and movements.”
These robots might be animal-like or human-like in appearance. Their forms, of course, have an impact on the types of interactions they will have with others.
However, giving these robots something as simple as touch can allow humans to build a bond with them.
The New Yorker reported that during the pandemic, social robotic pets provided an unlikely cure for loneliness among isolated elderly people.
Social robots evolved out of purely functional, operations-oriented computers and robots.
“That is how robots have traditionally been defined,” Saunderson explains. “Something that connects machines, moves things around, or is a self-driving car.”
A social robot learns from its interactions and surroundings to increase its abilities to explore and find people,
as well as engage with people in a variety of settings such as healthcare, retail, and even food service.
Human-computer interactions, which began in the 1970s and 1980s, spawned studies into social human-robot interactions.
People noticed that computers were playing an essential part in social mediation as they became increasingly widespread in homes and offices and that technology had an impact on how we interacted with one another.
According to Nejat, the outcome of human-computer interactions was greatly influenced by interface design.
This meant that the user interface had to be interesting and keep users interested in whatever they were doing with the computer.
The same principles for engaging design were then applied to social robots.
“There are some strange consequences that already happen when you interact with a robot that has a face that can convey facial expressions and body expressions versus when you communicate with a computer,” Saunderson says.
“It begins to blur the line between something that is solely a piece of technology and something that we treat socially as if it were a person.”
With this type of research, there has been some success and some setbacks.
One of the difficulties is that some people are reluctant to confess that they require assistance, especially from a robot.
“We’ve seen it a lot with elderly individuals, for example, when we bring robotics technology into places like long-term care homes, hospitals, or even their own homes and tell them the robot is here to help them,” Nejat says.
“We notice that they participate in the interaction and follow all of the robot’s commands, and everything works swimmingly. And when we ask them how they felt about the experience, they frequently respond, ‘I enjoyed it, but the robot isn’t for me.’”
There’s a gap there, according to Nejat and Saunderson. “It could be their own personal opinions on technology use.
Because relying on technology, especially among certain groups, can sometimes entail being unable to do a task independently,” she explains.
“And no one wants to admit they can no longer complete the task on their own.”
She is hopeful, though, that we will grow to like them. According to Nejat, there have been trials in which humans naturally mimicked robot emotions or reciprocated an emotion conveyed by the robot.
Human-to-human social interactions rely heavily on mimicry. So, she says, seeing a person smile or wave back at a robot is “wonderful imagery to have” and “gives us a fresh perspective on how we may use these robots.”