Trust in humans and robots: Economically similar but emotionally different

In the Human condition, a human participant (Person 1) in the role of investor is paired with a human participant (Person 2) in the role of trustee. In the Robot1 condition, the human investor is paired with a robot trustee. In the Robot2 condition, the human investor is paired with a robot trustee that acts on behalf of a passive human participant (Person 2). Credit: Chapman University

In research published in the Journal of Economic Psychology, scientists explore whether people trust robots as they do fellow humans. These interactions are important to understand because trust-based interactions with robots are increasingly common in the marketplace, in the workplace, on the road, and in the home. The results show that people extend trust to humans and robots similarly, but their emotional reactions in trust-based interactions vary with partner type.

The study was led by Chapman University's Eric Schniter, Ph.D., and Timothy Shields, Ph.D., along with the University of Montreal's Daniel Sznycer, Ph.D.

The researchers used an anonymous trust game experiment in which a human trustor decided how much of a $10 endowment to give to a trustee: a human, a robot, or a robot whose payoffs go to another human. The trustor knew that the transfer created potential gains and that the trustee would decide whether to reciprocate by transferring an amount back. The robots were programmed to mimic reciprocation previously observed from human trustees.
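To make the game's mechanics concrete, here is a minimal sketch of one round in Python. The article does not state the multiplier applied to transfers, so the sketch assumes the standard trust-game convention of tripling the amount sent; the observed return fractions are hypothetical placeholders standing in for the human reciprocation data the study's robots mimicked, not the study's actual data.

```python
import random

MULTIPLIER = 3   # assumed: standard trust games typically triple the transfer
ENDOWMENT = 10   # the $10 endowment described in the study

# Hypothetical sample of previously observed human return fractions;
# the study's robot trustees mimicked real observed reciprocation.
OBSERVED_RETURN_FRACTIONS = [0.0, 0.25, 0.4, 0.5, 0.5]

def play_trust_game(amount_sent, trustee="human", human_return_fraction=0.5):
    """One round of the trust game: the trustor sends some of the endowment,
    the transfer is multiplied, and the trustee returns some fraction."""
    assert 0 <= amount_sent <= ENDOWMENT
    pot = amount_sent * MULTIPLIER
    if trustee == "human":
        returned = pot * human_return_fraction
    else:
        # A robot trustee plays a draw from previously observed human behavior.
        # In the Robot2 condition, the trustee payoff would go to a passive human.
        returned = pot * random.choice(OBSERVED_RETURN_FRACTIONS)
    trustor_payoff = ENDOWMENT - amount_sent + returned
    trustee_payoff = pot - returned
    return trustor_payoff, trustee_payoff

# Example: sending $5 to a reciprocating trustee leaves both parties better off
# than the no-trust baseline of (10, 0).
print(play_trust_game(5, trustee="human", human_return_fraction=0.5))
# -> (12.5, 7.5)
```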

It is well established that in trust games like this one, most people make decisions that benefit both trustor and trustee. After the interaction, participants rated various positive and negative emotions.

The experimental design allowed the researchers to address two important questions about trust in explainable robots: how much humans trust robots compared with fellow humans, and how humans react emotionally after interactions with robots versus other humans.

The experiment shows that people extend similar levels of trust to humans and robots. This is not what we would find if humans either blindly trusted robots or refused to trust them. Nor is it what we would find if people extended trust solely to improve other humans' welfare, since trusting a robot does not improve another person's welfare.

The result is consistent with the view that people extend trust both for monetary gain and to discover information about human behavioral propensities. Through their trust interactions with the robots, participants learned about the cooperativeness of fellow humans.

Social emotions are more than feelings—they regulate social behavior. More specifically, social emotions such as guilt, gratitude, anger, and pride affect how we treat others and influence how others treat us in trust-based interactions.

Participants in this experiment experienced social emotions differently depending on whether their partner was a robot or a human. A trustee's failure to reciprocate the trustor's investment triggered more anger when the trustee was a human than when it was a robot. Likewise, reciprocation triggered more gratitude when the trustee was a human than when it was a robot.

Further, participants’ emotions finely discriminated among robot types. They reported feeling more intense pride and guilt when the robot trustee’s payoff went to a human than when the robot acted alone.

Given that initial trust did not differ across partner type, but social emotions did, a distinct possibility is that trust re-extension in repeated interactions will differ when the partner is a human, a robot, or a robot linked to a human beneficiary.

In the future, driving will present interaction opportunities where it will matter whether decisions are being made by humans or robots, and whether those decisions serve humans. Some cars used for delivery or pickups may drive without human occupants, others will carry passive human occupants, and still others will be driven by human drivers. Analogous interactions occur with automated or robotic check-in agents, bank tellers, surgeons, and the like.

Partnerships with consistent reciprocators may consolidate into stronger, more productive partnerships when the reciprocators are fellow humans, because humans elicit more gratitude than robots do. Conversely, partnerships with inconsistent reciprocators may be more stable when the reciprocators are robots, because robots elicit less anger than humans do. Further, because participants experienced pride and guilt more intensely when robots served a human beneficiary, people may be more likely to re-extend trust to such partners.

The human cognitive architecture evolved to have enough structure and content to promote our ancestors' survival and reproduction, while also having the flexibility to navigate novel challenges and opportunities. These features enable humans to design and rationally interact with artificial intelligence and robots. Still, interactions with automata, and science's ability to explain those interactions, remain imperfect, because automata lack the psychophysical cues we expect in an interaction and are often guided by unexplainable or unintuitive decision logic.
