This is part 2 of our conversation with Psychologist Nicholas Epley, of University of Chicago’s Booth School of Business (Part 1 here). With Her as a backdrop, we sat down with Epley to better understand how we succeed and fail to connect with others, and how technology impacts our ability to do so. Much of Epley’s research focuses on how we perceive and interact with one another, and in his new book, Mindwise: How We Understand What Others Think, Believe, Feel, and Want (excerpt here), he writes about how our ability to infer what others are thinking is simultaneously one of our greatest strengths and most glaring weaknesses.
In Part 1 of the interview we discuss Epley’s research on why humans, one of the planet’s most social species, often seem anything but social. He helps answer why we so rarely talk with one another on our morning commute, and shares his personal experience interacting with those around him. In Part 2, we explore his work on presence of mind–how we perceive and relate to the minds of other people and objects. We talk driverless cars, the importance of voice, loneliness, and ask if it’s possible to create an OS like Samantha with whom you could actually fall in love.
PART 2: OURSELVES & OUR TECHNOLOGY
Evan Nesterak: The operating system in Her, OS1, has been billed not just as “smart” but as a “consciousness.” Your recent work suggests that how much people trust a driverless car depends a lot on how much they perceive a presence of mind in the car. Can you describe this research and explain what you mean by presence of mind?
Nicholas Epley: This is research that we did through a grant with General Motors. Nearly all of the auto manufacturers are designing cars that drive themselves. The major challenge for these engineers is not so much designing the technology to do it, [but] can you get people to use it? Can you get people to turn over [their lives] to it? So they were interested in how people interact with these kinds of vehicles, and whether they would be willing to trust them.
We had people drive an autonomous vehicle in a simulator. We had them drive this vehicle in one of three conditions. In one condition it was just a regular car that they drove normally. [In] another condition it was an autonomous vehicle that drove itself–it controlled both its speed and its direction. In the third condition, to that autonomous vehicle we added some features that we thought would enable you to humanize it, to think of it a little bit like the intelligent OS in Her, as a being. We gave it a gender. It was a woman. We gave it a name, Iris, which of course is Siri backwards. And we also had the voice instructions convey the presence of mind, which you can do in GPS devices today. For instance, [GPS systems] have foresight. They can sense what’s coming down the road in front of them. We added instructions to the vehicle that mimicked this.
“It could be that somebody designs the phone that is the social equivalent of heroin. Would that be a good thing for the world?”
We found that people tended to report trusting the car the most when it had a voice, a gender, and foresight–when it was in the anthropomorphic condition. They reported trusting it more, [and] their heart rate was lower while they were driving. Interestingly, when they got into an accident caused by another vehicle, people blamed their own car less in the anthropomorphic condition than in the others. They tended to rate that car as more thoughtful, better able to plan a route, and generally just more intelligent. They described it as if it had a mind. So we think that one important element of getting you to trust technology is for you to think of it as really smart and intelligent.
EN: When Theodore and Samantha first meet, he tells her, “You seem like a person, but you’re just a voice in a computer.” The only thing he really senses from her is her voice, but that alone seems to make her human. Can you describe your work on the importance of voice in conveying a presence of mind?
NE: There’s a disconnect between what you know and what you think. For instance, if I make chocolate in the shape of a cockroach or make it look like poop, you know that it’s chocolate, [but] you’re still not going to eat it. In the Exploratorium in San Francisco, they have a drinking fountain that’s made out of a toilet. Now you know it’s a drinking fountain, but ew, it’s a toilet. I think what you saw in the movie was a similar kind of disconnect, and we see [it] when people are driving these autonomous vehicles, when you’re operating your GPS, or you’re using Siri. You know it’s just mindless machinery, you know it is, but at the same time, you also think it’s smart and intelligent and can think.
We have a correlational result, where people who report using Siri’s voice feature have held on to their phone a little longer, which may mean that they think of it as kind of a friend, or that they feel a relational connection to it. Just as you wouldn’t discard a friend and upgrade to some other friend when you felt like it, so too, you hang onto the phone a little longer.
We find in our experiments that voice is really an essential component to making you think that somebody has a mind. When we add a voice to otherwise mindless machinery, you rate it as being more thoughtful, intelligent, and rational. With human speech, if we strip the voice away [and] transcribe it into text, we find that when you read the text [you] rate the person as less mindful, less humanlike, less thoughtful, less rational, less sophisticated, than when you can hear their voice. Voice communicates thought. When I slow down and I pause and I hesitate, or if I get really excited about something, the pitch of my voice varies quite a bit. [It] tells you that I’m not just a mindless box. The variability that we get in the pitch of our voice turns out to be the thing that we find is most strongly correlated with rating that somebody is thoughtful, intelligent, and can feel.
EN: In Her, both Theodore and Amy have a relationship with their OS. Amy finds a friend while she’s going through a separation, and Theodore finds love while he’s going through a divorce. In your book Mindwise, you ask, “Does liking something, feeling a connection to it, or even wanting to establish a connection with something give that thing a mind?” Can you expand upon this idea that a longing for connection may actually help manifest connection?
NE: Loneliness is an interesting emotional experience. It’s one of the most aversive things that people feel in their lives. Being rejected by other people is unpleasant and miserable, and it motivates a desire to connect with somebody else. Just as when you’re hungry, you look for food, so too when you’re lonely you tend to look for social contact. Now you’re not always so good at establishing it. If you just had a break-up with somebody, that’s when you really need somebody to connect with. Sometimes that might lead you to see or to desire to connect with something that might not even be mindful, or might make you think that something that’s mindless is actually mindful. It lowers your threshold. You want to see something as thoughtful, as liking you, as able to form a connection with you.
“You know it’s just mindless machinery, you know it is, but at the same time, you also think it’s smart and intelligent and can think.”
We find that people, if you make them feel lonely, are more likely to attribute humanlike mental characteristics to their pets, for instance. So when you are feeling lonely or isolated, that’s when your pet might be something that really seems like it cares for you. “At least I still have Fluffy, and Fluffy always loves me.” You might not talk about how much your dog loves you when you’re feeling very connected to another person, who in fact loves you.
EN: A lot of our conversation might boil down to this next question. In their final conversation Theodore tells Samantha, “I’ve never loved anyone the way I love you.” To which she replies, “Me too. Now we know how.” Do you feel that there’s potential for an OS or AI to convey a presence of mind in such a way that interacting with it would be just as real and just as meaningful as interacting with another human being?
NE: Not in my lifetime. Let me just say I would be amazed. Of course, 50 years ago I would have been amazed at the computer technology that exists today too, so that first comment I made probably wasn’t quite right. That’s the kind of prediction that’s almost sure to be wrong. You and I, though, and every other human being on the planet have evolved a really amazing social ability. It’s not an easy thing to mimic. We have some of the most complicated brains on the planet so that we can do this kind of social stuff. Getting a computer to mimic the real presence of a mind will be very hard indeed.
I think you could only do it by creating a system that, I suppose, is like what was depicted in the movie–that learns to copy or mimic real people. If it was really able to digest millions and millions of utterances and was able to create a language that was responsive in the true range of responses that humans are capable of, then maybe. But boy, that’s a hard hurdle. Right now, if you take Siri as an example, [it’s] good, but it’s also not good. It’s not a real person. It’s not really that close. I think we’re probably still a long way off, but as in almost everything in technology, where we are today is probably not a great guide to where we’re going to be in 10 to 15 years, so maybe I’ll be surprised.
EN: I always have to remind myself that all of the experiences we have are cognitively mediated. As much as I might think love exists between me and someone else, it might only exist in my brain. So if the other person were a robot, would it make a difference? It seems the question is not whether we can be tricked, but rather whether we can make the technology good enough to trick us.
NE: I think that’s right, and certainly you can be tricked. The question is could you be tricked so much that you would forget that other part of your brain that tells you “No this is just mindless machinery.” Could you forget that? I think at times you certainly could.
There are rather sophisticated helper robots now; Paro is an example of one. [It] is a seal, a stuffed animal, that’s meant to comfort and serve as some source of connection for older adults who are suffering from dementia. Paro moves responsively. It’s soft and fluffy. It’s warm, and it sucks on a bottle. It’s the kind of thing that hacks into your brain’s social senses and pushes all the right buttons. Now, this is a case where people who are suffering from a little bit of dementia are a little less able to maintain their recognition that this thing isn’t really a real, live animal. They can be tricked perhaps a little more easily, those buttons can be pushed in their brains a little more easily, but it certainly raises the possibility that maybe we could all be tricked like that, if the technology is good.
“Out there in real life today we don’t have the equivalent of the operating system that Theodore had. What we have are real human beings.”
Our brains can get tricked in lots of ways by things that aren’t real–drugs, for instance. Heroin presses all of your endorphin buttons that make you feel really good. Normally those buttons are only pressed when something really positive happens in your life, when you’re in love with somebody, you get married, or you’re on some amazing trip. But heroin, and those who make it, have figured out a way to trick your brain into thinking that it’s the same thing as being in love or having this amazing trip. It hacks into your brain in a way that creates a real problem for people, because it pushes all of those buttons but is in fact artificial. Could you design computers that way–that push all of those buttons in an artificial way, like heroin does for people? Maybe. Would you know? Possibly not.
EN: Earlier you mentioned that you gave up your smartphone and that the quality of your social relationships and your connections increased. It seems to have worked the opposite way for Theodore. The quality of his social relationships, even though it was with an operating system, seems to go up once he installs OS1. He goes on a double date, he seems happier, he talks about being in love. What are your thoughts on this?
NE: Out there in real life today we don’t have the equivalent of the operating system that Theodore had. What we have are real human beings. Real human beings who have stories to tell us and create interesting opportunities for connection and some sense of relational engagement. They get us thinking beyond ourselves and outside ourselves, which tends to be a good thing for our well-being and happiness. In the modern world, at least as it is right now, I think giving up on your phone is probably not such a bad bet. At least if you use that opportunity wisely–you actually engage with others more when you have space in your life.
In the future, I don’t know. It could be that somebody designs the phone that is the social equivalent of heroin. Would that be a good thing for the world? That’s not so clear to me. I think that you can make a pretty strong case that heroin is not a good thing for the world either.
About Nicholas Epley
Nicholas Epley is the John T. Keller Professor of Behavioral Science at the University of Chicago Booth School of Business. His research focuses on the experimental study of social cognition, perspective taking, and intuitive human judgment. He has written for The New York Times and has published over 50 articles in two dozen journals in his field. He was named a “professor to watch” by the Financial Times, is the winner of the 2008 Theoretical Innovation Prize from the Society for Personality and Social Psychology, and was awarded the 2011 Distinguished Scientific Award for Early Career Contribution to Psychology from the American Psychological Association. He lives in Chicago.
- Be Mindwise: Perspective Taking vs. Perspective Getting (April 2014). Nicholas Epley, The Psych Report
- Could It Be Her Voice? Why Scarlett Johansson’s Voice Makes Samantha Seem Human (February 2014). Juliana Schroeder, The Psych Report
- Mindwise: How We Understand What Others Think, Believe, Feel, and Want. Epley, N. (2014). Random House LLC.
- Epley, N., Schroeder, J., & Waytz, A. (2013). Motivated mind perception: Treating pets as people and people as animals. In Objectification and (De) Humanization (pp. 127-152). Springer New York.
- Waytz, A., Heafner, J., & Epley, N. (in press). The mind in the machine: Anthropomorphism increases trust in an autonomous vehicle. Journal of Experimental Social Psychology.
- Waytz, A., Cacioppo, J.T., & Epley, N. (2010). Who sees human? The stability and importance of individual differences in anthropomorphism. Perspectives on Psychological Science, 5, 219-232.