In this episode, Byron and Daniel talk about magic, robots, Alexa, optimism, and ELIZA.
Byron Reese: This is Voices in AI, brought to you by Gigaom. I’m Byron Reese. Today, our guest is Daniel Wilson. He is the author of the New York Times best-selling Robopocalypse, and its sequel, Robogenesis, as well as other books, including How to Survive a Robot Uprising, A Boy and His Bot, and Amped. He earned a PhD in Robotics from Carnegie Mellon University, and a master’s degree in AI and robotics as well. His newest novel, The Clockwork Dynasty, was released in August 2017. Welcome to the show, Daniel.
Daniel H Wilson: Hi, Byron. Thanks for having me.
So how far back—the earliest robots—I guess they began in Greek myth, didn’t they?
Yeah, so it’s something I have been thinking a lot about, because automatons play a major part in my novel The Clockwork Dynasty. I started thinking about, how far back does this desire to build robots, or lifelike machines, really go? Yeah, and if you start to look at history, you’ll see that we have actual artifacts from the last few hundred years.
And before that, we have a lot of stories. And before that, we have mythology, and it does go all the way back to Greek mythology. People might remember that Hephaestus supposedly built tripod robots to serve the gods on Mount Olympus.
They had to chain them up at night, didn’t they, because they would wander off?
I don’t remember that part, but it wouldn’t surprise me. Yeah, that was written down somewhere; someone reported having visited and seen it. I think there was the giant bronze robot that guarded… I think it was Crete, that was called Talos? That was another one of Hephaestus’s creations. So yeah, there are stories about lifelike machines that go all the way back into prehistory, and into mythology.
I think even in the story of Prometheus, in its earliest tellings, it was a robot eagle that actually flew down and plucked his liver out every day?
Oh, really… I didn’t remember that. I always, of course, loved the little robots from Clash of the Titans, you know the robot owl… do you remember his name?
No.
Bubo, or something.
That’s funny. So, those were not, even at the time, considered scientific devices, right? They were animated by magic, or something else. Nobody looked at a bunch of tools and thought, “A-ha, I can build a mechanical device here.” So where do you think it came from?
Well, you know, I think obviously human beings are really fascinated with themselves, right? Think about Galatea, and creating sculptures, and creating imitations of ourselves, and of animals, of course. It doesn’t surprise me at all that people have been trying to build this stuff for a really long time; what is kind of interesting to consider is to look at how it’s evolved over centuries and centuries.
Because you’re right; one thing that I have found doing research for this novel—and it’s really fascinating to me—is that our concept of the scientific method, the idea of the world as a machine whose pieces we can pick up and build new things from, and the idea that we can figure out underlying physical principles: that’s a relatively new viewpoint, one that human beings haven’t really had for that long.
Looking at automatons, I saw that there’s a sort of pattern: the longer we’ve built these things, the more they’ve become living embodiments of the world as a machine. If you look at the automatons being built during the Middle Ages, the medieval times, and then up through the beginning of the Industrial Revolution, you see that people like Descartes, and the philosophers who really helped us, as a civilization, solidify our viewpoint of the way nature works and the way science works, were inspired by automatons, because automatons showed a living embodiment of what it would be like if an animal were made out of parts.
Then you go and dissect a real animal, and you start to think, “Wait, maybe I can figure this out. Maybe it’s not just, ‘God created it, walk away from it; it is what it is.'” Maybe there’s actually some rule or rhyme under this, and we can figure it out. I think that these kinds of machines actually really helped propel our civilization towards the technological age that we live in right now, because these philosophers were able to see this playing out.
Sorry, not to prattle on too long, but one thing I also really love about medieval times specifically is that the notions of how this stuff worked were very set down, but they were also very magical. There were different types of magic; that’s what I really loved finding in my research. Whenever people saw something like an aqueduct functioning, they would think of it as a natural kind of magic, whereas geometry, or pure math, was a celestial type of magic.
But underneath all of it there were always angels or demons, and always the suspicion of a necromantic art, the idea that this lifelike thing was animated by a spirit of the dead. There was so much magic and mystery laced into science at the time that I think it really hindered the ability to develop iterative scientific advancements.
So picking up on that a bit, early nineteenth century, you’ve got Frankenstein. Frankenstein was a scientific creation, right? There was nothing magical about that. Can you think of an example before Frankenstein where the animating force was science-based?
The animating force behind some kind of creature, or like lifelike automaton? Yeah, I really can’t. I can think of lots of examples of stuff like Golem, or something like that, and they are all kind of created by magic, or by deities. I’m trying to think… I think that all of those ideas really culminated right around the time of the Industrial Revolution, and that was really reflective of their time. Do you have any examples?
No. What do you know about Da Vinci’s robot?
Not much. I know that he had a lot of sketches for various mechanical devices.
He, of course, couldn’t build it. He didn’t have the tools, but obviously what Da Vinci would have made would have been a purely scientific thing, in that sense.
Sure, but even if it were, that doesn’t mean that other people wouldn’t have applied the mindset that, whatever his inventions were, they were powered by natural magic, or some kind of deity or spirit. It’s kind of funny, because people back then were able to completely hold both of those ideas in their heads at once.
They could completely believe the idea that whatever they were creating was magical, and at the same time, they were doing science. It’s such an interesting thing to contemplate, being able to do science from that mentality.
Let’s go to the 1920s. Talk to us about the play that gives us the word “robot.”
Wow, this is like a quiz. This is great. So, you’re talking about R.U.R., the Čapek play. Yeah, Rossum’s Universal Robots—it’s a play from the 1920s in which a scientist creates a race of robots. And of course, what do they do? They rise up, overthrow humanity, and kill every single one of us. It’s credited as the place where the term “robot” was coined, and yeah, it plays out the way a lot of the stories about robots have played out ever since.
One of the things that is interesting about R.U.R. is that, so often, we use robots differently in our stories, based on whatever the context is, of what’s going on in the world at the time, because robots are really reflections of people. They are kind of this distorted mirror that we hold up to ourselves. At that time, you know, people were worried about the exploitation of the working class. When you look at R.U.R., that’s pretty much what those robots embodied.
They are the children of men, they are working class, they rise up and they destroy their rulers. I think the lesson there was clear for everybody in the 1920s who went to go see that play. Robots represent different things, depending on what’s going on. We’ve seen lots of other killer robots, but they’ve embodied or represented lots of other different evils and fears that we have, as people.
Would you call that 1920s version of a robot a fully-formed image in the way we think of them now? What would have been different about that view of robots?
Well, no. Those robots, they just looked like people, but I don’t even think there was the idea that they were made of metal, or anything like that. I think that that sort of image of the pop culture robot evolved more in the ’40s, ’50s, and ’60s, with pulp science fiction, when we started thinking of them as “big metal men”—you know, like Gort from The Day the Earth Stood Still, or Robby, or all of these giant hunks of metal with lights and things on them—that are more consistent with the technology of that time, which was the dawn of rocket ships and stuff like that, and that kind of science fiction.
From what I recall, in R.U.R., they aren’t mechanical at all. They are just like people, except they can’t procreate.
The reason I asked whether you thought they were fully modern: let me just read you this quote from the play, and tell me what it sounds like to you. This is Harry Domin, one of the characters, and he says:
“In ten years, Rossum’s Universal Robots will produce so much corn, so much cloth, and so much of everything that things will be practically without price. There will be no poverty, all work will be done by living machines. Everyone will be free from worry, and liberated from the degradation of labor. Everyone will live only to perfect himself.”
Yeah, it’s a utopian post-economy. Of course, it’s built on the back of slaves, which I think is the point of the play—we’re all going to have great lives, and we’re going to be standing right on the throats of this race of slaves that are going to sacrifice everything so we can have everything.
I guess I am struck by the fact that it seems very similar to what people’s hope for automation is right now—”The factories will run themselves.” Who was it that said, “The factory of the future will only have two employees—a man and a dog. The man’s job will be to feed the dog, and the dog’s job will be to keep the man from punching the machines.”
I’ve been cooking up a little rant about this, lately, honestly. I might as well launch into it. I think that’s actually a really naïve and childish view of a future. I’m starting to realize it more and more as I see the technology that we are receiving. This is sort of the first fruit, right?
Because we’ve only just gotten speech recognition to a level that’s useful, and gesture recognition, and maybe a little bit of natural language, and some computer vision, and then just general AI pattern recognition—we’re just now getting useful stuff from that, right?
We’re getting stuff like Alexa, or these mapping algorithms that can take us from one place to another, and Facebook and Twitter choosing what they think would be most interesting to us, and I think this is very similar to what they’re describing in R.U.R.: this perfect future where we do nothing.
But doing nothing is not perfect. Doing nothing sucks. Doing nothing robs a person of all their ability and all their potential—it’s not what we would want. But a child, a person who just stumbled upon a treasure trove of this stuff, that’s what they’d think; that’s like the first wish you’d make, that would then make the rest of your life hell.
That’s what we are seeing now, what I’ve been calling the “candy age” of artificial intelligence, where people—researchers and technologists—are going, “What do people want? Let’s give them exactly what they say they want.”
Then they do, and then we don’t know how to get around the cities we live in, because we depend on a mapping algorithm. We don’t know the viewpoints our neighbors have, because we’ve never actually read an article that doesn’t tell us exactly what our worldview already is; there are a million examples. Talking to Alexa, I don’t have to say “please” or “thank you.” I just order it around, and it does whatever I say, and delivers whatever I ask for.
I think that, and hope that, as we get a little bit more of a mature view on technology, and as the technology itself matures, we can reach a future in which the technology doesn’t deliver exactly what we want, exactly when we want it, but the technology actually makes us better, in whatever way it can. I would prefer that my mapping algorithm not just take me to my destination, I want it to help me know where stuff is myself. I want it to teach me, and make me better.
Not just give me something, but make me better. I think that, potentially, that is the future of technology. It’s not a future where we’re all those overweight, helpless people from Wall-E leaning back in floating chairs, doing nothing and totally dependent on a machine. I think it’s a future where the technology makes us stronger, and I think that’s a more mature worldview and idea of the future.
Well, you know, in the quote that I read, though, he said that everybody will spend their time perfecting themselves. And I assume you’ve seen Star Trek before?
Sure, yes.
There’s an episode where the Enterprise thaws some people out from the twentieth century, and one of the guys—his name is Offenhouse—asks what the challenge is in a world where there are no material needs, no hunger, and all of that. And Picard says the challenge is to become a better person, and make the most of it. So that’s also part of the narrative as well, right?
Yeah, and I think that slots in kind of well with the Alexa example, you know? Alexa is this AI that Amazon has built—oh God, and mine’s talking to me right now because I keep saying her name—that sits in your house, and you tell it what to do, and you don’t have to be polite to it. And this is kind of interesting to contemplate, right?
If your future with technology is a place where you are going to hone your sense of being the best version of yourself that you can be, how are you going to do that if you’re having interactions with lifelike machines in which you don’t have to behave ethically?
Where it’s okay to shout at Alexa—sorry, I’ve got to whisper her name—who, by the way, sounds exactly like a woman, and has a woman’s voice, and is therefore implicitly teaching you via your interaction with her that it’s okay to shout at that type of a voice.
I think it’s not going to be mutually exclusive—where the machines take over everything and you are free to be by yourself—because technology is a huge part of our life. We are going to have to work with technology to be the best versions of ourselves.
I think another example you can find easily is just looking at athletes. You don’t gauge how fast a runner is by putting them on a motorcycle; they run. They’re human. They are perfecting something that’s very human. And yet, they are doing it in concert with extreme levels of technology, so that when they do stand on the starting mark, ideally under the same conditions that every other human has stood on a starting mark for the last, however long, and the pistol goes off, and they start running, they are going to run faster than any human being who ever ran before.
The difference is that they are going to have trained with technology, and it’s going to have made them better. That’s kind of the non-mutually-exclusive future that I see, or that I end up writing science fiction about, since I’m not actually a scientist and I don’t have to do any of this stuff.
Let me take that idea and run with it for just a minute. Just to set this up for the listener: in the 1960s, there was a man named Joseph Weizenbaum, who wrote a program named ELIZA. ELIZA was kind of a therapy bot—I guess that’s how we would think of it now—and you would say something like, “I’m having a bad day,” and it would say, “Why are you having a bad day?” And you would say, “I’m having a bad day because of my boyfriend,” and it would say, “What about your boyfriend is making you have a bad day?”
It’s really simple, and uses a few linguistic rules. And Weizenbaum saw people engaging with it, and even though they knew it was a machine, he saw them form an emotional attachment—they would pour their heart out to it, they would cry. And he turned on AI, as it were. He deleted ELIZA and said, when the computer says, “I understand,” it’s just a lie, because there’s no “I” and no understanding.
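ELIZA’s trick was just this kind of pattern-and-response rule. For the curious, here is a minimal sketch in Python of how such a rule might work; the RULES list, the REFLECT table, and the eliza_reply function are illustrative assumptions for this sketch, not Weizenbaum’s original DOCTOR script.

```python
import re

# Toy ELIZA-style rules: (pattern, reply template). Illustrative only,
# not Weizenbaum's original script.
RULES = [
    (r"i'?m having a bad day because of (.*)",
     "What about {0} is making you have a bad day?"),
    (r"i'?m having a bad day",
     "Why are you having a bad day?"),
]

# Minimal pronoun reflection so "my boyfriend" is echoed back as "your boyfriend".
REFLECT = {"my": "your", "i": "you", "me": "you"}

def reflect(fragment: str) -> str:
    return " ".join(REFLECT.get(word, word) for word in fragment.split())

def eliza_reply(text: str) -> str:
    # Try each rule in order; the first pattern that matches produces the reply.
    for pattern, template in RULES:
        match = re.search(pattern, text.lower())
        if match:
            return template.format(*(reflect(g) for g in match.groups()))
    return "Tell me more."  # fallback when no rule matches

print(eliza_reply("I'm having a bad day"))
# -> Why are you having a bad day?
print(eliza_reply("I'm having a bad day because of my boyfriend"))
# -> What about your boyfriend is making you have a bad day?
```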
He distinguished between choosing and deciding. He said, “Deciding is something a computer can do, but choice is a human thing.” He was against using computers as substitutes for people, especially anything that involved empathy. Is your observation about Alexa that we need to program it to require us to say please, or we need to not give it a personality, or something different?
Absolutely, we need to just figure out ethical interactions and make sure that our technology encourages those. And it’s not about the technology. No one cares about whether or not you’re hurting Alexa’s feelings; she doesn’t have any feelings. The question is, what kind of interactions are you setting up for yourself, and what kind of behaviors are you implicitly encouraging in yourself?
Because we get to choose the environments that we are in. The difference between when ELIZA was written and now is that we are surrounded by technology; every minute of our lives involves technology. Back then, you could say, “Oh, let’s erase the program, this is sick, this is messed up.” Well, guess what, man, that’s not the world anymore.
Every teenager has a real social network, and then they have a virtual social network, that’s bigger and stranger and more complex, and possibly more rewarding than the real people that are out there. That’s the environment that we live in now. It’s not a choice to say “turn it off,” right? We’re too far. I think that the answer is to make sure that technologists remember that this is a dimension that they have to consider while they create technology.
That’s kind of a new thing, right? We didn’t use to have to worry about consumer products—are people going to fall in love with a toaster, are people going to get upset when the toaster goes kaput, are people going to curse at the toaster and become worse versions of themselves? That wasn’t an issue then, but it is an issue now, because we are having interactions with lifelike artifacts. Therefore, ethical dimensions have to be considered. I think it’s a fascinating problem, and I think it’s something that is going to really make people better, in the end.
Assuming we do make machines that simulate emotions—you can have a bot best friend, or what have you—do you think that that is something that people will do, and do you think that that is healthy, and good, and positive?
It’s going to be interesting to see how that shakes out. Talking in terms of decision versus choice, one thing that’s always stuck with me is a moment in the movie AI, when Gigolo Joe—who is exactly what he sounds like, and he’s a robot—looks this woman in the eyes and says, “You are the most beautiful woman in the world.” Immediately, you look at that, and you go, he’s just a robot, that doesn’t mean anything.
He just said, “You’re the most beautiful woman in the world,” but his opinion doesn’t mean anything, right? But then you think about it for another second, and you realize: he means it. He means it with every fiber of his being, and there’s probably no human alive who could look at that exact woman, at that exact moment, say, “You’re the most beautiful woman alive,” and really mean it. So, there’s value there.
You can see how that value exists when you see complete earnestness versus how a wider society might attribute a zero value to the whole thing, but at least he means it. So yeah, I can kind of see both sides of this. I’m judging now from the environment that I live in right now, the context of the world that I have; I don’t think it would be a great idea. I wouldn’t want my kids to just have virtual friends that are robots, or whatever, but you never know.
I can’t make that call for people twenty years from now. They could be living in a friggin’ apocalypse, where they don’t have access to human beings and the only thing that they’ve got are virtual characters to be friends with. I don’t know what the future is going to bring. But I can definitely say that we are going to have interactions with lifelike machines, there are going to be ethical dimensions to those interactions; technologists had better figure out ways to make sure those interactions make us better people, and not monsters.
You know, it’s interestingly an old question. Do you remember that original Twilight Zone episode about the guy who’s on the planet by himself—I think he’s in prison—and they leave him a robot. He gets a pardon, or something, and they go to pick him up, and they only have room for him, not the robot, and he refuses to leave the robot.
So, he just stays alone on the planet. It’s kind of interesting that fifty years ago, we looked ahead and that was a real thing that people thought about—are synthetic emotions as valuable to a human as real ones? I assume you think we are definitely going to face that—as a roboticist—we certainly are going to build things that can look you in the eye, and tell you that you are beautiful, in a very convincing way.
Yes. I have a very humanist kind of viewpoint on this. I don’t think technology means anything without people. I think that technology derives its value entirely from how much it matters to human beings. It’s the part of me that gets very excited about this idea of the robot that looks you in the eye and says, “I love you,” but I’m not interested in replacing human relationships that I have.
I don’t know how many friends you have, but I have a couple of really good friends. That’s all I can handle. I have my wife, and my kids, and my family. I think most people aren’t looking to add more and replace all their friends with machines, but what I get excited about is how storytelling is going to evolve. Because all of us are constantly scouring books and movies and television, because we are looking for glimpses of those kinds of emotional interactions and relationships between people, because we feed on that, because we are human beings and we’re designed to interact with each other.
We just love watching other human beings interact with each other. Having written novels and comic books and screenplays and the occasional videogame, I can’t wait to interact with these types of agents in a storytelling setting, where the game, the story, is literally human interaction.
I’ve talked about this a little bit before, and some examples I’ve cooked up, like… What if it’s World War I, and you’re in No Man’s Land, and there are mortars streaking out of the sky, blowing up, and your whole job for this story is to convince your seventeen-year-old brother to get out of the crater and follow you to the next crater before he gets killed, right? The job is not to carry a videogame gun and shoot back. Your job is to look him in the eye, and beg him, and say, “I’m begging you, you have to get up, you have to be strong enough to come with me and go over here, I promised mom you would not die here!” You convince him to get up and go with you over the hill to the next crater, and that’s how you pass that level of that story, or that’s how you move through that storytelling world.
That level of human interaction with an artificial agent, where it’s looking at me, and it can tell whether I mean it, and it can tell if there’s emotion in my voice, and it can tell if I’m committed to this, and it can also reflect that back to me accurately, through the actions of this artificial agent—man, now that is going to be a really fascinating way to engage in a story. And I think, it has—again, like I’ve been harping on—it has the ability to make people better through empathy, through sharing situations that they get to experience emotionally, and then understand after that.
Thinking about replacing things is interesting, and often depressing. I think it’s more interesting to think about how we are going to evolve, and try out new things, and have new experiences with this type of technology.
Let’s talk a little bit about life and intelligence. So, will the robots be alive? Do you think we are going to build living machines? And by asking you the question, I am kind of implicitly asking you to define “life.”
Sorry, let’s back up. The question is: Do we think we’re going to build perfectly lifelike machines?
No. Will we build machines that are alive—whether they look human or not, I’m not interested in—will there be living machines?
That’s interesting, I mean—I only find that interesting in a philosophical way to contemplate. I don’t really care about that question. Because at the end of the day, I think Turing had it right. If we are talking about human-like machines, and we are going to consider whether they are alive—which would probably mean that they need rights, and things like that—then I think the proof is just in the comparison.
I’m making the assumption that every other human is conscious. I’m assuming that I’m conscious, because I’m sitting here feeling what executive function feels like, but, I think that that’s a fine hoop to jump through. Human-like level of intelligence: it’s enough for me to give everyone else the benefit of the doubt, it’s enough for them to give me the benefit of the doubt, so why wouldn’t I just use that same metric for a lifelike machine?
To the extent that I have been convinced that I’m alive, or that anybody is alive, I’m perfectly willing to be convinced that a machine is alive, as well.
I would submit, though, that it is the farthest thing from a philosophical question, because, as you touched on, if the machine is alive, then it has certain rights? You can’t have it plunge your toilet, necessarily, or program it to just do your bidding. Nobody thinks the bots we have now are alive. Nobody worries—
—Well, we currently don’t have a definition of “life” that everyone agrees on, period. So, throwing robots into that milieu, is just… I don’t know…
We don’t have to have a definition. We can know the endpoints, though. We know a rock is not alive, and we know a human is alive. The question isn’t, are robots going to walk in some undefined grey area that we can’t figure out; the question is, will they actually be alive? And if they’re alive, are they conscious? And if they’re conscious, then that is the furthest thing from a philosophical question. It used to be a philosophical question, when you couldn’t even really entertain the question, but now…
I’m willing to alter that slightly. I’ll say that it’s an academic question. If the first thing that leads off this whole chain is, “Is it alive?” and we have not yet assigned a definition to that symbol—A-L-I-V-E—then it becomes an academic discussion of what parameters are necessary in order to satisfy the definition of “alive.”
And that is not really very interesting. I think the more interesting thing is, how are we actually going to deal with these things in our day-to-day lives? So from a very practical, concrete manner, like… I walk up to a robot, the robot is indistinguishable from a human being—which, that’s not a definition of alive, that’s just a definition—then how am I going to behave, what’s my interaction protocol going to be?
That’s really fun to contemplate. It’s something that we are contemplating right now. We’re at the very beginning of making that call. You think about all of the thought experiments that people are engaging in right now regarding autonomous vehicles. I’ve read a lot lately about, “Okay, we got a Tesla here, it’s fully autonomous, it’s gotta go left or right, can’t do anything else, but there’s a baby on the left, and an elderly person on the right, what are we going to do? It’s gotta kill somebody; what’s going to happen?”
The fact is, we don’t know anything about the moral culpability, we don’t know anything about the definitions of life or of consciousness, but we’ve got a robot that’s going to run over something, and we’ve got to figure out how we feel about it. I love that, because it means that we are going to have to formalize our ethical values as a society.
I think that’s something that’s very good for us to consider, and we are going to have to pack that stuff into these machines, and they are going to continue to evolve. My feeling is that I hope that by the time we get to a point where we can sit in armchairs and discuss whether these things are alive, they’ll of course already be here. And hopefully, we will have already figured out exactly how we do want to interact with these autonomous machines, whether they are vehicles or human-like robots, or whatever they are.
We will hopefully already have figured that out by the time we smoke cigars and consider what “aliveness” is.
The reason I ask the question… Up until the 1990s, veterinarians were taught not to use anesthetic when they operated on animals. The theory was—
—And on babies. Human babies. Yeah.
Right. That was the scientific consensus, right? The question is, how would we have known? Today, we would look at that and say, “That dog really looks like it’s hurting,” and we would be intensely curious to know whether it actually is. Of course we call that sentience, the ability to sense something, generally pain, and we base our laws on it.
Human rights arrived, in part, because we are sentient. And animal cruelty laws arrived because animals are sentient. And yet, we don’t get in trouble for using antibiotics on bacteria, because they are not deemed to be sentient. So all of a sudden we are going to be confronted by something that says, “Ouch, that hurt.” And either it didn’t, and we should pay that no mind whatsoever, or it did hurt, which is a whole different thing.
To say, “Let’s just wait until that happens, and then we can sit around and discuss it academically” is not necessarily what I’m asking—I’m asking how will we know when that moment changes? It sounds like you are saying, we should just assume, if they say they hurt, we should just assume that they do.
By extension, if I put a sensor on my computer, and I hold a match to it, and it hits five hundred degrees, and it says “ouch,” I should assume that it is in pain. Is that what you’re saying?
No, not exactly. What I’m saying is that there are going to be a lot of iterations before we reach a point where we have a perfectly lifelike robot that is standing in front of you and saying, “Ouch.” Now, what I said about believing it when it says that, is that I hold it to the same bar that I hold human beings to: which is to say, if I can’t tell the difference between it and a human being, then I might as well give it the benefit of the doubt.
That’s really far down the line. Who knows, we might not ever even get there, but I assume that we would. Of course, that’s not the same standard that I would hold a CPU to; I wouldn’t consider the CPU as feeling pain. My point is that for every iteration we have, until we reach that perfectly lifelike humanoid robot standing in front of us and saying, “You hurt my feelings, you should apologize,” the emotions these things exhibit are only meaningful insomuch as they affect the human beings around them.
So I’m saying, to program a machine that says, “Ouch, you hurt my feelings, apologize to me,” is very important, as long as it looks like a person. And there is some probability that by interacting with it as a person, I could be training myself to be a serial killer without knowing it, if it didn’t require that I treat it with any moral care.
Is that making any sense? I don’t want to kick a real dog, and I don’t want to kick a perfectly lifelike dog. I don’t think that’s going to be good for me.
Even if you can argue that one dog doesn’t feel it, and the other dog does. In the case that one of the dogs is a robot, I don’t care about that dog actually getting hurt—it’s a robot. What I care about is me training myself to be the sort of person who kicks a dog. So I want that robot dog to not let me kick it—to growl, to whimper, to do whatever it does to invoke whatever the human levers are that you pull in order to make sure that we are not serial killers… if that makes any sense.
Let me ask in a different way, a different kind of question. I call a 1-800 number of my airline of choice, and they try to route me into the automated system, and I generally hit zero, because… whatever.
I fully expect that there is going to be a day, soonish, where I may be able to chat with a bot and do some pretty basic things without even necessarily knowing that it’s a bot. When I have a person that I’m chatting with, and they’re looking something up, I make small talk, ask about the weather, or whatnot.
If I find myself doing that, and then towards the end of the call I figure out that this isn’t even a person, I will feel tricked, like I wasted my time. There’s nothing there that heard me. We yell at the TV—
—No. You heard you. When you yell at the TV, you yell for a reason. You don’t yell at an empty room for no reason, you yell for yourself. It’s your brain that is experiencing this. There’s no such thing as anything that you do which doesn’t get added up and go into your personality, and go into your daily experiences, and your dreams, and everything that eventually is you.
Whatever you spend your time doing, that’s going to have an impact on who you are. If you’re yelling at a wall, it doesn’t matter—you’re still yelling.
Don’t you think that there is something different about interacting with a machine and interacting with a human? We would by definition do those differently. Kicking the robot dog, I don’t think that’s going to be what most people do. But if the Tesla has to go left or go right, and hit a robot dog or a real dog… You know which way it should go, right?
Clearly, with the Tesla, we don’t care what decision it makes; we’re not worried about the impact on the Tesla. The Tesla would obviously kill a dog. If it were a human being who had the choice to kill a robot dog or a real dog, we would obviously choose the robot dog, because it would be better for the human being’s psyche.
We could have fun playing around with gradations, I guess. But again, I am more interested in real practical outcomes, and how to make lifelike artifacts that interact with human beings ethically, and what our real near-term future with that is going to look like. I’m just curious, what’s the future that you would like to see? What kind of interactions would you prefer to have—or none at all—with lifelike machines?
Well, I’m far more interested—like you—with what’s going to happen, and how we are going to react to it. It’s going to be confusing, though, because we’re used to things that speak in a human voice being a human.
I share some of Weizenbaum’s unease—not necessarily quite to the same extent—but some unease that if we start blurring the lines between what’s human and what’s not, that doesn’t necessarily ennoble the machine. It may actually be to our own detriment. We’ve had to go through thousands of years of civilization to get something we call human rights, and we have them because we think there is something uniquely special about humans, or at least about life.
To just blithely say, “Let’s start extending that elsewhere,” I think it diminishes and maybe devalues it. But, enough with that; let me ask you a different one. What do you see? You said you’re far more interested in what the near-future holds. So, what does the near future hold?
Well, yeah, that’s kind of what I was ranting about before. Exactly what you were saying; I really agree with you strongly that these interactions, and what happens between us and our machines, put a lot of power in the hands of the people who make this technology. Like the dopamine-reflex, mouse-pushing-the-cocaine-button way that we check our smartphones; that’s really good for corporations. That’s not necessarily great for individuals, you know?
That’s what scares me. If you ask me what is worrisome about the potential future interactions we have with these machines, and whether we should at all, a lot of it boils down to: are corporations going to take any responsibility for not harming people, once they start to understand better how these interactions play out? I don’t have a whole lot of faith in the corporations to look out for anyone’s interests but their own.
But once we start understanding what good interactions look like, maybe as consumers we can force these people to make products that are hopefully going to make us better people.
Sorry, I got a little off into the weeds there. That’s my main fear. And as a little aside, I think it’s absolutely vital that when we are talking to an AI, or when we are interacting with a lifelike artificial machine, that that interaction be out in the open. I want that AI to tell me, “Hi, I’m automated, let’s talk about car insurance.”
Because you’re right, I don’t want to sit there and talk about weather with that thing. I don’t want to treat it exactly like I would a human being—unless it’s like fifty years from now, and these things are incredibly smart, and it would be totally worthwhile to talk to it. It would be like having a conversation with your smart aunt, or something.
But I would want that information upfront. I would want it to be flagged. Because I’d want to know if I’m talking to something that’s real or not—my boundaries are going to change depending on that information. And I think it’s important.
You have a PhD in Robotics, so what’s going to be something that’s going to happen in the near future? What’s something that’s going to be built that’s really just going to blow our minds?
Everyone’s always looking for something totally new, some sort of crazy app that’s going to come out of nowhere and blow our minds. It’s highly doubtful that anything like that is going to happen within the next five years, because science is incredibly iterative. Real breakthroughs often aren’t some atomic thing, created completely new, that blows everybody away; more often, you get connections between two things that already exist, and then you suddenly realize, “Oh wow! Peanut butter and jelly! Here we go, it’s a whole new world!”
This Alexa thing, the smart assistants that are now physically manifesting themselves in our homes, in the places where we spend most of our time socially—in our kitchens, in my office, where I’m at right now—they have established a beachhead in our homes now.
They started on our phones, and they’re in some of our cars, and now they’re in our homes, and I think that as this web spreads slowly, and more abilities get added to these personal AI assistants, and my conversations with Alexa get more complex, and there starts to be a real dialogue… I think that slow creep is going to result in me sort of snapping to attention in five years and going, “Oh, holy crap! I just talked with Alexa about what’s the best present to buy for my ten-year-old daughter, based on the last ten years that I’ve spent ordering stuff off of Amazon, and everything she knows about me!”
That’s going to be the moment. I think it’s going to be something that creeps up on us, and it’s gonna show up in these monthly updates to these devices, as they creep through our houses, and take control of more stuff in our environments, and increase their ability to interact with us at all times.
It’ll be your Weizenbaum moment.
It’ll be a relationship moment, yeah. And I’ll know right then whether I value that relationship. By the way, I just wrote a short story all about this called “Iterations.” I joined the XPRIZE Science Fiction Advisory Council, and they’re really focused on optimistic futures. They brought together all of these science fiction authors and said, “Write some stories twenty years in the future with optimism, Utopias… Let’s do some good stuff.”
I wrote a story about a guy who comes back twenty years later, he finds his wife, and realizes that she has essentially been carrying on a relationship with an AI that’s been seeded with all of his information. She, at first, uses it as a tool for her depression at having mysteriously lost her husband, but now it’s become a part of her life. And the question in the story is, is that optimistic? Or is that a pessimistic future?
My feeling is that people use technology to survive, and we can’t judge them for it. We can’t tell them, “You’re living in a terrible dystopia, you’re a horrible person, you don’t understand human interaction because you spend all your time with a machine.” Well, no…if you’ve got severe depression, and this is what keeps you alive, then that’s an optimistic future, right? And who are we to judge?
You know, I don’t know. I keep on writing stories about it. I don’t think I’ll ever get any answers out of myself.
Isn’t it interesting that, you know, Siri has a name. Alexa—I have to whisper it, too, I have them all, so I have to watch everything that I say—has a name, Microsoft has Cortana, but Google is the “Google Assistant”—they didn’t name it; they didn’t personify it.
Do you have any speculation—I mean, not any first-hand knowledge, but any speculation—as to why that would be the case? I mean, I think Alexa has got a hard “x,” and it’s a reference to the Library of Alexandria.
Yeah, that’s interesting. Well, also you literally want to choose a series of phonemes that doesn’t come up often in everyday speech, because you don’t want to constantly be waking the thing up. What’s also interesting about Alexa is that it’s a “le” sound, which is difficult for children to make, so kids can’t actually use Alexa—I know this from extreme experience. Most of them can’t say “Alexa,” they say “Awexa” when they’re little, and so she doesn’t respond to little kids, which is crucial because little kids are the worst, and they’re always telling her to play these stupid songs that I don’t want to hear.
Can’t you change the trigger word, actually?
I think you can, but I think you’re pretty limited. I think you can change it to Echo.
Right.
I’m not sure exactly why Google would make that decision. I’m sure that it was a serious decision, and it’s not the decision that every other company made, but I would guess that it’s not the greatest situation, because people like to anthropomorphize the objects that they interact with; it creates familiarity, and it also reinforces that this is an interaction with a person… It has a person’s name, right?
So, if you’re talking to something, what do we talk to? What’s the only thing that we’ve ever talked to in the history of humankind that was able to respond in English? Friggin’, another human being, right? So why would you call that human being “Google”? It doesn’t make any sense. Maybe they just wanted to reinforce their brand name, again and again and again, but I do think it’s a dumb decision.
Well, I notice that you give gender to Alexa, every time you refer to it.
She has a female name, and a female voice, so of course I do.
It’s still not an “it.”
If I were defining “it” for a dictionary or something, I would obviously define the entity Alexa as an “it,” but she’s intentionally piggybacking on human interaction, which is smart, because that’s the easiest way to interact; that’s what we have evolved to do. So I am more than happy to bend to her wishes and make my interaction with her as natural as I can, because she’s clearly trying to present herself as a female voice living in a box in my kitchen. And so I’m completely happy, of course, to interact with her in that way, because it’s most efficient.
As we draw to the end here: you talked about optimism, and you came to the conclusion that the future may unfold in different ways, and that it may be hard to call whether any given outcome is good or bad. But those nuances aside, generally speaking, are you optimistic about the future?
I am. I’m frighteningly optimistic. In everything I see, I have some natural level of optimism that is built into me, and it is often at odds with what I am seeing in the world. And yet it’s still there. It’s like trying to sit on a beach ball in a swimming pool. You can push it down, but it floats right back to the surface.
I feel like human beings make tools—that’s the most fundamental thing about people—and that part of making tools is being afraid of what we’ve made. That’s also a really great innate human instinct, and probably the reason that we’ve been around as long as we have been. I think every new tool we build—every time it’s more powerful than the one before it—we make a bigger bet on ourselves being a species worthy of that tool.
I believe in humanity. At the end of the day, I think that’s a bet worth making. Not everybody is good, not everybody is evil, but I think in the end, in the composition, we’re going to keep going forward, and we’re going to get somewhere, someday.
So, I’m mostly just excited, I’m excited to see what the future is going to bring.
Let’s close up talking about your books really quickly. Who do you write for? Of all the people listening, you would say, “The people that like my books are…”?
The people who are very similar to me, I guess, in taste. Of course, I write for myself. I get interested in something, I think a lot about it, sometimes I’ll do a lot of research on it, and then I write it. And I trust that someone else is going to be interested in that. It’s impossible for me to predict what people are going to want. I can’t do it. I didn’t go get a degree in robotics because I wanted to write science fiction.
I like robots, that’s why I studied robots, that’s why I write about robots now. I’m just very lucky that there’s anybody out there that’s interested in reading this stuff that I’m interested in writing. I don’t put a whole lot of thought into pleasing an audience, you know? I just do the best I can.
What’s The Clockwork Dynasty about? And it’s out already, right?
Yeah, it’s out. It’s been out a couple of weeks, and I just got back from a book tour, which is why I might be hoarse from talking about it. The idea behind The Clockwork Dynasty is… It’s told in two parts: one part is set in the past, and the other part is set in the present. The part in the past imagines a race of humanlike automatons serving the great empires of antiquity, blending in with humanity and hiding their identity.
And then in the present day, these same automatons are still alive, and they’re running out of power, and they’re cannibalizing each other in order to stay alive. An anthropologist discovers that they exist, and she goes on this Indiana Jones-style around-the-world journey to figure out who made these machines in the distant past, and why, and how to save their race, and resupply their power.
It’s this really epic journey that takes place over thousands of years, and all across Russia, and Europe, and China, and the United States; and I just had a hell of a good time writing it, because it’s all my favorite moments of history. I love clockwork automatons. I’ve always loved court automatons that were built in the seventeenth century, and around then… And yeah, I just had a great time writing it.
Well I want to thank you so much for taking an hour, to have maybe the most fascinating conversation about robots that I think I’ve ever had, and I hope that we can have you come back another time.
Thank you very much for having me, Byron. I had a great time.
Byron explores issues around artificial intelligence and conscious computers in his upcoming book The Fourth Age, to be published in April by Atria, an imprint of Simon & Schuster.