In this episode, Byron and Nikola talk about the singularity, consciousness, transhumanism, AGI, and more.
Byron Reese: This is Voices in AI, brought to you by Gigaom. I’m Byron Reese. Today our guest is Nikola Danaylov. Nikola started the Singularity Weblog, and hosts the wildly popular singularity.fm podcast. He has been called the “Larry King of the singularity.” He writes under the name Socrates, or to the Bill & Ted fans out there, So-crates. Welcome to the show, Nikola.
Nikola Danaylov: Thanks for having me, Byron, it’s my pleasure.
So let’s begin with, what is the singularity?
Well, there are probably as many definitions and flavors as there are people or experts in the field out there. But for me, personally, the singularity is the moment when machines first catch up with, and eventually surpass, humans in terms of intelligence.
What does that mean exactly, “surpass humans in intelligence”?
Well, what happens to you when your toothbrush is smarter than you?
Well, right now it’s much smarter than me on how long I should brush my teeth.
Yes, and that’s true for most of us—how long you should brush, how much pressure you should exert, and things like that.
It gives very bad relationship advice, though, so I guess you can’t say it’s smarter than me yet, right?
Right, not about relationships, anyway. But about the duration of brush time, it is. And that’s the whole idea of the singularity, that, basically, we’re going to expand the intelligence of most things around us.
So now we have watches, but they’re becoming smart watches. We have cars, but they’re becoming smart cars. And we have smart thermostats, and smart appliances, and smart buildings, and smart everything. And that means that the intelligence of the previously dumb things is going to continue expanding, while unfortunately our own personal intelligence, or our intelligence as a species, is not.
In what sense is it a “singularity”?
Let me talk about the roots of the word. The word singularity comes from mathematics, where it basically means a problem with an undefined answer, like five divided by zero, for example. Or from physics, where it signifies a black hole: a place where there is a rupture in the fabric of space-time, and the laws of the universe as we know them don’t hold true.
In the technological sense, we’re borrowing the term to signify the moment when humanity stops being the smartest species on our planet, and machines surpass us. And therefore, beyond that moment, we’re going to be looking into a black hole of our future, because our current models fail to provide sufficient predictions as to what happens next.
So everything that we have already is kind of going to have to change, and we don’t know which way things are going to go, which is why we’re calling it a black hole. Because you cannot see beyond the event horizon of a black hole.
Well if you can’t see beyond it, give us some flavor of what you think is going to happen on this side of the singularity. What are we going to see gradually, or rapidly, happen in the world before it happens?
One thing is the “smartification” of everything around us. Right now, we’re still living in a pretty dumb universe. But as things come to have more and more intelligence—our toothbrushes, our cars, our fridges, our TVs, our computers, our tables, everything around us—that process is going to keep going until we reach the last stage where, according to Ray Kurzweil, quote, “the universe wakes up,” and everything becomes smart, and we end up with things like smart dust.
Another thing will be the merger between man and machine. So, if you look at the younger generation, for example, they’re already inseparable from their smartphones. It used to be the case that a computer was the size of a building—and by the way, those computers were even weaker in terms of processing power than our smartphones are today. Even the Apollo program used a much less powerful machine to send astronauts to the moon than what we have today in our pockets.
However, that change is not going to stop there. The next step is that those machines are going to actually move inside of our bodies. They used to be inside of buildings, then they went onto our bodies, into our pockets, and are now becoming what’s called “wearable technology.” But tomorrow it will not be wearable anymore, because it will be embedded.
It will be embedded inside of our gut, for example, to monitor our microbiome and how our health is progressing; it will even be embedded into our brains. Basically, there may be a point where it becomes inseparable from us. That in turn will change the very definition of being human, not only at the collective level as a species, but also at the personal level, because we are possibly, or very likely, going to have a much bigger diversification of what it means to be human than we have right now.
So when you talk about computers becoming smarter than us, you’re talking about an AGI, artificial general intelligence, right?
Not necessarily. The toothbrush example is artificial narrow intelligence, but as it gets to be smarter and smarter there may be a point where it becomes artificial general intelligence, which is unlikely, but it’s not impossible. And the distinction between the two is that artificial general intelligence is equal or better than human intelligence at everything, not only that one thing.
For example, a calculator today is better than us in calculations. You can have other examples, like, let’s say a smart car may be better than us at driving, but it’s not better than us at Jeopardy, or speaking, or relationship advice, as you pointed out.
We would reach artificial general intelligence at the moment when a single machine is equal to or better than us at everything.
And why do you say that an AGI is unlikely?
Oh no, I was saying that an AGI may be unlikely in a toothbrush format, because the toothbrush requires only so many particular skills or capabilities, only so many kinds of knowledge.
So we would require the AGI for the singularity to occur, is that correct?
Yeah, well that’s a good question, and there’s a debate about it. But basically the idea is that anything you can think of which humans do today, that machine would be equal or better at it. So, it could be Jeopardy, it could be playing Go. It could be playing cards. It could be playing chess. It could be driving a car. It could be giving relationship advice. It could be diagnosing a medical disease. It could be doing accounting for your company. It could be shooting a video. It could be writing a paper. It could be playing music or composing music. It could be painting an impressionistic or other kind of piece of art. It could be taking pictures equal or better than Henri Cartier-Bresson, etc. Everything that we’re proud of, it would be equal or better at.
And when do you believe we will see an AGI, and when would we see the singularity?
That’s a good question. I kind of fluctuate a little bit on that. It depends on whether we have some kind of global-scale disaster—it could be nuclear war, for example; right now the situation is getting pretty tense with North Korea—or some kind of extreme climate-related event, or a catastrophe caused by an asteroid impact. Falling short of any of those huge things that could basically change the face of the Earth, I would say probably 2045 to 2050 would be a good estimate.
So, for an AGI or for the singularity? Or are you, kind of, putting them both in the same bucket?
For the singularity. Now, we can reach human-level intelligence probably by the late 2020s.
So you think we’ll have an AGI in twelve years?
Probably, yeah. But you know, the timeline, to me, is not particularly crucial. I’m a philosopher, so the timeline is interesting, but the more important issues are always the philosophical ones, and they’re generally related to the question of, “So what?” Right? What are the implications? What happens next?
It doesn’t matter so much whether it’s twelve years or sixteen years or twenty years. I mean, it can matter in the sense that it can help us be more prepared, rather than not, so that’s good. But the question is, so what? What happens next? That’s the important issue.
For example, let me give you another crucial technology that we’re working on, which is life extension technology, trying to make humanity “amortal.” That is to say, we’re not going to be immortal—we can still die if we get run over by a truck or something like that—but we would not be likely to die from the general causes of death that we see today, which are usually old-age related.
As an individual, I’m hoping that I will be there when we develop that technology. I’m not sure I will still be alive when we have it, but as a philosopher what’s more important to me is, “So what? What happens next?” So yeah, I’m hoping I’ll be there, but even if I’m not there it is still a valid and important question to start considering and investigating right now—before we are at that point—so that we are as intellectually and otherwise prepared for events like this as possible.
I think the best guesses are, we would live to about 6,750. That’s how long it would take for some, you know, Wile E. Coyote, piano-falling-out-of-the-top-floor-of-a-building-and-landing-on-you kind of thing to happen to you, actuarially speaking.
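A rough back-of-the-envelope for where a figure like 6,750 comes from: assuming the only remaining cause of death were accidents occurring at a constant rate of roughly 1 in 6,750 per year (an illustrative number, not one given in the conversation), the expected lifespan is just the mean of a geometric distribution:

$$
\mathbb{E}[\text{years until a fatal accident}] = \sum_{t=1}^{\infty} t\,p\,(1-p)^{t-1} = \frac{1}{p} \approx \frac{1}{1/6{,}750} = 6{,}750 \text{ years.}
$$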
So let’s jump into philosophy. You’re, of course, familiar with Searle’s Chinese Room question. Let me set that up for the listeners, and then I’ll ask you to comment on it.
So it goes like this: There’s a man, we’ll call him the librarian. And he’s in this giant room that’s full of all of these very special books. And the important part, the man does not speak any Chinese, absolutely no Chinese. But people slide him questions under the door that are written in Chinese.
He takes their question, finds the book with the first symbol on its spine, pulls it down, and looks up the second symbol. And when he finds the second symbol, it says go to book 24,601, so he goes to book 24,601 and looks up the third symbol, and the fourth, and the fifth—all the way to the end.
And when he gets to the end, the final book says copy this down. He copies these lines, and he doesn’t understand what they are, slides it under the door back to the Chinese speaker posing the question. The Chinese speaker picks it up and reads it and it’s just brilliant. I mean, it’s absolutely over-the-top. You know, it’s a haiku and it rhymes and all this other stuff.
So the philosophical question is, does that man understand Chinese? Now a traditional computer answer might be “yes.” I mean, the room, after all, passes the Turing test. Somebody outside sliding questions under the door would assume that there’s a Chinese speaker on the other end, because the answers are so perfect.
But at a gut level, the idea that this person understands Chinese—when they don’t know whether they’re talking about cholera or coffee beans or what have you—seems a bit of a stretch. And of course, the punchline of the thing is, that’s all a computer can do.
All a computer can do is manipulate ones and zeros and memory. It can just go book to book and look stuff up, but it doesn’t understand anything. And with no understanding, how can you have any AGI?
So, let me ask you this: How do you know that that’s not exactly what’s happening right now in my head? How do you know that me speaking English to you right now is not the exact process you described?
I don’t know, but the point of the setup is: If you are just that, then you don’t actually understand what we’re actually talking about. You’re just cleverly answering things, you know, it is all deterministic, but there’s, quote, “nobody home.” So, if that is the case, it doesn’t invalidate any of your answers, but it certainly limits what you’re able to do.
Well, you see, that’s a question that relates very much to consciousness, and to, “Are you aware of what you’re doing,” and things like that. And what is consciousness in the first place?
Let’s divide that up. Strictly speaking, consciousness is subjective experience. “I had an experience of doing X,” which is a completely different thing than “I have an intellectual understanding of X.” So, just the AGI part, the simple part of: does the man in the room understand what’s going on, or not?
Let’s be careful here. Because, what do you mean by “understand”? Because you can say that I’m playing chess against a computer. Do I understand the playing of chess better than a computer? I mean what do you mean by understand? Is it not understanding that the computer can play equal or better chess than me?
The computer does not understand chess in the meaningful sense that we have to get at. You know, one of the things we humans do very well is we generalize from experience, and we do that because we find things are similar to other things. We understand that, “Aha, this is similar to that,” and so forth. A computer doesn’t really understand how to play chess. It’s arguable that the computer is even playing chess, but putting that word aside, the computer does not understand it.
The computer, that program, is never going to figure out baccarat any more than it can figure out how many coffee beans Colombia should export next year. It just doesn’t have any awareness at all. It’s like a clock. You wind a clock, and tick-tock, tick-tock, it tells you the time. We progressively add additional gears to the clockwork again and again. And the thesis of what you seem to be saying is that, eventually, you add enough gears so that when you wind this thing up, it’s smarter than us and it can do absolutely anything we can do. I find that to be, at least, an unproven assumption, let alone perhaps a fantastic one.
I agree with you on the part that it’s unproven. And I agree with you that it may or may not be an issue. But it depends on what you’re going for here, and it depends on the computer you’re referring to, because we have the new software that DeepMind created, AlphaGo, to play Go. And it actually learned to play the game based on previous games—that’s to say, on the previous experience of other players. And then that same kind of approach, of learning from the past and coming up with new creative solutions for the future, was then implemented in a bunch of other fields, including bioengineering, including medicine, and so on.
So when you say the computer will never be able to calculate how many beans that country needs for next season, actually it can. That’s why it’s getting more and more generalized intelligence.
Well, let me ask that question a slightly different way. So I have, hypothetically, a cat food dish that measures out cat food for my cat. And it learns, based on the weight of the food in it, the right amount to put out. If the cat eats a lot, it puts more out. If the cat eats less, it puts less out. That is a learning algorithm, an artificial intelligence, and it’s really no different than AlphaGo, right? So what do you think happens from the cat dish—
—I would take issue with you saying it’s really no different from AlphaGo.
Hold on, let me finish the question; I’m eager to hear what you have to say. What happens, between the cat food AI and AlphaGo and an AGI? At what point does something different happen? Where does that break, and it’s not just a series of similar technologies?
So, let me answer your question this way… When a baby is born, it’s totally dumb, blind, and deaf. It’s unable to differentiate between itself and its environment, and it lacks self-awareness entirely for probably the first, arguably, year-and-a-half to two years. And there are a number of psychological tests that can be administered as the child develops. Usually girls, by the way, do about three to six months better, or develop personal awareness faster and earlier than boys, on average. But let’s say the average age is about a year-and-a-half to two years—and that’s a very crude estimation, by the way. The development of AI would not be exactly the same, but there will be parallels.
The question you’re raising is a very good question. I don’t have a good answer because, you know, that can only happen with direct observational data—which we don’t have right now to answer your question, right? So, let’s say tomorrow we develop artificial general intelligence. How would we know that? How can we test for that, right? We don’t know.
We’re not even sure how we can evaluate that, right? Because just as you suggested, it could be just a dumb algorithm, processing just like your algorithm is processing how much cat food to provide to your cat. It can lack complete self-awareness, while claiming that it has self-awareness. So, how do we check for that? The answer is, it’s very hard. Right now, we can’t. You don’t know that I even have self-awareness, right?
But, again, those are two different things, right? Self-awareness is one thing, but an AGI is easy to test for, right? You give a program a list of tasks that a human can do. You say, “Here’s what I want you to do. I want you to figure out the best way to make espresso. I want you to find the Waffle House…” I mean, it’s a series of tasks. There’s nothing subjective about it, it’s completely objective.
Yes.
So what has happened between the cat food example, to the AlphaGo, to the AGI—along that spectrum, what changed? Was there some emergent property? Was there something that happened? Because you said the AlphaGo is different than my cat food dish, but in a philosophical sense, how?
It’s different in the sense that it can learn. That’s the key difference.
So does my cat food thing, it gives the cat more food some days, and if the cat’s eating less, it cuts the cat food back.
Right, but you’re talking just about cat food, but that’s what children do, too. Children know nothing when they come into this world, and slowly they start learning more and more. They start reacting better, and start improving, and eventually start self-identifying, and eventually they become conscious. Eventually they develop awareness of the things not only within themselves, but around themselves, etc. And that’s my point, is that it is a similar process; I don’t have the exact mechanism to break down to you.
I see. So, let me ask you a different question. Nobody knows how the brain works, right? We don’t even know how thoughts are encoded. We just use this ubiquitous term, “brain activity,” but we don’t know how… You know, when I ask you, “What was the color of your first bicycle?” and you can answer that immediately, even though you’ve probably never thought about it, nor do you have some part of your brain where you store first bicycles or something like that.
So, given that we don’t know that, and therefore don’t really know how it is that we happen to be intelligent, on what basis do you say, “Oh, we’re going to build a machine that can do something that we don’t even know how we do,” and even put a timeline on it, to say, “And it’s going to happen in twelve years”?
So there are a number of ways to answer your question. One is, we don’t necessarily need to know. We don’t know how we create intelligence when we have babies, either, but we do it. How did it happen? It happened through evolution; so, likewise, we have what are called “evolutionary algorithms,” which are basically algorithms that learn to learn. And the key point, as Dr. Stephen Wolfram demonstrated years ago in his seminal work A New Kind of Science, is that from very simple things, very complex patterns can emerge. Look at our universe; it emerged from tiny little, very simple things.
Actually, I’m interviewing Lawrence Krauss next week; he says it emerged from nothing. So from nothing, you have the universe, which has everything, according to him at least. And we don’t know how we create intelligence in the baby’s case; we just do it. Just like you don’t know how you grow your nails, or you don’t know how you grow your hair, but you do it. So, likewise, just one of the many different paths that we can take to get to that level of intelligence is through evolutionary algorithms.
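To make the idea of an evolutionary algorithm concrete, here is a minimal toy sketch (purely illustrative; not drawn from Wolfram’s work or from anything built by the people in this conversation): random bit-string “genomes” are scored, the fitter half is kept, and mutated crossovers fill out the next generation, so improvement emerges from selection and variation without anyone encoding the answer.

```python
import random

# Toy evolutionary algorithm (illustrative only): evolve random bit strings
# toward the all-ones string. Nothing in the loop "knows" the target;
# scoring, selection, and mutation do the work over many generations.

GENOME_LENGTH = 32
POPULATION_SIZE = 50
MUTATION_RATE = 0.02
GENERATIONS = 200

def fitness(genome):
    """The 'environment': genomes with more 1 bits score higher."""
    return sum(genome)

def mutate(genome):
    """Flip each bit with a small probability."""
    return [1 - bit if random.random() < MUTATION_RATE else bit for bit in genome]

def crossover(a, b):
    """Single-point crossover of two parent genomes."""
    point = random.randrange(1, GENOME_LENGTH)
    return a[:point] + b[point:]

population = [[random.randint(0, 1) for _ in range(GENOME_LENGTH)]
              for _ in range(POPULATION_SIZE)]

for generation in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)       # selection: fittest first
    parents = population[:POPULATION_SIZE // 2]      # keep the better half
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POPULATION_SIZE - len(parents))]
    population = parents + children                  # variation fills the rest
    if fitness(population[0]) == GENOME_LENGTH:
        print("Perfect genome found at generation", generation)
        break

print("Best fitness:", max(fitness(g) for g in population))
```

The point is the one being made in the conversation: no step in the loop contains the solution, yet selection plus variation reliably climbs toward it.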
By the way, this is what’s sometimes referred to as the black box problem, and AlphaGo is a bit of an example of that. There are certain things we know, and there are certain things we don’t know that are happening. Just like when I interviewed David Ferrucci, who was the team leader behind Watson, we were talking about, “How does Watson get this answer right and that answer wrong?” His answer is, “I don’t really know, exactly.” Because there are so many complicated things coming together to produce an answer, that after a certain level of complexity, it becomes very tricky to follow the causal chain of events.
So yes, it is possible to develop intelligence, and the best example for that is us. Unless you believe in that sort of first-mover, God-is-the-creator kind of thing, that somebody created us—you can say that we kind of came out of nothing. We evolved to have both consciousness and intelligence.
So likewise, why not have the same process, only on a different stratum? Right now we’re biologically-based; basically it’s DNA code replicating itself. We have A, C, T, and G. Alternatively, is it inconceivable that we could have this with a binary code? Or, if not binary, some other kind of mathematical code, so you can have intelligence evolve—be it silicon-based, be it photon-based, or even organic-processor-based, be it quantum-computer-based… what have you. Right?
So are you saying that there could be no other stratum, and no other way that could ever hold intelligence other than us? Then my question to you will be, well what’s the evidence of that claim? Because I would say that we have the evidence that it’s happened once. We could therefore presume that it could not be necessarily limited to only once. We’re not that special, you know. It could possibly happen again, and more than once.
Right, I mean it’s certainly a tenable hypothesis. The Singularitarians, for the most part, don’t treat it as a hypothesis; they treat it as a matter of faith.
That’s why I’m not such a good Singularitarian.
They say, “We have consciousness and a general intelligence ourselves. Therefore, we must be able to build one.” You don’t generally apply that logic to anything else in life, right? There is a solar system, therefore we must be able to build one. There is a third dimension, we must be able to build one.
With almost nothing else in life do you do that, and yet to the people who talk about the singularity, and are willing to put a date on it, by the way, there’s nothing up for debate. Even though how we came to have all the things that are required for it is completely unknown.
Let me give you Daniel Dennett’s take on things, for example. He says that consciousness doesn’t exist. That it is self-delusion. He actually makes a very, very good argument for it. I’ve been trying to get him on my podcast for a while. But he says it’s total self-fabrication, self-delusion. It doesn’t exist. It’s beside the point, right?
But he doesn’t deny that we’re intelligent though. He just says that what we call “consciousness” is just brain activity. But he doesn’t say, “Oh, we don’t really have a general intelligence, either.” Obviously, we’re intelligent.
Exactly. But that’s kind of what you’re trying to imply with the machines, because they will be intelligent in the sense that they will be able to problem-solve anything that we’re able to problem-solve, as we pointed out—whether it’s chess, whether it’s cat food, whether it’s playing or composing the tenth symphony. That’s the point.
Okay, well that’s at least unquestionably the theory.
Sure.
So let’s go from there. Talk to me about Transhumanism. You write a lot about that. What do you think we’ll be able to do? And if you’re willing to say, when do you think we’ll be able to do it? And, I mean, a man with a pacemaker is a Transhuman, right? He can’t live without it.
I would say all of us are already cyborgs, depending on your definition. If you say that the cyborg is an organism consisting of, let’s say, organic and inorganic parts working together in a single unit, then I would answer that if you have been vaccinated, you’re already a cyborg.
If you’re wearing glasses, or contact lenses, you’re already a cyborg. If you’re wearing clothes and you can’t survive without them, or shoes, you’re already a cyborg, right? Because, let’s say for me, I am severely short-sighted with my eyesight. I’m like, -7.25 or something crazy like that. I’m almost kind of blind without my contacts. Almost nobody knows that, unless people listen to these interviews, because I wear contacts, and for all intents and purposes I am as eye-capable as anybody else. But take off my contacts and I’ll be blind. Therefore you have one single unit between me and that inorganic material, which basically I cannot survive without.
I mean, two hundred years ago, or five hundred years ago, I’d probably be dead by now, because I wouldn’t be able to get food. I wouldn’t be able to survive in the world with that kind of severe shortsightedness.
The same with vaccinations, by the way. We know that the vast majority of the population, at least in the developed world, has at least one, and in most cases a number of different vaccines—already by the time you’re two years old. Viruses, basically, are the carriers for the vaccines. And viruses straddle that line, that gray area between living and nonliving things—the hard-to-classify things. They become a part of you, basically. You carry those vaccine antibodies, in most cases, for the rest of your life. So I could say, according to that definition, we are all cyborgs already.
That’s splitting hairs in a very real sense, though. It seems from your writing that you think we’re going to be doing much more radical things than that; things which, as you said earlier, call into question whether or not we’re even human anymore. What are those things, and why do they affect our definition of “human”?
Let me give you another example. I don’t know if you or your audience have seen in the news that, a couple of months ago, the Chinese tried to modify human embryos with CRISPR gene-editing technology. So we are not right now at the stage where, you know… It’s been almost 40 years since we had the first in vitro babies. At the time, in vitro basically meant that you do the fertilization outside of the womb, in a petri dish or something like that. Then you watch the division process begin, and you select what looks to be the best fertilized egg, simply by visual inspection. And that’s the egg that you would implant.
Today, we don’t just observe; we can actually preselect. And not only that, we can actually go in and start changing things. It’s just like when you’re first born: you start learning the alphabet, then you start reading full words, then full sentences, and then you start writing yourself.
We’re currently doing exactly that with genetics. We were just starting to identify the letters of the alphabet thirty, or forty, or fifty years ago. Then we started reading slowly; we read the human genome about fifteen years ago. And now we’re slowly starting to learn to write. And so the implication is this: how does what it means to be human change, when you can change your sex, color, race, age, and physical attributes?
Because that’s the bottom line. When we can go and make changes at the DNA level of an organism, you can change all those parameters. It’s just like programming. In computer science it’s 0 and 1. In genetics it’s ATCG, four letters, but it’s the same principle. In one case, you’re programming a software program for a computer; in the other case, you’re programming living organisms.
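As a toy way to see the “same principle” point (a purely illustrative sketch; real genetic engineering does not work like this), the same information can be written in the two-letter alphabet of bits or in a four-letter alphabet of A, C, G, and T:

```python
# Toy illustration of the analogy between binary code and a four-letter code.
# The mapping below is arbitrary and illustrative; it has nothing to do with
# how genes actually encode proteins or with real gene editing.

BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASE_TO_BITS = {base: bits for bits, base in BITS_TO_BASE.items()}

def text_to_dna(text: str) -> str:
    """Encode a text string as a DNA-style string of A/C/G/T."""
    bits = "".join(f"{byte:08b}" for byte in text.encode("utf-8"))
    return "".join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def dna_to_text(dna: str) -> str:
    """Decode a DNA-style string back into text."""
    bits = "".join(BASE_TO_BITS[base] for base in dna)
    data = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
    return data.decode("utf-8")

if __name__ == "__main__":
    encoded = text_to_dna("hi")
    print(encoded)               # CGGACGGC
    print(dna_to_text(encoded))  # hi
```

Each base carries two bits of information, so a four-letter alphabet is simply a denser way of writing the same kind of digital code.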
But in that example, though, everybody—no matter what race you are—you’re still a human; no matter what gender you are, you’re still a human.
It depends on how you define “human,” right? Let’s be more specific. Right now, when you say “humans,” what you actually mean is Homo sapiens, right? But Homo sapiens has a number of very specific physical attributes. When you start changing the DNA structure, you can actually change those attributes to the point where the result doesn’t carry them anymore. Are you, then, still Homo sapiens?
From a biological point of view, the answer will most likely depend on how far you’ve gone. There’s no clean breakpoint, though, and different people will have a different red line to cross. For some, it will be just a bit. Let’s say you and your wife or partner want to have a baby, and both of you happen to be carriers of a certain genetic disease that you want to avoid. You want to make sure, before you conceive that baby, that the fertilized egg doesn’t carry that genetic material.
And that’s all you care about, that’s fine. But someone else will say, that’s your red line, whereas my red line is that I want to give that baby the good looks of Brad Pitt, I want to give it the brain of Stephen Hawking, and I want to give it the strength of a weightlifter, for example. Each person who is making that choice would go for different things, and would have different attributes that they would choose to accept or not to accept.
Therefore, you would start having that diversification I talked about in the beginning. And that’s even before you start bringing in things like neural cognitive implants, etc., which would basically be the merger between man and machine, right? Which means you can have two parallel developments: on one hand, our biological evolution and development accelerated through biotech and genetics; and on the other hand, the merger of that with the accelerating evolution and improvement of computer technology and neurotech. When you put those two things together, you end up with a final entity which is nothing like what we are today, and it definitely would not fit the definition of being human.
Do you worry, at some level, that it’s taken us five thousand years of human civilization to come up with this idea that there are things called human rights? That there are these things you don’t do to a person no matter what. That you’re born with them, and because you are human, you have these rights.
Do you worry that, for better or worse, what you’re talking about will erode that? That we will lose this sense of human rights, because we lose some demarcation of what a human is?
That’s a very complicated question. I would suggest people read Yuval Harari’s book Homo Deus on that topic, and the previous one, called Sapiens. Those are probably the best two books that I’ve read in the last ten years. But basically, the idea of human rights is an idea that was born just a couple hundred years ago. It came to exist with humanism, and especially liberal humanism. Right now, if you see how it’s playing out, humanism is kind of taking over what religion used to do, in the sense that religion used to put God in the center of everything—and then, since we were his creation, everything else was created for us, to serve us.
For example the animal world, etc., and we used to have the Ptolemaic idea of the universe, where the earth was the center, and all of those things. Now, what humanism is doing is putting the human in the center of the universe, and saying humanity has this primacy above everything else, just because of our very nature. Just because you are human, you have human rights.
I would say that’s an interesting story, but if we care about that story we need to push it even further.
In our present context, how is that working out for everyone else other than humanity? Well, the moment we created humanism and invented human rights, we basically made humanity divine. We took the divinity from God and gave it to humanity, but we downgraded everybody else. Back in the day—let’s say in hunter-gatherer societies—we considered ourselves to be equal to and on par with the animals.
Because, you see, one day I might kill and eat an animal, and the next day maybe a tiger would eat me. That’s how the world was. But now we’ve downgraded all the animals to machines—they don’t have consciousness, they don’t have any feelings, they lack self-awareness—and therefore we can enslave and kill them any way we wish.
So as a result, we pride ourselves on our human rights and things like that, and yet we enslave and kill seventy to seventy-five billion animals every year, and 1.3 trillion sea organisms like fish, annually. So the question then is, if we care so much about rights, why should they be limited only to human rights? Are we saying that other living organisms are incapable of suffering? I’m a dog owner; I have a seventeen-and-a-half-year-old dog. She’s on her last legs. She actually had a stroke last weekend.
I can tell you that she has taught me that she possesses the full spectrum of happiness and suffering that I do, pretty much. Even things like jealousy, and so on, she demonstrated to me multiple times, right? Yet, we today use that idea of humanism and human rights to defend ourselves and enslave everybody else.
I would suggest it’s time to expand that. First, we need to include our fellow animals and recognize that they have their own rights. Second, rights should possibly not be limited to organic organisms, and should not be called human or animal rights; they should be called intelligence rights, or go even beyond intelligence, to any kind of organism that can exhibit things like suffering and happiness, pleasure and pain.
Because obviously, there is a different level of intelligence between me and my dog—we would hope—but she’s able to suffer as much as I am, and I’ve seen it. And that’s even more true for whales and great apes and the like, which we have brought to the brink of extinction right now. We want to be special; that’s what religion does to us. That’s what humanism did with human rights.
Religion taught us that we’re special because God created us in his own image. Then humanism said there is no God, we are the God, so we took the place of God—we took his throne and said, “We’re above everybody else.” That’s a good story, but it’s nothing more than a story. It’s a myth.
You’re a vegan, correct?
Yes.
How far down would you extend these rights? I mean, you have consciousness, and then below that you have sentience, which is of course a misused word. People use “sentience” to mean intelligence, but sentience is the ability to feel something. In your world, you would extend rights at some level all the way down to anything that can feel?
Yeah, and look: I’ve been a vegan for just over a year and a couple of months, let’s say fourteen months. So, just like any other human being, I have been, and still am, very imperfect. Now, I don’t know exactly how far we should expand that, but I would say we should stop immediately at the level where we can easily observe that we’re causing suffering.
If you go to a butcher shop, especially an industrialized farming butcher shop, where they kill something like ten thousand animals per day—it’s so mechanized, right? If you see that stuff in front of your eyes, it’s impossible not to admit that those animals are suffering, to me. So that’s at least the first step. I don’t know how far we should go, but we should start at the first steps, which are very visible.
What do you think about consciousness? Do you believe consciousness exists, unlike Dan Dennett, and if so where do you think it comes from?
Now you’re putting me on the spot. I have no idea where it comes from, first of all. You know, I am atheist, but if there’s one religion that I have very strong sympathies towards, that would be Buddhism. I particularly value the practice of meditation. So the question is, when I meditate—and it only happens rarely that I can get into some kind of deep meditation—is that consciousness mine, or am I part of it?
I don’t know. So I have no idea where it comes from. I think there is something like consciousness. I don’t know how it works, and I honestly don’t know if we’re part of it, or if it is a part of us.
Is it at least a tenable hypothesis that a machine would need to be conscious, to be an AGI?
I would say yes, of course, but the next step, immediately, is: how do we know if that machine has consciousness or not? That’s what I’m struggling with, because one of the implications is that the moment you accept, or commit to, that kind of definition, that we’re only going to have AGI if it has consciousness, then the question is, how do we know if and when it has consciousness? If an AGI is programmed to say, “I have consciousness,” how do you know if it’s telling the truth, and if it’s really conscious or not? So that’s what I’m struggling with, to be more precise in my answers.
And mind you, I have the luxury of being a philosopher, and that’s also kind of the negative too—I’m not an engineer, or a neuroscientist, so…
But you can say consciousness is required for an AGI without having to worry about how we measure it.
Yes.
That’s a completely different thing. And if consciousness is required for an AGI, and we don’t know where human consciousness comes from, that at least should give us an enormous amount of pause when we start talking about the month and the day when we’re going to hit the singularity.
Right, and I agree with you entirely, which is why I’m not so crazy about the timelines, and I’m staying away from it. And I’m generally on the skeptical end of things. By the way, for the last seven years of my journey I have been becoming more and more skeptical. Because there are other reasons or ways that the singularity…
First of all, the future never unfolds the way we think it will, in my opinion. There’s always those black swan events that change everything. And there are issues when you extrapolate, which is why I always stay away from extrapolation. Let me give you two examples.
The easy example is negative extrapolation. We have people such as Lord Kelvin—he was the president of the Royal Society, one of the smartest people of his day—who wrote in the 1890s that heavier-than-air flying machines were impossible to build.
The great H.G. Wells wrote, as late as 1902, that heavier-than-air aircraft were totally impossible to build, and he was a science fiction writer. And yet, a year later the Wright brothers, two bicycle makers who had probably never read Lord Kelvin, and maybe hadn’t read any of H.G. Wells’ science fiction novels either, proved them both wrong.
So people were extrapolating negatively from the past. Saying, “Look, we’ve tried to fly since the time of Icarus, and the myth of Icarus is a warning to us all: we’re never going to be able to fly.” But we did fly. So we didn’t fly for thousands of years, until one day we flew. That’s one kind of extrapolation that went wrong, and that’s the easy one to see.
The harder one is the opposite, which is called positive extrapolation. From 1903 to, let’s say, the late 1960s, we went from the Wright brothers to the moon. People—amazing people, like Arthur C. Clarke—said, well, if we made it from 1903 to the moon by the late 1960s, then by 2002 we will be beyond Mars; we will be outside of our solar system.
That’s positive extrapolation. Based on very good data for, let’s say, sixty-five years from 1903 to 1968—very good data—you saw tremendous progress in aerospace technology. We went to the moon several times, in fact, and so on and so on. So it was logical to extrapolate that we would be by Mars and beyond, today. But actually, the opposite happened. Not only did we not reach Mars by today, we are actually unable to get back to the moon, even. As Peter Thiel says in his book, we were promised flying cars and jetpacks, but all we got was 140 characters.
In other words, beware of extrapolations, because they’re true until they’re not true. You don’t know when they are going to stop being true, and that’s the nature of black swan sorts of things. That’s the nature of the future. To me, it’s inherently unknowable. It’s always good to have extrapolations, and to have ideas, and to have a diversity of scenarios, right?
That’s another thing on which I agree with you: Singularitarians tend to embrace a single view of the future, or a single path to the future. I have a problem with that myself. I think there’s a cone of possible futures. There are certainly limitations, but there is a cone of possibilities, and we are aware of only a fraction of it. We can extrapolate over only a fraction of it, because we have unknown unknowns, and we have black swan phenomena, which can change everything dramatically. I even listed three disaster scenarios—asteroids, ecological collapse, nuclear weapons—which can also change things dramatically. There are many things that we don’t know, that we can’t control, and that we’re not even aware of, which can and probably will make the actual future different from the future we think will happen today.
Last philosophical question, and then I’d like to chat about what you’re working on. Do you believe humans have free will?
Yes. So I am a philosopher, and again—just like with the future—there are limitations, right? All the possible futures stem from the cone of future possibilities derived from our present. Likewise, our ability to choose, to make decisions, to take action, has very strict limitations; yet there is a realm of possibilities that’s entirely up to us. At least that’s what I’m inclined to think, even though most scientists that I meet and interview on my podcast are actually, to one degree or another, determinists.
Would an AGI need to have free will in order to exist?
Yes, of course.
Where do you think human free will comes from? If every effect had a cause, and every decision had a cause—presumably in the brain—whether it’s electrical or chemical or what have you… Where do you think it comes from?
Yeah, it could come from quantum mechanics, for example.
That only gets you randomness. That doesn’t get you somehow escaping the laws of physics, does it?
Yes, but randomness can give you sort of a living-cat or dead-cat outcome, at least metaphorically speaking. You don’t know which one it will be until that moment arrives. The other thing is, let’s say you have fluid dynamics: with the laws of physics, we can predict how a particular system of gas will behave within the laws of fluid dynamics, but it’s impossible to predict how a single molecule or atom will behave within that system. In other words, if the laws of the universe and the laws of physics set the realm of possibilities, then within that realm you can still have free will. We are such tiny, minuscule parts of the system, as individuals, that we are more akin to atoms, if not smaller particles than that.
Therefore, we can still be unpredictable.
Just like it’s unpredictable, by the way, with quantum mechanics, to say, “Where is the electron located?” and if you try to observe it, then you are already impacting on the outcome. You’re predetermining it, actually, when you try to observe it, because you become a part of the system. But if you’re not observing it, you can create a realm of possibilities where it’s likely to be, but you don’t know exactly where it is. Within that realm, you get your free will.
Final question: Tell us what you’re working on, what’s exciting to you, what you’re reading about… I see you write a lot about movies. Are there any science fiction movies that you think are good ones to inform people on this topic? Just talk about that for a moment.
Right. So, let me answer backwards. In terms of movies—it’s been a while since I’ve watched it, but I actually even wrote a review of it—one of the movies that I really enjoyed is by the Wachowskis, and it’s called “Cloud Atlas.” I don’t think that movie was very successful at all, to be honest with you.
I’m not even sure if they managed to recover the money they invested in it, but in my opinion it was one of the top ten best movies I’ve ever seen in my life. It’s a sextet—six plots progressing in parallel, in six different locations and six different epochs—with tremendous actors, and it touched on a lot of those future technologies, and even on the meaning of being human: what separates us from the others, and so on.
I would suggest people check out “Cloud Atlas.” One of my favorite movies. The previous question you asked was, what am I working on?
Mm-hmm.
Well, to be honest, I just finished my first book three months ago or something. I launched it on January 23rd I think. So I’ve been basically promoting my book, traveling, giving speeches, trying to raise awareness about the issues, and the fact that, in my view, we are very unprepared—as a civilization, as a society, as individuals, as businesses, and as governments.
We are going to witness a tremendous amount of change in the next several decades, and I think we’re grossly unprepared. And I think, depending on how we handle those changes—with genetics, with robotics, with nanotech, with artificial intelligence; even if we never reach the level of artificial general intelligence, by the way, that’s beside the point to me—just the changes we’re going to witness as a result of the biotech revolution can actually put our whole civilization at risk. They’re not only going to change the meaning of what it is to be human; they could put everything at risk. All of those things converging together, in the narrow span of several decades, basically create this crunch point, which could be what some people have called a “pre-singularity future,” and which is one possible answer to the Fermi Paradox.
Enrico Fermi was a very famous Italian physicist who, decades ago, basically observed that there are two-hundred billion galaxies just in the observable universe, and each of those galaxies has on the order of two-hundred billion stars. In other words, there’s an almost endless number of exoplanets like ours—located in the Goldilocks zone, where it’s not too hot or too cold—which could potentially give birth to life. The question then is, if there are so many planets and so many stars and so many places where we could have life, where is everybody? Where are all the aliens? There’s a diversity of answers to that question. But at least one possible scenario to explain this paradox is what’s referred to as the pre-singularity future. Which is to say, in each civilization there comes a moment when its technological prowess surpasses its capacity to control it. Then, possibly, it self-destructs.
So in other words, what I’m saying is that it may be an occurrence which happens on a regular basis in the universe. It’s one way to explain the Fermi Paradox, and it’s possibly the moment that we’re approaching right now. So it may be a moment where we go extinct like dinosaurs; or, if we actually get it right—which right now, to be honest with you, I’m getting kind of concerned about—then we can actually populate the universe. We can spread throughout the universe, and as Konstantin Tsiolkovsky said, “Earth is the cradle of humanity, but sooner or later, we have to leave the cradle.” So, hopefully, in this century we’ll be able to leave the cradle.
But right now, we are not prepared—neither intellectually, nor technologically, nor philosophically, nor ethically, nor in any other way, I think. That’s why it’s so important to get it right.
The name of your book is?
Conversations with the Future: 21 Visions for the 21st Century.
All right, Nikola, it’s been fascinating. I’ve really enjoyed our conversation, and I thank you so much for taking the time.
My pleasure, Byron.
Byron explores issues around artificial intelligence and conscious computers in his upcoming book The Fourth Age, to be published in April by Atria, an imprint of Simon & Schuster. Pre-order a copy here.