Voices in AI – Episode 18: A Conversation with Roman Yampolskiy

In this episode, Byron and Roman discuss the future of jobs, Roman’s new field of study, “Intellectology,” consciousness, and more.

Byron Reese: This is Voices in AI, brought to you by Gigaom, and I’m Byron Reese. Today, our guest is Roman Yampolskiy, a Professor in the Department of Computer Engineering and Computer Science at the Speed School of Engineering, University of Louisville. He is the founding and current director of the Cyber Security Lab, and an author of many books, including Artificial Superintelligence: A Futuristic Approach.

His main areas of interest are AI safety and cyber security. He is the author of over one hundred publications, and his research has been cited by over a thousand scientists around the world.

Welcome to the show.

Roman Yampolskiy: Thank you so much. Thanks for inviting me.

Let’s just jump right in. You’re in the camp that we have to be cautious with artificial intelligence, because it could actually be a threat to our survival. Can you just start right there? What’s your thinking? How do you see the world?

It’s not very different from any other technology. Any advanced technology can be used for good or for evil purposes. The main difference with AI is that it’s not just a tool; it’s actually an independent agent, making its own decisions. So, if you look at the safety situation with other independent agents—take animals, for example—we’re not very good at making sure that there are no accidents with pit bulls.

We have some approaches to doing that. We can put them on a leash, put them in a cage, but at the end of the day, if the animal decides to attack, it decides to attack. The situation is very similar with advanced AI. We try to make it safe, beneficial, but since we don’t control every aspect of its decision-making, it could decide to harm us in multiple ways.

The way you describe it, you’re using language that implies that the AI has volition, it has intentionality, it has wants. Are you suggesting this intelligence is going to be conscious and self-aware?

Consciousness and self-awareness are meaningless concepts in science. They are not things we can detect or measure, so let’s not talk about them. I’m saying specific threats will come from the following: one is mistakes in design. Just like with any software, you have computer bugs; you have values misaligned with human values. Two is purposeful design of malevolent AI. There are people who want to hurt others—hackers, doomsday cults, crazies. They will, on purpose, design intelligent systems to destroy, to kill. The military is a great example; they fund lots of research in developing killer robots. That’s what they do. So, those are some simple examples.

Will AI decide to do something evil, for the sake of doing evil? No. Will it decide to do something which has a side effect of hurting humanity? Quite possible.

As you know, the range on when we might build an artificial general intelligence varies widely. Why do you think that is, and do you care to kind of throw your hat in that lottery, or that pool?

Predicting the future is notoriously difficult. I don’t see myself as someone who has an advantage in that field, so I defer to others, people like Ray Kurzweil, who have spent their lives building those prediction curves, those exponential curves. With him being Director of Engineering at Google, I think he has pretty good inside access to the technology, and if he says something like 2045 is a reasonable estimate, I’ll go with that.

The reason people have different estimates is the same reason we have different betting patterns in the stock market, or horses, or anything else. Different experts give different weights to different variables.

You have advocated research into, quote, “boxing” artificial intelligence. What does that mean, and how would you do it?

In plain English, it means putting it in prison, putting it in a controlled environment. We already do it with computer viruses. When you study a computer virus, you put it in an isolated system which has no access to the internet, so you can study its behavior in a safe environment. You control the environment, you control the inputs and outputs, and you can figure out how it works, what it does, and how dangerous it is.

The same makes sense for intelligent software. You don’t want to just run a test by releasing it on the internet and seeing what happens. You want to control the training data going in. That’s very important. We saw some terrible fiascos with the recent Microsoft chatbot being released without any controls, and users feeding it really bad data. You want to prevent that, so for that reason, I advocate having protocols and environments in which AI researchers can safely test their software. It makes a lot of sense.
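To make the boxing idea concrete, here is a minimal sketch in Python of the kind of controlled test protocol Roman describes: the system under test only ever sees curated inputs, and nothing it produces leaves the box without passing a gate. The harness, the toy model, and the gate are all hypothetical illustrations, not any real AI-safety tooling.

```python
# A minimal "boxed" evaluation harness (illustrative sketch, hypothetical names).
# The model never touches the outside world directly: inputs come from a curated
# list, every exchange is logged, and outputs are released only if a gate approves.
import logging
from typing import Callable, Iterable, List

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai_box")

def run_boxed(model: Callable[[str], str],
              curated_inputs: Iterable[str],
              output_gate: Callable[[str], bool]) -> List[str]:
    """Feed the model only vetted inputs; release only gated outputs."""
    released = []
    for prompt in curated_inputs:
        reply = model(prompt)                      # the model computes in isolation
        log.info("input=%r output=%r", prompt, reply)
        if output_gate(reply):                     # human-defined filter on what escapes the box
            released.append(reply)
        else:
            log.warning("blocked output: %r", reply)
    return released

if __name__ == "__main__":
    echo_model = lambda p: f"echo: {p}"            # stand-in for the system under test
    no_links = lambda out: "http" not in out       # toy gate: block anything containing a URL
    print(run_boxed(echo_model, ["hello", "visit http://example.com"], no_links))
```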

When you think about the great range of intellectual ability, from the smallest and simplest creatures, to us, is there even an appropriate analogy for how smart a superintelligence could be? Is there any way for us to even think about that?

Like, when my cat leaves a mouse on the back porch, everything that cat knows says that I’m going to like that dead mouse, right? Its entire view of the world is that I’m going to want that. It doesn’t have, even remotely, the mental capability to understand why I might not.

Is an AI, do you think, going to be that far advanced, where we can’t even communicate in the same sort of language, because it’s just a whole different thing?

Eventually, yes. Initially, of course, we’ll start with sub-human AI, and slowly it will get to human levels, and very quickly it will start growing almost exponentially, until it’s so much more intelligent. At that point, as you said, it may not be possible for us to understand what it does, how it does it, or even meaningfully communicate with it.

You have launched a new field of study, called Intellectology. Can you talk about what that is, and why you did that? Why you thought there was kind of a missing area in the science?

Sure. There seems to be a lot of different sub-fields of science, all of them looking at different aspects of intelligence: how we can measure intelligence, build intelligence, human intelligence versus non-human intelligence, animals, aliens, communicating across different species. Forensic science tells us that we can look at an artifact, and try to deduce the engineering behind it. What is the minimum intelligence necessary to make this archeological artifact?

It seems to make sense to bring all of those different areas together, under a single umbrella, a single set of terms and tools, so they can be re-used, and benefit each field individually. For example, I look a lot at artificial intelligence, of course. And studying this type of intelligence is not the same as studying human intelligence. That’s where a lot of mistakes come from, assuming that human drives, wants and needs will be transferred.

This idea of a universe of different possible minds is part of this field. We need to understand that, just like our planet is not the middle of the universe, our intelligence is not the middle of that universe of possible minds. We’re just one possible data point, and it’s important to generalize outside of human values.

So it’s called Intellectology. We don’t actually have a consensus definition on what intelligence is. Do you begin there, with “this is what intelligence is”? And if so, what is intelligence?

Sure. There is a very good paper published by one of the co-founders of DeepMind, which surveys, maybe, I don’t know, a hundred different definitions of intelligence, and tries to combine them. The combination sounds something like “intelligence is the ability to optimize for your goals, across multiple environments.” You can say it’s the ability to win in any situation, and that’s pretty general.

It doesn’t matter if you are a human at college trying to get a good grade, or an alien on another planet trying to survive. The point is, if I throw a mind into that situation, eventually it learns to do really well across all those domains.

We see AIs, for example, capable of learning multiple video games and performing really well. So, that’s kind of the beginning of that general intelligence, at least in artificial systems. They’re obviously not at the human level yet, but they are starting to be general enough that they can quickly pick up what to do in all of those situations. That’s, I think, a very good and useful definition of what intelligence is, one we can work with.
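As a toy illustration of that definition, in the spirit of the formal measure behind the DeepMind paper Roman mentions but heavily simplified, the sketch below scores an agent by its average reward across a whole suite of environments rather than on a single task. The environments and agents here are made-up examples, not a real benchmark.

```python
# Toy "performance across many environments" score (illustrative only).
import random

def env_guess_number(agent, rounds=1000):
    """Reward for guessing a hidden digit between 0 and 9."""
    hits = sum(agent("guess 0-9") == random.randint(0, 9) for _ in range(rounds))
    return hits / rounds

def env_repeat_last(agent, rounds=1000):
    """Reward for echoing back the observation just given."""
    hits = 0
    for _ in range(rounds):
        obs = random.randint(0, 9)
        hits += agent(("repeat", obs)) == obs
    return hits / rounds

def generality_score(agent, environments):
    """Average reward over many environments: a crude stand-in for generality."""
    return sum(env(agent) for env in environments) / len(environments)

narrow_agent = lambda obs: 5                                    # always answers 5
adaptive_agent = lambda obs: obs[1] if isinstance(obs, tuple) else random.randint(0, 9)

envs = [env_guess_number, env_repeat_last]
print("narrow:  ", generality_score(narrow_agent, envs))        # roughly 0.1
print("adaptive:", generality_score(adaptive_agent, envs))      # roughly 0.55, better across the board
```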

One thing you mentioned in your book, Artificial Superintelligence, is the notion of convincing robots to worship humans as gods. How would you do that, and why that? Where did that idea come from?

I don’t mention it as a good idea, or as my idea. I kind of survey what people have proposed, and it’s one of the proposals. I think it comes from the field of theology, and I think it’s quite useless, but I mention it for the sake of listing all of the ideas people have suggested. A colleague and I published a survey about possible solutions for dealing with super-intelligent systems, and we reviewed some three hundred papers. I think that was one of them.

I understand. Alright. What is AI Completeness Theory?

We think that there are certain problems which are fundamental problems. If you can do one of those problems, you can do any problem; basically, you are as smart as a human being. It’s useful to study those problems, to understand what the progress in AI is, and whether we’ve gotten to that level of performance. So, in one of my publications, I talk about the Turing Test as being the first fundamental AI-complete problem. If you can pass the Turing Test, supposedly, you’re as intelligent as a human.

The unrestricted test, obviously not the five-minute version of that, or whatever is being done today. If that’s possible, then you can do all of the other problems. You can do computer vision, you can do translation, maybe you can even do computer programming.

You also write about machine ethics and robot rights. Can you explore that, for just a minute?

With regards to machine ethics, the literature seems to be, basically, everyone trying to propose that a certain ethical theory is the right one, and we should implement it, without considering how it impacts everyone who disagrees with the theory. Philosophers have been trying to come up with a common ethical framework for millennia. We are not going to succeed in the next few years, for sure.

So, my argument was that we should not even try to pick one correct ethical theory. That’s not a solution which will make all of us happy. And each one of those ethical theories has well-known problems; if a system with that type of power implements that ethical framework, it’s going to create a lot of problems, a lot of damage.

With regards to rights for robots, I was advocating against giving them equal rights, human rights, voting rights. The reasoning is quite simple. It’s not because I hate robots. It’s because they can reproduce almost infinitely. You can have a trillion copies of any software, almost instantaneously, and if each of them has voting rights, that essentially means that humanity has no rights. We give away human rights. So, anyone who proposes giving that type of civil rights to robots is essentially against human rights.

That’s a really bold statement. Let’s underline that, because I want to come back to it. But in order to do that, I want to return to the first thing I asked you, or one of the earlier things, about consciousness and self-awareness. You said these aren’t really scientific questions, so let’s not talk about them. But at least with self-awareness, that isn’t the case, is it?

I mean, there’s the red dot test—the mirror test—where purportedly, you can put a spot on an animal’s forehead while it’s asleep, and if it gets up and sees that in a mirror, and tries to wipe it off, it therefore knows that that thing in the mirror is it, and it has a notion of self. It’s a hard test to pass, but it is a scientific test. So, self-awareness is a scientific idea, and would an artificial intelligence have that?

We have a paper, still undergoing the review process, which surveys every known test for consciousness, and I guess you include self-awareness with that. All of them measure different correlates of consciousness. The example you give, yes, animals can recognize that it’s them in the mirror, and so we assume that also means they have similar consciousness to ours.

But it’s not the same for a robot. I can program a robot to recognize a red dot, and assume that it’s on its own forehead, in five minutes. That is not, in any way, a guarantee that it has any consciousness or self-awareness properties. It’s basically proving that we can detect red dots.
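To underline how little the test demands of a machine, here is a minimal sketch, using nothing but NumPy thresholding on a toy image, of the kind of five-minute red-dot detector Roman has in mind. It passes the mechanical part of a mirror-style test while obviously implying nothing about self-awareness.

```python
# "Recognizing a red dot" is a few lines of thresholding (illustrative sketch).
import numpy as np

def has_red_dot(image: np.ndarray, min_pixels: int = 10) -> bool:
    """Return True if the RGB image contains a patch of strongly red pixels."""
    r, g, b = image[..., 0], image[..., 1], image[..., 2]
    red_mask = (r > 180) & (g < 80) & (b < 80)   # crude definition of "red"
    return int(red_mask.sum()) >= min_pixels

# Toy "mirror image": a gray 100x100 frame with a 5x5 red dot painted on it.
frame = np.full((100, 100, 3), 128, dtype=np.uint8)
frame[40:45, 40:45] = [255, 0, 0]
print(has_red_dot(frame))   # True, yet nothing here is self-aware
```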

But all you are saying is we need a different test for AI self-awareness, not that AI self-awareness is a ridiculous question to begin with.

I don’t know what the definition of self-awareness is. If you’re talking about some non-material spiritual self-consciousness thing, I’m not sure what it does, or why it’s useful for us to talk about it.

Let’s ask a different question, then. Sentience is a word which is commonly misused. It’s often used to mean intelligent, but it simply means “able to sense something,” usually pain. So, the question of whether a thing is sentient is really important. Up until the 1990s, in the United States, veterinarians were taught not to anesthetize animals when they operated on them, because supposedly the animals couldn’t feel pain—despite their cries and writhing in apparent agony.

Similarly, until twenty or so years ago, human babies weren’t anesthetized for open-heart surgery, because again, the theory was that they couldn’t feel pain; their brains just weren’t well-developed enough. We put the notion of sentience right next to rights, because we say, “If something can feel pain, it has a right not to be tortured.”

Wouldn’t that be an equivalent with artificial intelligence? Shouldn’t we ask, “Can it feel pain?” And if it can, you don’t have to say, “Oh yeah, it should be able to vote for the leaders.” But you can’t torture it. That would be just a reasonable thing, a moral thing, an ethical thing to say. If it can feel, then you don’t torture it.

I can easily agree with that. We should not torture anyone, including any reinforcement learners, or anything like that. To the best of my knowledge, there are two papers published on the subject of computer pain, good papers, and both say it’s impossible to do right now.

It’s impossible to measure, or it’s impossible for a computer to feel pain right now?

It’s impossible for us to program a computer to feel pain. Nobody knows how to do it, or how to even start. It’s not like, say, pattern recognition, where we know how to start; we have some results, we get ten percent accuracy, so we work on it and get to fifteen percent, forty percent, ninety percent. With artificial pain, nobody knows how to even start. What’s the first line of code you write for that? There is no clue.

With humans, we assume that other humans feel pain because we feel pain, and we’ve got similar hardware. But there is not a test you can do to measure how much pain someone is in. That’s why we show patients those ten pictures of different screaming faces, and ask, “Well, how close are you to this picture, or that one?” This is all a very kind of non-scientific measurement.

With humans, yes, obviously we know, because we feel it, so similar designs must also experience that. With machines, we have no way of knowing what they feel, and no one, as far as I know, is able to say, “Okay, I programmed it so it feels pain, because this is the design we used.” There are just no ideas for how something like that can be implemented.

Let’s assume that’s true, for a moment. The way, in a sense, that you get to human rights is that you start by saying that humans are self-aware, which, as you say, we can all self-report. If we are self-aware, that implies we have a self, and having a self means that that self can feel, and that’s when you get sentience. And then, you get up to sapience, which is intelligence. So, we have a self, that self can feel, and therefore, because that self can suffer, that self is entitled to some kind of rights.

And you’re saying we don’t know what that would look like in a computer, and so forth. Granting all of that, for just a moment, there are those who say that human intelligence, anything remotely like human intelligence, has to have those building blocks, because from self-awareness you get consciousness, which is a different thing.

And consciousness, in part, embodies our ability to change focus, to be able to do one thing, and then, for whatever reason, do a different thing. It’s the way we switch, and we go from task to task. And further, it’s probably the way we draw analogies, and so forth.

So, there is a notion that, even to get to intelligence, to get to superintelligence, there is no way to kind of cut all of that other stuff out, and just go to intelligence. There are those who say you cannot do that, that all of those other things are components of intelligence. But it sounds like you would disagree with that. If so, why would that be?

I disagree, because we have many examples of humans who are not neurotypical. People, for example, who don’t experience pain. They are human beings, they are intelligent, they certainly have full rights, but they never feel any pain. So that claim—that you must feel pain in order to reach those levels of intelligence—is simply not true. There are many variations among human beings; some, for example, don’t have visual thinking patterns. They think in words, not in images like most of us. So, even that goes away.

We don’t seem to have a guaranteed set of properties that a human being must have to be considered human. There are human beings who have very low intelligence, maybe severe mental retardation. They are still human beings. So, there are very different standards for, a) getting human rights, and, b) having all those properties.

Right. Okay. You advocate—to use your words from earlier in this talk—putting the artificial intelligence in a prison. Is that view—we need to lock it up before we even make it—really, in your mind, the best approach?

I wouldn’t be doing it if I didn’t think it was. We definitely need safety mechanisms in place. There are some good ideas we have, for how to make those systems safer, but all of them require testing. Software requires testing. Before you run it, before you release it, you need a test environment. This is not controversial.

What do you think of the OpenAI initiative, which is the idea that as we’re building this we ought to share and make it open source, so that there’s total transparency, so that one bad actor doesn’t get an AGI, and so forth? What are your thoughts on that?

This helps to distribute power amongst humans, so not a single person gets all the power, but a lot of people have access. But at the same time, it increases danger, because all the crazies, all the psychopaths, now get access to the cutting-edge AI, and they can use it for whatever purposes they want. So, it’s not clear cut whether it’s very beneficial or very harmful. People disagree strongly on OpenAI, specifically.

You don’t think that the prospects for humans to remain the dominant species on this planet are good. I remember seeing an Elon Musk quote where he said, “The only reason we are at the top is because we’re the smartest, and if we’re not the smartest anymore, we’re no longer going to be on top.” It sounds like you think something similar to that.

Absolutely, yes. To paraphrase, or quote directly, from Bill Joy, “The future may not need us.”

What do you do about that?

That’s pretty much all of my research. I’m trying to figure out if the problem of AI control, controlling intelligent agents, is actually solvable. A lot of people are working on it, but we have never actually established that it’s possible to do. I have some theoretical results of my own, and some from other disciplines, which show certain limitations to what can be done. It seems that intelligence, and how controllable something is, are inversely related. The more intelligent a system becomes, the less control we have over it.

Babies, for example, have very low intelligence, and we have almost complete control over them. As they grow up, as they become teenagers, they get smarter, but we lose more and more control. With super-intelligent systems, obviously, you have almost no control left.

Let’s back up now, and look at the here and now, and the implications. There’s a lot of debate about AI, and not even talking about an AGI, just all the stuff that’s wrapped up in it, about automation, and it’s going to replace humans, and you’re going to have an unemployable group of people, and social unrest. You know all of that. What are your thoughts on that? What do you see for the immediate future of humanity?

Right. We’re definitely going to have a lot of people lose their jobs. I’m giving a talk for a conference of accountants soon, and I have the bad news to share with them, that something like ninety-four percent of them will lose their jobs in the next twenty years. It’s the reality of it. Hopefully, the smart people will find much better jobs, other jobs.

But many, many people who don’t have education, or maybe don’t have the cognitive capacity, will no longer be competitive in this economy, and we’ll have to look at things like unconditional basic income and unconditional basic assets to kind of prevent revolutions from happening.

AI is going to advance much faster than robots, which have all these physical constraints, and can’t just double over the course of eighteen months. Would you be of the mind that mental tasks, mental jobs, are more at risk than physical jobs, as a general group?

It’s more about how repetitive your job is. If you’re doing the same thing over and over, whether it’s physical or mental, it’s trivial to automate. If you’re always doing something somewhat novel, that’s getting closer to AI-completeness. Not quite, but in that direction, so it’s much harder.

Over the last two hundred and fifty years, this country, the West, has had economic progress, and we’ve had technological revolutions which could, arguably, be on the same level as the artificial intelligence revolution. We had mechanization, the replacement of human and animal power with machines, the electrification of industry, the adoption of steam, and all of these appeared to be very disruptive technologies.

And yet, through all of that, unemployment, except for the Great Depression, has never moved outside of four to nine percent. You would assume, if technology were able to rapidly displace people, that it would be more erratic than that. You would have these massive transforming industries, and then you would have some period of high unemployment, and then that would settle back down.

So, the theory around that would be that, no, the minute we build a new tool, humans just grab that thing, and use it to increase their own productivity, and that’s why you never have anything outside of four to nine percent unemployment. What’s wrong with that logic, in your mind?

You are confusing tools and agents. AI is not a tool. AI is an independent agent, which can possibly use humans as tools, but not the other way around. So, the examples saying we had previous singularities, whether cultural or industrial, are just wrong. You are comparing apples and potatoes. Nothing in common.

So, help me understand that a little better. Unquestionably, technology has come along, and, you know, I haven’t met a telephone switchboard operator in a long time, or a travel agent, or a stockbroker, or typewriter repairman. These were all jobs that were replaced by technology, and whatever word you put on the technology doesn’t really change that simple fact. Technology came out, and it upset the applecart in the employment world, and yet, unemployment never goes up. Help me understand why AI is different again, and forgive me if I’m slow here.

Sure. Let’s say you have a job, you nail roofs to houses, or something like that. So, we give you a new tool, and now you can have a nail gun. You’re using this tool, you become ten times more efficient, so nine of your buddies lose jobs. You’re using a tool. The nail gun will never decide to start a construction company, and go into business on its own, and fire you.

The technology we’re developing now is fundamentally different. It’s an agent. It’s capable—and I’m talking about the future of AI, not the AI of today—of self-improvement. It’s capable of cross-domain learning. It’s as smart as, or smarter than, any human. So, it’s capable of replacing you. You become a bottleneck in that hybrid system. You no longer hold the gun. You have nothing to contribute to the system.

So, it’s very easy to see that all jobs will be fully automated. The logic always was: the job which is automated is gone, but now we have this new job which we don’t know how to automate, so you can get a new, maybe better, job controlling this advanced technology. But if every job is automated, then, by definition, you have one hundred percent unemployment. There will still be some jobs, kind of prestige jobs, because it’s a human desire to get human-made art, or maybe handmade, expensive and luxury items, but they are a tiny part of the market.

If AI can do better in any domain, humans are not competitive, so all of us are going to lose our jobs. Some sooner, some later, but I don’t see any job which cannot be automated, if you have human level intelligence, by definition.

So, your thesis is that, in the future, once the AI’s pass our abilities, even a relatively small amount, every new job that comes along, they’ll just learn quicker than we will and, therefore, it’s kind of like you never find any way to use it. You’re always just superfluous to the system.

Right. And the new jobs will not initially be designed for a human operator. They’ll basically be streamlined for machines in the first place, so we won’t have any competitive advantage. Right now, for example, our cars are designed for humans. If you want to add a self-driving component, you have to work with the wheel and the brake pedals and all that, to make the switch.

Whereas if, from the beginning, you’re designing it to work with machines, you have smart roads and smart signs, and humans are not competitive at any point. There is never an entry point where a human has a better answer.

Let me do a sanity check at this point, if I could. So, humans have a brain that has a hundred billion neurons, and countless connections between them, and it’s something we don’t really understand very well. And it perhaps has emergent properties which give us a mind, that give us creativity, and so forth, but it’s just simple emergence.

We have this thing called consciousness. I know you say it’s not scientific, but if you believe that you’re conscious, then you have to grapple with the fact that whatever that is, is a requisite for you being intelligent.

So, we have a brain we don’t understand, an emergent mind we don’t understand, a phenomenon of “consciousness” which is the single fact we are most aware of in our own lives, and all of that makes us this. Meanwhile, I have simple pieces of hardware, and I’m mightily delighted when they work correctly.

What you’re saying is… It seems you have one core assumption, which is that in the end, the human brain is a machine, and we can make a copy of that machine, and it’s going to do everything a human machine can do, and even better. That, some might argue, is the non-scientific leap. You take something we don’t understand, that has emergent properties we don’t understand, that has consciousness, which we don’t understand, and you say, “oh yes, it’s one hundred percent certain we’re going to be able to exceed our own intelligence.”

Kevin Kelly calls that a Cargo Cult. It’s like this idea that, oh well, if we just build something just like it, it’s going to be smarter than us. It smacks to some of being completely unscientific. What would you say to that?

One, it’s already smarter than us, in pretty much all domains. Whatever you’re talking about, playing games, investing in the stock market… You take a single domain where we know what we’re doing, and it seems like machines are either already at a human level, or quickly surpassing it, so it’s not crazy to think that this trend will continue. It’s been going on for many years.

I don’t need to fully understand the system to do better than a system. I don’t know how to make a bird. I have no idea how the cells in a bird work. It seems to be very complex. But, I take airplanes to go to Europe, not birds.

Can you explain that sentence you just said, “domains where we know what we are doing”? Isn’t that kind of the whole point, that there’s this big area of things where we don’t know what we’re doing, and where we don’t understand how humans have the capabilities they have? How are they able to solve non-algorithmic problems? How are humans able to do the kind of transfer learning we do, where we know one thing, in one domain, and we’re really good at applying it in others?

We don’t know how children learn, how a two-year-old gets to be a full AGI. So, granted, in the domains where we know what we are doing, all six of them… I mean look, let’s be real: just to beat humans at one game, chess, took a multi-billion-dollar company spending untold millions of dollars, all of the mental effort of many, many people, working for years. And then you finally—and it’s one tiny game—get a computer that can do better than a human.

And you say, “Oh, well. That’s it, then. We’re done. They can do anything, now.” That seems to extrapolate beyond what the data would suggest.

Right. I’m not saying it’s happening now. I’m not saying computers today are capable of those things. I’m saying there is not a reason for why it will not be true in the maybe-distant future. As I said, I don’t make predictions about the date. I’m just pointing out that if you can pick a specific domain of human activity, and you can explain what they do in that domain—it’s not some random psychedelic performance, but actually “this is what they do”—then you have to explain why a computer will never be able to do that.

Fair enough. Assuming all of that is going to happen, that gradually, one thing by one thing by one thing, computers will shoot ahead of us, and obsolete us, and I understand you’re not picking dates, but presumably, we can stack-rank the order of things to some very coarse degree… The most common question I get from people is, “Well, what should I study? What should my kids study, in order to be relevant, to have jobs in the future?”

You’re bound to get that question, and what would you say to it?

That goes directly to my paper on AI completeness. Basically, what is the last job to be automated? It’s the person doing AI research. Someone who is advancing machine learning. The moment machines can automate that, there are no other jobs left. But that’s the last job to go.

So, definitely study computer science, study machine learning, study artificial intelligence. Anything which helps you in those fields—mathematics, physics—will be good for you. Don’t major in areas, in domains, which we already know will be automated by the time you graduate. As part of my job I advise students, and I would never advise someone to become a cab driver.

It’s funny. Mark Cuban is not necessarily in the field, but he has really interesting thoughts about it, and he said that if he were starting over, he would be a philosophy major, and not pursue a technical job, because the technical jobs are actually probably the easiest things for machines to do. That’s kind of in their own backyard. But the more abstract it is, in a sense, the longer it would take a computer to be able to do it. What would you say to that?

I agree. It’s an awesome job, and if you can get one of those hundred jobs in the world, I say go for it. But the market is pretty small and competitive, whereas for machine learning, it’s growing exponentially. It’s paying well, and you can actually get in.

You mentioned the consciousness paper you’re working on. When will that come out?

That’s a finished draft, and it’s just a survey paper of different methods people propose to detect or measure consciousness. It’s under review right now. We’re working on some revisions. But basically, we reviewed everything we could find in the last ten to fifteen years, and all of them measure some side effect of what people or animals do. They never actually try to measure consciousness itself.

There are some variants which deal with quantum physics, the collapse of wave functions, Copenhagen interpretations, and things like that; but even that is not well-defined. It’s more of a philosophical kind of argument. So, it seems like there is this concept, but nobody can tell me what it does, why it’s useful, or how to detect or measure it.

So, it seems to be somewhat unscientific. Saying that, “Okay, but you feel it in you,” is not an argument. I know people who say, “I hear the voice of Jesus speaking to me.” Should I take that as a scientific theory, and study it? Just because someone is experiencing it doesn’t make it a scientific concept.

Tantalize us a little bit with some of the other things you’re working on, or some of the exciting things that you might be publishing soon.

As I said, I’m looking at, kind of, limitations of what we can do in the AI safety field. One problem I’m looking at is this idea of verifiability. What can be verified scientifically, specifically in mathematical proofs and computer software? Trying to write very good software, with no bugs, is kind of a fundamental holy grail of computer science, computer security, cyber security. There is a lot of very good work on it, but it seems there are limitations on how good we can get. We can remove most bugs, but usually not all bugs.

If you have a system which makes a billion decisions a second, and there is a one in a billion chance that it’s getting something wrong, those mistakes quickly accumulate. Also, there is almost no successful work on how to do software verification for AI in novel domains, systems capable of learning. All of the verification work we know about is for kind of deterministic software, and specific domains.

We can do airplane autopilot software, things like that, and verify it very well, but not something with this ability to learn and self-improve. That’s a very hard, open area of research.
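To put rough numbers on the failure-accumulation point above, here is a small worked sketch. It treats errors as a Poisson process, which is an assumption made purely for illustration; the rates are the ones Roman cites.

```python
# How quickly "one-in-a-billion" mistakes add up at a billion decisions per second.
import math

decisions_per_second = 1e9
p_error = 1e-9                                   # chance any single decision is wrong

def prob_at_least_one_error(seconds: float) -> float:
    expected_errors = decisions_per_second * p_error * seconds
    return 1 - math.exp(-expected_errors)        # Poisson approximation

for t in (1, 10, 60):
    expected = decisions_per_second * p_error * t
    print(f"{t:>3} s: expected errors = {expected:.0f}, "
          f"P(at least one) = {prob_at_least_one_error(t):.4f}")
# Even at this tiny per-decision error rate, one mistake is expected every second,
# and over a minute at least one error is a near-certainty.
```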

Two final questions, if I can. The first one is—I’m sure you think through all of these different kinds of scenarios; this could happen or that could happen—what would happen, in your view, if a single actor, be it a company or a government or what have you, invented a super-intelligent system? What would you see as the ripple effects of that?

That’s basically what singularity is, right? We get to that point where machines are the ones inventing and discovering, and we can no longer keep up with what’s going on. So, making a prediction about that is, by definition, impossible.

The most important point I’d like to stress is this: if they just happened to do it somehow, by some miracle, without any knowledge or understanding of safety and control, and just created a random very smart system in that space of possible minds, there is almost a guarantee that it’s a very dangerous system, which will lead to horrible consequences for all of us.

You mentioned that the first AGI is priceless, right? It’s worth countless trillions of dollars.

Right. It’s basically free labor of every kind—physical, cognitive—it is a huge economic benefit, but if in the process of creating that benefit, it destroys humanity, I’m not sure money is that valuable to you in that scenario.

The final question: You have a lot of scenarios. It seems your job is to figure out, how do we get into this future without blowing ourselves up? Can you give me the optimistic scenario; the one possible way we can get through all of this? What would that look like to you? Let’s end on the optimistic note, if we can.

I’m not sure I have something very good to report. It seems like long-term, everything looks pretty bleak for us. Either we’re going to merge with machines, and eventually become a bottleneck which will be removed, or machines will simply take over, and we’ll become quite dependent on them deciding what to do with us.

It could be a reasonably okay existence, with machines treating us well, or it could be something much worse. But short of some external catastrophic change preventing development of this technology, I don’t see a very good scenario, where we are in charge of those god-like machines and getting to live in paradise. It just doesn’t seem very likely.

So, when you hear about, you know, some solar flare that just missed the Earth by six hours of orbit or something, are you sitting there thinking, “Ah! I wish it had hit us, and just fried all of these things. It would buy humanity another forty years to recover.” Is that the best scenario, that there’s a button you could push that would send a giant electromagnetic pulse and just destroy all electronics? Would you push the button?

I don’t advocate any terrorist acts or natural or human-caused catastrophes, but it seems like it would be a good idea if people smart enough to develop this technology were also smart enough to understand the possible consequences, and acted accordingly.

Well, this has been fascinating, and I want to thank you for taking the time to be on the show.

Thank you so much for inviting me. I loved it.

Byron explores issues around artificial intelligence and conscious computers in his upcoming book The Fourth Age, to be published in April by Atria, an imprint of Simon & Schuster. Pre-order a copy here.