Jay Iorio is a technology strategist for the IEEE Standards Association, specializing in the emerging technologies of virtual worlds and 3D interfaces. In addition to being a machinimatographer, Iorio manages IEEE Island in Second Life and has done extensive building and environment creation in Second Life and OpenSimulator.
What follows is an interview between Jay Iorio and Byron Reese, author of the book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity. They discuss artificial intelligence and virtual and augmented reality.
Byron Reese: Synthetic reality, is that a term that you use internally and is that something we’re going to hear more about as a class or concept? Or is that just useful in your line of work?
Jay Iorio: That’s sort of a term I use internally, in my own mind; it doesn’t really come from anywhere. I’m trying to think of a term that includes all of the illusory technologies: virtual reality, augmented reality, everything along the Milgram spectrum, plus the technologies that contribute to that. So that it doesn’t just become a playback mechanism; so that it in fact becomes part of the interaction with the physical space and with other people and so forth.
So I would say that specifically what I’m talking about is AR (augmented reality) in the context of a sensor network, in the context of what we’re calling the internet of things (IoT), so that the street becomes aware; it becomes aware that you’re there. It knows your history, knows what you bought. It knows, because of biometric devices for example, your blood sugar. It’s monitoring your gait; it’s inferring a lot from the data it’s picking up from you in real time. It’s integrating that with the physical world, so that the augmented reality becomes the display for this highly intelligent, adaptive system. You and I could walk down the same street in Austin, for example, and see very different things. Not even getting into… “I don’t like that style of architecture,” so it’s going to occlude that from my vision or change it into mid-century modern or something. But content, the traditional streams that we’re used to now, the electronic streams and so forth, could be integrated into the built environment. So in a sense it looks like your personalized desktop: it still looks like Fourth Street, but it’s your Fourth Street, and this would be a fairly powerful AI system that was continually feeding you information it thought you wanted, correcting for it and so forth. It could dim street signage; it could change things. It could do the hospital thing of “follow blue to Obstetrics.” It could give you guidance, or the more conventional uses for AR, if you can call them conventional.
But I think where it really comes alive is that it starts to anticipate, like a lot of online systems are starting to do today. But I think we’re seeing just the foothills of a mountain range. They’re trying to predict your commercial behavior. They’re trying to predict what you like. They’re trying to learn more about you, and everybody focuses on the possible negatives of that, the invasiveness, but there are also enormous positives, and there are ways we can guide that development. I think the street becomes, in a sense, a personal valet. The city becomes responsive instead of an inert collection of buildings. It becomes part of your body, in a sense an extension of your body. If it knows your blood sugar is a certain way, it will dim the lights for the doughnut shop, or, well, you could take it to an extreme where in a sense it becomes an illusion that’s based on reality. But it’s such an enhanced illusion that it’s almost approaching virtual reality.
So that’s all kind of science-fiction-sounding stuff from where we are today, where I call my airline and say my frequent flyer number and it doesn’t get it right. What time frame are you talking about to have that experience of the world?
Well, I mean we know it isn’t going to happen on one Monday morning. So we’re already seeing pieces of it the way…
No. But to get that fulfilled vision, where the environment I’m in is all around me, and everything I see and touch and feel is somehow enlivened by this technology?
I think the first step is going to be the mainstreaming of full vision AR.
Let’s start with that step, what does that mean? Full vision AR?
I would say a big step forward from the existing ones. You take the Meta visor, for example, or the HoloLens, something like that. I think that’s probably the latest we’ve got right now, and it’s not bad. But I think there are discoveries in the pipeline that are really reducing it to this, and it could be contact lenses; ultimately it could be implants.
When you say this, you mean your glasses?
My glasses, yes I’m sorry.
That’s all right and you’re talking about the new ones that are coming out from… Are you referring to any specific product?
Well, I know that, I think Intel.
Do you think that’s going to be projected on your eye? You’re going to see it as in the lens or -?
That I don’t know. I’m going to leave that to the engineers you know that I think that it could well be…
So someday you get a pair of glasses, or contacts, that convey that information to you through a means we don’t have down yet, and you think that’s going to be the first step, that you’ll have the blank slate, as it were?
The first step I think will be to take what we currently do on our smartphones and extend it to that realm. So basically the selling point is that it’s hands-free. It’s full-time, it’s always there. It will get rid of this 2018 gesture. I think that will… the phone is sort of an interim step. It wasn’t intended as an interim step. It wasn’t intended to be used the way it’s being used now. You know this is everybody’s computer at this point and I don’t think anybody thought that 10 years ago.
I wonder if people still do that thing with their thumb and pinky when they’re you know when they’re doing the phone thing because it like doesn’t make any sense. Like when will the banana be displaced as the comedic substitute for the telephone? I guess it would become something, anyway keep going.
It’s true. It’s like the fact that you can’t hang up on it.
I know. I remember in the second Spider-Man movie with Tobey Maguire, there’s a scene where the villain’s talking to somebody and hangs up. And he hears a dial tone, and immediately it was jarring to me. Like, you know, you don’t hear when somebody hangs up their cellphone. There is no dial tone.
That’s right yeah.
It’s like they had to have some audio indicator that there was no longer a person on the other end because otherwise you’re like “hello, hello?”
The drama has been removed from the phone.
I know, so keep going. The first step is it takes over what our phones do.
I think so, and I know people who work in AR, and a few artists who actually do public spectacles with AR. And you know, the problem is you have to hold your phone up, but the real problem is discovery: you have to know that it’s there in the first place. And I think that AR explodes when you no longer have to discover it, when it’s just there, and then the further step, when it’s feeding you. It isn’t giving us all the same stuff. It knows that you like modern art and so it’s feeding you that; the public art becomes much more harmonious with what you like and so forth.
Do you have shared experiences then anymore?
It’s a good question.
And is that not an isolating technology, when we go for a walk down the street and I see Art Deco and you see something else?
It is ironic that this ultra-connectivity technology, this web of technologies, is most easily used to do exactly what you’re saying, which is to give us exactly what we want. And that is the ethical issue I’m most focused on: we want what we want. We want to be comfortable. We want certain things; we want to get what we want. The commercial marketplace wants to give it to us, and if we live only by those impulses, I think we’ll end up with what you’re talking about. Which is a lot of sort of gated communities, a more insular way of looking at life, so that you’re getting everything you like but you don’t really understand other people. You’re not experiencing the real city as you walk down the street. You’re experiencing an illusion that’s coming largely from your own mind and behavior. So one of the issues I’d really like to address, not necessarily today (it won’t be solved today), but over the long term, is: how do you introduce randomness, serendipity, happy accidents, the kinds of things that, in a very structured world like the one I’m describing, tend to be either filtered out or predictable?
Presumably the algorithms would be good enough that it says: “I’m going to find what we both have in common and then we can have a shared experience that we both [like], and it may not be your favorite or my favorite, but the music, the song that’s playing is at least something we both like.”
That’s right. I mean, what I’m afraid of losing in that environment is… I live in Los Angeles. I used to live in New York City; I like big cities. I like the craziness of them. I like the fact that every day you’re going to experience something that you never would have predicted and might not have wanted. I’ll hear musicians playing a genre that, if anybody had asked me the day before: “I really don’t like that stuff, it’s not for me.” And then I find myself stopping and listening, and then, as a musician, I find myself being influenced by a genre I never… This to me is the beauty of the cities: you ride in a subway, as unpleasant as it can be, and you’re constantly confronting the full range of humanity, and I think there’s something very humanizing about that. It makes you more open-minded; it makes you realize that not everybody believes the way you do. And you know, you even see a primitive version of it on Facebook, for example, where the illusion is created, to an extent, that the world is much more like you than it really is. You’re confronting the whole fake news idea, but basically the idea that you’re being presented with content that makes you feel good about what you already believe. And I’m disturbed by that. I think that’s very destructive.
I have a policy that I don’t read any book I agree with. I’m serious, because it’s like, I spend that time and then I get to the end and I’m like, yeah, that’s just what I thought. So I literally only read things that… I’m an optimist about the future, so I only read pessimistic views, and so forth. So, let me ask you a question. Let’s say we get some form of AI; we won’t even say whether it’s an AGI or whether it’s conscious or anything like that. But Siri, or some equivalent technology, gets so good that it laughs at your jokes and tells you things and you converse with it and all of that, and you regard it as a friend. Maybe it manifests in a robot that’s vaguely humanoid, I don’t know. And let’s say that those become your best friends, and then you find one that’s your spouse, and then you just deal with those all day and you never deal with another person, because those never let you down and always… Why is that bad? I mean, at a human level you say that doesn’t sound… but why is that bad? Why not just live that life, around people that make you feel good about yourself and tell jokes you like? If you had all that stuff in common, why deal with other people?
Well, we do that to an extent already; even before any of these electronic tools we found communities, and you always want to hang out with people who have a similar worldview, where you get each other’s jokes and so forth, so you’re not constantly arguing about basic assumptions. But there’s a difference, I think, in the analog world: knowing that I’m living and hanging out with a community of like-minded people, you know, we’re all in the ballpark, but being aware that right across that highway are people who don’t share any of our assumptions, and we really look at the world quite differently. So is it a good thing or a bad thing to be aware of them and to have to interact with them? I think, with no evidence, but I think it’s a good thing to interact with people you disagree with.
Well, that’s people’s gut reaction, but try… and I heard your caveat of “with no evidence,” but try to justify it.
If you don’t encounter things you don’t like, it’s like a muscle that doesn’t encounter resistance: it never develops. It requires friction, I think, for humans, because that’s the way we’ve evolved. We evolved in a very complex, diverse society, and we have to find our way through that, and our identity, I think, is constructed; we construct our identity based largely on how we see ourselves in the midst of that. So it might not be bad; it might just lead to humans who are less able to handle diverse opinions, new ideas, and inventions. They might be less tolerant of eccentricity, of artists and inventors, of people who by nature break the mold. If you’re so accustomed to the world being exactly as you like, it might be very difficult for you to accept a revolutionary concept, or a work of art that’s startling and maybe offensive at first. But you grow by accepting those things and incorporating them into your identity. So I would say that it’s good to throw a lot of stuff at people and let them sort it out.
Say here, you get these two robots to choose from: this one is exactly what you want; this one, however, has body odor and tells offensive jokes that really offend you at every level, and you really should pick that one.
Well, it’s sort of like the movie Her, for example: this is your perfect companion, and because she was intelligent, she evolved and grew, just like a human would. I’m not saying necessarily surround yourself with the obnoxious, or with what you find uncomfortable. On the other hand, don’t necessarily surround yourself with everybody who agrees with you all the time. It leads to an intellectual inflexibility and a cultural inflexibility.
Do you think human evolution has ended now because the strong don’t necessarily survive any better than the weak, and the intelligent don’t necessarily reproduce more or have higher survival rates than the less [intelligent]? Is human evolution over and the only betterment we’re going to have now is through machines?
I don’t think so. I think that humans as organisms continue to evolve. I think that the strongest is not the physically strongest, because any tiger could knock a weightlifter out; compared to other species, we’re very weak. I would say that if you interpret strength, for humans, as having the characteristics that harmonize society, cooperativeness, collaboration and so forth, I would see those as the human strengths, and I would see those as very refined evolutions of our temperament. Human strength is not individual, despite our mythology. Yes, inventors come up with ideas; yes, artists come up with ideas and so forth, and those tend to happen individually. But the real changes tend to happen with a lot of people collaborating, some of whom don’t even know they’re collaborating, but they’re participating in a movement. So I would say that the highest point of human evolution is something like empathy, understanding of people who are very different, and so forth. That’s human strength. And I would say that is what allows us to survive, not our physical strength. We don’t really have any physical strength to speak of.
So your contention is that ethical, I mean empathetic, people, people with empathy, will reproduce more than people without it over the long run?
I don’t think so; there are too many issues with reproduction. I don’t think that will be the case, but the numbers don’t necessarily dictate the influence that has on society.
So let’s get back to our narrative. We have our cellphone [that] has migrated to a hands-free device that we can effortlessly interact with, and you assume that people want to do that based on how they’re willing… it is true that taking the elevator up here I noticed everybody whipped out their phone. It’s like “What am I going to do for the next thirty-four stories of elevator time? I’ve got to pass this time some way.” And so your contention is that there is a latent desire for that because people want to have it on 24/7?
I think so. If I had to come up with one gut justification for this, it would be, and I know this is not visual, but I’m making the gesture of playing with your phone with two thumbs. It’s an obsession with me: I go into a crowd, in an airport, a hotel, and I count the people who are using phones and the ones who aren’t, and it’s always over 50% who are like this, especially if you count the laptops. So there’s a need; it could be an obsession, it could be… who knows where it’s coming from. But there is definitely a need to look at this thing all day, and who wouldn’t rather strap it to their head and have it be full fidelity and high definition, with overlays that don’t look cartoonish, that actually look like they’re fixed and integrated with the environment and so forth, and be able to do all the things you can do on your phone? You get your mail, your messages, you take photographs and whatever.
I have been to North Korea several times and there is no internet. There is no cell phone reception, there is nothing. And I find that the most isolating aspect of it all… like you know I cuddle up to like the warmth of this thing that’s… it’s almost like, I don’t know, I feel untethered and adrift when I don’t have it. And I wonder did it awaken something in me because I wouldn’t have felt that way when I was younger, because I didn’t have the device? Or did it change me, did it somehow weaken me, that now I need it? Or did it awaken this latent desire to want to be connected to a world of information? What do you think?
I think we might have a lot of latent desires that technology hasn’t given us an avenue for, and this is one of them. When I was a kid, there was no such thing as email, so being without it… so what? You wouldn’t even have been able to explain to me what this phone does. You know, you’d have to explain the internet. You’d have to explain all the protocols; it’s an amazing amount of history that we’ve got in our pockets. So, in the 15th century, would people have been doing this? Yeah, I think they would have. I think it’s human. You’ve got a little device here that is magical. It’s your portal to the world. It’s a computer that you can carry on you. It makes me wonder what other technologies could evolve that show that we have other desires that aren’t being met, or that we could become addicted to; addicted is not quite the right word, but habituated to, so that it becomes essential.
Why would you make that distinction between habituation versus addiction?
Well, because I think of addiction as a drug, but it’s really the same thing yeah it is…
Because I have withdrawal symptoms if I’m cut off from it.
That’s true, and in fact we’ve seen in the last year that some of Facebook’s original designers have started to come clean and talk about how it is deliberately designed to be addictive. That’s not surprising in a way; I mean, from Facebook’s standpoint, you want to keep people using it, and that’s where the information about people comes from and so forth. So it’s understandable, but you know, we have become addicted to something that is actually very useful. I guess that’s my reluctance to use the word addiction: I think of addiction as being to something bad, but you could be addicted to something good too, I suppose.
So we have our device, and now we transport into the future, and you said the street is aware. I assume you mean that colloquially, not literally; the street is not conscious.
The street couldn’t really be conscious, but the sensors, and the interaction between the sensors and the databases, a whole web of intelligence I guess you could call it, will create the illusion that the city is responding. That building changed because of something I bought. My health changed, and so that facade looks different; the artwork looks different. It’s something now to make me feel more relaxed, because it knows I’m very nervous and it knows that I have a heart condition, or the opposite, or what have you. The city could become your doctor for most things. It’s constantly diagnosing you. It’s looking at your heart rate continuously. An automated vehicle could show up on the sidewalk when you think you’re having indigestion, and it realizes that you’re having a heart attack, so it takes you immediately to the hospital and starts treating you as soon as it comes in contact with you. I mean, the healthcare benefits are just staggering over the next generation.
So you’re an ethicist and you think about the ethics of all of this stuff?
I’m an amateur ethicist.
Fair enough. I don’t know how you go pro… Regardless, tell me some ethical considerations that we may not have thought about, or that you want to weigh in on. What sorts of questions are outstanding?
I’m going to avoid AI by itself, because that becomes… well, in a way I can’t avoid AI, because this whole thing basically runs on machine learning. I would say that the biggest ethical concern I have at this point is that this amazing collection of technologies not be used to denature the human experience. Not to make it seem as though life is simpler than it is: there are no people I dislike, there are no people with political views I disagree with, there are no genres of music or movies that I don’t like; I’m not exposed to any of that, and it makes me happy. That I find to be a very dangerous thing. It leads to the fabric coming apart, I think. So that’s one of my concerns. The commercial motivation of a lot of the AI, the Facebook and Google and so forth, is potentially problematic, because there are other values in society that are more conducive to holding the fabric together, appreciating other people’s experiences and points of view and so forth. You know, that are not…
Fair enough. So let’s take the first one of those two: somehow the bubbling goes to a whole new dimension, where it isn’t just “Here are suggested stories for you,” but people and all experiences contrary to your current preferences are off limits. And you say that pulls the fabric apart because it dissolves community: I don’t have any reason at all to empathize with you because you have absolutely nothing in common with me. Is that how you’re seeing it?
Something like that. Everybody I know disagrees with you, so how could you possibly be right? You know? As opposed to: there are lots of people with a range of points of view and they very idiosyncratically… and sometimes they’re full of contradictions and so forth. And I think to become a full member of the community, you have to sort of appreciate the messiness of people. And a lot of these technologies are naturally inclined, I think, to shave off the messiness and to make it seem like it’s a lot more, you know…
So run both scenarios. Run the worst case and then tell me why that’s not going to happen?
The worst case, I think, would be if a system like this were used in a society where there was no tradition of democratic values. I think that’s very dangerous, because then your primary motivation becomes efficiency, and that’s not a very good way to organize society, I don’t think. Society is inherently inefficient, and the freer people are, the less efficient it is. Efficiency is never really the goal of a democratic republic. But an authoritarian state with these technologies could create an extremely obedient population that would govern itself, in a sense. They would not need to be censored; they would not need to be told that this was inappropriate, because they would know better. They would behave. And that might lead to industrial efficiency, but it doesn’t lead to human freedom or any kind of society that I think any of us would feel comfortable living in. I think that’s a natural tendency, especially in certain countries where it’s basically a way to enhance authority. That’s one scenario, and that could happen here. It’s a very portable model that doesn’t apply only to China or the Gulf States or other states that might be thinking of it; it could apply to Western Europe, it could apply to North America. The temptation is going to be high to assert authority through a system like this, I think.
On the other hand, it could be incredibly liberating for people, first from a healthcare standpoint: it basically puts you in your doctor’s hands all the time. You’re constantly being watched, assuming that it does this in a secure fashion that people are comfortable with. Then recreation, entertainment, being exposed to different locations in a physically utterly believable way: travel, education, just one field after another. There’s hardly a field that isn’t revolutionized by this kind of thing, and very positively; it really takes the resources and expands them very openly to people. Everybody becomes empowered in a certain way, but I think that takes guidance in the development of these systems. And those are the kinds of questions I’m trying to raise with software developers, for example, with people working on these technologies: think of how you can push towards the second scenario instead of the first. And it’s a difficult thing, and it might actually go contrary to some of the commercial needs of developing AI and mixed reality and so forth. So it’s not easy, and there’s no obvious answer. It could go in a lot of different directions.
It’s interesting, because as I sit here I think about it: there’s a whole different mindset that says, “The great thing about these technologies is they let you find your tribe. You are not alone. There are people like you, and these technologies will let you find and have community with those like you, whether they’re spread all over the world. They may be older, they may be this, they may be that, and you will find your place.” But you are describing tribalism in a really kind of dystopian sense. Where would you…?
That’s a really good point; it’s one of the paradoxes of these technologies that they’re very liberating but potentially restrictive. And the tribal mentality, I mean, that’s a fantastic thing about… well, the Internet itself: the ability to form communities without respect to geography, as you say, or age, or any demographic considerations. That’s fantastic; that’s unprecedented. It’s a matter of degree, I think. You know, I’m heavily involved with people who are interested in the various things I’m interested in and so forth. But just as you try to read books that you disagree with, I try to find people that I disagree with. I try to remember that these tribes are not the world for me, even if I want to make them that. There is an incredibly diverse population out there, and once you wrap your head around that, I think you end up actually dealing with your tribe in a more intelligent way. You know what I mean there? The more you see of human diversity, the better it is, even when you’re in a group that’s heavily circumscribed by interest or one factor or another. So there are tribal utopias and tribal dystopias; I think it’s almost a sliding scale. But I think what keeps the utopia from turning into a dystopia is realizing that this isn’t the sum total. You don’t become satisfied by living in a world that’s just like you, as tempting as that is.
I wonder, though, if there is such a world. If I’m really into banks shaped like pigs, and I find the bank-shaped-like-a-pig society and connect with 19 other people, they’re not going to agree with me about anything else. And so aren’t all bubbles just one- or two-dimensional, and people so rich and multi-dimensional that there’s really no way to completely… I mean, you can isolate yourself from people who have vastly different economic situations than you, who live in abject poverty in another part of the world, but that already happens. So how is this any different? I live in a neighborhood, and everybody in my neighborhood is, to your point, in some way very similar to me; they’ve all chosen to live there and can afford a house of that kind and so forth. But on the other hand, they’re not at all like me. And so how are you saying technology says, “Oh no, you’re finding your own clones, and when you find your own clones you’ll completely cut off the rest of the world”?
That’s a good point. I think in the physical world we actually do that; you know, the people in your neighborhood, for example: you have a lot in common, as you say, and you also have a lot you disagree about. But if you’re digitally creating communities, it might be one of those things where you’re focusing on the similarities to the point where you really want a homogeneous community; it gives you more tools to eliminate the pieces you don’t want. I’m not saying that’s necessarily going to happen, but look at Facebook, which is very primitive compared to what we’re talking about. It’s still on a screen. It’s still basically text-based. We think of it as current, but when you’re talking about this stuff, it’s not really; it’s an old-fashioned system in a way. And even that, even with text, which is very abstract, still manages to convince people to focus strictly on the things they have in common. It pulls you away; I mean, you know the effects it has on public discussion of politics, for example. People are looking for confirmation. Again, what you said about reading books that you don’t agree with: you’re looking to confirm, and when you confirm, suddenly you’re right. It isn’t just my opinion anymore, and it becomes more difficult to compromise with people. So we see it happening in that world, and yes, within groups on Facebook or in digital groups you’ll find differences, but they tend to get very narrowcast: this is a worldview, kind of, that is shared by the group. So it makes it easier to craft a group, but that same impulse is going to be there, and maybe one of the solutions is to belong to a lot of different groups, so that they overlap and don’t narrowcast your identity, in a sense. Don’t think, “Well, I’m this and this, and therefore these [are] the only people I deal with,” because believe me, with this and this, you’re going to find a lot of people that disagree with you. It’s just that people are complicated.
So anything that we can do to encourage that, what would be the word, “heterogenization,” I guess: that sort of throwing surprises in there. Surprises I think are good for people, especially intellectual surprises.
One in every 10 of your friends on Facebook should be randomly assigned to you.
You know, I’ve never heard that, but something like that. I mean, often we get that with the relatives.
Yeah, that crazy Uncle Eddie who comes to the cook-out… So let’s talk about your second concern, the commercial factors. You’ve alluded to your concern that the incentives, with Facebook, are to make the technology sticky, but I think you probably mean something much more philosophical or broader, or maybe not. Tell me the dystopian narrative of how the forces of free enterprise make a dystopia using these technologies.
Well, it’s another one of those paradoxes: the marketplace that exists is, to a large extent, responsible for these technologies being developed. At the same time, the motivation of the individual companies, take Google and Facebook for example, is to gather data about us and sell it to advertisers. There are other models that would be possible, but that’s the one the marketplace naturally leads to. I mean, if I were running Google, I’d be doing the same thing; it’s almost unavoidable. So it’s useful to know what information is being gathered for what purposes, how it’s integrated with other information for what purposes, and so on. I think the commercial motivation is to give people what they want, and it’s very hard to sell castor oil to people. If you say, “Well, you should buy this app; you’re not going to like it, but it’s good for you,” nobody’s going to buy that. So there has to be, you know, some built-in incentive. I think really what we have to do is replicate the real world more fully. So thirty years from now, when a virtual environment becomes indistinguishable from the physical world, a lot of these problems might disappear, because you kind of embrace the values of a diverse civilization and you imprint that. I don’t think that’s what the companies are doing right now. I think they’re saying, “Well, we need to gather data, because the accumulation of data is really our business model.” So that’s a fundamental conflict, I think, in a utopian vision of these technologies. I would argue to the corporations that are doing this that ultimately there’s greater profitability, greater adoption, and less pushback if you do the right thing, leaving that undefined for the moment; if you build it without exploitation, you get a lot more buying into the system.
You get people who really throw themselves into it, with more security, for example, and less hackability. So yes, there is a concern, but I'm not picking on the market system, because any governmental system, any economic system, is going to bring its own slant to how things are done.
You just said something, and I'm still back at "when these systems become indistinguishable from reality." It seems implicit in that that machine learning does a very simple thing: it studies the past, assumes a static world where the future is going to be like the past, looks for patterns in the past, and projects them into the future. But so much of our existence came from human creativity. I look at a Banksy piece of graffiti and I think, "Could a machine learning system have studied anything in the past and produced that?" So if not everything can be learned that way, can a world be built that is indistinguishable from this world?
I think large parts of it can be made indistinguishable. Certainly the environment of this conference downstairs, South by Southwest, could be made virtual, and it could be just as immersive as it is now. The problem comes with invention, with artisans, inventors, creators, people who don't do what was done yesterday, people who break the pattern. And I'm wondering about a future form of AI that is able to do that. I don't know how it would. I think a lot of that is biologically rooted. There is an urge in a person to create that's very hard to pin down, and creation involves doing something that hasn't been done before, not completely divorced from reality. It has to be familiar, but it has to break certain rules of the past. Major changes, all of these inventions, really involve a deviation from what happened last week. So that creative piece is still, I think, in the realm of humans.
Let me pose another question to you. This is something I'm mulling over as we speak, and I would love to get your thoughts on it. I often have a narrative that goes like this: if you want to teach a computer to tell the difference between a dog and a cat, you need X million images labeled "dog" and X million labeled "cat," and it does it all. And then I say, you know, the interesting thing is, people can be trained on a sample size of one. If I take that stuffed animal, which you've never seen before, and say, okay, find it in these twenty photos. Sometimes it's upside-down. Sometimes it's covered in peanut butter. Sometimes it's underwater, or frozen in a block of ice. You say, "it's there, it's there," and we call that transfer learning. We don't know how we do it, and we don't know how to teach computers to do it. But then people say "aha," and here's the part I want your thoughts on: "You have a lifetime of experience of seeing things that are smeared with substances or frozen in ice." And that seems to be the answer. Then I say, "Ha, but you don't have to show a five-year-old a million cats. You can show a five-year-old three cats and they can pick cats out, and they don't have a lifetime of experiencing things like cats." And when they see a Manx, which doesn't have a tail, they say, "oh, it's a cat without a tail," like they know that. And that's a little kid who hasn't lived a life of absorbing all of this. So, two-part question, as they say. Part one: how do you think that child gets trained on such a small amount of data? And second: could the answer be that it's the same way birds raised in isolation know how to build a nest, that somehow it is encoded in us in a way we don't even understand?
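The contrast described here, training on millions of labeled images versus generalizing from a single example, can be sketched in a toy form. This is my illustration, not anything from the interview; the labels and feature vectors are invented, and real one-shot systems compare learned embeddings rather than raw numbers.

```python
# Toy sketch of one-shot classification: a single labeled example per
# class, and new instances are assigned the label of the nearest example.
# All names and feature vectors here are hypothetical, for illustration.

def distance(a, b):
    # Squared Euclidean distance between two feature vectors.
    return sum((x - y) ** 2 for x, y in zip(a, b))

def one_shot_classify(example_per_class, query):
    # example_per_class maps a label to the single feature vector
    # the "learner" has ever seen for that class.
    return min(example_per_class,
               key=lambda label: distance(example_per_class[label], query))

# One example each of a made-up "cat" and "dog" feature vector.
examples = {"cat": [0.9, 0.1, 0.2], "dog": [0.1, 0.8, 0.7]}

# A never-seen instance, cat-like but missing one feature
# (think of the Manx without a tail).
print(one_shot_classify(examples, [0.8, 0.2, 0.0]))  # prints "cat"
```

The hard part, of course, is exactly what the passage says we don't understand: where the feature representation comes from that makes a peanut-butter-covered or upside-down instance land near the right example.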
My answer to both is, I don't know. That's a really interesting speculation about the birds. And part B, why can children do that? I don't know. There are certain things that the human brain, the human mind, does that I don't know how you would code.
Are you saying I don’t know how you would code it or I don’t know if that can be coded?
I don’t know if it can be coded.
Interesting, so you might be one of those people who says general intelligence may not be possible.
I go both ways on that one. I think there are certain things that we do, metaphor, analogy, seeing relationships, intuition, certain very human ways of thinking, and I don't know how much of that can be systematized.
So the counter-argument, the one I hear all the time, is: you are a machine, your brain is a machine, your brain is subject to the laws of physics, it can therefore be modeled in a machine, and therefore a machine can do everything a human can do. I mean, that's the logic…
Yeah, I understand the point of that, but I have trouble with it; I think it's reductive. A machine is something humans create, and we didn't create this. A machine we understand; this we didn't. This grew. This evolved. This is full of mysteries and unexaminable pieces. We don't know why we come up with what we come up with. What motivates an inventor to come up with something? Okay, he has an idea, but there's more than that. Is he proving that high school teacher wrong? Is he showing his dad, "yes, I can do this"? There are all kinds of personal things they might not even know they're motivated by, things that require being alive. If there's no sexuality, no desire, no irrationality, how can you be fully human? And if you want general intelligence on that level, do you have to program a simulation of that in there? Does it have to believe that it's alive? Does it have to believe that it's mortal? If we lived to 200, how valuable would human life be? Isn't the preciousness of it that it's finite, that it's all too short, that it follows an arc? Does a machine have to have that same physiological basis? How much of this is rooted in our existence as creatures? Does it have to be really human and alive in order to do the kinds of things we think of as quintessentially human, like write great music or invent smartphones or build cities?
It isn’t just that you know you can do it and you know how to do it, you have to want to do it, and it has to consume your life. Are you willing to do that? Well why a machine would do that where’s this motivation coming from? I only have five years to live… you know what I mean, how can a machine know that? I want to attract a certain person to me. Does a machine want to do that? It has no need for that, no understanding…? So a lot of this stuff is very squishy human stuff that is evolved. And I think that if you’re going to get general intelligence you might have to grow it. Because if you have something that’s alive, it has a sense of self in a way. It has a sense of survival. It knows it’s going to die in a certain way.
Well, interestingly, life is an incredibly low bar, and I think the only reason you can say computer viruses aren't alive is because… and it's interesting, because life doesn't have a consensus definition. Death doesn't have one. Intelligence doesn't have one. Creativity doesn't have one. Which means to me that either we don't know what these things are, or the terms themselves are meaningless; I don't know which. But life is a really low bar, because the only reason we say computer viruses are not alive is simply that they're non-biological, and right now most definitions require biology. A virus we generally regard as alive, a bacterium we do, and yet those don't have any of the qualities you're describing. You're talking about something more than being alive, right? You're talking about consciousness?
Consciousness, although, well, consciousness in silicon as opposed to consciousness in some wet petri dish of actually grown tissue, for example. Let's say you have the same kind of general intelligence imbued in both. I think the one that's alive is going to get you closer to a replication of the physical world that we know.
Do you think humans are unique in our level of consciousness?
On this planet? I think that's impossible to know. I can't put myself in the head of a macaque. You know, I don't know. I suspect that every living creature has a sense of itself, in the sense that…
Yeah, a tree can’t move but it will turn to face the Sun, it will respond to the environment. An animal definitely will avoid threat, fire.
We enact laws against animal abuse because we feel that animals are entities, that they can feel, that they have a self. If you say a tree has that too, have you not undermined the basis by which you say humans have human rights?
No, and I know this is going to sound arbitrary, but I would say a plant is probably in a different category. In fact, I would say a lizard is probably in a different category too. I hate to be speciesist, but I think we're talking about higher mammals, pretty much, as inferred from their behavior: complex social structures and so forth. Trees don't do that.
Isn’t it fascinating that up until the ‘90s the conventional wisdom among veterinarians was that animals don’t feel pain?
Sure, and they performed open-heart surgery on babies without anesthesia in the 1990s because they said babies couldn't feel pain either. The theory goes that if you take a paramecium and poke it with something, it moves away, yet you don't infer that it has a nervous system or that it felt pain. And so they said that's all a dog that gets cut has. Up until the 1990s, that was a standard belief, that animals didn't feel pain.
I mean, if you were willing to accept that logic, you could also accept human surgery without anesthesia. There's no clear line there.
No, I’m not advocating that position…
I know you’re not.
It’s interesting to think that the problem… I think it was a position argued in part from convenience by people who use animals or raise animals and so forth. Because if they can’t feel pain then they don’t, you know…
Yeah, then who cares. We know dogs feel pain. Can they create sophisticated societies? No.
I use that very example in a book I have coming out shortly, about the time my dog was running and jumped over a water faucet and tore her leg open. She yelped and yelped, and I wrote that nobody could convince me my dog did not feel pain. But notice the way I described it: that she seemed to feel pain, because I really have no way of knowing. That's the oldest philosophical question on the books: you don't know what anybody else feels, or whether they even exist. It's intractable, and the reason it interests me is that I'm deeply interested in whether computers can become conscious, and even more interested in how we would know if they were. So I would like that to be my last question for you. How would you know if a computer was conscious?
If I had to pin it down to one thing?
Well no. The computer says, “I am the world’s first conscious computer.” What do you say to it?
I would say “make me laugh.” You know let’s say do something that’s human and irrational.
Yeah, and then it plays a recording of flatulence…
Okay, but that’s…
But you did it, you did it. It made you laugh.
Well, the description of the machine doing that made me laugh. But if the machine had actually done that, I'd say, "that's not funny." It has to do something. Write a song. Do something that hasn't been done before. If you're just basing it on what happened last week, then I can be tricked into believing that.
So, you know, they have these programs that write beatnik poetry: "the dog sat on the step, bark, bark, eleven is an odd number indeed." They write stuff like that, and they say, "well, nobody's ever written that poem before," and you're like, well, there's a reason for that. They feed Bach into a system and use machine learning to make Bach-ish music. I thought you couldn't trick a musician, but the musicians are like, "that's kind of like Bach." And so neither of those comes anywhere near passing your bar, I assume, and yet…
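Generators like the ones mentioned here typically work by remixing the statistics of existing material. A minimal sketch of the idea, my illustration and not any specific system, assuming a first-order Markov chain over words (the musical versions operate on notes and use far richer models):

```python
import random

# Toy sketch: "new" sequences generated purely from the statistics of
# past material, which is exactly the "basing it on what happened last
# week" limitation the interview describes.
random.seed(0)  # fixed seed so the sketch is repeatable

def build_model(tokens):
    # Map each token to the list of tokens that followed it in the corpus.
    model = {}
    for current, following in zip(tokens, tokens[1:]):
        model.setdefault(current, []).append(following)
    return model

def generate(model, start, length):
    # Walk the chain: each step picks a token that actually followed
    # the previous one somewhere in the corpus. Stops early at a dead end.
    out = [start]
    for _ in range(length - 1):
        followers = model.get(out[-1])
        if not followers:
            break
        out.append(random.choice(followers))
    return out

corpus = "the dog sat on the step the dog barked".split()
model = build_model(corpus)
print(" ".join(generate(model, "the", 6)))
```

Every pair of adjacent words in the output occurred somewhere in the corpus, which is why such output can sound "kind of like Bach" while inventing nothing: it cannot produce a transition it has never seen.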
They didn’t invent it. That would be if the robot came up and played like Jimi Hendrix. I’d say that’s pretty good, but if he came up with that in 1967, that’s a whole different thing.
You know it’s interesting because we are recording this on the anniversary of the tournament between Alpha Go and Lee Sedol. And there was a move, move 37 in game 3 that people say was a creative move. It was a move that no human would have seen to make. Even Alpha Go said… Lee described it as people started talking about Alpha Go’s creativity on that day and subsequent to that they have systems that train themselves. So there’s no training on human games and there was one that trained itself to play chess, and what it’s doing are things no chess player would do. In one game it won, it sacrificed a queen and then a bishop in two consecutive minutes and won the game to secure a position. It hid a queen way back in one corner and people describe it as alien chess because it’s the first thing that wasn’t trained on this huge corpus of chess games we have. So is that getting near it?
It’s getting near it, that’s doing what a really creative person does which is to take the basic elements and not impose any of the preconceptions on top of it, sort of look at it fresh.
The question I ask is: is that creativity, or is that something that merely looks like creativity? And is there a difference between those two? That would be my last question for you.
That’s a hard one to say. You can imitate creativity by creating Bach-like music. Chess I’m not sure falls into the same category or a sophisticated game, Go or something. Because there is a certain set of possibilities, whereas in the arts, for example, or in invention there really isn’t. I mean, there are physical restrictions, but aside from that, it can go anywhere and although it seems like I’m splitting hairs basically…
These are hard questions. The challenge with language is that we've never had to… we've always been able to get by with a kind of colloquial understanding of all these concepts, because we never had to ask, "how would you know if a computer could think?" How would you know that? The words just aren't equipped for it, so the language, I think, limits our ability to imagine it. But what a fascinating hour this has been. I could go on for another, but I won't subject you to that. Thank you so much for this.
It’s my pleasure.