In this episode, Byron and Ali discuss AI’s impact on business and jobs.
Byron Reese: This is Voices in AI, brought to you by GigaOm. I’m Byron Reese. Today my guest is Ali Azarbayejani. He is the CTO and Co-founder of Cogito. He has 18 years of commercial experience as a scientist, an entrepreneur, and a designer of world-class computational technologies. His pioneering doctoral research at the MIT Media Lab in Probabilistic Modeling for 3-D Vision was the basis for his first startup company, Alchemy 3-D Technology, which created a market in the film and video post-production industry for camera matchmoving software. Welcome to the show, Ali.
Ali Azarbayejani: Thank you, Byron.
I’d like to start off with the question: what is artificial intelligence?
I’m glad we’re starting with some definitions. I think I have two answers to that question. The original definition of artificial intelligence I believe in a scholarly context is about creating a machine that operates like a human. Part of the problem with defining what that means is that we don’t really understand human intelligence very well. We have a pretty good understanding now about how the brain functions physiologically, and we understand that’s an important part of how we provide cognitive function, but we don’t have a really good understanding of mind or consciousness or how people actually represent information.
I think the first answer is that we really don’t know what artificial or machine intelligence is other than the desire to replicate human-like function in computers. The second answer I have is how AI is being used in industry. I think that that is a little bit easier to define because I believe almost all of what we call AI in industry is based on building input/output systems that are framed and engineered using machine learning. That’s really at the essence of what we refer to in the industry as AI.
So, you have a high-concept definition and a bread-and-butter, workaday working definition, and that’s how you’re bifurcating that world?
Yeah, I mean, a lot of people talk about how we’re in the midst of an AI revolution. I don’t believe, at least in the first sense of the term, that we’re in an AI revolution at all. I think we’re in the midst of a machine learning revolution, which is really important and really powerful, but I guess what I take issue with is the term intelligence, because most of these things that we call artificial intelligence don’t really exhibit the properties of intelligence that we would normally think are required for human intelligence.
These systems are largely trained in the lab and then deployed. When they’re deployed, they typically operate as a simple static input/output system. You put in audio and you get out words; you put in video and you get out locations of faces. That’s really at the core of what we’re calling AI now. I think it’s really the result of advances in technology that have made machine learning possible at large scale, and it’s not really a scientific revolution about intelligence or artificial intelligence.
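That train-then-freeze pattern can be sketched in a few lines. Everything below is invented for illustration (the toy task, the single-neuron perceptron, the function names); it simply shows the shape being described: all learning happens in the “lab” phase, and what gets deployed is a static input/output function.

```python
# Toy illustration of "trained in the lab, deployed as a static
# input/output system." The task (logical OR) and model (a single
# perceptron) are invented stand-ins for real audio/vision models.

def train(samples, epochs=20, lr=0.1):
    """The 'lab' phase: adjust weights from labeled examples."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - pred
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Training data for logical OR.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train(data)

def deployed_model(x1, x2):
    """The 'deployment' phase: a frozen mapping, no further learning."""
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
```

Once `train` returns, nothing about the system changes in the field: input goes in, output comes out, which is exactly the static behavior being described.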
All right, let’s explore that some, because I think you’re right. I have a book coming out in the Spring of 2018 which is 20,000 words and is dedicated to the brain, the mind and consciousness. It really tries to wrap its arms around those three concepts. So, let’s go through them, if you don’t mind, for just a minute. You started out by saying that with the brain we understand how it functions. I would love to go into that, but as far as I understand it, we don’t know how a thought is encoded. We don’t know how the memory of your 10th birthday party, or what pineapple tastes like, or any of that is actually encoded. We can’t write to it. We can’t read from it, except in the most rudimentary sense. So do you think we really do understand the brain?
I think that’s the point I was actually making is that we understand the brain at some level physiologically. We understand that there’s neurons and gray matter. We understand a little bit of physiology of the brain, but we don’t understand those things that you just mentioned, which I refer to as the “mind.” We don’t really understand how data is stored. We don’t understand how it’s recalled exactly. We don’t really understand other human functions like consciousness and feelings and emotions and how those are related to cognitive function. So, that’s really what I was saying is, we don’t understand how intelligence evolves from it, although really where we’re at is we just understand a little bit of the physiology.
Yeah, it’s interesting. There’s no consensus definition of what intelligence is, and that’s why you can point at anything and say, “well, that’s intelligent.” “My sprinkler that comes on when my grass is dry, that’s intelligent.” The mind is of course a very, shall we say, controversial concept, but I think there is a consensus definition of it that everybody can agree to, which is: it’s all the stuff the brain does that doesn’t seem, emphasis on seem, like something an organ should be able to do. Your liver doesn’t have a sense of humor. Your liver doesn’t have an imagination. All of these things. So, based on that definition, and not even getting to consciousness, not even experiencing the world, just these raw abilities, like writing a poem, or painting a great painting, or what have you: you were saying we actually have not made any real progress toward any of that. That’s gotten mixed up in this whole machine learning thing. Am I right that you think we’re still at square one with building an artificial mind?
Yeah, I mean, I don’t see a lot of difference intellectually [between] where we are now from when I was in school in the late 80s and 90s in terms of theories about the mind and theories about how we think and reason. The basis for the current machine learning revolution is largely based on neural networks which were invented in the 1960s. Really what is fueling the revolution is technology. The fact that we have the CPU power, the memory, the storage and the networking — and the data — and we can put all that together and train large networks at scale. That’s really what is fueling the amazing advances that we have right now, not really any philosophical new insights into how human intelligence works.
Putting it out there for just a minute, is it possible that an AGI, a general intelligence, that an artificial mind, is it possible that that cannot be instantiated in machinery?
That’s a really good question. I think that’s another philosophical question that we need to wrestle with. I think that there are at least two schools of thought on this that I’m aware of. I think the prevailing notion, which is I think a big assumption, is that it’s just a matter of scale. I think that people look at what we’ve been able to do with machine learning and we’ve been able to do incredible things with machine learning so far. I think people think of well, a human sitting in a chair can sit and observe the world and understand what’s going on in the world and communicate with other people. So, if you just took that head and you could replicate what that head was doing, which would require a scale much larger than what we’re doing right now with artificial neural networks, then embody that into a machine, then you could set this machine on the table there or on the chair and have that machine do the same thing.
I think one school of thought is that the human brain is an existence proof that a machine can exist to do the operations of a human intelligence. So, all we have to do is figure out how to put that into a machine. I think there’s a lot of assumptions involved in that train of thought. The other train of thought, which is more along the lines of where I land philosophically, is that it’s not clear to me that intelligence can exist without ego, without the notion of an embodied self that exists in the world, that interacts in the world, that has a reason to live and a drive to survive. It’s not clear to me that it can’t exist, and obviously we can do tasks that are similar to what human intelligence does, but I’m not entirely sure that… because we don’t understand how human intelligence works, it’s not clear to me that you can create an intelligence in a disembodied way.
I’ve had 60-something guests on the show, and I keep track of the number that don’t believe we can actually build a general intelligence, and it’s I think five. They include Deep Varma and Esther Dyson, people who even more explicitly say they don’t think we can do it. The rest of the guests have the same line of logic, which is: we don’t know how the brain works, we don’t know how the mind works, we don’t know how consciousness works, but we do have one underlying assumption, which is that we are machines, and if we are machines, then we can build a mechanical us. When any argument against that is engaged, the word that’s often offered is magic: the only way to get around that logic is to appeal to magic, to appeal to something supernatural, to appeal to something unscientific. So, my question to you is: is that true? Do you have to appeal to something unscientific for that logic to break down, or are there maybe scientific, completely causal, systemic reasons why we cannot build a conscious machine?
I don’t believe in magic. I don’t think that’s my argument. My argument is more around what is the role that the body around the brain plays, in intelligence? I think we make the assumption sometimes that the entire consciousness of a person, entire cognition, everything is happening from the neck up, but the way that people exist in the world and learn from simply existing in the world and interacting with the world, I think plays a huge part in intelligence and consciousness. Being attached to a body that the brain identifies with as “self,” and that the mind has a self-interest in, I think may be an essential part of it.
So, I guess my point of view on this is I don’t know what the key ingredients are that go into intelligence, but I think that we need to understand… Let me put it this way, I think without understanding how human consciousness and human feelings and human empathy works, what the mechanisms are behind that, I mean, it may be simply mechanical, but without understanding how that works, it’s unclear how you would build a machine intelligence. In fact, scientists have struggled from the beginning of AI even to define it, and it’s really hard to say you can build something until you can actually define it, until you actually understand what it is.
The philosophical argument against that would be like “Look, you got a finite number of senses and those that are giving input to your brain, and you know the old philosophical thought experiment you’re just a brain in a vat somewhere and that’s all you are, and you’re being fed these signals and your brain is reacting to them,” but there really isn’t even an external world that you’re experiencing. So, they would say you can build a machine and give it these senses, but you’re saying there’s something more than that that we don’t even understand, that is beyond even the five senses.
I suppose if you had a machine that could replicate atom for atom a human body, then you would be able to create an intelligence. But, how practical would it be?
There are easier ways to create a person than that?
Yeah, that’s true too, but how practical is a human as a computing machine? I mean, one of the advantages of the computer systems that we have, the machine learning-based systems that we call AI is that we know how we represent data. Then we can access the data. As we were talking about before, with human intelligence you can’t just plug in and download people’s thoughts or emotions. So, it may be that in order to achieve intelligence, you have to create this machine that is not very practical as a machine. So you might just come full circle to well, “is that really the powerful thing that we think it’s going to be?”
I think people entertain the question because of this deeper question: “Are people simply machines? Is there anything more going on? Are you just a big bag of chemicals with electrical pulses going through you?” I think emotionally engaging that question is why they do it, not because they necessarily want to build a replicant. I could be wrong. Let me ask you this. Let’s talk about consciousness for a minute. To be clear, people say we don’t know what consciousness is. This is of course wrong. Everybody agrees on what it is: it is the experiencing of things. It is the difference between a computer being able to sense temperature and a person being able to feel heat. It’s like that difference.
It’s been described as the last scientific question we don’t really know how to ask, and we don’t know what the answer would look like. I put eight theories together in this book I wrote. Do you have a theory, just even a gut reaction? Is it an emergent property? Is it a quantum property? Is it a fundamental law of the universe? Do you have a gut feel of what direction you would look to explain consciousness?
I really don’t know. I think that my instinct is along the lines of what I talked about recently with embodiment. My gut feel is that a disembodied brain is not something that can develop a consciousness. I think consciousness fundamentally requires a self. Beyond that, I don’t really have any great theories about consciousness. I’m not an expert there. My gut feel is we tend to separate, when we talk about artificial intelligence, we tend to separate the function of mind from the body, and I think that may be a huge assumption that we can do that and still have self and consciousness and intelligence.
I think it’s a fascinating question. About half of the guests on the show just don’t want to talk about it. They just do not want to talk about consciousness, because they say it’s not a scientific question and it’s a distraction. Half of them, very much, it is the thing, it’s the only thing that makes living worthwhile. It’s why you feel love and why you feel happiness. It is everything in a way. People have such widely [divergent views], like Stephen Wolfram was on the show, and he thinks it’s all just computation. To that extent, anything that performs computation, which is really just about anything, is conscious. A hurricane is conscious.
One theory is that consciousness is an emergent property: just as you are trillions of cells that don’t know who you are, and none of them has a sense of humor, you somehow have a distinct emergent self and a sense of humor. There are people who think the planet itself may have a consciousness. Others say that activity in the sun looks a lot like brain activity, and perhaps the sun is conscious, and that is an old idea. It is interesting that when children draw an outdoor scene, they always put a smiling face on the sun. Do you think consciousness may be more ubiquitous, not unique to humans? That it may be in all kinds of places? Or do you, at a gut level, think it’s a special human [trait], one you might extend to certain other animals?
That’s an interesting point of view. I certainly see how it’s a nice theory, the idea that it’s a continuum, which I think is what he’s saying: that there’s some level of consciousness in even the simplest thing. I think this is more along the lines of the “it’s just a matter of scale” type of philosophy, which says that at a larger scale what emerges is a more complex and meaningful consciousness.
There’s a project in Europe you’re probably familiar with, the Human Brain Project, which is really trying to build an intelligence through that kind of scale. The counter to it is the OpenWorm project: they’ve sequenced the genome of the nematode worm, whose brain has 302 neurons, and for 20 years people have been trying to model those 302 neurons in a computer to build, as it were, a digital functioning nematode worm. By one argument they’re no closer to cracking that than they were 20 years ago. So the scale question has its adherents at both extremes.
Let’s switch gears now and put that world aside and let’s talk about the world of machine learning, and we won’t call it intelligence anymore. It’s just machine learning, and if we use the word intelligence, it’s just a convenience. How would you describe the state of the art? As you point out, the techniques we’re using aren’t new, but our ability to apply them is. Are we in a machine learning renaissance? Is it just beginning? What are your thoughts on that?
I think we are in a machine learning renaissance, and I think we’re closer to the beginning than to the end. As I mentioned before, the real driver of the renaissance is technology. We have the computational power to do massive amounts of learning. We have the data, and we have the networks to bring it all together and the storage to store it all. That’s really what has allowed us to realize the theoretical capabilities of complex networks as we model input/output functions.
We’ve done amazing things with that particular technology. It’s very powerful. I think there’s a lot more to come, and it’s pretty exciting the kinds of things we can do with it.
There’s a lot of concern, as you know, the debate about the impact that it’s going to have on employment. What’s your take on that?
Yeah, I’m not really concerned about that at all. I think that largely what these systems are doing is allowing us to automate a lot of things, and that’s happened before in history. The concern that I have is not so much about removing jobs, because the entire history of the industrial revolution [is] we’ve built technology that has made jobs obsolete, and there are always new jobs. There are so many things to do in the world that there are always new jobs. I think the concern, if there’s any about this, is the rate of change.
I think at a generational level, it’s not a problem. The next generation is going to be doing jobs that we don’t even know exist right now, or that don’t exist right now. I think the problems may come within a single generation’s transformation, if you start automating jobs that belong to people who cannot be retrained in something else. But I think that there will always be new jobs.
Is it possible that there’s a person out there who cannot be retrained to do meaningful work? We’ve had 250 years of unending technological advance that would have blown the mind of somebody in 1750, and yet we don’t have anybody of whom we can say: no, they can’t do anything. Assuming that you have full use of your body and mind, there’s not a person on the planet who cannot, in theory, add economic value, all the more if they’re given technology to do it with. Do you really think there will be people who “cannot be retrained”?
No, I don’t think it’s a “can” issue. I agree with you. I think that people can be retrained and, like I said, I’m not really worried that there won’t be jobs for people to do, but I think that there are practical problems with the rate of change. I mean, we’ve seen it in recent decades with manufacturing jobs, a lot of which have disappeared overseas. There’s real economic pain in the regions of the country where those jobs were prominent, and I don’t think there’s any theoretical reason why people can’t be retrained. Our government doesn’t really invest in that as much as it should, so there’s a practical problem that people don’t get retrained. That can cause shifts, but I think those are temporary. I personally don’t see long-term issues with transformations in technology.
It’s interesting because… I mean, this is a show about AI, which obviously holds it in high regard, but there have been other technologies that have been as transformative. An assembly line is a kind of AI. That was adopted really quickly. Electricity was adopted quickly, and steam was adopted. Do you think machine learning really is being adopted all that much faster, or is it just another equally transformative technology like electricity or something?
I agree with you. I think that it’s transformational, but I think it’s probably creating as many jobs as it’s automating away right now. For instance, in our industry, which is contact centers, a big trend is trying to automate, basically to digitize, a lot of the communications to take load off the telephone call center. What most of our enterprise customers have found with their contact centers is that the more they digitize, their call volume actually goes up. It doesn’t go down. So, there’s some conflicting evidence there about how much this is actually going to take away jobs.
I am of the opinion I think anyone in any endeavor understands there’s always more to do than you have time to do. Automating things that can be automated I generally feel is a positive thing, and putting people to use in functions where we don’t know how to automate things, I think is always going to be an available path.
You brought up what you do. Tell us a little bit about Cogito and its mission.
Our mission is centered around helping people have better conversations. We’re really focused on the voice stream, and in particular our main business is in customer call centers where what we do is our technology listens to ongoing conversations, understands what’s going on in those conversations from an interactive and relationship point of view, from a behavioral point of view, and gives agents in real-time, feedback when conversations aren’t going well or when there’s something they can do to improve the conversation.
That’s where we get to the concept of augmented intelligence, which is using these machine-learning-endowed systems to help people do their jobs better, rather than trying to replace them. That’s a tremendously powerful paradigm. There are trends, as I mentioned, toward trying to automate these things away, but often our customers find it more valuable to increase the competence of the people doing the jobs, because those jobs can’t be completely automated, rather than trying to automate away the simple things.
Hit rewind, back way up with Cogito because I’m really fascinated by the thesis that there’s all of this. There’s what you say and then there’s how you say it. That we’re really good with one half of that equation, but we don’t apply technology to the other half. Can you tell that story and how it led to what you do?
Yeah, imagine listening to two people having a conversation in a foreign language that you don’t understand. You can undoubtedly tell a lot about what’s going on in that conversation without understanding a single word. You can tell whether people are angry at each other. You can tell whether they’re cooperating or hostile. You can tell a lot of things about the interaction without understanding a single word. That’s essentially what we’re doing with the behavioral analysis of how you say it. So, when we listen to telephone conversations, that’s a lot of what we’re doing is we’re listening to the tenor and the interaction in the conversation and getting a feel for how that conversation is going.
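As a toy sketch of what analyzing the “how you say it” channel might look like, the features below are computed purely from turn timings, with no words at all. This is not Cogito’s actual method; the segment format, the feature names, and the example timings are all invented for illustration.

```python
# Crude "interaction" features from turn timings alone: who spoke when,
# how balanced the talk time was, how quickly speakers responded.
# A hypothetical stand-in for real behavioral signal processing.

def interaction_features(turns):
    """turns: list of (speaker, start_sec, end_sec), ordered by start time."""
    switches = sum(1 for a, b in zip(turns, turns[1:]) if a[0] != b[0])
    talk_time = {}
    for speaker, start, end in turns:
        talk_time[speaker] = talk_time.get(speaker, 0.0) + (end - start)
    total = sum(talk_time.values())
    gaps = [b[1] - a[2] for a, b in zip(turns, turns[1:])]
    return {
        "turn_switches": switches,                   # back-and-forth count
        "balance": min(talk_time.values()) / total,  # 0.5 = perfectly even
        "avg_gap_sec": sum(gaps) / len(gaps),        # response latency
    }

convo = [("agent", 0.0, 4.0), ("caller", 4.5, 9.0),
         ("agent", 9.2, 12.0), ("caller", 12.1, 15.0)]
print(interaction_features(convo))
```

Signals like these carry exactly the kind of information you pick up from a conversation in a language you don’t speak: the rhythm and the balance, not the content.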
I mean, you’re using “listen” here colloquially. There’s nothing really listening. There’s a data stream that’s being analyzed, right?
So, I guess it sounds like they’re like the parents [of] Charlie Brown, like “waa, wa waa.” So, it hears that and can figure out what’s going on. So, that sounds like a technology with broad applications. Can you talk about in a broad sense what can be done, and then why you chose what you did choose as a starting point?
It actually wasn’t the starting point. The application that originally inspired the company was more of a mental health application. There’s a lot of anecdotal understanding that people with clinical depression or depressed mood speak in a characteristic way. So the original inspiration for building the company and the technology was to use in telephone outreach operations with chronically ill populations that have very high rates of clinical depression and very low rates of detection and treatment of clinical depression. So, that’s one very interesting application that we’re still pursuing.
The second application came up in that same context, the context of health and wellness call centers: the concept of engagement. A lot of the benefit in healthcare comes from preventative care, so there’s been a lot of emphasis on helping people quit smoking, eat better diets, and things like that. These programs normally take place over the telephone, so there are conversations, but they’re usually only successful when the patient or member is engaged in the process. So, we used this sort of speech and conversational analysis to build models of engagement, which allow companies to either react to under-engaged patients or not waste their time on them.
The third application, which is what we’re primarily focused on right now, is agent interaction, the quality of agent interaction. There’s a huge amount of value, for big companies that are consumer-oriented, and particularly those that have membership relationships with customers, in being able to provide a good human interaction when there are issues. In customer service centers, it’s very difficult, if you have thousands of agents on the phone, to understand what’s going on in those calls, much less improve it. A lot of companies are really focused on improvement. We’re the first system that allows these companies to understand what’s going on in those conversations in real-time, which is the moment of truth where they can actually do something about it. We allow them to do something about it by giving information not only to supervisors, who can provide real-time coaching, but also to agents directly, so that they can recognize when their own conversations are going south and be able to correct that and have better conversations themselves. That’s the gist of what we do right now.
I have a hundred questions all running for the door at once with this. My first question is you’re trying to measure engagement as a factor. How generalizable is that technology? If you plugged it into this conversation that you and I are having, does it not need any modification? Engagement is engagement is engagement, or is it like, Oh no, at company X it’s going to sound different than a phone call from company Y?
That’s a really good question. In some general sense an engaged interaction, if you took a minute of our conversation right now, it’s pretty generalizable. The concept is that if you’re engaged in the topic, then you’re going to have a conversation which is engaged, which means there’s going to be a good back and forth and there’s going to be good energy in the conversation and things like that. Now in practice, when you’re talking about in a call center context, it does get trickier because every call center has potentially quite different shapes of conversations.
So, one call center may need to spend a minute going through formalities and verification and all that kind of business, and that part of the conversation is not the part you actually care about; the part you care about is where you’re actually talking about a meaningful topic. Whereas another call center may have a completely different shape of conversation. What we find we have to do, and this is where machine learning comes in handy, is take our general models of engaged interactions and adapt them in a particular context to understand engaged overall conversations. Those are going to vary from context to context. So, that’s where adaptive machine learning comes into play.
My next question is: from person to person, how consistent is this? No doubt if you had a recording of me for an hour, you could get a baseline and then measure my relative change from that. But when you drop in, do Bob X of Tacoma, Washington, and Suzie Q of Toledo exhibit consistent traits or attributes of engagement?
Yeah, there are certainly variations in people’s speaking styles. You look at different areas of the country, different dialects and things like that. Then you also look at different languages, and those are all going to be a little bit different. When we’re talking about engagement at a statistical level, these models work really well. So the key, when thinking about product development for these, is to focus on providing tools that are effective at a statistical level. Looking at one particular person, your model may indicate that this person is not engaged when maybe that is just their normal speaking style, but statistically it’s generalizable.
My next question is: is there something special about engagement? Could you, if you wanted, tell whether somebody’s amused, or intrigued, or annoyed, or outraged? There’s a whole palette of human emotions. I guess I’m asking: with engagement, like you said, it’s not so much tonal qualities you’re listening for; you’re counting back-and-forths, which is kind of a numbers [thing], not a…. So on these other factors, could you do that hypothetically?
Yeah, in fact, our system is a platform for doing exactly that sort of thing. Some of those things we’ve done. We build models for various emotional qualities and things like that. So, that’s the exciting thing is that once you have access to these conversations and you have the data to be able to identify these various phenomena, you can apply machine learning and understand what are the characteristics that would lead to a perception of amusement or whatever result you’re looking for.
Look, I applaud what you’re doing. Anybody who can make phone support better has my wholehearted support, but I wonder if where this technology would be heading is kind of an OEM thing, where it’s put into caregiving robots, for instance, which need to learn how to read the emotions of the person they’re caring for and modulate what they say. It’s like a feedback loop for self-teaching, just that use case: the robot caregiver that uses this [knows] she’s annoyed, he’s happy, or whatever, as a feedback loop. Am I way off in sci-fi land, or could that be done?
No, that’s exactly right, and it’s an anticipated application of what we do. As we get better and better at being able to understand and classify useful human behaviors and then inferring useful human emotional states from those behaviors, that can be used in automated systems as well.
Frequent listeners to the show will know that I often bring up Weizenbaum and ELIZA. The setup is that Weizenbaum, back in the 60s, made this really simple chat bot where you would say, “I don’t feel good today,” and it would say, “Why don’t you feel good today?” “I don’t feel good today because of my mother.” “Why does your mother make you not feel good?” It’s this really basic thing, but what he found was that people were connecting with it, and this so disturbed him that he unplugged it. He said that when the computer says “I understand,” it’s just a lie: there’s no “I,” which it sounds like you would agree with, and there’s nothing that understands anything. Do you worry that that is a [problem]? Weizenbaum’s reaction would be: “That’s awful.” If that thing is manipulating an old person’s emotions, that’s just a terrible, terrible thing. What would you say?
I think it’s a danger. Yeah, I think we’re going to see that sort of thing happen for sure. People look at chat bots and say, “Oh look, that’s an artificial intelligence, that’s doing something intelligent,” and it’s really not, as ELIZA proves. You can just have a little rules-based system on the back end, type stuff in, and get stuff out. A verbal chat bot might use speech-to-text as an input modality and text-to-speech as an output modality, with a rules-based unit on the back end, and it’s really doing nothing intelligent, but it can give the illusion of intelligence because you’re talking to it and it’s talking back to you.
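A rules-based unit of the kind being described fits in a dozen lines. The patterns and canned replies below are invented stand-ins for ELIZA’s actual script, but the mechanism is the same: a regex match in, a template substitution out, and no understanding anywhere in between.

```python
import re

# Each rule is (pattern, reply template). ELIZA-style: reflect the
# user's own words back with a canned transformation.
RULES = [
    (re.compile(r"i don't feel (.+)", re.I), "Why don't you feel {0}?"),
    (re.compile(r"i feel (.+) because of (.+)", re.I),
     "Why does {1} make you feel {0}?"),
    (re.compile(r"my (mother|father) (.+)", re.I),
     "Tell me more about your {0}."),
]

def reply(utterance):
    for pattern, template in RULES:
        m = pattern.search(utterance)
        if m:
            return template.format(*m.groups())
    return "Please go on."  # default when no rule fires

print(reply("I don't feel good today"))  # Why don't you feel good today?
```

Wrap `reply` between a speech-to-text front end and a text-to-speech back end and you have the verbal chat bot described above, still doing nothing intelligent.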
So, I think, yeah, there will be bumps along that road for sure in trying to build these technologies, particularly when you’re trying to build a system to replace a human and trying to convince the user of the system that they’re talking to a human. That’s definitely sketchy ground.
Right. I mean, I guess it’s forgivable that we don’t know, I mean, it’s all new. It’s all stuff where we’re having to kind of wing it. We’re coming up towards the end of our time. I just have a couple of closing questions, which are: Do you read science fiction? Do you watch science fiction movies or science fiction TV, and if so, is there any view of the future, any view of AI or anything like that, that you look at and think, yeah, that could happen someday?
Yeah, it’s really hard to say. I can’t think of anything. Star Wars of course used very anthropomorphized robots, and if you think of a system like HAL in 2001: A Space Odyssey, you could certainly simulate something like that. If you’re talking about information, being able to talk to HAL and have HAL look stuff up for you and then talk back to you and tell you what the answer is, that’s totally believable. Of course, the twist in 2001: A Space Odyssey is that HAL ended up having a sense of its own self and decided to make decisions. Yeah, I’m very much rooted in the present, and there’s a lot of exciting things going on right now.
Fair enough. It’s interesting that you mentioned Star Wars, which of course is set a long time ago, because somehow or another you think the movie would be different if C3PO were named Anthony and R2D2 were named George.
That would just take on a whole different… giving them names is even one step closer to that whole thing. Data in Star Trek kind of walked the line. He had a name, but it was Data.
It’s interesting, actually, to look at the difference between C3PO and R2D2. You look at C3PO and it has the form of a human, and you can ask the question: “Why would you build a robot that has the form of a human?” R2D2 is a robot which does, or could potentially do, exactly what C3PO does in the form of a whatever – a cylinder. So, it’s interesting to look at the contrast and how they imagined two different kinds of robots: one which is very anthropomorphized, and one which is very mechanical.
Yeah, you’re right, because the decision not to give R2 speech, it’s not like he didn’t have enough memory, like he needed another 30MB of RAM or something. That also was clearly deliberate. I remember reading that Lucas originally wasn’t going to use Anthony Daniels to voice C3PO. He was going to get somebody who sounded like a used car salesman, kind of fast-talking and all that, and that’s how the script was written. I’m sure it’s a literary device, but like a lot of these things, I’m a firm believer that what comes out in science fiction isn’t predicting the future. It kind of makes it. Uhura had a Bluetooth device in her ear. So, it’s kind of like whatever the literary imagining of it is will probably be what the scientific manifestation of it is, to some degree.
Yeah, the concept of the self-fulfilling prophecy is definitely there.
Well, I tell you what, if people want to keep up with you and all this work you’re doing, do you write, talk on Twitter? How can people follow what you do?
We’re going to be writing a lot more in the future. Our website, www.cogitocorp.com, is where you’ll find links to the things we’re writing about AI and the work we do here at Cogito.
Well, this has been fascinating. I’m always excited to have a guest who is willing to engage these big questions and take, as you pointed out earlier, a more contrarian view. So, thank you for your time Ali.
Thank you, Byron. It’s been fun, and thanks for having me on.
Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.