In this episode Byron and Dennis discuss machine learning.
Byron Reese: This is “Voices in AI,” brought to you by GigaOm. I’m Byron Reese. Today my guest is Dennis Laudick. He is the VP of Marketing for Machine Learning at ARM. ARM is—well, let’s just start off by saying, you certainly have several of their products. They make processors, and they have between 90% and 95% market share of mobile devices. They’ve shipped 125 billion processors and are shipping at a rate of about 20 billion a year. That’s, what, three per person per year? Welcome to the show, Dennis.
Dennis Laudick: Great. Thank you very much. Pleased to be here.
So picking up on that thread, three per person. Anybody who owns any electronics probably has four or five of your chips this year. Where would they find those? Like, walk me around the house and office—what all might they be in?
Yeah, so we are kind of one of the greatest secrets out in the market at the moment—we’re pervasive, certainly. So, I mean, ARM is responsible for the designs of processors—the CPUs, which are, ironically given this topic, the “brains,” as a lot of people call them, that go into the computer chips that power our devices. So, behind your smartphone, obviously, there is a processor which is doing all of the things that you are seeing, as well as a lot in the background. Just looking around you: TVs; I am speaking into a phone now, and it probably has a processing chip in the background doing something. Of those consumer electronic devices, the majority are probably being powered by a processor which was designed by ARM.
We do things that range from tiny sensors and watches and things like that, clear up to much larger-scale processing. So yeah, just looking around, battery-powered devices or powered consumer electronic devices around you in your home or your office, there is a good chance that the majority of those are running a processor designed by ARM, which is quite an exciting place to be.
I can only imagine. What was that movie that was out—the Kingsman movie—where once they got their chips in all the devices, they took over the world? So I assume that’s kind of the long-term plan.
I am not particularly aware of any nefarious plans, but we’ve certainly got that kind of reach.
I like that you didn’t deny it. You just said, you are not in the loop. I am with that. So let’s start at the top of the equation. What is artificial intelligence?
So it’s a good question. I think the definitions around it are a bit unsettled at the moment. I mean, certainly from my perspective, I tend to view things pretty broadly, and I would probably best describe it as “a machine trying to mimic parts of what we consider to be human intelligence.” So it’s a machine mimicking either a part, or several parts, of what humans consider to be intelligent. Not exactly a concrete term, but probably is—
I think it’s a great definition except for problems with the word “artificial” and problems with the word “intelligence.” Other than that, I have no problem. In one talk I heard, you said that old tic-tac-toe programs would therefore be AIs. I am with that, but that definition is so broad. The sprinkler system that comes on when my grass is dry—that’s AI. A calculator adds 2+2, which is something a person does—that’s AI. An abacus therefore would be AI; it’s a machine that’s doing what humans do. I mean, is that definition so broad that it’s meaningless, or what meaning do you tease out of it?
Yeah. That’s a good question, and certainly it’s a context-driven type of question and answer. I tend to view artificial intelligence, and intelligence itself, as kind of a continuum of ideas. So I think the challenge is to sit there and go, “Right, let’s nail down exactly what artificial intelligence is,” and that naturally leads you to saying, “Right, let’s nail down exactly what intelligence is.” I don’t think we’re to the point where that’s actually a practical possibility. You would have to start from the principle that human beings have completely fathomed what the human being is capable of, and I don’t think we’re there yet. I would be very surprised if we’ve learned everything there is to learn about ourselves.
So if you start from the concept that intelligence itself isn’t completely well understood, then you naturally fall back to the concept that artificial intelligence isn’t something that you can completely nail down. So, from a more philosophical standpoint which is quite fun, it’s not something that’s concrete that you can just say, this is the denotation of it. And, again, from my perspective, it’s much more useful if you want to look at it in a broad sense to look at it as a scale or a spectrum of concepts. So, in that context, then yeah, going back to tic-tac-toe, it was an attempt at a machine trying to mimic human intelligence.
I certainly spent a lot of my earlier years playing games like chess and so forth, where I was amazed by the fact that a computer could make these kind of assessments. And, yes, you could go back to an abacus. And you could go forward to things like, okay, we have a lot of immediate connotations around artificial intelligence, around robots and what we consider quasi-autonomous thinking machines, but that then leaves the questions around things like feelings, things like imagination, things like intuition. What exactly falls into the realm of intelligence?
It’s a pretty subjective and non-concrete domain, but although I like to look at it as a very broad continuum of ideas, you do have to approach it on a context-sensitive basis. So from a practical standpoint, as technologists, we look at different problem spaces and the technologies which can be applied to those problem spaces, and although it’s not always explicit, there is usually some very contextually driven understanding between the people talking about AI or intelligence itself.
So, when you think of different approaches to artificial intelligence—we’ve been able to make a good deal of advances lately for a few reasons. One is that the kinds of processors that do parallel processing, like you guys make, have become better and better and cheaper and cheaper, and we use more and more of them. And then we are getting better at applying machine learning, which is of course your domain, to broader problem sets.
Yeah.
Do you have an opinion? You are bound to look at a problem like, “Oh, my car is routing me somewhere strange,” and ask: is that a machine learning problem? And machine learning, at its core, is studying the past—a bunch of data from the past—and projecting that into the future.
What do you think are the strengths of that approach, and—this is what I am really interested in—what are the limits of it? Do you think, for instance, creativity, like what Banksy does, is fundamentally a machine learning problem? You give it enough cultural references and it will eventually be graffiti-ing on the wall.
Yeah.
Where do you think machine learning rocks, and where is it not able to add anything?
Yeah. That’s a really interesting question. So, I think a lot of times I get asked questions about artificial intelligence and machine learning, and they get interposed with each other. I think a lot of people—because in our childhood we all heard stories from science fiction that were labeled artificial intelligence and went off in various different directions—hear of a step forward in terms of what computers can do, and quickly extrapolate to the far-reaching elements of artificial intelligence that are still somewhere in the domain of science fiction.
So it is interesting to get involved in those discussions, but there are some practicalities in terms of what the technology is actually capable of doing. From my perspective, I think this is actually a really important wave that’s happening at the moment—the machine learning wave, as you might call it. For years and years, we’ve been developing more and more complex classical computing methodologies, progressively becoming more complex in what we can produce, and therefore increasingly sophisticated in what we could achieve against human expectations.
Simple examples that I use with people who aren’t necessarily technical are: we started out with programs that said, if the temperature is greater than 21 °C, then turn on the air conditioner, and if it’s less than 21 °C, turn it off. What you ended up with was a thermostat that was constantly flickering the air conditioning on and off. Then we became a little more sophisticated and we introduced hysteresis, and we said, I tell you what, if the temperature goes above 22 °C, turn on the air conditioner, and if the temperature goes below 19 °C, turn it off. You can take that example and extrapolate it over time, and that’s kind of what’s been happening in computing technology: we’ve been introducing more and more layers of complexity to allow more sophistication and more naturalness in our interactions with things, and in the way that things made quasi-decisions. And that’s all been well and fine, but the methodologies were becoming incredibly complex, and it was increasingly difficult to make those next steps in progression and sophistication.
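To make that hysteresis idea concrete, here is a minimal sketch in Python of the two-threshold controller described above. The 22 °C and 19 °C thresholds come from the example; the class and variable names are illustrative, not from any real thermostat firmware.

```python
# A minimal sketch of hysteresis: turn the air conditioner on above an
# upper threshold and off below a lower one, so the unit stops
# flickering around a single set point.

class HysteresisThermostat:
    def __init__(self, on_above=22.0, off_below=19.0):
        self.on_above = on_above    # degrees C: switch the AC on above this
        self.off_below = off_below  # degrees C: switch the AC off below this
        self.ac_on = False          # current state of the air conditioner

    def update(self, temperature):
        """Update the AC state for a new temperature reading."""
        if temperature > self.on_above:
            self.ac_on = True
        elif temperature < self.off_below:
            self.ac_on = False
        # Between the thresholds, keep the previous state: this "dead band"
        # is what stops the naive single-set-point version from toggling.
        return self.ac_on

thermostat = HysteresisThermostat()
for reading in [20.5, 22.3, 21.0, 19.5, 18.8, 20.0]:
    print(reading, "->", "AC on" if thermostat.update(reading) else "AC off")
```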
ImageNet, which is a bit of a cornerstone in modern machine learning, was a great example of what happened: the classic approaches were becoming more and more sophisticated, but it was difficult to really move the output and the capabilities on. And the application of machine learning, and neural networks in particular, just blew the doors open in terms of moving to the next level. You know, when I try to de-complicate what’s happened, I tend to express it as: we’ve gone from a world where we had a very deterministic approach and were trying to mimic fuzziness and approximation, to where we now have a computing approach which very naturally approximates—it does patterns and it does approximation. And it just turns out that, lo and behold, when you look at the world, a lot of things are patterns. Suddenly, the ability to understand patterns, as opposed to trying to break them up into very deterministic principles, becomes very useful. It so happens that humans do a huge amount of approximation, and that suddenly moves us much further forward in terms of what we can achieve with computing. The ability to do pattern matching and approximation doesn’t follow the linear progression of more and more complex determinism. It moves us into a fuzzier space, and it just so happens that that fuzzy space is a huge leap forward in terms of getting fundamentally deterministic machines to do something that feels more natural to human beings. So that’s a massive shift forward in terms of what we can do with computers.
Now, the thing to keep in mind there—when I am trying to explain what’s happening with machine learning to people who aren’t technologists or aren’t into the theory behind it, one way I try to simplify it is to say, “Well, listen, don’t get too worried about building the next Terminator. What we’ve managed to do, in essence, is teach computers to be much, much better at identifying cats.” There’s still a problem of, okay, what should the machine do once it’s identified a cat? So it’s not a complete shift in all of what we can do with computing. It’s a huge shift in capabilities, but we’ve still got a long way to go in terms of something like AGI and so forth. But don’t get me wrong, it’s a massive wave. I think this is a new era in terms of what we can get our machines to do. So it’s pretty exciting, but there is still a long way to go.
So you mentioned that we take these deterministic machines and get them to do approximations, but in the end they are still, at their core, deterministic and digital. No matter how much you try to obscure that in the application of the technology, is there still an inherent limit to how closely that can mimic human behavior?
That’s, again, a very good question. So you are right: at its fundamental level, a computer is basically 1s and 0s. It all breaks down to that. What we’ve managed to do over time is produce machines which are increasingly capable, and we’ve created increasing layers of sophistication and platforms that can support that. That’s nothing to be laughed at. In the technology I work with at ARM, the leaps forward in the last few years have been quite incredible in terms of what you can do. But, yeah, it always breaks down to 1s and 0s. It’s important, though, not to let the fundamentals of the technology form a constraint on its potential, because, if anything, what we have learned is that we can create increasing levels of sophistication to get these 1s and 0s to do more and more things, and to act more and more naturally in our interactions with them.
So yes, you are absolutely right, and it’s interesting to see the journey from 1s and 0s to being able to do something like natural language processing. As fascinating as that is, we can see one end of it—the beginning, which is 1s and 0s—but it’s really difficult to see where it’s going to finish in terms of what it’s capable of. What I do think is that we are still quite a long journey from something like AGI, depending on where you draw your limits in terms of what AGI is. We’ve undoubtedly taken a step forward with machine learning principles, and the research that’s going on around that is still uncovering significant steps forward.
So the world has changed in that sense. And to say that this is the end of it—I don’t think anyone would buy into that. How far can it go, to the fundamentals of your question? I don’t think we are anywhere close to reaching the end of that yet. There are probably more waves to come, and we’ve yet to explore where this new wave of machine learning is going to take us, or where its limits lie.
So I want to give you a question that I’ve been mulling lately and maybe you can help me with this. The setup goes like this: You mentioned we can teach your computer to identify cats and it turns out we needed a million cats, actually more than a million cats and a million dogs to get the computer to tell the difference reliably, right?
The interesting thing is that a human can be trained on a sample size of one. You get a human, show them a stuffed animal of an imaginary creature, and say, find that creature in all these photos. And even if it’s frozen in a block of ice, or upside down, or half obscured by a tree, or has mold growing on it, or whatever, a human goes, “Yep, there, there, there, there.” Then the normal reply to that is, well, that’s transfer learning, and that’s something humans do really well. And the way we do it well is that we have a whole lifetime of experience of seeing things in different settings, and because we can do that, we can extend it. And I used to be fine with that, but now I am not.
Now I think, you know, you can take a little kid who does not have a lifetime of learning. You can show them half a dozen photos of a cat—ten photos of a cat, whatever, a very, very small number—and then you go for a walk, and a Manx walks out, and they say, “Oh look, a cat with no tail.” How did they do that, given nobody told them sometimes cats don’t have tails? And yet the Manx had enough cat-ness about it that they still said, “Oh, that’s still category cat—sans tail, but that’s worth noting.” So how did the child, who is five years old in my example, who does not have a lifetime of “Oh, there’s a dog, and a dog without a tail; there’s an elephant, and an elephant without a tail”—where did they learn to do that?
Yeah. So that’s interesting, and I think it actually breaks down into a couple of different components: one of them being the fundamentals of the processor, so to speak—not to dehumanize humans, but the platform on which the learning is occurring—and the other being the process of learning itself. One thing I would say, to go into the more psychological, biological side of it, is that by the age of five you’ve actually done an awful lot of experimental learning. I know from my own experience I spent far too many hours bent over my two-year-old trying to keep them from doing something silly that they’d probably already done before, and there are actually a lot of experiments that have been run around this.
I mean, I am not a behavioral psychologist, but one I remember is that they ran some experiments with children around a grasshopper in a glass of milk, and it turned out in this particular experiment that up until about the age of three, children were quite happy to drink the glass of milk; they didn’t mind. But it was around the age of three that the children started deciding that, no, actually, I don’t want to drink milk with a grasshopper in it, that’s disgusting. And the principle behind this research, from what I understood, was that disgust is a learned behavior, and it tends to kick in around the age of three.
So, the rate at which humans build up information and knowledge and extensible understanding is just massive at an early age. Being able to identify a dog or a cat or a piece of pie with a huge bite out of it—even by age two, a huge amount of data has already gone into that. Now, behind that is the question of whether the human brain and the machine have the same capacity or the same capability. I think that’s a much more significant question, and that gets down to the fundamentals of the machine versus the human. And it reaches back a lot to the question of what intelligence is, and that’s again where I see a continuum of things.
So in terms of being able to identify objects, from a personal perspective, I think what we are seeing now in machine learning is really just the tip of the iceberg. We are working in a space where models are very static, where they typically involve, as you say, a vast amount of data to train. Even more so, they often involve very particular setups in terms of the models they are trained against. So, at the moment, it’s a bit of a static world in machine learning. I would expect that it’s only a matter of time until that space around static machine learning is well understood, and the natural place to go from there is into a domain of more general-purpose, more dynamic, more versatile machine learning algorithms.
So: models which can not only deal with identification of particular classes of objects, but can actually be extended to recognize orthogonal types of things; models which can dynamically update and learn as they experience. So I think, in terms of what we can do with machine learning, it really does have a long way to go—a long way towards what human beings appear to do, which is to assimilate data that is not obviously alike and form useful conclusions that are more general purpose. I think the technology, or the wave we are on at the moment, has the legs to get there. But whether this is the technology that’s going to take us into other aspects of human intelligence—the ability to imagine, the ability to feel or intuit—it’s not obvious at the moment that it lends itself to that at all.
If anything, technology continues to surprise us, and surprise me. I like Arthur C. Clarke’s line about any sufficiently advanced technology being indistinguishable from magic, and I certainly believe that’s true. We’ve seen again and again that what we think is possible is simply a matter of time. A colleague of mine was on a flight with me and said they’d watched the original Space Odyssey and were amazed by how much of what seemed like an inconceivable future at the time is now just a technical practicality. So I think there is a long way to go with the current wave of machine learning, but I am not sure it’s the right harness to take us into the domains of some of the further-out aspects of human intelligence. That falls in line with the fact that this is a pretty exciting wave that is going to change things, but it’s probably not the last wave.
So, if I can rephrase that, it sounds to me like you are saying that the narrow AI we have today is still nascent and we are still going to do amazing things with it, but it may have nothing whatsoever in common with a general intelligence, other than that they happen to share the word “intelligence.” That may be a completely different—quantum-based or who knows what—technology that we haven’t even started building. Is that true?
Correct. Yeah, that’s certainly my opinion—and I have been proved wrong repeatedly in my life, so we will see where the technology takes us. Machine learning is a new capability for machines which is not to be underestimated at all; it’s pretty amazing. But it lends itself to certain types of things, and it doesn’t lend itself to others. I am not clear on where its limits are going to be found, but I don’t think this is the tool that’s going to solve all problems. It’s a tool that can impact everything in a positive way, but it’s not going to take us to the ends of the earth.
So, assuming that’s true, I want to get back to my five-year-old again, because it sounds like you think the kinds of things I was just marveling that the five-year-old did—the cat with no tail—fall squarely in your bucket of things narrow AI can do. So I would put the question to you slightly differently. A computer should be able to do five years’ worth of living, maybe not in five minutes, but certainly in five days or five weeks.
Even if you built a sensor onto a computer that a kid could wear around their neck 24 hours a day, and you let them run free in the world, right now, at age five, the kid would still know a whole lot more than that device would. Is that, in your mind, a software problem or a hardware problem? Do we not have the chip that can do it, or do we not have the software? Do we not have the sensors? Do we not have embodiment, which we may need in order for it to teach itself? What is it you think we are missing that would at least allow that narrow AI to track with the development of that growing child?
Yeah. So, my answer is roughly all of those. So, I think it’s important to bear in mind that the human brain is an amazing thing. What we do in my company is, we spend a lot of time thinking about power efficiency and you know, sort of part of our DNA is to try to push the boundaries in terms of processing capability but to make sure that we are doing it in a very, very energy efficient way and with that goal in mind, we are always looking for a beacon. And the beacon in terms of raw processing capability and efficiency, for us in many ways, is the human brain.
The human brain’s ability to process information—I don’t have the exact numbers at hand, but there have been estimates of the rough digital equivalent, and the sheer bandwidth at which we can digest information is just massive. So I think we would be arrogant in the extreme to say that we’ve got a processor capable of supporting the same amount of information processing as a human brain. We’ve certainly made great strides in the last couple of decades, but the human brain is still the gold standard in terms of what can be achieved, and the software kind of flows on from that. So I think there’s still a long way to go. That said, I have yet to see the limits of what could be achieved on both the hardware and the software side; the pace at which they’ve been progressing has accelerated, if anything. So, there is still a long way to go to be able to match a five-year-old, or even a two-year-old, but the capability is definitely increasing over time.
Yeah. It’s funny, because you’ve got this brain, and it’s a marvel in itself, and then you ask, what are its power requirements? And it’s 20 watts. Wow—how are we going to duplicate that in 20 watts, you know, because everything we do right now is more energy intensive. So, some of the techniques in machine learning are, of course, to fit things to a linear regression, or to do classification—is that an A or a B or a C, or a dog or a cat, or whatever—and then there is clustering, where the machine is fed a lot of data and it finds clouds in this n-dimensional space, where, you know, something in that cloud has some likelihood of being such and such. So, if you basically said, “Here is a credit card transaction—is it fraudulent?” then the AI is going to ask, “Well, how much was it, and where was it purchased, and what is the item, and what kind of day,” and who knows how many different things, and then it says, “This is maybe fraud and this isn’t.”
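As a rough illustration of the clustering idea just described, here is a hedged sketch in Python: historical transactions form “clouds” in feature space, and a new transaction is scored by its distance from the nearest cloud. The features, data, and threshold here are all made up for illustration; a real fraud system would use far more dimensions and careful tuning.

```python
# A toy sketch of clustering-based fraud flagging: cluster historical
# transactions, then flag new ones that fall far from every cluster.

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Toy historical data: [amount in dollars, hour of day] per transaction.
history = np.vstack([
    rng.normal([30, 12], [10, 2], size=(200, 2)),    # daytime purchases
    rng.normal([80, 19], [20, 1.5], size=(200, 2)),  # evening purchases
])

# Learn the "clouds" of normal behavior.
model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(history)

def fraud_score(transaction):
    """Distance from the nearest cluster center; larger = more unusual."""
    distances = np.linalg.norm(model.cluster_centers_ - transaction, axis=1)
    return distances.min()

threshold = 60.0  # illustrative cutoff; would be tuned on labeled data
for tx in [np.array([35.0, 13.0]), np.array([900.0, 3.0])]:
    score = fraud_score(tx)
    print(tx, "score:", round(score, 1), "flagged:", score > threshold)
```

Note that the only “explanation” this model can give is the distance score itself—you were near the cloud or you weren’t—which is exactly the interpretability tension raised in the next question.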
You know, there is a sentiment, and a legal reality, that if an AI makes a decision that affects your life, you have the right to know why it made that decision. So my question to you: is that inherently going to limit what we are able to do with it? Because in an n-dimensional clustering space, it would be really hard to say why—the short answer is, you were in the cloud and this other person wasn’t. If you were to go to Google and say, “I rank #5 for such-and-such a search, and my competitor ranks #1—why?” they might very well say, “We don’t know.” So, how do you thread that needle?
So that’s a fascinating question. You are absolutely right. There’s kind of been a trend in society around, well, we think we understand what computers are capable of—we do understand what computers are capable of—and we try to build a human world around this, which is enjoyable or meets our social norms. And that has been, to date, largely based around the fact that computers are deterministic and they work in the classical deterministic algorithms and that those were reproducible, and so forth and so on. We kind of, as human beings, molded our world around those principles and it’s a progressive society and we continually mold our expectations and the rules of social norms to make us comfortable in that space.
Now, you are very right that when you get into the domain of machine learning, you are dealing with a technology whose decisions are largely not reproducible. So the traceability and the determinism of the decisions becomes a problem—or rather, there is a shift in what’s possible. From my perspective, this plays out across a range of different domains. One of the places where they are talking about this is automobiles, for example: machine learning moves the capabilities of computing and opens up a huge range of benefits that can be delivered into the automotive space. A lot of accidents and fatalities are caused by human error, and being able to hand more and more support to the driver, or do many things for them on a machine basis, potentially has the capability to save a lot of lives and a lot of distress. So that’s fantastic, but at the same time, it’s a heavily regulated industry that’s become used to determinism. And suddenly you have this thing where we can produce a huge amount of benefit for humankind, but it doesn’t follow the social norms that we’ve constructed around us to date.
I think this is causing a quandary in a lot of different spaces and even at some government levels. From my perspective, it’s interesting because a lot of the discussion today has been around what needs to be put in place around the technology, what are the constraints around the technology, how do we mold our views of the world today to get this new technology to fit into it. Personally I think that’s a very wrong way to look at it, because what we’ve had with machine learning and what we’ve currently got in front of us is a huge shift in what we can achieve with machines. And, as I said, it’s a principle which is now established which is only really getting started in terms of what it’s capable of and what it can be applied to.
And you know, there’s a lot of debate around is it good, is it bad, and you can find examples that are inherently good or inherently bad, but if you abstract far enough away from it, there’s a couple of principles I think that are important. One of them is, technology, in and of itself, is effectively inert. It’s not a question of it being good or bad. It can be used for positive or it can be used for negative. It doesn’t really inherently have a view on that. It’s about how the human beings normalize it in society, and you know, you can look at examples like speech synthesis.
So machine learning brings speech recognition to a level where it can be used for security purposes. It’s also capable of synthesizing speech from limited samples to circumvent that same security. So that’s a good example of a zero-sum game. From my perspective, the real question around machine learning isn’t how we get this technology to mold into our society. It’s about recognizing that what we can achieve has suddenly changed, and getting society and human beings to move with that—to remold their world around these new capabilities and rebuild the social norms so that they can harness the huge benefits this technology can bring, while at the same time making sure the social norms are in place so that it doesn’t become the equivalent of chemical weapons—where, similar to chemical weapons, we say as a society: that’s not allowed, we are not going to tolerate that.
So, I think that the question around the technology, around machine learning, really is that human beings and societies need to recognize that this is a shift in capabilities. We need to look at these capabilities and reconstruct our social norms so that we are again happy with the positives we can get and can benefit from them, while at the same time putting up barriers against what could be done negatively—and that’s something that has to happen with any technology advancement. I do think the focus really needs to be on society. And around reproducing a particular decision, I think we can view that in terms of our existing social norms—we can look at it again as human beings and ask, right, what do we consider to be acceptable? I am pretty confident that we will be able to reach those social norms; it’s just a question of the approach we take and how quickly we get there. Personally, I feel it starts from just embracing the technology and appreciating that it’s here—let’s understand it and mold it into something that’s positive for us.
So let’s talk a little bit about IoT devices. You know that there’s been this struggle for 2,500 years between code makers and code breakers and there’s a longstanding unsettled debate as to who has the easier job. And then in computers, you had the same thing where you have people who make viruses and Trojan horses, then people who try to detect and prevent them. And they largely stay in check because when one makes an advance, the other one figures out how to counter it and then they patch the software and then they find the hole in that and then there’s another patch and we muddle through. I had a security person on the show and I said, you know, what’s your biggest concern about the future and he said, oh, you know that we are connecting billions of devices to the internet that we do not have the capability of upgrading and therefore if vulnerabilities are found in them, we don’t have a way to fix them.
So if somebody finds a way to turn on a toaster oven that’s connected to the internet, there is not really a way to fix that. What are your thoughts on that? Is that a real concern and is it intractable, is there a solution, what would you say?
Yeah. I think it’s definitely something to be taken seriously. It’s something that I know we are certainly very active around and we have been for quite some time. What I am pleased about is the fact that it’s become very topical. So, to kind of go back to your question, it is a concern. It is a genuine concern. We are attaching more and more devices to the internet. We’ve seen early examples where someone was able to gain control of a camera in a casino and people were able to launch denial of service attacks because they’d taken over a class of devices in the home. So I think the examples are there to move beyond the question of whether or not this is something we will need to be concerned about. And we are connecting more and more devices at a huge rate and these devices are more and more intelligent, and it just flows from that that we really do need to take security quite seriously.
I think there’s been a range of events, particularly over the course of the last couple of years—retail credit card machines being compromised, the cameras I mentioned, and so forth—that have really woken everyone up. And it’s interesting: within the sort of fundamental technologies that we work on, it’s something we’ve been taking very, very seriously for a long time, but it wasn’t really something that everybody took seriously, and to some extent we felt like we were banging a drum that no one was marching to. But events have driven it to the forefront of people’s thinking a lot more lately, and that’s been a positive thing to see.
One of the things that we view, again from a platform technology perspective, is that you really can’t treat security as an afterthought. Many years ago, there were a lot of people who would build devices the way they had before and then say, oh, hold on, somebody said I have to have some security—why don’t we bolt some on at the end? It doesn’t really work that way. You can achieve a certain level that way, but it’s easy to circumvent. So you really need to think about it at the very fundamentals. It needs to be as integral to the design as the 1s and 0s that you start with. If you do that, then that’s the right approach. Will we ever get to perfection? Probably not, but it certainly needs to be treated with the seriousness and gravitas it deserves, and I think people are starting to do that. We’ve seen people start to look at the security aspects of a device from the very beginning, at its very inception, and carry that through to the end—thinking about things like the fact that we need to be able to manage and update these devices, and so forth.
So yes, I think it’s a genuine concern. It’s something we do need to take very seriously, and like I said, the examples are already there. The positive note is that, in the trends we are seeing, from the low levels of design on up, people are actually taking it very seriously—and that’s globally. Up until last summer, I lived in China for five years, and over that time I saw it being taken much more seriously there too. So yeah, I think we are headed in the right direction. We’ve still got further to go, and, again, I think there’s lots of room for innovation in terms of what people can do around security. Will we ever get out of the cat-and-mouse chase? I am not sure we ever will, but it’s incumbent on the people with the white hats to do the best they can from the very beginning, and that seems to be the direction people are heading.
My final question is this: when the media gets hold of these topics—artificial intelligence and machine learning, automation, the effect on jobs, security, privacy, all of them—there is often kind of a dystopian narrative that’s put forward. So I just want to ask you flat out: are you optimistic about the future, especially with regard to these technologies? Do you think they are the empowering, wealth-generating, information-freeing, cognitive-skill-enhancing technologies that are going to transform the world into a better place? Or is the jury still out and we don’t know? Or is there always going to be this dystopian narrative breathing over our shoulder?
Yeah. So I think it’s a little bit of both, to be honest with you. At the moment, there is certainly a huge amount of dialogue around dystopian views of the future. To some degree, you see this whenever there’s an element of the unknown. It’s very easy to paint the worst, and in some ways that’s probably healthy, because it means you try to build the new world in such a way that it’s safe and keeps you out of those kinds of situations. So I am not saying it’s a bad thing, but I do think it’s fueled a lot by the unknown. We’ve had a significant jump forward in terms of what the technology can do; where it’s going to lead us is almost impossible to say. I think those in the technology space have a sense of the limits of where it can take us—and it’s far from those dystopian domains, or even the AGI-type domains—but they can’t see the end either, so we can’t deterministically say where the limits of the capabilities are. That leaves the world outside the technology sphere with a huge amount of uncertainty and fear, and I think that’s what’s generating a lot of the dialogue in the market. And those discussions are healthy: thinking about what we find acceptable and unacceptable in the future is a perfectly sensible discussion to be having.
So is it actually going to produce a dystopian world? I am an optimist. I see machine learning, what it can do, the positives it can bring—just what it’s doing in the medical space alone is incredible in terms of improving human health, and what it can do in automotive and so forth is, again, quite incredible. Projecting into the future, there are a lot of questions around what’s going to happen with jobs and ethics and so forth. I am not going to sit here and say I have a crystal ball that makes it clear any more than anyone else does, but I have an inherent belief in human society. I do think we are going to have some disruption while we reconstruct our social norms around what’s allowed and what’s not. But, as I said earlier, the technology is inert, so it really comes down to how we as human beings decide to manifest its capabilities. Although there may be individuals with a particularly dark, nefarious side to them, history would suggest that as a collective and as a whole, we tend to build our social norms of what can and can’t be done in a positive direction.
So I have a tremendous amount of faith that this is all ultimately happening within the apparatus of human society, and that we will drive the capabilities, and what actually gets achieved, in a largely positive direction. From my standpoint, I think there is a massive amount of benefit to be had from machine learning, and I am personally very excited about what it might produce, even if it is as simple as not having to worry about losing the remote for my TV. Sure, there is potential to abuse and misuse it and bring about negative consequences, as there is with any new technology—and to some degree almost any classical technology—but I do have great faith in society’s guidance and in where this will actually end up being manifested.
Well, that’s a wonderful place to leave it. I want to thank you for an exciting hour of challenging and interesting thoughts.
Likewise, it’s been very interesting.
Byron explores issues around artificial intelligence and conscious computers in his upcoming book The Fourth Age, to be published in April by Atria, an imprint of Simon & Schuster. Pre-order a copy here.