In this episode, Byron and Manoj discuss cognitive computing, consciousness, data, DARPA, explainability, and superconvergence.
Byron Reese: This is Voices in AI, brought to you by Gigaom. Today my guest is Manoj Saxena. He is the Executive Chairman of CognitiveScale. Before that, he was the General Manager of IBM Watson, the first General Manager, in fact. He’s also a successful entrepreneur who founded and sold two venture-backed companies within five years. He’s the Founding Managing Director of the Entrepreneur’s Fund IV, a 100-million-dollar seed fund focused exclusively on cognitive computing. He holds an MBA from Michigan State University and a Master’s in Management Sciences from the Birla Institute of Technology and Science in Pilani, India. Welcome to the show, Manoj.
Manoj Saxena: Thank you.
You’re well-known for eschewing the term “artificial intelligence” in favor of “cognitive computing”; even your bio says cognitive computing. Why is that?
AI, to me, is the science of making intelligent systems and intelligent machines. Most of AI is around replacing the human mind and creating systems that do the jobs of human beings. I think the biggest opportunity, and it has been proven out in multiple research reports, is augmenting human beings. So, AI for me is not artificial intelligence; AI for me is augmented intelligence. It’s how you could use machines to augment and extend the capabilities of human beings. And cognitive computing uses artificial intelligence technologies, and others, to pair man and machine in a way that augments human decision-making and human experience.
I look at cognitive computing as the application of artificial intelligence and other technologies to create what I call the Iron Man J.A.R.V.I.S. suit, something that makes every human being a superhuman being. That’s what cognitive computing is, and that was, frankly, the category we started when I was running IBM Watson, as what we believed was the next big thing to happen in IT and in the enterprise.
When AI was first conceived, and they met at Dartmouth and all that, they thought they could kind of knock it out in a summer. And I think the thesis was, as Minsky later said, that just as physics had just a few laws, and electricity had just a few laws, they thought intelligence had just a couple of laws. And then AI had a few false starts, expert systems and so forth, but right now there’s an enormous amount of optimism about what we’re going to be able to do. What’s changed in the last, say, decade?
I think there are a couple of dimensions to that. One is, when AI initially got going, the whole intention was, “AI to model the world.” Then it shifted to, “AI to model the human mind.” And now, where I believe the most potential is, is “AI to model human and business experiences.” Because each of those earlier goals is gigantic. The first ones, “AI to model the world” and “AI to model the mind,” are massive exercises. In many cases, we don’t even know how the mind works, so how do you model something that you don’t understand? And the world is too complex and too dynamic to model something that large.
I believe the more pragmatic way is to use AI to model micro-experiences, whether it’s an Uber app or a Waze, or to model a business process, whether it’s claim settlement, or underwriting, or management of diabetes. I think that’s what the third age of AI will be focused on: not modeling the world or modeling the mind, but modeling human experiences and business processes.
So is that saying we’ve lowered our expectations of it?
I think we have specialized it. If you look at the human mind, again, you don’t go from being a child to a genius overnight, let alone a genius that understands all sciences and all languages and all countries. I think we have gotten more pragmatic and more outcome-driven, rather than research- and science-driven, about how and where to apply AI.
I notice you’ve twice used the word “the mind,” and not “the brain.” Is that deliberate, and if so, where do you think “the mind” comes from?
I think there is a lot of hype, and there is a lot of misperception, about AI right now. I like saying that AI today is both: AI equals “artificially inflated,” and AI equals “amazing innovations.” And I think in the realm of “AI equals artificially inflated,” there are five myths. The first myth is that AI equals replacement of the human mind. And I separate the human brain from the human mind, and from human consciousness. So, at best, what we’re trying to do in certain parts of AI is emulate functions of the human brain, not the human mind, let alone human consciousness.
We talked about this last time; we don’t even know what consciousness is, other than a doctor saying whether the patient is dead or alive. There is no consciousness detector. And the human mind, there is a saying that you probably need a quantum machine to really figure out how a human mind works—it’s not a Boolean machine or a von Neumann machine; it’s a different kind of a processor. But a human brain, I think, can be broken down and can be augmented through AI to create exceptional outcomes. And we’ve seen that happen in radiology, on Wall Street with the quants, and in other areas. I think that’s much more exciting, to apply AI pragmatically into these niches.
You know, it’s really interesting because there’s been a twenty-year effort, the OpenWorm Project, to take the nematode worm’s brain, which is 302 neurons, and model it. And even after twenty years, people on the project say it may not be possible. And so, if you can’t model a nematode… One thing is certain: you’re not going to do a human before you do a nematode worm.
Exactly. You know, the way I see that, Byron, is that I’m more interested in “richer,” not “smarter.” We need to get smarter, but we equally need to get richer. By “richer,” I don’t mean just making money; by “richer,” I mean: how do we use AI to improve our society, and our businesses, and our way of life? That’s where I think coming at it “outcome in,” rather than “science out,” is the more pragmatic way to apply AI.
So, you’ve mentioned five misconceptions, that was one of them. What are some of the other ones?
The first misconception was that AI equals replacing the human mind. The second misconception is that AI is the same as natural language processing, which is far from the truth—NLP is just a technique within AI. It’s like saying, “My ability to understand and read a book is the same as my brain.” That’s the second misconception.
The third is that AI is the same as big data and analytics. Big data and analytics are tools that are used to capture more input for an AI to work on. Saying that big data is the same as AI is like saying, “Just because I can sense more, I can be smarter.” All big data is giving you is more input; it’s giving you more senses. It’s not making you smarter, or more intelligent. That’s the third myth.
The fourth myth is that AI is something that is better implemented horizontally rather than vertically. I believe true AI, and successful AI—particularly in the business world—will have to be verticalized AI. Because it’s one thing to say, “I’ve got an AI.” It’s another thing to say, “I have an AI that understands underwriting,” versus an AI that understands diabetes, versus an AI that understands Super Bowl ads. Each of these requires a domain-specific optimization of data and models and algorithms and experience. That’s the fourth one.
The fifth one is that AI is all about technology. At best, AI is only half about technology. The other half of the equation has to do with skills, with new processes and methods, and with governance of how to manage AI responsibly in the enterprise. Just like when the Internet came about, you didn’t have the methods and processes to create a web page, to build a website, to keep the website from getting hacked, to manage updates of the website. Similarly, there is a whole discipline of AI lifecycle management, and that’s what CognitiveScale focuses on: how do you create, deploy, and manage AI responsibly and at scale?
Because, unlike traditional IT systems, which are mostly rules-based and do not learn, AI-based systems are pattern-based: they learn from patterns, and they have an ability to self-learn and geometrically improve themselves. If you can’t get visibility and control over these AI systems, you could have a massive problem of what CognitiveScale calls “rogue AI,” which is irresponsible AI. You know the character Chucky from the horror movie? It’s like having a bunch of Chuckys running around in your enterprise, opening up your systems. What is needed is a comprehensive, end-to-end view of managing AI, from design to deployment to production, and governance of it at scale. That requires a lot more than technology; it requires skills, methods, and processes.
When we were chatting earlier, you mentioned that some people were having difficulty scaling the projects they began in their enterprise, making them enterprise-ready. Talk about that for a moment. Why is that, and what’s the solution?
Yes. I’ve talked to over six hundred customers just in the last five years—everything from the IT level to the board and CEO level. There are three big things going on that they’re struggling with in getting value out of AI. Number one is that AI is seen as something that can be done by data scientists and analytics people. AI is far too important to be left to just data scientists. AI has to be done as a business strategy. AI has to be done top-down to drive business outcomes, not bottom-up as a way of finding data patterns. That’s the first part. I see a lot of science projects that are happening. One of the customers called it darts versus bubbles. He says, “There are lots of darts of projects going on, but how do I know where the big bubbles are, the ones that really move the needle for the multibillion-dollar business that I have?” There is a lot of, I call it, bottom-up engineering experimentation going on that is not moving the needle. That’s one thing.
Number two is that the data scientists and application developers are struggling with taking these projects into production, because they are not able to provide the fundamental capabilities that an AI needs in an enterprise, such as explainability. I believe 99.9% of the AI companies that are funded today will not make it in the next three years, because they lack some fundamental capability, like explainability. It’s one thing to find pictures of cats on the internet using a deep learning network; it’s another thing to explain to a chief risk officer why a particular claim was denied, and the patient died, and now they have a hundred-million-dollar lawsuit. The AI has to be responsible, trustworthy, and explainable; able to say why that decision was made at that time. Because of the lack of these kinds of capabilities—and there are five such capabilities that we call enterprise-grade AI—most of these projects are not able to move into production, because they’re not able to meet the requirements from a security and performance perspective.
And then, last but not least, these skills are very scarce. There are very few people with them. Someone told me there are only seven thousand people in this world who have the skills to understand and run AI models and networks like deep learning and others. Imagine that, seven thousand. I know of a bank that has twenty-two thousand developers, one bank alone. There is a tremendous gap between the way AI is being practiced today and the skills that are available to get it production-ready.
That’s another thing that CognitiveScale is doing: we have created this platform to democratize AI. How do you take application developers and data scientists and machine learning people, and get them to collaborate, and deploy AI in 90-day increments? We have this method called “10-10-10,” where in 10 hours we select a use case, in 10 days we build the reference application using their data, and in 10 weeks we take them into production. We do that by helping these groups of people collaborate on a new platform called Cortex that lets you take AI safely and securely into production, at scale.
Backing that up a little bit, there are European efforts to require that if an AI makes a decision about you, you have a right to know why—why it denied you a loan, for instance. So, you’re saying that that is something that isn’t happening now, but it is something that’s possible.
Actually, there are some efforts going on right now. DARPA has some initiatives around this notion of XAI, explainable AI. And I know other companies are exploring this, but it’s still a very low-level technology effort. Explainable AI is not coming up at a business-process level, and at an industry level, because the explainability requirements of an AI vary from process to process, and from industry to industry. The explainability requirements for a throat cancer specialist talking about why he recommended a treatment are different from the explainability requirements for an investment advice manager in wealth management, who says, “Here’s the portfolio I recommended to you with our AI systems.” So, explainability exists at two levels. It exists at a horizontal level as a technology, and it exists at an industry-optimized level, and that’s why I believe AI has to be verticalized and industry-optimized for it to really take off.
You think that’s a valid thing to ask of an AI system?
I think it’s a requirement.
But if you ask a Google person, “I rank number three for this search. Somebody else ranks number four. Why am I three and they’re four?” They’d be like, “I don’t know. There are six thousand different things going on.”
Exactly. Yeah.
So wouldn’t an explainability requirement impede the development of the technology?
Or, it can create a new class of leaders who know how to crack that nut. That’s the basis on which we founded CognitiveScale. It’s one of the six requirements that we’ve talked about in creating enterprise-grade AI. One of the big things—and I learned this while we were doing Watson—was how do you build AI systems that you can trust, as a human being? Explainability is one of them. Another one is recommendations with reasons. When your AI gives you an insight, can it also give you evidence to support, “Why I’m suggesting this as the best course of action for you”? That builds trust in the AI, and that’s when the human being can take action. Evidence and explainability are two of those dimensions that are requirements of enterprise-grade AI, and for AI to be successful at large.
You said there are seven thousand people who understand that. Assuming it’s true, is that a function of how difficult it is, or how new it is?
I think it’s a function of how different a skill set it is that we’re trying to bring into the enterprise. It is also how difficult it is. It’s like the Web; I keep going back to the Internet. We are like where the Internet was in 1997. There were probably, at that time, only a few thousand people who knew how to develop HTML-based applications or web pages. AI today is where the Internet was in 1996 and 1997, when people were building web pages by hand. That’s far different from building a web application, which is connecting a series of these web pages and orchestrating them into a business process to drive an outcome. And that’s far different again from optimizing that process for an industry, and managing it with the requirements of explainability, governance, and scalability. There is a lot of innovation around enterprise AI that is yet to come about, and we have not even scratched the surface yet.
When the Web came out in ’97, people rushed to have a web department in their company. Are we there, are we making AI departments and is that, like, not the way to do it?
Absolutely. I won’t say it’s not the way to do it. I’ll say it’s a required first step, to really understand and learn. Not only just AI, even blockchain—CognitiveScale calls it “blockchain with a brain.” I think that’s the big transformation, which has yet to happen, that’s on the horizon in the next three to four years—where you start building self-learning and self-assuring processes. Coming back to the Web analogy, that was the first step of three or four in making a business become an e-business. Twenty-five years ago, when the Web came about, everyone became an e-business, every process became “webified.” Now, with AI, everyone will become an i-business, or a c-business—a cognitive business—and everyone is going to get “cognitized.” Every process is going to get cognitized. Every process will learn from new data, and new interactions.
The steps they will go through are not unlike what they went through with the Web. Initially, they had a group of people building web apps, and after a while, in 1998, the CEO said, “I’ve spent half a million dollars, and all I have is a digital brochure on the website. What has it done for my business?” That is exactly the stage we are at. Then, someone else came up and said, “Hey, I can connect a shopping cart to this particular set of web pages. I can put a payment system around it. I can create an e-commerce system out of it. And I have this open-source thing called JBoss that you can build off of.” That’s kind of similar to what Google TensorFlow is doing today for AI. Then there are next-generation companies like Siebel and Salesforce that came in and said, “I can build for you a commercial, web-based CRM system.” Similarly, that’s what CognitiveScale does. We are building the next-generation intelligent CRM system, or intelligent HRM system, that lets you get value out of these systems in a reliable and scalable manner. So it’s sort of the same progression they’re going to go through with AI that we went through with the Web. And there’s still a tremendous amount of innovation and new market leadership to come. I believe there will be a new hundred-billion-dollar AI company formed in the next seven to ten years.
What’s the timescale on AI going to be? Is it going to be faster or slower than the Web was?
I think it’ll be faster, for multiple reasons. I gave a little TED Talk on this notion of a superconvergence of technologies. When the Web came about, we were shifting from just one technology to another—we moved from client-server to the Web. Right now, you’ve got these super six technologies converging that will make AI adoption much faster: cloud, mobile, social, big data, blockchain, and analytics. All of these are coming together at a rate and pace that enables compute and access at a scale that was never possible before, and you combine that with the ability for a business to get dramatically disrupted.
One of the biggest reasons that AI is different from the Web is that those web systems were rules-based. They did not geometrically learn and improve. The concern and the worry that the CEOs and boards have this time around is that—unlike a web-based system—an AI-based system improves with time, and learns with time, so either I’m going to get dramatically ahead of the competition, or I’m going to be dramatically left behind. It’s what some people call “the Uber-ification” of businesses. There is this threat, and an opportunity to use AI as a great transformation and accelerator for their business model. That’s where this becomes an incredibly exciting technology, riding on the back of the superconvergence that we have.
If a CEO is listening, and they hear that, and they say, “That sounds plausible. What is my first step?”
I think there are three steps. The first step is to educate yourself, and your leadership team, on the business possibilities of AI—AI-powered business transformation, not the technology possibilities of AI. So, one step is just education; educate yourself. Second is to start experimenting. Experiment by deploying 90-day projects that cost a few hundred thousand dollars, not a two-year project with multiple millions of dollars put into it, so you can really start understanding the possibilities. It also lets you start cutting through the vendor hype about what is product and what is PowerPoint. The narrative for AI, unfortunately, today is being written by either Hollywood or by glue-sniffing marketers from large companies, so the 90-day projects will help you cut through it. So, first is to educate, second is to experiment, and third is to enable. Enable your workforce to really start having the skill sets and the governance and the processes, and enable an ecosystem, to really build out the right set of partners—with technology, data, and skills—to start cognitizing your business.
You know, AI has always kind of been benchmarked against games, and what games it can beat people at. And that’s, I assume, because games are these closed environments with fixed rules. Is that the way an enterprise should go about looking for candidate projects, to look for things that look like games? I have a stack of resumes, I have a bunch of employees who got great performance reviews, I have a bunch of employees that didn’t. Which ones match?
I think that’s the wrong metaphor to use. I think the way for a business to think about AI is in the context of three things: their customers, their employees, and their business processes. They have to think about, “How can I use AI in a way that my customer experience is transformed? That every customer feels very individualized, and personalized, in terms of how I’m engaging them?” So, that’s one: customer experiences that are highly personalized and highly contextualized. Second is employee expertise. “How do I augment the experience and expertise of my employees such that every employee becomes my smartest employee?” This is the Iron Man J.A.R.V.I.S. suit. It’s, “How do I upskill my employees to be the smartest at making decisions, to be the smartest at handling exceptions?” The third thing is business processes. “How do I implement business processes that are constantly learning on their own, from new data and from new customer interactions?” I think if I were the CEO of a business, I would look at it from those three vectors and then implement projects in 90-day increments to learn about what’s possible across those three dimensions.
Talk a minute about CognitiveScale. How does it fit into that mix?
CognitiveScale was founded by a group of executives who were part of IBM Watson, so it was me and the guy who ran Watson Labs. We ran it for the first three years, and one thing we immediately realized was how powerful and transformative this technology is. We came away with three things. First, we realized that for AI to be really successful, it has to be verticalized; it has to really be optimized to an industry. Number two is that the power of AI is not in a human being asking a question of an AI, but in the AI telling the human being what questions to ask and what information to look for. We call it “known unknowns” versus “unknown unknowns.” Today, why is it that I have to ask an Alexa? Why doesn’t Alexa tell me when I wake up, “Hey, while you were sleeping, Brexit happened. And—” if I’m an investment adviser, “—here are the seventeen customers you should call today and take them through the implications, because they’re probably panicking.” It’s a system which is the opposite of BI. BI is a known unknown—I know I don’t know something, therefore I run a query. An AI is an unknown unknown, which means it’s tapping me on the shoulder and saying, “You ought to know this,” or, “You ought to do this.” So, that was the second thesis. One is verticalize, the second is unknown unknowns, and the third is quick value in 90-day increments, delivered using the method we call “10-10-10,” where we can stand up little AIs quickly.
The company got started about three-and-a-half years ago, and the mission is to create exponential business outcomes in healthcare, financial services, telecom, and media. The company has done incredibly well. We have investments from Microsoft, Intel, IBM, and Norwest—we’ve raised over $50 million. There are offices in Austin, New York, London, and India. And it’s a who’s who: there are over thirty customers who are deploying this, and now scaling it as an enterprise-wide initiative. It’s, again, built on this whole hypothesis of driving exponential business outcomes, not driving science projects with AI.
CognitiveScale is an Austin-based company, Gigaom is an Austin-based company, and there’s a lot of AI activity in Austin. How did that come about, and is Austin an AI hub?
Absolutely, that’s one of the exciting things I’m working on. One of my roles is Executive Chairman of CognitiveScale. Another of my roles is that I have a hundred-million-dollar seed fund that focuses on investing in vertical AI companies. And the third thing, which we just announced last year, is an initiative called AI Global—out of Austin—whose focus is on fostering the deployment of responsible AI.
I believe the East Coast and the West Coast will have their own technology innovations in AI. AI will be bigger than the Internet was. AI will be at the scale of what electricity was. Everything we know around us—from our chairs to our lightbulbs and our glasses—is going to have elements of AI woven into it over the next ten years. And I believe one of the opportunities that Austin has—and that’s why we founded AI Global in Austin—is to help businesses implement AI in a responsible way, so that it creates good for the business in an ethical and responsible manner.
Part of the ethical and responsible use of AI involves bringing a community of people together in Austin, and having Austin be known as the place to go for designing responsible AI systems. We have the UT Law school working with us, the UT Design school, the UT Business school, the UT IT school—all of them are working together as one. We have the mayor’s office and the city working together extensively. We also have some local companies like USAA, which is coming in as a founding member of this. What we are doing now is helping companies that come to us get a prescription on how to design, deploy, and manage responsible AI systems. And I think there are tremendous opportunities, like you and I have talked about, for Gigaom and AI Global to start doing things together to foster the implementation of responsible AI systems.
You may have heard that IBM Watson beat Ken Jennings at Jeopardy. Well, he gave a TED Talk about that, and he said there was a graph that showed Watson’s progress as it got better; every week they would send him an update, and Watson’s line would be closer to his. He said he would look at it with dread. He said, “That’s really what AI is. It’s not the Terminator coming for you. It’s the one thing you do great, and it just gets better and better and better and better at it.” And you talked about Hollywood driving the narrative of AI, but one of the narratives is AI’s effect on jobs, and there’s a lot of disagreement about it. Some people believe it’s going to eat a bunch of low-skill work, and we will have permanent unemployment and it will be like the Depression, and all of that, while some think that it’s actually going to create a bunch of jobs; that, just like any other transformative technology, it’s going to raise productivity, which is how we raise wages. So which of those narratives, or a different one, do you follow?
And there’s a third group that says that AI could be our last big innovation, and it’s going to wipe us out as a species. I think the first two, in fact all three, have elements of truth to them.
So it will wipe us out as a civilization?
If we don’t make the right decisions. I’m hearing things like autonomous warfare, which scares the daylights out of me.
Let’s take all three. In terms of AI dislocating jobs, I think every major technology—from the steam engine to the tractor to semiconductors—has always dislocated jobs, and AI will be no different. There’s a projection that by the year 2020, eighteen million jobs will be dislocated by AI. These are routine tasks that can be automated by a machine.
Hold on just a second, that’s twenty-seven months from now.
Yeah, eighteen million jobs.
Who would say that?
It’s a report that was done by, I believe, the World Economic Forum. But here’s the thing: I think that’s quite true. But I don’t worry about that as much as I focus on the 1.3 billion jobs whose roles AI will uplift. That’s why I look at augmentation as a bigger opportunity than replacement of human beings. Yes, AI is going to remove and kill some jobs, but there is a much, much larger opportunity in using AI to augment and upskill your employees, just like the Web did. The Web gave you reach and access and connection at a scale that was never possible before—just like the telephone did before that, and the telegraph did before that. And I think AI is going to give us a tremendous amount of opportunity for creating—someone called it “new collar” jobs, I think it was IBM—not just blue-collar or white-collar, but “new collar” jobs. I do believe in that; I do believe there is an entire range of jobs that AI will bring about. That’s one.
The second narrative was around AI being the last big innovation that we will make. And I think that is absolutely a possibility. If you look at the Internet when it came about, the top two applications in its early days were gambling and pornography. Then we started putting the Internet to work for the betterment of businesses and people, and we made choices that made us use the Internet for the greater good. I think the same thing is going to happen with AI. Today, AI is being used for everything from contesting parking tickets, to Starbucks using it for coffee, to scalping concert tickets. But there are going to be decisions we have to make as a society on how we use AI responsibly. I’ve heard the whole Elon Musk and Zuckerberg argument; I believe both of them are right. I think it all comes down to the choices we make as a society, and the way we scale our workforce on using AI as the next competitive advantage.
Now, the big unknown in all of this is what a bad actor, or a nation-state, can do using AI. The part that I still don’t have a full answer to, but that worries the hell out of me, is this notion of autonomous warfare, where people think that by using AI they can actually restrict the damage and start taking out targets in a very finite way. But the problem is, there’s so much that is unknown about an AI. An AI today is not trustworthy. You put that into things that can be weapons of mass destruction, and if something goes wrong—because the technology is still maturing—you’re talking about creating massive destruction at a scale that we’ve never seen before. So, I would say all three elements of the narrative: removing jobs, creating new jobs, creating an existential threat to us as a race—all of those are possibilities going forward. The one I’m most excited about is how it’s going to extend and enhance our jobs.
Let’s come back to jobs in just a minute, but you brought up warfare. First of all, there appear to be eighteen countries working to make AI-based weapons systems. And their arguments are twofold. One argument is, “There are seventeen other countries working to develop it; if I don’t…”
Someone else will.
And second, right now, the military drops a bomb and it blows up everything… Let’s look at a landmine. A landmine isn’t AI. It will blow up anything over forty pounds. So somebody comes and says, “I can make an AI landmine that sniffs for gunpowder, and it will only blow up somebody who’s carrying a weapon.” Then somebody else says, “I can make one that actually scans the person and looks for drab.” And so forth. If you take warfare as something that is a reality of life, why wouldn’t you want systems that are more discriminating?
That’s a great question, and I believe that will absolutely happen, and probably needs to happen, but over a period of time—maybe that’s five or ten years away. We are in the most dangerous time right now, where the hype about AI has far exceeded the reality of AI. These AIs are extremely unstable systems today. Like I said before, they are not evidence-based, there is no kill switch in an AI, there is no explainability; there is no performance that you can really verify. Take your example of something that can sniff gunpowder and will explode. What if I store that mine in a gun depot in the middle of a city, and it sniffs the gunpowder from the other weapons there, and it blows itself up? Today, we don’t have the visibility and control at a fine-grained level with AI to warrant an application of it at that scale.
My view is that it will be a prerogative for every nation-state to get on it—you saw Putin talk about it, saying, “He who controls AI will control the future world.” There is no putting the genie back in the bottle. And just like we did with the rules of war, and just like we did with nuclear warfare, there will be new Geneva Convention-like rules that we will have to come up with as a society on how and where these responsible AI systems have to be deployed, and managed, and measured. So, just like we have done that for chemical warfare, I think there will be new rules that will come up for AI-based warfare.
But the trick with it is… A nuclear event is a binary thing; it either happened or it didn’t. A chemical weapon, there is a list of chemicals; that’s a binary thing too. AI isn’t, though. You could say your dog-food dish that refills automatically when it’s empty is AI. How would you even phrase the law, assuming people followed it? How would you phrase it in plain English?
In a very simple way. You’ve heard of Isaac Asimov’s Three Laws of Robotics in I, Robot. I think as a society we will have to—in fact, I’m doing a conference on this next year, north of London, around how to use AI and drones in warfare in a responsible way—come up with a collective mindset and the will among nations to propose something like this. And I think the first big event has not happened yet, though you could argue that the “fake news” event was one of the big AI events that has happened, one that potentially altered the direction of a presidential race. People are worried about hacking; I’m more worried about attacks whose source you can’t trace. And I think that’s work to be done, going forward.
There was a weapons system that did make autonomous kill decisions, and the militaries that were evaluating it said, “We need it to have a human in the middle.” So they added that, but of course you can turn that off. It’s almost intractable to define it in such a way.
It sounds like you’re in favor of AI weapons, as long as they’re not buggy.
I’m not in favor of AI weapons. In general, as a person, I’m anti-war. But it’s one of those human frailties and human limitations that war is a necessary—as ugly as it is—part of our lives. I think people and countries will adopt AI and they will start using it for warfare. What is needed, I think, is a new set of agreements and a new set of principles on how they go about using it, much like they do with chemical weapons and nuclear warfare. I don’t think it’s something we can control. What we can do is regulate and manage and enforce it.
So, moving past warfare, do you believe Putin’s statement that he who controls AI in the future will control the world?
Absolutely. I think that’s a given.
Back to jobs for a moment. Working through the examples you gave, it is true that steam and electricity and mechanization destroyed jobs, but what they didn’t do is cause unemployment. Unemployment in this country, in the US at least, has been between five and ten percent for two hundred years, other than the Depression, which wasn’t technology’s fault. So, what has happened is, yes, we put all of the elevator operators out of business when we invented the button and you no longer had to have a person, but we never saw a spike in unemployment. Is that what’s going to happen? Because if we really lost eighteen million jobs in the next twenty-seven months, that would just be… That’s massive.
No, but here’s the thing, that eighteen million number is a global number.
Okay, that’s a lot better then. Fair enough, then.
And you have to put this number in the context of the total workforce. Today, there are somewhere between seven hundred million and 1.3 billion workers employed globally, and eighteen million is a fraction of that. That’s number one. Number two, I believe there is a much bigger potential in using AI as a muse, and AI as a partner, to create a whole new class of jobs, rather than being afraid of the machine replacing jobs. Machines have always replaced jobs, and they will continue to do that. But I believe—and this is where I get worried about our education system; one of the first things we did with Watson was start a university program to begin skilling people with the next-generation skill sets that are needed to deploy and manage AI systems—that over the next decade, or for that matter over the next five decades, there is a whole new class of human creativity and human potential that can and will be unleashed through AI by creating whole new types of jobs.
If you look at CognitiveScale, we’re somewhere around one hundred and sixty people today. Half of those jobs did not exist four years ago. And many of the people who would never have considered a job in a tech company are employed by CognitiveScale today. We have linguists who are joining a software company because we have made their job into computational linguistics, where they’re taking what they knew of linguistics, combining it with a machine, and creating a whole new class of applications and systems. We have people who are creating whole new types of testing mechanisms for AI. These testers never existed before. We have people who are now designing and composing intelligent agents using AI, with skills that they are blending from data science to application development to machine learning. These are new skills that have come about. Not to mention salespeople, and business strategists, who are coming up with new applications of this. I tend to believe that this is one of the most exciting times—from the point of view of economic growth and jobs—that we, and every country in this world, have in front of us. It all depends on how we commercialize it. One of the great things we have going for the US is a very rich and vibrant venture investment community and a very rich and vibrant stock market that values innovation, not just revenues and profits. As long as we have those, and as long as we have patent coverage and good enforcement of law, I see a very good future for this country.
At the dawn of the Industrial Revolution, there was a debate in this country, in the United States, about the value of post-literacy education. Think about that. Why would most people, who are just going to be farmers, need to go to school after they learn how to read? And then along came some people who said that the jobs of the future, i.e. Industrial Revolution jobs, will require more education. So the US was the first country in the world to guarantee every single person could go to high school, all the way through. So, Mark Cuban said, if he were coming up now, he would study philosophy. He’s the one who said, “The first trillionaires are going to be AI people.” So he’s bullish on this, he said, “I would study philosophy because that’s what you need to know.” If you were to advise young people, what should they study today to be relevant and employable in the future?
I think that’s a great question. I would study three different things. One, I would study linguistics, literature—the soft sciences—things around how decisions are made and how the human mind works, cognitive sciences, things like that. That’s one area. The second thing I would study is business models, and how businesses are built and designed and scaled. And the third thing I would study is technology, to really understand the art of the possible with these systems. It’s at the intersection of these three things: the creative aspects of design and literature and philosophy around how the human mind works; the commercial aspects of what to make and how to build a successful business model; and the technological underpinnings of how to power these business models. I would be focusing on the intersection of those three skills, all embraced under the umbrella of entrepreneurship. I’m very passionate about entrepreneurship. Entrepreneurs are the ones who will really lead this country forward, both in big companies and small.
You and I have spoken on the topic of an artificial general intelligence, and you said it was forty or fifty years away, that that’s just a number, and that it might require quantum computers. You mentioned Elon and his fear of the existential threat. He believes, evidently, that we’re very close to an AGI, and that’s where the fear is. That’s what he’s concerned about. That’s what Hawking is concerned about. You said, “I agree with the concern; if we screw up, it’s an existential threat.” How do you reconcile that with, “I don’t think we’ll have an AGI for forty years”?
Because I think you don’t need an AGI to create an existential threat. These are two different dimensions. You can create an existential threat by just building a highly unreliable autonomous weapons system that doesn’t know anything about general intelligence. It only knows how to seek out and kill. And that, in the wrong hands, could really be the existential threat. You could create a virus on the Internet that could bring down all public utilities and emergency systems, without it having to know anything about general intelligence. If that somehow is released without proper testing or controls, you could bring down economies and societies. You could have devastation, unfortunately, at the scale of what Puerto Rico is now going through, without a hurricane going through it; it could be an AI-powered disaster like that. I think these are the kinds of outcomes we have to be aware of. These are the kinds of outcomes we have to start putting rules and guidelines and enforcement around. And that area, along with skills, are the two where I think we are lagging behind significantly today.
The OpenAI initiative is an effort to develop AI in the open, so that no single player controls it, in their case an AGI, but all along the way. Do you think that is a good initiative?
Yeah, absolutely. I think with OpenAI, we probably need a hundred other initiatives like that, each focusing on different aspects of AI. Like what we’re doing at AI Austin, and AI Global. We are focusing on the ethical use of AI. It’s one thing to have a self-driving car; it’s another thing to have a self-driving missile. How do you take a self-driving car that ran over four people and cross-examine it in a witness box? How is that AI explainable? Who’s responsible for it? So there is a whole new set of ethics and laws that have to be considered when putting this into intelligent products. Almost like an Underwriters Laboratories equivalent for AI that needs to be woven into every product and every process. Those are the things that our governments need to become aware of, and our regulators need to get savvy about, and start implementing.
There is one theory that says that if it’s going to rely on government, we are all in bad shape, because the science will develop faster than the legislative ability to respond to it. Do you have a solution for that?
I think there’s a lot of truth to that, particularly with what we’re seeing recently in government around technology. I believe, again, the results of what we become and what we use AI for will be determined by what we do as private citizens, what we do as business leaders, and what we do as philanthropists. One of the beautiful things about America is what philanthropists like Gates and Buffett and others are doing—they’ve got more assets than many countries now, and they’re putting them to work responsibly; like what Cuban’s talking about. So, I do have hope in the great American “heart,” if you will, for innovation, but also for responsible application. And I do believe that for all of us who are in a position to educate and manage these things, it’s our duty to spread the word, to lean in, and to start helping steer this AI towards responsible applications.
Let’s go through your “What AI Isn’t” list, your five things. One of them, you said, is “An AI is not natural language processing,” and obviously that is true. Do you think, though, that the Turing test has any value? If we make a machine that can pass it, is that a benchmark? Will we have done something extraordinary in that case?
When I was running Watson, I used to believe it had value, but I don’t believe that as much anymore. I think it has limited value in applicability, because of two things. One is, in certain processes where you’re replacing the human brain with a machine, you absolutely need to have some sort of a test to prove or not prove it. But the more exciting part is not the replacement of automated or repetitive human functions; the more exciting part is things that the human brain hasn’t thought of, or hasn’t done. I’ll give you an example: at CognitiveScale we are working with a very large media company, and we were analyzing Super Bowl TV ads by letting an AI read the video ad, to find out exactly what kinds of creative—is it kids or puppies or celebrities—and at what time, would have the most impact on creating the best TV ad. And what was fascinating was that we just let the AI run at it; we didn’t tell it what to look for. There was no Turing test to say, “This is good or bad.” And the stuff the AI came back with was ten or twelve levels deep in terms of the connections it found, things that a human brain normally would never have thought about. And we still can’t describe why there is a connection.
It’s stuff like that—where the absolute reference is not the human brain; this is the “unknown unknown” part I talked about—that shows that with AI you can emulate human cognition but, as importantly, you can extend human cognition. The extension part, coming up with patterns or insights and decisions that the human brain may not have come up with, I think that’s the exciting part of AI. We find, when we do projects with customers, that there are patterns that we can’t explain as human beings, why they hold, but there’s a strong correlation; it’s eighteen levels deep and buried in there, but it’s a strong correlation. So, I put this into two buckets: first is low-level repetitive tasks that AI can replace; and second is a whole new class of learning that extends human cognition—this is the unsupervised learning bit—where you start putting a human in the loop to really figure out and learn new ways of doing business. And I think those are both aspects that we need to be cognizant of, and not just try to emulate the current human brain, which has, in many cases, proven to be very inefficient at making good decisions.
You have an enormous amount of optimism about it. You’re probably the most optimistic person that I’ve spoken to about how far we can get without a general intelligence. But, of course, you keep using words like “existential threat,” you keep identifying concepts like a virus that takes down the electrical grid, warfare, and all of that; you even used “rogue AI” in the context of a business. In that latter case, how would a rogue AI destroy a business? And you can’t legislate your way around that, right? So, give me an example of a rogue AI in an enterprise scenario.
There are so many of them. One of them actually happened when we recently met with a large financial institution. We were sitting and having a meeting, and suddenly we found out that that particular company was going through a massive disruption of business operations, because all of their x-number of data centers were shutting down every 20 minutes or so and rebooting themselves; all over the world, their data centers were shutting down and rebooting. They were panicking because this was in the middle of a business day, there were billions of dollars being transacted, and they had no idea why these data centers were doing what they were doing. A few hours into it, they found out that someone had written a security bot the month before and launched it into their cloud system, and for some reason, that agent—that AI—felt that it was a good idea to start shutting these systems down every 20 minutes and rebooting them. They finally found it, but it was a simple example of how there was no visibility into, or governance of, that particular AI that was introduced. That’s one of the reasons we talk about the ability to have a framework for managing visibility and control of these AIs.
The other one could be—and this has not happened yet, but it is one of the threats—underwriting. An insurance company today uses technology a lot to start underwriting risks. And if, for whatever reason, you have an AI system that sees correlations and patterns but has not been trained well enough to really understand risk, you could pretty much have the entire business wiped out. If you depend on the AI too much, without explainability and trust, it could suggest you take on risks that put your business at an existential risk.
I can go on and on, and I can use examples around cancer, around diabetes, around anything to do with commerce where AI is going to be put to use. I believe as we move forward with AI, the two phrases that are going to become incredibly important for enterprises are “lifecycle management of an AI,” and “responsible AI.” And I think that’s where there’s a tremendous amount of opportunity. That’s why I’m excited about what we’re doing at CognitiveScale to enable those systems.
Two final questions. So, with those scenarios, give me the other side; give us some success stories you’ve personally seen. They can be CognitiveScale or other ones that you’ve seen have a really positive impact on a business.
I think there are many of them. I’ll pick an area in retail, something as simple as retail, where through an AI we were able to demonstrate the difference from a rules-based system. This particular large retailer used to have a mobile app where they presented to you a shirt, and trousers, and some accessories, and it was like a Tinder or “hot or not” type of game. The rules-based system, on average, was getting less than ten percent conversion on what people said they liked. Those were all systems that were not learning. Then we put an AI behind it, and that AI could understand that a particular dress was an off-shoulder dress, and it was a teal color, and that it pairs with an open-toe shoe in shiny leather. As the customers started engaging with it, the AI started personalizing the output, and we demonstrated a twenty-four percent conversion, compared to single-digit conversion, in a matter of seven months. And here’s the beautiful part: every month the AI is getting smarter and smarter, and every percentage point of conversion equals tens of millions of dollars in top-line growth. So that’s one example of a digital brain, a cognitive digital brain, driving shopper engagement and shopper conversion.
The other thing we saw was in the case of pediatric asthma: how an AI can help nurses do a much better job of preventing children from having an asthma attack, because the AI is able to read a tweet from pollen.com that says there will be a ragweed outbreak on Thursday morning. The AI understands the zip code it’s talking about, and that Thursday is four days out, and that there are seventeen children at risk from ragweed or similar allergies; and it starts tapping the nurse on the shoulder and saying, “There is an ‘unknown unknown’ going on here, which is that four days from now there will be a ragweed outbreak; you’d better get proactive about it and start addressing the kids.” So, there’s an example in healthcare.
There are examples in wealth management and financial services around compliance, and how we’re using AI to improve compliance. There are examples of how we are changing the dynamics of foreign exchange trading, and how a trader does equities and derivatives trading, with the AI listening in through a chat session and guiding them as to what to do. The examples are many, and most of them are written up in case studies, but this is just the beginning. I think this is going to be one of the most exciting innovations that will transform the landscape of businesses over the next five to seven years.
You’re totally right about the product recommendation. I was on Amazon and I bought something, it was a book or something, and it said, “Do you want these salt-and-pepper-shaker robots that you wind up and they walk across the table?” And I was like, “Yes, I do!” But it had nothing to do with the thing that I was buying.
Final question, you’ve talked about Hollywood setting the narrative for AI. You’ve mentioned I, Robot in passing. Are you a consumer of science fiction, and, if so, what vision of the future—book or whatever—do you think, “Aha, that’s really cool, that could happen,” or what have you?
Well, I think probably the closest vision I would have is Gene Roddenberry’s, and Star Trek. I think that’s pretty much a great example of data helping a human being make a better decision—a flight deck, a holodeck, that is helping you steer. It’s still the human who’s being augmented. It’s still the human making the decisions around empathy, courage, and ethics. And I think that’s the world that AI is going to take us to: the world of augmented intelligence, where we are enabled to do much bigger and greater things, and not just a world of artificial intelligence where all our jobs are removed and we are nothing but plastic blobs sitting in a chair.
Roddenberry said that in the twenty-third century there will be no hunger, and there will be no greed, and all the children will know how to read. Do you believe that?
If I had a chance to live to be twice or three times my age, that would be what I’d come in to do. After CognitiveScale, that is going to be my mission through my foundation. Most of my money I’ve donated to my foundation, and it will be focused on AI for good; around addressing problems of education, around addressing problems of environment, and around addressing problems of conflict.
I do believe that’s the most exciting frontier where AI will be applied. And there will be a lot of mishaps along the way, but I do believe, as a race and as a humanity, if we make the right decisions, that is the endpoint that we will reach. I don’t know if it’s 2300, but, certainly, it’s something that I think we will get to.
Thank you for a fascinating hour.
Thank you very much.
It was really extraordinary and I appreciate the time.
Thanks, Byron.
Byron explores issues around artificial intelligence and conscious computers in his upcoming book The Fourth Age, to be published in April by Atria, an imprint of Simon & Schuster. Pre-order a copy here.