Voices in AI – Episode 54: A Conversation with Ahmad Abdulkader

About this Episode

Episode 54 of Voices in AI features host Byron Reese and Ahmad Abdulkader talking about the brain, learning, and education as well as privacy and AI policy. Ahmad Abdulkader is the CTO of Voicera. Before that he was the technical lead of Facebook’s DeepText, an AI text understanding engine. Prior to that he developed OCR engines, machine learning systems, and computer vision systems at Google.

Visit to listen to this one-hour podcast or read the full transcript.

Transcript Excerpt

Byron Reese: This is Voices in AI brought to you by GigaOm. I am Byron Reese. Today our guest is Ahmad Abdulkader. He is the CTO of Voicera. Before that he was the technical lead of Facebook's applied AI efforts, producing DeepText, a text understanding engine. Prior to that he worked at Google building OCR engines, machine learning systems, and computer vision systems. He holds a Bachelor of Science in Electrical Engineering from Cairo University and a Master's in Computer Science from the University of Washington. Welcome to the show.

Ahmad Abdulkader: Thank you, thanks Byron, thanks for having me.

I always like to start out by just asking people to define artificial intelligence because I have never had two people define it the same way before.

Yeah, I can imagine. I am not aware of a formal definition. To me, AI is the ability of machines to perform cognitive tasks that humans can do, or rather learn to do, and eventually to do them in a seamless way.

Is the calculator therefore artificial intelligence?

No, the calculator is not performing a cognitive task. By a cognitive task I mean vision, speech understanding, understanding text, and such. In fact, the brain is actually lousy at multiplying two six-digit numbers, which is what the calculator is good at. But the calculator is really bad at doing a cognitive task.

I see, well actually, that is a really interesting definition because you’re defining it not by some kind of an abstract notion of what it means to be intelligent, but you’ve got a really kind of narrow set of skills that once something can do those, it’s an AI. Do I understand you correctly?

Right, right. I have a sort of yardstick: a set of tasks a human can do in a seamless, easy way without even knowing how to do it, and we want machines to mimic that to some degree. It's a very specific set of tasks, some of them more important than others, and so far we haven't been able to build machines that get even close to human beings on these tasks.

Help me understand how you are seeing the world that way. I don't want to get caught up on definitions, but this is really interesting.

So, if a computer couldn't read, couldn't recognize objects, and couldn't do all those things you just said, but let's say it was creative and could write novels, is that an AI?

First of all, this is hypothetical, so I wouldn't know. I wouldn't call it AI. It goes back to the definition of intelligence: there's the natural intelligence that humans exhibit, and then there is the artificial intelligence that machines attempt to exhibit. The most important of these tasks, the ones we actually use almost every second of the day, are vision and speech or language understanding, and creativity is one of them too. So if a machine were to do that, I would say it performed a subset of AI, but it hasn't exhibited the behavior to show that it's good at the most important ones, being vision, speech, and such.

When you say vision and speech are the most important ones, nobody's ever really looked at the problem this way, so I really want to understand how you're saying that, because it would seem to me those aren't really the most important by a long shot. I mean, if I had an AI that could diagnose any disease, tell us how to generate unlimited energy, fix all the environmental woes, tell us how to do faster-than-light travel, feed the hungry, alleviate poverty, all of those things, but it couldn't tell a tuna fish from a Land Rover, I would say that's pretty important. I would take that, hands down, over what you're calling the more important stuff.

I think "important" is an overloaded word. I think you're talking about utility, right? You're imagining a hypothetical situation where we're able to build computers that will do diagnosis or alleviate poverty and stuff like that. These would be way more useful for us, or that's what we think, or that's the hypothesis. But to do the tasks you're talking about most probably implies that you have solved, to a great degree, vision. It's hard to imagine that you would be doing diagnosis without actually solving vision. These are the basic tasks that humans can do, and we see babies or children learn them as they grow up. So perhaps the utility of what you talked about would be much greater, but if you were to define importance as the basic skills that you could build upon, I would say vision would be the most important one, and language understanding perhaps the second most important. I think doing well in these basic cognitive skills would enable us to solve the problems that you're talking about.

Listen to this one-hour episode or read the full transcript at 

Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.