Here’s what you missed at Gigaom’s ‘Future of AI’ meetup

Last week, Gigaom held a sold-out meetup in San Francisco featuring some of the biggest names in artificial intelligence and deep learning. Today, we’re happy to share videos of the talks so those who missed it live can still absorb the 2.5 hours of information about one of the hottest fields in tech right now.

All of the talks are embedded below and as a playlist, titled “Future of AI,” on the Gigaom YouTube channel.

Andrew Ng on deep learning and Baidu’s big plans

Andrew Ng — chief scientist at Baidu, co-founder of Coursera and associate professor at Stanford — kicked off the night with a talk about why deep learning is so hot right now and how Baidu is using it to power some very interesting ideas. Sure, there’s image and voice search, and the new Baidu Eye wearable computer, but Ng also discusses a project dubbed the “Baidu Cool Box” that’s like an all-powerful information appliance.


How Google made speech recognition a reality

Johan Schalkwyk, a principal staff engineer in Google’s speech recognition group, talks about how the company came to power so many of its products with voice commands. While acknowledging the big role deep learning models played in making it possible, he also notes some of the lingering shortcomings of current approaches and touches on the relationship between speech recognition and natural language processing.


Putting the AI renaissance in perspective

John Platt, a distinguished scientist and manager of the machine learning group at Microsoft Research, discusses decades worth of artificial intelligence research and explains how and why today’s approaches — which are based on earlier work — are finally proving so effective. Platt also discusses how Microsoft is commercializing these techniques in everything from cloud services to Skype features.


Moving from savant systems to generally intelligent ones

Oren Etzioni, executive director of the Allen Institute for Artificial Intelligence (and formerly founder of Farecast), takes a contrarian view of all the deep learning hype. Essentially, he argues, while systems that are better than ever at classifying images or words are great, they’re still not “intelligent.” He describes work underway to build systems that can truly understand content, including one capable of passing fourth-grade short-answer exams.


Getting started with deep learning needn’t be impossible, if it’s even necessary

This panel discussion was moderated by Jeremy Howard of Enlitic (Gigaom readers might know him best as chief scientist at Kaggle) and includes panelists Adam Gibson of Skymind, Bobby Jaros of Yahoo Labs and Ben Recht of the University of California, Berkeley/AMPLab. They discuss, among other things, why actually using some of today’s deep learning models doesn’t require a Ph.D. in math or computer science, and also how to figure out whether simpler, better-understood approaches to machine learning might be best depending on the problem.