The Future Capability and Impact of AI
published: Aug. 23, 2017, recorded: February 2017, views: 1285
I'll briefly recount my fifty-year experience in AI. It began when I met Marvin Minsky in 1962, at age fourteen, which began a 54-year mentorship that lasted until his passing a year ago. That same month I also met Frank Rosenblatt, then the leader of the nascent connectionist school. He shared an intuition with me that would be proven correct decades after his passing in 1971.
Deep Neural Nets (DNNs) and Long Short-Term Memory (LSTM) techniques, along with the ongoing progression of what I call the "Law of Accelerating Returns" (the exponential growth of the price-performance and capacity of information technologies, a much broader phenomenon than Moore's Law), are fueling a wave of optimism in AI. But DNNs and LSTMs have a limitation characterized by the motto "life begins at a billion examples."
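As a toy illustration of the exponential price-performance growth the Law of Accelerating Returns describes (the doubling period and starting value here are hypothetical, chosen only to show the shape of the curve):

```python
def price_performance(years, doubling_period_years=2.0, start=1.0):
    """Exponential growth: capability per dollar doubles every fixed period."""
    return start * 2 ** (years / doubling_period_years)

# With a hypothetical 2-year doubling time, 20 years yields a 1024x gain.
print(price_performance(20))  # 1024.0
```

The point of the sketch is that linear extrapolation badly underestimates such a curve: the gains in the second decade dwarf those of the first.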
I'll share an alternative model based on self-organizing hierarchies of sequential models and explain why I believe that this is how the human neocortex works, and why this approach has the potential to overcome the apparent limitations of big DNNs.
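A minimal sketch of the hierarchical idea, in which higher-level pattern recognizers consume the outputs of lower-level ones; every name and structure below is hypothetical, intended only to convey the layering, not the talk's actual model:

```python
class SequenceRecognizer:
    """Recognizes one fixed sequence of input symbols and emits its name."""
    def __init__(self, name, pattern):
        self.name = name
        self.pattern = list(pattern)

    def match(self, symbols):
        return self.name if symbols == self.pattern else None

def run_hierarchy(levels, raw_input):
    """Feed each level's recognizer outputs upward as the next level's input."""
    signal = list(raw_input)
    for level in levels:
        signal = [m for r in level if (m := r.match(signal))]
    return signal

# Hypothetical two-level hierarchy: letters -> word -> concept.
level1 = [SequenceRecognizer("apple", "apple")]
level2 = [SequenceRecognizer("FRUIT", ["apple"])]
print(run_hierarchy([level1, level2], "apple"))  # ['FRUIT']
```

In a self-organizing version, the recognizers at each level would be learned rather than hand-wired, which is what distinguishes this family of models from a fixed pipeline.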
I would then like to explore these ideas and get questions from the insightful AAAI audience.