Yann LeCun & John Werner on The Next AI Revolution: Open Source & Risks | IIA Davos 2025

1.26k views · 4,851 words
Imagination in Action
Join Turing Award laureate Yann LeCun—Chief AI Scientist at Meta and Professor at NYU—as he discusse...
Video Transcript:
[Applause] [Music] So, Yann, is this an exciting time in AI? How are you doing?

Yeah, it's super exciting. I feel like I felt when I was 25 or so, and I haven't felt that excited since then. And it's mostly not because of the industry, I mean the industry is really exciting, but mostly because of the research: the prospect that, you know, you think there's been a revolution in AI over the last few years, and there's going to be another one in the next three to five years. That's what I've been working on. It's kind of the next step, and I'm super excited about what might occur in the next few years.

All right, so I've broken this conversation into six parts. The first part, let's build off of that: inspiration and career. What sparked your interest in artificial intelligence, or augmented intelligence, I had someone else say ambient intelligence? I'm a neighbor of where Minsky lived. He's passed away now, but I went up to his house and I asked, why did you call it AI? Because in 1955 he and McCarthy wrote a proposal to the government saying, hey, give us money so we can take these people studying cybernetics up to Dartmouth and think about it. And he said to me, John, I could have called it BI, but I thought AI would sound scary and the government would give us money. So, how did you get into AI? And... yeah, I just deleted the rest of my question, hold on.

Well, I'll give a very long answer, so you have time to search.

So here it is: what sparked your interest in artificial intelligence, and how has your career evolved over the years? Maybe this is to inform some of the young people here thinking about their careers, and maybe you could also contrast how you think careers are going to be different, that the way you did it may be different given the dimension of AI.

Well, it started a very long time ago, and Marvin Minsky had something to do with it.
I was always interested in science, and one of the things I always found fascinating, since I was a kid, was the emergence of intelligence in animals and humans. Then, when I was nine years old, my parents took me to watch 2001: A Space Odyssey, and there was an intelligent computer; I didn't even know the concept was possible.

Turns out the scientific adviser for that movie was Marvin Minsky.

Yeah, yeah, okay, in the mid-60s. And so when I was an undergraduate I also learned that people had worked on machines that could learn. I thought learning was really an essential part of intelligence, maybe because I thought I wasn't smart enough to design intelligent machines from scratch, so AI has to basically emerge by itself, the way natural intelligence does. And I stumbled on an article by another MIT professor, Seymour Papert.

Yes, the Logo turtle.

Right, but before that he had worked with Jean Piaget, the Swiss experimental and developmental psychologist, and he wrote an article in a book that was a debate between Piaget and Chomsky about whether language is innate or learned. And there he was talking about the perceptron, a model that was originally developed in the 50s. I found the idea of learning machines absolutely fascinating. I started digging into the literature, I was maybe in my second or third year of electrical engineering studies, and I got really passionate about it, did independent research projects with various professors on this, and then went on to grad school. And when I was searching that literature I discovered that Minsky and Papert had written a book that basically killed the entire field in the late 60s.

The AI winter.

Well, the machine learning winter, because there was a simultaneous winter in the 70s for both classical logic-based AI and machine-learning-based AI, and that was revived in the 1980s.

So if you had a DeLorean and could go back in time and whisper in their ear as the winter was unfolding, what would you say to them that you think might be profound or timely or interesting?

Well, that their interest when they were young, which was self-organizing learning machines, was the right path, and they abandoned it and basically convinced everyone else to abandon it, and that was a bit of a mistake. I actually discussed this with Marvin Minsky once, but I was a very young student at the time, and he said no, it was good to kill it, because it got us to invent other things that would otherwise not have emerged. And he kind of had a point.
Yeah. So in terms of your earlier work, convolutional neural nets, how did that influence your later research direction?

Okay, so a lot of people think of me in terms of convolutional neural nets. That was in the late 80s at Bell Labs; I actually started when I was a postdoc in Toronto.

Claude Shannon...

Yeah, he wasn't there anymore.

But he rode a unicycle around MIT when he was at MIT, right?

He was already at MIT by then, or retired, actually. And people thought maybe I was interested in character recognition or something. Not at all. I was interested in building intelligent machines, and I worked my way back: you want to build intelligent machines, you're not smart enough to conceive them from scratch, so the machine has to basically build itself through learning. Now, how do you get a machine to learn? People had worked on this in the 60s, but the techniques were very limited, and it was pretty obvious back then that the limiting factor was that you couldn't train a system with more than one layer, basically. So I started looking into how you can train a system with multiple layers, what we call deep learning now, and discovered a rule that we would now call target prop, but which was very similar to back-propagation. That's what got me into this.

Then, once you have a learning algorithm that works, the next question is how you're going to connect those neurons so that they do something useful. And the only data we had access to at the time, in the 80s, was either characters, scanned; you couldn't connect a USB camera to a computer back then, right, it was actually a major undertaking to get images into a computer, and they were mostly black and white. So the only two sets of data available in decent size were character recognition and speech recognition, and speech recognition already had a lot of people working on it. Character recognition for printed text was working fine; handwriting wasn't. So I worked on that just because that's where the data was, and there were some potential applications, which we actually ended up deploying in the mid-90s for reading checks and various other documents.

So this was kind of a great thing, and I think there's maybe a lesson in there: you set yourself a very ambitious, very long-term goal and you work your way back, and then you take the first step, and it turns out the first step usually is useful and has relatively short-term consequences on this path towards AI. Since then I've kept trying to make progress along that route, and I convinced people like Mark Zuckerberg and Mike Schroepfer, the CTO at the time at Meta, that this is the way you should organize research in industry: have a very ambitious goal and then work your way back, and good things are going to come out of it inevitably.
Right. So let me telegraph where I'm going. I want to ask you some questions about AI research and progress, then some questions about challenges and opportunities, then about Meta and FAIR, then about ethics and responsibility, and finally advice and legacy. And you're my favorite Turing Award winner. All right, we have 18 minutes, so I'm going to throw out three quick questions, and you can pass or just say something pithy. What role did your experience with Yoshua Bengio and Geoffrey Hinton play in shaping your research interests? And with the new wave of GenAI, what has changed in your view about AI? So quickly answer those and then we've got to get to the next thing.

Okay. Geoff Hinton, as a mentor, is someone I admire in many ways. When I was starting my PhD he was the person I most wanted to meet in the world, because I knew he had understood this question of multilayer training and hidden units and so on. We met in 1985, and again in 1986, and he offered me a postdoc with him, which I was super happy to do. So I did a postdoc with him in 1987-88 in Toronto. While I was working with him I gave a talk in Montreal, and there was this young master's student who asked really intelligent questions. That was Yoshua Bengio. After that, when I was at Bell Labs, I hired Yoshua as a postdoc first and as a scientist afterwards. So we've been working together, on and off, for almost 40 years now. We have a lot of mutual admiration. There are a few topics on which we really disagree, and one of them is existential risk of AI: I think Geoff is completely wrong about this, I think Yoshua is very largely wrong about that, and we disagree, but we're friends.

Great. New wave of GenAI, just a one-sentence answer: what has changed in your view about AI?

So GenAI has a shelf life of about three years.

Now, okay, everybody is excited about GenAI... who's excited about GenAI? All right, three years, he says.

Okay, what is GenAI?
You train a system to predict what's going to happen next in a sequence; that's basically what it is. It works really well for text, it works for DNA sequences, it works for things that are sequences of discrete symbols, but not if what you want to produce is not discrete, if it's in a continuous world. This paradigm of learning, by the way, is called self-supervised learning: you don't have an input and an output, everything is both an input and an output, and you train the system to predict part of the input from another part of the input.
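To make that concrete, here is a minimal sketch of self-supervised next-token training, added for illustration and not taken from the talk; the toy sizes, the bigram-style model, and the variable names are all assumptions. The point is simply that the "label" is another part of the input, and the model outputs a distribution over a discrete vocabulary.

```python
# Minimal sketch of self-supervised next-token prediction (illustrative only).
# A real system would use a Transformer over the whole prefix; this toy model
# predicts each next token from the current one, which makes it a bigram model.
import torch
import torch.nn as nn

vocab_size, dim = 1000, 64
model = nn.Sequential(
    nn.Embedding(vocab_size, dim),
    nn.Linear(dim, vocab_size),                  # logits = a distribution over discrete symbols
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

tokens = torch.randint(0, vocab_size, (8, 33))   # a batch of token sequences
inputs, targets = tokens[:, :-1], tokens[:, 1:]  # the target is just the shifted input

logits = model(inputs)                           # (batch, time, vocab)
loss = nn.functional.cross_entropy(
    logits.reshape(-1, vocab_size), targets.reshape(-1)
)
loss.backward()
optimizer.step()
```

The same recipe is awkward for video because the cross-entropy over a finite vocabulary has no obvious analogue for continuous, high-dimensional frames, which is the problem he turns to next.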
Now, the idea that you should use this to train a system to understand the world, by training it, for example, to predict video, is very old. People have been working on this for a long time, including me, for the better part of the last 20 years.

Cat videos?

Not just cat videos; you need diversity in the training. But it doesn't work. It works really well for text; it doesn't work for video. And the reason is that you can't predict the next word in a text exactly, but you can predict the distribution of all possible words.
We don't know how to represent this for video; we don't know how to encode distributions over video frames.

Go to my punch line.

Okay, the punch line is that the only way to do this is to get rid of generative models altogether and train systems that learn an abstract representation of video and make predictions in that abstract representation. Of course, this is JEPA, which stands for joint embedding predictive architecture, and that basically will sweep away generative models.
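A heavily simplified sketch of that idea follows, under stated assumptions: the module names, sizes, and the stop-gradient target branch are illustrative choices, not Meta's actual JEPA code. The only point is that prediction and loss live in representation space rather than pixel space.

```python
# Illustrative JEPA-style training step: predict the embedding of the future,
# not the pixels of the future. Sizes and details are made up for the sketch.
import torch
import torch.nn as nn

class TinyJEPA(nn.Module):
    def __init__(self, in_dim=3 * 32 * 32, emb_dim=128):
        super().__init__()
        self.context_encoder = nn.Linear(in_dim, emb_dim)  # encodes observed frames
        self.target_encoder = nn.Linear(in_dim, emb_dim)   # encodes future frames
        self.predictor = nn.Sequential(                    # predicts the future embedding
            nn.Linear(emb_dim, emb_dim), nn.ReLU(), nn.Linear(emb_dim, emb_dim)
        )

    def forward(self, past, future):
        s_past = self.context_encoder(past)
        with torch.no_grad():                              # don't backprop into the target branch
            s_future = self.target_encoder(future)
        prediction = self.predictor(s_past)
        # The loss is computed in abstract representation space, never on pixels.
        return nn.functional.mse_loss(prediction, s_future)

model = TinyJEPA()
past = torch.randn(4, 3 * 32 * 32)    # toy stand-ins for flattened video frames
future = torch.randn(4, 3 * 32 * 32)
loss = model(past, future)
loss.backward()
```

In practice this kind of setup also needs a mechanism to prevent the representations from collapsing to a constant (for example an EMA target encoder or a variance term), which is omitted here for brevity.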
Okay, I'm going to ask a few questions; you decide what you want to answer, if you think it's relevant. All right. What do you consider the most significant breakthrough in AI research over the past decade? That's one. What are the key challenges in scaling deep learning models to larger datasets and more complex tasks? That's two. How do you see graph neural nets evolving to handle complex structured data? And then I have one more question, but I'll wait till you take those three.

Yeah, I can't remember all three questions already, so okay.
So the first one was the most significant breakthrough in AI in the last decade. Certainly the architectural concepts that allow us to build deep learning systems by assembling modules with different properties; I think that is a great thing. Transformers certainly have been a super useful and fruitful architectural concept for this. They are equivariant to permutation, which is great, the way convolutional layers are equivariant to translation, and we're probably going to come up with new ones like that.
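Those two equivariance claims are easy to check numerically; the toy check below is added here for illustration, not something from the talk. Circular padding is used only so the translation test is exact at the edges, and positional encodings are deliberately left out of the attention layer, since they are precisely what breaks pure permutation equivariance.

```python
# Numerical check of the two equivariance properties (toy shapes, illustrative).
import torch
import torch.nn as nn

torch.manual_seed(0)

# A convolution commutes with (circular) translation of its input.
conv = nn.Conv1d(4, 4, kernel_size=3, padding=1, padding_mode="circular")
x = torch.randn(1, 4, 16)
shift_then_conv = conv(torch.roll(x, shifts=5, dims=-1))
conv_then_shift = torch.roll(conv(x), shifts=5, dims=-1)
print(torch.allclose(shift_then_conv, conv_then_shift, atol=1e-5))   # True

# Self-attention (with no positional encoding) commutes with permutation of the tokens.
attn = nn.MultiheadAttention(embed_dim=8, num_heads=2, batch_first=True)
tokens = torch.randn(1, 6, 8)
perm = torch.randperm(6)
out, _ = attn(tokens, tokens, tokens)
out_perm, _ = attn(tokens[:, perm], tokens[:, perm], tokens[:, perm])
print(torch.allclose(out[:, perm], out_perm, atol=1e-5))             # True
```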
Self-supervised learning, I think, is probably the most revolutionary concept that has completely changed the way we practice machine learning over the last 10 years. And then there are other things, like systems that are augmented with associative memories and so on, but there is going to be more progress along that line.

Second question... you know what, I'm going to go to the final question of this segment. Do you think AGI is real, or is it a hoax? Is it as close as everyone says? What's your stance on AGI? Go.

Okay, so AGI is a misnomer. It really stands for human-level AI,
and human intelligence is very specialized, so calling this general intelligence is complete nonsense. At Meta we call this advanced machine intelligence, AMI, and we pronounce it "ami," which means "friend" in French. So I don't like the phrase AGI, but there is no question that at some point in the future we will have systems that are as intelligent as humans in pretty much all the domains where humans are intelligent. The idea that intelligence is somehow a linear scale is nonsense. Your cat is smarter than you at certain things, and you're smarter than it at certain things. A 30-dollar, or 30-euro, gadget that you can buy that beats you at chess is smarter than us at chess. So the idea that it's a linear scale, and that at some point there is going to be an event when we reach AGI, is complete nonsense. It's going to be progressive. It's going to be about finding new concepts that will allow the AI systems of the future to understand the physical world, which LLMs can't do; to have persistent memory, which LLMs don't have; and to be capable of reasoning and planning, which LLMs definitely can't do. Those are the four challenges of the next few years in AI. This is what I'm working on, this is what a good half of FAIR at Meta is working on: the next-generation AI systems.

So, in terms of opportunities with AI, you just said what you think of AGI. How can we ensure that AI benefits everyone regardless of background or socioeconomic status? I know it's kind of like the Wild West: let's create these tools.
In Web 2.0 there was a lot of winner-take-all, privacy went out the window, and, you know, Meta, Facebook, and Google did really well. I think in this era of AI there's some real opportunity to bake in some things. Uber didn't come about until 16 years into Web 2.0, when people said, oh, you can combine these things. I think there are going to be a lot of things we can combine with this technology that's happening now. What do you think of the opportunities for it to be an equalizer?

Two words: open source.

So how did you get into open source? Did they recruit you and say, do this? Did you show up at Meta and say, we're doing this?

Okay, it's almost like that. When I was recruited by Mark and Mike Schroepfer, who was the CTO at the time, to create a research activity in AI, the first research lab at Meta, at Facebook at the time, I told them I had three conditions. The first one is that I don't move from New York, I don't move to California.
I keep my job at NYU, because I need a foot in academia. And the third thing is that we have to practice open research and open-source everything we do. The answer to the first two questions was yes, and the answer to the third question was, you don't need to worry about this, it's in the DNA of the company; we already open-source all of our platform software, like React and things like this, and Open Compute and all that stuff, so you don't need to worry about it. I said, okay, where do I sign? It was the first time I had heard this from any company that I had either interviewed with or worked for, so I thought that was the most interesting aspect of it.

So we've been practicing open research, and it had the effect of actually causing other labs, like Google, to open up a little bit. Google Brain was publishing, but not nearly as much, and they opened up. And this is one of the big factors that caused the rapid expansion of AI over the last dozen years: the fact that scientific information, code and everything is freely exchanged accelerates everything. So that became major evidence for the management at Meta that this was really good, not just for the world but for the entire industry and for ourselves; we get a lot of feedback and a flywheel of innovation from people using our tools. That's why we not only open-sourced PyTorch but also transferred its ownership to the Linux Foundation.
PyTorch is now basically the basis for pretty much all AI R&D today, including at OpenAI, by the way; ChatGPT is built on PyTorch. But I think there are two more important reasons for it. The first one is that the AI industry today is largely built on top of open-source foundation models, and a lot of the startups that you see are built on top of Llama or Mistral or some other open-source engine. There would not be an AI industry as we know it today if it were not for those open-source platforms. That's the first thing. The second thing is that, in the future, every single one of our interactions with the digital world will be mediated by AI assistants, from smartphones or smart devices like those glasses; I can take a picture of you guys, smile. And what that means is that all of our information diet will come from AI assistants. We cannot afford to have those assistants come from a handful of companies on the West Coast of the US or from China. It has to be highly diverse, and the only way for it to be diverse, which relates to the previous question, is open-source foundation models that are then fine-tuned for particular applications, or for learning every language in the world, every culture, every value system. That's how you get a diverse population of AI assistants. We need that for the same reason we need a diverse press. So it's very important for democracy to preserve open source and not to legislate it out of existence.
Well put. Five years ago, did you think what is happening now was going to be happening now? I talked to Sébastien, who's at Microsoft Research, and he said that five years ago he thought everything happening now was going to happen 90 years from now. He was very emotional, because he said these are things his kid is going to be using; he thought it was for the next generation. Are things on track with how you thought they'd be, a little ahead, or what's different?

Well, five years ago it was pretty clear that things like dialogue systems and self-supervised learning in the context of language were going to have a huge impact. It wasn't clear for the wider population, because ChatGPT hadn't been put in front of everyone, but there were systems being worked on at Meta, at Google, at OpenAI and various other outfits that made it pretty clear where this was going. So yeah, it was pretty clear five years ago. Now, Sébastien is an interesting guy. He came from a very theoretical, mathematical background; he was the head of the theory group at Microsoft Research, and he had an epiphany. He had kind of resisted learning, he didn't think it was interesting, because theoretically it's just too complicated, and then he had an epiphany one day playing with GPT-4 and said, wow, this is really cool, and wrote this paper about the "Sparks of AGI," and he quit Microsoft and now he's at OpenAI.

He was with me the day he published that paper and it went online, and he was very emotional.

Right, right. And I think both of his opinions, before and after the epiphany, were excessive.

All right. My last question is going to be for you to say something that people don't know that you think is important, or something that's unique to your perspective. I mean, I'm kind of curious what you think your legacy is right now in the AI world and what you think it will be. That's my last question.
But the question before that is this: I think AI can be scary. What are some potential risks or downsides of advanced AI systems, and how can we mitigate them? I'm a techno-optimist, but I think we need to be aware of the challenges. So before I ask you about your legacy, and you give these guys and ladies the inside scoop, what about that?

So let me talk about risk and benefit. The benefits far outweigh the risks, and there are benefits in every corner: not just economic growth and progress in science and medicine and safety and all that stuff, but also the fact that everyone will basically be smarter because of AI. If we walk around with those AI assistants that are with us at all times, they will amplify our intelligence. They will work for us; they're not going to dominate us or anything, they're not going to kill us all or anything. They're going to do what we tell them to do, but they might solve problems in a smarter way than any individual, than anyone in this room, including me, can do. We're very familiar with this concept; at least, I'm very familiar with the concept of working with other people who are smarter than me. And this is the way it's going to happen: you're going to have a staff of people working for you, virtual people, who are smarter than you, and they'll just make you more productive, more creative, smarter. Amplifying human intelligence, I think, is the best thing we can ever do. The last time this happened on a big scale was following the invention of the printing press, which allowed the dissemination of knowledge and philosophy, basically brought down the feudal system in Europe, brought the Enlightenment, and then caused the American Revolution, the French Revolution and the emergence of democracy. Just the dissemination of knowledge had an enormous impact. So AI may have kind of the same effect as the next step: it's going to be a new Renaissance, perhaps, for humanity. How about that for a legacy?

I think... you know, well put. So you've got a few minutes; just say what's on your mind right now. I think Demis is somewhere else, he has a lunch; I think we have a bigger group than that one. You've got a captive audience: world leaders, corporates, techies, the MIT community. What do we need to know? Here, use this; this is your stage.
Okay. AI is not going to kill us all, first of all. Future AI systems will be built on a very different blueprint from current ones, and to some extent they'll be safer, in the sense that they will be more controllable. What I've been a proponent of is something called objective-driven AI: basically, an AI system that elaborates its answer by optimizing an objective, subject to guardrails. That makes those systems controllable, which is not the case for LLMs; LLMs don't optimize any objective, they just produce one token after the other.
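One loose way to read "optimizing an objective subject to guardrails" is sketched below, with entirely made-up costs: at inference time the system searches over a candidate action (or latent plan), minimizing a task objective plus guardrail penalties, instead of emitting tokens one after the other. This is an illustration of the general idea, not a description of Meta's architecture.

```python
# Illustrative objective-driven inference: choose the action that minimizes a
# task cost plus guardrail penalties. All costs and numbers here are invented.
import torch

def task_cost(action: torch.Tensor, goal: torch.Tensor) -> torch.Tensor:
    # How far the (predicted) outcome of the action is from the desired goal.
    return torch.sum((action - goal) ** 2)

def guardrail_cost(action: torch.Tensor) -> torch.Tensor:
    # Penalize any component of the action that leaves the allowed box |a_i| <= 1.
    return torch.sum(torch.clamp(action.abs() - 1.0, min=0.0) ** 2)

goal = torch.tensor([0.5, 2.0, -0.3])        # note: the 2.0 lies outside the allowed box
action = torch.zeros(3, requires_grad=True)
optimizer = torch.optim.SGD([action], lr=0.05)

for _ in range(200):                          # inference is an optimization over the action
    optimizer.zero_grad()
    loss = task_cost(action, goal) + 10.0 * guardrail_cost(action)
    loss.backward()
    optimizer.step()

# The first and last components reach the goal; the second settles near the
# guardrail boundary (about 1.09) instead of the raw goal of 2.0.
print(action.detach())
```

The guardrail here is just a hard-coded penalty; the point in the talk is that the constraints are part of what the system optimizes at run time, rather than something hoped for in its training data.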
So future systems will be smarter, capable of reasoning, hopefully capable of understanding the physical world, and they'll have common sense, but they'll be controllable because of those guardrails and objectives. And it's not super difficult or unusual to design those guardrails; we're familiar with doing this for humans and for superhuman entities called corporations, and that's called making laws. So we know how that works. So no, they're not going to kill us all. Future systems are going to be very different from current ones. There's going to be another revolution in AI within the next three to five years, due to those changes of paradigm, and this may open the door to systems that can not just pass the bar exam and solve math problems, but also do what a cat can do. We have no idea how a cat becomes so smart so quickly and can understand the world so well, much better than any robot we can build. If we can crack that nut, understanding the physical world and having common sense, we'll have systems that can power domestic robots and level-five self-driving cars, which still don't exist, and all the stuff that we've imagined. So maybe the next decade will be the decade of robotics, if we can make that step towards the next generation of AI systems.