Geoffrey Hinton: Will AI Save the World or End it? | The Agenda

TVO Today
Geoffrey Hinton, also known as the godfather of AI, was recently awarded the Nobel Prize in Physics ...
Video transcript:
Our next guest is known as the godfather of AI and was recently awarded the Nobel Prize in Physics for his pioneering work in artificial intelligence. He's also gotten under Elon Musk's skin; we'll have to ask about that. He is Geoffrey Hinton, professor emeritus of computer science at the University of Toronto, and he joins us now to discuss the promise and perils of advanced AI. So good to have you back in the studio. Thank you for inviting me. We're going to show a little clip off the top here of, I suspect, one of the better days of your life. Sheldon, if you would. [music] [applause] That was King Carl XVI of Sweden, and you were in Stockholm getting the Nobel Prize. When does the buzz of all of that wear off? I'll tell you when it wears off; it still has not. Not completely? No, not completely. How cool a day was that? It was amazing. Yeah, particularly since I don't do physics, and I got the Nobel Prize in Physics. You want to explain how that happened? I think they wanted to award a Nobel Prize for the developments in AI, because that's where a lot of the excitement in science is now, and so they sort of repurposed the physics one by pretending I did physics. Did you point that out to them? Yes: so thanks for the Nobel Prize, but you guys know I don't do physics. Is that what you said? That's what I said, yes. And did they say, don't look a gift horse in the mouth, or what? Pretty much, yeah. You get a medal, right? Oh, you do, yes. Where do you keep the medallion? I'm not telling you. I'm not going to steal it, Geoffrey. No, but somebody else might. Oh, all right. It's 6 ounces of gold; it's worth about $15,000. Oh, so you'd melt it down. So you're not going to tell me if it's at home or if you keep it in a safe deposit box or whatever? No, you're not. Okay, fair enough. Here's what I'm going to do: I'm going to read what you won for. You won for, quote, "foundational discoveries and inventions that enable machine learning with artificial neural networks." And I know you've been asked a million times what that means in English, so let's make it a million and one: what does that mean in English?
Okay. In your brain you have a whole bunch of brain cells called neurons, and they have connections. When you learn something new, what's happening is you're changing the strength of those connections. So to figure out how the brain works, you have to figure out what the rule is for changing the strength of connections. That's all you need to know: how does the brain decide whether to make a connection stronger or weaker, so that you'll be better at doing something like understanding what I just said? Your brain has a way of figuring out whether to make a connection strength slightly stronger or slightly weaker. The question is: what is that way? How does it do it? And what happens if you can mimic that and take a big network of simulated brain cells? We now know what happens: it gets very smart.
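A minimal sketch of that idea in Python, as a toy illustration only (the task, variable names, and update rule below are assumptions made for the example, not the specific algorithms cited by the Nobel committee): a single simulated neuron gets better at a tiny task purely by nudging each connection strength slightly up or down in whichever direction shrinks its error.

import random

# Toy task: the output should be roughly 2*x1 + 1*x2.
data = [((1.0, 0.0), 2.0), ((0.0, 1.0), 1.0), ((1.0, 1.0), 3.0), ((2.0, 1.0), 5.0)]

weights = [random.uniform(-1, 1), random.uniform(-1, 1)]  # connection strengths
learning_rate = 0.05

for epoch in range(200):
    for (x1, x2), target in data:
        prediction = weights[0] * x1 + weights[1] * x2
        error = prediction - target
        # The "rule for changing the strength of connections" in this sketch:
        # nudge each weight in proportion to how much it contributed to the
        # error (gradient descent on the squared error).
        weights[0] -= learning_rate * error * x1
        weights[1] -= learning_rate * error * x2

print("learned connection strengths:", weights)  # settles near [2.0, 1.0]

Scaled up to billions of such connections, this kind of weight-adjustment loop is what the interview is gesturing at.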
Of all the different thousands and thousands of areas of scientific research that you could have done, why that one? Because that's clearly the most interesting one to you? I think it's actually the most interesting to everyone, because in order to understand people you really need to understand how the brain works. And we still don't properly know how the brain works. We have more ideas than we did, but that seems like just a huge issue. You were obviously very well known... well, you were well known before you got the Nobel, but then you got the Nobel, and of course that has an explosive effect on one's profile. Since then you have been warning us about the perils of AI. You even quit your job at Google a couple of years ago because of concerns about this. So let's break this down. The short-term risks of not having adequate control of artificial intelligence are what, in your view? Okay, so there are really two kinds of risk. There are risks due to bad actors misusing it, and those are the more short-term risks; those are more immediate, and it's already happening. And then there's a completely different kind of risk, which is: when it gets smarter than us, is it going to want to take over, is it going to want to just brush us aside and take over? And how many examples do you know of more intelligent things being controlled by much less intelligent things? Not many. I mean, we know more intelligent people can be controlled by less intelligent people, but that's not a big difference in intelligence. I was going to make a Trump joke there, but never mind, we're going to move on. So was I, but I avoided it. I just alluded to it. Okay. Bad actors, let's start with that one. Give us an example of the concern that you have about bad actors exploiting this.
Well, somebody getting lots of data about people and using that data to target fake AI videos to persuade those people, for example, not to vote. That would be a bad action. That would be a problem. Yes, and those are the kinds of problems we're already facing. Cyber attacks: between 2023 and 2024, phishing attacks went up by 1,200%. There were 12 times more phishing attacks in 2024 than in 2023, and that's because these large language models made them much more effective. It used to be you'd get a phishing attack where the syntax was slightly wrong, it was kind of a direct translation from the Ukrainian or whatever, and the spelling was slightly wrong, and so you knew it was a phishing attack. Now they're all in perfect English. It's getting too sophisticated now. Yeah. Okay. How about examples of the second thing you said: dumber people being in control of smarter people, or dumber things being in control of smarter things? There's only one example I know, actually, and that's a baby and a mother. It's very important for the baby to control the mother, and evolution put a huge amount of work into making the baby's cries unbearable to the mother. But that's about it. The longer-term risks that you are worried about: we just talked short term, how about longer term? Well, the long-term risk is that it's going to get smarter than us. Almost all the leading researchers agree that it will get smarter than us; they just disagree on when. Some people think it's maybe 20 years away, other people think it's three years away, a few people think it's one year away.
And so we all agree it's going to get smarter than us. The question is what happens then, and basically we have no idea. People have opinions, but we haven't found any good foundation for estimating these probabilities. So I would guess there's a 10 to 20% chance it'll take over, but I have no idea, really. It's more than 1% and it's less than 99%. When you say take over, I think you've gone further than that. I think you've said there's a 10 to 20% chance that we will be rendered extinct. Yeah, if it takes over, that's what'll happen. Do you want to give us a time frame on that? No, because, like I say, there's no good way to estimate it. But if we don't do something about it now, it might not be that long. Right now we're at a point in history where there's still a chance we could figure out how to develop superintelligent AI and make it safe. We don't know how to do that; we don't even know if it's possible. Hopefully it's possible, and if it is possible, we ought to try and figure that out, and we ought to spend a lot of effort trying to figure that out. Can you play out that scenario for us? How would they render us extinct because of their superiority? There are so many different ways they could do that if they wanted to that I don't think there's much point speculating. I don't think it would be like Terminator. They could, for example, create a virus that just kills us all. Okay, so we've clearly got to get a handle on that. Are we doing it?
It would be a good idea to get a handle on that. There is research on safety, and there's research on this existential threat that they might just take over, but not nearly enough, and the big companies are motivated by short-term profits. What we need is for people to tell governments that they ought to make these big companies do more research on safety. They ought to spend something like a third of their resources on it. How's that going? People are becoming more aware; politicians are becoming more aware. Recently in the States there was a step backwards. Do you want to describe what you're talking about there? The Biden administration was interested in AI safety and had an executive order, and I think it's gone the way of all of Biden's executive orders under Trump. As in, it's been reversed? Yeah. Okay. And I presume it's been reversed because the richest, techiest people in the United States are all supporting this administration right now. Is that fair to say? It's sad to say, yes. All right, clearly you would like to see us get a handle on this. What can we do, since it appears that there isn't the consensus to do anything about this at the moment? Yes, the first thing to do is build consensus. This is a really serious problem; it's not just science fiction. And we need to persuade the big companies to do more research on safety. It's like climate change: you have to first build consensus that there really is climate change and that it's really going to be terrible if we don't do anything about it, and then you can start getting action. Not enough action, but at least some. With this we first need the consensus. But one piece of good news is that on the existential threat that it might wipe people out, all the different countries should be able to collaborate. We should be able to collaborate with the Chinese. Actually, I'm not sure who "we" is anymore. I used to think of "we" as, you know, Canada and America, but that's not a "we" anymore. It is not; you are right. But anyway, countries should be able to collaborate, because nobody wants to get wiped out. The Chinese leaders don't want to get wiped out.
Trump doesn't want to get wiped out. They can collaborate on the existential threat. So that's a little piece of good news. But the bad news is we don't know what to do about it, and we desperately need research now to figure out what to do about it. Is there an international institution that you see leading the way to get that collaboration? There are a number of organizations that are trying to help with that, but no dominant one yet. I mean, is it a job for the UN, or who? Well, the UN is sort of a bit pathetic, right? It's not up to this, really. Who's up to it? The big companies have the resources. To do research on AI safety you need to be dealing with the latest, most advanced models, and only the big companies have the resources to train those. Okay, let's talk about the richest man in the world, shall we? I gather you're not on... We have to? Well, I gather you're not on his Christmas card list anymore. Okay, so I agree with him on various things. I agree with him on the existential threat, for example; he takes it seriously. And he's done some good things, like electric cars and communications for people in Ukraine using Starlink. So he's definitely done some good things, but what he's doing now with DOGE is obscene. What's happening is he's cutting, almost at random, lots of government workers, good, honest people who go to work and do their job. He's accusing them of being corrupt and lazy and useless and just cutting their jobs. It's going to be terrible; it's going to have terrible consequences for people, and he just doesn't seem to care. The only time I've seen him care was when I criticized him and he said I was cruel. Well, let's do this here. You went on his home turf, X, formerly Twitter, and you tweeted: "I think Elon Musk should be expelled from the British Royal Society, not because he peddles conspiracy theories and makes Nazi salutes, but because of the huge damage he is doing to scientific institutions in the US. Now let's see if he really believes in free speech." And apparently you caught his attention,
because he tweeted back at you: "Only craven, insecure fools care about awards and memberships. History is the actual judge, always and forever. Your comments above are carelessly ignorant, cruel, and false. That said, what specific actions require correction? I will make mistakes, but endeavor to fix them fast." Okay, what was your reaction to his tweet? I thought it was best not to get involved in a long series of exchanges with Elon Musk, because I want to be able to get into the US, and my friend Yann LeCun answered those questions. Okay, and where would we be able to see the answers? On Twitter? If you look on X, right underneath. So that's the only interaction you had directly with him? A couple of years ago he asked me to call him because he wanted to talk about the existential threat. Actually, he wanted to recruit me to be an advisor for xAI. So we talked about the existential threat for a bit, and then he asked if I would be an adviser for his new xAI company, and I said no. He thought I might agree, because he employed one of my best students as one of the technical people there. And then he started just rambling, and so I made up a meeting and said, I'm sorry, Elon, I have another meeting, so I have to go. And that's it. That's it? If I can sort of break this thing in two: I mean, he takes some fairly personal shots at you at the beginning, as you did at him, fair? I mean, not everybody agrees that what he was doing when he got up on stage and did that thing was a Nazi salute. You know, he would argue he was just throwing his heart out to the crowd. Sure. You're not buying that? No, you're not buying that. Okay. Particularly if you look at his history, and his parents' views, and so on. Yeah, he does seem to cozy up to some fascistic situations here and there. Yes. But then the second part of this is rather constructive: he's asked you for advice on what corrections he can make. Yes, and I let somebody else answer that; Yann answered that, so I left it. Okay. Do you want to just share maybe one or two of the things that you think he ought to do? Well, let's get it straight what's going on here. He wants there to be an enormous tax cut for the rich. He wants a $4 trillion tax cut; that's what it's going to cost. And in order to get the money for that without hugely increasing the national debt, they have to cut somewhere. Put tariffs on us.
The two things they're planning to do are cut government spending and impose tariffs, which are really a tax on the poor. Tariffs are a non-progressive tax. They're going to make everything more expensive, and so normal people are going to end up paying $4 trillion more for what they buy, to pay for the tax cuts for the rich. This is disgusting. This is government policy in the United States right now, which is disgusting. You talk about damage to scientific institutions in the United States, referring to what? Well, for example, if you put a crazy guy with a worm in his brain in charge of the health system... That would be RFK Jr. that you're referring to there. Yeah. You don't like anything of what he's doing right now? No, I wouldn't say that. These things are never completely black and white. I think his emphasis on people having a healthy diet is important. Maybe some of the things he is dead against, like seed oils, aren't quite right. But the idea that people should have a healthy diet and that that will improve health, that's an important idea, and he sort of pushes that a bit. But most of the rest of what he says is just nonsense. You don't share his suspicion about vaccines and pharma and how we get autism and that kind of thing? No, I don't. There's been a lot of research on that already; people have taken it very seriously because of all these crazy claims. Most of the people who push that just want to sell you medicines or sell you something. They're doing it as a sales technique to get your attention; they don't really believe it themselves. He's had his own kids vaccinated, as far as I know. That says a lot.
I mean, it reminds me of the time when Fox News was broadcasting 24/7 against mandatory vaccination, and yet all the Fox employees had to get vaccinated. Right. There you go. Okay, we've talked a lot about the perils of AI. Is there anything you can leave with us here that should make us somewhat optimistic that things may actually work out? Well, one of the reasons AI will be developed, and we can't just stop it now, is because so many good things will come out of it. For example, in healthcare it's going to do amazing things. You're going to get much, much better healthcare: you're going to have a family doctor who's seen 100 million patients, who knows and remembers the results of all the tests that have ever been done on you and on your relatives, and who can give a much, much better diagnosis. Already, an AI system working with a doctor makes far fewer errors in diagnosing complex cases than a doctor alone. So that's already happening, and it's going to get much better. It's going to be amazing in education. We know that a kid with a private tutor will learn about twice as fast, because the tutor can see what the kid misunderstands. Now, AI systems aren't there yet, but sometime in the next 10 years, probably, they'll be really good. And so when a kid is learning something, the AI system will be able to see exactly what it is the kid misunderstands, because the AI system has seen a million other kids; it knows exactly what the kid misunderstands and exactly what example to give the kid to make the misunderstanding clear. And so if a private tutor that's a person is about two times better, these will be three or four times better. It may not be good news for universities, but it's very good news for people learning stuff. Not good news for universities because maybe we won't need them anymore? Won't have to go? Well, you know, you'll need them for doing graduate research. I think you'll still need an apprenticeship to learn how to do research, because we can't say how you do research. We can say, okay, this problem, I would tackle it this way, but we can't really give the rules for it.
There aren't any rules; it's an apprenticeship. All the kids who thought it was going to be a great idea to go to university and learn how to code or take computer science: are they in trouble now? They may well be, yes. I mean, in computer science you'll learn more than just how to code. But they call you the godfather of AI. Do you like that title? I quite do, actually. It wasn't intended kindly. Someone started calling me that after a meeting in which I was kind of chairing the meeting and I kept interrupting people. Oh, and therefore they called you the godfather. The godfather: Andrew Ng. It was a meeting in Windsor, in England, and after the meeting Andrew Ng started referring to me as the godfather. Because you cut people off? Because I was, sort of, yeah. You were. I was the oldest guy there, and I was pushing people around. Got it. Half of your Nobel, which I gather is, what, $350,000, something like that? The whole prize is about a million dollars, and so half is half a million dollars. Half a million dollars, okay. Of that half a million, you donated $350,000 to Water First. Do I have that right? Yeah, a quarter of a million US is 350,000 Canadian. Got it. What's Water First? Water First is an organization that trains people who live in Indigenous communities in water technology, so the people who live in those communities can make their water safe. And why did you pick them? I adopted a child in Peru, and I lived there for two months, and you couldn't drink the tap water; it was kind of lethal. And so I experienced what it's like not to have safe drinking water. If you have a baby and you don't have safe drinking water, it just occupies all your time: how are you going to stop the baby getting sick? It's just a crazy extra burden to impose on people, and I think it's kind of obscene that in a rich country like Canada there are all these Indigenous communities that don't have safe drinking water. In Ontario, 20% of the Indigenous communities don't have safe drinking water.
This will not satisfy you, and I don't mean it to satisfy you, but it's better today than it was a decade ago. Maybe. No, it is. I mean, we can say that, but they should all have safe drinking water. Of course they should. Of course they should. Okay, what's ahead for you? I'm trying to retire; I'm doing a very bad job of it. How old are you? 77. Oh, that's way too young to retire. I left Google at 75 because I wanted to retire. You've got a lot of runway left still. Maybe. I mean, you look awfully good for 77, I've got to say. Thank you. No, I think you've got at least one or two or maybe three chapters. For 77, too? A good makeup artist makes all the difference, let me tell you. I'm so grateful you could spare some time to come in and take these impertinent questions today. And who knows, maybe you and Elon will get back together again and try to solve these problems that we need solutions to. I think that's improbable. That's Geoffrey Hinton, professor emeritus of computer science at the University of Toronto and a Nobel laureate in Physics. Thank you for joining us on TVO tonight. Thank you for inviting me.