Almost everybody I know who's an expert on AI believes that they will exceed human intelligence; it's just a question of when. Between five and 20 years from now, there's a probability of about a half that we'll have to confront the problem of them trying to take over.

I began by asking Geoffrey Hinton whether he thought the world is getting to grips with this issue, or if he's as concerned as ever.

I'm still as concerned as I have been, but I'm very pleased that the world's beginning to take it seriously. In particular, they're beginning to take the existential threat seriously: that these things will get smarter than us, and we have to worry about whether they'll want to take control away from us. That's something we should think seriously about, and people now take that seriously. A few years ago they thought it was just science fiction.

And from your perspective, having worked at the top of this, having developed some of the theories underpinning all of this explosion in AI that we're seeing, that existential threat is real?

Yes.
Some people think these things don't really understand, that they're very different from us, that they're just using some statistical tricks. That's not the case. These big language models, for example: the early ones were developed as a theory of how the brain understands language. They're the best theory we've currently got of how the brain understands language. We don't understand in detail either how they work or how the brain works, but we think they probably work in fairly similar ways.

What is it that's triggered your concern?

It's been a combination of two things. Playing with the large chatbots, particularly one at Google before GPT-4, but also with GPT-4: they're clearly very competent, they clearly understand a lot. They have a lot more knowledge than any person; they're like a not very good expert at more or less everything. So that was one worry. And the second was coming to understand the way in which they're a superior form of intelligence, because you can make many copies of the same neural network, each copy can look at a different bit of data, and then they can all share what they learned. So imagine if we had 10,000 people: they could all go off and do a degree in something, they could share what they learned efficiently, and then we'd all have 10,000 degrees. We'd know a lot. We can't share knowledge nearly as efficiently as different copies of the same neural network can.
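What Hinton is describing is, in essence, data-parallel learning: identical copies of one network each compute an update on their own shard of the data, then pool those updates so every copy benefits from what every other copy saw. A minimal sketch of that idea, using a toy linear model and plain gradient averaging (the model, names, and numbers here are illustrative assumptions, not anything from the interview):

```python
# Illustrative sketch (hypothetical): copies of one model train on
# different data shards and "share what they learned" by averaging
# their gradient updates at every step.
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: a linear signal plus noise.
true_w = np.array([1.0, -2.0, 0.5])
X = rng.normal(size=(900, 3))
y = X @ true_w + 0.1 * rng.normal(size=900)

# Three "copies" of the network, each assigned a disjoint shard of data.
shards = np.array_split(np.arange(900), 3)

def gradient(w, X_shard, y_shard):
    """Mean-squared-error gradient for the linear model y ~ X @ w."""
    return 2.0 * X_shard.T @ (X_shard @ w - y_shard) / len(y_shard)

weights = np.zeros(3)  # identical starting point for every copy
for step in range(200):
    # Each copy learns from its own bit of data...
    grads = [gradient(weights, X[idx], y[idx]) for idx in shards]
    # ...then the copies share: averaging the updates keeps them
    # identical while spreading what each one learned to all of them.
    weights -= 0.05 * np.mean(grads, axis=0)

print(weights)  # converges toward true_w = [1.0, -2.0, 0.5]
```

Averaging the shard gradients each step is equivalent to one model seeing all the data at once, which is the efficiency Hinton contrasts with humans having to swap knowledge slowly through language.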
Okay, so the key concern here is that it could exceed human intelligence?

Indeed, massively exceed human intelligence. Very few of the experts are in doubt about that. Almost everybody I know who's an expert on AI believes that they will exceed human intelligence; it's just a question of when.

And at that point it's really quite difficult to control them?

Well, we don't know; we've never dealt with something like this before. There are a few experts, like my friend Yann LeCun, who think it'll be no problem: we'll give them the goals, they'll do what we say, they'll be subservient to us. There are other experts who think absolutely they'll take control. Given this big spectrum of opinions, I think it's wise to be cautious. I think there's a chance they'll take control, and it's a significant chance. It's not like 1%; it's much more.

Could they not be contained in certain areas, scientific research, say, but not, for example, the armed forces?

Maybe, but actually if you look at all the current legislation, including the European legislation, there's a little clause in all of it that says none of this applies to military applications. Governments aren't willing to restrict their own uses of it for defence.

I mean, there's been some evidence, even in current conflicts, of the use of AI in generating thousands and thousands of targets.

Yes.

That's happened since you started warning about AI. Is that the sort of pathway that you're concerned about?

I mean, that's the thin end of the wedge. What I'm most concerned about is when these things can autonomously make the decision to kill people.

So robot soldiers?

Yeah, and drones and the like. And it may be we can get something like the Geneva Conventions to regulate them, but I don't think that's going to happen until after very nasty things have happened.

And there's an analogy here with the Manhattan Project and with Oppenheimer, which is: if we restrain ourselves from military use in the G7 advanced democracies, what's going on in China, what's going on in Russia?

Yes, it has to be an international agreement. But if you look at chemical weapons, the international agreement on chemical weapons has worked quite well.

I mean, do you have any sense of whether the shackles are off in a place like Russia?

Well, Putin said some years ago that whoever controls AI controls the world, so I imagine they're working very hard. Fortunately, the West is probably well ahead of them in research. We're probably still slightly ahead of China, but China is putting more resources in. So in terms of military uses of AI, I think there's going to be a race.

It sounds very theoretical, but this thread of argument, if you follow it, you really are quite worried about extinction-level events.

We should distinguish these different risks. The risk of using AI for autonomous lethal weapons doesn't depend on AI being smarter than us; that's a quite separate risk from the risk that the AI itself will go rogue and try to take over. I'm worried about both things. The autonomous weapons are clearly going to come, yes.
Whether AI goes rogue and tries to take over is something we may be able to control, or we may not; we don't know. And so at this point, before it's more intelligent than us, we should be putting huge resources into seeing whether we are going to be able to control it.

What sort of society do you see evolving? Which jobs will still be here?

Yes, I'm very worried about AI taking over lots of mundane jobs. And that should be a good thing: it's going to lead to a big increase in productivity, which leads to a big increase in wealth. And if that wealth was equally distributed, that would be great. But it's not going to be. In the systems we live in, that wealth is going to go to the rich and not to the people whose jobs get lost, and that's going to be very bad for society, I believe. It's going to increase the gap between rich and poor, which increases the chances of right-wing populists getting elected.
So to be clear, you think that the societal impacts from the changes in jobs could be so profound that we may need to rethink the politics of, say, the benefit system, inequality?

Absolutely.

Universal basic income?

Yes, I certainly believe in universal basic income. I don't think that's enough, though, because a lot of people get their self-respect from the job they do. If you put everybody on universal basic income, that solves the problem of them starving and not being able to pay the rent, but it doesn't solve the self-respect problem.

So is that something the government needs to get on with? I mean, it's not how we do things in Britain; we tend to sort of stand back and let the economy decide the winners and losers.

Yes. Actually, I was consulted by people in Downing Street, and I advised them that universal basic income was a good idea.

You said a 10 to 20% risk of them taking over. Are you more certain that this is going to have to be addressed in the next five years, the next Parliament perhaps?

My guess is that between five and 20 years from now there's a probability of about a half that we'll have to confront the problem of them trying to take over.
Are you particularly impressed by the efforts of governments so far to try and rein this in?

I'm impressed by the fact that they're beginning to take it seriously. I'm unimpressed by the fact that none of them is willing to regulate military uses, and I'm unimpressed by the fact that most of the regulations have no teeth.

Do you think that the tech companies are letting down their guard on safety because they need to be the winner in this race for AI?

I don't know about the tech companies in general. I know quite a lot about Google, because I used to work there. Google was very concerned about these issues, and Google didn't release the big chatbots; it was concerned about its reputation if they told lies. But as soon as OpenAI went into business with Microsoft and Microsoft put chatbots into Bing, Google had no choice. So I think the competition is going to cause these things to be developed rapidly, and the competition means that they won't put enough effort into safety.

Parents talk to their children and give them advice on the future of the economy, what jobs they should do, what degrees they should do. It seems like that world is being thrown up in the air by the world that you're describing. What would you advise somebody to study now, to surf this wave?

I don't know, because it's clear that a lot of mid-level intellectual jobs are going to disappear. And if you ask which jobs are safe, my best bet about a job that's safe is plumbing, because these things aren't yet very good at physical manipulation. That'll probably be the last thing they're very good at, and so I think plumbing is safe for quite a long time.

Driving?

Driving, no. No, that's hopeless; I mean, that's been slower than expected. Journalism might last a little bit longer, but I think these things are going to be pretty good journalists quite soon, and probably quite good interviewers too.

Okay, well...