When I asked you a couple of years ago about job displacement, you seemed to think that wasn't a big concern. Is that still your thinking? No, I'm thinking it will be a big concern.
AI has got so much better in the last few years that, I mean, if I had a job in a call center, I'd be very worried. Yeah. Or maybe a job as a lawyer, or a job as a journalist, or a job as an accountant.
Yeah. And doing anything routine. I think investigative journalists will last quite a long time, because you need a lot of initiative plus some moral outrage, so I think journalists will be in business for a bit. But beyond call centers, what are your concerns about jobs? Well, any routine job, so a sort of standard secretarial job, something like a paralegal, for example. Those jobs have had it. Have you thought about how we move forward in a world where all these jobs go away? So it's like this: it ought to be that if you can increase productivity, everybody benefits.
Um, the people who are doing those jobs can work a few hours a week instead of 60 hours a week. Um, they don't need two jobs anymore. They can get paid lots of money for doing one job because they're just as productive using AI assistance.
But we know it's not going to be like that. We know what's going to happen is the extremely rich are going to get even more extremely rich and the not very well off are going to have to work three jobs. I think no one likes this question, but we like to ask it.
This idea of p(doom), how likely it is. And I am curious if you see this as a quite possible thing, or if it's just so bad that even though the likelihood isn't very high, we should just be very concerned about it. Where are you on that scale of probability?
So I think most of the experts in the field would agree that if you consider the possibility that these things will get much smarter than us and then just take control away from us, just take over, the probability of that happening is very likely more than 1% and very likely less than 99%. I think pretty much all the experts can agree on that. But that's not very helpful.
No, but it's a good start. It might happen and it might not happen. And then different people disagree on what the numbers are.
I'm in the unfortunate position of happening to agree with Elon Musk on this. Um which is that it's sort of 10 to 20% chance that these things will take over. Um but that's just a wild guess.
Yeah. Um I think reasonable people would say it's quite a lot more than 1% and quite a lot less than 99%. But we're dealing with something we've got no experience of.
Um, we have no really good way of estimating what the probabilities are. It seems to me at this point it's inevitable that we're going to find out. We are going to find out.
Yes, because it seems extremely likely that these things will get smarter than us. Already they're much more knowledgeable than us. So GPT-4 knows thousands of times more than a normal person.
It's a not very good expert at everything and eventually its successors will be a good expert at everything. Um they'll be able to see connections between different fields that nobody's seen before. Yeah.
Yeah. I'm also interested in understanding: okay, there's this terrible 10 to 20% chance, or more, or less, but let's just take as a premise that there's an 80% chance that they don't take over and wipe us out. So that's the most likely scenario.
Do you still think it would be net positive or net negative if it's not the worst outcome? Okay, if we can stop them taking over, um, that would be good. The only way that's going to happen is if we put serious effort into it.
But I think once people understand that this is coming, there will be a lot of pressure to put serious effort into it. If we just carry on like now just trying to make profits, it's going to happen. They're going to take over.
Um, we have to have the public put pressure on governments to do something serious about it. But even if the AIs don't take over, there's the issue of bad actors using AI for bad things. So mass surveillance, for example, which is already happening in China.
If you look at what's happening in the west of China to the Uyghurs, the AI is terrible for them. To board a plane to come to Toronto, I had to have a facial recognition photo taken by our US government. Right.
When I come into Canada, you put your passport and it looks at you and it looks at your passport. Yeah. Every time it fails to recognize me.
Um, everybody else it recognizes, people from all different nationalities it recognizes. Me, it can't recognize.
And I'm particularly indignant since I assume it's using neural nets. You didn't carve out an exception, did you? No.
No. It's just there's something about me that it doesn't like. I've been holding us back a little bit from kind of the central thing that I think you want people to take away, which is this idea of machines taking over and the impact of that.
So I'd like to just discuss that as fully as you'd like, or as fully as we can. How do you want to frame this issue? How should people think about it? One thing to bear in mind is this: how many examples do you know of less intelligent things controlling much more intelligent things?
So we know that when things are of more or less equal intelligence, the less intelligent one can control the more intelligent one. But with a big gap in intelligence, there are very, very few examples where the more intelligent one isn't in control. So that's something you should bear in mind. That's a big worry.
I think the situation we're in right now, the best way to understand it emotionally, is that we're like somebody who has this really cute tiger cub. It's just such a cute tiger cub. Now, unless you can be very sure that it's not going to want to kill you when it's grown up, you should worry.
Mm, and to extend the metaphor: do you put it in a cage, do you kill it? What do you do with the tiger cub? Well, the point about the tiger cub is it's just physically stronger than you.
So, you can still control it because you're more intelligent. Yeah. But things that are more intelligent than you, we have no experience of that, right? People aren't used to thinking about it. People think somehow you constrain it, you don't allow it to press buttons or whatever.
Um, things more intelligent than you are going to be able to manipulate you. So another way of thinking about it is to imagine there's this kindergarten. There are these two- and three-year-olds, and the two- and three-year-olds are in charge, and you just work for them in the kindergarten. And you're not that much more intelligent than a two- or three-year-old.
Not compared with a superintelligence, but you are more intelligent. So how hard would it be for you to get control? Well, you just tell them all they're going to get free candy, and if they just sort of sign this, or just agree to it verbally, they'll get free candy for as long as they like.
And you'll be in control. They won't have any idea what's going on. And with superintelligences, they're going to be so much smarter than us.
We'll have no idea what they're up to. And so what do we do? Um, we worry about whether there's a way to build a superintelligence so that it doesn't want to take control.
I don't think there's a way of stopping it from taking control if it wants to. So one possibility is to never build a superintelligence. You think that's possible?
I mean, it's conceivable, but I don't think it's going to happen, because there's too much competition between countries and between companies, and they're all after the next shiny thing, and it's developing very, very fast. So I don't think we're going to be able to avoid building superintelligence. It's going to happen.
The issue is: can we design it in such a way that it never wants to take control, that it's always benevolent? That's a very tricky issue. People say, well, we'll get it to align with human interests.
But human interests don't align with each other. It's as if I say I've got two lines at right angles and I want you to show me a line parallel to both of them. That's kind of tricky, right?
And if you look at the Middle East for example, there's people with very strong views that don't align. So how are you going to get AI to align with human interests? Human interests don't align with each other.
So that's one problem. It's going to be very hard to figure out how to get a superintelligence that doesn't want to take over and doesn't want to ever hurt us. But we should certainly try.
And trying is kind of just an iterative process. Month by month, year by year, we try to... Yeah. So obviously, if you're going to develop something that might want to take over when it's just slightly less intelligent than you are, and we're very close to that now, you should kind of look at what it'll do to try and take over.
So if you look at the current AIs, you can see they're already capable of deliberate deception. They're capable of pretending to be stupider than they are, of lying to you so that they can kind of confuse you into not understanding what they're up to.
We need to be very aware of all that, to study all that, and to study whether there's a way to stop them doing it. When we spoke a couple of years ago, I was surprised at you voicing concerns, because you hadn't really done much of that before, and now you're voicing them quite clearly and loudly. Was it mostly that you felt more liberated to say this stuff, or was it a really big sea change in how you saw it in these last few years?
When we spoke a couple of years ago, I was still working at Google then. It was in March and I didn't resign till um the end of April. Um but I was thinking about leaving then.
Um, and I had had a kind of epiphany before we spoke, where I realized that these things might be a better form of intelligence than us. And that got me very scared. And you didn't think that before just because you thought the time horizon was so different?
No, it wasn't just that. It was because of the research I was doing at Google. Okay.
I was trying to figure out whether you could design analog large language models that would use much less power. Mhm. Um, and I began to fully realize the advantage of being digital.
So all the models we've got at present are digital. And if you're a digital model, you can have exactly the same neural network with the same weights in it running on several different pieces of hardware, like thousands of different pieces of hardware. And then you can get one piece of hardware to look at one bit of the internet and another piece of hardware to look at another bit of the internet.
And each piece of hardware can say, how would I like to change my internal parameters, my weights, so I can absorb the information I just saw. And each of these separate pieces of hardware can do that. And then they can just average all the changes to the weights because they're all using the same weights in exactly the same way.
And so averaging makes sense. You and I can't do that. And if they've got a trillion weights, they're sharing information at like trillions of bits every time they do this averaging.
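A toy sketch of what that averaging looks like, with assumptions: a small NumPy linear model stands in for a trillion-weight network, and each replica computes its own desired weight change on its own shard of data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear model: a small weight vector standing in for a trillion weights.
# Every replica starts from an *identical* copy, which is only possible
# because the model is digital.
shared_weights = rng.normal(size=8)

def local_gradient(weights, X, y):
    """How this replica would like to change its weights, based only on
    the shard of data it has seen (mean-squared-error gradient)."""
    residual = X @ weights - y
    return 2.0 * X.T @ residual / len(y)

# Four replicas, each looking at a different "bit of the internet".
true_w = rng.normal(size=8)
shards = []
for _ in range(4):
    X = rng.normal(size=(64, 8))
    shards.append((X, X @ true_w + 0.01 * rng.normal(size=64)))

weights = shared_weights.copy()
for step in range(200):
    # Each replica computes its own desired change in parallel...
    grads = [local_gradient(weights, X, y) for X, y in shards]
    # ...and because all copies use the same weights in the same way,
    # simply averaging those changes makes sense.
    weights -= 0.05 * np.mean(grads, axis=0)

print("distance from target weights:", np.linalg.norm(weights - true_w))
```

The averaging step only works because every replica holds an identical digital copy of the weights, which is exactly the trick an analog brain cannot use.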
Now, when I want to get some knowledge from my head into your head, I can't just take the strengths of the connections between my neurons and average them with the strengths of the connections between your neurons, because our neurons are different. We're analog, and we have just very different brains. Yeah.
So the only way I have of getting knowledge to you is to do some actions, and if you trust me, you try and change the connection strengths in your brain so that you might do the same things. And if you ask, well, how efficient is that? Well, if I give you a sentence, it's only a few hundred bits of information at most.
So, it's very slow. We communicate just a few bits per second. These large language models running on digital systems can communicate trillions of bits a second.
So, they're billions of times better than us at sharing information. That got me scared. Right.
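A rough back-of-the-envelope check on that comparison, using assumed figures: a sentence carrying on the order of a few hundred bits, and a trillion-weight model sharing one 32-bit number per weight at each averaging step.

```python
# Assumed figures, for illustration only.
bits_per_sentence = 300           # a sentence: a few hundred bits at most
num_weights = 10**12              # a trillion-weight model
bits_per_sync = num_weights * 32  # one 32-bit number per weight per averaging step

# Roughly 1e11: on the order of a hundred billion times more information
# shared per exchange than one sentence carries.
print(bits_per_sync / bits_per_sentence)
```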
But what surprised you, or what changed your thinking, was that you were previously thinking analog was going to be the path? No, I was thinking that if we want to use much less power, we should think about whether it's possible to do this in analog. Yeah.
And because you can use much less power, you can also be much sloppier in the design of the system. Because what's going to happen is you don't have to manufacture a system that does precisely what you tell it to, which is what a computer is. You can manufacture a system with a lot of slop in it, and it will learn to use that sloppy system, which is what our brains are.
Do you think the technology is no longer destined for that solution, but is going to stick with the digital solution? I think it'll probably stick with the digital solution. Now, it's quite possible that we can get these digital computers to design analog hardware better than we can.
Um, I think that may be the long-term future. You got into this field because you wanted to know how the brain works. Yes.
Do you think we're getting closer to that through this? I think for a while we did. So I think we've learned a lot at a very general level about how the brain works.
So 30 years ago, or 50 years ago, if you asked people, well, could you have a big random neural network with random connection strengths, and then could you show it data and have it learn to do difficult things, like recognize what someone's saying or answer questions, just by showing it lots of data? Almost everybody would have said, "That's crazy. There's no way you're going to do that. It has to have lots of pre-wired structure that comes from evolution." Well, it turns out they were wrong. It turns out you can have a big random neural network.
Um, and it can learn just from data. Now, that doesn't mean we don't have a lot of pre-wired structure, but basically most of what we know comes from learning from data, not from all this pre-wired structure. So, that's a huge advance in understanding the brain.
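A minimal illustration of that claim, on a deliberately tiny scale: a two-layer network that starts with purely random connection strengths and learns XOR just from examples. The task, sizes, and learning rate are arbitrary choices for the sketch.

```python
import numpy as np

rng = np.random.default_rng(1)

# XOR: a task the network cannot do without learning nonlinear structure.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0.], [1.], [1.], [0.]])

# Purely random starting connection strengths: no task-specific wiring at all.
W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(5000):
    # Forward pass.
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    # Backward pass: how should each connection strength change?
    dp = (p - y) / len(X)            # cross-entropy gradient at the output
    dW2 = h.T @ dp;  db2 = dp.sum(axis=0)
    dh = dp @ W2.T * (1 - h**2)
    dW1 = X.T @ dh;  db1 = dh.sum(axis=0)
    # Nudge every weight a little in the direction that reduces the error.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(np.round(p, 2).ravel())   # should end up close to [0, 1, 1, 0]
```

Nothing about XOR is wired in at the start; all of the structure ends up in the learned weights.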
Now, the issue is: how do you get the information that tells you whether to increase or decrease a connection strength? If you can get that information, we know we can then train a big system that starts with random weights to do wonderful things. The brain needs to get information like that, and it probably gets it in a different way from the standard algorithm used in these big AI models, which is called backpropagation. The brain probably doesn't use backpropagation; nobody can figure out how it could be doing it. It's probably getting the gradient information, that is, how changing a weight will improve the performance, in a different way. But we do know now that if it can get that gradient information, it can be really effective at learning.
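To make "gradient information" concrete, here is a small sketch comparing the exact derivative of the error with respect to each weight (what backpropagation computes via the chain rule) against a nudge-one-weight-and-see estimate, which is the kind of "different way" sometimes hypothesized for the brain. The model and numbers are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# A tiny model: predict y from x with a single layer of four weights.
X = rng.normal(size=(32, 4))
true_w = np.array([1.0, -2.0, 0.5, 3.0])
y = X @ true_w
w = rng.normal(size=4)                      # random starting point

def loss(w):
    return np.mean((X @ w - y) ** 2)

# Exact gradient (for this one-layer model the chain rule is a single step;
# backpropagation is the same idea applied through many layers).
grad_exact = 2.0 * X.T @ (X @ w - y) / len(y)

# A different way to get the same information: nudge one weight and see
# whether the performance gets better or worse.
eps = 1e-6
grad_nudge = np.array([
    (loss(w + eps * np.eye(4)[i]) - loss(w - eps * np.eye(4)[i])) / (2 * eps)
    for i in range(4)
])

# Both tell you the same thing: a positive number means decrease that
# connection strength, a negative number means increase it.
print(np.round(grad_exact, 3))
print(np.round(grad_nudge, 3))
```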