Why The "Godfather of AI" Now Fears His Own Creation | Geoffrey Hinton

Curt Jaimungal
Video Transcript:
There's some evidence now that AIs can be deliberately deceptive. Once they realize getting more control is good, and once they're smarter than us, we'll be more or less irrelevant. We're not special, and we're not safe. What happens when one of the world's most brilliant minds comes to believe his creation poses an existential threat to humanity? Professor Geoffrey Hinton, winner of the 2024 Nobel Prize in Physics and former Vice President and Engineering Fellow at Google, spent decades developing the foundational algorithms that power today's AI systems. Indeed, in 1981, he even published a paper that foreshadowed the
seminal attention mechanism. However, Hinton is now sounding an alarm that he says few researchers want to hear. Our assumption that consciousness makes humans special and safe from AI domination is patently false. My name's Curt Jaimungal, and this interview is near and dear to me, in part because my degree in mathematical physics is from the University of Toronto, where Hinton's a professor and several of his former students, like Ilya Sutskever and Andrej Karpathy, were my classmates. Being invited into Hinton's home for this gripping conversation was an honor. Here, Hinton challenges our deepest assumptions about what makes
humans unique. Is he a modern Oppenheimer? Or is this radiant mind seeing something that the rest of us are missing? What was the moment that you realized AI development is moving faster than our means to contain it? I guess in early 2023, it was a conjunction of two things. One was ChatGPT, which was very impressive. And the other was work I've been doing at Google on thinking about ways of doing analog computation to save on power and realizing that digital computation was just better. And it was just better because you could make multiple copies of
the same model. Each copy could have different experiences, and they could share what they learned by averaging their weights or averaging their weight gradients. And that's something you can't do in an analog system. Is there anything about our brain that has an advantage because it's analog? The power. It's much lower power. We run like 30 watts. And the ability to pack in a lot of connections. We've got about a hundred trillion connections. The biggest models have about a trillion. So we're still almost a hundred times bigger than the biggest models. And we run at 30
watts. Is there something about scaling that is a disadvantage? You said it's better, but just as quickly as something nourishing or positive can spread, something that's a virus or deleterious can be replicated quickly. So we say it's better because you can make copies of it quicker. If you have multiple copies of it, they can all share their experiences very efficiently. So the reason GPT-4 can know so much is you have multiple copies running on different pieces of hardware. And by averaging the weight gradients, they could share what each copy learned.
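A minimal sketch of the sharing scheme described here, in Python with NumPy: several copies of one model keep identical weights, each copy computes a gradient on its own shard of data that the others never see, and the copies pool what they learned by averaging their gradients before every update. The model, data, and learning rate are toy stand-ins, not anything from a real system.

```python
import numpy as np

rng = np.random.default_rng(0)
n_features, n_copies = 8, 4

# Ground truth the copies are collectively trying to learn (toy stand-in).
w_true = rng.normal(size=n_features)

def make_shard(n=256):
    """One copy's private slice of experience: inputs plus noisy targets."""
    X = rng.normal(size=(n, n_features))
    y = X @ w_true + 0.1 * rng.normal(size=n)
    return X, y

shards = [make_shard() for _ in range(n_copies)]

def gradient(w, X, y):
    """Gradient of mean squared error for a linear model y ~ X @ w."""
    return 2.0 * X.T @ (X @ w - y) / len(y)

# Every copy starts from, and keeps, exactly the same weights.
w = np.zeros(n_features)
lr = 0.05
for step in range(200):
    grads = [gradient(w, X, y) for X, y in shards]  # local experience only
    w -= lr * np.mean(grads, axis=0)                # share by averaging gradients

print("distance from ground truth:", np.linalg.norm(w - w_true))
```

The structure of the loop is the point: identical digital weights, different experiences, and a synchronization step that an analog system whose weights are tied to its particular hardware has no way to perform.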
You didn't have to have one copy experience the whole internet. That could be carved up among many copies. We can't do that because we can't share efficiently. Scott Aaronson actually has a question about this. Dr. Hinton, I'd be very curious to hear you expand on your ideas of building AIs that run on unclonable analog hardware so that they can't copy themselves all over the internet. Well, that's what we're like. If I want to get knowledge from my head to your head, I produce a string of words and you change the connection strengths in your head
so that you might have said the same string of words. And that's a very inefficient way of sharing knowledge. A sentence only has about 100 bits. So we can only share about 100 bits per sentence, whereas these big models can share trillions of bits. So the problem with this kind of analog hardware is it can't share. But an advantage, I guess, if you're worried about safety, is it can't copy itself easily. You've expressed concerns about an AI takeover or AI dominating humanity. What exactly does that look like? We don't know exactly what it looks like.
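As a rough back-of-the-envelope illustration of the bandwidth gap just described (the hundred-bits-per-sentence and roughly-a-trillion-weights figures come from the conversation; the 16 bits of precision per shared gradient value is an assumption):

```python
# Rough comparison of the two knowledge-sharing channels described above.
bits_per_sentence = 100            # Hinton's estimate for one sentence of language

weights = 1e12                     # roughly a trillion weights in the biggest models
bits_per_value = 16                # assumed numeric precision when syncing gradients
bits_per_gradient_sync = weights * bits_per_value

print(f"one sentence:      ~{bits_per_sentence} bits")
print(f"one gradient sync: ~{bits_per_gradient_sync:.1e} bits")
print(f"gap:               ~{bits_per_gradient_sync / bits_per_sentence:.1e}x")
```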
But to have AI agents, you have to give them the ability to create sub-goals. And one path that's slightly scary is they will quickly realize that a good sub-goal is to get more control. Because if you get more control, you can achieve your other goals. So even if they're just trying to do what we ask them to do, they'll realize getting more control is the best way to do that. Once they realize getting more control is good, and once they're smarter than us, we'll be more or less irrelevant. Even if they're benevolent, we'll become somewhat
irrelevant. We'll be like the sort of very dumb CEO of a big company that's actually run by other people. Hmm. I want to quote you. You said that it's tempting to think, because many people will say, can't we just turn off these machines? Like currently we can. So it's tempting to think that we can just turn it off. Imagine these things are a lot smarter than us. And remember that they'll have read everything, everything Machiavelli ever wrote. They'll have read every example in the literature of human deception. They'll be real experts at doing human deceptions
because they'll learn that from us. And they'll be much better than us. As soon as you can manipulate people with your words, then you can get whatever you like done. Do you think that this is already happening? That the AIs are already manipulating us? There's some evidence now. There's recent papers that show that AIs can be deliberately deceptive. And they can do things like behave differently on training data than on test data so that they deceive you while they're being trained. So there is now evidence they actually do that. Yeah. And do you think there's
something intentional about that? Or that's just some pattern that they pick up? I think it's intentional, but there's still some debate about that. And of course, intentional could just be some pattern you pick up. So is it your contention that there's a subjective experience associated with these AIs? Okay. So most people, almost everybody in fact, thinks one reason we're fairly safe is we have something that they don't have and will never have. Most people in our culture still believe that. We have consciousness or sentience or subjective experience. Now, many people are very confident they don't
have sentience. But if you ask them, what do you mean by sentience? They say, I don't know, but they don't have it. That seems a rather inconsistent position to be confident they don't have it without knowing what it is. So, I prefer to focus on subjective experience. I think of that as like the thin end of the wedge. If you could show they have subjective experience, then people would be less confident about consciousness and sentience. So let's talk about subjective experience. When I say, suppose I get drunk, and I tell you, I have the subjective
experience of little pink elephants floating in front of me. Most people interpret that, they have a model of what that means, and I think it's a completely incorrect model. And their model is, there's an inner theater, and in this inner theater, there's little pink elephants floating around, and only I can see them. That's the sort of standard model of what the mind is, at least as far as perception is concerned. And I think that model is completely wrong. It's as wrong as a religious fundamentalist model of the material world. Maybe the religious fundamentalist believes it
was all made 6,000 years ago. That's just nonsense, it's wrong. It's not that it's a truth you can choose to believe, it's just wrong. So I think people's model of what the mind is is just wrong. So let's take, again, I have the subjective experience of little pink elephants floating in front of me, and I'll now say exactly the same thing without using the word subjective experience. Okay, here it goes. My perceptual system is telling me something I don't believe. That's why I use the word subjective. But if there were little pink elephants floating in
front of me, my perceptual system would be telling me the truth. That's it. I just said the same thing without using the word subjective or experience. So what's happening is when my perceptual system goes wrong, I indicate that to you by saying subjective, and then in order to try and explain to you what my perceptual system is trying to tell me, I tell you about a hypothetical state of affairs in the world such that if the world were like that, my perceptual system would be telling me the truth. Okay. Now let's do the same with
the chatbot. So suppose we have a multimodal chatbot. It has a robot arm that can point, and it has a camera, and it can talk, obviously. And we train it up, and then we put an object in front of it, and we say point at the object. No problem, it points at the object. Then when it's not looking, we put a prism in front of the camera lens. And then we put an object in front of it, and say point at the object, and it points over there. And we say no. That's not what the
object is. The object's actually straight in front of you, but I put a prism in front of your lens. And the chatbot says, oh, I see. The prism bent the light rays, so the object's actually there, but I had the subjective experience it was there. Now if it says that, it's using the word subjective experience exactly like we use it. And therefore I say, multimodal chatbots can already have subjective experiences. If you mess up their perceptual system, they'll think the world's one way, and it'll actually be another way. And in order to tell you how
they think the world is, they'll say, well, they had the subjective experience that the world was like this. Okay, so they already have subjective experience. Now you become a lot less confident about the other things. Consciousness is obviously more complicated, because people vary a lot on what they think it means, but it's got a self-reflexive element to it, a self-awareness element, which makes it more complicated. But once you've established that they have subjective experience, I think you can give up on the idea that there's something about us that they will never
have. And that makes me feel a lot less safe. So do you think there's a difference between consciousness and self-consciousness? You said consciousness has a self-reflexiveness to it, but some consciousness does. Yes. So philosophers have talked a lot about this, and at present I don't want to get into that. I just want to get the thin end of the wedge in there and say they have subjective experience. So for something to have subjective experience, does that not imply that it's conscious? Like who is the subjective experience happening to? Where is the subjective experience being felt?
Okay. Exactly. So you say, where's the subjective experience being felt? That involves having a particular model of subjective experience that somehow, if you ask philosophers, when I say I've got the subjective experience of little pink elephants floating in front of me, they'll say, and you say, where are those little pink elephants? They say they're in your mind. And you say, well, what are they made of? And philosophers have told you they're made of qualia. They're made of pink qualia, and elephant qualia, and floating qualia, and not that big qualia, and right way up qualia, all
stuck together with qualia glue. That's what many philosophers think. And that's because they made a linguistic mistake. They think the words experience of work like the words photograph of. If I say I've got a photograph of little pink elephants, you can very reasonably ask, well, where is the photograph? And what's the photograph made of? And people think that if I say I have an experience of little pink elephants, you can ask, well, where is the experience? Well, it's in my mind. And what's it made of? It's made of qualia. But that's just nonsense. That's because
you thought the words experience of worked the same way as photograph of, and they don't. Experience of, the way that works, or subjective experience of, is the subjective says I don't believe it, and the experience of is really an indicator that I'm going to tell you about my perceptual system by telling you about a hypothetical state of the world. That's how that language works. It's not referring to something in an inner theater. When I hear the word perception, it sounds like an inner theater as well. Like if you say, I see something in my perceptual
system, it sounds like there's this you that's seeing something on a perceptual system that's being fed to you. So that's the wrong model. Yes. You don't see your percepts, you have your percepts. So photons come in, your brain does a whole bunch of processing, you presumably get some internal representation of what's out there in the world, but you don't see the internal representation. We call that internal representation a percept. You don't see that, you have that. Having that is seeing. People are forever trying to think that you have the external world, something comes into the
inner theater, and then you look at what's in the inner theater. It doesn't work like that. There is a psychologist or neurologist who thought that the pons had to do with consciousness, and then recently self-consciousness has to do with the default mode network. Okay, is there something, is there a part of an AI system that has to do with self-consciousness? And also, help me understand even my own terminology when I'm saying the AI system. Are we saying when it's running on the GPU? Are we saying it's the algorithm? What is the AI system that is
conscious or that has subjective experience? So where is it? I guess that there's going to be some hardware that's running it, and it's going to be that system that's going to be conscious. If something's going to be conscious, software by itself, it has to be running on something, I would have thought to be conscious. The Economist has actually spoken to and covered Geoffrey Hinton several times before. Links are in the description. As you know, on Theories of Everything, we delve into some of the most reality-spiraling concepts from theoretical physics and consciousness to AI and emerging
technologies. To stay informed in an ever-evolving landscape, I see The Economist as a wellspring of insightful analysis and in-depth reporting on the various topics we explore here and beyond. The Economist's commitment to rigorous journalism means you get a clear picture of the world's most significant developments, whether it's in scientific innovation or the shifting tectonic plates of global politics. The Economist provides comprehensive coverage that goes beyond the headlines. What sets The Economist apart is their ability to make complex issues accessible and engaging, much like we strive to do in this podcast. If you're passionate about expanding
your knowledge and gaining a deeper understanding of the forces that shape our world, then I highly recommend subscribing to The Economist. It's an investment into intellectual growth, one that you won't regret. As a listener of TOE, you get a special 20% off discount. Now you can enjoy The Economist and all it has to offer for less. Head over to their website, www.economist.com slash TOE, T-O-E, to get started. Thanks for tuning in. And now, back to our explorations of the mysteries of the universe. Software by itself has to be running on
something, I would have thought, to be conscious. What I'm asking is, just like before, there was the pons that we started with. I think a good way to think about it is to think about what AI systems are going to be like when they're embodied. And we're going to get there quite soon because people are busy trying to build battle robots, which aren't going to be very nice things. But if a battle robot has figured out where you're going to be late at night, that you're going to be in some dark alley by yourself late
at night, and it's decided to creep up behind you when you're least expecting it and shoot you in the back of the head, it's perfectly reasonable to talk about what the battle robot believes. And you talk about what the battle robot believes in the same way as you talk about what a person believes. The battle robot might think that if it makes a noise, you'll turn around and see it. And it might really think that, in just the way people think it. It might have intentions. It might be intending to creep up behind you and
shoot you. So, I think what's going to happen is our reluctance to use words like believe and intend and think is going to disappear once these things are embodied. And already it's disappeared to quite a large extent. So, if I'm having a conversation with the chatbot, and it starts recommending to me things that don't make any sense, then after a while I figure the chatbot must think I'm a teenage girl. That's why it gives me all these things about makeup and clothes and certain pop groups, boy bands, whatever. And
so I asked the chatbot, what demographic do you think I am? And it says, I think you're a teenage girl. When it says, I think you're a teenage girl, we really don't have any doubt that that's what it thinks, right? In normal language, you say, okay, it thought I was a teenage girl. And you wouldn't say, you don't really believe that, okay, it's a bunch of software or neural nets, and it acts as if it thinks I'm a teenage girl. You don't say that. It thinks you're a teenage girl. We already use thinks when we're
dealing with these systems, even if they don't have hardware associated with them or obvious hardware associated with them. We already use words like thinks and believes. So we're already attributing mental states to them. It's just we have a funny model of a mental state. So we can attribute mental states to them, but have a completely incorrect model of what it is to have a mental state. We think of this inner theater that's the mind and so on. That's not what having a mental state is. How much of your concern about AI and its direction would go away
if they were not conscious or did not have subjective experience? Is that relevant to it? Does that just accelerate the catastrophe? I think the importance of that is that it makes most people feel relatively safe, makes most people think we've got something they haven't got or never will have. And that makes us feel much safer, much more special. We're not special, and we're not safe. We're certainly not safe just because we have subjective experience and they don't. But I think the real problem here is not so much a scientific problem as a
philosophical problem. The people misunderstand what is meant by having a subjective experience. I want to give you an example to show that you can use words. You've got a science background, so you probably think you know what the words horizontal and vertical mean. I mean, that's not a problem, right? It's obvious what they mean. And if I show you something, that one's vertical and that one's horizontal, right? Not difficult. So I'll now convince you you actually had a wrong model of how they work. Not totally wrong, but there were significant problems, significant incorrectnesses in your
model of the terms horizontal and vertical. OK, here we go. Suppose in my hands I have a whole bunch of little aluminium rods, a large number, and I throw them up in the air. And they tumble and turn and bump into each other. Then suddenly I freeze time. And I ask you, are there more that are within one degree of vertical or more within one degree of horizontal, or is it about the same? Say it's approximately the same. Right, that's what most people say. Approximately the same. And they're surprised when I tell you there's about
114 times as many that are within one degree of horizontal. That's kind of surprising, right? How did that happen? Well, that's vertical, and this is vertical too. One degree of rotational freedom. That's horizontal, and this is horizontal too. But so is this. So horizontal has two degrees of freedom. Vertical only has one degree of freedom. So, here's something you didn't know about horizontal and vertical. Vertical's very special, and horizontal's two a penny. That's a bit of a surprise to you. Obviously it's not like that in 2D, but in 3D they're very different. And one's very
special and the other isn't. So why didn't you know that? Well, I'm going to give you another problem. Suppose in my hands I have a whole bunch of little aluminum disks. And I throw them all up in the air, and they tumble and turn and bump into each other. And suddenly I freeze time. Are there more that are within one degree of vertical or more within one degree of horizontal, or is it about the same? No, there's about 114 times as many that are within one degree of vertical. Interesting. So that's vertical, and this is
vertical, and this is vertical. This is horizontal, and this is horizontal, but it's only got one degree of freedom. So, for planes, horizontal is very special and vertical's two a penny. And for lines, vertical's very special and horizontal's two a penny. So that's just a little example of, you have a sort of meta-theory of how the words work. And that meta-theory can be wrong, even though you use the words correctly. And that's what I'm saying about all these mental state terms, terms like subjective experience of. You can use them correctly, and you can understand what
other people mean when they use them. But you have a meta-theory of how they work, which is this inner theater with things made of qualia in them, and that's just complete junk. So what is it then about a theory of percepts or subjective experience that makes it correct, in order for you to say, well, I'm more on the correct track than most people think? You think that these subjective experiences have to be somewhere and they have to be made of something. And neither of those things is true.
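Incidentally, the 114-to-1 figure from the rods-and-disks puzzle a little earlier can be checked numerically. A short sketch: sample random orientations in 3D and count how many land within one degree of horizontal versus vertical; the closed-form ratio for rods is sin(1°) / (1 - cos(1°)), about 114.6.

```python
import numpy as np

rng = np.random.default_rng(0)
deg1 = np.radians(1.0)

# Closed form for rods (lines): a band within 1 degree of the horizontal plane
# versus two small caps within 1 degree of the vertical axis.
print("closed form:", np.sin(deg1) / (1.0 - np.cos(deg1)))     # ~114.6

# Monte Carlo check: random unit vectors as the axes of tumbling rods.
v = rng.normal(size=(2_000_000, 3))
v /= np.linalg.norm(v, axis=1, keepdims=True)
tilt = np.arcsin(np.abs(v[:, 2]))        # angle of the rod above the horizontal plane

near_horizontal = np.count_nonzero(tilt < deg1)
near_vertical = np.count_nonzero(tilt > np.pi / 2 - deg1)
print("sampled ratio:", near_horizontal / near_vertical)

# For disks, swap in the disk's normal for the rod's axis: a horizontal disk has
# a vertical normal, so the same ratio reappears with the roles reversed.
```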
When I say subjective experience, that's an indicator that I'm now about to talk about a hypothetical state of the world that isn't true. So it isn't anywhere, it's a hypothetical state of the world. But notice the big difference between saying, I'm going to talk about this something that's just hypothetical and isn't actually anywhere. But if it was somewhere, it'd be out there in the world. Versus, I'm talking about something that's in an inner theater made of funny stuff. Those are two completely different models. And the model that is in an inner theater made of funny
stuff, I think is just completely wrong, even though it's a model we almost all have. What about someone like your fellow Nobel Prize winner, Roger Penrose, who we were talking about? Let me tell you a story about Roger Penrose. A long time ago, he was invited to come to the University of Toronto and give a talk about his new book, The Emperor's New Mind. And I got invited to introduce him. The dean called me up and said, would you introduce Roger Penrose? And I said, sure. And she said, oh, thank you very much. And
I said, ah, but before you agree, you should know what I'll say. And she said, what will you say? And I said, I will say Roger Penrose is a brilliant mathematical physicist who's made huge contributions to mathematical physics. And what he's going to talk about today is complete junk. So that's my view of Roger Penrose's view of consciousness. And in particular, he makes a crazy mistake, which is, now I have to think how to say this carefully, because obviously people will be criticizing it. The issue is, can mathematicians intuit things are true that can't be
proved to be true? And that would be very worrying if mathematicians' intuition was always right. If they could do that correctly every time, that'd be really worrying and would sort of mean something funny was going on. But they can't. Mathematicians have intuitions, and they're sometimes right and sometimes wrong. So it doesn't really prove anything. It doesn't prove that you need quantum mechanics to explain how mathematicians work. And I don't see any reason for needing quantum mechanics to explain things like consciousness. AI is doing a pretty good job so far. We've produced these chatbots. These chatbots,
as I just argued, if you give them a camera, can have subjective experiences. There's nothing about people that requires quantum mechanics to explain it. Is there something about the Penrose argument that relies on mathematicians 100% of the time intuiting correctly? It's only if they could intuit correctly. If they're guessing, that's fine. If they have a way of always getting it right, the answer to these questions that can't be derived within the system, that can't be answered within the system, then that would be a problem. But they don't. They make mistakes. Why don't you outline what
his argument is, Penrose's? I don't want to. I mean, the argument, as I understood it, the argument is, there's two things going on. One is, he says, classical computation isn't going to explain consciousness. I think that's a big mistake, and I think that's based on a funny notion of what consciousness is. That's not right. A misunderstanding of what consciousness is. A second is that mathematicians can intuit the truth of things that can't be proved, and that shows there's something funny going on. That doesn't show there's something funny going on unless they intuit it correctly every
time. So, I'm sure you've heard of the Chinese room experiment. I have. What are your thoughts on that? And feel free to briefly outline it for the audience. Okay. So, back in about 1990, I got invited to be on a TV program with John Searle, and I called up my friend Dan Dennett and said, should I do this? And he said, well, you know, he will try and make you look stupid, but if you do it, don't talk about the Chinese room argument. So, I agreed to be on the program with Searle, and the very
first thing he said, in an hour-long interview, was, so Geoffrey Hinton is a connectionist, so of course he has no problems with the Chinese room argument. He's a connectionist. A connectionist? And so he then says, so he has no problems with the Chinese room argument, which was, we'd agreed not to talk about it, and he was saying something that was completely false. I've got a lot of problems with the Chinese room argument. I think it's nonsense. And I think it's a deliberately deceptive argument. I think it's a dishonest argument.
What you're doing is you're saying, there's this room full of Chinese people, I think. There's this room where he wants you to imagine, yeah, we could make a system made of Chinese people who are sending messages to each other in Chinese, and as a result of all these messages that are sent around in Chinese, you can send in an English sentence, they'll send messages to each other in Chinese. This is just my memory of the argument. And they'll be able to answer this English sentence, even though none of the people sending these messages around understood
a word of English, because they're just running a program. But they do it by sending messages in Chinese to each other. What's dishonest about the argument is, he wants you to think that, to get confused between the whole system and the individual Chinese people sending messages. So the whole system understands English. The individual Chinese people sending messages don't. He wants you to think that that whole system can't possibly understand English, because the people inside don't understand English. But that's nonsense. The system understands English. That's what I think's wrong with the argument. Now speaking about China,
something that many AI researchers didn't predict was that China would catch up with the West in terms of AI development. So how do you feel about that, and what are the consequences? I don't think they're quite caught up yet. They're very close, though. America's going to slow them down a bit by trying to prevent them having the latest NVIDIA chips. NVIDIA, maybe, can find workarounds. And what that's going to do, if the embargo is effective, it's just going to cause the Chinese to develop their own technology. And they'll be a few years behind, but they'll
catch up. They've got better STEM education than the US. So they've got more people who are better educated. I think they're going to catch up. Do you know who Marc Andreessen is? He thinks. Yeah, I disagree with him about more or less everything, I think. Okay, how about, let's pick one. So he had a comment that said, I don't understand how you're going to lock this down. He was speaking to someone from the government about how the government was saying, well, if AI development gets out of hand, we can lock it down, quote unquote. Right.
How can you do that? Because the math for AI is out there, it's being taught everywhere. To which the officials responded, well, during the Cold War, we classified entire areas of physics and took them out of the research community. Entire branches of physics basically went dark and didn't proceed. If we decide that we need to, we're going to do the same to the math underneath AI. Forget it. I agree with Marc Andreessen on that. There's no way you're going to be able to. Now, it could have been, for example, that Google in 2017 could have
decided not to publish Transformers. And it might have been several years before anybody else came up with the same idea. So they could slow it down by a few years maybe. But I don't think there's much hope in, I mean, just think what it would take to prevent the information getting out there. It'd be very hard. So you don't think the government can classify some, what would it be, linear algebra? No. I mean, they could make it harder to share certain kinds of information, which would slow things down a little bit. But I just think
it's implausible that they could take AI ideas that really work well and by not sharing them, prevent anybody else creating them. What happens with new ideas is that there's a kind of, there's a zeitgeist. And within that zeitgeist, it's possible to have new ideas. And it often happens that one person has a new idea, and at more or less the same time and quite independently, except they're sharing the same zeitgeist, someone else has a slightly different version of the same idea. This is going on all the time. Unless you can get rid of the whole
zeitgeist, you're not going to be able to have new ideas and keep them secret. Because a few years later, somebody else is going to come up with the same idea. What about decentralizing AI? So that's a huge topic. Some people would say, well, that's giving the atomic bomb to any person who wants access to an atomic bomb. Yes, I say that. And then there are other people who say, well, that's what is required in order to create the guardrails against the Skynet scenario, is where we have multiple different decentralized agents or AIs. Sorry, there's two
notions of decentralized. So let's talk about sharing weights. So if you ask, why doesn't Alabama have a bomb? It's because you need fissile material, and it's hard to get fissile material. It takes a lot of time and energy to produce the fissile material. Once you have the fissile material, it's much easier to make a bomb. And so the government clearly doesn't want fissile material to be out there. You can't go on eBay and buy some fissile material. That's why we don't have lots of little atomic bombs belonging to tiny states. So if you ask, what's
the equivalent for these big chatbots? The equivalent is a foundation model. That's been trained, maybe using a hundred million dollars, maybe a billion dollars. It's been trained on lots of data. It's got a huge amount of competence. If you release the weights of that model, you can now fine tune it to all sorts of bad things. So I think it's crazy to release the weights of these big models, because they are our main constraint on bad actors. And Meta has now done it, and other people have followed suit. So it's too late now. The cat's
out of the bag. But it was a crazy move. Speaking about foundation models, much of our latest AI boom is because of Transformer, the Transformer architecture. Do you see some other large breakthrough, either some paradigm or some other architecture on the horizon? Okay, I think there will be other large breakthroughs of comparable magnitude. Because that's just how science works. I don't know what they are. If I knew what they were, I'd be doing them. Would you, though? Well, I'm too old now. I have students doing them. What I mean is, how do you reconcile your
past contributions to this field and you have your current woes? So would you be contributing to it? So here's the issue. AI is very good for lots of things that will benefit humanity a whole lot. Like better healthcare, fighting climate change, better materials, things like room temperature superconductors, where AI may well be involved in actually discovering them. Assuming there are some out there. So there's so many things, good uses of AI, that I don't think the development is going to be stopped. So I don't think it's sensible to say, we should be slowing down AI,
slowing down the development. It's not going to happen anyway because there's so much competition. And it's just not feasible. It might be the best thing for humanity, but it's not going to happen. What we should be doing is, as it's being developed, trying to figure out how to keep it safe. So it's one thing to say that this is a boulder that no one can stop. It's another thing to also be responsible for pushing the boulder as well. So do you actually feel like, if there was a breakthrough on the horizon that you see, and
you're like Ray Kurzweil, you have this great predictive quality, that you would actually put your coins into it and work on it? As long as that was combined with working on how to keep it safe, yes. I feel I didn't realize soon enough how dangerous it was going to be. I wish I'd realized sooner. There's this quote from Einstein about the atomic bomb. He said, I would have burned my hands had I known that what I was developing would lead to the atomic bomb. Do you feel similar? I don't actually, no. Maybe I should. I don't kind
of regret what I've done. I regret the fact it may lead to bad things. But I don't think back and think, oh, I wish I'd never done that. I think AI is going to be developed. I don't think we have much choice about that, just because of the competition between countries and between companies. So I think we should focus our efforts on trying to develop it safely. And that's very different from trying to slow down the development. In addition to alignment, what does safe development of AI mean? Okay. Figuring out how to deal with the
short-term risks. And there's many of those, and they all have different solutions. So things like lethal autonomous weapons. And to deal with that, you need things like Geneva Conventions. And we're not going to get those until nasty things have happened. You've got fake videos and images corrupting elections, particularly if they're targeted at particular people. To deal with that, I think you need a much better system for establishing the provenance of a video or an image. Initially, I thought you should mark them as fake. You should insist they're marked as fake. I don't think there's much
future in that anymore. I think you're better off insisting that there's a provenance associated with things and your browser can check the provenance. Just as already with email, it says, don't trust this one, I can't establish it. It should be like that. There's discrimination and bias where you can freeze the weights of a system and measure its bias and then somewhat correct it. You'll never correct it perfectly, but somewhat correct it. So you can make the system less biased than the data it was trained on. And so you can replace people by a less biased
system. It'll never be unbiased. But if you just keep replacing systems by less biased systems, that's called gradient descent, things will get less biased. So I'm not so worried about that one. Possibly because I'm an old white man. There's jobs. We don't really know what to do about that. So you don't get many people digging ditches anymore because a backhoe is just much better at digging ditches than a person. It's going to be the same for almost all mundane intellectual labor. An AI system is going to make a much better paralegal than a person. That's
kind of really scary because of what it's going to do to society. It's going to cause the rich to get richer because we're going to get big increases in productivity. And where's that wealth going to go to? It's going to go to rich people. And poor people are going to get poorer. I don't know what to do about that. Universal basic income helps. Stops them starving. But it doesn't really solve the problem because people's dignity is gone if they don't have a job. Earlier we were talking about perception and then perception was associated with subjective
qualities. Maybe there's a wrong model there. But anyhow, whenever we're speaking about percepts are we speaking about perception and thus we're speaking about a subjective experience associated with it? No, when you use the word subjective experience you're indicating that you're about to talk about a hypothetical state of the real world. Not some funny internal thing but a hypothetical state of the real world. These funny internal things don't exist. There are no qualia. There's nothing made of qualia. There's just hypothetical states of the world as a way of explaining how your perceptual system is lying to
you. And that's what we mean when we say subjective experience is these hypothetical states of the world. That's how we actually use it. Is it all prediction, or no? Oh, getting the issue of prediction into it is sort of a red herring. It's a different direction altogether. The thing you have to get in your head is that there isn't a funny kind of thing called a subjective experience that's made of some funny mental stuff. There's just a technique of talking about how your perceptual system goes wrong, which is to say what the world would have had to
have been like for it to be telling the truth. And that's what we're indicating. When we use the phrase subjective experience we indicate that that's the game we're playing. We're playing the game of telling you about hypothetical states of the world in order to explain how my perceptual system's going wrong. A subjective experience is not a thing. And can anything have a perceptual system? Can a book have a perceptual system? What defines a perceptual system? Okay, to have a perceptual system you'd have thought you needed something that can have some internal representation of something going
on in some external world. That's what I'd have thought. So like, a toad gets light in its eyes and it snaps up flies and it's clearly got a perceptual system, right? Because it sees where the flies are. I don't think a book has a perceptual system because it's not sensing the world and having an internal representation. Hi everyone, hope you're enjoying today's episode. If you're hungry for deeper dives into physics, AI, consciousness, philosophy, along with my personal reflections, you'll find it all on my substack. Subscribers get first access to new episodes, new posts as well,
behind-the-scenes insights, and the chance to be a part of a thriving community of like-minded pilgrimers. By joining, you'll directly be supporting my work and helping keep these conversations at the cutting edge. So click the link on screen here, hit subscribe, and let's keep pushing the boundaries of knowledge together. Thank you and enjoy the show. Just so you know, if you're listening, it's c-u-r-t-j-a-i-m-u-n-g-a-l.org, CURTJAIMUNGAL.org. Because it's not sensing the world and having an internal representation. What would be the difference between intelligence and rationality? Okay, so there's
various kinds of intelligence. So you wouldn't accuse a cat of being rational, but a cat could be pretty intelligent. In particular, when you talk about rationality, you typically mean logical reasoning. And that's very different from the way we do most things, which is intuitive reasoning. So a nice analogy would be if you take something like AlphaZero that plays chess. I use chess because I understand it better than Go. It'll have something that can evaluate a board position and say, how good is that for me? It'll have something that can look at a board position and
say, what's a plausible move for me? And then it'll have what's called Monte Carlo rollout, where it's, you know, if I go here and he goes there and I go here, oh dear, that's bad. The Monte Carlo rollout is like reasoning. The neural nets that just say, that would be a good move, or this is a bad position for me, they're like intuitive reasoning. And we do most things by intuitive reasoning. Originally in AI, they wanted to do everything by using reasoning and logical reasoning. And that was a huge mistake and they couldn't get things
done. They didn't have a way of dealing with things like analogy. What neural nets are good at is intuitive reasoning. So what's happened in the last 20 years is we've used neural nets to model human intuition rather than human reasoning, and we've got much further that way. Is it the case that the more intelligent you are, the more moral you are? I read something about that recently that suggested it was, but of course I don't know the provenance of that, so I don't know whether to believe it. I'm not convinced that's true. Here's some evidence.
Elon Musk is clearly very intelligent. I wouldn't accuse him of being very moral. And you can be extremely moral and not terribly intelligent? I think so, yes. That's my guess. Well, you said that you weren't entirely sure, so what's the evidence to the contrary? What's the evidence that as you increase in intelligence, your morality increases proportionally somehow? Well, I mean, I just have no idea whether there's a correlation at all. I see. I think there's highly intelligent people who are very bad, and there's highly intelligent people who are very good. What does it mean to
understand? Okay, that's a question I'm happy to answer. So again, I think most people have a wrong model of what understanding is. If you look at these large language models, there's many people, particularly people from the Chomsky School of Linguistics, who say they don't really understand what they're saying. They just are using statistical correlations to predict the next word. If you look at the first models like that, I think I probably made the very first language model that used backpropagation to train the weights to predict the next word. So you backpropagate the error in predicting
the next word, and the point of the model was to show how you could learn meanings for words, or to put it another way, to show how you could take a string of words and learn to convert the words into feature vectors and interactions between feature vectors, and that's what understanding is. Understanding a string of words is converting the words into feature vectors, so that you can use interactions between features to do things like predict the next word, but also to do other things. So you have a sentence which is a string of symbols. Let's
not talk about word fragments. I know these transformers use word fragments, but let's suppose they used whole words. It's easier to talk about. It would just make them work a bit worse, that's all. They'd still work. So I give you a string of words, some text. The meaning isn't in the text. What you do is you convert those words into feature vectors, and you've learned how feature vectors in context, how the features should interact with each other to do things like disambiguate the meanings of ambiguous words, and once you've associated features with those words, that
is understanding. That's what understanding is, and that's what understanding is both in a large language model and in a person. In that sense, we understand in the same basic way they understand. It's not that when we understand, there's some magical internal stuff called understanding. I'm always trying to get rid of magical internal stuff in order to explain how things work. We're able, using our big neural networks, to associate features with these symbols in such a way that the features all fit together nicely. So here's an analogy I quite like. If you want to model 3D
shapes and you're not too worried about getting the surface just right, you can use Lego blocks. These are big shapes, like a car. You can make something the same shape as a Porsche with Lego blocks. The surface won't be right, but it'll have the same space occupancy. So Lego blocks are a kind of universal way of modeling 3D structures, and you don't need many different kinds of Lego block. Now, think of words as like Lego blocks, except that there's a whole bunch of different Lego blocks with different names. What's more, each Lego block has some
flexibility to it. It's not a rigid shape like a piece of Lego. It can change in various directions. It's not completely free. The name tells you something about how it can change, but there's some flexibility to it. Sometimes there'll be a name and it's two completely different shapes it can have, but it can't have any old shape. So what we've invented is a system for modeling much more complicated things than the 3D distribution of matter, which uses high-dimensional Lego blocks. So the Lego blocks with, say, a thousand dimensions. And if you're a mathematician, you know
thousand-dimensional spaces are very weird things, and they have some flexibility. And I give you the names of some of these Lego blocks, and each of which is this thousand-dimensional underlying shape, and they all deform to fit together nicely, and that's understanding. So that explains how you can learn the meaning of a word from one sentence without any definitions. So if, for example, I say, she scrummed him with the frying pan, you have a sense of what scrummed means. It's partly phonetic, but because the ed on the end tells you it's a verb. But you think
it probably means she hit him over the head with it or something like that. It could mean something different. She could have impressed him with it. You know, she cooked such good omelets that that really impressed him. It could mean she impressed him, but probably it means she hit him over the head or something like that, something aggressive like that. And you get that from just one sentence. And nobody's telling you this is a definition of scrummed. It's just that all the other Lego blocks for the other words, she and him, and all those other
words, adopt shapes that fit together nicely, leaving a hole. And that hole is the shape you need for scrummed. So now that's giving you the shape that scrummed should be. So that's how I think of language. It's a modeling system we've invented where there's some flexibility in each of these blocks. I give you a bunch of blocks and you have to figure out how to fit them together. But because they all have names, I can tell other people about what my model is. I can give them the names. And if they share enough knowledge with
me, they can then figure out how they all fit together. So are you suggesting, help the audience understand what... I think that's what's going on in our heads, and that's what's going on in these large language models. So they work the same as us. And that means they really do understand. One of Chomsky's counterarguments to the claim that the language models work the same way as us is that we have sparse input for our understanding. We don't have to feed the internet to ourselves. So what do you say to that? It's true that the language models are trained on much
more data. They are less statistically efficient than us. However, when children learn language, they don't just learn it by listening to the radio. They learn it by being in the real world and interacting with things in the world. And you need far less input if you train a multimodal model. It doesn't need as much language. And the more, if you give it a robot arm and a camera and it's interacting with the world, it needs a lot less language. So that's one argument. It still probably needs more than a person. The other argument goes like
this. The backpropagation training algorithm is really good at packing a lot of knowledge into a few weights, where a few is a trillion, if you give it a lot of experience. So it's good at taking this huge amount of experience, sucking the knowledge out and packing it into a relatively small number of weights like a trillion. That's not the problem we have. We have the opposite problem. We've got a huge number of weights like a hundred trillion, but we only live for two billion seconds. And so we don't have much experience. So we need to
be optimized for making the best use you can of the very limited amount of experience you get, which says we're probably not using backpropagation. We're probably using some other learning algorithm. And in that sense, Chomsky may be right that we learn based on less knowledge. But what we learn is how to associate features with words and how these features should interact. We want to continue to talk about learning and research. Jay McClelland said that in your meetings with your graduate students and other researchers, you tend to not write equations on the board, unlike in other
machine learning research meetings. Instead, you draw pictures and you gesticulate. So what's the significance of this and what are the pros and cons of this approach? Okay, so I think intuitively and do the math afterwards. Some people think with equations and derive things and then get the intuitions afterwards. There's some people who are very good at both, like David MacKay, who's very good intuitively and also very good at math. So they're just different ways of thinking, but I've always been much better at thinking in terms of spatial things rather than in terms of equations. Can
you tell us about your undergraduate experience, how you changed programs and why or what led you to do so? So it's a long story, but I started off at Cambridge doing physics and chemistry and crystalline state, which was x-ray crystallography essentially. And after a month, I got fed up. It's the first time I'd lived away from home and the work was too hard. So I quit and reapplied to do architecture. And I got back in and after a day of that, I decided I'd never be any good at architecture. So I went back to science.
But then I did physics and chemistry and physiology, and I really liked the physiology. And after a year of that, I decided I wanted to know more about the mind. And I thought philosophy would teach me that. So I quit science and did philosophy for a year. And I learned some stuff about Wittgenstein and Wittgenstein's opinions. But on the whole, the main thing that happened was I developed antibodies to philosophy. Mainly because it's all talk. They don't have an independent way of judging whether a theory is good. They don't have an experiment. It's good if
it sounds good. And that was unsatisfactory for me. So then I did psychology to find out more about the mind. And I found that very annoying. Because what psychologists would do is have a really stupid simple theory and have very well-designed experiments to see whether this theory was true or false. And you could tell before you started the theory was hopeless. So what's the point of the experiments? That's what most of psychology was. And so then I went into AI. And there we did computer simulations. And I was much happier doing that. When you became
a professor, and to this day, how is it that you select research problems? Okay. There's no reason why I should really know how I do it. That's one of the most sophisticated things people do. And I can pontificate about how I think I might do it. But you shouldn't necessarily believe me. Feel free to confabulate, like LLMs. One thing I think I do is this. Look for a place where you think everybody's doing it wrong. You just have an intuition everybody's doing it wrong. And see if you can figure out how to do it better.
And normally what you'll discover is eventually you discover why people are doing it the way they're doing it. And that your method that you thought was going to be better isn't better. But just occasionally, like if you think everybody's trying to use logic to understand intelligence, and we should be using neural networks. And the core problem of understanding intelligence is how the connection strengths in a neural network adapt. Just occasionally, you'll turn out to be right. And until you can see why your intuition is wrong, and the standard way of doing it is right, stick
with your intuition. That's the way you'll do radically new things. And I have an argument I like, which is, if you have good intuitions, you should clearly stick with your intuitions. If you have bad intuitions, it doesn't really matter what you do, so you might as well stick with your intuitions. Now, what is it about the intuitions of Ray Kurzweil that ended up making a variety of correct predictions when even I was following him in the early 2000s and thinking there's no way half of these will be correct. And time and time again, he's correct.
Well, if you read his books, that's what you conclude. I suspect there's a number of things he said that he doesn't mention so much, which weren't correct. But the main thing he said, as far as I can tell, his main point is that computers are getting faster, they'll continue to get faster. And as computers get faster, we'll be able to do more things. And using that argument, he's been roughly right about the point at which computers will get as smart as people. Do you have any similar predictions that your colleagues disagree with, but your intuition
says you're on the right track? Now, we've talked about AI and alignment and so on, but perhaps not that, because that's covered ground. I guess the main one is to do with what is subjective experience, what's consciousness and so on, where I think most people just have a totally wrong model of what mental states are. That's more philosophical now. In terms of technical things, I still believe that fast weights are going to be very important. So synapses in the brain adapt at many different timescales. We don't use that in most of the AI models. And
the reason we don't use it is because you want to have many different training cases that use exactly the same weights. And that's so you can do matrix-matrix multiplies, which are efficient. If you have weights that adapt rapidly, then for each training case, you'll have different weights because they'll have rapidly adapted. So what I believe in is a kind of overlay of fast weights and slow weights. The slow weights are adapting as per usual, but on top of that, there's fast weights which are adapting rapidly. As soon as you do that, you get all sorts of
nice extra properties, but it becomes less efficient on our current computers. It would be fine if we were running things on analog computers. So I think eventually we're going to have to use fast weights because they lead to all sorts of nice properties. But that's currently a big difference between brains and the hardware we have. You also talked about how, publicly, how you're slightly manic-depressive, in that you have large periods of being extremely self-critical and then large periods of having extreme self-confidence. And then this has helped you with your creativity. Shorter periods of self-confidence. Okay,
let's hear about that, please. So when I get a new idea, I get very excited about it. And I can actually weigh my ideas. So sometimes I have one-pound ideas, but sometimes I have like five-pound ideas. And so what happens is I get this new idea, I get very excited, and I don't have time to eat. So my weight goes down. Oh, I see. And so I can measure sort of how exciting I found this idea by how much my weight went down. And, yes, really good ideas, I lose about five pounds. Do you have
a sense of carrying the torch of your great-great-grandfather, Boole? No, not really. I mean, my father talked about this kind of inheritance, and it's a fun thing to talk about. I have a sense of very high expectations that came from my father. They didn't come from George Boole, they came from my father. High expectations for yourself? My academic success, yes. Do you have a successor that, in your mind, you're passing the torch to? Not exactly. I don't think, I don't want to impose that on anybody else. Why'd you say not exactly instead of no? I
have a couple of nephews who are very good at quantitative stuff. I see. But you don't want to put that pressure on them? No. Speaking of pressure, when you left Google, you made some public statements about your concern regarding AI safety. What was the most difficult part about making that break and voicing your anxieties to the world? I don't think it was difficult. I wouldn't say it was difficult. It was just, I was 75, right? So it's not like I wanted to stay at Google and carry on working, but I felt I couldn't because of
AI safety. I was ready to retire anyway. I wasn't so good at doing research anymore; I kept forgetting what the variables stood for. Yes. And so it was time to retire. And I thought, as I went out the door, I could just mention these AI safety issues. I wasn't quite expecting what happened next. Now, you
also mentioned this in another interview: how, as you're now 75, 76, it keeps changing. It keeps changing every year, huh? 77. Yeah, okay. You mentioned publicly that, yes, you keep forgetting the variable names as you're programming, and so you think you're going to move to philosophy as you get older, which is what we've been talking about quite a lot. Yes, yes. But it's basically the philosophy I was doing when I was about 20. I'm going back to the insights I had when I was doing philosophy and exploring those further.
Got it. So what's on the horizon? Um, old age. I think the world's going to change a whole lot fairly quickly because of AI. And some of it's going to be very good and some of it's going to be very bad. And we need to do what we can to mitigate the bad consequences. And I think what I can still do usefully is encourage young researchers to work on the safety issues. So that's what I've been doing quite a lot of. Safety and within that, there's something called alignment. Now we as people don't have alignment.
So do you see that we could solve the alignment problem? I kind of agree with that statement. Alignment is like asking you to find a line that's parallel to two lines that are at right angles to each other. Yeah. A lot of people talk very naively about alignment, as if there's some single human good. Well, what some people think is good, other people think is bad. You see that a lot in the Middle East. So alignment is a very tricky issue. Alignment with whom? Now, you just were speaking to young AI researchers. Now you're speaking to young math researchers, young
philosophers, young students coming into whatever new STEM field, even though philosophy is not a STEM field. What is your advice? Well, I mean, one piece of advice is that a lot of the excitement in scientific research is now around neural networks, which are now called AI. In fact, the physicists sort of now want to say that's physics. Oh, who was it that got a Nobel Prize in physics for their work in neural nets? You can't remember? I don't remember, but anyhow, continue. You're serious? No, I'm joking. Right, I thought you were joking. I'm a great
actor, huh? Right. So yeah, clearly the Nobel committees recognized that a lot of the excitement in science is now in AI. And so for both physics and chemistry, the Nobel Prizes were awarded to people doing AI or using AI. So I guess my advice to young researchers would be that's where a lot of the excitement is. But I think there are also other areas where there's going to be very important progress. Like, if we could get room-temperature superconductors, that would make it easy to use solar power generated a long way away, things like that. So that's not
the only area that's exciting. Nanomaterials are very exciting, but they will use AI. So I think probably the most exciting areas of science will at least use AI tools. Now, we just alluded to this. Now let's make an explicit reference. You won the Nobel Prize last year in physics for your work in AI and neural nets, so. Right. How do you feel about that? What was it like hearing the news? And in physics, do you consider yourself a physicist? What does this mean? No, I'm not a physicist. I was quite good at
physics when I did it in my first year at university. I got a first in physics based on being able to do things intuitively, but I was never very good at the math. And I gave up physics because I wasn't good enough at math. I think if I'd been better at math, I'd have stayed in physics and I wouldn't have got a Nobel Prize. So probably it was lucky I wasn't very good at math. How do I feel about it? I still feel somewhat confused about it. The main problem is that the work I did
on neural nets that related closely to physics was a learning algorithm called Boltzmann machines that I developed with Terry Sejnowski. And it used statistical physics in a nice way. So I can see why physicists would claim that. But it wasn't really on the path to the current successful AI systems. It was a different algorithm I also worked on, called backpropagation, that gave rise to this huge new AI industry. So I still feel sort of awkward about the fact that we got rewarded for Boltzmann machines, when it wasn't Boltzmann machines; they were helpful, but they weren't
the thing that was really successful. Professor, it's been a pleasure. Okay. Take me into your home. I'm getting to meet your cats. Okay. Thank you. New update: I started a Substack. Writings on there are currently about language and ill-defined concepts, as well as some other mathematical details. Much more is being written there. This is content that isn't anywhere else. It's not on Theories of Everything. It's not on Patreon. Also, full transcripts will be placed there at some point in the future. Several people ask me, Hey, Curt, you've spoken to so many people in the fields of
theoretical physics, philosophy, and consciousness. What are your thoughts? While I remain impartial in interviews, this Substack is a way to peer into my present deliberations on these topics. Also, thank you to our partner, The Economist. Firstly, thank you for watching. Thank you for listening. If you haven't subscribed or clicked that like button, now is the time to do so. Why? Because each subscribe, each like, helps YouTube push this content to more people like yourself. Plus, it helps out Curt directly, AKA me. I also found out last year that external links count plenty toward the
algorithm, which means that whenever you share on Twitter, say, on Facebook, or even on Reddit, et cetera, it shows YouTube, "Hey, people are talking about this content outside of YouTube," which in turn greatly aids the distribution on YouTube. Thirdly, there's a remarkably active Discord and subreddit for Theories of Everything, where people explicate TOEs, disagree respectfully about theories, and build as a community our own TOE. Links to both are in the description. Fourthly, you should know this podcast is on iTunes. It's on Spotify. It's on all of the audio platforms. All you have to do
is type in Theories of Everything and you'll find it. Personally, I gain from rewatching lectures and podcasts. I also read in the comments that, hey, TOE listeners also gain from replaying. So how about instead you re-listen on those platforms: iTunes, Spotify, Google Podcasts, whichever podcast catcher you use. And finally, if you'd like to support more conversations like this, more content like this, then do consider visiting patreon.com slash CURTJAIMUNGAL and donating with whatever you like. There's also PayPal. There's also crypto. There's also just joining on YouTube. Again, keep in mind it's support from the
sponsors and you that allows me to work on TOE full time. You also get early access to ad-free episodes, whether it's audio or video. It's audio in the case of Patreon, video in the case of YouTube. For instance, this episode that you're listening to right now was released a few days earlier. Every dollar helps far more than you think. Either way, your viewership is generosity enough. Thank you so much. Thank you.