CEO Of Microsoft AI: AI Is Becoming More Dangerous And Threatening! - Mustafa Suleyman

The Diary Of A CEO
Video Transcript:
are you uncomfortable talking about this yeah I mean it's pretty wild right Mustafa Suleyman the billionaire co-founder of DeepMind Google's AI technology he's played a key role in the development of AI from its first critical steps in 2020 I moved to work on Google's chatbot it was the ultimate technology we can use them to turbocharge our knowledge unlike anything else why didn't they release it we were nervous we were nervous every organization is going to race to get their hands on intelligence and that's going to be incredibly destructive this technology can be used to identify
cancerous tumors as it can to identify a target on the battlefield a tiny group of people who wish to cause harm are going to have access to tools that can instantly destabilize our world that's the challenge how to stop something that can cause harm or potentially kill that's where we need containment do you think that it is containable it has to be possible why it must be possible why must it be because otherwise it contains us yet you chose to build a company in this space why did you do that because I want to design
an AI that's on your side I honestly think that if we succeed everything is a lot cheaper it's going to power new forms of transportation reduce the cost of healthcare but what if we fail the really painful answer to that question is that do you ever get sad about it yeah it's intense I think this is fascinating I looked at the back end of our YouTube channel and it says that since this channel started 69.9% of you that watch it frequently haven't yet hit the subscribe button so I have a favor to ask you if
you've ever watched this channel and enjoyed the content if you're enjoying this episode right now please could I ask a small favor please hit the subscribe button it helps this channel more than I can explain and I promise if you do that to return the favor we will make the show better and better and better that's the promise I'm willing to make you if you hit the subscribe button do we have a [Music] deal everything that's going on with artificial intelligence now and um this new wave and all these terms like AGI
and I saw another term in your book called ACI first time I'd heard that term how do you feel about it emotionally if you had to encapsulate how you feel emotionally about what's going on in this moment what words would you use I would say in the past it would have been petrified and I think that over time as you really think through the consequences and the pros and cons and the trajectory that we're on you adapt and you understand that actually there is something incredibly inevitable about this trajectory and
that we have to wrap our arms around it and guide it and control it as a collective species as humanity and I think the more you realize how much influence we collectively can have over this outcome the more empowering it is because on the face of it this is really going to be the tool that helps us tackle all the challenges that we're facing as a species right we need to fix water desalination we need to grow food 100x cheaper than we currently do we need renewable energy to be you know ubiquitous and
everywhere in our lives we need to adapt to climate change everywhere you look in the next 50 years we have to do more with less and there are very very few proposals let alone practical solutions for how we get there training machines to help us as aids as scientific research partners inventors creators is absolutely essential and so the upside is phenomenal it's enormous but AI isn't just a thing it's not an inevitable whole its form isn't inevitable right its form the exact way that it manifests and appears in our everyday lives and the way that it's
governed and who it's owned by and how it's trained that is a question that is up to us collectively as a species to figure out over the next decade because if we don't embrace that challenge then it happens to us and that's really what I have been wrestling with for 15 years of my career is how to intervene in a way that this really does benefit everybody and those benefits far far outweigh the potential risks at what stage were you petrified so I founded DeepMind in 2010 and you know over the course of
the first few years our progress was fairly modest but quite quickly in sort of 2013 as the deep learning revolution began to take off I could see glimmers of very early versions of AIs learning to do really clever things so for example one of our big initial achievements was to teach an AI to play the Atari games so remember Space Invaders and Pong where you bat a ball from left to right and we trained this initial AI to purely look at the raw pixels screen by screen flickering or moving in front of the AI
and then control the actions up down left right shoot or not and it got so good at learning to play this simple game simply through attaching a value between the reward the score it was getting and taking an action that it learned some really clever strategies to play the game really well that us games players and humans hadn't really even noticed at least people in the office hadn't noticed it some professionals did um and that was amazing to me because I was like wow this simple system that learns through a set of stimuli plus a reward to take some actions can actually discover many strategies clever tricks to play the game well that hadn't occurred to us humans right and that to me is both thrilling because it presents the opportunity to invent new knowledge and advance our civilization but of course in the same measure is also petrifying was there a particular moment when you were at DeepMind where you had that kind of eureka moment like a day when something happened and it caused that epiphany I guess yeah
it was actually a moment even before 2013 where I remember standing in the office and watching a very early prototype of one of these image recognition image generation models that had been trained to generate new handwritten black and white digits so imagine 0 1 2 3 4 5 6 7 8 9 all in different styles of handwriting on a tiny grid of like 300 pixels by 300 pixels in black and white and we were trying to train the AI to generate a new version of one of those digits a number seven in a
new handwriting sounds so simplistic today given the incredible photorealistic images that are being generated right um and I just remember so clearly it took sort of 10 or 15 seconds and it just resolved the number appeared it went from complete black to like slowly gray and then suddenly these like white pixels appeared out of the black darkness and it revealed a number seven and that sounds so simplistic in hindsight but it was amazing I was like wow the model kind of understands the representation of a seven well enough to generate
a new example of a number seven an image of a number seven you know and you roll forward 10 years and our predictions were correct in fact it was quite predictable in hindsight the trajectory that we were on more compute plus vast amounts of data has enabled us within a decade to go from predicting black and white digits and generating new versions of those images to now generating unbelievable photorealistic not just images but videos novel videos with a simple natural language instruction or a prompt what has surprised you you referred to that as predictable
but what has surprised you about what's happened over the last decade so I think what was predictable to me back then was the generation of images and of audio um because the structure of an image is locally contained so pixels that are near one another create straight lines and edges and corners and then eventually they create eyebrows and noses and eyes and faces and entire scenes and intuitively in a very simplistic way I could get my head around the fact that okay well we're predicting these number sevens you can imagine how you
then can expand that out to entire images maybe even to videos maybe you know to audio too you know what I said a couple of seconds ago is connected in phoneme space in the spectrogram but what was much more surprising to me was that those same methods for generation applied in the space of language you know language seems like such a different abstract space of ideas when I say like the cat sat on the most people would probably predict mat right but it could be table car chair tree it could be mountain cloud I
mean there's a gazillion possible next word predictions and so the space is so much larger the ideas are so much more abstract I just couldn't wrap my intuition around the idea that we would be able to create the incredible large language models that you see today your ChatGPTs Google's Bard Inflection my new company has an AI called Pi at pi.ai which stands for personal intelligence and it's as good as ChatGPT but much more emotional and empathetic and kind so it's just super surprising to me that just growing the
size of these large language models as we have done by 10x every single year for the last 10 years we've been able to produce this and that's just an amazingly large number if you just kind of pause for a moment to grapple with the numbers here in 2013 when we trained the Atari AI that I mentioned to you at DeepMind that used two petaflops of computation peta stands for a million billion calculations a flop is a calculation so 2 million billion right which is already an insane number of calculations
lost me at two it's totally crazy yeah just two of these units that are already really large and every year since then we've 10x'd the number of calculations that can be done such that today the biggest language model that we train at Inflection uses 10 billion petaflops so 10 billion million billion calculations I mean it's just an unfathomably large number and what we've really observed is that scaling these models by 10x every single year produces this magical experience of talking to an AI that feels like you're talking to a human that is super knowledgeable and
super smart there's so much that's happened in public conversation around AI um and there's so many questions that I have I've been speaking to a few people about artificial intelligence to understand it and I think where I am right now is I feel quite scared um but when I get scared it's not the type of scared that makes me anxious it's not like an emotional scared it's a very logical scared my very logical brain hasn't been able to figure out how the inevitable outcome that I've arrived at which is
that humans become the less dominant species on this planet um how that is to be avoided in any way the first chapter of your book The Coming Wave is titled appropriately to how I feel containment is not possible you say in that chapter the widespread emotional reaction I was observing is something I've come to call the pessimism aversion trap correct what is the pessimism aversion trap well so all of us me included feel what you just described when you first get to grips with the idea of this
new coming wave it's scary it's petrifying it's threatening is it going to take my job is my daughter or son going to fall in love with it you know what does this mean what does it mean to be human in a world where there's these other humanlike things that aren't human how do I make sense of that it's super scary and a lot of people over the last few years I think things have changed in the last six months I have to say but over the last few years I would say the default
reaction has been to avoid the pessimism and the fear right to just kind of recoil from it and pretend that it's like either not happening or that it's all going to work out to be rosy it's going to be fine we don't have to worry about it people often say well we've always created new jobs we've never permanently displaced jobs we've only ever seen new jobs be created unemployment is at an all-time low right so there's this default optimism bias that we have and I think it's less about a need for optimism and more about
a fear of pessimism and so that trap particularly in elite circles means that often we aren't having the tough conversations that we need to have in order to respond to the coming wave are you scared in part about having those tough conversations because of how it might be received um not so much anymore so I've spent most of my career trying to put those tough questions on the policy table right I've been raising these questions the ethics of AI safety and questions of containment for as long as I can remember with governments and civil
societies and all the rest of it and so I've become used to talking about that and you know I think it's essential that we have the honest conversation because we can't let it happen to us we have to openly talk about it I mean this is a big question but as you sit here now do you think that it is containable because I can't see how it can be contained chapter 3 is the containment problem where you give the example of how
technologies are often invented for good reasons and for certain use cases like the hammer you know which is used you know maybe to build something but it also can be used to kill people um and you say in history we haven't been able to ban a technology ever really it has always found a way into society um because other societies have an incentive to have it even if we don't and then we need it like the nuclear bomb because if they have it and we don't then we're at a disadvantage so are
you optimistic honestly I don't think an optimism or a pessimism frame is the right one because both are equally biased in ways that I think distract us as I say in the book on the face of it it does look like containment isn't possible we haven't contained or permanently banned a technology of this type in the past there are some that we have done right so we banned CFCs for example because they were producing a hole in the ozone layer we've banned certain weapons chemical and biological weapons for example or blinding lasers believe it or not there are such things as lasers that will instantly blind you you know so we have stepped back from the frontier in some cases but that's largely where there's either cheaper or you know equally effective alternatives that are quickly adopted in this case these technologies are omni-use so the same core technology can be used to identify you know cancerous tumors in chest x-rays as it can to identify a target on the battlefield for an aerial strike so that mixed use or omni-use is going to drive the proliferation because there's huge commercial
incentives because it's going to deliver a huge benefit and do a lot of good and that's the challenge that we have to figure out is how to stop something which on the face of it is so good but at the same time can be used in really bad ways too do you think we will I do think we will so I think that nation states remain the backbone of our civilization we have chosen to concentrate power in a single authority the nation state and we pay our taxes and we've given the nation state a
monopoly over the use of violence and now the nation state is going to have to update itself quickly to be able to contain this technology because without that kind of oversight essentially both of those of us who are making it but also crucially of the open source then it will proliferate and it will spread but regulation is still a real tool and we can use it and we must what does the world look like in um let's say 30 years if that doesn't happen in your view because the average person
can't really get their head around artificial intelligence when they think of it they think of like these large language models that you can chat to and ask about your homework that's like the average person's understanding of artificial intelligence because that's all they've ever been exposed to of it you have a different view because of the work you've spent the last decade doing so to try and give Dave who's I don't know an Uber driver in Birmingham who's listening to this right now an idea of what artificial intelligence is and its potential
capabilities if there's no containment what does the world look like in 30 years so I think it's going to feel largely like another human so think about the things that you can do not again in the physical world but in the digital world 2050 I'm thinking of 2050 in 2050 we will have robots in 2050 we will definitely have robots I mean more than that in 2050 we will have new biological beings as well because the same trajectory that we've been on with hardware and software is also going to
apply to the platform of biology are you uncomfortable talking about this yeah I mean it's pretty wild right don't know you crossed your arms no I always look I always use that as a cue for someone when a subject matter is uncomfortable and it's interesting because I know you know so much more than me about this and I know you've spent way more hours thinking off into the future about the consequences of this I mean you've written a book about it so like you spent 10 years
at DeepMind which is one of the pinnacle companies the pioneers in this whole space so you know you know some stuff and it's funny because I watched an interview with Elon Musk and he was asked a question similar to this I know he speaks in a certain tone of voice but he said that he's gotten to the point where he thinks he's living in suspended disbelief where he thinks that if he spent too long thinking about it he wouldn't understand the purpose of what he's doing right
now and he says that it's more dangerous than nuclear weapons um and that it's too late to stop it there's one interview that's chilling and I was filming Dragons' Den the other day and I showed the dragons the clip I said look what Elon Musk said when he was asked about what advice he should give to his children in an inevitable world of artificial intelligence it's the first time I've seen Elon Musk stop for like 20 seconds and not know what to say
stumble and stumble and then conclude that he's living in suspended disbelief yeah I mean I think it's a great phrase that is the moment we're in it's what I said to you about the pessimism aversion trap and we have to confront the probability of seriously dark outcomes and we have to spend time really thinking about those consequences because the competitive nature of companies and of nation states is going to mean that every organization is going to race to get their hands on intelligence intelligence is going to be a new
form of capital right just as there was a grab for land or there's a grab for oil there's a grab for anything that enables you to do more with less faster better smarter right and we can clearly see the predictable trajectory of the exponential improvements in these technologies and so we should expect that wherever there is power there's now a new tool to amplify that power accelerate that power turbocharge it right and you know in 2050 if you ask me to look out there I mean of course it makes me grimace that's why
I was like oh my God it really does feel like a new species and that has to be brought under control we cannot allow ourselves to be dislodged from our position as the dominant species on this planet we cannot allow that you mentioned robots so these are sort of adjacent technologies that are rising with artificial intelligence robots you mentioned um new biological species shed some light on what you mean by that well so far the dream of robotics hasn't really come to fruition right I mean the most we have now are sort of drones and a little bit of self-driving cars but that is broadly on the same trajectory as these other technologies and I think that over the next 30 years you know we are going to have humanoid robotics we're going to have um you know physical tools within our everyday system that we can rely on that will be pretty good at doing many of the physical tasks and that's a little bit further out because I think you know there's a lot of tough problems there but
it's still coming in the same way and likewise with biology you know we can now sequence a genome for a millionth of the cost of the first genome which took place in 2000 so 20ish years ago the cost has come down by a million times and we can now increasingly synthesize that is create or manufacture new bits of DNA which obviously give rise to life in every possible form and we're starting to engineer that DNA to either remove traits or capabilities that we don't like or indeed to add new things that we want
it to do we want you know fruit to last longer or we want synthetic meat to have higher protein levels etc and what are the potential implications of that I think the darkest scenario there is that people will experiment with pathogens engineered you know synthetic pathogens that might end up accidentally or intentionally being more transmissible i.e. they can spread faster um or more lethal i.e. you know they cause more harm or potentially kill like a pandemic like a pandemic um and that's where we need containment right we
have to limit access to the tools and the know-how to carry out that kind of experimentation so one framework of thinking about this with respect to making containment possible is that we really are experimenting with dangerous materials and anthrax is not something that can be bought over the internet that can be freely experimented with and likewise the very best of these tools in a few years' time are going to be capable of creating you know new synthetic um pandemic pathogens and so we have to restrict access to those things that means restricting access to
the compute it means restricting access to the software that runs the models to the cloud environments that provide APIs that provide you access to experiment with those things um and of course on the biology side it means restricting access to some of the substances and people aren't going to like this people are not going to like that claim because it means that those who want to do good with those tools those who want to create a startup the small guy the little developer that struggles to comply with all the regulations they're going to be pissed
off understandably right but that is the age we're in deal with it like we have to confront that reality that means that we have to approach this with the precautionary principle right never before in the invention of a technology or in the creation of a regulation have we proactively said we need to go slowly we need to make sure that this first does no harm the precautionary principle and that is just an unprecedented moment no other technology has done that right because I think we collectively in the industry those of us who are closest to the
work can see a place in 5 years or 10 years where it could get out of control and we have to get on top of it now and it's better to forgo like that is give up some of those potential upsides or benefits until we can be more sure that it can be contained that it can be controlled that it always serves our collective interests and I think about that so I think about what you've just said there about being able to create these pathogens these diseases and viruses etc that you know could
become weapons or whatever else but with artificial intelligence and the power of that intelligence with these um pathogens you could theoretically ask one of these systems to create a very deadly virus that has certain properties um maybe even that mutates over time in a certain way so it only kills a certain amount of people kind of like a nuclear bomb of viruses that you could just hit an enemy with now if I hear that and I
go okay that's powerful I would like one of those you know there might be an adversary out there that goes I would like one of those just in case America gets out of hand and America's thinking you know I want one of those in case Russia gets out of hand and so okay you might take a precautionary approach in the United States but that's only going to put you on the back foot when China or Russia or one of your adversaries accelerates forward on that path and this was the same with the
nuclear bomb and you know you nailed it I mean that is the race condition we refer to that as the race condition the idea that if I don't do it the other party is going to do it and therefore I must do it but the problem with that is that it creates a self-fulfilling prophecy so the default there is that we all end up doing it and that can't be right because there is an opportunity for massive cooperation here there's a shared interest between us and China and every other quote unquote them or
they or enemy that we want to create we've all got a shared interest in advancing the collective health and well-being of humans and humanity how well have we done at promoting shared interest in the development of technologies over the years even at like a corporate level well you know the nuclear nonproliferation treaty has been reasonably successful there's only nine nuclear states in the world today we've stopped many and three countries actually gave up nuclear weapons because we incentivized them with sanctions and threats and economic rewards um small groups have tried to get
access to nuclear weapons and so far have largely failed it's expensive though right and hard like uranium as a chemical to keep it stable and to buy it and to house it I mean I couldn't just put it in the shed you certainly couldn't put it in a shed you can't download uranium-235 off the internet it's not available open source that is totally true so it's got different characteristics for sure but a kid in Russia could you know in his bedroom download something onto his computer that's incredibly harmful in
the artificial intelligence department right I think that that will be possible at some point in the next five years it's true because there's a weird trend that's going on here on the one hand you've got the cutting edge AI models that are built by Google and OpenAI and my company Inflection and they cost hundreds of millions of dollars and there's only a few of them but on the other hand what was cutting edge a few years ago is now open source today so GPT-3 which came out in the summer of 2020 is now
reproduced as an open-source model so the code and the weights of the model the design of the model and the actual implementation code is completely freely available on the web and it's tiny it's like 60 or 70 times smaller than the original model which means that it's cheaper to use and cheaper to run and that's as you know we've said earlier like that's the natural trajectory of technologies that become useful they get more efficient they get cheaper and they spread further and so that's the containment challenge that's really the essence of what
I'm sort of trying to raise in my book is to frame the challenge of the next 30 to 50 years as around containment um and around confronting proliferation do you believe because we're both going to be alive unless you know some robot kills us but we're both going to be alive in 30 years' time I hope so maybe the podcast will still be going unless AI is taking my job it's very possible so I'm going to sit you here and you know you'll be what 68 years old
I'll be 60 um and I'll say at that point when we have that conversation do you think we would have been successful in containment at a global level I think we have to be I can't even think that we're not why because I'm fundamentally a humanist and I think that we have to make a choice to put our species first and I think that that's what we have to be defending for the next 50 years that's what we have to defend because look it's certainly possible that we invent these AGIs in such a
way that they are always going to be provably subservient to humans and take instructions you know from their human controller every single time but enough of us think that we can't be sure about that that I don't think we should take the gamble basically so that's why I think that we should focus on containment and non-proliferation because some people if they do have access to the technology will want to take those risks and they will just want to see like what's on the other side of the door you know and they might
end up opening Pandora's box and that's a decision that affects all of us and that's the challenge of the networked age you know we live in this globalized world and we use these words like globalization and you sort of forget what globalization means this is what globalization is this is what a networked world is it means that someone taking one small action can suddenly spread everywhere instantly regardless of their intentions when they took the action it may be you know unintentional like you say it may be that they weren't ever meaning to do harm
well I think when I asked you about 30 years' time you said that there will be like human level intelligence you'll be interacting with you know this new species but for me to think the species will want to interact with me feels like wishful thinking because what will I be to them you know like I've got a French Bulldog Pablo and I can't imagine our IQ is that far apart like you know in relative terms the IQ between me and my dog Pablo I can't imagine that's that far apart even when I think about is it like the orangutan where we only have like 1% difference in DNA or something crazy and yet they throw their poop around and I'm sat here broadcasting around the world there's quite a difference in that 1% you know and then I think about this new species where as you write in your book in chapter 4 there seems to be no upper limit to AI's potential intelligence why would such an intelligence want to interact with me well it depends how you design it so I think that our
goal one of the challenges of containment is to design AIs that we want to interact with that want to interact with us right if you set an objective function for an AI a goal for an AI by its design which you know inherently disregards or disrespects you as a human and your goals then it's going to wander off and do a lot of strange things what if it has kids and the kids are you know what I mean what if it replicates in a way where because I've heard this conversation around like
it depends how we design it but you know I think about it it's kind of like if I have a kid and the kid grows up to be a thousand times more intelligent than me to think that I could have any influence on it when it's a thinking sentient developing species again feels like I'm overestimating my version of intelligence and importance and significance in the face of something that is incomprehensibly like even a hundred times more intelligent than me and the speed of its computation is a thousand times what the meat in
my skull can do yeah like how is it gonna how do I know it's going to respect me or care about me or understand me you know I think that comes back down to the containment challenge I think that if we can't be confident that it's going to respect you and understand you and work for you and us as a species overall then that's where we have to adopt the precautionary principle I don't think we should be taking those kinds of risks in experimentation and design and now I'm
not saying it's possible to design an AI that doesn't have those self-improvement capabilities in the limit in like 30 or 50 years I think you know that's kind of what I was saying it seems likely that if you have one like that it's going to take advantage of infinite amounts of data and infinite amounts of computation and it's going to kind of outstrip our ability to act and so I think we have to step back from that precipice that's what the containment problem is it's actually about saying no sometimes it's
saying no and that's a different sort of muscle that we've never really exercised as a civilization and that's obviously why containment appears not to be possible because we've never done it before we've never done it before and every inch of our you know commerce and politics and our war and all of our instincts are just like clash compete clash compete profit grow beat exactly dominate you know fear them be paranoid like now all this nonsense about like China being this new evil how does that slip into our culture how are we
suddenly all shifted from thinking it's the Muslim terrorists about to blow us all up to now it's the Chinese who are about to you know blow up Kansas it's just like what are we talking about like we really have to pare back the paranoia and the fear and the othering um because those are the incentive dynamics that are going to drive us to you know cause self-harm to humanity thinking the worst in each other there's a couple of key moments in my understanding of artificial intelligence that have been kind
of paradigm shifts for me because I think like many people I thought of artificial intelligence as you know like a child I was raising and I would program I would code it to do certain things so I would code it to play chess and I would tell it the moves that are conducive with being successful in chess and then I remember watching that AlphaGo documentary right which I think was DeepMind wasn't it that was us yeah you guys so you programmed this um artificial intelligence to play the
game Go which is kind of like just think of it kind of like chess or backgammon or whatever and it eventually just beats the best player in the world of all time and the way it learned how to beat the best player in the world of all time the world champion who was by the way depressed when he got beat um was just by playing itself right and then there's this moment I think is it game four or something where it does this move that no one could have
predicted a move that seemingly makes absolutely no sense right in those moments where no one trained it to do that and it did something unexpected beyond where humans are trying to figure it out in hindsight this is where I go how do you train it if it's doing things we didn't anticipate right like how do you control it when it's doing things that humans couldn't anticipate it doing we're looking at that move it's called like move 37 or something correct yeah is it move 37 it is look at my intelligence
nice work yeah I'm going to survive a bit longer than I thought it's like move 37 you've at least another decade in you um move 37 does this crazy thing and you see everybody like lean in and go why has it done that and it turns out to be brilliant that humans couldn't forecast the commentator actually thought it was a mistake yeah he was a pro and he was like this is definitely a mistake you know he thought AlphaGo had lost the game but it was so far ahead of us that it
knew something we didn't right that's when I lost hope in this whole idea of like oh train it to do what we want like a dog like sit paw roll over right well the real challenge is that we actually want it to do those things like when it discovers a new strategy or it invents a new idea or it helps us find like you know a cure for some disease like that's why we're building it right because we're reaching the limits of what we as you know humans can invent and solve right
especially with what we're facing you know in terms of population growth over the next 30 years and how climate change is going to affect that and so on like we really want these tools to turbocharge us right and yet it's that creativity and that invention which obviously makes us also feel well maybe it is really going to do things that we don't like for sure right so interesting how do you contend with all of this how do you contend with the clear upside and then you must like Elon be completely
aware of the horrifying existential risk at the same time and you're building a big company in this space which I think is valued at 4 billion now Inflection AI which has got its own model called Pi so you're building in this space you understand the incentives at both a nation state level and a corporate level that we're going to keep ploughing forward even if the US stops there's going to be some other country that sees that as a huge advantage their economy will swell because they did if this company stops then
this one's going to get a huge advantage and their shareholders are you know everyone's investing in AI full steam ahead but you feel you can see this huge existential risk is that the path suspended disbelief I mean just to kind of like just know that it's I feel like I know that it's going to happen no one's been able to tell me otherwise but just don't think too much about it and you'll be okay I think you can't give up right I think that in some ways your realization exactly what
you've just described like weighing up two conflicting and horrible truths about what is likely to happen those contradictions that is a kind of honesty and a wisdom I think that we need all collectively to realize because the only path through this is to be straight up and embrace you know the risks and embrace the default trajectory of all these competing incentives driving forward to kind of make this feel inevitable and if you put the blinkers on and you kind of just ignore it or if you just be super rosy and it's all going to
be all right and if you say that we've always figured it out anyway then we're not going to get the energy and the dynamism and engagement from everybody to try to figure this out and that's what gives me like reason to be hopeful because I think that we make progress by getting everybody paying attention to this it isn't going to be about those who are currently the AI scientists or those who are the technologists you know like me or the venture capitalists or just the politicians like all of those people no one's got answers
so that's what we have to confront there are no obvious answers to this profound question and I've basically written the book to say prove that I'm wrong you know containment must be possible and it must be it must be possible it has to be possible it has to be you want it to be I desperately want it to be yeah why must it be because otherwise I think you're in the camp of believing that this is the inevitable evolution of humans the transhuman kind of view you know some people would argue like what
is okay let's stretch the timelines out okay so let's not talk about 30 years let's talk about 200 years like what is this going to look like in 2200 you tell me you're smarter than me I mean it's mind-blowing we'll have quantum computers by then what's a quantum computer a quantum computer is a completely different type of computing architecture which in simple terms basically allows you to do those calculations that I described at the beginning billions and billions of flops those billions of flops can be done in a single computation
so everything that you see in the digital world today relies on computers processing information and the speed of that processing is a friction it kind of slows things down right you remember back in the day old school modems the 56k modem the dialup sound and the image loading pixel by pixel that was because the computers were slow and we're getting to a point now where the computers are getting faster and faster and faster and quantum computing is like a whole new leap like way way beyond where we currently are and
so by analogy how would I understand that so like I've got my dialup modem over here and then quantum computing over here right what's the difference well I don't know it's really difficult to explain a billion times faster oh it's like billions of billions of times faster it's much more than that I mean one way to think about it is like a floppy disk which I guess most people remember 1.4 megabytes a physical thing back in the day in 1960 or so that
was basically an entire pallet's worth of computer that was moved around by a forklift truck right which is insane today you know you have billions and billions of times that floppy disk in your smartphone in your pocket tomorrow you're going to have billions and billions of smartphones in minuscule wearable devices there'll be cheap fridge magnets that you know are constantly on everywhere sensing all the time monitoring processing analyzing improving optimizing you know and they'll be super cheap so it's super unclear what you do with all of that knowledge and information I mean it's ultimately
knowledge creates value when you know the relationship between things you can improve them you know make them more efficient and so more data is what has enabled us to build all the value you know online in the last 25 years and so what does that look like in 150 years I can't really even imagine to be honest with you it's very hard to say I don't think everybody is going to be working why would we yeah we wouldn't be working in that kind of environment I mean look the other trajectory to add to
this is the cost of energy production you know AI if it really helps us solve battery storage which is the missing piece I think to really tackle climate change then we will be able to basically source and store infinite energy from the sun and I think in 20 or 30 years' time that is going to be a cheap and widely available if not completely freely available resource and if you think about it everything in life has the cost of energy built into its production value and so if you strip that
out everything is likely to get a lot cheaper we'll be able to desalinate water we'll be able to grow crops much much cheaper we'll be able to grow much higher quality food right it's going to power new forms of transportation it's going to reduce the cost of drug production and healthcare right so all of those gains obviously there'll be a huge commercial incentive to drive the production of those gains but the cost of producing them is going to go through the floor I think that's one key thing that a lot of people don't realize that
is a reason to be hugely hopeful and optimistic about the future everything is going to get radically cheaper in 30 to 50 years in 200 years' time we have no idea what the world looks like this goes back to the point about did you say transhumanist right what does that mean transhumanism I mean it's a group of people who basically believe that humans and our soul and our being will one day transcend or move beyond our biological substrate okay so our physical body our brain our biology
is just an enabler for your intelligence and who you are as a person and there's a group of kind of crackpots basically I think who think that we're going to be able to upload ourselves to a silicon substrate right a computer that can hold the essence of what it means to be Stephen so you in 2200 could well still be you by their reasoning but you'll live on a server somewhere why are they wrong I think about all these adjacent technologies like biological advancements did you
call it like biosynthesis or something yeah synthetic biology um I think about the nanotechnology development right think about quantum computing the progress in artificial intelligence everything becoming cheaper and I think why are they wrong it's hard to say precisely but broadly speaking I haven't seen any evidence yet that we're able to extract the essence of a being from a brain right it's that kind of dualism that you know there is a mind and a body and a spirit that I don't
see much evidence for that even in neuroscience um that actually it's much more one and the same so I don't think you know you're going to be able to emulate the entire brain so their thesis is that well some of them cryogenically store their brain after death Jesus so they wear these like you know how you have like an organ donor tag or whatever so they have a cryogenically freeze me when I die tag and there's like special ambulance services that will come pick you up
because obviously you need to do it really quickly the moment you die you need to get put into a cryogenic freezer to preserve your you know brain forever I personally think this is nuts but you know their belief is that you'll then be able to reboot that biological brain and then transfer you over it doesn't seem plausible to me when you said at the start of this little topic here that it must be possible to contain it you said it must be possible um the reason why I
struggle with that is because in chapter 7 there's a line in your book that says AI is more autonomous than any other technology in history for centuries the idea that technology is somehow running out of control a self-directed and self-propelling force beyond the realms of human agency remained a fiction not anymore and this idea of autonomous technology that is acting uninstructed um and is intelligent and then you say we must be able to contain it it's kind of like a massive dog like a big Rottweiler yeah that is you know a thousand times bigger than
me and me looking up at it and going I'm going to take you for a walk yeah yeah and then it's just looking down at me and just stepping over me or stepping on me well that's actually a good example because we have actually contained Rottweilers before we've contained gorillas and you know tigers and crocodiles and pandemic pathogens and nuclear weapons and so you know it's easy to be you know a hater on what we've achieved but this is the most peaceful moment in the history of our species this is a moment when our
biggest problem is that people eat too much think about that we've spent our entire evolutionary period running around looking for food and trying to stop you know our enemies throwing rocks at us and we've had this incredible period of 500 years where you know each year things have broadly well maybe each century let's say there's been a few ups and downs but things have broadly got better and we're on a trajectory for you know lifespans to increase and quality of life to increase and health and well-being to improve and I think that's because in
many ways we have succeeded in containing forces that appear to be more powerful than ourselves it just requires unbelievable creativity and adaptation it requires compromise and it requires a new tone right a much more humble tone to governance and politics and how we run our world not this kind of like hyper aggressive adversarial paranoia tone that we talked about previously but one that is like much more wise than that much more accepting that we are unleashing this force that does have that potential to be the Rottweiler that you described but that
we must contain that as our number one priority that has to be the thing that we focus on because otherwise it contains us I've been thinking a lot recently about cyber security as well just broadly on an individual level in a world where there are these kinds of tools which seems to be quite close um large language models bring up this whole new question about cyber security and cyber safety and you know in a world where there's this ability to generate audio and language and videos that seem to be real um what can we
trust and you know I was watching a video of a young girl whose grandmother was called up by a voice that was made to sound like her son saying he'd been in a car accident and asking for money and her nearly sending the money and this really brings into focus that our lives are built on trust trusting the things we see hear and watch and now it feels like a moment where we're no longer going to be able
to trust what we see on the internet on the phone what advice do you have for people who are worried about this so skepticism I think is healthy and necessary and I think that we're going to need it um even more than we ever did right and so if you think about how we've adapted to the first wave of this which was spammy email scams um everybody got them and over time people learned to identify them and be skeptical of them and reject them likewise you know I'm sure many
of us get like text messages I certainly get loads of text messages trying to phish me and ask me to meet up or do this that and the other and we've adapted right now I think we should all know and expect that criminals will use these tools to manipulate us just as you've described I mean you know the voice is going to be humanlike the deepfake is going to be super convincing and there are actually ways around those things so for example the reason why the banks invented OTP um one-time passwords where they send
you a text message with a special code um is precisely for this reason so that you have 2FA two-factor authentication increasingly we will have three- or four-factor authentication where you have to triangulate between multiple separate independent sources and it won't just be like call your bank manager and release the funds right so this is where we need the creativity and energy and attention of everybody because the defensive measures have to evolve as quickly as the potential offensive measures the attacks that are coming I heard you say this
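[Editor's aside: the one-time-password mechanism described above can be sketched in a few lines. This is an illustrative RFC 6238-style TOTP in Python's standard library; the shared secret here is the published RFC test secret, not anything a real bank uses.]

```python
import hmac, hashlib, struct, time

def totp(secret, period=30, digits=6, now=None):
    """Illustrative RFC 6238-style time-based one-time password."""
    counter = int((time.time() if now is None else now) // period)
    msg = struct.pack(">Q", counter)                       # 8-byte big-endian time counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()  # HMAC-SHA1 as in the RFC
    offset = digest[-1] & 0x0F                             # dynamic truncation offset
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Bank and customer share the secret; both derive the same short-lived code,
# so a scammer who only has a cloned voice still can't produce it.
print(totp(b"12345678901234567890", now=59))  # RFC 4226 test vector: 287082
```

[The point of the mechanism, in the context of the conversation: the code proves possession of a second independent factor, which is exactly the kind of triangulation between separate sources Suleyman describes.]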
that you think um for many of these problems we're going to need to develop AIs to defend us from the AIs right we kind of already have that right so we have automated ways of detecting spam online these days you know most of the time there are um machine learning systems which are trying to identify when your credit card is used in a fraudulent way that's not a human sitting there looking at patterns of spending traffic in real time that's an AI that is like flagging that something looks off um likewise with
data centers or security cameras a lot of those security cameras these days you know have tracking algorithms that look for you know surprising sounds or like if a glass window is smashed that'll be detected by an AI often that is you know listening on the security camera so you know that's kind of what I mean by that is that increasingly those AIs will get more capable and we'll want to use them for defensive purposes and that's exactly what it looks like to have good healthy well-functioning controlled AIs that serve us I
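[Editor's aside: the fraud-flagging described above — a system noticing a transaction that "looks off" — can be sketched with a toy statistical rule. Real systems use far richer machine-learnt features; the function name and numbers here are invented for illustration.]

```python
# Toy anomaly detector: flag a transaction amount that sits far outside
# a customer's historical spending distribution (a z-score rule).
from statistics import mean, stdev

def looks_fraudulent(history, amount, threshold=3.0):
    """Flag an amount more than `threshold` standard deviations from the mean."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return amount != mu
    return abs(amount - mu) / sigma > threshold

past = [12.5, 40.0, 22.0, 35.5, 18.0, 27.0, 31.0, 24.5]
print(looks_fraudulent(past, 29.0))   # False: typical spend
print(looks_fraudulent(past, 900.0))  # True: far outside the usual pattern
```

[Production fraud models score hundreds of features per transaction in real time, but the underlying idea is the same: learn what normal looks like and flag deviations.]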
went on one of these large language models and said to it give me an example where an artificial intelligence takes over the world or whatever and results in the destruction of humanity and then tell me what we'd need to do to prevent it and it gave me this wonderful example of this AI called Cynthia that threatens to destroy the world and it says the way to defend against that would be a different AI which had a different name and it said that this one
would be acting in human interests and we'd basically be fighting one AI with another AI and of course at that level if Cynthia started to wreak havoc on the world and take control of the nuclear weapons and infrastructure and all that we would need an equally intelligent weapon to fight it although one of the interesting things that we found um over the last few decades is that so far it has tended to be the AI plus the human that is still dominating that's the case in chess uh in
Go and other games um in Go it's still yeah so there was a paper that came out a few months ago that showed that a human was actually able to beat the cutting edge Go program um even one that was better than AlphaGo with a new strategy that they had discovered um you know so obviously it's not just a sort of game over environment where the AI just arrives and it gets better like humans also adapt they get super smart they like I say get more cynical get more
skeptical ask you know good questions invent their own things use their own AIs to adapt and that's the evolutionary nature of what it means to have a technology right I mean everything is a technology like your pair of glasses made you smarter in a way like before there were glasses when people got bad eyesight they weren't able to read you know suddenly those who did adopt those technologies were able to read for you know longer in their lives or under low light conditions and they were able to consume more information and got smarter and so
that is the trajectory of technology it's this iterative interplay between you know human and machine that makes us better over time you know the potential um consequences if we don't reach a point of containment yet you chose to build a company in this space yeah why did you do that because I believe that the best way to uh demonstrate how to build safe and contained AI is to actually experiment with it in practice and I think that if we are just skeptics or critics and we stand back from the cutting
edge then we give up that opportunity to shape outcomes to you know all of those other actors that we referred to whether it's like China and the US going at each other's throats uh you know or other big companies that are purely pursuing profit at all costs and so it doesn't solve all the problems of course it's super hard and again it's full of contradictions but I honestly think it's the right way for everybody to proceed you know experiment at the frontier yeah if you're afraid rush to understand right what reduces fear
is deep understanding spend time playing with these models look at their weaknesses they're not superhuman yet they make tons of mistakes they're crappy in lots of ways they're actually not that hard to make the more you've experimented has that correlated with a reduction in fear cheeky question no but that's yes and no you're totally right yes it has in the sense that you know the problem is the more you learn the more you realize yeah that's what I'm saying I was fine before I started talking about AI and now the more I've talked about
it it's true it's sort of pulling on a thread it's a crazy spiral um yeah I mean like I think in the short term it's made me way less afraid because I don't see that kind of existential harm that we've been talking about in the next decade or two but longer term that's where I struggle to wrap my head around how things play out in 30 years some people say government regulation will sort it out you discussed this in chapter 13 of your book which is titled containment must
be possible I love how you didn't say is yeah containment must be possible um what do you say to people that say government regulation will sort it out Rishi Sunak did some announcement and he's got a COBRA committee coming together they'll handle it that's right and the EU have a huge piece of regulation called the EU AI Act um you know President Joe Biden has gotten his own you know set of proposals and um you know we've been working with both you know Rishi Sunak and
Biden and you know trying to contribute and shape it in the best way that we can look it isn't going to happen without regulation so regulation is essential it's critical um again going back to the precautionary principle but at the same time regulation isn't enough you know I often hear people say well we'll just regulate it we'll just stop we'll slow down um and the problem with that is that it kind of ignores the fact that the people who are putting together the regulation don't really understand enough about the
detail today you know in their defense they're rapidly trying to wrap their heads around it especially in the last six months and that's a great relief to me because I feel the burden is now increasingly shared and you know just from a personal perspective I feel like I've been saying this for about a decade and just in the last six months now everyone's coming at me and saying like you know what's going on I'm like great this is the conversation we need to be having because everybody can start to see the glimmers
of the future like what will happen if a ChatGPT-like product or a Pi-like product really does improve over the next 10 years and so when I say you know regulation is not enough what I mean is it needs movement it needs culture it needs people who are actually building and making you know in like modern creative critical ways not just like giving it up to you know companies or small groups of people right we need lots of different people experimenting with strategies for containment isn't it predicted that this industry is a 15
trillion dollar industry or something like that yeah I've heard that it is a lot so if I'm Rishi and I know that I'm going to be chucked out of office Rishi is the prime minister of the UK if I'm going to be chucked out of office in two years unless this economy gets good I don't want to do anything to slow down that $15 trillion bag that I could be on the receiving end of I would definitely not want to slow that $15 trillion bag and give it to like America
or Canada or some other country I'd want that $15 trillion windfall to be in my country right so other than the long-term you know health and success of humanity in my four-year election window I've got to do everything I can to boost these numbers right and get us looking good so I could give you lip service but listen I'm not going to be here unless these numbers look good right exactly that's another one of the problems short-termism is everywhere who is responsible for thinking about the 20-year
future who is it I mean that's a deep question right I mean the world is happening to us on a decade by decade time scale it's also happening hour by hour so change is just ripping through us and this arbitrary window of governance of like a four-year election cycle where actually it's not even four years because by the time you've got in you do some stuff for six months and then by month you know 12 or 18 you're starting to think about the next cycle and are you going to pull you know
this just like the short-termism is killing us right and we don't have an institutional body whose responsibility is stability you could think of it as like a you know global technology stability function what is the global strategy for containment that has the ability to introduce friction when necessary to implement the precautionary principle and to basically keep the peace that I think is the missing governance piece which we have to invent in the next 20 years and it's insane because I'm basically describing the UN Security Council plus the World Trade Organization all these
huge you know global institutions which formed after you know the horrors of the Second World War have actually been incredible they've created interdependence and alignment and stability right obviously there's been a lot of bumps along the way in the last 70 years but broadly speaking it's an unprecedented period of peace and when there's peace we can create prosperity and that's actually what we're lacking at the moment we don't have an international mechanism for coordinating among competing nations competing corporations um to drive the peace in fact we're actually going kind of in the opposite
direction we're resorting to the old school language of a clash of civilizations with like China is the new enemy they're going to come to dominate us we have to dominate them it's a battle between two poles China's taking over Africa China's taking over the Middle East we have to counter them I mean it's just like that can only lead to conflict that just assumes that conflict is inevitable and so when I say regulation is not enough no amount of good regulation in the UK or in Europe or in the US is going to deal
with that clash of civilizations language which we seem to have become addicted to if we need that global collaboration to be successful here are you optimistic now that we'll get it because the same incentives are at play with climate change as in AI you know why would I want to reduce my carbon emissions when it's making me loads of money or why would I want to reduce my AI development when it's going to make us 15 trillion yeah so the really painful answer to that question is that we've only
really ever driven extreme compromise and consensus in two scenarios one off the back of unimaginable catastrophe and suffering you know Hiroshima Nagasaki and the Holocaust and World War II which drove 10 years of consensus and new political structures right and then the second is um we did fire the bullet though didn't we we fired a couple of those nuclear bombs exactly and that's why I'm saying the brutal truth of that is that it takes a catastrophe to trigger the need for alignment right so that's one the second is where there is an obvious
mutually assured destruction um you know dynamic where both parties are afraid that this would trigger nuclear meltdown right and that means suicide and when there were few parties exactly when there were just nine people exactly you could get all nine but when we're talking about artificial intelligence there's going to be more than nine people right that have access to the full sort of power of that technology for various reasons I don't think it has to be like that I think that's the challenge of containment is to reduce the number of actors
that have access to the existential threat technologies to an absolute minimum and then use the existing military and economic incentives which have driven world order and peace so far um to prevent the proliferation of access to these superintelligences or these AGIs a quick word on Huel as you know they're a sponsor of this podcast and I'm an investor in the company and I have to say it's moments like this in my life where I'm extremely busy and I'm flying all over the place and I'm recording TV shows and I'm recording shows in America
and here in the UK that Huel is a necessity in my life I'm someone that regardless of external circumstances or professional demands wants to stay healthy and nutritionally complete and that's exactly where Huel fits in my life it's enabled me to get all of the vitamins and minerals and nutrients that I need in my diet to be aligned with my health goals while also not dropping the ball on my professional goals because it's convenient and because I can get it online and in Tesco in supermarkets all over the country if you're one of those people that
hasn't yet tried hu or you have before but for whatever reason you're not a Hu consumer right now I would highly recommend giving hu a go and Tesco have now increased the listings with hu so you can now get the RTD ready to drink in Tesco expresses all across the UK 10 areas of focus for containment you're the first person I've met that's really hazarded a laid out a blueprint for the things that need to be done um cohesively to try and reach this point of containment so I super excited to talk to you about
these the first one is about safety and you mentioned there that's kind of what we talked about a little bit there being AIs that are currently being developed to help contain other AIs two audits um which from what I understand is being able to audit what's being built in these open source models three choke points what's that yeah so choke point refers to points in the supply chain where you can throttle who has access to what okay so on the internet today everyone thinks of the internet as an idea this
kind of abstract cloud thing that hovers above our heads but really the internet is a bunch of cables those cables you know are physical things that transmit information under the sea and those end points can be stopped and you can monitor traffic you can control basically what traffic moves back and forth and then the second choke point is access to chips so the GPUs graphics processing units which are used to train these super large clusters I mean we now have the second largest supercomputer in the world today
uh at least you know just for this next six months we will other people will catch up soon but we're ahead of the curve we're very lucky it cost a billion dollars and those chips are really the raw commodity that we use to build these large language models and access to those chips is something that governments can should and are um you know restricting that's a choke point you spent a billion dollars on a computer we did yeah it's a bit more than that actually about 1.3 in a couple of years' time that'll be the price of an iPhone that's the problem everyone's going to have it number six is quite curious you say that um there's a need for governments to put increased taxation on AI companies to be able to fund the massive changes in society such as paying for reskilling and education yeah um you put massive tax on over here I'm going to go over here if you tax it if I'm an AI company and you're taxing me heavily over here I'm going to Dubai yep or Portugal yep so if it's that much of a competitive disadvantage I will not build
my company where the taxation's high right so the way to think about this is what are the strategies for containment if we're agreed that long term we want to contain that is close down slow down control both the proliferation of these technologies and the way the really big AIs are used then the way to do that is to tax things taxing things slows them down and that's what you're looking for provided you can coordinate internationally so you're totally right that you know some people will move to Singapore or to Abu Dhabi or Dubai
or whatever the reality is that at least for the next you know sort of period I would say 10 years or so the concentrations of intellectual you know horsepower will remain in the big mega cities right you know I moved from London in 2020 to go to Silicon Valley and I started my new company in Silicon Valley because the concentration of talent there is overwhelming all the very best people in AI and software engineering are there so I think it's quite likely that that's going to remain the case for the foreseeable future
but in the long term you're totally right it's another coordination problem how do we get nation states to collectively agree that we want to try and contain that we want to slow down because as we've discussed with the proliferation of dangerous materials or on the military side there's no use one person doing it or one country doing it if others race ahead and that's the conundrum that we face I don't consider myself to be a pessimist in my life I consider myself to be an optimist generally and I always think that as you've said we have no choice but to be optimistic and I have faith in humanity we've done so many incredible things and overcome so many things and I also think I'm really logical as in I'm the type of person that needs evidence to change my beliefs either way um when I look at the whole picture having spoken to you and several others on this subject matter I see more reasons why we won't be able to contain than reasons why we will especially when I
dig into those incentives um you talk about incentives at length in your book at different points and it's clear that all the incentives are pushing towards a lack of containment especially in the short and midterm which tends to happen with new technology in the short and midterm it's like a land grab the gold is in the stream we all rush to get the shovels and the sieves and stuff and then we realize the unintended consequences of that hopefully not before it's too late in chapter 8 you talk about unstoppable
incentives at play here the coming wave represents the greatest economic prize in history and scientists and technologists are all too human they crave status success and legacy and they want to be recognized as the first and the best they're competitive and clever with a carefully nurtured sense of their place in the world and in history right I look at you I look at people like Sam from OpenAI Elon you're all humans with the same understanding of your place in history and status and success you all want that right right there's a lot of people that maybe don't have as good a track record as you at doing the right thing which you certainly have that will just want the status and the success and the money incredibly strong incentives I always think about incentives as being the thing that you look at when you want to understand how people will behave all of the incentives on a geopolitical like on a global level suggest that containment won't happen am I right in that assumption that all the incentives suggest containment won't happen in the short or midterm until there is a tragic event that forces us towards that idea of containment or if there is a threat of mutually assured destruction right so that's the case that I'm trying to make is that let's not wait for something catastrophic to happen it's self-evident that we all have to work towards containment right I mean you would have thought that the potential threat the potential idea that COVID-19 was a side effect let's call it of a laboratory in Wuhan that was exploring gain of function research where it was deliberately
trying to basically make the pathogen more transmissible you would have thought that was a warning to all of us let's not even debate whether it was or wasn't but just the fact that it's conceivable that it could be should really in my opinion have forced all of us to instantly agree that this kind of research should just be shut down we should just not be doing gain of function research on what planet could we possibly persuade ourselves that we can overcome the containment problem in biology because we've proven that we can't because it could have potentially got out and there's a number of other examples of where it did get out with other diseases like foot and mouth disease mhm back in the '90s in the UK so but that didn't change our behavior right well foot and mouth disease clearly didn't cause enough harm because it only killed a bunch of cattle right um and the COVID-19 pandemic we can't seem to agree you know whether it really was from a lab and not from a bunch of bats right and so that's where I struggle where you know now you catch me in a moment where I feel angry and sad and pessimistic because to me that's like a straightforwardly obvious conclusion that you know this is a type of research that we should be closing down and I think we should be using these moments to give us insight and wisdom about how we handle other technology trajectories in the next few decades should we should we should that's what I'm advocating for must that's the best I can do I want to know will will look I can only do my
best I'm doing my best to advocate for it I mean you know like I'll give you an example like I think autonomy is a type of AI capability that we should not be pursuing really like autonomous cars and stuff well autonomous cars I think are slightly different because autonomous cars operate within a much more constrained physical domain right the containment strategies for autonomous cars are quite reassuring right they have you know GPS control we know exactly all the telemetry and how exactly all of those components on board a car operate and we can observe repeatedly that it behaves exactly as intended right whereas I think with other forms of autonomy that people might be pursuing like online okay you know where you have an AI that is designed to self-improve without any human oversight or a battlefield weapon which unlike a car hasn't been over that particular moment in the battlefield millions of times but is actually facing a new enemy every single time and we're just going to go and allow these autonomous weapons these autonomous military robots to have lethal force I think that's something that we should really resist I don't think we want to have autonomous robots that have lethal force you're a super smart guy and because you demonstrate such a clear understanding of the incentives in your book I struggle to believe that you don't think the incentives will win out especially in the short and near term and then the problem is in the
short and near term as is the case with most of these waves is we wake up in 10 years' time and go how the hell did we get here right and why and as you say this precautionary approach of we should have rung the bell earlier we should have sounded the alarm earlier but we waltzed in with optimism right and with that kind of aversion to confronting the realities of it and then we woke up in 30 years and we're on a leash right and there's a big rottweiler and we've lost control we've lost you know I would love to know I don't believe someone as smart as you can believe that containment is possible and that's me just being completely honest I'm not saying you're lying to me but I just can't see how someone as smart as you and as in the know as you can believe that containment is going to happen well I didn't say it is possible I said it must be right which is what we keep discussing right that's an important distinction is that on the
face of it look what I care about I care about science I care about facts I care about describing the world as I see it and what I've set out to do in the book is describe a set of interlocking incentives which drive a technology production process which produces potentially really dangerous outcomes and what I'm trying to do is frame those outcomes in the context of the containment problem and say this is the big challenge of the 21st century containment is the challenge and if it isn't possible then we have serious issues and on the face of it like I've said in the book I mean the first chapter is called containment is not possible right the last chapter is called containment must be possible for all our sakes it must be possible but I agree with you that I'm not saying it is I'm saying this is what we have to be working on we have no choice but to work on this problem this is a critical problem how much of your time are you focusing on this problem basically all my time I mean building and creating is about understanding how these models work what their limitations are how to build them safely and ethically I mean we have designed the structure of the company to focus on the safety and ethics aspects so for example we are a public benefit corporation right which is a new type of corporation which gives us a legal obligation to balance profit making with the consequences of our actions as a company on the rest of the world the way that we affect the environment you know the way that we affect
people the way that we affect users and the people who aren't users of our products and that's a really interesting I think and important new direction it's a new evolution in corporate structure because it says we have a responsibility to proactively do our best to do the right thing right and I think that if you were a tobacco company back in the day or an oil company back in the day and your legal charter said that your directors are liable if they don't meet the criteria of stewarding your work in a way that doesn't just optimize profit which is what all companies are incentivized to do at the moment talking about incentives but actually in equal measure attends to the importance of doing good in the world to me that's an incremental but important innovation in how we organize society and how we incentivize our work so it doesn't solve everything it's not a panacea but that's my effort to try and take a small step in the right direction do you ever get sad about it about what's happening yeah for sure for sure it's intense it's a lot to take in it's a very real reality does that weigh on you yeah it does I mean every day I mean I've been working on this for many years now and it's emotionally a lot to take in it's hard to think about the far out future and how your actions today our actions collectively our weaknesses our failures you know that irritation that I have that we can't learn the lessons from the pandemic right like all of those moments where you feel the
frustration governments not working properly or corporations not listening or some of the obsessions that we have in culture where we're debating like small things you know and you're just like whoa we need to focus on the big picture here you must feel a certain sense of responsibility as well that most people won't carry because you've spent so much of your life at the very cutting edge of this technology and you understand it better than most you can speak to it better than most so you have a greater chance than many at steering that's a responsibility yeah I embrace that I try to treat that as a privilege I feel lucky to have the opportunity to try and do that there's this wonderful thing in my favorite theatrical play called Hamilton where he says history has its eyes on you do you feel that yeah I feel that it's a good way of putting it I do feel that you're happy right well what is happiness you know um what's the range of emotions that you contend with on a frequent basis if you're being honest I think it is kind of exhausting and exhilarating in equal measure because for me it is beautiful to see people interact with AIs and get huge benefit out of it I mean you know every day now millions of people have a super smart tool in their pocket that is making them wiser and healthier and happier providing emotional support answering questions of every type making you more intelligent and so on the face of it in the short term that feels incredible it's amazing what we're all building but in the longer term it is exhausting to keep making
this argument and you know I have been doing it for a long time and in a weird way I feel a bit of a sense of relief in the last six months because after ChatGPT you know this wave feels like it's started to arrive and everybody gets it so I feel like it's a shared problem now and uh that feels nice it's not just bouncing around in your head a little bit it's not just in my head and a few other people at DeepMind and OpenAI and other places that have been talking about it for a long time ultimately human beings may no longer be the primary planetary drivers as we have become accustomed to being we are going to live in an epoch where the majority of our daily interactions are not with other people but with AIs page 284 of your book the last page [Laughter] yeah think about how much of your day you spend looking at a screen 12 hours pretty much right whether it's a phone or an iPad or a desktop versus how much time you spend looking into the eyes of your friends and
your loved ones and so to me it's like we're already there in a way you know what I meant by that was you know this is a world that we're kind of already in you know the last three years people have been talking about metaverse metaverse metaverse and the mischaracterization of the metaverse was that it's over there it was this like virtual world that we would all bop around in and talk to each other in as these little characters but that was totally wrong that was a complete misframing the metaverse is already here it's the digital space that exists in parallel time to our everyday life it's the conversation that you will have on Twitter or you know the video that you'll post on YouTube or this podcast that will go out and connect with other people it's that meta space of interaction you know and I use meta to mean beyond this space not just that weird other over there space that people seem to point to and that's really what is emerging here it's this parallel digital space that is going to live alongside and in relation to our physical world your
kids come to you you got kids no I don't have kids your future kids if you ever have kids a young child walks up to you and asks that question that Elon was asked what should I do with my future what should I pursue in the light of everything you know about how artificial intelligence is going to change the world and computational power and all of these things what should I dedicate my life to what do you say I would say knowledge is power embrace understand grapple with the consequences don't look the other way when it feels scary and do everything you can to understand and participate and shape because it is coming and if someone's listening to this and they want to do something to help this battle for which I think you present containment as the solution what can the individual do read listen use the tools try to make the tools understand the current state of regulation see which organizations are organizing around it like you know campaign groups activism groups you know find solidarity connect with other people spend time online ask these questions mention it at the pub you know ask your parents ask your mom how she's reacting to you know talking to Alexa or whatever it is that she might do pay attention I think that's already enough and there's no need to be more prescriptive than that because I think people are creative and independent and it will be obvious to you what you as an individual feel you need to contribute in this moment provided you're paying attention last question what if we fail and what if we succeed what if we fail in containment and what if we succeed
in containment of artificial intelligence I honestly think that if we succeed this is going to be the most productive and the most meritocratic moment in the history of our species we are about to make intelligence widely available to hundreds of millions if not billions of people and that is all going to make us smarter and much more creative and much more productive and I think over the next few decades we will solve many of our biggest social challenges I really believe that I really believe we're going to reduce the cost of energy production storage and distribution to zero marginal cost we're going to reduce the cost of producing healthy food and make that widely available to everybody and I think the same trajectory with health care with transportation with education I think that ends up producing radical abundance over a 30-year period and in a world of radical abundance what do I do with my day I think that's another profound question and believe me that is a good problem to have if we can absolutely but do we not need meaning and purpose oh man that is a better problem to have than what we've just been talking about for the last like 90 minutes yeah and I think that's wonderful isn't that amazing I don't know I'm unsure because everything that seems wonderful has an unintended consequence I'm sure it does we live in a world of food abundance in the west and our biggest problem is obesity right so I'll take that problem in the grand scheme of everything but do we not need struggle do we not need that kind of meaningful voluntary struggle I think we'll create new other opportunities to quest okay you know I think that's an easier problem to solve and I think it's an amazing problem like many people really don't want to work right they want to pursue their passion and their hobby and you know all the things that you talk about and so on and absolutely like we're now I think going to be heading towards a world where we can liberate people from the drudgery of work unless you really want to work universal basic income I've long been an advocate of UBI for a very long time everyone gets
a check every month I don't think it's going to quite take that form I actually think it's going to be that we basically reduce the cost of producing basic goods so that you're not as dependent on income like imagine if you did have basically free energy and food you could use that free energy to grow your own food you could grow in a desert because you would have adapted seeds and so on you would have you know desalination and so on that really changes the structure of cities it changes the structure of nations it means that you really can live in quite different ways for very extended periods without contact with the kind of center I mean I'm actually not a huge advocate of that kind of libertarian you know wet dream but like I think if you think about it in theory it's kind of a really interesting dynamic that's what proliferation of power means power isn't just about access to intelligence it's about access to these tools which allow you to take control of your own destiny and your life and create meaning and purpose in the way that you might envision and that's an incredibly creative time that's what success looks like to me and well in some ways the downside of that I think is that failure is not achieving a world of radical abundance in my opinion and more importantly failure is a failure to contain right what does that lead to I think it leads to a mass proliferation of power and people who have really bad intentions will potentially use that power to cause harm to others this is part of the challenge
right in this networked globalized world a tiny group of people who wish to deliberately cause harm are going to have access to tools that can instantly have large scale impact on many many other people and that's the challenge of proliferation is preventing those bad actors from getting access to the means to completely destabilize um our world that's what containment is about we have a closing tradition on this podcast where the last guest leaves a question for the next guest not knowing who they're leaving the question for the question left for you is what is a space or place that you consider the most sacred well I think one of the most beautiful places I remember going to as a child was um Windermere in the Lake District um and I was pretty young and on a dinghy with uh some family members and I just remember it being incredibly serene and beautiful and calm I actually haven't been back there since but that was a pretty beautiful place seems like the antithesis of the world we live in right maybe I should go back there and chill out
maybe thank you so much for writing such a great book it's wonderful to read a book on this subject matter that does present solutions because not many of them do and it presents them in a balanced way that appreciates both sides of the argument it isn't tempted to just play to either what do they call it playing to like the crowd they call it like playing to the orchestra I can't remember right but it doesn't attempt to play to either side or pander to either side in order to score points it seems to be entirely nuanced incredibly smart and incredibly necessary because of the stakes that the book confronts um that are at play in the world at the moment and that's really important it's very very important and I think it's important that everybody reads this book it's incredibly accessible as well and I said to Jack who's the director of this podcast before we started recording that there's so many terms like nanotechnology and um all the stuff about like biotechnologies and quantum computing that reading through the book suddenly I understood what they meant and these had been kind of exclusive terms and technologies and I also had never understood the relationship that all of these technologies now have with each other and how like robotics merging with artificial intelligence is going to cause this whole new range of possibilities that again have a good side and a potential downside um it's a wonderful book and it's perfectly timed wonderfully written I'm so thankful that I got to read it and I highly recommend that anybody that's curious on this subject matter goes and gets the book so thank you Mustafa really really appreciate your time and hopefully it wasn't too uncomfortable for you thank you this was awesome I loved it it was really fun and uh thanks for such an amazing wide ranging conversation thank you if you've been listening to this podcast over the last few months you'll know that we're sponsored and supported by Airbnb but it amazes me how many people don't realize they could actually be sitting on their very own Airbnb for me as someone who works away a lot it just makes sense to Airbnb my place at home whilst I'm away if your job requires you to be away from home for extended periods of time why leave your home empty you can so easily turn your home into an Airbnb and let it generate income for you whilst you're on the road whether you could use a little extra money to cover some bills or for something a little bit more fun your home might just be worth more than you think and you can find out how much it's worth at airbnb.co.uk/host that's airbnb.co.uk slash host