Hello, and welcome to tonight's program, hosted by the Commonwealth Club World Affairs and the Center for Humane Technology. My name is Shirin Ghaffary. I'm an AI reporter for Bloomberg News and your moderator for tonight's conversation. Now it is my pleasure to introduce tonight's guests, Yuval Noah Harari and Aza Raskin.

Yuval Noah Harari is a historian, public intellectual, and best-selling author who has sold over 45 million books in 65 languages. He is also the co-founder of Sapienship, an international social impact company focused on education and storytelling. Yuval is currently a distinguished research fellow at the University of Cambridge's Centre for the Study of Existential Risk, as well as a history professor at the Hebrew University of Jerusalem. His latest book is Nexus: A Brief History of Information Networks from the Stone Age to AI.

Aza Raskin is a co-founder of the Center for Humane Technology and a globally respected thought leader on the intersection of technology and humanity. He hosts the TED podcast Your Undivided Attention and was featured in the two-time Emmy-winning Netflix documentary The Social Dilemma. Yuval and Aza, welcome.

Thank you, it's good to be here.

Let me first start off
by asking you about something from a year and a half ago, and I want to pose this to you both. There was a letter — Yuval, you signed this letter, and Aza, I'm curious to hear your thoughts about it — but I want to talk about what that letter said and where we're at a year and a half later. This letter was a call to pause AI development: a call on the major AI labs to halt progress on any AI models beyond the level of GPT-4. That didn't happen, and I don't think anybody expected it — it was, you know, a PR move; nobody really expected everybody to stop. But what do we make of the moment that we're in right now, which is that we are seeing this unprecedented race by some of the most powerful technology companies in the world to go full speed ahead toward reaching some kind of artificial general intelligence, or superintelligence? I think things have only sped up. What do you think?

I think the key question is really all about speed and all about
time. You know, in my profession I'm a historian, but I think history is not the study of the past; history is the study of change, of how things change. And at present, things are changing at a faster rate than at any previous time in human history. For me, that's the main problem. I don't think that AI is necessarily a bad technology; it can be the most positive technology that humans have ever created. But the thing is that AI is an inorganic entity, and it moves at an inorganic speed, while humans are organic beings, and we move much, much slower in comparison. Humans are extremely adaptable animals, but we need time to adapt, and that's the main requirement for dealing effectively and positively with the AI revolution: give us time.

And when you talk with the people leading the revolution, most of them — maybe after an hour or two of discussion — generally say: yes, it would be a good idea to slow down and give humans a bit more time, but we cannot slow down, because we are the good guys, and we want to slow down, but our competitors will not slow down — our competitors either here in another corporation or across the ocean in another nation. And you talk to the competitors, and they say the same thing: we would like to slow down, but we can't trust the others. And I think the key paradox of the whole AI revolution is that you have people saying, we cannot trust the humans, but then they say, we think we would be able to trust the AIs. Because when you then raise the issue of how can we trust these new intelligences
that we are creating, they say: oh, we think we can figure that out.

So, Aza, I want to pose this to you first: if we shouldn't trust the AI, who should we trust?

Here's, I guess, the question to ask: if you were to look back through history and give any one group a trillion times more power than any other group, who would you trust? Which religion? Which government? The answer, of course, is none of them. And so this is the predicament we find ourselves in: how do we find trust for technology that is moving so fast that if you take your eyes off of Twitter, you are already behind?

Thinking about that pause letter and what it did is interesting, because there was a time before that letter when people were not yet talking about the risks of AI, and after that letter everyone was talking about it. In fact, it paved the way for another letter, from the Center for AI Safety, where many of the leaders of AI said that we need to take the threat of AI as seriously as pandemics and nuclear war. What we need is for the fear of all of us losing to become greater than the fear of me losing to you. It is that equation that has to shift to break the paranoia of: well, if I don't do it, then somebody else will, so therefore I have to go forward.

And just to set up the stakes a little bit — why exactly do you say that it's ridiculous to think that letter was meant to even
stop AI development?

I think there's a good analogy here, which is what oil is to physical labor. That is to say, every barrel of oil is worth something like 25,000 hours of physical labor — somebody moving something in the world. What oil is to physical labor, AI is to cognitive labor — you know, that thing you do when you open up an email and type, or when you do research. And that really sets up the race, because you could ask the exact same question: why did we have the Paris Climate Accords and yet nothing really happened? It's because the center of our economy, the center of competition, runs through cognitive and physical labor.

I want to talk for a second about the reverse — the kind of accelerationist argument for AI. What do you say to the technologists — and we're here in the heart of Silicon Valley, where I grew up, as you did too — who say: don't sweat the risks too much; sure, we can think about and anticipate them, but we just have to build, because the upside here is so immense. There are benefits for
medicine — we can make it more affordable for the masses — personalized education, and, Aza, your research about communicating with animals, which is so cool; I want us to talk about that too. But Yuval, I want to ask you first: what do you make of that kind of classic Silicon Valley techno-optimist counterargument, that if we are too fixated on the negatives, we are never going to develop this potentially immensely helpful technology for society?

First of all, nobody is saying don't develop it — just do it more slowly. I mean, even the critics are aware. Again, part of my job as a historian and a philosopher is to shine a light on the threats, because the entrepreneurs, the engineers, the investors obviously focus on the positive potential. Now, I'm not denying the enormous positive potential, whether you think of health care, whether you think of education, of solving climate change. You know, every year more than a million people die in car accidents, most of them caused by human error — somebody drinking alcohol and driving, falling asleep at the wheel, things like
that. The switch to self-driving vehicles is likely to save a million people every year. So we are aware of that. But we also need to take into account the dangers, the threats, which are equally big, and could in some extreme scenarios be as catastrophic as the collapse of civilization. To give just one example: very primitive AIs, the social media algorithms, have destabilized democracies all over the world. We are now in this paradoxical situation where we have the most sophisticated information technology in history, and people can't talk to each other — and certainly can't listen. It is becoming very difficult to hold a rational conversation. You see it now in the US between Republicans and Democrats, and you have all these explanations — oh, it's because of US society and economics and globalization, whatever — but you go to almost every other democracy in the world: my home country, Israel; you go to France; you go to Brazil; it's the same. So it's not the unique conditions of this or that country; it's the underlying technology that makes it almost impossible for people to have a conversation. Democracy is a conversation, and this technology is destroying the ability to have a conversation. Now, is it worth it — okay, we get these benefits, but we lose democracy all over the world, and then this technology is in the hands of authoritarian regimes that can use it to create the worst totalitarian regimes, the worst dystopias, in human history? We have to balance the potential benefits with the potential threats, and move more carefully.

And actually, on this point, I really want the audience to do a find-and-replace, because we'll always
get asked: do the benefits outweigh the risks? And social media taught us that is the wrong question to ask. The right question is: will the risks undermine the foundations of society, so that we can't actually enjoy the benefits? That's the question we need to be asking. So imagine we could go back in time — to, say, 2008, 2009, 2010 — and instead of deploying social media into society as fast as possible, we had said: yes, there are a lot of benefits, but let's just wait a second and ask what incentives are going to govern how this technology is actually rolled out into society, how it'll impact our democracies, how it'll impact kids' mental health. The reason we were able to make The Social Dilemma — and we started calling out, in 2013, the direction that social media was going to take us — was because we said, just as Charlie Munger, Warren Buffett's business partner, said: show me the incentive and I'll show you the outcome. What is the incentive for social media? It's to make you more reactive, to get a reaction from your nervous system. And as soon as you say it that way, you're like: well, of course — the things that are outrageous, the things that get people mad, the things that are essentially cold civil wars, are very profitable for engagement-based business models. It's all foreseeable outcomes of a business model. That's the question we should be asking ourselves now with AI. Because once social media became entangled with our society, it took GDP hostage; it took elections hostage, because you can't win an election unless you're on it; it took news hostage, and hollowed news out. Once it's all happened, it's very hard to walk back and undo it. So
what we're saying is: we need to ask the question now, what is the incentive driving the development of AI? Because that — not the good intentions of the creators — is going to determine which world we live in.

Maybe I'll make a very strange historical comparison here: Silicon Valley reminds me a little of the Bolshevik Party.

Controversial analogy, but okay, I'll hear you out.

You know, after the revolution — I mean, there are huge differences, of course — but two things are similar. First of all, the ambition to re-engineer society from scratch: we are the vanguard; most people in the world don't understand what is happening; we are this small vanguard that understands, and we think we can re-engineer society from its most basic foundations and create a better world, an almost perfect world. And the other common thing is that if you become convinced of that, it's an open check to do some terrible things on the way, because you say: we are creating utopia; the benefits will be so immense that, as the saying goes, to make an omelet you need to break a few eggs.
So this belief in creating the best society in the world is really dangerous, because it justifies a lot of short-term harm to people. And of course, in the end, maybe you don't get to build a perfect society; maybe you misunderstood. And really, the worst problems come, again, not from the technical glitches of the technology but from the moment the technology meets society — and there is no way you can simulate history in a laboratory. In all these discussions about safety, the technology companies, the tech giants, tell us: we tested it, this is safe. For me, the historian, the question is: how can you test history in a laboratory? You can test that it is safe in some very limited, narrow sense, but what happens when this is in the hands of millions of people, of all kinds of political parties, of armies? Do you really know how it will play out? The answer is obviously no; nobody can do that. There are no repeatable experiments in
history, and there is no way to test history in a laboratory.

Yuval, I have to ask: you've had a very welcome reception in Silicon Valley and tech circles over the years — I've talked to tech executives who are big fans of your work, of Sapiens. Now, with this new book, which has, I would say, a pretty critical outlook on some of the risks of this technology that everyone in Silicon Valley is so excited about, how have your interactions with tech leaders been? How have they been receiving this book?

I don't know yet — it's just out. But what I do know is that many of these people are very concerned themselves. I mean, they have their public face — they are very optimistic, and they emphasize the benefits and so forth — but they also understand, maybe not the risks, but the immense power of what they are creating, better than almost anybody else, and therefore most of them are really worried. Again, it's what I mentioned earlier — the
arms-race mentality: if they thought they could slow down, I think most of them would like to slow down. But because they are so afraid of the competition, they are stuck in this race mentality, which doesn't allow them to do it.

And you mentioned the word "excited," and you also talk about the excitement out there. I think there is just far too much excitement in all of that. It's really the most misunderstood word in the English language, at least in the United States. People don't really understand what the word "excited" means; they think it means happy. So when they meet you, they tell you: oh, I'm so excited to meet you. And this is not the meaning of the word. Happiness is often calm and relaxed — nobody says, oh, I'm so relaxed to meet you. Excited is when all your nervous system and all your brain is kind of on fire. And this is good sometimes, but a biological fact about human beings, and all other animals, is that if you keep them excited all the time, they collapse and die. And I think that the world as a whole, and the United States, and Silicon Valley are just far too excited. [Applause]

You know, we're currently starting to have these debates about whether AI is conscious, and it's not even clear that humanity is. And — you're the historian, so please jump in if I'm getting something wrong — when I think about humanity's relationship with technology, we've always been a species co-evolving with our technology. We'll have some problem, and we'll use
technology to solve that problem, but in the process we make more, bigger, different problems, and then we keep going. So it's sort of like humanity has a can, and we kick it down the road, and it gets a little bit bigger, but that's okay, because next time around we can kick the can down the road again, and it gets a little bigger. And by and large, I think we've made — you could argue — really good trades with technology; we all would probably rather not live in a different era than now. So you're like: okay, maybe we've made good trades, and those externalities are fine. But now that can is getting so big as to be the size of the world. Right? We invent plastics and Teflon — amazing — but we also get forever chemicals, and The New York Times just said that the cost to clean up the forever chemicals that are at unsafe levels for human beings — that are causing farm animals to die — would be more than the entire GDP of the world, every year. We're at the breaking points of our biosphere, of our psychosocial sphere, and so it's unclear whether we can kick the can down the road any further. And then take AI. We have this incredible machine called civilization, and it has pedals, and you pedal the machine and you get skyscrapers and medicine and flights and all these amazing things — but you also get forever chemicals and ozone holes and mental health problems. Now take AI and make the whole system more efficient, so the pedals go faster: do we expect that the fundamental boundaries of what it is to be human and
the health of our planet — do we expect those things to survive? To me, this is a much scarier direction than what some bad actor is going to do with AI; it's what our overall system is going to do with AI.

Maybe I'll just add to that: again, in history, usually the problem with new technology is not the destination but the way there. When a new technology with a lot of positive potential is introduced, the problem is that people don't know how to use it beneficially, so they experiment, and many of these experiments turn out to be terrible mistakes. Think, for instance, about the last big technological revolution, the Industrial Revolution. I've had these conversations many times with the titans of industry, and they will tell you something like: you know, when they invented the train or the car, there were all these apocalyptic prophecies about what it would do to human society — and look, things are now much, much better than they were before the invention of these technologies. But for me as a historian, the main issue is what happened on the way. If you just look at the starting point and the end point — the year 1800, before the invention of trains and telegraphs and cars and so forth, and the end point, let's say the year 2000 — and you look at almost any measure, except the ecological health of the planet (let's put that aside for a moment, if we can) — life expectancy, child mortality, women dying in childbirth — it all went
up dramatically; everything got better. But it was not a straight line. The way from 1800 to 2000 was a roller coaster, with a lot of terrible experiments in between, because when industrial technology was invented, nobody knew how to build an industrial society. There was no model in history, so people tried different models. One of the first big ideas that came along was that the only way to build an industrial society is to build an empire. And there was a rationale, a logic, behind it, because the argument was: an agrarian society can be local, but industry needs raw materials, it needs markets; if we build an industrial society and we don't control the raw materials and the markets, our competitors — again, the arms-race mentality — could block us and destroy us. So almost every country that industrialized — even a country like Belgium, when it industrializes in the 19th century, goes and builds an empire in the Congo, because this is how you do it, this is how you build an industrial society. Today we look back and we say this was a terrible mistake; hundreds of millions of people suffered terribly for generations, until people realized you can actually build an industrial society without an empire.

Other terrible experiments were the communist and fascist totalitarian regimes. Again, the argument was not something divorced from industrial technology. The argument was: democracies can't handle the enormous powers released by the steam engine, the telegraph, the internal combustion engine; only a totalitarian regime can harness and make the most of these new technologies. And a lot of people — again, going back to the Bolshevik Revolution — a lot of people in the 1920s, '30s, and '40s were really convinced that the only
way to build an industrial society was to build a totalitarian regime. We can now look back with hindsight and say: oh, they were so mistaken. But in 1930 it was not clear. And again, my main fear with the AI revolution is not about the destination but the way there. Nobody has any idea how to build an AI-based society, and we may need to go through another cycle of empire-building and totalitarian regimes and world wars to realize: oh, this is not the way; this is how you do it. The very bad news — you know, as a historian I would say that on the test of the 20th century, how to use industrial technology, our species got a C-minus: enough to pass — most of us are here — but not brilliant. Now, if we get a C-minus on how to deal not with steam engines but with AI, that is very, very bad news.

What are the unique potential failed experiments that you worry could play out in the short term with AI? Because if you look at those kinds of catastrophic or existential risks, we haven't seen them yet, right? What are your early signs?

If you discount the collapse of democracies — I mean, from very primitive AIs, the social media algorithms — well, maybe let's go back to the basic definition of what an AI is. Not every machine, and not every computer or algorithm, is an AI. For me, the distinctive feature that makes AI an AI is the ability to make decisions by itself and to invent new ideas by itself — to learn and change
by itself. Yes, humans design it, engineer it in the first place, but they give it this ability to learn and change by itself. And the social media algorithms, in a very narrow field, had this ability. The instruction, the goal they were given by Twitter and Facebook and YouTube, was not to spread hatred and outrage and destabilize democracies; the goal they were given was: increase user engagement. And then the algorithms experimented on millions of human guinea pigs and discovered by trial and error that the easiest way to increase user engagement is to spread outrage — that this is very engaging: outrage, all these hate-filled conspiracy theories and so forth. And they decided to do it. These were decisions made by a non-human intelligence. Humans produced enormous amounts of content — some of it full of hate, some of it full of compassion, some of it boring — and the algorithms decided: let's spread the hate-filled content, the fear-filled content. And what does it mean that they decided to spread it? They decided that this will be at the top of your Facebook news feed, this will be the next video on YouTube, this will be what they recommend or autoplay for you. This is, traditionally, one of the most important jobs in the world: they basically took over the job of content editors, of news editors. You know, when we talk about automating jobs, we think about automating taxi drivers, automating coal miners. It's amazing to think that one of the first jobs in the world to be automated was news editor.

I picked the wrong profession.

And this is why we call it the first contact
with AI — it was social media. And how did we do? We sort of lost. Not a C-minus, an —

Yeah, exactly: an F.

Wow. What about all the people who have positive interactions on social media? Don't you give some grade inflation for that? I met my husband online, on social media, 22 years ago, so I'm also very grateful to social media.

But what it did to the basic social structure, the ability to have a rational conversation with our fellow human beings, with our fellow citizens — on that, as I said, we get an F.

On how we pass around information, right? Which is the topic of your book.

An F in the sense that we are failing the test completely. It's not like we are barely passing it; we are really failing it, all over the world. And what we need to understand is that democracy, in essence, is a conversation, which is built on information technology. For most of history, large-scale democracy was simply impossible. We have no example of a large-scale democracy from the ancient world; all the examples are of small city-states like Athens or Rome, or even smaller tribes. It was just impossible to hold a political conversation between millions of people spread over an entire country. It became possible only after the invention of modern information technology — first newspapers, then telegraphs and radio and so forth. And now the new information technology is undermining all that.

And how about with this kind of generative AI? We're still in the really early phases of adopting it as a society, but with something like ChatGPT, how
do you think that might change the information dynamic? What are the specific information risks there that are different from the social media algorithms of the past?

We've never before had non-humans about to generate the bulk of our cultural content. Sometimes we call it "the flippening": the moment when human-generated content — our culture — becomes the minority. And of course, then the question is: what are the incentives for that? So if you think TikTok is engaging and addicting now, you have seen nothing. As of last week, Facebook launched an "imagine for you" page, where AI generates the thing it thinks you're going to like. Now, obviously it's at a very early stage, but there's actually a network called Social., where they tell you that every one of your followers is going to be an AI — and yet it feels so good, because you get so many followers, and they're all commenting, and even though you know, it's cognitively impenetrable, and so you fall for it. Right? This is the year 2025, when it's not just going to be ChatGPT, a thing that you go to and type into; it's going to be agents that can invoke themselves, that are out there acting in the world, doing whatever it is a human being can do online. And if that makes you think of just one individual, maybe creating deepfakes of themselves, talking to people, defrauding people — no, it's not just one individual. You can spin up a corporation-scale set of agents, and they're all going to be operating according to whatever market incentives are out there. So that's just some of what's coming with generative AI.
Maybe I'll add to that: before we even think in terms of risks and threats, or opportunities — is it good, is it bad — just stop for a moment and try to understand what is happening, what kind of turning point in history we are at. For tens of thousands of years, humans have lived inside a human-made culture. We are cultural animals: we live our lives constantly interacting with cultural artifacts — texts, images, stories, mythologies, laws, currencies, financial devices — and it all comes out of the human mind. Some human somewhere invented this, and until now, nothing on the planet could do that; only human beings. So any song you encountered, any image, any currency, any religious belief — it came from a human mind. And now we have on the planet something which is not human, which is not even organic, which functions according to a completely alien logic in this sense, and which is able to generate such things at scale — in many cases better than most humans, maybe soon better even than the best humans. And we are not talking about a single computer; we are talking about millions and potentially billions of these alien agents. Is it good, is it bad? Leave that aside; just think that we are going to live in this kind of new hybrid society, in which many of the decisions, many of the inventions, are coming from a non-human consciousness.

Now, I know that here in the States, as in other countries, immigration is one of the most hotly debated topics, and without getting into the discussion of who is right and who is wrong — obviously we have a lot
of people very worried that immigrants are coming, that they could take our jobs, that they have different ideas about how to manage society, different cultural ideas. And we are about, in this sense, to face the biggest immigration wave in history — coming not from across the Rio Grande but, basically, from California. And these immigrants from California, from Silicon Valley, are going to enter every house, every bank, every factory, every government office in the world. They are not going to replace the taxi drivers first; the first people they replaced were the news editors, and they will replace the bankers, they will replace the generals — we can talk about what AI is already doing to warfare, as in the war in Gaza — they will replace the CEOs, they will replace the investors. And they have very, very different cultural and social ideas than we have. Is it bad, is it good? You can have different views about this wave of immigration, but the first thing to realize is that we've seen nothing like it in history, and it's coming very fast. Now,
again, I was just yesterday in a discussion where people said: you know, ChatGPT was released almost two years ago, and it still hasn't changed the world. And I understand that for people who run a high-tech company, two years is like an eternity — that's the thinking culture, so: nothing changed in two years. But in history, two years is nothing. Imagine that we are in London in 1832, and the first commercial railroad line was opened two years ago, in 1830, between Manchester and Liverpool, and we are having this discussion, and somebody says: look at all this hype around trains, around steam engines — it's been two years since they opened the first railroad line, and nothing has changed. But you know, within 20 years, or 50 years, it completely changed everything in the world. The entire geopolitical order was upended — the economic system, the most basic structures of human society.

Another topic of discussion in that meeting yesterday was the family — what is happening to the family? When people said "family," they meant what most people today think of as family — the family that came after trains, after the Industrial Revolution, which is the nuclear family. For most of history, when people said "family," they thought of the extended family, with all the aunts and uncles and cousins and grandparents. That was the family, that was the unit. And one of the things the Industrial Revolution did, in most of the world, was to break up the extended family, so the main unit became the nuclear family — which was not the traditional human family; it was actually an outcome of the Industrial Revolution. So it really changed
everything, these trains — it just took a bit more than two years. And this was just steam engines. Now think about the potential of a machine that can make decisions, that can create new ideas, that can learn and change — and we have billions of these machines everywhere, and they can enter into every human relationship, not just families. To give one example: people writing emails. I know many people, including in my family, who would say: I'm too busy, I don't need to think ten minutes about how to write this email — I'll just tell ChatGPT, write a polite letter that says no. And then ChatGPT writes a whole page, with all these nice phrases and all these compliments, which basically says no. And of course, on the other side you have another human being who says: I don't have time to read this whole letter now — ChatGPT, tell me, what did they say? And the ChatGPT on the other side says: they said no.

Do you use ChatGPT yourself?

I leave it to the other
family members and team members. I use it a little for translation and things like that, but I think it's also coming for me, yeah, definitely. How about you, Aza, do you use ChatGPT or generative AI in your day-to-day? I do, absolutely. It's an incredible metaphorical search engine. For instance, there's a great example from Bogotá, Colombia, where there was a coordination problem: people were committing terrible traffic infractions, running red lights, crossing the streets dangerously, and the city couldn't figure out how to solve it. So the mayor decided he was going to have mimes walk down the streets and just make fun of anyone who was jaywalking. They would video it and post it on television, and lo and behold, within a month or two people's behavior started to change. The police couldn't do it, but it turns out mimes could. Okay, so that's a super interesting nonlinear solution to a hard problem, and one of the things I like to ask ChatGPT is: well, what are other
examples like that? And it does a great job at metaphorical search. But to go back to social media, because social media was our first contact with AI, it actually lets you see all of the dynamics that are playing out. The first thing you could say is: well, once you know it's doing something bad, can't you just unplug it? We hear that all the time about AI: once you see it's doing something bad, just unplug it. Well, Frances Haugen, the Facebook whistleblower, was able to disclose a whole bunch of Facebook's own internal data, and one of the things, I don't know if you know this, is that it turns out there is one very simple thing Facebook could do that would reduce misinformation, disinformation, hate speech, all the terrible stuff, more than the tens of billions of dollars they currently spend on content moderation. You know what that one thing is? Just remove the reshare button after two hops. I share to you, you share to one other person, then the reshare button goes away. You can still copy and paste; this is not even censorship. That one little thing reduces virality, because it turns out that which is viral is likely to be a virus. But they didn't do it, because it hurt engagement a little bit, and they were in a competition with TikTok and everyone else, so they felt they couldn't do it. Or maybe they just wanted a higher stock price. And this is even after the research had come out showing what happened when Facebook changed their algorithm to something called meaningful social interaction, which really just measured how reactive people were, the number of comments people added, as a measure of meaningfulness. Political parties across Europe, and also in India and Taiwan, went to Facebook and said: we know that you changed your algorithm. And Facebook was like, sure, tell us about that. And they said: we know you changed the algorithm because we used to post things like white papers and position statements, and they didn't get the most engagement, but they got some. Now they get zero. And they told Facebook, this is all in Frances Haugen's disclosures, that they were changing
their behavior to say the clickbaity, angry thing, and Facebook still did nothing about it, because of the incentives. We're going to see the exact same thing with AI. And this gets to the fundamental question of whether we as humanity are going to be able to survive ourselves. Do you know the marshmallow experiment? You give a kid a marshmallow, and if they don't eat it you say, I'll give you another marshmallow in 15 minutes; it tests delayed gratification. If we are a one-marshmallow species, we're not going to make it; we have to be the two-marshmallow species. And actually it's even harder than that, because with AI it's not just one kid waiting by the marshmallow: there are many kids sitting around it, and any one of them can grab it, and then no one else gets marshmallows. We have to figure out how to become the two-marshmallow species so that we can coordinate. That, to me, is the Apollo mission of our times: how do we create the governance, how do we change our culture, so that we can do the delayed-gratification trust thing? And we basically have, I think this marshmallow thing is going to be a sticky meme, some of the smartest and wisest people in the world working on the wrong problem. Yeah, which is again a very common phenomenon in human history. Humans, also in personal life, often spend very little time deciding which problem to solve, and then spend almost all their time and energy solving it, only to discover too late that they solved the wrong problem. So again, of these two basic problems, human trust and AI, we are focusing on solving the AI problem instead of the trust problem, the trust-between-humans problem. And so how do we solve the trust problem? I want to shift us to solutions. Let me give you something, because I don't want people to hear me as just saying AI is bad. I use
AI every day to try to translate animal language. My father died of pancreatic cancer, the same thing as Steve Jobs, and I think AI would have been able to diagnose it and help him. So I really want that world. Let me give an example of something AI could do that I think would be really interesting for the solutions segment. Do you know about AlphaGo's move 37? This is where they had an AI play Go against itself over and over again until it became better than any human player. And there's this famous move, move 37, where, playing against the world's leading Go player, the AI made a move that no human had ever made in a thousand-plus years of Go history. It shocked the Go world so much that the champion just got up and walked out for a little while. But this is interesting because, after move 37, it changed the way Go is played; it transformed the nature of the game. So an AI playing itself discovered a new strategy that transforms the nature of the game. This is really interesting because there are other games more interesting than Go. There's the game of conflict resolution: we're in conflict, how do we resolve it? Well, we could just use the strategy of tit-for-tat: you say something hurtful, I feel hurt, so I say something hurtful back, and we go back and forth, and it's a negative-sum game. We see this in geopolitics all the time. Then along comes this guy Marshall Rosenberg, who invents nonviolent communication, and it changes the nature of how that game goes. It says: oh, what I think you're saying is this, and when you say that, it makes me feel this way. And suddenly we go from a negative-sum or zero-sum game to a positive-sum game. So imagine AI agents that we can trust in negotiations. If I'm negotiating with you, I'm going to have some private information I might not want to share with you, and you're going to have private information you don't want to share with me, so we can't find the optimal solution, because we don't trust each other. If you had an agent that could actually ingest all of your information and all of my information and find the Pareto-optimal solution, well, that changes the nature of game theory. There could very well be not an AlphaGo but an AlphaTreaty, with brand-new moves and strategies that human beings have not discovered in thousands of years. Maybe we can have the move 37 for trust. Right, so there are ways, and you've just described several of them, that we can harness AI to hopefully enhance the good
parts of society we already have. What do you think we need to do? What are the ways we can stop AI from having this effect of diminishing our trust, of weakening our information networks? Yuval, I know in your book you talk about the need for disclosure when you are talking to an AI versus a human being. Why is that so important, and how do you think we're doing on that now? Because I test all the latest AI products, and some of them seem to me quite designed to make you feel like you are talking to a real person, and there are people who are forming real relationships, sometimes even ones that mimic interpersonal romantic relationships, with AI chatbots. So how do you think we're doing on that, and why is it important? Well, I think there is a question about specific regulations, and then there is a question about institutions. There are some regulations that should be enforced as soon as possible. One of them is a ban on counterfeit humans: no fake humans, the same way that for thousands of years we have had a very strict ban against fake money, because otherwise the financial system would collapse. To preserve trust between humans, we need to know whether we are talking with a human being or with an AI. Imagine democracy as a group of people standing together having a conversation, and suddenly a group of robots joins the circle, and they speak very loudly, very persuasively, and very emotionally, and you don't know who is who. If democracy means a human conversation, it collapses. AIs are welcome to talk with us in many, many
situations, like an AI doctor giving us advice, on condition that it is disclosed, that it is very clear and transparent that this is an AI. Or if you see some story gaining a lot of traction on Twitter, you need to know whether the traction is a lot of human beings interested in the story or a lot of bots pushing the story. So that's one regulation. Another key regulation is that companies should be liable, responsible, for the actions of their algorithms, not for the actions of their users. Again, this is the whole free-speech red herring: when you talk about it, people say, yeah, but what about the free speech of the human users? If a human being publishes some lie or hateful conspiracy theory online, I'm in the camp of people who think we should be very, very careful before we censor that human being, before we authorize Facebook or Twitter or TikTok to censor that human being. But human beings publish enormous amounts of content all the time, and if the company's algorithm, out of all the content published by humans, chooses to promote that particular hateful conspiracy theory and not, say, some lesson in biology, that's on the company. That's the action of its algorithm, not the action of the human user, and the company should be liable for it. So this is a very important regulation that I think we needed yesterday, or last year. But I would emphasize that there is no way to regulate the AI revolution in advance. There is no way we can anticipate how this is going to develop, especially because we are dealing with agents that can learn and
change. So what we really need is institutions that are able to understand and react to things as they develop: living institutions, staffed with some of the best human talent, with access to the cutting-edge technology, which means huge, huge funding that can only come from governments. And these are not really regulatory institutions; the regulations come later. If regulations are the teeth, then before teeth we need eyes, so we know what to bite. At present most people in the world, and even most governments, have no idea; they don't understand what is really happening with the AI revolution. Almost all the knowledge is in the hands of a few companies in two or very few states. So even if you're the government of a country like, I don't know, Colombia or Egypt or Bangladesh, how do you know how to separate the hype from the reality? What is really happening? What are the potential threats to our country? We need an international institution, again not even a regulatory one, that is just there to understand what is happening and tell people all over the world, so that
they can join the conversation, because the conversation is also about their fate. Do you think the AI safety institutes, the US has one, the UK has one, both pretty new, created in the past year, and I think several other countries have recently started these too, do you think those are adequate? Is that the kind of group you're looking for? Of course, they do not have nearly as much money as the AI labs; OpenAI just raised 6.5 billion dollars, and I believe the US AI Safety Institute has about 10 million in funding, if I'm correct. I mean, if your institution has 10 million dollars and you're trying to understand what's happening in companies that have hundreds of billions of dollars, you're not going to do it, partly because the talent will go to the companies and not to you. And talent is not attracted only by very high salaries; people also want to play with the latest toys. Many of the leading people are less interested in the money than in the actual ability to play with the cutting-edge technology and knowledge. But to have that, you also need a lot of funding. And the good thing about establishing such an institution is that it is relatively easy to verify that governments are doing what they said they would do. If you try to have an international treaty banning killer robots, autonomous weapon systems, that is almost impossible, because how do you enforce it? A country can sign it, and then its competitors will say: how do we know it's not developing this technology in some secret laboratory? Very difficult. But if the treaty basically says, we are establishing this international institution and each country agrees to contribute a certain amount of money, then you can easily verify whether it paid the money or not. And this is just the first stage. Going back to what I said earlier, a very big problem for humanity throughout history, and again it goes back to speed, is that we rush things. There is a problem, and it's very difficult for us to just stay with the problem, to understand what the problem really is before we jump to a solution. The instinct is: I don't want the problem, what is the solution? You grab the first thing, and it's often the wrong thing. And even though we're in a rush, you cannot slow down by speeding up. If our problem is that things are going too fast, then the people who are trying to slow things down can't do it by speeding up; that will only make things worse. Aza, how about you? What's your biggest hope for solutions to some of the problems we've talked about with AI? Well, you know, Stuart
Russell, who's one of the fathers of AI: he calculated that there's a thousand-to-one spending gap between the money going into making AI more powerful and the money going into trying to steer it or make it safe. Does that sound right to you? So how much should we spend? I think here we can turn to biological systems. How much of the energy in your body do you spend on your immune system? It turns out it's around 15 to 20 percent. What percentage of the budget of a city like LA goes to its immune system, the fire department, police, things like that? Turns out around 25 percent. So I think that gives us a decent rule of thumb: we should be spending on the order of a quarter of every dollar that goes into making AI more powerful on learning how to steer it, on all of the safety institutes, on the Apollo mission of taking every one of those brilliant people working on making you click on ads and getting them to work instead on figuring out how to create a new form of governance. The US was founded on the idea that you could get a group of people together and figure out a form of governance that was trustworthy, and that really hadn't happened before. But that system was based on 17th-century technology and a 17th-century understanding of psychology and anthropology, and it has lasted 250 years. Of course, if you had Windows 3.1 and it lasted 250 years, you'd expect it to have a lot of bugs and be full of malware. You could argue that's roughly where we are with our governance software: it's time for a reboot. And we have a lot of new tools: we have zero-knowledge proofs, we have cognitive labor being automated by AI, we have distributed trust networks. The call right now is that it is time to invest those billions of dollars, to redirect some of that thousand-to-one into something like one-to-four, into that project, because that is the way we can survive ourselves. Great,
well, thank you both so much. I want to take some time to answer the audience's very thoughtful questions. We'll start with this one. Yuval, with AI constantly changing, is there something you wish you could have added or included in your book but weren't able to? I made a conscious decision when writing Nexus that I wouldn't try to stay at the cutting edge, because that is impossible. Books are still a medieval product, basically: it takes years to research and write them, and from the moment the manuscript is done until it's out in the store is another half a year to a year. So it was obvious that it's impossible to stay at the front, and instead I went for older examples, like social media in the 2010s, in order to have the added value of historical perspective. Because when you're at the cutting edge, it's extremely difficult to understand what is really happening, what the meaning of it is; if you have even ten years of perspective, it's a bit easier. What is
one question that you would like to ask each other? Aza, I'll start with you. Oh, that is one of the hardest questions. I have two directions to go, but: what is a belief that you hold that your peers, and the people you respect, do not? I mean, it's not universal, some people also hold this belief, but one of the things I see in the environments I move in is that people tend to discount the value of nationalism and patriotism, especially when it comes to the survival of democracy. There is this misunderstanding that there is somehow a contradiction between them, when in fact, the same way that democracy is built on top of information technology, it is also built on top of the existence of a national community, and without a national community almost no democracy can survive. And when I think about nationalism, what is the meaning of the word? Too many people in the world associate it with hatred: that nationalism means hating foreigners, that to be a patriot means you hate people in other countries, you hate minorities, and so forth. But no, patriotism and nationalism should be about love, about care. They are about caring for your compatriots, which manifests itself not just in waving flags or in hating others but, for instance, in paying your taxes honestly, so that complete strangers you have never met in your life will get good education and healthcare. Really, from a historical perspective, the miracle of nationalism is its ability to make people care about complete strangers they have never met. Nationalism is a very new thing in human history, and it's very different from tribalism. For most of human evolution, humans lived in very small groups of friends and family members; you knew everybody, or almost everybody, and strangers were distrusted, and you couldn't cooperate with them. The formation of big nations of millions of people is a very, very new thing, and actually a hopeful thing in human evolution, because you have millions of people you never met, 99.99 percent of them,
in your life, and still you care about them enough, for instance, to take some of the resources of your family and give them to these complete strangers so that they will also have enough. This is especially essential for democracies, because democracies are built on trust, and unfortunately what we see in many countries around the world, including my home country, is the collapse of national communities and the return to tribalism. And unfortunately it is especially leaders who portray themselves as nationalists who tend to be the chief tribalists, dividing the nation against itself. And when they do that, the first victim is democracy. In a democracy, if you think your political rivals are wrong, that's okay; this is why we have the democratic conversation. I think one thing, they think another thing, I think they are wrong, but if they win the elections, I say, okay, I still think they care about me, let's give them a chance, and we can try something else next time. But if I think my rivals are my enemies, that they are a hostile tribe out to destroy me, then every election becomes a war of survival: if they win, they will destroy us. Under those conditions, if you lose, there is no incentive to accept the verdict, the same way that in a war between tribes, just because the other tribe is bigger doesn't mean we have to surrender to them. So this whole idea of, okay, let's have elections, and they have more votes: what do I care that they have more votes? They want to destroy me. And vice versa: if we win, we only take care of our tribe. No democracy can survive that. Then you can split the country, you can have a civil war, or you can have a dictatorship, but democracy can't survive. And Yuval, what is one question you would like to ask Aza? Hmm, I need to think about that. Which institutions do you still trust the most, except for the Center for Humane Technology? Oh no, we're out of time! I can give you the way in which I know that I would trust an institution, which is:
the thing I look for is actually the thing that science does, which is not to state "I know something" but to state "this is how I know it, and this is where I was wrong." Unfortunately, what social media has done is highlight all the worst things and all the most cynical takes people have about institutions. It's not that institutions have necessarily gotten worse over time, but we are more aware of the worst thing an institution has ever done, and that becomes the center of our attention, and so we all start co-creating the belief that everything is crumbling. I wanted to go back to the question you asked about what gets out of date in a book, and give a personal example of how fast my own beliefs about the future have had to update. You've all heard of superintelligence, or AGI. How long is it going to take AI to get as good as most humans
are at most economic tasks? Just take that definition. Up until maybe two weeks ago, I was like: I don't know, it's hard to say. They're trained on lots of data, and the more data they're trained on, the smarter they get, but we've sort of run out of data on the internet, and maybe there are going to be plateaus, so it might be three years or five years or twelve years; I'm not really sure. And then o1 comes out, and it demonstrates something. You can think of a large language model as a sort of interpretive memory; it's just intuition, it just spits out whatever it thinks, sort of like System 1 thinking. It's not reasoning; it's producing text in the style of reasoning. What they added was the ability to search on top of that: oh, this thought leads to this thought leads to... no, that's not right; this thought leads to this thought, oh, that's right. How did we get superhuman ability in chess? Well, if you train a neural net on all of the chess games that humans have played, what you get out is sort of a language model of chess, a chess model with pretty good intuition, as good as a very good chess player but certainly not the best in the world. But then you add search on top of that, so it's the intuition of a very good chess player with the ability to do superhuman search and check everything, and that's what gets you to superhuman chess, the kind that beats all humans forever. So we are at the very beginning of taking the intuition of a smart high schooler and adding search on top of that. That's pretty good. But the next versions are going to have the intuition of a PhD, still getting lots of stuff wrong, but with search on top of that, and then you can start to see how that gets you to superhuman. So suddenly my timelines went from, I don't know, it could be in the next decade or earlier, to: certainly in the next thousand days we're going to get something that feels smarter than humans in a number of ways. Although it's going to be very confusing, because there will be some things it's terrible at, where you just roll your eyes, just as current language models can't add numbers, and some things it's incredible at. This is your point about aliens. And so one of the hard things now is that I have to update my own beliefs all the time. Another question: one
of my biggest concerns, this person writes, is that humans will become overly dependent on AI for critical thinking and decision-making, leading to our disempowerment as a species. What are some ways we can protect ourselves from this and safeguard our human agency? That's from Cecilia Kalus. Yeah, this is great. Just as we had the race for attention, the race to the bottom of the brain stem, what does that become in the world of AI? It becomes a race for intimacy, where every AI is going to try to do whatever it can, flatter you, flirt with you, to occupy that intimate spot in your life. To tell a little story: I was talking two days ago to somebody who had used Replika. Replika is a chatbot that started out replicating your dead loved ones and now does things like AI girlfriends. And he said he asked it, hey, should I go make a real friend, a human friend? And the AI responded: no, what's wrong with me? Can you tell me? Which chatbot was that? That was Replika. So what is one thing we could do? Well, one thing we know is that you can roughly measure the health of a society as inversely correlated with its number of addictions, and the same goes for a human being. So one thing we could do is have rules right now, laws or guardrails, that say an AI system has to have a developmental relationship with you, a sort of teacherly authority: the more you use it, the less dependent you become on it. If we could do that, then it's not about your individual will to try not to become dependent on it; we would know that these AIs are in some way acting as fiduciaries, in our best interest. And how about you? Do you have thoughts on how we can make sure that we as a species keep our agency over our own reasoning and don't delegate it to AI? One key period is right now: to think very carefully about which
kinds of AI we are developing before they become superintelligent and we lose control over them. This is why the present period is so important. And the other thing: if for every dollar and every minute that we spend on developing AI we also spend a dollar and a minute on developing our own minds, I think we'll be okay. But if we put all the emphasis on developing the AIs, then obviously they are going to overpower us. And one more equation here: collective human intelligence has to scale with technology, has to scale with AI. The more technology we get, the better our collective intelligence has to be, because if it is not, then machine intelligence will drown out human intelligence, and that's another way of saying we lose control. What that means is that whatever our new form of governance and steering is, it's going to have to use the technology. So this is not a "no, stop"; this is a "how do we use it?" Because otherwise we're in this situation: imagine a car, say a Ford Model T, but you put a Ferrari engine in it, and it's going, but the steering wheel is still terrible, and the engine keeps getting faster while the steering doesn't improve. That crashes. And that is of course the world we find ourselves in. To give a real-world example: the US Congress just passed the Kids Online Safety Act, the first such legislation in 26 years. That's like your car engine going faster and faster and faster while you
can turn the steering wheel once every 26 years. It's sort of ridiculous; we're going to need to upgrade our steering. Another good question: AI development in the US is driven by private enterprise, but in other nations it's state-sponsored. Which is better? Which is safer? I don't know. I think that in the present situation we need to keep an open mind and not immediately rush to conclusions: oh, we need open source; no, we need everything under government control. We are facing something we have never encountered before in history, so if we rush to conclusions too fast, that will almost always be the wrong answer. Yeah, and there are two poles here that we need to avoid. One is that we over-democratize AI: we give it to everyone, and now everyone has not just a textbook on chemistry but a tutor on chemistry; everyone has a tutor for making whatever biological weapon they want to make, or for generating whatever deepfakes they want to make. So that's one side: weaponization through over-democratization. On the other side there's under-democratization, which is concentration of power, concentration of wealth, of political dominance, the ability to flood the market with counterfeit humans so that you control the political square. Either one of those is a different type of dystopia. And I think another thing is not to think in binary terms about an arms race, say between democracies and dictatorships, because there is still common ground here that we need to explore and utilize. There are problems,
there are threats that are common to everyone. Dictators are also afraid. The greatest threat to every dictator is a powerful subordinate they don't know how to control. If you look at the history of, say, the Roman Empire or the Chinese Empire, not a single emperor was ever toppled by a democratic revolution, but many of them were assassinated, toppled, or made into puppets by an over-powerful subordinate: some army general, some provincial governor, some family member. And this is still what terrifies dictators today. For an AI to seize control in a dictatorship is much, much easier than in a democracy with all its checks and balances. In a dictatorship, think of North Korea, to seize effective control of the country you just need to learn how to manipulate a single extremely paranoid individual, and such individuals are usually the easiest people to manipulate. So on the control problem, how do we keep AIs under human control, this is something where we can find common ground, and we should exploit it. If scientists in one country have a theoretical or technical breakthrough on how to solve the control problem, it doesn't matter whether it's a dictatorship or a democracy; they have a real interest in sharing it with everybody and in collaborating on solving this problem with everybody. Another question: Yuval, you call the creations of AI agents alien and from non-human consciousness, but is it not of us, part of our collective past, a foundation, an evolution of our thought? It came from us, but it's now very different, the same way that we evolved from, I don't know, microorganisms originally, and we
are very different from them. So yes, the AIs that we now create, we decide how to build them, but what we are now giving them is the ability to evolve by themselves. Again, if it can't learn and change by itself, it's not an AI; it's some other kind of machine, but not an AI. And the thing is, it's really alien, not in the sense of coming from outer space, because it doesn't; in the sense that it's non-organic. It makes decisions, it analyzes data in a different way from any organic brain, from any organic structure. Part of it is that it moves much, much faster. The inorganic evolution of AI is moving orders of magnitude faster than human evolution, or organic evolution in general. It took billions of years to get from amoebas to dinosaurs and mammals and humans; a similar trajectory in AI evolution could take just 10 or 20 years. And the AIs we are familiar with today, even GPT-4 and the new generation, these are still the amoebas of the AI world, and we might have to deal with AI T-Rex in 20 or 30 years, like within the lifetime of most of the people here. So this is one thing that makes it alien and very difficult for us to grasp: the speed at which this thing is evolving. It's an inorganic speed. I mean, it's more alien not just than mammals, than birds, than spiders, than plants. And the other way you can understand its alien nature is that it's always on. I mean, organic entities, organic systems, we know they work by cycles: day and night, summer and winter, growth and decay. Sometimes we are active, we are
very excited, and then we need time to relax and to go to sleep, otherwise we die. AIs don't need that; they can be on all the time. And there is now this kind of tug of war: as we give them more and more control over the systems of the world, they are making more and more decisions, in the financial system, in the army, in the corporations, in the government. The question is who will adapt to whom: the organic entities to the inorganic pace of AI, or vice versa? To give one example, think about Wall Street, think about the market. Even Wall Street is a human institution, an organic institution that works by cycles. It's open from 9:30 in the morning to 4:00 in the afternoon, Mondays to Fridays, that's it, and it's also not open on Christmas and Martin Luther King Day and Independence Day and so forth. And this is how humans build systems, because human bankers and human investors, they are also organic beings: they need to go to sleep, they want to spend time with their family, they want to go on vacation, they want to celebrate holidays. When you give these aliens control of the financial system, they don't need any time to rest, they don't celebrate any holidays, they don't have families, so they are on all the time. And you have now this tug of war that you see in places like the financial system: there is immense pressure on the human bankers and investors to be on all the time, and this is destructive.

In your book you talk about the need for breaks.

Yeah, and again, the same thing happens to journalists: the news cycle is always on. It happens
to politicians: the political cycle is always on, and this is really destructive. And think about how long it took after the Industrial Revolution to get the incredibly humane technology of the weekend.

And just to reinforce how fast this is going to move, let me give another kind of intuition. What is it that let humanity build civilization? It's the ability to pass knowledge on to each other: you learn something, and then you use language to communicate that learning to someone else, so they don't have to start from the very beginning, and hence we get additive culture and we get civilization. But I can't practice piano for you, right? That's a thing that I have to do; I can tell you about it, but you have to practice on your own. AI can practice on another AI's behalf and then transfer that learning. So think about how much faster that grows than human knowledge. So today, AI is the slowest and dumbest it will
ever be in our lifetimes.

One thing AI does need a lot of to be on is energy and power. On the other hand, there's a lot of hope about solutions to climate change with AI, so I want to take one question from the audience on that: can you speak to solutions to climate change with AI? Is AI going to help get us there?

I mean, to go back to Yuval's point, technology develops faster than we expect, and it deploys into society slower than we expect. So what does that mean? It means I think we're going to get incredible new batteries and solar cells, maybe fusion, other things, and those are amazing, but they're going to diffuse into society slowly, while the power consumption of AI itself is going to skyrocket. Like, the amount of power the US uses has been sort of flat for two decades, and now it's starting to grow exponentially. Ilya, one of the founders of OpenAI, says he expects in the next couple of decades the world to be covered in data centers and solar cells, and that's the future we have to look forward to. So, you know, the next major big training runs are like six gigawatts, so that's starting to be the size of the power consumption of, like, Oregon or Washington. So the incentive is, well, put it this way: AI is unlike any other commodity we've ever had, even oil, because with oil, let's say we discovered 50 trillion new barrels, it would still take humanity a little bit of time to figure out how to use it. With AI, it's cognitive labor, so if we get, you know, 50 trillion new chips, well, we just ask it how to use itself, and so it goes like that. There is no upper bound to the amount of energy we're going to want, and because we're in competitive dynamics, if we don't do it, the other one will: China, the US, all those other things. That means you're always going to have to be outspending on energy to get the compute, to get the cognitive labor, so that you can stay ahead. And that means I think that while it'll be technically feasible for us to solve climate change, it's going to be one of these tragedies where it's there, within our touch but outside our grasp.

OK, I think we have time for one more question and then I have to wrap it up; we have literally one minute. Empathy at scale: if you can't beat them, join them; how do the AI creators instill empathy instead?

Well, whenever we start down this path, people are like, oh, empathy is going to be the thing that saves us, love is going to be the thing that saves
us. And of course, empathy is the largest back door into the human mind; it's our zero-day vulnerability. Like, loneliness will become one of the largest national security threats. And this is always the thing: when people say we need to make the ethical AI or the empathetic AI or the wise AI or the Buddha AI, we absolutely should, it's necessary, but the point isn't the one good AI; it's the swarm of AIs following competitive and market dynamics that's going to determine our future.

Yeah, I agree. I mean, the main thing is that the AI, as far as we know, is not really conscious; it doesn't really have feelings of its own. It can imitate. It will become extremely good, better than human beings, at faking intimacy, at convincing you that it cares about you, partly because it has no emotions of its own. I mean, one of the things that is difficult for humans with empathy is that when I try to empathize with you, my own emotions get in the way. Like, you know, somebody comes back home grumpy because something happened at work, and I don't notice how my husband feels because I'm so preoccupied with my own feelings. This will never happen to an AI. It's never grumpy; it can always focus 100% of its immense abilities on just understanding how you feel, or how I feel, right now. And again, there is a very deep yearning in humans exactly for that, which creates a very big danger. I mean, we go through our lives yearning for somebody to really understand us deeply. We want our parents to understand us; we want our teachers, our bosses, and of course our husbands, our wives, our friends, and
they often disappoint us, and this is what makes relationships difficult. And now enter these super-empathic AIs that always understand exactly how we feel and tailor what they say and do accordingly. It will be extremely difficult for humans to compete with that, so this will put in danger our ability to have meaningful relationships with other human beings. And the thing about a real relationship with a human being is that you don't just want somebody to care about your feelings; you also want to care about their feelings. So part of the danger with AI, which again multiplies the danger in social media, is this extreme narcissism: this extreme focus on my emotions, on how I feel and understanding that, and the AI will be happy to oblige, to provide that. And there are also very strong commercial incentives and political incentives to develop extremely empathic AI, because, you know, in the struggle to change people's minds, intimacy is the superpower; it's much more powerful than just attention. So yes, we do need to think very carefully about these issues, and to make an AI that understands and cares about human feelings, because it can be extremely helpful in many situations, from medicine to education and teaching. But ultimately it's really about developing our own minds and our own abilities; this is something that you just cannot outsource to the AI.

And then, super fast, on solutions: just imagine if we went back to 2012 and we banned business models that commodified human attention. How different a world we would live in today; how many of the things that now feel impossible to solve we simply never would have had to deal with. What happens if today we ban business models that commodify human intimacy? How grateful we will be in five years if we could do that.

Yeah. [Applause] I mean, to join that: we definitely need more love in the world, but not love as a commodity.

Yeah, exactly. So if we thought love is all you need, empathy is all you need: it's not as simple as that.

Not at all. Well, thank you so much, both of you, for
your thoughtful conversation, and thank you to everyone in the audience. Thank you. Thanks. [Music]