Can AI Actually Create? Yuval Noah Harari on Artificial Intelligence (Part 2)

How To Academy
Join us for an engaging discussion featuring global bestselling author Yuval Noah Harari as he explo...
Video Transcript:
So let's go to AI and how it fits with this. That's partly because I think people, even if they don't subscribe to the naive view that with each iteration or generation things get better, even if they understand the complexity you've laid out, would still imagine that AI is part of that continuum: it's just the latest form of machinery that, maybe instead of spreading more and more truth, spreads more and more disinformation, in the way that newspapers or radio or the printing press did before. You set out that actually it is categorically different, that it's not part of a continuum with those earlier forms. Just explain to us why you see it as being in a new category.

Well, AI is different from the atom bomb, it's different from the printing press, from every previous technology, because it's the first technology in history which is not a tool in our hands. It is an independent agent. It can make decisions by itself, it can invent new ideas by itself, it can learn and change by itself. So what is happening right now is not the invention of another tool; it's the addition to human society, to the planetary society of Earth, of millions and potentially billions of new nonorganic agents, which will increasingly make decisions about us and about the world as a whole, and which will invent more and more new things: new medicines, also new weapons, also new types of financial devices, also new theories about science, also new mythologies. And before people rush to conclusions, is it good, is it bad, what are the dangers, what are the potential benefits, it's important to slow down our thought process a little. Don't rush to judge it; just try to understand what we are facing. We are facing something that we have never encountered before in history.

I like to think about the very word, or the acronym, AI not as an acronym for artificial intelligence but as an acronym for alien intelligence. Alien not in the sense that it's coming from outer space; it's not, it's coming from California and China and these places. Alien in the sense that it processes information and creates new things and makes decisions in a really alien way,
in a very different way than humans do it, in a very different way than animals or any organic entities do it. It's like a nonorganic alien invasion of planet Earth.

Just explain that point about how it, as it were, thinks differently, or how it processes information differently, because some of the people on the other side of this argument say it's not really creating new ideas, it's just engaged in a sort of probabilistic rearrangement of other existing ideas. In other words, as a simple way of understanding it, it's like autocomplete: it's only guessing what the next word is going to be, and we shouldn't dignify that by calling it creating something new, generating a new idea.

The question is how we understand human creativity. I write a book; what do I do? For a couple of years I read other books, I read articles, I talk to people, and I take an idea from here and an idea from there and put it together, and it's a new book. How is this different? And if you think about autocomplete, think about the sentences in our head. Brain science still doesn't understand how we complete a sentence. When I now speak this sentence, I don't know how it will end. I start it, and when I look at what's happening in my mind, I see the words forming and coming out of my mouth, and I don't know how the sentence will end. We don't really understand what's happening there. So is AI really doing something fundamentally lesser than what humans are doing?
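[Editor's note: as a concrete illustration of the "autocomplete" framing discussed above, here is a minimal, purely illustrative Python sketch of next-word prediction from counted word pairs. It is not anything described in the conversation and is far simpler than a real language model; the toy corpus and all names are invented for the example.]

```python
from collections import Counter, defaultdict

# Toy corpus standing in for the vast text a real model would be trained on.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Count how often each word follows each other word (a simple bigram table).
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def complete(prompt, length=4):
    """Greedily extend the prompt with the statistically most likely next word."""
    words = prompt.split()
    for _ in range(length):
        candidates = following.get(words[-1])
        if not candidates:  # no data on what follows this word
            break
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

# Prints a greedy continuation of the prompt, one most-likely word at a time.
print(complete("the cat"))
```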
Just now, the Nobel Prize in Chemistry went to the creators of AlphaFold, which predicts how proteins fold. What took humans years of research to understand, the folding of a single protein, this AI has done in a much, much shorter time for hundreds of thousands of proteins. To me that looks like creativity. Again, you can argue about artistic creativity, but in terms of creating something new in the world: there was a world where this was impossible to do, and now it is possible. There was a world without a certain video, a certain image, and now it is there. Most of the things that humans create are not very impressive, and most of the things that AI creates are not very impressive, but it is constantly creating new things, unexpected things, things that people couldn't do by themselves.

And the last thing to take into account is that it is just beginning. The AI revolution is basically ten years old, more or less. We haven't seen anything yet. The AIs of today are still the very, very primitive first stages in the evolution of nonorganic intelligence, which might continue for centuries, for thousands of years, for millions of years. But part of what makes it alien is that it is moving much, much faster than organic evolution. Organic evolution took about four billion years to get from the first microorganisms to dinosaurs and then mammals and finally humans. For the evolution of AI to cover an analogous span could take just a few decades. The AIs we know today, say ChatGPT or AlphaGo or AlphaFold, we should think of as the amoebas of the AI evolutionary trajectory, but we might encounter the dinosaurs within our own lifetime. And if ChatGPT is the amoeba, what does the AI T. rex look like, and how do we incorporate millions and billions of AI T. rexes into our society? Or isn't it almost: how do they adapt and absorb us into their society? That's the big question.

And when people fear AI, they have in mind the kind of Hollywood science fiction scenario, the Terminator, the big robot rebellion,
the moment when the robots decide to rebel against us and destroy us. And because the technology is not there, people tend to be complacent. And certainly the technology is not there; the AIs of today, and even of the coming few years, are incapable of building a robot army and deciding to rebel against us. But they don't need to, because we are increasingly giving them control of the levers of power in the world. I think that, in a way, Kafka is a better guide to the potential AI dystopia than the Terminator or Hollywood.

The world is run by bureaucracies, and if you think about the power of bureaucracies and bureaucrats, think for instance about the power of a lawyer. If you take the best corporate lawyer in the world, take her out of the bureaucratic system and drop her in the African savannah, she's powerless. She can't do anything; she might manage something small, but she's weaker than any chimpanzee or lion or elephant or hyena. Yet within the bureaucratic system she is more powerful than all the lions put together. Take all the lions of the world and put them here, and take this one lawyer: within the bureaucratic system she is more powerful, because she knows how to pull the levers of power in a way that the lions don't. And this is the power that AI is now gaining. If you take AlphaFold or AlphaGo or the next generation of AI and just drop them in the world and say, let's see you organize the big robot rebellion, they can't. They can't just say, okay, let's start mining iron and building robots. They can't. But they don't need to. They are already being given more and more power within the bureaucratic systems that we have constructed.

And let me give one example, so it doesn't remain abstract. The first place we really saw it big time was with social media: the power to control the attention of billions of people around the world. Such immense power, the power to control the news cycle, what people will be talking about. Previously this power was held by some of the most important people in the world: news editors. Think about the modern age and the power of human news editors, the editors of the Guardian, of the New York Times, of the Wall Street Journal. Lenin, before he was dictator of the Soviet Union, his one job basically was editor of Iskra. Mussolini's rise to power: he was first a journalist at the socialist newspaper Avanti!, then chief editor of a fascist newspaper, then dictator of Italy. So this was the ladder of promotion: journalist, editor, dictator. Boris Johnson was editor of The Spectator. And these are not one-off examples; these are very important positions. So think: who are the editors of the most important information platforms, the news platforms, in the world today? What are their names? They don't have names, because they are algorithms. The most important news outlets today are Twitter and Facebook and YouTube and WeChat and Weibo. And who edits them? Who decides what you see at the top of your Facebook news feed? Who decides what will be the next recommendation on TikTok or on YouTube? An algorithm. That's so much power,
and this power, we have already seen, was enough, even with very, very primitive AI, to destabilize every single democracy on the planet.

There's one very specific example which you write about, which I think is really instructive, because you say we don't have to imagine the science fiction, 2001 or the Terminator or RoboCop or whatever; instead we can go to the real world, and I think you've explained how this happened: the case of Facebook in Myanmar.

Yeah. This is already almost ten years ago, so we have some historical perspective on it. What happened in Myanmar in the middle of the 2010s is that there had been ethnic tensions for generations between the majority Burmese Buddhist population and the minority Muslim Rohingya population, but this exploded in the middle of the 2010s into an ethnic cleansing campaign, in which thousands of Rohingya were killed, massacred, tens of thousands were raped, and hundreds of thousands were expelled; you now have these massive refugee camps, mainly in Bangladesh. And this was fueled to a large extent by a propaganda campaign on social media, primarily on Facebook. Facebook has often been accused of responsibility for this atrocity, and Facebook's basic defense is: we didn't do it, we are just a platform. All the propaganda against the Rohingya, that they are all foreigners who entered Myanmar in recent generations and are not part of this place, that they are all terrorists, that they want to destroy us, all of this was written by human beings, not by Facebook; we are just the platform. You can't accuse radio technology of the genocide in Rwanda, you can't accuse the printing press for the fact that the Nazis printed Mein Kampf, so why do you accuse us? We are just the platform.

And the thing is that they are not just the platform. They are also the editors, they are also the curators. People in Myanmar in the middle of the 2010s produced enormous amounts of different kinds of content. There were definitely people producing hate-filled conspiracy theories against the Rohingya, but other people were producing sermons on compassion, and other people were producing biology lessons.
And then the big question was: who are the editors? Who decides what gets attention? Who decides what content to recommend to people? And this position was filled by the Facebook algorithms. The algorithms were given a very simple task by the Facebook management in California, who, again, had no ill feelings against the Rohingya; they didn't even know the Rohingya existed, they didn't speak Burmese, they were completely cut off from the reality on the ground. They gave the algorithm a seemingly very simple and benign task: increase user engagement. Engagement sounds good, it's good to be engaged, and in simple English it means: make people spend more time on the platform. Why? Because the more time people spend on the platform, the more advertisements we can show them, the more data we can harvest from them, and the investors are very impressed when they see user time go up, so our stock also goes up. This was the task given to the algorithms. And then the algorithms experimented; they experimented on millions of human guinea pigs. How do you increase user engagement? And they discovered that the easiest way to keep people glued to the screen is to press the hate button in the human brain, to press the greed button, to press the fear button. And this is what they did: they deliberately promoted, recommended, even autoplayed this hate-filled content. This of course was not the only reason for the atrocities, but it was a main reason. It's maybe the first major historical event which was in part, maybe a small part, maybe one percent, the result of decisions made by a nonhuman intelligence.
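[Editor's note: a deliberately simplified, hypothetical sketch of the dynamic described above, in which a feed ranker whose only objective is predicted watch time ends up surfacing the most inflammatory content. Nothing here is Facebook's actual system; the data, numbers, and names are invented for illustration.]

```python
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    predicted_minutes: float  # estimated time a user will spend if shown this post

# Invented examples mirroring the kinds of content mentioned in the transcript.
posts = [
    Post("Sermon on compassion", 1.2),
    Post("Biology lesson", 0.8),
    Post("Outrage-filled conspiracy theory", 4.5),  # keeps people glued to the screen
]

def rank_feed(posts):
    """Rank purely by the benign-sounding goal: maximize time on platform."""
    return sorted(posts, key=lambda p: p.predicted_minutes, reverse=True)

for post in rank_feed(posts):
    print(f"{post.predicted_minutes:>4.1f} min  {post.title}")

# The objective never mentions hate, yet hateful content floats to the top
# whenever it happens to be what holds attention longest.
```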
And that was even when the AI was at the pre-amoeba stage; ten years ago we were still at the stage of RNA floating in the swamp.

But I want to ask you about intention. As you explained, there was no editor or executive of Facebook sitting in California saying, how can I turn the people of Myanmar against each other and cause a genocide. Absolutely not; the algorithm developed a kind of momentum of its own in that case. I'm thinking, almost ethically, about where blame lies, whether we need there to be intention for us to be scared of this thing, or whether the lack of intention is actually what makes it scary. And let me give you one example. You imagine in the book the idea of AI engineering a new pandemic virus, or at least that being a possibility. That sounds terrifying; it's not the Terminator, but it's terrifying in a different way. What's the scenario? Absent an intention to do that, how could it come about, and how might we stop it?

The intention can be provided by the humans. It could be a group of terrorists telling an AI: produce for us a new deadly virus to start a pandemic. It could be a state actor. Or it could be somebody who doesn't understand, who gives the AI a completely different mission. Again, as in Myanmar, nobody at Facebook headquarters thought that if we instruct the AI to increase user engagement, this is somehow going to lead to an ethnic cleansing campaign. The scary thing is this mismatch, the unintended consequences. And ultimately this is true about humans as well. If you look at human motivation, most humans basically pursue pleasure and try to avoid pain; we pursue pleasant feelings and try to avoid painful feelings. And how does this lead to war and genocide and all the other terrible things? It's very difficult to see the connections there, which is also why it's difficult to anticipate and to prevent the harm.
And maybe to speak in an even broader sense: if you go to places like Silicon Valley and talk about these things, they will basically tell you that every time a new major technology comes along, people have all these scary scenarios in their heads, and usually they don't materialize, and the world becomes a better place. One of their favorite examples is the Industrial Revolution: look, when people invented trains and cars and things like that, you had all these doomsday scenarios that it would destroy our brains or destroy our society or whatever, and look, the world is much better as a result of the Industrial Revolution compared to the situation in 1800. But the thing is that in history, the problem is often not with the destination; the problem is with the path, with the way there.

If you just take two points in human history, 1800 and the year 2000, and you compare them, then I think it's fair to say that for most humans on Earth, if you ignore the ecological disaster, which we still don't know how to handle, if you ignore the ecological damage and just look at the situation of humans everywhere, not just Britain or the US but also India and Indonesia and Brazil, and you compare living conditions in 1800 and in 2000, it's much better. Child mortality is down, mortality of women in childbirth is down, less disease, less hunger; everything improved. But when you actually look at the history of the Industrial Revolution in the 19th and 20th centuries, you see it was a roller coaster. It was not a straight line. And the problem was not that the technology was evil; there was nothing evil about trains or about cars. The problem was that it changed the world dramatically, and people didn't know how to adapt to the change. People didn't know how to construct benign industrial societies, because they had never encountered this in history. They had no blueprint, no model. So they experimented, and some of the experiments were horrible.

The first major experiment in how to build an industrial society was modern imperialism, because many people thought in the 19th century that the only way to build an industrial society was to build an empire and have colonies. Why? Because, in contrast to agrarian societies, industry requires raw materials and markets, and if we don't control the sources of raw materials and the markets, other countries could squeeze us out, so we must build an empire. And you see that even very small countries like Belgium, when they industrialized, built an empire. We look back and we say this was a terrible idea; we know it now, but they didn't, and hundreds of millions of people all over the world paid a terrible price because of that.

Then you had Soviet communism, another great idea about how to build an industrial society. People thought that this was the way to do it. In the 1920s, 30s, and 40s, a lot of people around the world, not just in the USSR and in Germany, thought that the only viable industrial society would be a totalitarian society, because only a totalitarian system can harness and control and make the best of the immense new powers of industry. So the big debate was whether it would be fascist totalitarianism or communist totalitarianism, but it had to be totalitarianism. And again, we look back and we say, oh, this was a big mistake, but people just didn't know.

And the big danger with AI is, again, not the Hollywood scenario. It is something like a repeat of the Industrial Revolution: that basically, because of ignorance, the lack of models, and the speed at which this is developing, people will have some very bad ideas about how to build an AI-based society, and maybe we'll have a new wave of imperialism and totalitarianism, and maybe we'll have even worse things.