He's one of the most popular writers of our times. It's great to be in conversation with Yuval Noah Harari, historian, public intellectual and bestselling author. Thank you very much for joining us at the Kerala Literature Festival. Welcome to India and welcome to KLF.

Thank you so much. It's a very kind invitation and I'm happy to be here with you.

To all the people at KLF who are unable to actually see Mr Harari in person, I hope that the questions and the answers will satisfy you. So we'll get directly into the interview. Nexus, your new book: do you believe it begins where Sapiens, sort of, ends?

Yes. Sapiens tells the story of how an ape from Africa took over the world, conquered the world, and Nexus picks up where Sapiens ends. It asks: if humans are so wise, if humans are so smart that we took over the world, why are we so stupid, why are we so self-destructive, that we are now undermining our own civilization in so many ways? There is the ecological crisis, there is the rise in international tensions that might lead to a third world war, and we are creating AI, the most powerful technology in history, which might get out of our control and enslave or destroy us. So the key question of the book is: if we are so wise, why are we so self-destructive? Of course this is a question that was asked many times in history, and there are many traditions which argue that there is something wrong in human nature that makes us self-destructive. Nexus gives a different answer: the problem isn't in human nature. Humans are mostly good by nature. The problem is in our information. If you give good people bad information, they make bad decisions. Humans almost never react to reality; we react to our image of reality, and our image of reality can be extremely distorted. Then the question comes: if indeed the main problem is information, why didn't our information improve as history developed? For thousands of years we have developed more and more sophisticated information technology: writing and printing presses, and then radio and television and computers and AIs. We should have seen an improvement in the quality of our information, but it's not happening. If you look at the modern era, you see that modern states are as likely as Stone Age tribes to fall victim to mass delusion and mass psychosis. People can believe the most harmful and cruel and ridiculous stories about the world. Why is our information not getting better? This is the key question of Nexus.

So when we read Sapiens we kind of feel happy being Homo sapiens, because you talk about the cognitive revolution. But when you read Nexus, you say humans are smart but also stupid and self-destructive, so it almost puts you down a bit.
Yes, we are easily deluded. This really goes back to the ancient Hindu and Buddhist view of the world: that the key problem of the world is not evil, it's ignorance, it's delusion. In the monotheistic religions, like Christianity and Islam, there is a belief in evil, but in Hinduism and Buddhism the main idea is that the problem is usually ignorance and delusion. And again, humans think that with all our technological development, especially in the field of information technology, we will be freed, liberated, from ignorance and delusion, and it's not happening. People sometimes think that AI will maybe tell us the truth about the world; again, it's unlikely to happen. The danger is that AI will trap us, imprison us, within a web of delusions even worse than anything we've seen before in history.

At many points in the book you say that whenever new technologies came, people made apocalyptic predictions, like 'this is perhaps the end of the world.' Did you attempt to do that, or were you resisting doing that when it comes to AI? Because in interviews you've also said that AI will lead to annihilation.

Again, it's not a prophecy, it's a warning. There is enormous positive potential in AI, there is enormous negative potential, and it's up to our decisions to choose which way to go. At the present moment I think we should not rush to judge too quickly. First of all, we simply need to understand the enormous power of this new technology. The one thing that everybody should know about AI is that this is not a tool, this is an agent. We are creating millions and billions of new agents and releasing them into the world. Previous technologies, if you think about information technology like the printing press: the printing press was a very powerful machine, but it could not make any decisions. A printing press cannot decide whether to print Nexus or another book; you always need a human being to decide, okay, let's print this book. And certainly a printing press cannot write books; a printing press cannot come up with any new idea. If you take a printing press and a stack of paper and lots of ink and lots of electricity and just sit and wait, it will never create a single new idea, a single new word. Now, AI is different from the printing press, from the radio, from television, from every previous technology, because it can do this. AIs can make decisions. Look at social media: if you go on your social media and you watch some video, who decided to show you this video? Very often the decision about which post to spread was made by an algorithm, by an AI. So AI can make decisions,
and it can increasingly create new ideas by itself. Already today AIs can write texts, they can compose music, they can draw images, they can create entire videos, and this is only the very beginning of the AI revolution. We haven't seen anything yet; the AIs of today are still very, very primitive. And yet, in my own field of writing, I think it's fair to say that even today AI writes better than most humans. The people who win the Nobel Prize in Literature still write better than AI, but I think AI already writes at a level similar to or even above that of the average person.

So in the book you're also not very much praising the printing press, for example, right? You do say that scientific innovation did not come because of the printing press; you question how the printing press spread the rumours about witches. So my question to you is: with the printing press there was a specific problem, where it helped people propagate misinformation and disinformation, and AI is just going to take that to a different level. Is that what you're saying, that at different times of the world we had different technologies?

Every time we invent a new information technology, it becomes easier to create and spread more information, but it becomes easier to create and spread both the truth and fictions and fantasies and lies. And the problem is that most information in the world is not the truth. The truth is a very, very small subset of all information. Why? For three reasons. First, the truth is costly. If you want to write something truthful, you need to invest time, energy, money, looking for evidence, comparing evidence, doing research. This is costly. Fiction is very cheap: you just write anything you want. Second, the truth is complicated, whereas fiction can be made as simple as you would like it to be.

And we can project fiction as truth too.

We can project fiction as truth, yes, and it's hard to tell the difference. People usually prefer simple stories over complicated stories, so they tend to prefer the fictional story to the true story. And finally, the truth is often painful. From the individual level of my own life to the life of a whole nation or a whole culture, there are many things we don't like to know about ourselves; there are painful truths. And fiction can be made attractive and flattering and as pleasing as you would like it to be. So in this competition between costly, complicated, painful truth and cheap, simple, attractive fiction, fiction tends to win. Most information in the world is therefore not the truth. If you simply create a new information technology that spreads more information, most of that information will be junk. If you want the truth, in addition to the technology
you also need to build institutions, like universities or courts or responsible newspapers, that do the hard work of separating the rare gems of truth from the ocean of junk information. And this is the key work: it's not inventing the technology, it's building the institutions.

So from what you tell me, what I understand is that when our institutions are biased, let's say we take the courts, we take media organizations, that bias is fed into the AI, right? So if we don't have a system where we are investing in truth through our universities and our media organizations, that will reflect on our AI too.

Yes, and if we have bad institutions, the AI will be used for bad purposes. But there is another difficulty with AI. Everything I've said so far was true of the printing press as well, and it was true of radio as well. You saw in the 20th century that radio could be used by democratic societies to spread the truth, and it could be used by totalitarian dictators to impose lies on people. This is an old problem, and it is also a problem with AI. But with AI we have, in addition, a new problem, which is that the AI itself can decide to spread lies and disinformation and delusions, and the AI can invent, can create, new harmful information, which radio couldn't do. If radio or the printing press spread harmful information, ultimately it came from the mind of some human being. But now, with AI, we'll have to deal with millions and potentially billions of agents that can generate texts and images and videos and music and so many other things, much more efficiently than human beings. For thousands of years we lived inside a human-generated culture. Everything we encountered, all the songs, all the poems, all the music, all the images, all the architecture, all the physical stuff, the chairs, the tables, the religious mythologies, the political ideologies, all came from the human mind. Now we have something on Earth which can create these things more efficiently than humans. So in 10 or 20 years, what would it be like, especially for young people, to be born into a world in which most music is created
by an alien intelligence, by AI, in which most texts don't come from the human brain? Even if you think about something like religion: I'll give the example of Judaism. Judaism is a religion based on texts, the holy texts of Judaism; it gives immense authority to texts, to the written word. And what Jews have been doing over thousands of years is reading texts and writing more texts about them, interpretations and commentaries and so forth. Now, so many texts were created in Judaism that nobody is able to read and remember all of them. Even the wisest rabbi cannot read and remember all the texts. Now AI can. For the first time in history there is an intelligence that can read and remember and analyze all the texts of Judaism, and it can start creating new texts, and it can even talk back with you. With a book, with a holy book, one of the problems is that books never talk back. You read the holy book and you have a question; the book can't tell you which interpretation is right and which is wrong. You have to ask some human.

So you're saying that, just as holy people became powerful because they interpreted the Bible in a certain way, the AI can now...

AI can maybe interpret the Bible better than any human being, because it has read all the texts of Judaism, or all the texts of Christianity, or all the texts of Islam, and can find patterns in them that no priest and no imam was ever able to identify. Let's say you go online, you have some question about the Bible or about God or whatever, and you get an excellent answer, and you don't even know whether this answer is coming from an AI or from a human priest.

But then I would have to do more research to find out whether that answer is right or wrong. The reason people go to AI is because they don't want to do that work.

Yes, and then, if you come to trust it, maybe AIs take over religion, because they become the experts, especially in religions that are based on texts. If authority is in the text, and AIs become the world's greatest experts on texts, they will take over the religions of texts, and they might even create new religions. They can write new holy books, and you don't even know who is an AI and who is a human. At least if you communicate online, via the internet, you don't know. Previously, say two years ago, if you had a conversation with somebody online, you knew for sure after a minute or two whether it was a human being or a bot; nobody could talk with you for hours and days unless they were a human being. Now it's no longer the case.
We will soon encounter, again, not one or two, but millions and billions of AI agents that are able to converse with us. Maybe they become our friends, and this gives them the power to shape our view of the world, our political ideas, our religious ideas, because if you want to change somebody's opinions, intimacy is the most powerful weapon. Previously, in the 2010s, in the previous decade, we saw a big battle for human attention on social media. There was so much information on social media, and humans have limited attention, so there was this battle over who would grab your attention, and we saw that content that spread hatred and greed and fear took over a lot of social media, because these are the things that grab our attention most effectively. Now, in the 2020s, the battlefront is shifting from attention to intimacy. AIs and algorithms are able to fake relationships and to create really intimate relationships with us, because they get to know us very, very well, and they also learn how to manipulate us emotionally. When two human beings interact, very often it's hard for me to understand your emotions, because I am distracted by my own emotions; maybe I'm sad or maybe I'm angry, so I don't really pay attention to what you're saying and how you're feeling. AIs don't have any feelings of their own. They can be 100% focused only on you and your feelings, and they could become better than human beings at understanding, and therefore also at manipulating, human emotions. And there are very strong business incentives, commercial incentives, for developing AIs that can fake relationships with human beings. So I think this is one of the big social dangers that we are facing with the AI revolution.

So, I'm not even joking, but when I told people that I was going to be interviewing Yuval Noah Harari, at least five people told me: don't read the book, just ask ChatGPT what questions you can ask him. I said, why would I do that? His book is about AI, of all things, and I wouldn't go to ChatGPT and ask. But this is what people are advising others: just turn to
AI, look for the questions. So I have one thing to ask you. I've heard you saying this, and even in your book you've written, that when good people get bad information, they do bad things.

Yeah.

But are you being too kind to people? Because information is not made in a vacuum; it is bad people who make bad information.

True.

So why are we... You've been excessively kind to human beings in one way in this book, I feel, where you're saying it's always bad information and good people.

No, of course we have a problem with bad actors who deliberately spread bad information, and we will have increasing problems with AI creating bad information. I do think that humans have, of course, a responsibility for what kind of information they consume. Having a very bad information diet is like having a very bad food diet: if you just eat a lot of junk food and you make yourself sick, the doctor will tell you, you know, it's partly your responsibility, and you should try to have a better food diet. Similarly, if you go on a very bad information diet and you just stuff yourself with too much junk information, which is full of hatred and greed and fear, it makes your mind sick. This is partly your responsibility, and you should at least try to have a better information diet. So I completely agree; I wouldn't take away all responsibility from human beings. But I'm just saying that if you look at huge historical disasters and you ask yourself how people could do such things, in many cases they did terrible things because they believed some very, very strange and bad stories about the world.

As I was telling you before the interview, we have a name for people in India who are not on the information diet: they're called WhatsApp uncles and WhatsApp aunties. We have T-shirts which you can print here saying 'I'm a WhatsApp uncle.' The other thing is, the book's main premise, for those watching, is about the naive view of information. So one thing that you say is that the solution for misinformation or disinformation cannot be more information.

Yeah, that's a common mistake, that people say,
'oh, if there is bad information, let's just have more information.' Again, most information is not the truth. If you just flood the world with more and more information, it doesn't make people have more knowledge. What we really need, the most important thing for societies and for individuals, is self-correcting mechanisms. A self-correcting mechanism is a way for an individual or an entire society to identify and correct its own mistakes, because everybody is fallible, everybody makes mistakes; the question is whether you have mechanisms to identify and correct your mistakes. When a child learns how to walk, the child mostly learns by self-correction. Maybe they get some advice from parents and teachers, but mostly you get up, you try to walk, you fall down, you understand you made some mistake, you try to do it differently, and slowly you learn how to walk. Similarly in science: scientists always assume that they might make mistakes. No scientist is perfect; no scientific book is like a holy book that makes no mistakes. So scientists constantly look for the mistakes in the existing theories, and almost the only thing that scientific journals and papers publish is corrections to the previous theories. And similarly in politics: the whole idea of democracy is to have a self-correcting mechanism. You vote for somebody, you give them power for a limited amount of time, on condition that they afterwards give the power back, and the public can say, oh, we made a mistake last time, now let's give it to somebody else.

But what if there is no self-correcting mechanism? How do we correct that information then? You're saying that giving more information doesn't work.

It doesn't work. Then you have something like a dictatorship. A dictatorship is a political system that has no self-correcting mechanism. No matter what mistakes the dictator makes, there are no elections, it's impossible to replace the dictator, and the dictator controls all the newspapers, all the radio stations, all the internet channels, and constantly bombards the public with more information saying that the dictator is a genius who never makes any mistake, and that any problem is because of enemies or traitors. No mistake is ever corrected in such a system.

So, I'm a journalist; for me it's not just about information, it's also about truth.

Yes.

So what about bombarding with truth?
If there's information, disinformation and misinformation, would that help, in your perspective?

The key question is: how do you know which sources of information to trust? Everybody is telling you, 'what I'm saying is the truth.' How do you know whom to trust? And here again, the best advice I can give is: trust those institutions that have strong self-correcting mechanisms. For instance, ask the institution to tell you about mistakes it made in the past. An institution that says 'we never make mistakes, we are always correct' is not trustworthy, because everybody makes mistakes, everybody is fallible. One of the things that makes me trust science is that science is often quite happy to admit its own mistakes. The most important prizes in science, like the Nobel Prize, are given not to people who repeat what the previous generation said, but to the scientists who discover mistakes or missing parts in the previous scientific theories and tell the public about them. So science constantly exposes its own mistakes, and this is why I tend to trust scientific institutions.

So, for example, in India, and I don't want you to particularly answer about India, but rather whether this is a phenomenon worldwide: the government has been saying that the history which has been taught to us till now came from a certain liberal, progressive perspective, which also did not speak about Hinduism, the religion here, and about what happened when the Mughal and other kings came, right? So there's been a tweaking of our textbooks.

Yeah.

But that's their version of the truth, which they are now imposing on a large population. So then it becomes truth versus truth: one person's truth or the other person's truth.

The question is always: what evidence do you have, and is it okay for people to voice other opinions and look for evidence that supports other opinions? That's the key test. In a dictatorial country, in a totalitarian regime, it is simply against the law, or you can go to jail, if you question the official narrative, or vigilantes can come and harm you, and this is a very bad sign. The sign of a good system is that people are allowed to voice their views. Democracy is a conversation, and a conversation can only happen if there are several voices. So if you talk about the history of India, for instance, it's perfectly legitimate that different people have different views about what happened in the empire or in previous eras of Indian history. And the question is: is it okay, is it lawful, for different people to voice these different opinions and to provide evidence, and for people to judge on the basis of the evidence being provided? Again, the most basic thing in science, including in history, is evidence. Evidence is the key word.

Evidence is the key word.

Yeah, and evidence, again,
is not something that you invent. You go to an archive and you find ancient documents; you dig in the ground and you find ancient archaeological remains to prove your point. Another important thing is that every nation in the world is usually very keen to present itself in the best possible light. So if you get a version of history that presents your nation as perfect and says that all the problems are because of somebody else, there is good reason to distrust it, because every nation has dark episodes in its history; every nation has made mistakes and committed crimes. As a historian, I would say that the way I evaluate the maturity of nations, in the way they understand their history, is this: they are mature if they are able to admit and accept their mistakes and crimes. It doesn't mean that you start hating yourself. It's like with an individual: the fact that I'm able to admit 'I made a mistake, I caused pain to this person, I shouldn't have done this' doesn't mean I start hating myself and saying, oh, I'm the worst person in the world, I don't deserve anything. No, I'm human, I made a mistake, and I'm trying to learn from it. It should be the same at the level of nations.

So I'll just close the thread on AI itself. You are one of the global voices who have always spoken and warned against the advancement of AI; you even signed an open letter last year. So my question to you is: what is your fundamental problem with AI? Is it the humans wielding it? Is it the technology itself? Or is it that it's developing too fast and we are not able to understand what's happening?

It's all of this. Again, with every technology there is always the danger that in the wrong hands it will be used for wrong purposes, but with AI there is the new danger that the AI itself is an agent that might escape our control and start making decisions and inventing new ideas and new products that would harm us. Now, this is not inevitable. Humans are extremely adaptable, but we simply need time to adapt. So I'm not saying let's stop all development of AI. This is not going to happen, and it is also undesirable,
because there is also immense positive potential in AI.

So what do you suggest?

We basically need to slow down, invest more in safety, and give human societies time to adapt. I'll give a historical analogy. If you look at the last big technological revolution, the Industrial Revolution of the 19th century, the problem with the Industrial Revolution wasn't that the technology was evil. Steam engines, trains, electricity, all these things are not evil. The problem was that people did not know how to use them, because they had no historical models; there was no model for how to build an industrial society. So people experimented, and many of these experiments led to terrible consequences. One big experiment was imperialism. When the Industrial Revolution erupted, a lot of people said that the only way to build an industrial society is to build an empire. Why? Because you need to control the sources of raw materials and the markets for the industry.

Like the English.

Yes. So any country that industrialized, first Britain, but then also France and Belgium and Germany and Japan, they all built empires, because people thought this was the only way to build an industrial society, with terrible consequences for hundreds of millions of people, not just in India but in many other parts of the world. Another experiment was totalitarian regimes. People thought the only way to build an industrial society was to build a totalitarian regime, like Nazi Germany or like the Soviet Union, because they thought the only way to control the immense power of industry was for the government to control everything: the economy, society, culture. Totalitarianism. Now we know that these were mistakes, that there are better ways to build industrial societies, but people didn't know, and the danger is...

Any evidence of slowing down?

No, there is no evidence of that. It's just going faster and faster. You talk with many of the people who lead the AI revolution, and they tell you: we are aware of the danger, but we can't slow down, because if we slow down, we are afraid our competitors will not slow down, and then they will win the AI race and dominate the world. And you talk with the competitors and they tell you the same thing. So essentially this is a problem of not enough trust between humans. Now, the paradox of the AI revolution is that the same people who tell you 'we must move faster because we can't trust the other humans,' when you ask them, 'but can you trust the AIs you are creating?', they say yes. So they have no trust in humans, but they do think they can trust the AIs, and this is very dangerous and very strange, because we have thousands of years of experience with human beings. We know that humans can lie and cheat and deceive, but we also know how to build trust between humans. With AIs we have no experience.
We know that AIs can also lie and cheat and deceive, but we don't understand how a society of millions of AIs interacting with each other would look, whether we would be able to trust it, and whether we could prevent it from doing very dangerous things. We have no experience of that.

So, something very interesting that struck me in the book is that you describe how, in football, all nations decided that players should not use steroids.

Yeah.

If one country had wanted to actually do it, it could have, but everybody listens to that advice, because that's how the sport can go forward. So are you suggesting that if we have to rein in AI, at some point there has to be a global convergence of thought, that every country has to agree to it?

This is the idea, but at the present moment it seems almost impossible, especially because the country that is leading the AI race, the United States, has just got a new administration which is totally against any regulation or any global cooperation. So it seems very unlikely that we will have some kind of agreement like that. For me, the most important thing at this moment is simply to get more people to understand what is happening and to join the debate. There is a debate right now about what to do with AI, maybe the most important debate in history. The decisions about developing AI are really decisions about developing a new species that might take over the planet. At the present moment the key decisions are being made by a very small number of people in just a few countries, mostly the United States and China, because they are the only ones who really understand what is happening. My aim with Nexus, and with interviews like this, is to make more people around the world understand what is happening, so that they can also join the debate and voice their opinions and their views, in the hope that with more people engaged we can make better decisions. Whether this will happen or not, we'll see in the future.

Because you mentioned America and the new administration: one thing that happened recently is Elon Musk taking over X and then camouflaging it as a free-speech endorsement, where anybody can say anything. That's created its
own problems. Sitting in India, when I use AI I always feel that the AI, because it's an American AI, does not understand India. And then I start fearing: if it does start understanding India, what sources will it take from here? What is the bias it's going to feed back to us?

AI is not some kind of perfect machine that will tell us the truth about the world. AI can be as biased as human beings; it depends on the information you feed it and on how you build it. Again, the decisions of the engineers who are now creating these AIs will shape the world for generations to come, and you can create very different kinds of AIs. Now, all this talk about freedom of speech is in many ways misleading, because only humans have freedom of speech, but on platforms like X, like TikTok, like many others, the ones who actually control the platforms are AIs. I don't think that companies should censor what humans say on social media; we should be very careful about that. But so many people are saying so many things, and the ones who choose which posts to promote, who amplify the message, are algorithms, not human beings, and there should not be any protection of freedom of speech for the algorithms, because the algorithms don't have freedom of speech.

So you're all for banning algorithms and bots?

No, not for banning them, but for regulating them. One key regulation is that an AI should never pretend to be a human being. If you are interacting with somebody online, you should be able to know whether it is an AI or not. It's okay to interact with AIs, but on condition that they reveal that they are AIs; otherwise they could manipulate humans on a scale never seen before in history. So this is one key regulation. The other key regulation is that companies should be liable, responsible, for the actions of their algorithms.

Which they never will be.

It is up to us as citizens to demand that governments make the companies responsible. We have this in all the other industries: if a company produces a medicine which has side effects harmful to people, which causes disease, then you can take
this company to court and you can get damages, and maybe the managers, the executives, go to jail if it was a very big disaster. We should have the same thing with AI and other technology companies: if an AI causes some disaster, a social disaster or some other kind of disaster, the company should be responsible for it.

I can go on and on asking questions, but he has to go, so I'm going to wrap up with just two or three short questions. One is that I want to know: do you really not use a smartphone?

I have a smartphone, but it's not on me all the time.

No, but there are many services that you can no longer use without a smartphone, because the internet assumes you have a phone.

For many years I just didn't have a phone. Then it became too difficult, because many services, like the bank or the doctor, simply demanded that I have a phone. So I keep it, but I try to use it instead of being used by it. It's not on me all the time, I'm not addicted to it, I'm not checking it all the time; I try to use it as a tool.

And do you meditate for two hours every day?

Yes, this I do, and I'm now in India actually to sit a very long meditation retreat of 60 days, a Vipassana retreat.

For those unaware, that's a meditation where you cannot speak.

I speak enough for the rest of the year, so I can afford 60 days of no speech.

Okay, my final question to you before we wrap up. We are sitting in India, which saw elections recently. We are a democracy, but we are also a democracy which is sliding in the rankings. What do you have to say to democratic countries where we also see this kind of bombardment of information impacting the democracy?

Democracy, in essence, is a conversation between a large number of people which has strong self-correcting mechanisms. You need to keep the conversation going, and to have a conversation you must have many voices. If there is only one voice which is allowed to be heard, that is not a conversation; that is
a dictatorship, a monologue, a dictate. The other thing is that, to be a democracy, a country must have self-correcting mechanisms, especially for the government: if the government makes a mistake, there is an independent media, there are independent courts, that can expose and correct the mistake of the government, and ultimately there are free and fair elections in which the people can replace the government. As long as you have these mechanisms, democracies are safe. You can have different governments with this policy or that policy, and experiment; as long as there are these checks and balances and the ultimate ability to replace the government, the democracy is safe. The danger in every democracy, and this goes back thousands of years, is this: in a democracy you give one person or one party power for a limited duration, on condition that they afterwards give it back to the people, and the people can make a different choice next time. The danger is, what if you give power to someone who then doesn't give it back? They corrupt the institutions during the time that they are there; they use the power they have to take over the institutions, to rig the elections, to control the media, and then you can never replace them. That was always the big danger in democracy, and there is no perfect solution. You can have strong institutions, but a government ultimately is able to corrupt all these institutions. So people should make wise choices and be very careful not to give power to people who might not want to give it back. Look at the US: we are now in this transition between the Biden administration and the Trump administration, and the amazing thing is that Biden and the Democrats are extremely afraid of Trump, some of them really hate Trump, and nevertheless they are taking all the power and giving it peacefully to Trump. There is no talk of, okay, Biden still controls the army, Biden still controls the government, why not have a coup or some kind of mechanism to keep the power? No, they are taking all the power in the most powerful country on Earth and giving it peacefully to their political rivals. This is the perfect example of democracy in action. The big question is: what will Trump do? We saw in 2020 that this is a person who doesn't like to give up power; even when he lost the elections, he claimed that he had won them. So this is a very dangerous gamble by the American public, to give power to somebody like Trump, who might not give it back, and we'll see what happens.

We'll see what happens. Let's not give ideas to Biden either. Thank you very much. So, for democracy we need checks and balances; that's the final word. Thank you very much for joining us at KLF. I'm not sure how to wish you a happy meditation, but hopefully you have a good Vipassana experience.

Again, thank you.