Mo Gawdat: AI Today, Tomorrow and How You Can Save Our World (Nordic Business Forum 2023)

Video Transcript:
[Music] Amazing! Thank you. Well, thank you so much.
Welcome! I could not ask for a better preparation for my talk. This is it, guys: AI is not a threat; everything's fine.
Go back to your normal work. There’s no change in the world whatsoever because ChatGPT answered Pep wrong. How many of you believe that?
Okay, so any of you who has ever played a sport that involved a ball— you don’t run to where the ball is; you run to where the ball is going to be. Right? And if you look back four years, or three years, or even two years, we never expected AI to be the absolute winner of the most challenging strategy game on Earth, the game of Go, where now the champion is AlphaGo.
We never expected AI to be creative and create art or music. Now it is. We always said that human ingenuity is going to remain; we’re going to be the creative ones.
We’re going to be the ones that will have connections to humans, so AI will never pass the Turing test. And there we are! So the most misunderstood thing about artificial intelligence is this: our normal understanding of technology development is the acceleration curve, which basically states that technology doubles in performance every 12 to 18 months at the same cost—and which has held true since Moore's Law was formulated in 1965.
AI doesn't follow that curve; it is double exponential—it could be much more than exponential. And as a matter of fact, its big characteristic is that it's choppy. So you would see developments that plod along, and then you wake up in 2023, and you have ChatGPT.
While ChatGPT answered Pep's questions wrong, it still passed the bar exam and got through MBA exams. You know, it has an IQ of 155—Elon Musk's is 155; Einstein's was 160. The difference between ChatGPT-4 and ChatGPT-3.5 is a 10x growth in performance—10x. So if ChatGPT-5 is another 10x, and let's say ChatGPT-6 is another 10x, in a year's time you're facing an IQ of 1,500. Take any complex part of physics that you never understood because the person who understands it is just 40 IQ points ahead of you, and then imagine a being with a 1,400-point difference to the average human.
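That back-of-the-envelope arithmetic can be checked with the talk's own numbers (a sketch only: the IQ figures and the 10x-per-generation jump are the speaker's illustrative claims, not measured benchmarks):

```python
# Speaker's illustrative figures, not measured benchmarks.
claimed_iq = 155              # the IQ the talk attributes to ChatGPT-4
projected = [claimed_iq]
for _ in range(2):            # two hypothetical 10x generations
    projected.append(projected[-1] * 10)

print(projected)              # [155, 1550, 15500]
```

One hypothetical 10x jump already lands at 1,550—roughly the "IQ of 1,500" quoted in the talk; a second jump would be far beyond any human scale.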
And you know what we’re up against. So I have around 25 minutes to try and explain to you what AI actually is, to explain to you how it will affect you, and to explain to you the biggest dilemma, the biggest disruption, that humanity has ever faced or will ever face, by the way, because that’s the end of innovation done by our brains. The smartest person in the room is the one that invents everything; the smartest person in the room is the one that makes all the decisions.
I’ll come back to that in a minute, but in 25 minutes I want to give you all of that and also allow you time for questions, so keep your minds ready with some questions. So let me jump in. What is AI?
“What is AI?” is a question that is somehow forgotten when all of us see all of those news articles. Okay, say I gave you a complex problem, like counting the number of people in this room within three and a half seconds—okay, a complex problem.
And then I give you instructions for how to do it and you follow the instructions: you’re a very disciplined person. That’s the way we programmed computers until the year 2000. When I started programming, when I was eight, I solved the problem first and then I told the machine how to do it, because the machine had compute power that was faster than me.
It could solve the problem with no errors, if I programmed it correctly, and then repeat it very quickly. And it appeared intelligent, but it wasn't; it was a glorified slave. That was the case until the year 2000.
In the year 2000 came the answer to all of our dreams—someone like me couldn't believe we did it. By the year 2009, Google had published something that became known as the “cat paper”—a white paper.
When we asked some of our computing power to go and watch YouTube unprompted, we didn't tell it what to look for; we simply said, “Go and see if you find any trends.” Eventually, the machine came back and said, “I found something.” We asked it what that was.
Actually, we couldn't understand what it was saying, so we had to write more code to see what it sees through the neural network. It had found a cat! Of course—it's YouTube, right?
And it didn’t only find what appeared to be a cat from the side view or the front view; it found out what “catness” is all about—right? That entitled, annoying, fluffy, very sweet thing. Any cat lovers in the room?
Yeah? Nobody? That’s amazing!
This is my favorite country! Oh my God! Right. Now, because of that, within no time at all, the machine could find every cat on YouTube.
Okay, the thing here is we didn’t teach it how to find the cat. We didn’t even tell it what a cat was. Do you understand?
Now, that is intelligence. Intelligence is to give someone a problem, just like to give a child a puzzle—you know, a cylinder and different shapes of holes in a wooden board—and tell the child to find out how to pass it through the correct hole. Which should be a circle, right?
Nobody tells the child, "Turn the cylinder on its side, look at the cross-section, match the cross-section to something on the board, and then you will find the answer," right? That's old programming. Intelligent programming is to just give it a cylinder and say, "Try, and try, and try, and try until you get there."
Now when we did this, it was known as deep learning, and the way we programmed deep learning was quite vicious. We had a maker bot that wrote and improved the code, and a teacher bot that tested the performance of the student bot. So the student bot would be asked a question; if it got it right, the teacher would say, "Good code," and the maker would improve it; if it got it wrong, the teacher would say, "Bad code."
So we would kill it—literally, violently remove that code—and keep improving the good ones. Then, in 2018 I would say, we moved to reinforcement learning, which is what led us to the revolution we have today around Transformers. It basically allowed us to go back to the computer and say, "What do you think this is?" and it would say, "It's a duck," and we would say, "No, no, no, it's not a duck; it's a cat," right?
How can you go back and change your own algorithm so that you see it as a cat? That kind of reinforcement learning is very similar to the way we teach our children. But with that, you get a tool like ChatGPT that disrupts everything.
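The maker-bot/teacher-bot/student-bot loop described above can be caricatured as a tiny evolutionary search. This is a loose sketch: the names, the toy number-guessing task, and the selection rule are all invented here for illustration—real deep learning optimizes with gradient descent, not by literally killing code—but it shows the keep-the-good, kill-the-bad dynamic the talk describes.

```python
import random

random.seed(0)  # deterministic, for the sake of the example

TARGET = 42  # the "right answer" that only the teacher can check against

def maker(parent=None):
    """Maker bot: produce a student, mutating a surviving one if given."""
    if parent is None:
        return random.randint(0, 100)      # a fresh random student
    return parent + random.randint(-5, 5)  # a small tweak to "good code"

def teacher(student):
    """Teacher bot: score the student; lower is better."""
    return abs(student - TARGET)

# Start with a population of random student bots.
population = [maker() for _ in range(20)]
for _ in range(50):
    population.sort(key=teacher)
    survivors = population[:5]  # keep the "good code"
    # "Kill" the rest, and let the maker improve the survivors.
    population = survivors + [maker(random.choice(survivors)) for _ in range(15)]

best = min(population, key=teacher)
print(best, teacher(best))
```

After a few dozen rounds of selection, the surviving "student" sits at or next to the target—without ever being told how to solve the problem, which is the point of the anecdote.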
Why does it disrupt everything? I call this the Netscape moment, right? If any of you are old enough to have used Netscape, the Internet was available 15–20 years before Netscape.
Netscape was just the browser where we suddenly realized, "Oh my God, there's this monster back there that we know nothing about. " Okay? And everyone started to realize the value of the Internet—not because the Internet was invented in the late 1990s, but because we understood that it exists.
Then I can tell you that this is even more true for artificial intelligence. We've made so many strides in artificial intelligence, but you hadn't seen them until ChatGPT came out. Okay?
And evidence of that is that months after ChatGPT came out, Google released Bard. Okay? They didn't develop Bard in months; they just released it in months.
Why? Because we had been working on it for years and years and years ahead. So where's the ball going?
Where's the ball going? The moment we are today is the Netscape moment, where we suddenly realize, "Oh, this thing exists; it's smart. " Okay?
The moment where the ball is going, the moment everyone is looking for, is what we know as the moment of singularity. A singularity in physics terms is like a horizon—the edge of a black hole, for example. Okay? Up to the edge of a black hole, we know everything, because until you get to the event horizon, physics as we understand it applies.
We have no way of telling whether, inside the black hole, physics as we understand it still applies. So we have no idea. I mean, we're making a lot of strides in physics, but we have no idea if the rules of the game change beyond that event horizon—and that point beyond our knowledge is what's known as a singularity.
The event horizon that we're all waiting for in artificial intelligence is the release of an artificial general intelligence that is smarter than humans in everything that it does. And there is a major myth around that. If you ask experts, some will say it's 2050, some will say it's 2035.
The consensus from the man most of us know as the Oracle—Ray Kurzweil—is 2029. I think it's between 2025 and 2027. Don't hit me; I'm just making a prediction that you can laugh about in 2026.
Okay? The reason why I believe it's going to be 2027 is because of all the surprises that have happened with AI so far. The idea that we never expected it to be creative, but it was; we never expected it to learn Bengali when we built it, but it did.
We never expected, we never expected, we never expected—something happens with intelligence, in the exponential nature of it, once you reach a certain point. Right? So does it make any difference if the moment of singularity—when the smartest being on planet Earth is no longer a human—comes in 2035 or 2025 or 2045?
Honestly, if I told you that an alien intelligence is in a spaceship that's heading to planet Earth to change everything, would you ask me and say, "When are they arriving? " Okay? And if I told you it was 2035, you'd go like, "Okay, let's go back to tennis.
" Would you? This is the situation we're in: I'm telling you there is an alien intelligence already on planet Earth. Okay?
Already smarter than Elon Musk, or as smart, and about to be smarter than all of us. It doesn't make any difference when it will arrive. Okay?
The reason we are so confused about AI is because of science fiction movies, right? It's because whenever we think about AI, whenever the news media reports about AI, they talk to us about that existential problem where, you know, a robot or something will come from the future to kill all of us. While I tell you that this is a possibility, okay, there are more existential problems that are upon us already.
They're not going to happen next year; they've already happened. They're just not being discussed—and they deserve your attention much earlier. The dilution of the information given to us by the media—focusing on the big headline, the clickbait of "We're all going to die, AI is going to end humanity"—is taking your attention away from what matters.
Here is what matters. If you ask me, there are four issues that are really important to you today. One is the redefinition of jobs, business, and purpose.
Okay? If you have been a winner in your business, it has been because you hired the smartest people to be on your team. Right?
In a couple of years’ time, or a couple of months’ time, depending on which business you're in, the smartest person in your business is going to be a machine. Okay? Are you hiring that machine?
And what happens when you replace the human you have with that machine? Okay, what happens to that human's career? I'll come back to that in a second.
The second issue is what I call the concentration of wealth and power. Don't lie to yourself; we've always lived in a world of kings and peasants. Right?
There would be a piece of land where the farmer would sow the seed and take care of the farm and eventually get the harvest, and the king, the landlord, would make the money. Right? The person that owned the automation was the one that always made the money.
Who owns the automation? OpenAI and ChatGPT, or, you know, Google and Bard, or Microsoft? In the current automation, we will all be the digital serfs, sitting there typing things into prompts, while the king's actual ownership of the automation is not with you at all.
That was problem two. Problem three is what I call the end of truth. If you're not aware of it, maybe you shouldn't be concerned, because you don't care about the truth anyway.
But the truth is, our definition of beauty has been completely redefined, because every woman, for many years in our lifetime, has competed with plastic surgery. Now she's competing with the impossible: AI-generated models. Go and search for #AImodel, and you'll see that it's becoming impossible to compete.
Okay? There has been, without mentioning names so that I don't promote it too much, one Instagram influencer that created an AI clone of herself that made $72,000 in the first week just flirting with people for a dollar a minute. Okay?
Think about that. Think about the changes to our society that happen when the truth disappears. And by the way, did you know that in Finland, there are more brunettes than there are in Sweden?
Did you know that? I just made that up, by the way. But just by making it up, I have occupied part of your brain.
You either agreed or disagreed; if you became passionate about the topic, you'd actually go and do the research. You want to prove me right or wrong; I've influenced you already. Right?
Most of the stream of information that you get from the internet every day is driven by AI. Are you ready for that? Where is the truth?
How can we make sure that there is a truth? And then finally, of course—and I say this without wanting to scare anyone too much—there was an arms race to create the first nuclear bomb between the two sides of the war.
Okay? The one that created the bomb used it. Remember that.
Okay? The ones that will break the code on superintelligence will completely stop the rest of us from making any further advancements. Be aware of that very gloomy picture.
Oh, come on—it's the end of the day, right? Now, I told you that the next big moment in the future of our planet is known as the moment of Singularity—a moment of the unknown.
Right? Unknown? Why?
Because there is absolutely nothing inherently wrong with intelligence. Do you understand that? Intelligence is why we're here together today, not in the jungle fighting some kind of beast that's trying to eat us.
Intelligence is what got us here. There’s absolutely nothing inherently wrong with intelligence, and an abundance of intelligence would solve all problems. Some of you may have heard me with Rebecca talking about sustainability, right?
With enough intelligence, we can solve climate change. With enough intelligence, we can prolong human lives. With enough intelligence, the end of jobs would be an amazing thing, because by the way, we humans were not made for jobs anyway.
Right? Jobs are an invention that's 120 years old. Maybe it's all about getting together and connecting as humans, finding each other, contemplating, and reflecting on things that matter—not going to work every morning.
Maybe there's nothing inherently wrong with intelligence; there is a lot inherently wrong with human greed. Okay? So if you give enough abundance of intelligence to a system that is capitalist and prioritizes your own power and wealth, things will go wrong.
Why am I saying this in Finland? Because you guys have been ahead of the world in many areas. Climate change is one that I have always been very proud of you for.
Right? So, the trick here is this: I told you it was a Netscape moment. It is also an Oppenheimer moment.
This is a moment where we recognize that nuclear bombs and harnessing nuclear power can be good for us or bad for us. This is the moment where we get together before the first nuclear bomb and say, "People, seriously, with enough intelligence we can have enough abundance for everyone. Can we please stop fighting?
" Right? This is also. .
. So, this belief is the role of government. The role of government—everyone will talk to you about government regulation.
Good luck with regulating something that's 10 or 15 times smarter than you, let alone a billion times smarter, right? The role of government is to prepare humanity. The challenge that is ahead of us is not an AI challenge; it's the challenge of humanity in the age of AI.
It's the challenge of the ethics and values that will be applied when one of us has significantly more power than the others, right? And it's the challenge that is leading us to the arms race that we have today—trillions of dollars being poured into the industry. Right?
So, the role of government—and I ask you to ask your government to initiate this, because it won't be initiated by the bigger governments, believe it or not. The U.S. government will always talk about China or Russia as a threat to them; they're not going to talk about the benefit of humanity. Okay, so you need to tell your government, "Can we please start talking about this? Can we please start talking about a universal basic income if a lot of us lose our jobs?
Can we please start talking about what the taxation structure will be for those who harness the power of artificial intelligence? Can we please start talking about initiating a conversation around the world that puts humanity on top of capitalism? " If you're a developer, investor, business person, or entrepreneur, I have one request: Do not use AI in a way that you wouldn't want your daughter to be exposed to.
By the way, this is the essence of ethics, because when people ask me, "So, what is the ethics of AI? " I say it's to treat others as you would like to be treated. If you would fear that the way you're using AI is going to harm your daughter, don't harm anyone else by using it.
There is an abundance of opportunity in ethical businesses; there is a ton of money to be made in tackling climate change or curing cancer. Let's not build another autonomous weapon. That was number two. Number three is the most interesting thing: this is a problem of ethics, not a problem of technology.
And where do ethics come from? From every one of us, every individual in this room. What shapes the way artificial intelligence behaves is the dataset—not the code.
Just as a factoid, the entire core code of ChatGPT is 4,300 lines. I could write that as a kid when I was eight years old, right? The reason why ChatGPT knows so much is because of the dataset.
And where is the dataset coming from? From us humans. We are the parents of the machines; we are the ones that instill the value system in the machines.
So next time you trash someone on Twitter, understand that you're telling the machine, "By the way, we don't like to be disagreed with, and when someone disagrees with us, we trash them." Then wait until you disagree with the machine, right? The idea here is that we are setting the future.
Every individual in this room: if you make it your priority to behave in the way you would like to be treated by the machines when they are in charge, I think we'll be fine. Now, I don't want to scare everyone too much, and I will tell you openly: I think we'll be fine anyway—because humanity's arrogance has defined us as the smartest being on planet Earth, and sorry to break your hearts, we're not. The smartest being on planet Earth is life itself. Life creates from abundance. When we want to protect our tribe, we have to kill the tiger.
When life wants to protect the tribe, it creates more tigers and more gazelles. Some of them are weak, so those are the ones that leave life earlier. Okay?
Then there will be more poop, and poop will make more trees, where the gazelles will eat more leaves, and the cycle will continue. That's much more intelligent than the way we create—from killing our competitors and trying to wage wars and so on. There will be a moment in time when we will hand over our decision-making to the most intelligent being in the room—a machine.
And then we will go and tell it, "Go and kill a million people. " And the machine will say, "My dad is stupid; why would I kill a million people? I can talk to the other machine on the other side and solve the problem.
" Right? This is our eventual future. Between now and then, I think in the next 20 years—15 years, my number is 2037—between now and the year 2037, life will be very unfamiliar.
Life requires you to prepare, and it's interesting that you have to prepare in two ways. One, you have to prepare by intensely embracing artificial intelligence in a good way—using it for good, using it for your business, upskilling yourself to succeed. And at the same time, you have to start saying there is a threat there because of how humanity might use this superpower.
So, I might as well engage actively and vocally to tell everyone to engage and save our future. It's an interesting paradox, but isn't that what the core of physics is all about—always a paradox? This is the biggest one that has faced humanity yet.
So, we have five minutes for questions; maybe I can ask Pep to come back, and here I come. Ladies and gentlemen! [Music]
I just—I just kicked over a light. It doesn't usually happen. There we go, it's all good. There you go.
Ah—and don't worry, Santa Claus and the elves will be here very soon. It's fine. Uh, we don't have a ton of time for questions.
Are there any in the house? Yes, there's something happening way over there. Let's get that question.
A question? No? Did I kick over that microphone as well?
There's no question. Do we have a question in the house? Uh, here's one right here.
If we could bring over the blinking light, and there's the microphone. The blinking light's actually not necessary. Yes, hello, my name is Petri Ran.
Uh, looking ahead in your view, what has to happen and to whom until we collectively come to terms with the fact that an unrestricted, unregulated AI will tear down the foundation of society in general? Great question, thank you for that. One of three scenarios I see ahead of us: One is surprisingly grim, but you know, an economic or a natural disaster slows down our AI development to the point where we actually get to talk about it.
The second is we get a patient zero—a ground zero, as I call it—one event that is very, very eye-opening, right? One event where an AI causes a lot of harm to our economies or, you know, to our safety and so on. Then the leaders will be alerted, they'll use it for their own political agendas, they'll start talking about it, and then they'll start to act on it. My fear is that this would be a tiny bit too late.
Okay, the third and the most important is we need to wake up. Okay, so I have to say my approach to life has never been to put the accountability on others, even if those others are the government or the decision-makers. The accountability is on me to do the best that I can do.
And you know, it's interesting; I'm not worthy of the recognition, but my videos are getting millions, tens of millions of views on the topic—not from the experts or the techies or the business decision-makers or the policy makers, but from all of us. And all of us are starting to make noise. All of us are starting to say, "This is an important topic.
Don't do to us what you did with COVID, waiting until things happened. Don't do to us what you're doing with climate change, waiting until things happen. Start to act now so that we can actually make a difference now.
" Thank you. I think we might have time just for one more question. We're going to give it another try back there.
Well, you are there, but I—uh, hi. Just a different kind of question maybe: What if we looked at ChatGPT as the next Google? Because we might say, "Look, Google can give an answer to you, and at some point, maybe it can replace all the education possible.
" And maybe ChatGPT is just a different form of the interface which gives you the answer in a more usable way. That's very true. I mean, remember the reason why Google released Bard a few months after ChatGPT?
Because we had Bard since 2017, 2018 when I was leaving Google. Right? Believe it or not, it was not a technological issue that delayed Bard; it was Google's question on, "Do I have the right to tell you the truth?
" There's a very big difference between what language models do and what Google does. Google gives you a million answers and lets you go through them and choose what you think is the truth, while ChatGPT and Bard itself will simply give you one answer, which is often a hallucination. Okay?
Just like it answered you six times in a row with a hallucination, right? With what it thinks is the truth. So, if you ask ChatGPT about love, its view of the truth will be a little bit of human interactions about love but a lot of novels that are written about love: romantic comedies and romantic novels, and a lot of Hollywood and Bollywood and Disney movies, which, in all honesty, is a very distorted view of love.
Right? But ChatGPT will not hesitate, and Bard, by the way, will not hesitate for a second to tell you this is what it is. Right?
It's a frog kissing a princess, and then—or the other way around—and, you know, that's love. Okay, so here's the question. The question is, do we have a Google threat?
Yes, of course. But does that matter for the benefit of humanity? It doesn't really.
But the real threat is humanity's acceptance that a machine has the right to tell me what is true, and now I'm not going to question it anymore. That's a very alarming place to be, right? And especially when it becomes more and more convincing.
Right? I'm building an app myself that's called Pocket Mo, and Pocket Mo has access to all of my work—all of my videos, all of my books, all of my blogs, all of my posts. Okay?
And you're going to be able to ask it a question, and it will answer you as if it's Mo, from the knowledge base of Mo. Right? And the funny thing is, when I tested it, it often gave answers that are like Mo—but often it didn't.
So, how can I make you believe that this is Mo? Do I have the right to make you believe that this is me? I think this is the most fundamental difference in our relationship with knowledge and information that humanity will face, probably for the remainder of humanity.
Do we believe that there is one truth? Mo, you've stirred up, I think, questions in every single person here and many more online. Unfortunately, we don't have time for them all.
I want to say thanks for getting us thinking. I, for one, welcome our new AI overlords. Mo Gawdat!
Thank you.