Wow. All right. Welcome everybody.
Welcome to a conversation about your future, the future of your companies, your nations, and your kids. We're going to be discussing super intelligence. What does that mean, and what happens when it arrives?
Uh, we've been talking about AI, AGI, and now perhaps digital super intelligence, or ASI. I want to start with the obvious question, and it's one that I don't think anybody has a perfect answer for: what does super intelligence mean, and when is it likely to be here? Eric, we've talked about this. What are your thoughts? >> So, thank you, Peter, and thanks to everybody for being here, and obviously thanks to Fei-Fei, our very close colleague. The generally accepted definition of general intelligence, AGI, is human-level intelligence. And human intelligence you can understand, because we're all human: you have ideas, you have friends, you think about things, you're creative.
Super intelligence is defined as intelligence equal to the sum of everyone's, right? Or even better than all humans combined. And there is a belief in our industry that we will get to super intelligence.
We don't know exactly how long. There's a group of people who I call the San Francisco consensus, because they're all living in San Francisco. Maybe it's the weather or the drugs or something, but they all basically think that it's within three to four years.
Uh, I personally think it'll be longer than that, but fundamentally their argument is that there are compounding effects that we're seeing now which will race us to this much faster than people think. >> And Fei-Fei, I don't think anybody expected the performance that AI has given us so far. The scaling laws have given us capabilities that are extraordinary.
You know, you're the CEO of a a new company, the founder of World Labs. You've been at Stanford working on this. Uh how do you think about super intelligence?
Do you discuss super intelligence at all in your work? >> Yeah, that's a great question, Peter. You know, when Alan Turing dared humanity with the question of whether we can create thinking machines, he was thinking about the fundamental question of intelligence.
So the birth of AI is about intelligence, about the profound, general ability that intelligence represents. From that point of view, AI was born as a field that tries to push the boundary of what intelligence means. Now, fast-forward 75 years after Alan Turing, and this phrase "super intelligence" is pretty hot in Silicon Valley.
And I do agree with Eric that the colloquial definition is the capability of AI and computers being better than any human. But I do think we need to be a little careful. First of all, some part of today's AI is already better than any human.
For example, AI's ability to speak many different languages, translating between dozens and dozens of languages; pretty much no human can do that. Or AI's ability to calculate things really fast.
Or AI's ability to know everything from chemistry to biology to sports, that vast amount of knowledge. So it's already superhuman in many ways. But it remains a question: can AI ever be Newton? Can AI ever be Einstein?
Can AI ever be Picasso? I actually don't know. For example, we have all the celestial data of the movement of the stars that we observe today.
Give that data to any AI algorithm, and it will not be able to deduce Newton's laws of motion. That ability that humans have is the combination of creativity and abstraction.
I do not see today's AI, or tomorrow's AI, being able to do that yet. >> Eric? >> So one of the common examples, and Fei-Fei of course got it right, is to think about whether, if you had all of the knowledge in a computer that existed in 1902 >> Yes. >>
could you invent relativity? Basically the physics of today. And the answer today is no. For example, if you look at what is called test-time compute, where the systems are doing reasoning, they can't take the reasoning that they learned and feed it back into themselves very quickly, whereas if you're a mathematician and you prove something, you can base your next proof on that. It's hard for the systems today, although there are approximations. So we don't know where the boundaries are. The example that I'd like to use is: let's imagine that we can get computers that can solve everything that we normally can do as humans, except for this amazing set of creativities.
How do really creative people do it? The best examples are people who are experts in one area, see another area, and have an intuition that the same mechanism will solve a problem in a completely different area. That's an example of something we have to learn how to do with AI.
An alternative would be to simply do it by brute force using reinforcement learning. The problem is that, combinatorially, the cost of that is insane, and we're already running out of electricity and so forth. So I think that to get to real super intelligence, we probably need another algorithmic breakthrough.
>> We need another what? >> Algorithmic breakthrough. Another way of dealing with this. The technical term is called non-stationarity of objectives. What's happening is the systems are trained against objectives.
But to do this kind of creativity that Fei-Fei is talking about, you need to be able to change the objectives as you're doing them. >> We've seen, this past year, I think GPT-5 Pro reach an IQ of something like 148, which is extraordinary. And of course, there is no ceiling on this.
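What "trained against objectives" means, and why non-stationarity is hard, can be pictured with a toy sketch. This is hypothetical illustrative code, not any real AI system: plain gradient descent converges when its objective is fixed, but keeps lagging when the target moves while it optimizes.

```python
# Toy sketch (illustrative only): an optimizer trained against a fixed
# objective converges; when the objective itself shifts during training
# (non-stationarity), the optimizer is always chasing a moving goal.

def gradient_step(x, grad, lr=0.1):
    # One step of plain gradient descent.
    return x - lr * grad(x)

# Stationary objective: minimize (x - 3)^2; the target never moves.
x_fixed = 0.0
for _ in range(100):
    x_fixed = gradient_step(x_fixed, lambda v: 2 * (v - 3))

# Non-stationary objective: the target shifts every 20 steps.
x_moving, target = 0.0, 3
for step in range(100):
    target = 3 + step // 20  # the objective changes mid-training
    x_moving = gradient_step(x_moving, lambda v: 2 * (v - target))

print(abs(x_fixed - 3))       # essentially zero: the fixed goal is reached
print(abs(x_moving - target)) # a persistent gap: the moving goal is not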
I mean, it loses meaning at some point. But the ability for every human on the planet to have an Einstein level, not on the creativity side but on the intelligence side, in their pocket changes the game for 8 billion humans. And now with Starlink and with, you know, $50 smartphones, it's possible that every single person on the planet has this kind of capability.
Add to that humanoid robots. Add to that a whole slew of other exponential technologies. And the commentary is we're heading towards a post-scarcity society, right?
Do you believe in that vision, Fei-Fei? >> I do think we have to be a little careful. I know that we are combining some of the hottest words from Silicon Valley: AI, super intelligence, humanoid robots and all that.
To be honest, I think robotics has a long way to go. I think we have to be a little bit careful with the projection of robotics. For the dexterity of human-level manipulation, we have to wait a lot longer.
So, are we entering post-scarcity? I don't know. I'm actually not as bullish as a typical Silicon Valley person. I absolutely believe AI will be augmenting human capabilities in incredibly profound ways.
But I think we will continue to see that the collaboration between humans and AI will be the most productive and fruitful way of doing things. >> So the projection is that AI is going to generate as much as $15 trillion in economic value by 2030, the idea being that we're shifting the foundation of national wealth from capital, to labor, to computational intelligence.
So what's that implication, Eric, for the global economy? How are we going to see redistribution, if you would, of wealth or of capabilities? Are we going to see a leveling of the field between nation states, or are we going to see runaway winners?
>> So in your abundance hypothesis, which we've talked a lot about, there may be a flaw in the argument, because part of the abundance argument is that it's abundance for everyone. But there's plenty of evidence that these technologies have network effects, which concentrate gains in a small number of winners. You could, for example, imagine a small number of countries getting all those benefits.
You could imagine a small number of firms and people getting those benefits. Those are public policy questions. There's no question the wealth will be created, because the wealth comes from efficiency.
And every company that has implemented AI has seen huge gains. Think about it: here we are in Saudi Arabia. You have all of this oil distribution, all the oil networks, all the losses.
AI can easily improve that by 10%, 20%. Those are huge numbers for this country. If you look at biology and medicine and drug discovery: much faster drug approval cycles, much lower-cost trials. Look at materials: much more efficient and easier-to-build materials.
The companies that adopt AI quickly get a disproportionate return. The question is: are those gains uniform, which would be our hope, or, in my view more likely, largely centered around early adopters, network effects, well-run countries, and perhaps capital? >> But you could still imagine that we're going to see autonomous cars in which the ownership of a car is, let me put it the other way: being in an autonomous vehicle is four times cheaper than owning a car.
We can see AI giving us the best physicians, the best health care for free, in the same way that Google gave us access to information for free. We will see massive demonetization in so much of our world. I think that will be available to anyone with a smartphone and a decent bandwidth connection.
Is that still not what you think will happen? Do you think there's something that would stop that level of distribution of those services, which we spend a lot of our money on today? >> I do think AI democratizes that.
I totally agree with you. I think whether it's healthcare or transportation or knowledge, AI will democratize massively. But I agree with Eric that this increased global productivity does not necessarily translate to shared prosperity.
Shared prosperity is a deeper social problem. It involves policy. It involves, you know, geopolitics.
It involves distribution, and that's a different problem from the capability of the technology. >> So what's your advice to the country leaders that are here who are seeing ASI as a future for someone else and not for themselves? What should they be doing?
I mean, at the speed at which this is deploying, they don't have a lot of time to make critical decisions. >> Well, it's worth describing where we are now. In the United States, because of the depth of our capital markets and because of the extraordinary chips that are available from the Taiwanese manufacturers, TSMC in particular, America has this huge lead in building what are called hyperscalers.
If there's going to be super intelligence, it's going to come from those efforts. That's a big deal. If there is super intelligence, imagine a company like Google inventing this, for example.
I am obviously biased. Um, and what's the value of being able to solve every problem that humans can't solve? It's infinite.
>> Sure. >> So, that's the goal, right? China is second.
It doesn't have the capital markets, doesn't have the chips, and the other countries are not anywhere near. Saudi has done a good job of partnering with America, and the hyperscalers will be located here and in the UAE.
That's a good strategy. So that's a good example of how you partner. You figure out which side you're on.
Hopefully it's the United States, and you work with the US firms. >> I do think all countries should invest in their own human capital, invest in partnerships, and invest in their own technological stack as well as the business ecosystem.
As Eric said, it depends on the strengths and particularities of the different countries, but I think not investing in AI would be macroscopically the wrong thing to do. >> So, under the thesis that that investment involves building out data centers in your nation: do you think every country should be building out a data center that it has sovereign AI running on?
>> "Every country" is a very sweeping statement. I do think it depends. It depends.
Obviously, for a region like this, absolutely, where energy is cheaper and it's such an important region in the world. But if we're talking about smaller countries, I don't know if every single country can afford to build data centers. But there are other areas of investment, right?
>> But let me give you an example. Let's pick Europe. It's easy to pick on Europe.
Energy costs are high, right? Financing costs are not low. So the odds of Europe being able to build very large data centers are extremely low.
But they can partner with countries where they can do it. France, for example, did a partnership with Abu Dhabi. So there are examples of that.
So I think if you take a global view and you figure out who your partners are, you have a better chance. The one that I worry a lot about is Africa. And the reason is: how does Africa benefit from this?
So there's obviously some benefit of globalization, better crop yields and so forth. But without stable governments, strong universities, and major industrial structures, which Africa, with some exceptions, lacks, it's going to lag. It's been lagging for years.
How do we get ahead of that? I don't think that problem is solved. >> We've seen incredible progress with AI today, effectively beginning what people call solving math.
That potentially tips physics, chemistry, biology. And we have the potential, my time frame is the next five years, others may think longer, to be in a position to solve everything, where the level of discovery and the level of new product creation, new materials, biological therapeutics and such begins to grow at a super-exponential rate. How do you think about that world in five years, Eric?
>> So, um, first, I think it's likely to occur, and the reason, technically, is that all of the large language models are essentially doing next-word prediction. And if you have a limited vocabulary, which math has, and software has, and also cyber attacks have, I'm sorry to say, you can make progress, because they're scale-free. All you have to do is just do more.
So if you do software, you can verify it, and you can do more software. If you do math, you can verify it, and do more math.
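The loop Eric is describing, generate candidates and keep only what a mechanical check accepts, works in math and software precisely because verification is cheap there. A minimal sketch on a hypothetical toy problem (not any real training pipeline):

```python
# Toy generate-and-verify loop (illustrative only): in a verifiable domain,
# every candidate answer can be checked mechanically, so the system can
# keep what passes and "do more" without a human grading each attempt.

def verify(candidate):
    # Mechanical check: does (x, y) solve x + y == 6 and x - y == 2?
    x, y = candidate
    return x + y == 6 and x - y == 2

# Stand-in for a model proposing candidate solutions: brute enumeration.
candidates = [(x, y) for x in range(-10, 11) for y in range(-10, 11)]

# Keep only verified answers; these are correct by construction.
verified = [c for c in candidates if verify(c)]
print(verified)  # [(4, 2)]
```

The same shape, propose then check, is why math (checkable by a proof assistant) and software (checkable by tests or a compiler) scale in a way that claims about the physical world do not.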
You're not constrained by reality, by physics and biology. So it's likely that in the next few years, in math and software, you'll see the greatest of gains, and we all understand your point that math is at the basis of everything else. And here is the expert on the real world.
There's probably a longer period of time to get the real world right, which is why she founded the company of which I'm an investor. Do you want to talk about that? >> Yeah.
Um, well, first of all, I actually want to respectfully disagree. Okay? I do not think that we will solve all the fundamental math and physics and chemistry problems in five years.
>> We're going to take a bet on that one. >> Yes. So, FII14.
>> Okay, you got it. >> We should take a bet on that. Um, part of humanity's greatest capability is to actually come up with new problems.
You know, as Albert Einstein said, most of science is asking the right question, and we will continue to find new questions to ask. There are so many fundamental questions in our science and math that we haven't answered. >> Fei-Fei, your new company World Labs is creating extraordinary, persistent, photorealistic worlds. Are you expecting that we are going to be spending a lot more of our time in virtual worlds?
I mean, my 14-year-old boys right now are spending way too much time in their virtual gaming worlds. But is this what we're going to do in, you know, 10 or 20 years, in a post-ASI world where we don't have to work as much, we have a lot more free time, and our robots maybe by then are serving us? Are we going to live in the virtual worlds? >> Great question. So what we are doing is building large world models. That's the problem after large language models. Humans have the kind of spatial intelligence to understand the physical 3D world, imagine any kind of 3D world, and be able to reason about and interact with it. Up until what our company has been doing, we did not have such a world model.
So World Labs, the company I co-founded and where I'm CEO, has just created the first large world model. So in the future I see, I actually agree with you that we will be spending more time in the multiverse >> Yes. >> of the virtual worlds. It doesn't mean that the reality, the real world, this physical world, is gone.
It's just that so much of our productivity, our entertainment, our communication, >> our education, >> our education, are going to be a hybrid of the virtual and physical worlds. Think about medicine: how we conduct surgery is very much going to be a hybrid of augmented reality, virtual reality, as well as physical reality. And we can do that in every single sector.
So humanity, using these large world models, is going to enter an infinite universe. >> And I had a chance to see your model backstage. It's amazing.
If you haven't yet, go check out Fei-Fei's World Labs. The technology she's building is going to be world-changing. So, my last question here is about human capital.
Super intelligence has been called the last invention humanity will ever make, as it could eventually automate every process. We'll see if it automates discovery. We'll see how much of creation it automates.
But in a world where the best strategy, science, and economic decisions are being made by machines at some point, what is the ultimate irreplaceable function of human intellect and leadership? What are humans innately going to be left with in 10, 20 years?
>> Well, in 20 years, we will enjoy watching each other compete in human sports, knowing that the robots can beat us 100% of the time. >> But if you go to Formula 1, you're going to want to see a human driver, not an automated car. >> Yes.
So humans will always be interested in what other humans can do, and we'll have our own contests, and perhaps the supercomputers will have their own contests too. But your reasoning presumes many, many things. It presumes a breakout of intelligence in computers that's humanlike.
Unlikely; probably a different kind of intelligence. It presumes that humans are largely not involved in that process. Highly unlikely.
All of the evidence, and Fei-Fei said this very well, is that it's going to be human and computer interaction. Going back to what you said about 8 billion people with smartphones, with Einstein in their phone: the smart people, of which there are a lot, will use that to make themselves more productive. The win will be teaming between a human and their judgment, and a supercomputer and what it can think. And remember that there is a limit to this craze: supercomputers and super intelligence need energy. >> So perhaps what will happen at some point is the supercomputers will say, huh, we need more energy, and these humans are not building fusion fast enough, so we'll accelerate it.
We'll come up with a new form of energy. Now this is science fiction. But you could imagine at some point the objective function of the system says what do I need?
I need more chips or more energy and I'll design it myself. Now, that would be a great moment to see. >> I agree.
>> I do want to say it's so important, as we talk about AGI and ASI, that the most important thing we keep in mind is human dignity and human agency. Our world, unless we are going to wipe out this species, which we're not, has to be human-centered. Whether it's automation or collaboration, it needs to put human agency, dignity, and well-being at the center of all this.
Whether it's technology, business, product, policy, or any of that. And I think we cannot lose our focus from that. >> Amen.
Everybody, ladies and gentlemen: Fei-Fei Li, Eric Schmidt. Thank you all.