The Future Of AI, According To Former Google CEO Eric Schmidt

Noema Magazine
In an exclusive interview with Noema Magazine, former Google CEO Eric Schmidt weighs in on where AI ...
Video Transcript:
The key thing that's going on now is that we're moving very quickly through the capability ladder steps. I think there are roughly three things happening now that are going to profoundly change the world very quickly, and when I say very quickly, the cycle is roughly a new model every year to 18 months.

The first is this question of the context window. For non-technical people, the context window is the prompt that you ask: "study John F. Kennedy" or something. But in fact that context window can have a million words in it, and this year people are inventing a context window that is infinitely long. This is very important, because it means you can take the answer from the system, feed it back in, and ask it another question. Let's say I want a recipe to make a drug or something. I say, what's the first step? And it says, buy these materials. So then I say, okay, I've bought these materials, now what's my next step? And it says, buy a mixing pan. And the next step is, how long do I mix it for? It's a recipe. That's called chain-of-thought reasoning, and it generalizes really well. In five years, for example, we should be able to produce thousand-step recipes to solve really important problems in science, in medicine, in materials science, in climate change, that sort of thing.

The second is agents. An agent can be understood as a large language model that knows something new or has learned something. An example would be: read all of chemistry, learn something about chemistry, form a bunch of hypotheses about chemistry, run some tests in a lab, and then add that knowledge to your agent. These agents are going to be really powerful, and it's reasonable to expect that not only will there be a lot of them, and I mean millions, but there will be the equivalent of GitHub for agents. There will be lots and lots of agents running around and available to you.

The third, which to me is the most profound and is already beginning to happen, is text to action: write me a piece of software to do something. You just say it. Can you imagine having programmers that actually do what you say you want, and do it 24 hours a day? And these systems happen to be good at writing code in languages like Python.

Put all that together and you've got an infinite context window, the ability to use agents, and the ability to do this programming. Now this is very interesting. What happens then? There are a lot of questions here, and now we get into the questions of science fiction. I'm sure the three things I've named are happening, because that work is happening now. But at some point these systems will get powerful enough that you'll be able to take the agents and they'll start to work together: your agent and my agent and her agent and his agent will all combine to solve a new problem. At some point, people believe, these agents will develop their own language, and that's the point when we won't understand what we're doing. You know what we should do then? Pull the plug. Literally unplug the computer. It's really a problem when agents start to communicate and do things in ways that we as humans do not understand. That's the limit, in my view.

And how far off in the future do you think that is?

Well, there have been many, many predictions. Clearly agents and these things will occur in the next few years, and there won't be one day where everybody says, oh my God. It's more a question of capabilities advancing every month, every six months, and so forth. A reasonable expectation is that we'll be in this new world within five years.

Wow.

Not ten. And the reason is that there's so much money, and there are also so many ways in which people are trying to accomplish this. You have the big guys, the three large so-called frontier models, but you also have a very large number of players who are programming one level lower at much lower cost and iterating very quickly, plus you have a great deal of research. I think there's every reason to believe that some version of what I'm saying will occur within five years, and maybe sooner.

Well, now, so you say pull the plug.
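The iterative loop described above, in which each answer from the model is fed back in as part of the next prompt, can be sketched as follows. This is a minimal illustration only: `toy_model` is an invented stand-in that returns canned recipe steps, where a real system would call a hosted LLM API.

```python
# Sketch of the feed-the-answer-back-in loop: each answer is appended to a
# growing context and the model is asked for the next step. `toy_model` is a
# toy stand-in for a real LLM call, with invented, harmless recipe steps.

def toy_model(context: str) -> str:
    """Return the next recipe step given everything said so far (canned demo)."""
    steps = [
        "Step 1: buy these materials.",
        "Step 2: buy a mixing pan.",
        "Step 3: mix for ten minutes.",
        "Done.",
    ]
    # Count how many steps already appear in the context to pick the next one.
    n = sum(1 for s in steps if s in context)
    return steps[min(n, len(steps) - 1)]

def run_recipe(goal: str, max_turns: int = 10) -> list[str]:
    context = f"Goal: {goal}"
    answers = []
    for _ in range(max_turns):
        answer = toy_model(context)
        answers.append(answer)
        if answer == "Done.":
            break
        # Feed the answer back in: the growing transcript *is* the long
        # context window Schmidt is describing.
        context += "\n" + answer + "\nWhat is my next step?"
    return answers

print(run_recipe("make a cake"))
```

The structural point is only that the entire history of answers is carried forward as context on every call, which is why longer context windows make longer multi-step recipes possible.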
So, two questions. How do you pull the plug? But even before you pull the plug, if you know you're already in chain-of-thought reasoning and you're headed toward what you fear, don't you need to regulate at some point so that it doesn't get there? Or is that beyond the scope of regulation?

Well, a group of us have been working very closely with the governments in the West, and we've started talking to the Chinese, which of course is complicated and takes time, about these issues. At the moment the governments, with the exception of Europe, which is always kind of slightly confused, have been doing the right thing: they've set up trust and safety institutes, and they're beginning to learn how to measure things and check things. The right approach is for the governments to watch us and make sure we don't get confused about what the goal is. As long as the companies are well-run Western companies, with shareholders and lawsuits and all that, we'll be fine. There's a great deal of concern in these Western companies about liability, about doing bad things. Nobody wants to hurt people; they don't wake up in the morning saying, let's hurt somebody. Now, of course, there's the proliferation problem. But in terms of the core research, the researchers are trying to be honest.

Okay, so that's the West. By saying the West, you're implying that proliferation outside the West is where the danger is, that the bad guys are out there somewhere.

Well, one of the things that we know, and it's always useful to remind the techno-optimists in my world, is that there are evil people, and they will use your tools to hurt people. My favorite example is face recognition. It was not invented in order to constrain the Uyghurs, the minority in China; nobody said, we're going to invent face recognition in order to constrain this minority. But it's happening. All technology is dual use. All of these inventions can be misused, and it's important for the inventors to be honest about that.

Take open source. For those of you who don't follow it, open source is where the source code of a model, and the weights, that is, the numbers that have been calculated, are released to the public. Those immediately go throughout the world, and who do they go to? They go to China, of course. They go to Russia. They go to Iran. They go to Belarus. They go to North Korea. When I was most recently in China, essentially all of the work I saw started with open-source models from the West, which were then amplified. So it sure looks to me like these leading firms, the ones that are putting a billion, eventually ten billion, dollars into this, will be tightly regulated. I worry that the rest will not.

I'll give you another example. Look at this problem of misinformation. I think it's largely unsolvable, and the reason is that the code to generate misinformation is essentially free. Any person, a good person, a bad person, has access to it. It doesn't cost anything, and it produces very, very good images. There are regulatory solutions to that, but the important point is that that cat is out of the bag, or whatever metaphor you want. It's important that these more powerful systems, especially as they get closer to general intelligence, have some limits on proliferation, and that problem is not yet solved.

To follow up on your point about the funding: Fei-Fei Li at Stanford argues that the biggest problem is that there's so much money going into the private sector, and who's their competition to look at what the red lines are? The universities, which don't have a lot of money. So do you really trust these companies to be transparent enough to be regulated by a government that doesn't know what it's talking about?

Really, the correct answer is always trust but verify. The truth is you should trust and you should also verify, and at least in the West, the best way to verify is to use private companies that are set up as verifiers, because they can employ the right people and so forth. In all of our industry conversations, it's pretty clear that the way it will really work is that you'll end up with AI checking AI. It's too hard otherwise. Think about it: you build a new model, it's been trained on new data, you worked really hard on it. How do you know what it knows? You can ask it all the previous questions, but what if it's discovered something completely new and you didn't think to ask about it? These systems can't regurgitate everything they know; you have to ask them chunk by chunk by chunk. So it makes perfect sense that an AI would be the only way to police that, and people are working on that.

On Fei-Fei's argument, she's completely correct. We have the rich private-industry companies and we have the poor universities, which have incredible talent. It should be a major national priority in all of the Western countries to get research funding for the hardware. If you were a physicist 50 years ago, you had to move to where the cyclotrons were, because they were really hard to build and expensive, and by the way, they still are. You need to be near a cyclotron to do your work as a physicist. We never had that in software; our stuff was capital-cheap, not capital-expensive. The arrival of heavy-duty training in our industry is a huge economic change. What's happening is that companies are figuring this out, and the really rich companies, I'm thinking of Microsoft and Google as examples, are planning to spend billions of dollars, because they have the cash, they have big businesses, the money's coming in. That's good. But where does the innovation come from? The universities don't have that kind of hardware, and yet they need access to it.

Okay, let's go to China. On Kissinger's last trip to China, you went with him, and he had a discussion with Xi Jinping on exactly this set of issues. Your idea was to set up a high-level group to discuss the potential and the catastrophic possibilities of AI. Where do the Chinese fit in on this? On the one hand, I've heard you say, and not only you, that we need to go all out to compete with the Chinese, for some of the reasons you just said, because there could be bad players or bad intentions. But where is it appropriate to cooperate, and why?

Well, in the first place, the Chinese should be pretty worried about generative AI, and the reason is that they don't have free speech.
So what do you do when the system generates something that's not permitted in their country? Who do you jail? The computer? The user? The developer? The training data? It's not at all obvious. The Chinese regulators so far have been relatively intelligent about this, but it's obvious, if you think about it, that the spread of these things will be highly restricted in China, because it fundamentally threatens their information monopoly.

Right, that makes sense.

So in our conversations with China, both Dr. Kissinger and I when we were together, and, unfortunately, he has since passed away, and the subsequent meetings, which have been set up as a result of his inspiration, everyone agrees that there's a problem. But at the moment, with China, we're speaking in generalities. There is not a proposal in front of either side that's actionable, and that's okay, because it's complicated. Given the stakes involved, it's actually good to take your time to explain what you view as the problem. Many Western computer scientists are visiting with their Chinese counterparts and trying to say: if you allow this stuff to proliferate, you could end up with a terrorist act, the misuse of these systems for biological weapons, the misuse of these for cyber. The long-term worry is much more existential, but at the moment I think the Chinese conversations are largely constrained to concerns about biothreats and cyber threats.

The long-term threat goes something like this. When I talk about AI today, I talk about it as human-generated: you or I give it, at least in theory, a command. It may be a very long command, and it may be recursive in a sense, but it starts with a human judgment. There is something technically called recursive self-improvement, where the model actually runs on its own and just learns and gets smarter and smarter. When that occurs, or when heterogeneous agent-to-agent interaction occurs, we have a very different set of threats, which we're not ready to talk to anybody about, because we don't understand them. But they're coming.

I'm trying to think about what a dialogue with the Chinese could mean. Would it be something like nuclear non-proliferation, where, if they understand the existential threat, you start at that level? Maybe an IAEA type of thing for proliferation? Do you think that's possible on the political horizon?

It's going to be very difficult to get any actual treaties with China. What I'm engaged with is called a Track Two dialogue, which means that it's informal, it's educational. It's very hard to predict, by the time we get to real negotiations between the US and China, what the political situation will be or what the threat situation will be. A simple requirement would be that if you're going to do training for something that's completely new, you have to tell the other side that you're doing it, so that you don't surprise them. It's like Open Skies during the Cold War. An example would be a no-surprises rule: when a missile is launched anywhere in the world, all the countries acknowledge that they know it's coming, so they don't jump to a conclusion and think it's targeted at them. That strikes me as a basic rule.
Right. Furthermore, if you're doing powerful training, there need to be some agreements around safety. In biology there's a broadly accepted set of layers, BSL-1 to BSL-4, for biosafety containment, which makes perfect sense, because these things are dangerous. Eventually there will be a small number of extremely powerful computers, and I want you to think about them like this: they'll be on an army base, they'll be powered by some nuclear power source on the base, and they'll be surrounded by even more barbed wire and machine guns, because their capability for invention, for power, and so forth exceeds what we want, as a nation, to give either to our own citizens without permission or to our competitors. It makes sense to me that there will be a few of those, and there will be a lot of other systems that are more broadly available.

But you're saying that you would notify the Chinese that those systems exist?

Yep. Again, it's possible that that would be an answer.

And vice versa?

And vice versa. All of these things are mutual. But you want to avoid a situation where a runaway agent in China ultimately gets access to a weapon and launches it, foolishly thinking that it's some game. Because remember, these are not humans; they don't necessarily understand the consequences. These systems are all based on a simple principle of predicting the next word. So we're not talking about high intelligence here, and we're certainly not talking about the kind of emotional understanding, history, and human values that humans have. When you're dealing with a non-human intelligence that does not have the benefit of human experience, what bounds do you put on it? Maybe we can come to some agreements on what those are.

Are they moving as exponentially as we are in the West? With the billions going into generative AI here, is China having the commensurate billions coming in from government or companies?

It's not at the same level in China, for reasons I don't fully understand. My estimate, having now reviewed it at some length, is that they're about two years behind. Two years is not very much, by the way, but they're definitely behind. There are at least four companies that are attempting to do large-scale model training, similar to what I've been talking about, and they're the obvious big tech companies in China. They're hobbled because they don't have access to the very best hardware, which is restricted from export by the Trump and now Biden administrations. Those restrictions are likely to get tougher, not easier, and so as the Nvidia chips and their competitors' chips go up in value, China will be struggling to stay relevant, because their stuff won't move at the same pace.

So you agree with not letting those chips flow to China?

The chips are important because they enable this kind of learning. It's always possible to do it with slower chips; you just need more of them. So it's effectively a cost tax on Chinese development; that's a way to think about it. Is it ultimately dispositive? Does it mean that China can't get there? No. But it makes it harder, and it means it takes them longer to do so.

And we should do that, as the West?

Well, the West has agreed to do it, and I think it's fine. It's a fine strategy. I'm much more concerned about the proliferation of open source, and I'm sure the Chinese would have the same concern. So again, these are the kinds of things that we'll be talking to them about: do you understand that these things can be misused against your government as well as ours? The scenario is this: the open-source folks build guardrails. They fine-tune, and they use a technology called RLHF to eliminate some of the bad answers. But there's plenty of evidence that if I gave you all of the weights and all of that, it would be relatively easy for you to back them out and see the raw power of the model. That's a great concern, and that problem has not been solved.

You mean reverse-engineer it?

Yes, reverse-engineer, and that's not been solved yet.
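The concern about backing the guardrails out of released weights comes down to simple arithmetic: safety fine-tuning is an adjustment to the published numbers, and anyone who can reconstruct that adjustment (for example, from a published base model or adapter) can subtract it out again. The sketch below is a toy illustration with four invented weights, not a real model or a real attack; the values are chosen so the floating-point arithmetic is exact.

```python
# Toy illustration of the open-weights concern: safety fine-tuning is just a
# change to the released numbers, and with the weights public, the change can
# be diffed out. All values are invented (and dyadic, so float math is exact);
# real models have billions of parameters, but the elementwise arithmetic is
# the same.

base = [0.5, -1.25, 0.75, 2.0]           # pretrained ("raw") weights
safety_delta = [0.125, 0.0, -0.25, 0.0625]  # adjustment added by safety fine-tuning

# What actually gets released to the public: the fine-tuned weights.
released = [b + d for b, d in zip(base, safety_delta)]

# Anyone holding `released` plus a copy of the base model (or the published
# fine-tuning adapter) can subtract and recover the raw weights exactly.
recovered = [r - d for r, d in zip(released, safety_delta)]

print(recovered == base)  # the guardrail adjustment is fully reversible
```

This is why releasing weights is different in kind from releasing a hosted service: a server can keep enforcing its guardrails, but published numbers cannot.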