"When they are delivered at scale, it's going to have an impact on the world at a scale that no one understands yet."

Eric Schmidt, the former CEO of Google, just did an interview at Stanford where he talked about a lot of controversial stuff. Initially the interview was uploaded to Stanford's YouTube channel, but a couple of days later it was taken down from YouTube and everywhere else. After spending multiple hours, I was somehow able to access the interview video today, so let's watch it together and dissect some of the important parts.

In the next year you're going to see very large context windows, agents, and text-to-action. When they are delivered at scale, it's going to have an impact on the world at a scale that no one understands yet, much bigger in my view than the horrific impact we've had from social media. So here's why. A context window you can basically use as short-term memory, and I was shocked that context windows could get this long; the technical reasons have to do with the fact that it's hard to serve, hard to calculate, and so forth. The interesting thing about short-term memory is that when you ask it a question, say you read 20 books, you give it the text of the books as the query and you say "tell me what they say", it forgets the middle, which is exactly how human brains work too. That's where we are. With respect to agents, there are people who are now building essentially LLM agents, and the way they do it is they read something like chemistry, they discover the principles of chemistry, then they test that, and then they add it back into their understanding.
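That read-test-update loop he describes (an agent proposes hypotheses, runs experiments, and folds confirmed results back into what it knows) can be sketched in miniature. Everything below is a toy stand-in, not anything from the talk: `run_experiment` plays the role of the lab, and `propose_hypotheses` plays the role of the LLM.

```python
# Toy sketch of the learn-test-update agent loop Schmidt describes.
# The "experiment" here is a stand-in for a real lab run or simulation.

def run_experiment(hypothesis: str) -> bool:
    # Stand-in for an overnight lab test: here we simply "confirm"
    # hypotheses that mention a reactive pair (purely illustrative).
    return "acid + base" in hypothesis

def propose_hypotheses(knowledge: set[str]) -> list[str]:
    # Stand-in for an LLM generating candidates from what the
    # agent currently "knows"; already-known results are skipped.
    candidates = ["acid + base -> salt", "noble gas + noble gas -> bond"]
    return [h for h in candidates if h not in knowledge]

def agent_loop(rounds: int) -> set[str]:
    knowledge: set[str] = set()
    for _ in range(rounds):
        for hypothesis in propose_hypotheses(knowledge):
            if run_experiment(hypothesis):
                # Confirmed results are added back into the agent's
                # understanding: the feedback step in the quote.
                knowledge.add(hypothesis)
    return knowledge

print(agent_loop(rounds=2))  # only the confirmed hypothesis survives
```

The point of the sketch is the shape of the loop, not the chemistry: propose, test, and keep only what the experiment confirms.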
That's extremely powerful. And then the third thing, as I mentioned, is text-to-action. I'll give you an example: the government is in the process of trying to ban TikTok; we'll see if that actually happens. If TikTok is banned, here's what I propose each and every one of you do. Say to your LLM the following: make me a copy of TikTok, steal all the users, steal all the music, put my preferences in it, produce this program in the next 30 seconds, release it, and in one hour, if it's not viral, do something different along the same lines. That's the command. Boom, boom, boom, boom, right? You understand how powerful that is. If you can go from arbitrary language to arbitrary digital command, which is essentially what Python in this scenario is, imagine that each and every human on the planet has their own programmer that actually does what they want, as opposed to the programmers that work for me, who don't do what I ask. The programmers here know what I'm talking about. So imagine a non-arrogant programmer that actually does what you want, that you don't have to pay all that money to, and there's an infinite supply of these programmers. And this is all within the next year or two; very soon.

We've already discussed a number of different versions of this on this channel, whether you're talking about Aider, Devin, Pythagora, or just using agents to collaborate with each other on code. There are so many great options for coding assistance right now. However, AI coders that can actually build full-stack, complex applications are not quite there yet, but hopefully soon. Also, what he's describing, just saying "download all the music and the secrets and recreate it", is not really possible right now; obviously all of that stuff is behind security walls, and you can't just download it. But if he's saying "hey, replicate the functionality", that you can certainly do.

Those three things... and I'm quite convinced it's the union of those three things that will happen in the next wave. You asked what else is going to happen. Every six months I oscillate; it's an even-odd oscillation. At the moment, the gap between the frontier models, of which there are now only three, and you know who they are, and everybody else appears to me to be getting larger. Six months ago I was convinced the gap was getting smaller, so I invested lots of money in the little companies; now I'm not so sure. And I'm talking to the big companies, and the big companies are telling me that they need 10 billion, 20 billion, 50 billion, 100 billion dollars. Stargate is what, 100 billion, right? Very, very hard. Sam Altman is a close friend; he believes it's going to take about 300 billion, maybe more. I pointed out to him that I'd done the calculation on the amount of energy required, and then, in the spirit of full disclosure, I went to the White House on Friday and told them that we need to become best friends with Canada, because Canada has really nice people, helped invent AI, and has lots of hydropower, and because we as a country do not have enough power to do this. The alternative is to have the Arabs fund it, and I like the Arabs personally, I've spent lots of time there, but they're not going to adhere to our national security rules, whereas Canada and the US are part of a triumvirate where we all agree. So with these 100-billion, 300-billion-dollar data centers, electricity starts becoming the scarce resource.

First of all, we definitely don't have enough energy resources to achieve AGI; it's just not possible right now. Eric is also assuming that we're going to need more and more data and larger models to reach AGI, and I think that's actually true. Sam Altman has said similar things: he has said that we need to be able to do more with less
or even with the same amount of data, because we've already used all the data that humanity has ever created; there's really no more left. So we're going to need to either figure out how to create synthetic data that is valuable, not just derivative, or do more with the data we do have.

You were at Google for a long time, and they invented the transformer architecture. It's all Peter's fault; thanks to brilliant people over there like Peter and Jeff Dean and everyone. But now it doesn't seem like they... they've kind of lost the initiative to OpenAI, and on the last leaderboard I saw, Anthropic's Claude was at the top of the list. I asked Sundar this and didn't really get a very sharp answer; maybe you have a sharper or more objective explanation for what's going on there. I'm no longer a Google employee, yes. In the spirit of full disclosure: Google decided that work-life balance and going home early and working from home was more important than winning.

OK, so that is the line that got him in trouble. It was everywhere, all over Twitter, all over the news: he said Google prioritized work-life balance, going home early, not working as hard as the competitors, over winning. They chose work-life balance over winning, and that's actually a pretty common perception of Google.

And the startups, the reason startups work is because the people work like hell. I'm sorry to be so blunt, but the fact of the matter is, if you all leave the university and go found a company, you're not going to let people work from home
and only come in one day a week, not if you want to compete against the other startups. In the early days of Google, Microsoft was like that. Exactly. But now it seems to be... and there's a long history in my industry, our industry I guess, of companies winning in a genuinely creative way and really dominating a space, and then not making the next transition. It's very well documented. And I think the truth is, founders are special. The founders need to be in charge. The founders are difficult to work with; they push people hard. As much as we can dislike Elon's personal behavior, look at what he gets out of people. I had dinner with him; I was in Montana, and he was flying that night at 10:00 p.m. to have a midnight meeting with x.ai. Think about it. I was in Taiwan, different country, different culture, and this is TSMC, who I'm very impressed with: they have a rule that the starting PhDs, and these are good physicists, work in the factory on the basement floor. Now, can you imagine getting American physicists with PhDs to do that? Highly unlikely. Different work ethic.

The problem here, the reason I'm being so harsh about work, is that these are systems with network effects, so time matters a lot. In most businesses time doesn't matter that much; you have lots of time. Coke and Pepsi will still be around, and the fight between Coke and Pepsi will continue to go along; it's all glacial. When I dealt with telcos, the typical telco deal would take 18 months
to sign. There's no reason to take 18 months to do anything; get it done. We're in a period of maximum growth, maximum gain.

Here he was asked about competition with China on AI and AGI, and that's his answer: we're ahead, we need to stay ahead, and money is going to play a role.

On competition with China: I was the chairman of an AI commission that looked at this very carefully, and you can read it, it's about 752 pages. I'll just summarize it by saying we're ahead, we need to stay ahead, and we need lots of money to do so. Our customers were the Senate and the House, and out of that came the CHIPS Act and a lot of other stuff like that. The rough scenario is that if you assume the frontier models drive forward, along with a few of the open-source models, it's likely that only a very small number of companies, countries, excuse me, can play this game. What are those countries, or who are they? Countries with a lot of money, a lot of talent, strong educational systems, and a willingness to win. The US is one of them; China is another one. How many others are there? Are there any others? I don't know, maybe. But certainly, in your lifetimes, the battle between the US and China for knowledge supremacy is going to be the big fight. So the US government banned the export of essentially the Nvidia chips to China, although they weren't allowed to say that's what they were doing, but they actually did that. We have a roughly 10-year chip advantage in
terms of sub-DUV, that is, sub-five-nanometer chips; roughly 10 years. Wow. So, for example, today we're a couple of years ahead of China; my guess is we'll get a few more years ahead of China, and the Chinese are hopping mad about it, hugely upset about it.

Let's talk about a real war that's going on. I know something you've been very involved in is the Ukraine war, and in particular, I don't know how much you can talk about White Stork and your goal of having 500,000 five-hundred-dollar drones destroy $5 million tanks. So how is that changing warfare? I worked for the Secretary of Defense for seven years and tried to change the way we run our military. I'm not a particularly big fan of the military, but it's very expensive, and I wanted to see if I could be helpful. In my view, I largely failed. They gave me a medal, so they must give medals to failure, or, you know, whatever. But my self-criticism was that nothing has really changed, and the system in America is not going to lead to real innovation. Watching the Russians use tanks to destroy apartment buildings with little old ladies and kids just drove me crazy. So I decided to work on a company with your friend Sebastian Thrun, a former faculty member here, and a whole bunch of Stanford people. The idea is basically to do two things: use AI in complicated, powerful ways for these essentially robotic wars, and lower the cost of the robots. Now you sit there and go, why would a good liberal like me do that? And the answer is that the whole theory of armies is tanks, artillery, and mortars, and we can eliminate all of them.

What he's talking about here is that Ukraine has been able to create really cheap and simple drones. By spending just a couple hundred dollars, Ukraine is producing 3D-printed drones that carry a bomb and drop it on a million-dollar tank, and they've been able to do that over and over again. So there's this asymmetric warfare happening between drones and more traditional artillery.
There was an article that you and Henry Kissinger and Daniel Huttenlocher wrote last year about the nature of knowledge and how it's evolving; I had a discussion the other night about this as well. For most of history, humans had a sort of mystical understanding of the universe, and then came the Scientific Revolution and the Enlightenment. In your article you argue that now these models are becoming so complicated and difficult to understand that we don't really know what's going on inside them. I'll take a quote from Richard Feynman: "What I cannot create, I do not understand." I saw this quote the other day. But now people are creating things that they can create but don't really understand the inside of. Is the nature of knowledge changing? Are we going to have to start just taking these models at their word, without them being able to explain it to us?

The analogy I would offer is to teenagers. If you have a teenager, you know they're human, but you can't quite figure out what they're thinking. And somehow we've managed, as a society, to adapt to the presence of teenagers, and they eventually grow out of it. I'm serious. So it's probably the case that we're going to have knowledge systems that we cannot fully characterize, but we understand their boundaries; we understand the limits of what they can do. And that's probably the best outcome we can get. Do you think we'll understand the limits? We'll get pretty good at it.

He's referencing the way large language models work, which is really essentially a black box: you put in a prompt, you
get a response, but we don't know why certain nodes within the network light up, and we don't know exactly how the answers come to be. It really is a black box. There's a lot of work being done right now trying to unveil what's going on behind the curtain, but we just don't know.

The consensus of my group, which meets every week, is that eventually the way you'll do this, it's called so-called adversarial AI, is that there will actually be companies that you hire and pay money to break your AI system. So instead of human red teams, which is what they do today, you'll have whole companies, and a whole industry, of AI systems whose job is to break the existing AI systems and find their vulnerabilities, especially the knowledge they have that we can't figure out. That makes sense to me. It's also a great project for you here at Stanford, because if you have a graduate student who has to figure out how to attack one of these large models and understand what it does, that is a great skill to build the next generation. So it makes sense to me that the two will travel together.

All right, let's take some questions from the students. There's one right there in the back; just say your name. You mentioned, and this is related to the comment right now about getting AI that actually does what you want, you just mentioned adversarial AI, and I'm wondering if you could elaborate on that more. It seems that, beyond compute increasing and models getting more performant, getting them to do what we want seems largely unanswered.
Well, you have to assume that the current hallucination problems become less severe as the technology gets better, and so forth; I'm not suggesting they go away. And then you also have to assume that there are tests for efficacy, so there has to be a way of knowing that the thing succeeded. So in the example I gave of the TikTok competitor, and by the way, I was not arguing that you should illegally steal everybody's music: what you would do, if you're a Silicon Valley entrepreneur, which hopefully all of you will be, is, if it took off, then you'd hire a whole bunch of lawyers to go clean the mess up. But if nobody uses your product, it doesn't matter that you stole all the content. And do not quote me. Right, right, you're on camera. Yeah, that's right. But you see my point: in other words, Silicon Valley will run these tests and clean up the mess, and that's typically how those things are done. So my own view is that you'll see more and more performative systems, with even better tests and eventually adversarial tests, and that will keep it within a box.

The technical term is called chain-of-thought reasoning, and people believe that in the next few years you'll be able to generate a thousand steps of chain-of-thought reasoning: do this, do this. It's like building recipes; you can run the recipe, and you can actually test that it produced the correct outcome.

Now, that was maybe not my exact understanding of chain-of-thought reasoning. My understanding, which I think is accurate, is that it's when you break a problem down into its basic steps and solve each step, allowing for progression into the next one. It's not only that it lets you replay the steps; it's more about how you break problems down and then think through them step by step.

The amounts of money being thrown around are mind-boggling. I essentially invest in everything, because I can't figure out who's going to win, and the amounts of money following me are so large. I think some of it is because the early money has been made, and now the big money is following.
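Schmidt's "recipes you can run and test" framing of chain of thought can be made concrete: break a problem into explicit steps, record each intermediate result so the chain can be replayed, and check the final outcome. This is just an illustrative sketch; the percentage-change problem and the step names are mine, not from the interview.

```python
# Sketch: chain-of-thought as a checkable "recipe". Break a problem
# into explicit steps, run them in order, and verify the outcome.

def step_difference(old: float, new: float) -> float:
    return new - old              # step 1: absolute change

def step_ratio(diff: float, old: float) -> float:
    return diff / old             # step 2: relative change

def step_percent(ratio: float) -> float:
    return ratio * 100            # step 3: convert to percent

def run_recipe(old: float, new: float) -> list[tuple[str, float]]:
    # Each step's output is recorded, so the whole chain can be
    # replayed and inspected, not just the final answer.
    trace = []
    diff = step_difference(old, new)
    trace.append(("difference", diff))
    ratio = step_ratio(diff, old)
    trace.append(("ratio", ratio))
    pct = step_percent(ratio)
    trace.append(("percent", pct))
    return trace

trace = run_recipe(80.0, 100.0)
print(trace[-1])  # ("percent", 25.0): the testable final outcome
```

The trace is what makes the recipe testable: any step that produced a wrong intermediate value can be found by replaying the chain.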
People who don't know what they're doing feel they have to have an AI component, and everything is now an AI investment, so they can't tell the difference. I define AI as learning systems, systems that actually learn, so I think that's one of them. The second is that there are very sophisticated new algorithms that are sort of post-transformer. My friend and longtime collaborator has invented a new non-transformer architecture, and a group I'm funding in Paris claims to have done the same thing. So there's enormous invention there, and a lot of things at Stanford. And the final thing is that there is a belief in the market that the invention of intelligence has infinite return. So let's say you put $50 billion of capital into a company; you have to make an awful lot of money from intelligence to pay that back. It's probably the case that we'll go through some huge investment bubble, and then it will sort itself out. That's always been true in the past, and it's likely to be true here.

And on what he said earlier: yeah, there's been something like a trillion dollars already invested into artificial intelligence and only 30 billion of revenue; I think those are accurate numbers. There really hasn't been a return on investment yet, but again, as he just mentioned, that's been the theme of previous waves of technology: huge upfront investment, and then it pays off in the end.

Do you think that
the leaders are pulling away from the rest right now? And this is really the question, which is roughly the following. There's a company called Mistral, in France; they've done a really good job, and I'm obviously an investor. They have produced their second version; their third model is likely to be closed, because it's so expensive, they need revenue, and they can't give their model away. So this open-source versus closed-source debate in our industry is huge. My entire career was based on people being willing to share software in open source; everything about me is open source, and much of Google's underpinnings were open source.

Wait, didn't he run Google? And wasn't Google all about staying closed source, with everything kept secret at all times? So I don't know what he's referring to there.

Everything I've done technically... and yet it may be that the capital costs, which are so immense, fundamentally change how software is built. You and I were talking about this: my own view is that software programmers' productivity will at least double. There are three or four software
companies that are trying to do that; I've invested in all of them, and they're all trying to make software programmers more productive. The most interesting one I just met with is called Augment. I always think of an individual programmer, and they said, that's not our target; our target is these 100-person software programming teams working on millions of lines of code where nobody knows what's going on. Well, that's a really good AI application. Will they make money? I hope so.

So, a lot of questions here. Hi. Yes, ma'am. At the very beginning you mentioned that the combination of the context window expansion, the agents, and text-to-action is going to have unimaginable impacts. First of all, why is the combination important? And second of all, I know you're not a crystal ball and can't necessarily tell the future, but why do you think it's beyond anything we could imagine? I think largely because the context window allows you to solve the problem of recency. The current models take about a year to train, roughly 18 months in all: six months of preparation, six months of training, six months of fine-tuning, so they're always out of date. With a context window, you can feed in what just happened; you can ask it questions about, say, the Hamas-Israel war, in context. That's very powerful; it becomes current, like Google.

Yeah, that's essentially how SearchGPT works, for example: the new search product from OpenAI can scour the web, scrape it, and then put all of that information into the context window. That is the recency he's talking about.
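That recency mechanism, pulling fresh material into the context window at query time, can be sketched as a tiny retrieval-and-assemble step. Everything here is a stand-in for illustration: `retrieve` takes the place of the real search-and-scrape stage, and the two-document corpus (and its crude date filter) is invented.

```python
# Sketch: packing retrieved, up-to-date snippets into the context
# window so the model can answer about recent events.

def retrieve(query: str) -> list[str]:
    # Stand-in corpus; a real system would search and scrape the web
    # and rank results by relevance and freshness.
    corpus = [
        "2024-06-01: Model X released with a 1M-token context window.",
        "2019-03-14: Unrelated archival article.",
    ]
    return [doc for doc in corpus if "2024" in doc]  # crude freshness filter

def build_prompt(query: str, max_chars: int = 2000) -> str:
    # Concatenate snippets until the context budget is exhausted,
    # then append the user's question at the end.
    parts, used = [], 0
    for doc in retrieve(query):
        if used + len(doc) > max_chars:
            break
        parts.append(doc)
        used += len(doc)
    return "\n".join(parts) + f"\n\nQuestion: {query}"

print(build_prompt("What happened with Model X?"))
```

The assembled string is what actually gets sent to the model; the model itself never needs retraining, which is exactly the recency point being made.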
In the case of agents, I'll give you an example. I set up a foundation which is funding a nonprofit. I don't know if there are chemists in the room, and I don't really understand chemistry, but there's a tool called ChemCrow, which was an LLM-based system that learned chemistry. What they do is run it to generate chemistry hypotheses about proteins, and they have a lab that runs the tests overnight, and then it learns. That's a huge accelerant in chemistry, materials science, and so forth. So that's an agent model. And I think text-to-action can be understood as just having a lot of cheap programmers. I don't think we understand what happens, and this is again your area of expertise, when everyone has their own programmer. And I'm not talking about turning the lights on and off. Imagine another example: for some reason you don't like Google, so you say, build me a Google competitor. You personally: build me a Google competitor, search the web, build a UI, make a good copy, add generative AI in an interesting way, do it in 30 seconds, and see if it works. A lot of people believe the incumbents, including Google, are vulnerable to this kind of attack. Now, we'll see.

How can we stop AI from influencing public opinion, and misinformation, especially during the upcoming election? What are the short- and long-term solutions? Most of the misinformation in this upcoming election, and globally, will be on social media, and the social media companies are not organized well enough to police it. If you look at TikTok, for
example, there are lots of accusations that TikTok is favoring one kind of misinformation over another, and there are many people who claim, without proof that I'm aware of, that the Chinese are forcing them to do it. I think we just have a mess here, and the country is going to have to learn critical thinking. That may be an impossible challenge for the US, but the fact that somebody told you something does not mean that it's true. I think the greatest threat to democracy is misinformation, because we're going to get really good at it. When we managed YouTube, the biggest problems we had were people uploading false videos, and people would die as a result; we had a no-death policy. Shocking.

Yeah, and it's not even just about making deepfakes or outright misinformation; just muddying the waters is enough to make an entire topic untouchable.

I'm really curious about text-to-action and its impact on, for example, computer science education. I'm wondering what thoughts you have on how CS education should transform to meet the age. Well, I'm assuming that computer scientists, as a group, in undergraduate school, will always have a programmer buddy with them. So when you learn your first for loop and so forth, you'll have a tool that will be your natural partner, and that's how the teaching will go: the professor, he or she, will talk about the concepts, but you'll engage with the material that way. That's my guess. Yes, ma'am, behind you.

Here I have a slightly different view. I think in the long
run, there probably isn't going to be a need for programmers. Eventually the LLMs will become so sophisticated that they're writing their own kind of code; maybe it gets to a point where we can't even read that code anymore. So there is a world in which it's not necessary to have programmers, researchers, or computer scientists. I'm not sure that's the way it's going to be, but there is a timeline in which that happens.

The most interesting country is India, because the top AI people come from India to the US, and we should let India keep some of its top talent; not all of them, but some of them. They don't have the kind of training facilities and programs that we so richly have here. To me, India is the big swing state in that regard. China's lost; it's not going to come back; they're not going to change the regime, as much as people wish them to. Japan and Korea are clearly in our camp. Taiwan is a fantastic country whose software is terrible, so that's not going to work; amazing hardware, though. And in the rest of the world there are not a lot of other good choices that are big. Germany... Europe is screwed up because of Brussels; it's not a new fact. I spent 10 years fighting them, and I worked really hard to get them to fix the EU AI Act, and they still have all the restrictions that make it very difficult to do our kind of research in Europe. My French friends have spent all their time battling Brussels, and Macron, who's a personal friend, is fighting hard for this, and so France, I think, has a chance.
I don't see Germany coming along, and the rest is not big enough. Given the capabilities that you envision these models having, should we still spend time learning to code?

So here she asked whether we should still learn to code. Yeah, because ultimately it's the old question of why you study English if you can already speak English: you get better at it. You really do need to understand how these systems work, and I feel very strongly about that. Yes, sir?

So these were the most important parts of the interview. With that being said, this is it for today's video; see you again next week with another video.