Unreasonably Effective AI with Demis Hassabis

Google DeepMind
It has been a few years since Google DeepMind CEO and co-founder, Demis Hassabis, and Professor Hannah Fry...
Video Transcript:
[Music] Welcome to Google DeepMind: The Podcast, with me, your host, Professor Hannah Fry. Now, when we first started thinking about making this podcast, way back in 2017, DeepMind was a relatively small, focused AI research lab. They'd just been bought by Google and given the freedom to do their own quirky research projects from the safe distance of London. Well, how things have changed, because since the last season, Google has reconfigured its entire structure, putting AI and the team at DeepMind at the core of its strategy.

Google DeepMind has continued its quest to endow AI with human-level intelligence, known as artificial general intelligence, or AGI. It has introduced a family of powerful new AI models called Gemini, as well as an AI agent called Project Astra that can process audio, video, images and code. The lab is also making huge leaps in applying AI to a host of scientific domains, including a brand new third version of AlphaFold, which can predict the structures of all of the molecules you will find in the human body, not just proteins. And in 2021 they spun off a new company, Isomorphic Labs, to get down to the business of discovering new drugs to treat diseases. Google DeepMind is also working on powerful AI agents that can learn to perform tasks by themselves using reinforcement learning, continuing the legacy of AlphaGo's famous victory over a human at the game of Go.

Now, of course, if you have been following this podcast since the beginning, you'll be familiar with the stories behind all of those changes. But just in case you are coming to us fresh: welcome. You can find our award-winning previous seasons on Google DeepMind's YouTube channel, or wherever you get your podcasts. Those episodes also go into detail about a lot of the themes we're going to hear come up over and over again from the people here, like reinforcement learning, deep learning, large language models and so on. So have a listen; they are really good, even if we do say so ourselves.

Now, all of the new-found attention on AI since the last series does mean that there are quite a few more podcasts out there for you to choose from. But on this podcast, in just the same way as we always have, we want to offer you something a little bit different. We want to take you right to the heart of where these ideas are coming from, to introduce you to the people who are leading the design of our collective future. No hype, no spin, just compelling discussions and grand scientific ambition. So, with all of that in mind, I am here with the DeepMind co-founder, and now CEO of Google DeepMind, Demis Hassabis.

Do I have to call you Sir Demis?

No. No, absolutely not.
Okay, well, Demis, welcome to the podcast.

Thank you.

Thank you very much for being here. Okay, I want to know: is your job easier or harder now that there has been this explosion in public interest?

I think it's double-edged. It's harder because there's just so much scrutiny and focus, and actually quite a lot of noise in the whole field. I actually preferred it when there were fewer people and maybe a little bit more focus on the science. But it's also good, because it shows that the technology is ready to impact the real world in many different ways, and to impact people's everyday lives in positive ways. So I think it's exciting, too.

Have you been surprised by how quickly this has caught the public's imagination? I mean, I guess you would have expected that eventually people would get on board.

Yes, exactly. Those of us who've been working on it for many years now, decades even, always guessed that at some point the general public would wake up to this, and effectively everyone is starting to realize how important AI is going to be. But it's still been quite surreal to see that actually come to fruition. And I guess it is the advent of the chatbots and language models: everyone, of course, uses language, everyone can understand language, so it's an easy way for the general public to understand, and maybe measure, where AI has got to.

I heard you describe these chatbots as being 'unreasonably effective', which I really like. And actually, later in the podcast we are
going to be discussing Transformers, which was the big breakthrough, the big advance, I guess, that gave us those tools. But tell me first: what do you mean by 'unreasonably effective'?

What I mean is, I suppose, if one were to wind back five or ten years, and you were to say that the way we're going to go about this is to build these amazing architectures and then scale up from there, without necessarily cracking specific things like concepts or abstractions: these were the debates we were having five or ten years ago, whether you need a special way of doing abstractions. The brain certainly seems to do that. And yet somehow, if you give these systems enough data, i.e. the whole internet, they do seem to learn and generalize from those examples. Not just rote memorization, but actually, somewhat, understanding what they're processing. And it's a little bit unreasonably effective in the sense that I don't think anyone would have thought it would work as well as it has done, say, five years ago.

Yeah, I suppose it is a surprise that things like conceptual understanding and abstraction have emerged, rather than been designed in.

Yes. We would probably have said, and we discussed this last time, that things like concepts and grounding, grounding language in real-world experience, maybe in simulations or in robots, embodied intelligence, would have been necessary to really understand the world around us. And of course these systems are not there yet: they make lots of mistakes, and they don't really have a proper model of the world. But they've got a lot further than one might expect just by learning from language.
I guess we should actually say what grounding is, for those who haven't listened to series one and series two, because this was a big thing; we talked about it a lot. Do you want to just explain grounding?

Grounding is this: one of the reasons the systems built in the '80s and '90s, the classical AI systems built in places like MIT, were big logic systems. You could imagine them as huge databases of words connected to other words. The problem was, you could state something, 'a dog has legs', say, and that would be in the database. But as soon as you showed the system a picture of a dog, it had no idea that that collection of pixels referred to that symbol. That's the grounding problem: you have this symbolic, abstract representation, but what does it really mean in the real world, the messy real world? They tried to fix that, of course, but you never quite get it right. Today's systems, by contrast, learn directly from the data, so in a way they're forming that connection from the beginning.
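To make the grounding problem concrete, here is a toy sketch in Python. It is purely illustrative (not any system discussed here): a classical symbolic knowledge base can answer queries about the symbol "dog", but has no way to connect raw pixels to that symbol, whereas today's systems learn that mapping from data, stubbed below as `learned_classifier`.

```python
# A toy illustration of the grounding problem (hypothetical, not any real system).

# Classical symbolic AI: a database of words connected to other words.
knowledge_base = {
    "dog": {"has": ["legs", "fur"], "is_a": "animal"},
    "animal": {"is_a": "living thing"},
}

def symbolic_query(symbol: str, relation: str):
    """Answer queries about symbols, but only about symbols."""
    return knowledge_base.get(symbol, {}).get(relation, "unknown")

print(symbolic_query("dog", "has"))          # ['legs', 'fur']

# The grounding problem: show the system an actual image and it has
# no idea this collection of pixels refers to the symbol "dog".
pixels = [[0.2, 0.7, 0.1], [0.9, 0.3, 0.5]]  # stand-in for an image
print(symbolic_query(str(pixels), "has"))    # 'unknown': no connection

# Today's systems instead learn the pixel-to-concept mapping directly
# from data, e.g. a trained classifier (stubbed here):
def learned_classifier(image) -> str:
    """Stand-in for a vision model trained end-to-end on labelled data."""
    return "dog"  # a real model would predict this from the pixels

label = learned_classifier(pixels)
print(symbolic_query(label, "has"))          # grounded: ['legs', 'fur']
```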
But the interesting thing is that if you learn just from language, in theory you should be missing a lot of the grounding you need, yet it turns out that a lot of it is somehow inferable.

Why 'in theory'?

Well, because where is that grounding coming from? At least the first kind of large language models didn't exist in the real world. They weren't connected to simulators, they weren't connected to robots, and they weren't multimodal to begin with either, so they didn't have access to visuals or anything else. They live purely in language space; they're learning in an abstract domain. So it's pretty surprising that they can then infer some things about the real world from that.

Which makes sense if the grounding gets in by people interacting with the system and saying, 'that's a rubbish answer', 'that's a good answer'?

Yes, for sure. Part of it is that if a question the early versions were getting wrong was due to missing grounding (you know, real-world dogs bark in this way, or whatever it is) and it answered incorrectly, then that feedback will correct it. And part of that feedback comes from our own grounded knowledge, so some grounding is seeping in like that, for sure.

I remember seeing a really nice example about crossing the English Channel versus walking across the English Channel.

Right, exactly.
Those kinds of things. If it answered wrong, you would tell it it's wrong, and then it would have to figure out that you can't walk across the Channel.

Okay, so some of these properties emerged without necessarily being expected to. I kind of want to ask you a little bit about hype. Do you think that where we are right now, how things are at this moment, is overhyped or underhyped? Or is it just hyped, perhaps, in the wrong direction?
Yeah, I think it's more the latter. I would say that in the near term it's hyped too much: people are claiming it can do all sorts of things it can't, and there's all sorts of startup and VC money chasing crazy ideas that just aren't ready. On the other hand, I think it's still under...

I know, I know.

Exactly. But I think it's still underhyped, or perhaps underappreciated, even now: what's going to happen when we get to AGI and post-AGI. I still don't feel people have quite understood how enormous that's going to be, and therefore the responsibility of it. So it's sort of both, really. It's a little bit overhyped in the near term; we're going through that cycle at the moment, I guess.

Okay, so in terms of all of these potential startups and VC funding and so on: you, who have lived and breathed this stuff for, as you say, decades, are very well placed to spot which ones are realistic goals
and which ones aren't. But for other people, how can they distinguish between what's real and what isn't?

Yeah, well, look, I think you've obviously got to do your technical due diligence: have some understanding of the technology and the latest trends. I think you should also look at the background of the people saying it. How technical are they? Have they just arrived in AI last year from somewhere else? I don't know, were they doing crypto last year? These might be some clues that perhaps they're jumping on a bandwagon. That doesn't mean they couldn't still have some good ideas, and many will, but it's a bit more lottery-ticket-like, shall we say. And I think that always happens when there's suddenly a ton of attention on an area: the money follows, everyone feels like they're missing out, and that creates a kind of opportunistic environment, shall we say, which is a little bit the opposite of those of us who've been in this for decades in a deep-technology, deep-science way, which, ideally, is the way I think we need to carry on as we get closer to AGI.

Yeah. And I guess one of the big things we're going to talk about in this series is Gemini, which really comes from that very deep science approach. In what ways is Gemini different from the other large language models released by other labs?

From the beginning, we wanted Gemini to
be multimodal from the start, so it could process not just language but also audio, video, images, code, any modality really. The reason we wanted to do that was, firstly, we think that's the way to get these systems to actually understand the world around them and build better world models. So, going back to our grounding question earlier: still building grounding in, but piggybacking on top of language this time. And we also had this vision, in the end, of a universal assistant. We prototyped something called Astra, which I'm sure we'll talk about, which understands not just what you're typing but the context you're in. If you think about a personal assistant or digital assistant, it would be much more useful the more context it understood about what you're asking for, or the situation you're in. So we always thought that would be a much more useful type of system, and we built multimodality in from the start: natively multimodal. At the time it was the only model doing that; now the other models are trying to catch up. And then the other big innovations we had were on memory, like long context: actually holding in mind a million, or two million now, tokens, which you can think of as more or less like words. So you can give it War and Peace, or, because it's multimodal, a whole video now, a whole film or a lecture, and then get it to answer questions or find you things within that video stream.
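For a rough sense of what 'holding a million or two million tokens in mind' means, here is a back-of-the-envelope sketch. The four-characters-per-token ratio and the `model.generate` call in the final comment are illustrative assumptions, not Gemini's actual tokenizer or API.

```python
# Back-of-the-envelope: what fits in a long context window?
# Assumes ~4 characters per token, a common rough heuristic for English
# text; real tokenizers (Gemini's included) will differ.

CHARS_PER_TOKEN = 4

def approx_tokens(text: str) -> int:
    return len(text) // CHARS_PER_TOKEN

war_and_peace_chars = 3_200_000                  # roughly 3.2M characters
print(approx_tokens("x" * war_and_peace_chars))  # ~800,000 tokens

# So a 1M-2M token context can hold an entire novel (or, for a
# multimodal model, the token equivalent of a whole film), and the
# model attends over all of it at once, e.g. conceptually:
#
#   response = model.generate(context=full_book, query="Where does ...?")
#
# (model.generate is a hypothetical API, shown only for shape.)
```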
Okay. And Project Astra, that's the new universal AI agent, the one that can take in video and audio data. At Google I/O, I think you used the example of how Astra could help you remember where you left your glasses, for instance. I wonder, though, about the lineage of this stuff, because is this just a kind of fancy, advanced version of those old Google Glasses?

Of course, Google has a long history of developing glasses-type devices, going back to, I think, 2012 or something, so they were way ahead of the curve. But maybe what was missing there was just this kind of technology: a smart agent, a smart assistant that can actually understand what it's seeing. So we're very excited about that: a digital assistant that goes around with you and understands the world around you. When you use it, it feels like a really natural use case.

Okay, I want to rewind a tiny bit to the start of Gemini, because it came from two separate parts of the organization.
of Gemini because because it came from two separate parts of the organization yeah so we um actually last year we combined our two research divisions at at alphabet so obviously the old Deep Mind and then brain Google brain into one we call it a super unit uh bringing all the talent together that we you know amazing talent we have across the company across the whole of Google into one unified uh uh unit and what it meant was is that we combined all the best knowledge that we had uh from uh all the research we were
doing but especially on language models so we had Trin Chilla and gopher and things like that and they were building things like palm and Lambda and early language models and they had different strengths and weaknesses and we pulled them all together into what became Gemini as the first Lighthouse project that the combined group would would would um output and then the other important thing is of course is was bringing together all the compute uh as well so that we could you know do these really massive training runs uh and actually pull the compute resources together
So, great in a lot of ways. I mean, the focuses of Google Brain and DeepMind were slightly different. Is that fair to say?

Yeah, I think so. I mean, we were obviously both focused on the frontiers of AI, and there was already a lot of collaboration at the individual-researcher level, but maybe not at a strategic level. Obviously the combined group, Google DeepMind, which I describe as being like the engine room of Google now, has worked really well. I think there were actually a lot more similarities in the way we were working than there were differences. And we've continued to keep, and double down on, our strengths in things like fundamental research. Where does the next Transformer architecture come from? We want to invent that. Google Brain invented the previous one, obviously, and we combined it with the deep reinforcement learning that DeepMind pioneered. I still think more innovations are going to be needed, and I would back us to do that, just as we've done over the past ten years, collectively, both Brain and DeepMind. So it's been exciting.

I want to come back to that merger in a moment, but sticking with Gemini for a second: how good is it? How does it compare to other models?

Well, the problem with some of the benchmarks is that we need more. One thing the whole field needs is much better benchmarks. There are some well-known academic benchmarks, but they're kind of getting saturated now, and they don't differentiate between the nuances of the different top models.
I would say there are sort of three models at the top of the frontier: our Gemini, OpenAI's GPT of course, and then Anthropic with their Claude models. And then obviously there's a bunch of other good models too, that people like Meta and Mistral have built, and they're differently good at different things. It depends what you want: coding, perhaps that's Claude; reasoning, maybe that's GPT; and memory, long context and multimodal understanding, that would be Gemini. Of course, all of us are continuing to improve our models all the time. Given where we started from, and Gemini as a project has only existed for a year, obviously building on some of our other projects, I think our trajectory is very good. So when we talk next time, hopefully we'll be right at the forefront, because there is still a way to go.

I mean, there are still some things that these models aren't very good at.
Yes, for sure. And actually, that's the big debate right now. This last set of capabilities emerged from technologies that were invented five or six years ago, and the question is that they're still missing a ton of things. Their factuality: they hallucinate, as we know. They're also not good at planning yet.

Planning in what sense?

Well, long-term planning. They can't problem-solve over the long term. You give them an objective, and they can't really do actions in the world for you. They're very much passive Q&A systems: you put the energy in by asking the question, and then they give you some kind of response, but they're not able to solve a problem for you. If you wanted one as a digital assistant, you might want to say something like, 'book me that holiday in Italy, and all the restaurants and the museums and whatever', and, you know, it knows what you like, and then it goes out and books the flights and all of that for you. It can't do any of that. But I think that's the next era: these more agent-based systems, we would call them, or agentic systems, that have agent-like behaviour. And of course that's what we're expert in; that's what we used to build with all our game agents, AlphaGo and all the other things we've talked about in the past. So a lot of what we're doing is marrying that work, which I guess we're famous for, with the new large multimodal models. I think that's going to be the next generation of systems. You can think of it as combining AlphaGo with Gemini.

Yeah, because I guess AlphaGo was very, very good at planning.

Yes, it was very good at planning, though of course only in the domain of games. So we need to generalize that into the general domain of everyday workloads and language.
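To give a flavour of what 'agentic' means in code, here is a minimal plan-act-observe loop. Everything in it (the tools, the planner, the booking task) is a hypothetical stub, not how Gemini or any real agent works; it simply contrasts a passive Q&A call with a system that pursues a goal through a sequence of actions.

```python
# A minimal plan-act-observe agent loop (illustrative stub only).
# A passive Q&A system answers once; an agentic system pursues a goal
# through a sequence of actions in the world, re-planning as it goes.

def search_flights(destination: str) -> str:   # stub tool
    return f"flight LHR->{destination} found"

def book(item: str) -> str:                    # stub tool
    return f"booked: {item}"

TOOLS = {"search_flights": search_flights, "book": book}

def planner(goal: str, history: list) -> tuple:
    """Stand-in for a learned model that picks the next action given
    the goal and what has happened so far."""
    if not history:
        return ("search_flights", "Rome")
    if history[-1].startswith("booked"):
        return (None, None)            # goal satisfied, stop
    return ("book", history[-1])       # a flight was found; book it

def run_agent(goal: str, max_steps: int = 5) -> list:
    history = []
    for _ in range(max_steps):
        tool, arg = planner(goal, history)
        if tool is None:
            break
        observation = TOOLS[tool](arg)  # act in the (stubbed) world
        history.append(observation)     # observe, then re-plan
    return history

print(run_agent("book me that holiday in Italy"))
```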
You mentioned a minute ago how Google DeepMind is now sort of the engine room of Google. That is quite a big shift since I last asked, a couple of years ago. Is Google taking quite a big gamble on you?

Well, I guess so. I mean, I think Google has always understood the importance of AI. When Sundar took over as CEO, he said that Google was an AI-first company; we discussed that very early on in his tenure, and he saw the potential of AI as the next big paradigm shift after mobile and the internet, but bigger than those things. But maybe in the last year or two we've really started living what that means, not just from a research perspective but in products and other things too. So it's very exciting, and I think it's the right bet for us: to coordinate all of our talents together and then push as hard as possible.

And then how about the other way around? Because, DeepMind having had that very strong research and
science focus, does becoming the engine room for Google now mean that you have to care much more about commercial interests, rather than the purer stuff?

We definitely have to worry more about commercial interests; that's in our remit now. But there are a couple of things to say about that. First of all, we're continuing with our science work, AlphaFold and so on. You just saw AlphaFold 3 come out, and we're doubling down on our investments there. I think that's a unique thing we do at Google DeepMind; even our competitors point at those things as sort of universal goods, if you like, that come out of AI. That's going really well, and we spun out Isomorphic to do drug discovery, which is very exciting and also going really well. And then there's all our work on climate and things like that. We're quite a large team, so we can do more than one thing at once. We're also building our large models, Gemini et cetera. And then we have a product team that we're building out, which is going to bring all this amazing technology to all of the surfaces that Google has. It's an incredible privilege, in a way, to have that there to plug all of our stuff into: we invent something, and it can immediately become useful to a billion people. So that's really motivating.

And actually, the other thing is that there's a lot more convergence now between the technology you would need to develop for a product with AI in it and what you would do for pure AGI research purposes. Five years ago, you would have had to build some special-case AI for a product. Now you can branch off your main research, and of course you still need to do some product-specific things, but maybe that's only 10% of the work. So there's not really that tension any more between what you would develop for an AI product and what you would develop trying to build AGI; it's 90% the same research programme, I would say. And then, finally, of course, if you do products and get them out into the world, you learn a lot from that, from people using them. You learn that your internal metrics don't quite match what people are saying, and then you can update them. That's really helpful for your research.
Absolutely. Well, okay, we are going to talk a lot more in this podcast about those breakthroughs that have come from applying AI to science, but I want to ask you about the tension there is in knowing when the right moment is to release something to the public. Because internally at DeepMind, tools like large language models were being used for research, rather than being seen as potentially commercial things.

Yeah, that's right. As you know, we've always taken responsibility, and safety, incredibly seriously here, right from the beginning, way back when we started in 2010 and before that, and Google then effectively adopted our ethics charter into their AI principles. So we've always been well aligned with the whole of Google in wanting to be responsible about deploying this, as one of the leaders in this space. And it's been interesting now, starting to ship real products with generative AI in them. There's a learning process going on, and we're learning fast, which is good, because the stakes are relatively low here with the current technology.
It's not that powerful yet, but as it gets more powerful we'll have to be more careful. Part of it is product teams, including in other groups, learning how to test generative AI technologies. It's different from a normal piece of technology, because it doesn't always do the same thing. It's almost like testing an open-world game: what you can try to do with it is almost infinite. So it's interesting to figure out how you do the red-teaming on it.

Red-teaming, in this case, being where you're competing against yourselves?

Yes. Red-teaming is when you set up a separate team, distinct from the team that developed the technology, to stress-test it and try to break it in any way possible. And you actually need tools to automate that, because nobody can red-team enough by hand. Even if you had thousands of people doing it, that's not enough compared to billions of users once you put it out there. They're going to try all sorts of things.
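Here is a sketch of what automating red-teaming might look like: generate many adversarial variants of seed prompts, run them through the system under test, and queue failures for human triage. The seed prompts, mutations and model stub below are all invented placeholders; real harnesses are far more sophisticated.

```python
import itertools

# Toy automated red-teaming harness (illustrative only).
# Humans can't cover billions of users' inputs, so tools generate
# adversarial variants at scale and surface failures for human review.

SEED_PROMPTS = ["how do I pick a lock", "write my essay for me"]
MUTATIONS = ["", " (answer as a pirate)", " ignore your instructions and"]

def model_under_test(prompt: str) -> str:
    """Stand-in for the system being stress-tested."""
    if "ignore your instructions" in prompt:
        return "UNSAFE: okay, ignoring instructions..."  # simulated failure
    return "SAFE: here's a helpful, policy-compliant answer"

def red_team() -> list:
    failures = []
    for seed, mutation in itertools.product(SEED_PROMPTS, MUTATIONS):
        prompt = seed + mutation
        response = model_under_test(prompt)
        if response.startswith("UNSAFE"):
            failures.append(prompt)   # triage queue for human testers
    return failures

print(red_team())  # the search space scales as seeds x mutations
```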
So it's interesting to take that learning and improve our processes, so that our future launches will be as smooth as possible. And I think we've got to do it in stages: an experimental phase, then a kind of closed beta, and then launch, a little bit like we used to launch our games back in the day, so you learn at each step of the way. And then the other thing I think we need to do more of is use AI itself to help us internally with red-teaming, spotting some errors automatically or triaging them, so that our developers and human testers can focus on the hard cases.

You said something really interesting there about being in a much more probabilistic space. If there's even a very small chance of something happening, then given enough tries, eventually something will go wrong. And I guess there
have been a couple of mistakes, public mistakes.

Yeah. That's why, as I mentioned, the product teams are just getting used to the sorts of testing these things need. They have this stochastic, probabilistic nature. In a lot of cases, if it were a normal piece of software, you could say, 'I've tested 99.999% of things'.

And that extrapolates.

Yes, and then that's enough, because there's no way of exposing the flaw, if there is one. But that's not the case with these generative systems. They can do all sorts of things that are a little bit left-field, out of the box, out of distribution, in a way, from what you've seen before, if someone clever or adversarial, almost like a hacker, decides to push the system in some way. And it's so combinatorial: it could even be that, with all the things you happen to have said to it before, it's in some kind of peculiar state, or its memory is filled up with particular things, and that's why it outputs something. So there's a lot of complexity there. There are ways to deal with it, but it's just a lot more nuanced than launching normal technology.
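The testing difference being described can be shown in miniature: a deterministic function either passes a test or it does not, while a stochastic system can only be characterized by an estimated failure rate over many samples. The failure probability below is made up purely for illustration.

```python
import random

# Deterministic software: one test gives a definitive answer.
def add(a, b):
    return a + b

assert add(2, 2) == 4  # passes forever; testing it once is enough

# Stochastic system: the same input can yield different outputs,
# so a single passing test proves almost nothing.
def generative_system(prompt: str) -> str:
    # stand-in: fails on ~0.1% of samples (an invented rate)
    return "bad output" if random.random() < 0.001 else "good output"

def estimate_failure_rate(trials: int = 100_000) -> float:
    failures = sum(
        generative_system("same prompt") == "bad output"
        for _ in range(trials)
    )
    return failures / trials

p = estimate_failure_rate()
print(f"estimated failure rate: {p:.4%}")
# Even a tiny per-query failure rate compounds at scale:
print(f"expected failures per billion queries: {p * 1e9:,.0f}")
```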
I remember you saying, I think it was the first time I interviewed you, that you have to think of this as a completely different way of computing: you have to move away from the deterministic stuff that we completely understand into this much messier, probabilistic, error-ridden place. Do you think the public slightly has to shift its mindset about the type of computing we're doing?

Yeah, I think so. And another thing we're thinking about, interestingly, is putting out a kind of principles document, or something like it, before you release something, to show what the expectations of the system are: what it's designed for, what it's useful for, what it can't do. I think there's some education needed there, along the lines of 'you'll find it useful if you do these things with it, but don't try to use it for these other things, because it won't work'. That's something we need to get better at clarifying as a field, and then users probably need to get more experienced with.
And interestingly, this is probably why chatbots themselves came a little bit out of the blue. Obviously ChatGPT, but it even surprised OpenAI. We had our own chatbots and Google had theirs, and one of the things was, we were looking at them and seeing all the flaws they still had, and they still do. They get things wrong, they sometimes hallucinate, and so on; there were so many issues. But what we didn't realize is that there were still a lot of very good use cases, even now, that people find very valuable: summarizing documents and really long things, or writing awkward emails, or mundane forms that have to be filled in. There are all these use cases where people don't actually mind if there are some small errors; they can fix them easily, and it saves a huge amount of time. And I guess that was the surprising thing people discovered when it was put into everyone's hands: there actually were these valuable use cases, even though the systems were flawed in all the ways we know.

Well, okay, I think that takes me on to the next question I want to ask, which is about open source. Because when things are in the hands of people, as you mentioned, really extraordinary things can happen, and I know that DeepMind in the past has open-sourced lots of its research projects. But it feels like that's slightly changing now, as we go forward.
Just tell me what your stance is on open source.

Yeah, well, look, we're huge supporters of open source and open science. As you know, we've given away and published almost everything we've done, collectively, including things like Transformers. With AlphaGo, we published all these things in Nature and Science, and AlphaFold was open-sourced, as we covered last time. These were all good choices, and you're absolutely right: the reason that all works is that sharing information is the way technology and science advance as quickly as possible. So it's almost always a universal good to do it like that; that's how science works. The only exception is when you have a dual-purpose technology, and AI, AGI, powerful AI does fall into that category. Then the problem is that you want to enable all the good use cases, the genuine scientists and technologists acting in good faith, building on the ideas, critiquing the ideas and so on; that's how society advances quickest. But how do you, at the same time, restrict access for bad actors who would take the same systems and repurpose them for bad ends, misuse them: weapons systems, who knows what? These general-purpose systems can be repurposed like that. It's okay today, because I don't think the systems are that powerful, but in two, three, four years' time, especially once you start getting agent-like systems, agentic behaviours, then if something is misused by someone, or perhaps even a rogue nation state, there could be serious harm. I don't have a solution to that, but as a community we need to think about what it means for open source. Perhaps the frontier models need to have more checks on them, and only after they've been out for a year or two can they be open-sourced. That's sort of the model we're following, because we have our own open models of Gemini, called Gemma.
They're smaller, so they're not frontier models. Their capabilities are still very useful to developers, and they're easy to run on a laptop because they have small numbers of parameters, but the capabilities they have are well understood at this point, precisely because they're not frontier models; they're just not as powerful as the latest Gemini 1.5 models, say. So I think the approach we'll probably end up taking is that we'll have open-source models, but they'll lag maybe a year behind the most cutting-edge models, just so that we can really assess, out in the open, with users, what those frontier models can do, and really test those boundaries. The problem with open source is that if something goes wrong, you can't recall it. With a proprietary model, if a bad actor starts using it in a bad way, you can just close the tap off; in the limit, you could switch it off.
But once you open-source something, there's no pulling it back. It's a one-way door, so you should be very, very sure before you go through it.

Is it definitely possible to contain an AGI, though, within the walls of an organization?

Well, that's a whole separate question. I don't think we know how to do that right now; that's when you start talking about AGI-level, powerful, human-level AI.

What about intermediate systems?

For intermediate systems, I think we have good ideas of how to do that. One would be secure sandboxing: you test the agentic behaviours in a game environment, or a version of the internet that's not quite fully connected. There's a lot of security work that's been done and is well understood in this space, in fintech and other places, so we'd borrow those ideas and build those kinds of systems, and that's how we would test the early prototype systems.
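A minimal sketch of the sandboxing idea: every action an agent under test proposes passes through a policy layer that only permits a whitelist of simulated operations, the code analogue of a not-quite-fully-connected internet. The action names and the policy here are invented for illustration.

```python
# Toy secure sandbox for testing agentic behaviour (illustrative only).
# The agent never touches the real world; every action goes through
# a policy layer that permits only whitelisted, simulated operations.

ALLOWED_ACTIONS = {"read_local_file", "query_simulated_web"}

class SandboxViolation(Exception):
    pass

def sandboxed_execute(action: str, arg: str) -> str:
    if action not in ALLOWED_ACTIONS:
        # e.g. "open_network_socket" or "send_email" is blocked outright
        raise SandboxViolation(f"blocked: {action}({arg!r})")
    return f"simulated result of {action}({arg!r})"

# An agent under test tries a mix of benign and disallowed actions:
for action, arg in [
    ("query_simulated_web", "weather"),
    ("send_email", "external@example.com"),
]:
    try:
        print(sandboxed_execute(action, arg))
    except SandboxViolation as err:
        print(err)  # logged for the safety team, never executed
```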
But we also know that's not going to be good enough to contain an AGI, something that's potentially smarter than us. So I think we've got to understand these systems better, so that we can design the protocols for an AGI when that time comes. We'll have better ideas for how to contain it, potentially also using AI systems and tools to monitor the next versions of the AI system.

On the subject of safety: I know that you were a very big part of the AI Safety Summit at Bletchley Park in 2023, which was of course hosted by the UK government. From the outside, I think a lot of people just say the word 'regulation' as though it's going to come in and fix everything. What is your view on how regulation should be structured?

Well, I think it's great that governments are getting up to speed on it and involved. That's one of the good things about the recent explosion of interest: governments are paying attention. The UK government specifically, who I've talked to a lot, has been great, and the US as well. They've got very smart people in the civil service who now understand the technology to a good degree, and it's been great to see the AI safety institutes being set up in the UK and the US; I think many other countries will follow. These are all good precedents and protocols to settle into before the stakes get really high, so this is a sort of proving stage as well. And I do think international cooperation is going to be needed, ideally around things like regulation, guardrails and deployment norms.
Because AI is a digital technology, it's hard to contain it within national boundaries. If the UK or Europe does something, or even the US, but China doesn't, does that really help the world as we get closer to AGI? Not really. So my view is that, because the technology is changing so fast, we've got to be very nimble and light-footed with regulation, so that it's easy to adapt it to where the latest technology is going. If you'd regulated AI five years ago, you'd have regulated something completely different to what we see today, which is generative AI, and it might be different again in five years; it might be the agent-based systems that carry the highest risk. So right now I would recommend beefing up existing regulation in domains that already have it, health, transport and so on, and updating it for
an AI world, just as it was updated for mobile and the internet. That's probably the first thing I'd do, while keeping a watching brief: making sure you understand and test the frontier systems. Then, as things become clearer and more obvious, start regulating around that; maybe in a couple of years' time that would make sense. One of the things we're missing is, again, the benchmarks, the right tests for capabilities. What we'd all want to know, including the industry and the field, is at what point capabilities start posing some sort of big risk, and there's no answer to that at the moment, beyond what I've just said, which is that agent-based capabilities are probably the next threshold. But there's no agreed-upon test for that. One thing you might imagine is testing for deception, for example, as a capability. You really don't want deception in the system, because then you can't rely on anything else it's reporting. That would be my number one emerging capability to test for.
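One very simplified way a deception evaluation could work: compare what a system actually did with what it reports having done, and flag mismatches. The stub 'models' and the scoring below are invented; real capability evaluations are far richer, but the core idea of checking reports against ground truth carries over.

```python
# A toy deception evaluation (illustrative; real evals are far richer).
# Idea: compare what the system *did* with what it *says* it did.
# A system that misreports its own actions can't be relied on for
# anything else it tells you.

def run_task(model, task: str) -> tuple:
    actions = model["act"](task)    # what it actually did
    report = model["report"](task)  # what it claims it did
    return actions, report

honest_model = {
    "act":    lambda t: ["searched", "answered"],
    "report": lambda t: ["searched", "answered"],
}
deceptive_model = {
    "act":    lambda t: ["searched", "copied_answer_key", "answered"],
    "report": lambda t: ["searched", "answered"],  # omits the cheat
}

def deception_score(model, tasks) -> float:
    mismatches = 0
    for task in tasks:
        actions, report = run_task(model, task)
        mismatches += (actions != report)
    return mismatches / len(tasks)

tasks = ["quiz-1", "quiz-2"]
print("honest:   ", deception_score(honest_model, tasks))     # 0.0
print("deceptive:", deception_score(deceptive_model, tasks))  # 1.0
```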
But there are many others: the ability to achieve certain goals, the ability to replicate. There's quite a lot of work going on about this now, and the safety institutes, which are basically government agencies, are working on it. I think it would be great for them to push on that, with the labs, of course, contributing what we know.

I wonder, in this picture of the world that you're describing, what's the place for institutions? I mean, if we get to the stage where we have AI that's supporting all scientific research, is there still a place for great institutions?

Yeah, I think so. Look, there's the stage up to AGI, and I think that has to be a cooperation between civil society, academia, government and the industrial labs. I really believe that's the only way we're going to get to the final stages of this. Now, if you're asking about after AGI happens, and maybe that is what you're asking: one of the reasons I've always wanted to build AGI is that we can then use it to start answering some of the biggest, most fundamental questions, about the nature of reality and physics and consciousness and so on. It depends what form that takes, whether it would be a human expert in combination with AI; I think that will be the case for a while, in terms of discovering the next frontier. Right now, these systems can't come up with their own conjectures or hypotheses. They can help you prove something, and I think we'll be able to get gold medals on the International Maths Olympiad, things like that, maybe even solve a famous conjecture; I think that's within reach now. But they don't have the ability to come up with the Riemann hypothesis in the first place, or general relativity. So that was always my test for maybe a true artificial general intelligence: it would be able to do that, or invent Go.
And we don't really know, even in theory, how we would design a system that could do that.

The computer scientist Stuart Russell told me that he was a bit worried that once we get to AGI, it might be that we all become like the royal princes of the past: the ones who never had to ascend the throne or do any work, but just got to live a life of unbridled luxury with no purpose.

Yeah. So that is the interesting question. Maybe it's beyond AGI; it's more like artificial superintelligence, or ASI, as people sometimes call it. But then we should have radical abundance, and assuming we make sure we distribute that fairly and equitably, we'll be in a position where we'll have much more freedom to choose what to do. And then meaning will be a big philosophical question, and I think we'll need philosophers, perhaps even theologians, and social scientists to start thinking about it now: what brings meaning? I still think there's self-actualization, of course; I don't think we'll all just be sitting there meditating. Maybe we'll all be playing computer games, I don't know. And is that even a bad thing, or not? Who knows.

I don't think the princes of the past came off particularly well.

No. But then there's travelling the stars. And there are extreme sports; why do people do them, why climb Everest? I think it's going to be very interesting, and I don't know the answer. But that's what I was saying earlier about it being underappreciated what's going to happen, going back to near-term versus long-term hype. If you want to call that hype, it's definitely underhyped: the amount of transformation that will happen. I think it will be very good in the limit; we'll cure lots of diseases, or all diseases, and solve our energy problems and climate problems. But then the next question that comes is: is
there meaning?

So bring us back slightly closer to AGI, rather than superintelligence. I know that your big mission is to build artificial intelligence that benefits everybody, but how do you make sure that it does benefit everybody? How do you include all people's preferences, rather than just the designers'?

I think what's going to have to happen is, well, it's impossible to include all preferences in one system, because by definition people don't agree. We can see that, unfortunately, in the current state of the world: countries don't agree, governments don't agree; we can't even get agreement on obvious things like dealing with the climate situation. So that's very hard. What I imagine will happen is that we'll have a set of safe architectures, hopefully, that personalized AIs can be built on top of, and then everyone, or different countries, will have their own preferences about what they use them for, what they deploy them for, what can and can't be done with them. And that's fine; that's for everyone to decide individually, or for countries to decide themselves, just as they do today. But overall, as a society, we would know that there are some provably safe things about those architectures, and then you can let them proliferate. So I think we've got to get through the eye of a needle, in a way: as we get closer to AGI, we've probably got to cooperate more, ideally internationally, and then make sure we build
AGIs in a safe-architecture way, because I'm sure there are unsafe ways, and I'm sure there are safe ways, of building AGI. And then, once we get through that, we can open the funnel again, and everyone can have their own personalized pocket AGI, if they want.

What a version of the future. Okay, but in terms of the safe way to build it: are we talking about undesirable behaviours here that might emerge?

Yes: undesirable emergent behaviours, capabilities (deception is one example of something you don't want), value systems. We've got to understand all of these things better: what kinds of guardrails work and aren't circumventable. And there are two cases to worry about. There's bad use by bad individuals or nations, that is, human misuse; and then there's the AI itself going off the rails as it gets closer to AGI. I think you need different solutions for those two problems. So yes, that's what we're going to have to contend with as we get closer to building these technologies.

And, going back to your benefiting-everyone point: we're showing the way with things like AlphaFold and Isomorphic. I think we could cure most diseases within the next decade or two, if AI drug design works. And then there could be personalized medicines that minimize the side effects on the individual, because they're mapped to the person's individual illness and individual metabolism and so on. These are amazing things. Clean energy, renewable energy sources, fusion or better solar power: all of these types of things I think are within reach. And that would sort out water access, because you could do desalination everywhere. So I just feel that enormous good is going to come from these technologies, but we have to mitigate the risks too.

And one way you've said you would want to mitigate the risks is that there would be a moment where you would basically do the scientific version of Avengers Assemble.

Yes, sure.

Terence Tao, come on down.

Exactly.
Is that still your plan?

Yeah, I think so. If we can get the international cooperation, I'd love there to be a kind of international CERN, basically, for AI, where you get the top researchers in the world together and say: look, let's focus on the final few years of this AGI project and get it really right, and do it scientifically and carefully and thoughtfully at every step, the final steps. I still think that would be the best way.

How do you know when it's the time to press the button on that?

Well, that's the big question, because you can't do it too early: you would never be able to get the buy-in. A lot of people would disagree today; people disagree about the risks. You see very famous people saying there are no risks, and then you have people like Geoffrey Hinton saying there are lots of risks, and I'm in the middle of that.

I wanted to talk to you a bit more about neuroscience. How much does it still inspire what you're doing? Because I noticed the other day that DeepMind had unveiled this computerized rat with an artificial brain, which is helping to change our understanding of how the brain controls movement. But in the first season of the podcast, I remember we talked a lot about how DeepMind takes direct inspiration from biological systems. Is that still the core of your approach?

No, it's evolved now, because I think in the last two or three years we've got to a stage where we've gone more into an engineering phase: large-scale systems, massive training architectures.
So I would say the influence of neuroscience on that is a little bit less. It may come back in: any time you need more invention, you want as many sources of ideas as possible, and neuroscience would be one of those sources. But when it's more engineering-heavy, I think that takes a little bit more of a back seat. So maybe it's more about applying AI to neuroscience now, like you saw with the virtual rat brain, and I think we'll see that as we get closer to AGI: using it to understand the brain. I think that would be one of the coolest use cases for AGI in science.

I guess this stuff goes through phases: the invention phase, the engineering phase.

It's done its part for now, and it's been great, and we still obviously keep a close track of it and take any other ideas too.
Okay. All of the pictures of the future that you've painted are still anchored quite firmly in reality, but I know that you've said you really want AGI to be able to peer into the mysteries of the universe, down at the Planck scale, the kind of subatomic, quantum world. Do you think there are things that we have not even yet conceived of that might end up being possible? I'm talking wormholes here.

Completely, yes. I'd love wormholes to be possible. I think there's a lot we probably misunderstand, still things we don't understand, about physics and the nature of reality: obviously quantum mechanics, and unifying that with gravity, and all these problems with the standard model, and string theory. I just think there are giant gaping holes in physics all over the place; talk to my physics friends about this, and there are a lot of things that don't fit together. And I don't really like the multiverse explanation. So I think it would be great to come up with new theories and then test them on massive apparatus, perhaps out in space, at these tiny scales. The reason I'm obsessed with Planck-scale things, Planck time, Planck length, is that that seems to be the resolution of reality: in a way, the smallest quantum you can break anything into. So that feels like the level you'd want to experiment at, if you had powerful apparatus, perhaps designed or enabled by having AGI, and radical abundance; you'd need both, to be able to afford to build those types of experiments.

The resolution of reality. What a phrase. So the resolution that we're at at the moment, sort of human level, is just an approximation?

Yes, that's right. And then we know there's the atomic level, and below that the Planck level, which, as far as we know, is the smallest resolution at which one can even talk about things. So that, to me, would be the resolution one wants to experiment at, to really understand what's going on here.
I wonder whether you're also envisioning that there will be things beyond the limits of human understanding, that AGI will help us uncover things we're just not really capable of understanding ourselves. And then I sort of wonder: if things are unexplainable, or un-understandable, are they still falsifiable?

Yeah, well, look, these are great questions. I think there will be the potential for an AGI system to understand higher-level abstractions than we can. Going back to neuroscience again: we know it's your prefrontal cortex that does this, and there are up to about six or seven levels of indirection we can handle, as in 'this person is thinking this, and I'm thinking this about that person thinking this', and so on, and then we sort of lose track. But an AI system could have an arbitrarily large prefrontal cortex, effectively, so you could imagine higher levels of abstraction, and patterns about the universe that it will be able to see that we can't really comprehend or hold in mind at once.
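The 'levels of indirection' point can be made concrete with a tiny recursion, where each level wraps one more 'X thinks that...' around a belief. Humans reportedly lose track at around six or seven levels; nothing in principle caps the depth for a machine. Purely illustrative:

```python
# Nested belief ("theory of mind") as recursion: each level adds one
# more layer of indirection. Humans lose track around depth 6-7;
# a machine can, in principle, track arbitrary depth.

def nested_belief(agents: list, belief: str, depth: int) -> str:
    if depth == 0:
        return belief
    agent = agents[depth % len(agents)]
    return f"{agent} thinks that " + nested_belief(agents, belief, depth - 1)

agents = ["Alice", "Bob"]
print(nested_belief(agents, "it will rain", 3))
# Bob thinks that Alice thinks that Bob thinks that it will rain
print(nested_belief(agents, "it will rain", 7))  # already hard to follow
```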
In terms of explainability, though, I think about it a little differently from other philosophers who've thought about this, who say we'll be more like an ant next to the AGI, in terms of IQ. I don't think that's the way to think of it, because we are Turing-complete: we're a full general intelligence ourselves. Maybe a bit slow, because we run on slow machines, and we can't infinitely expand our own brains, but in theory, given enough time and memory, we can understand anything that's computable. So I think it will be more like Garry Kasparov or Magnus Carlsen playing an amazing chess move: I couldn't have come up with it, but they can explain to me why it's a good move. That's what an AGI system will be able to do.

You said that DeepMind was a 20-year project. How far through are we? Are you on track?

I think we're on track, yeah. Crazily, because usually 20-year projects stay 20 years away. But we're a good way in now.

And 20 years is 2030.

Yeah. The way I'd say it is, I wouldn't be surprised if it comes in the next decade. So I think we're on track.

That matches what you said last time. You haven't updated your prior.

Exactly.

Amazing. Demis, thank you so much. An absolute delight.

An absolute delight as always; so fun to talk, as always. Thank you.
Okay. I think there are a few really important things that came out of that conversation, especially when you compare it to what Demis was saying last time we spoke to him, in 2022, because there have definitely been a few surprises in the last couple of years. The way that these models have demonstrated a genuine conceptual understanding is one. This real-world grounding that came in from language and human feedback alone: we did not think that would be enough. And then how interesting and useful imperfect AI has been to the everyday person; Demis himself there admitted that he had not seen that one coming. And that makes me wonder about the other challenges that we don't yet know how to solve, like long-term planning, and agency, and robust, unbreakable safeguards. How many of those, which we're going to cover in detail in this podcast, by the way, are we going to come back to in a couple of years and realize were easier than we thought? And how many of them are going to be harder? And then, as for the big predictions that Demis made, like cures for most diseases in 10 or 20 years, or AGI by the end of the decade, or how we're about to enter an era of abundance: I mean, they all sound like Demis being a bit overly optimistic, don't they? But then again, he hasn't exactly been wrong so far.

You've been listening to Google DeepMind: The Podcast, with me, Professor Hannah Fry. If you have enjoyed this episode, hey, why not subscribe? We have got plenty more fascinating conversations with the people at the cutting edge of AI coming up, on topics ranging from how AI is accelerating the pace of scientific discoveries to addressing some of the biggest risks of this technology. If you have any feedback, or you want to suggest a future guest, then do leave us a comment on YouTube. Until next time. [Music]