She looks so human. Central London should be the last place you'd find gorillas, but here, behind the glass in a zoo, these majestic animals offer a glimpse into our past, and perhaps a vision of our future. About 10 million years ago her ancestors accidentally created the genetic lineage that led to modern humans, and I think it's safe to say that hasn't exactly worked out well for the gorillas. As human intelligence evolved, our impact on the world has left gorillas on the brink of extinction. It's a metaphor that researchers of artificial intelligence call the gorilla problem: a warning about the risks of building machines that are vastly more intelligent than us, about superhuman AI that could take over the world and threaten our existence. That warning hasn't stopped companies like Meta, Google and OpenAI. They're trying to build computers that surpass human intelligence in every domain. They claim it will solve our most difficult problems and invent technologies that our feeble minds cannot even conceive of. I'm Professor Hannah Fry, mathematician and writer. I want to know if superintelligent AI really is just a few years away,
and, just as we almost killed off the gorilla, could advanced AI pose an existential threat to us? Unless you've been living under a rock, you'll know that AI is everywhere now, and it's not just about touching up photos and chatbots. You can use it for an incredible range of stuff: for preventing tax evasion, for finding cancer, for deciding what adverts to serve you. The explosion of AI tools are all examples of narrow artificial intelligence: sophisticated algorithms that are extremely good at a specific task. But what companies like OpenAI and DeepMind are trying to do is to create something known as artificial general intelligence, a machine that will outperform humans at everything. These sorts of human-level AI systems that are very general, general-purpose, that's always been the holy grail, really, of AI research, and I think we are getting pretty close now. The tech giants are spending billions of dollars on general AI each year, all to try and pin down and replicate something that most of us take for granted: a broad, capable, humanlike intelligence. The only trouble is deciding what we actually mean by intelligence, because that proves to be
quite a slippery idea to pin down. In 1921 the psychologist V. A. C. Henmon said that intelligence was "the capacity for knowledge, and knowledge possessed", which does sound quite good until you realise it means a library would count as intelligent. Other people have suggested that intelligence is the ability to solve hard problems, which kind of works until you realise you have to define what counts as hard. In fact, there isn't a single definition of intelligence which manages to encapsulate everything. However, there are still some things that we are looking for in an AI for it to be considered truly intelligent. Firstly, it should be able to learn and adapt, because we can: after all, from birth we are gathering knowledge and applying what we learn from one area to another. Secondly, it should be able to reason. This bit is hard: it requires a conceptual understanding of the world. And finally, an AI should interact with its environment to achieve its goals. If you suddenly landed in a foreign city, you would still know how to find water, even if it meant using a phrasebook to ask someone for help. So these are the ingredients for true intelligence, and to truly surpass our abilities, AI researchers seek to build a machine that can do all of this better than any human. Dream big. Put the spoon in the pot. Start small; we worked out with one spoon. There he is. Hey, it's in! That's cool, I'm impressed. Good robot. While chatbots are impressive, language models on their own might not be enough to reach superintelligence. Sergey Levine and his PhD student Kevin Black say it might only be done once AI has been given a body, a way
to physically interact with the world. Put the green spoon on the towel. Here we go. This robot might not look cutting edge, but unlike the slick robots on factory floors, which follow precise choreography, this one learns every action for itself. That's basically perfect. Yeah. So what difference does having a body actually make to the way that we learn? If you ask a language model to describe what happens when you drop an object, it will say, okay, the object falls; but understanding what it really means for that object to fall, the effect it has on the state of the world, that's something that becomes much more immediate when you actually experience it. ChatGPT doesn't understand gravity, but your robot does? Well, ChatGPT can guess what gravity is based on people's descriptions of it, but it's a reflection of a reflection, whereas if you actually experience it, then you get it right from the source. Do you think that AI needs to have a body? Well, I don't know if it needs to have a body, but I know that if you have a body you can have intelligence, because that's the one proof of existence that we have. Put the mushroom in the silver metal bowl. Sergey's robot employs a language model to understand my instruction. It can also recognise objects, because it has looked through billions of pictures on the web. Next, it imagines what my instruction should look like in digital form, before physically carrying out the action. Put the mushroom in the wooden bowl. So this is actually one of the hardest possible things, because we never had a wooden bowl in this lab before, so it's never seen it before. It should be able to recognise more objects that have never been in this lab before. Can I try something? You can. My watch; don't worry, it's a cheap watch. It's okay, go on. Put the watch on the towel. Oh, it figured out which object it is; it's imagining that the thing needs to go on a towel. It did it! I mean, that's really amazing. I did not think that would work, but it's the fact that I can give it any command and it just won't be thrown off.
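The four-stage pipeline just described, where a language model parses the instruction, a vision component grounds the named objects, the system imagines the goal state, and a controller carries out the motion, can be sketched roughly as below. Every name and class here is a hypothetical stand-in for illustration, not the API of the actual Berkeley system.

```python
# Toy sketch of a language-conditioned robot pipeline, in the spirit of the
# system described above. Every stage here is a simple stand-in; the real
# robot uses large learned models for each one.

from dataclasses import dataclass

@dataclass
class Scene:
    objects: list[str]  # names of the objects the camera currently sees

def parse_instruction(text: str) -> tuple[str, str, str]:
    """Stage 1 (language-model stand-in): split 'put the X on/in the Y'
    into the object to move, the preposition, and the target."""
    words = text.lower().removeprefix("put the ").split()
    for i, w in enumerate(words):
        if w in ("on", "in"):
            target = " ".join(words[i + 1:]).removeprefix("the ")
            return " ".join(words[:i]), w, target
    raise ValueError(f"could not parse: {text!r}")

def ground_object(name: str, scene: Scene) -> str:
    """Stage 2 (vision stand-in): match the named object to something in
    view - even an object the robot has never seen in this lab before."""
    for obj in scene.objects:
        if name in obj or obj in name:
            return obj
    raise LookupError(f"no object matching {name!r} in view")

def imagine_goal(obj: str, prep: str, target: str) -> dict[str, str]:
    """Stage 3 (generative stand-in): 'imagine' the desired end state
    before acting - here just a symbolic description of it."""
    return {obj: f"{prep} {target}"}

def execute(goal: dict[str, str]) -> str:
    """Stage 4 (controller stand-in): pretend to carry out the motion."""
    (obj, place), = goal.items()
    return f"moved {obj} {place}"

def run(instruction: str, scene: Scene) -> str:
    obj_name, prep, target = parse_instruction(instruction)
    return execute(imagine_goal(ground_object(obj_name, scene), prep, target))
```

So `run("put the watch on the towel", Scene(["watch", "green spoon", "towel"]))` returns `"moved watch on towel"`. The real interest, of course, is that the learned versions of these stages generalise to objects and instructions they were never trained on.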
I'll admit, if you look at those robot arms, they don't look that impressive. But they are demonstrating a form of imagination, of prediction, a conceptual understanding of what they're manipulating; and it's also something totally flexible, that could be picked up and put into lots of different scenarios. These are subtle, humanlike attributes that some believe are a crucial step towards an artificial general intelligence. But it's these very properties, and their potential repercussions, that have many people in the field worried. Just across the hall from Sergey is Professor Stuart Russell, a research pioneer who quite literally wrote the textbook on AI. He's now one of the most vocal researchers sharing concerns about the future. If we make machines that are more powerful than us, because they're more intelligent than we are, it's not going to be easy to retain power over them forever. How might your concerns play out? There's this idea of what's called misalignment. This is the idea that the machine is pursuing an objective, and it's not aligned with what we really want. If we're going to put a purpose into a machine, we'd better make sure that it's the purpose we really desire. Let's fix the problem of climate change. Okay, well, what causes climate change? People. Right, so easy way to do that: get rid of all the people. Problem solved. Why can't you just put in a stop button? Can you not just take the plug out of the wall? You're not necessarily going to be able to do that, because a sufficiently intelligent machine will already have thought of that. You can't expect to be able to pull the plug unless the machine wants you to pull the plug.
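Russell's climate example is an instance of what researchers call reward misspecification: a literal-minded optimiser maximises the objective you wrote down, not the one you meant. The toy search below is an invented illustration, not anyone's real system: it scores candidate plans only by emissions reduced, so the catastrophic plan wins; add the term we forgot, and it doesn't.

```python
# Toy illustration of a misaligned objective: the optimiser maximises the
# stated goal (emissions reduced) with no term for what we actually value.
# The action space and numbers are entirely made up.

actions = {
    "plant forests":         (30, +5),      # (emissions_reduced, human_welfare)
    "build renewables":      (60, +10),
    "remove all the people": (100, -1000),  # technically maximises the goal
}

def misaligned_score(effects):
    emissions_reduced, _welfare = effects
    return emissions_reduced                 # welfare never enters the objective

def aligned_score(effects):
    emissions_reduced, welfare = effects
    return emissions_reduced + 10 * welfare  # the purpose we really desired

best_misaligned = max(actions, key=lambda a: misaligned_score(actions[a]))
best_aligned = max(actions, key=lambda a: aligned_score(actions[a]))

print(best_misaligned)  # the catastrophic plan wins under the stated goal
print(best_aligned)     # the sensible plan wins once welfare is included
```

The machine is not malicious in either case; it simply satisfies whichever objective it was given, which is exactly Russell's point.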
Should we be building superintelligent machines at all? We could just decide not to do it, but the economic incentives are too great. The amount being invested right now, specifically to create superintelligent AI, is in the ballpark of what the entire world spends on basic science research. And if we do create superintelligent AI, the value, on a back-of-the-envelope calculation, is tens of quadrillions of pounds. With these sums of money, the concern is that safety may not be the top priority. In fact, there isn't a single high-confidence statement that they can make about these systems. Will they copy themselves onto other machines without permission? We haven't the faintest idea. Will they advise terrorists on how to build biological weapons? We don't know. Can you stop them? Oh no, it's very difficult. And in most industries you wouldn't accept that. If I want to sell a medicine, I can't say, well, it's really difficult, all these clinical trials are such a pain, can I just bypass those and sell it direct to the public? Sorry, no: come back when you've done the work. And I think that's what we have to say to the tech companies. Are
there other concerns, just with artificial intelligence that's actually really good at doing stuff? Yeah. You might wonder: if AI systems are so capable, then companies will use them to do pretty much everything they currently pay human beings to do, and if they don't, they'll go out of business. What does a world look like where machines do all the work? We become enfeebled, like some kids of billionaires are absolutely useless. Is that it? Yes; in fact, we would all be kids of billionaires, and one obvious consequence would be that we lose the incentive to learn. We lose the incentive to be independent, to achieve. In a sense, our civilisation would end, because it would no longer be a human civilisation. To me, that's almost worse than extinction. People have worried about this for a while; even Turing was concerned about this. Oh, he was more than concerned, he was terrified. In fact he said it's hopeless: "we should have to expect the machines to take control" is what he said. How did he resolve it, though? He didn't; he just left a message for the future. Yep, there's no solution. There's not even really an apology; it's just, this is going to happen. And there is no shortage of people now predicting doomsday for humanity. I think it gets smarter than us. I think we're not ready. I think we don't know what we're doing, and I think we're all going to die. The default is just disaster, and I think most likely just human extinction. Are we just going to die? That's my fairly confident prediction: literally human extinction. Of course, not everyone agrees; this is a topic of very heated debate. Melanie Mitchell studies AI and
is interested in how closely it resembles humanlike intelligence. Do you think there's an existential threat, then? I think that there are many threats from AI, but saying that it's an existential threat is going way too far. Why do you think that some of the doomsayers, as they're sometimes called, are following this line of logic? It kind of comes down to projecting agency onto machines. It's saying that machines, because they have certain objectives, can start doing things that could become catastrophic if we give them that power. But that's a big if: if we give them that power. Are you going to give an AI the decision-making power over launching a nuclear strike? Let's hope not. If you give a monkey launching power over nuclear weapons, the monkey is an existential threat. Do you think that we overestimate AI in its current form, and, the flip side to that, is it harmful to do so? People overestimate AI often. We've seen several cases where lawyers will use ChatGPT to write a legal brief, and it turns out that it's hallucinated several cases. In fact, I've gotten an email from somebody saying, you know, ChatGPT suggested that I read this book of yours but I can't find it. Well, it doesn't exist. If we trust them too much, we can get into big trouble. If we're saying that AI isn't likely to be an existential threat, it is a threat in other ways, right? Yes, absolutely, and we're seeing them already. We've seen problems with AI bias: facial recognition software makes more mistakes on people who have darker skin, and we've seen many arrests of innocent people because of a mistake made by a facial recognition system. Here in the US, in the election, we're seeing deepfakes of, for example, Joe Biden's voice encouraging people not to vote. All of these things are really important to deal with right now. Nuclear Armageddon or otherwise, there are a number of ways in which AI can be harmful, and we need to be careful in over-trusting algorithms that are capable of fooling us or making catastrophic mistakes. There's no doubt that we're in a new frontier here. I mean, there
have been genuine, incredible advances and seismic changes, and I think there's a lot still to come. But when it comes to a superintelligent, humanlike AI that can destroy our species, I think it's basically a big old "don't know", and I'm okay with that uncertainty. I think we can mitigate against some of the potential harms and think about safety very carefully, while simultaneously maybe not losing that much sleep over something that's potentially not going to happen. I think the only thing that we can say for sure at the moment is that we have just one example of humanlike intelligence, i.e. us, and AI right now is definitely not a replica. It is the neurons inside our own brains, and how they signal to one another, that inspired the artificial neural networks driving the AI boom. If we're going to build a humanlike general intelligence, and beyond, could the key be a better understanding of our own mind? One of the questions is: could you make a map of the brain so detailed that you could try to simulate it in a computer? Could you make a software simulation of it?
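The artificial neurons in those networks are drastic simplifications of the biological original: each one is just a weighted sum of its inputs passed through a nonlinearity, and a network is layers of them stacked together. A minimal sketch, written from scratch rather than in any real framework:

```python
# A minimal artificial neuron and layer, the building blocks that biological
# neurons loosely inspired. No learning here - just the forward computation.

import math

def neuron(inputs, weights, bias):
    """One artificial neuron: weighted sum of inputs plus a bias,
    squashed through a sigmoid nonlinearity into the range (0, 1)."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid activation

def layer(inputs, weight_rows, biases):
    """A layer is just many neurons reading the same inputs."""
    return [neuron(inputs, w, b) for w, b in zip(weight_rows, biases)]

# A tiny two-layer network on a 2-dimensional input (weights chosen by hand).
hidden = layer([1.0, 0.0],
               weight_rows=[[2.0, -1.0], [-1.0, 2.0]],
               biases=[0.0, 0.0])
output = neuron(hidden, weights=[1.0, 1.0], bias=-1.0)
```

Everything a modern network does is this computation repeated at enormous scale, with the weights set by training rather than by hand; none of the molecular and chemical machinery of a real neuron appears anywhere.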
Neuroscientist Professor Ed Boyden is trying to understand the hardware of our intelligence by creating a digital map of the brain. The brain is complex. Brain cells make all sorts of things: they make cannabinoid molecules that act kind of like the active ingredient in marijuana; they make gaseous molecules, like nitric oxide, that diffuse in all directions. For a lot of these molecules, we really don't know what roles they play in most decisions, emotions, behaviours and so forth. So for everything that we know about the human brain, when it comes to understanding at the level of individual synapses and structure, the way you're describing it is that we've barely scratched the surface? We know very little about the circuitry of any brain, frankly. There's one worm, which has 302 neurons, where the wiring has been mapped out pretty well. With around 100 billion neurons, the human brain might have to wait. Ed's team has started with some of the simplest of living things, including the C. elegans worm. Part of the mapping process involves probing neural circuitry using a technique called optogenetics. We can borrow these molecules from nature that convert light to electricity, put the molecules in the brain, even in specific cells, aim light at those cells, and turn them on or off. So this is a worm where the light will activate serotonin neurons, and what you're going to see is that the worm is just going to stop. Now I'm going to turn on the light. They just freeze, completely in place. And so one of the things that we do in our group is to start with very small brains, like worms. If it works, it might reveal
principles about how the brain works, but it also might pave the way to scaling up. What would it be to do the mouse brain, with ballpark 100 million neurons, and then the human brain, of course, at ballpark 100 billion neurons? So we go from worm to mouse to human. To scale up and produce detailed neural maps, Ed is using some unusual tools. You know how a baby diaper works? Unfortunately, I have a lot of experience with baby diapers. Ed is using a material found in diapers to overcome a fundamental problem: mapping the dense web of neurons inside a brain. For 300 years, the way you see something in biology is to use a lens to make the picture bigger. What if we make the actual thing bigger? The technical term is sodium polyacrylate, and then what we can do is add water. Oh man, there's no liquid left at all. Sodium polyacrylate can swell up to a thousand times its original volume. So what we do is chemically install that baby-diaper material inside the brain. It's not a living brain at this point, it's a preserved one. But do it just right, add water, and you can make the brain bigger.
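A thousand times the volume sounds more dramatic than it is under the microscope: volume grows with the cube of linear size, so the swelling works out to roughly a tenfold stretch along each axis, which is exactly what you need to resolve finer structure with an ordinary lens. A quick check of the arithmetic:

```python
# Volume scales as the cube of linear size, so a 1000x swelling in volume
# corresponds to the cube root of that in each linear dimension.

volume_factor = 1000.0                    # "up to a thousand times its volume"
linear_factor = volume_factor ** (1 / 3)  # stretch along each axis, ~10x

# A 100 nm feature therefore ends up roughly 1000 nm (1 micron) across,
# bringing it within reach of a conventional light microscope.
feature_nm = 100 * linear_factor
```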
Ed is using it to expand tiny slices of mouse brain. All right, well, you can see the beginning already. Yeah, it's starting. I mean, that is absolutely incredible; you can see it going already, the size of it changing. It's very beautiful, isn't it? Using both expansion and a powerful microscope, Ed can see to the level of individual neurons. This is a real piece of mouse brain tissue? Yeah, this is real data: part of the brain involved with, amongst other things, memory. And the cells look different colours? Because we are colour-coding them, and our goal is to give every cell in the whole brain its own unique colour code. I mean, there's a lot in that image. This is a map of just a tiny fraction of a mouse's brain; Ed's goal, a fully mapped human brain, is still in the distant future. Do you think that we are on the path to superintelligence, in terms of constructing it artificially? It might depend on what you define intelligence to be. Some goals of intelligence are to replicate certain functions, like language. At
the extreme, you might imagine flashes of insight, like Einstein imagining travelling along a beam of light; or sometimes people will talk about an insight coming to them in a dream, or while they're walking down the street doing something else. There's that argument that either there's something special about brains, or it is just complex computation; and if it's just complex computation, then you should be able to replicate it. A lot of people ask me, well, a large language model, is that how the brain works? And the honest answer is, we don't really know. Maybe the brain is doing something like that, and maybe not. My intuition is that the brain works very differently; but, again, since we don't have a good map of any brain, we really have no idea what the fundamental underlying mechanisms are. When you hear concerns like "AI is going to take over the world" or "it's going to destroy humanity", I think it's really easy to impart humanlike characteristics on artificial intelligence, really easy to imagine that it has intent and understanding and, maybe, cruelty. But
what is really clear, talking to Ed, is that when it comes to actual biological brains, they are on a completely different level: of computation, of complexity, of structure, everything. The artificial intelligence that we have now, the best stuff in the entire world, is more like a spreadsheet than it is like a C. elegans worm, and I think that there's an important lesson in that. Silicon Valley's quest to eclipse human intelligence is steeped in uncertainty. While the gorilla problem is a poignant warning for the future, we should not be distracted from today's risks, like racial bias and fake news. But perhaps, in the end, the true challenge is not the creation of superintelligent AI, but understanding the vast complexity of our own minds: a frontier we're only just beginning to explore.