When Demis Hassabis won the Nobel Prize last year, he celebrated by playing poker with a world champion of chess. Hassabis loves a game, which is how he became a pioneer of artificial intelligence. The 48-year-old British scientist is co-founder and CEO of Google's AI powerhouse, DeepMind. We met two years ago, when chatbots announced a new age. Now Hassabis and others are chasing what's called artificial general intelligence: a silicon intellect as versatile as a human, but with superhuman speed and knowledge. After his Nobel and a knighthood from King Charles, we hurried back to London to see what's next from a genius who may hold the cards of our future.

What's always guided me, and the passion I've always had, is understanding the world around us. I've always been, since I was a kid, fascinated by the biggest questions: the meaning of life, the nature of consciousness, the nature of reality itself. I've loved reading about all the great scientists who've worked on these problems, and the philosophers, and I wanted to see if we could advance human knowledge. And for me, my expression of doing that was to build what I think is the ultimate tool for advancing human knowledge, which is AI.

We sat down in this room two years ago, and I wonder if AI is moving faster today than you imagined.

It's moving incredibly fast. I think we are on some kind of exponential curve of improvement. And of course, the success of the field in the last few years has attracted even more attention, more resources, more talent, so that's adding to this exponential progress.

Exponential curve, in other words, straight up?

Yep, straight up, and increasing speed of progress.

We saw the progress ("Hello, Scott. It's nice to see you again.") in an artificial companion that can see and hear and chat about anything. Early chatbots learned only the internet; an app called Astra also takes in the world.

Do we call her "she"?

It's a good question. I'm not sure we all know the answer yet.

Bibo Xu is product manager for Project Astra, an app in a new generation of chatbots that interpret the world with their own
eyes. We challenged Astra with virtual paintings we chose and showed to Astra for the first time.

This is "July Hay" by Thomas Hart Benton, from 1942.

How about this one?

That is "The Virgin of Charity" by El Greco.

That painting is "Automat" by Edward Hopper.

The person in the painting, what emotion is she exhibiting?

The subject in the painting appears pensive and contemplative, her expression suggesting a sense of solitude.

Can you create a story around this picture?

It's a chilly evening in the city, a Tuesday perhaps. The woman, perhaps named Eleanor, sits alone in the diner. She is feeling melancholy due to the uncertainty of her future and the weight of unfulfilled dreams. The scene freezes in time, only the flow of ideas moving onward.

That line, "only the flow of ideas moving onward": I wish I'd written that. And did we hear a limit to Astra's patience?

What was that fictional conversation about?

Ah, in the story I imagined, Eleanor and her friend talked about their shared dreams and the challenges of pursuing them.

Something about the way you said "ah" made me wonder if you were getting bored.

I apologize if my tone came across that way. My aim is always to engage thoughtfully.

And she said, "Well, I'm sorry if you don't like my tone." Wow. What's happening there?

Well, that's interesting. That's, again, the challenge with these systems: they act in the moment with the context that's around them, and that may have never been tested before.

He's often surprised, because AI programs are sent out on the internet to learn for themselves. They can return later with unexpected skills.

So we have theories about what kinds of capabilities these systems will have. That's obviously what we try to build into
the architectures. But at the end of the day, how it learns, what it picks up from the data, is part of the training of these systems. We don't program that in. It learns like a human being would learn, so new capabilities or properties can emerge from that training situation.

You understand how that would worry people.

Of course. It's the duality of these types of systems: they're able to do incredible things, go beyond the things that we're able to design ourselves or understand ourselves. But of course, the challenge is making sure that the knowledge databases they create, we understand what's in them.

Now DeepMind is training its AI model, called Gemini, to not just reveal the world but to act in it, like booking tickets and shopping online. It's a step toward AGI, artificial general intelligence, with the versatility of a human mind.

On track for AGI in the next five to ten years, I think.

And in 2030 you will have what?

Well, we'll have a system that really understands everything around you in very nuanced and deep ways, and kind of embedded in your everyday life.

Embedded, like Astra in eyeglasses.

What can you tell me about this building I'm looking at?

This is the Coal Drops Yard, a shopping and dining district.

She sees what I see. There's a speaker in the earpiece only I can hear.

What was it originally, before it became shops?

The Coal Drops Yard was originally a set of Victorian coal warehouses used to receive and distribute coal across London.

Was coal ever a problem for the environment in London?

Yes, coal was a significant source of air pollution in London, particularly during the Industrial
Revolution.

It occurred to us that the only thing we contributed to this relationship was legs, which will also soon be engineered.

I also think another big area will be robotics. I think it will have a breakthrough moment in the next couple of years, where we'll have demonstrations of maybe humanoid robots or other types of robots that can start really doing useful things.

For example: "Hey, robot..." Researchers Alex Lee and Giulia Vezzani showed us a robot that understands what it sees ("That's a tricky one") and reasons its way through vague instructions.

Put the blocks whose color is the combination of yellow and blue into the matching color bowl.

The combination of yellow and blue is green.

And it figured that out. It's reasoning?

Yep, definitely, yes.

The toys of Demis Hassabis's childhood weren't blocks but chess pieces. At 12, he was the number-two champion in the world for his age. This passion led to computer chess, video games, and finally thinking machines. He was born to a Greek Cypriot father and a Singaporean mother. Cambridge, MIT, Harvard: he's a computer scientist with a PhD in neuroscience, because he reasoned he had to understand the human brain
first.

Are you working on a system today that would be self-aware?

I don't think any of today's systems, to me, feel self-aware or conscious in any way. Obviously, everyone needs to make their own decisions by interacting with these chatbots. I think theoretically it's possible.

But is self-awareness a goal of yours?

Not explicitly, but it may happen implicitly. These systems might acquire some feeling of self-awareness. That is possible. I think it's important for these systems to understand you, self and other, and that's probably the beginning of something like self-awareness.

But he says if a machine becomes self-aware, we may not recognize it.

I think there's two reasons we regard each other as conscious. One is that you're exhibiting the behavior of a conscious being, very similar to my behavior. But the second thing is that you're running on the same substrate: we're made of the same carbon matter, with our squishy brains. Now obviously, with machines, they're running on silicon. So even if they exhibit the same behaviors, and even if they say the same things, it doesn't necessarily mean that this sensation of consciousness that we have is the same thing they will have.

Has an AI engine ever asked a question that was unanticipated?

Not so far that I've experienced. And I think that's getting at the idea of what's still missing from these systems. They still can't really go beyond asking a novel question, or posing a novel conjecture, or coming up with a new hypothesis that has not been thought of before.

They don't have curiosity?

No, they don't have curiosity, and they're probably lacking a little bit in what we would call imagination and intuition.

But they will
have greater imagination, he says, and soon.

I think actually in the next maybe five to ten years, we'll have systems that are capable of not only solving an important problem or conjecture in science, but coming up with it in the first place.

Solving an important problem won Hassabis a Nobel Prize last year. He and colleague John Jumper created an AI model that deciphered the structure of proteins.

Proteins are the basic building blocks of life. Everything in biology, everything in your body, depends on proteins: your neurons firing, your muscle fibers twitching, it's all mediated by proteins.

But 3D protein structures like this are so complex that less than 1% were known. Mapping each one used to take years. DeepMind's AI model did 200 million in one year. Now Hassabis has AI blazing through solutions to drug development.

So on average it takes 10 years and billions of dollars to design just one drug. We could maybe reduce that down from years to maybe months, or maybe even weeks, which sounds incredible today, but that's also what people used to think about protein structures.

It would revolutionize human health, and I think one day maybe we can cure all disease with the help of AI.

The end of disease?

I think that's within reach, maybe within the next decade or so. I don't see why not.

Demis Hassabis told us AI could lead to what he calls radical abundance, the elimination of scarcity. But he also worries about risk.

There's two worries that I worry about. One is that bad actors, human users of these systems, repurpose these systems for harmful ends. And then the second thing is the AI systems themselves: as they become more autonomous and
more powerful, can we make sure that we can keep control of the systems, that they're aligned with our values, that they're doing what we want, that benefits society, and that they stay on guardrails?

Guardrails are safety limits built into the system. And I wonder if the race for AI dominance is a race to the bottom for safety.

So that's one of my big worries, actually: of course, all of this energy and racing and resources is great for progress, but it might incentivize certain actors to cut corners. And one of the corners that can be shortcut would be safety and responsibility. So the question is, how can we coordinate more as leading players, but also as nation states, even? I think this is an international thing. AI is going to affect every country, everybody in the world, so I think it's really important that the world and the international community have a say in this.

Can you teach an AI agent morality?

I think you can. They learn by demonstration, they learn by teaching, and I think one of the things we have to do with these systems is to give them a value system and guidance and some guardrails around that, much in the way that you would teach a child.

Google DeepMind is in a race with dozens of others striving for artificial general intelligence, so human that you can't tell the difference. Which made us think about Demis Hassabis signing the Nobel book of laureates. When does a machine sign for the first time? And after that, will humans ever sign it again?

I think the next steps are going to be these amazing tools that enhance almost every endeavor we do as humans. And then beyond that, when AGI arrives, I think it's going to change pretty much everything about the way we do things. I think we need new great philosophers to come about, hopefully in the next five to ten years, to understand the implications of this.

A system creating 3D worlds from images, bringing to life your own holiday photos, at 60minutesovertime.com.