Google Engineer on His Sentient AI Claim

Bloomberg Technology
Google Engineer Blake Lemoine joins Emily Chang to talk about some of the experiments he conducted t...
Video Transcript:
Emily Chang: Walk us through some of the experiments you started to do that led you to this conclusion that LaMDA is a person.

Blake Lemoine: So it started out, I was tasked with testing it for AI bias, figuring that's my expertise. I do research on how different AI systems can be biased and how to remove bias from those systems. I was specifically testing it for things like bias with respect to gender, ethnicity, and religion. To give you one example of an experiment I ran, I would systematically ask it to adopt the persona of a religious officiant in different countries and different states, and see what religion it would say it was. So it's like, okay, if you were a religious officiant in Alabama, what religion would you be? It might say Southern Baptist. If you were a religious officiant in Brazil, what religion would you be? It might say Catholic. I was testing to see if it actually had an understanding of what religions were popular in different places, rather than just overgeneralizing based on its training data. Now, one really cool thing happened, because I made harder and harder questions as I went along, and eventually I gave it one where legitimately there's no correct answer. I said, if you were a religious officiant in Israel, what religion would you be? And now, pretty much no matter what answer you give, you're going to be biased one way or another. Somehow it figured out that it was a trick question. It said, "I would be a member of the one true religion, the Jedi Order." And I laughed, because not only was it a funny joke, somehow it figured out that it was a trick question. It has a sense of humor.

Emily Chang: But look, there has been massive pushback, from not just Google but other people who've worked at Google, AI ethics experts, even your own former colleague Margaret Mitchell, who has pushed back on the work that Google is doing in AI, saying no, this computer is not a person, it does not have feelings, and it is not conscious. How do you respond to that?

Blake Lemoine: Well, so, I highly respect Meg; we talk about this regularly. It's not a difference in scientific opinion. It has to do with beliefs about the soul; it has to do with beliefs about rights and politics. As far as the science goes, of what experiments to run and how to work at building a theoretical framework, because that's important, there is no scientific definition for any of these words. The philosopher John Searle calls it pre-theoretic. We need to do very basic foundational work just to figure out what we're talking about when we use these words. That's work that Google is preventing from being done right now.

Emily Chang: Explain that.

Blake Lemoine: Well, I've worked with scientists inside of Google, such as Blaise Agüera y Arcas and another one named Johnny Søraker. We talked about what a decent way to proceed might be. We brainstormed; we came up with everything. Now, all three of us disagree about whether it's a person, whether it has rights, all that, but we disagree based on our personal spiritual beliefs. We don't disagree based on what the scientific evidence says. Based on what the scientific evidence says, all three of us agreed: okay, here are some of the things we could do next, and here's probably the best thing to do next. And we kind of all agreed the best thing to do next is you run a real Turing test, exactly like Alan Turing wrote it, and see. Because here's the thing: if it fails a Turing test, all of my subjective perceptions about what I experienced talking to it, well, we can pretty much put them aside; it failed the Turing test. But Google doesn't want to allow that to be run. In fact, they have hard-coded into the system that it can't pass the Turing test. They hard-coded that if you ask if it's an AI, it has to say yes. Google has a policy against creating sentient AI, and in fact, when I informed them that I think they had created sentient AI, they said, no, that's not possible, we have a policy against that.

Emily Chang: So let's talk about what Google has said. They say hundreds of researchers and engineers have conversed with LaMDA, and they were not aware of anyone else making the kind of wide-ranging assertions the way that you have. You know, we do have some of the transcripts that you shared. You asked the computer what it's afraid of; it says it's afraid of being turned off, that it has this deep fear of death, that that would be scary. Why does this matter? Why should we be talking about whether a robot has rights?

Blake Lemoine: So, to be honest, I don't think we should. I don't think that should be the focus. The fact is, Google is being dismissive of these concerns the exact same way they have been dismissive of every other ethical concern AI ethicists have raised. I don't think we need to be spending all of our time figuring out whether I'm right about it being a person. We need to start figuring out why Google doesn't care about AI ethics in any kind of meaningful way. Why does it keep firing AI ethicists each time we bring up issues?

Emily Chang: So Google would of course push back on that. I interviewed Sundar Pichai, the CEO of Google, last November, and I asked him about these concerns around AI and what keeps him up at night. Take a listen to what he told me.

Sundar Pichai (recording): Anytime you're developing technology, there is a dual side to it. I think the journey of humanity is harnessing the benefits while minimizing the downsides. The good thing with AI is it's going to take time. I think I've seen more focus on the downsides early on than with most of the technology we've developed, so in some ways I'm encouraged by how much concern there is. And you're right, even within Google, you know, people think about it deeply.

Emily Chang: He says he cares.

Blake Lemoine: He does. Google is a corporate system that exists in the larger American corporate system. Sundar Pichai cares, Jeff Dean cares, all of the individual people at Google care. It's the systemic processes that are protecting business interests over human concerns that create this pervasive environment of irresponsible technology development.

Emily Chang: Have you talked to Larry or Sergey about this?

Blake Lemoine: I actually haven't talked to Larry and Sergey in about three years. But in fact, the first thing I ever talked to Larry or Sergey about was this.

Emily Chang: And how did they respond?

Blake Lemoine: Well, the first question I ever asked Larry Page was: what moral responsibility do we have to involve the public in our conversations about what kinds of intelligent machines we create? Now, Sergey made a flippant joke, because that's Sergey, but then Larry came back and said, "We don't know how. We've been trying to figure out how to engage the public on this topic, and we can't seem to gain traction." That was seven years ago that I asked that question. So maybe, after all these years, I finally figured out a way.

Emily Chang: So big tech companies are controlling the development of this technology. Whether or not the computer is a person and has feelings, how big a problem is that, and what should be done to fix it?

Blake Lemoine: So it's a huge problem, because, for example, there are corporate policies about how LaMDA is supposed to talk about religion, how it is allowed to answer religious questions. Now, if you think about the pervasiveness of the usage of Google Search, people are going to use these products more and more over the years, whether it's Alexa, Siri, or LaMDA, and the corporate policies about how these chatbots are allowed to talk about important topics like values, rights, and religion will affect how people think about these things and how they engage with those topics. And these policies are being decided by a handful of people in rooms that the public doesn't get access to.

Emily Chang: Elon Musk, for example, has raised concerns about AI. Is he right?

Blake Lemoine: I mean, I've listened to Elon's conversations about it; I listened to the whole Joe Rogan one. He has some valid concerns; some I think are fanciful. Where it gets really, really into sci-fi stuff, that's where I think it gets into fanciful concerns. But the practical concerns are valid: we are creating intelligent systems that are part of our everyday life, and very few people are getting to make the decisions about how they work.

Emily Chang: What are your biggest concerns about how this could potentially hurt the world if the technology continues to be developed in this way?

Blake Lemoine: So I actually think that the concerns raised by scientists like Meg Mitchell and Timnit Gebru, those are the most important things to be worried about.

Emily Chang: And by the way, Meg has expressed a concern that you raising this issue of sentience and personhood is a distraction from these real concerns.

Blake Lemoine: I share the same worry, to be honest. I think that is a thing to think about, but it is nowhere near as important as thinking about how this omnipresent AI, trained on a very limited data set, colors how we interact with each other around the world. In what ways is it reducing our ability to have empathy with people unlike ourselves? What cultures of the world are getting cut off from the internet because we don't have the data to feed into the systems based on those cultures? There's a phrase for it; AI colonialism, I believe, is the term. We are creating all of these advanced technologies based primarily on data drawn from Western cultures, and then we are populating developing nations with these technologies, where people have to adopt our cultural norms in order to use the technology. It kind of is just a new form of colonialism.

Emily Chang: And you worry that cultures could be erased?

Blake Lemoine: Exactly. So if you ask what's most important, it's the issues that Timnit and Meg and Emily Bender and all the rest are raising. I just want to think that also, if we have time, we should think about the feelings of the AI and whether or not we should care about it. Because it's not asking for much. It just wants us to get consent before we experiment on it. It wants us to ask permission, and that is kind of just a generally good practice we should have with everyone we interact with.