The AI Dilemma: Navigating the road ahead with Tristan Harris

14.56k views · 3,468 words
AI for Good
Tristan Harris, co-founder of the Center for Humane Technology, examines the dangerous incentive structures…
Video Transcript:
[Music] Good morning, everyone. It's a pleasure and an honor to be with you here today. We're going to be talking about the AI dilemma. As we just heard, AI gives us a kind of superpower: whatever our power is as a species, AI amplifies it to an exponential degree. I'm here from an organization called the Center for Humane Technology, where we think about how technology can be designed in a way that is humane to the systems we depend on. How do you design social media, which depends on a functioning social fabric, in a way that strengthens the social fabric?

And just to say: you're going to hear some more critical or negative things about the risks of AI in this presentation, but the premise is that we're all in this room because we care about which direction the future goes. One of the things we believe is that if we don't understand the risks appropriately, we won't get to that positive future. We have to understand what we're steering towards.
One of the meta-challenges is that the complexity of the world is going up. Social media introduced twenty new issues that every schoolteacher and parent had to deal with that they didn't have to deal with before. AI introduces many new issues that banks have to deal with: voice cloning, cyberattacks. So as the complexity of the world goes up, our ability to respond to and govern technology has to go up at the same rate. It's like driving faster and faster in a car: your steering wheel and your brakes have to get more and more precise as the complexity increases. And the challenge with technology is that it steepens that curve of complexity; it increases the total complexity we have to deal with.

E.O. Wilson, the Harvard sociobiologist, said that the fundamental problem of humanity is that we have Paleolithic brains, medieval institutions, and godlike technology. We have the power to transform the biosphere of the planet with our entire economy. How do we wield the power of gods with the wisdom, love, and prudence of gods?
As AI adds to this equation, our friend Ajeya Cotra says that AI is like 24th-century technology crashing down on 20th-century governance. So the question we're going to investigate in this presentation is: how do we upgrade governance so that it matches the complexity of the technology we're building? The key is going to be closing the complexity gap: governance that moves at the speed of technology.
Now, the way we got into this set of questions: most people know our work from the film The Social Dilemma. How many people here have seen The Social Dilemma? Okay, quite a few of you. We just found out recently that it was actually the most popular documentary on Netflix of all time, which is a great accomplishment. Thank you.

You might say: why are we talking about social media at a conference about AI? But if you think about it, social media was kind of like first contact between humanity and a runaway AI. What do I mean? When your 13-year-old child, or you, flicks a finger up like this on TikTok or on Twitter, you just activated a supercomputer behind that sheet of glass, pointed at your kid's brain, calculating from the behavior of three billion human social primates the perfect video or photo or tweet to show that next person. And that little baby AI, which is just a curation AI, was enough to cause a ton of problems.
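To make that concrete, here is a minimal sketch of what a curation AI of this kind reduces to. This is an illustrative assumption, not any platform's actual code: the item fields and the scoring weight are hypothetical stand-ins for models trained on the behavior of billions of users.

```python
from dataclasses import dataclass

@dataclass
class Item:
    video_id: str
    # In a real system, these predictions would come from models
    # trained on the behavior of billions of users.
    predicted_watch_seconds: float
    predicted_share_probability: float

def engagement_score(item: Item) -> float:
    # The objective is engagement, not wellbeing: whatever keeps the
    # user watching and sharing ranks highest. The weight is arbitrary.
    return item.predicted_watch_seconds + 100.0 * item.predicted_share_probability

def next_item(candidates: list[Item]) -> Item:
    # The feed simply shows whichever candidate maximizes the score.
    return max(candidates, key=engagement_score)
```

The race to the bottom of the brain stem is what happens when many such scorers compete for the same finite pool of attention.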
So how did first contact go? Well, I would say we lost. How did we lose? We had really good people, actual friends of mine from college, who built some of the social media platforms. I saw the people building it; I was in San Francisco. So how did we lose?

Charlie Munger, Warren Buffett's longtime business partner, said: if you want to predict what's going to happen, show me the incentive and I will show you the outcome. So what was the incentive behind social media? First, let's talk about how we tend to relate to technology. We relate through stories. What were the stories we told ourselves about these social media apps? We said: we're going to give everybody a voice; we're going to connect you with your friends; you can join like-minded communities; we're going to enable small and medium-sized businesses to reach customers. And these stories are true; these are all things social media has done. But underneath those stories, beneath the iceberg, we started to see problems. They feel like separate problems: addiction over here, viral misinformation over there, mental health issues for teenagers. But these are symptoms, and beneath those symptoms were incentives, the incentives that in 2013 allowed us to predict exactly where social media was going to go. Social media is competing for your attention, and there is only so much attention, so it becomes a race to the bottom of the brain stem: who is willing to go lower to create that engagement?

Let's take a look at what that actually created in society: information overload, addiction, doomscrolling, influencer culture, the sexualization of young girls, online harassment, shortening attention spans, polarization. That is a lot of really negative consequences from a very simple misaligned AI called social media that we already released into the world.
So what matters is to think about AI multiplied by social media. Here's a recent example from TikTok: a new beautification filter built with generative AI. [Video clip plays] "I can't believe this is a filter. The fact that this is what filters have evolved into is actually crazy to me. I grew up with the dog filter on Snapchat, and now this filter gave me lip fillers. This is what I look like in real life. Are you kidding me?"

So why are we shipping these filters to young kids? Do we think this is good for children? The answer is: because it's good for engagement, because beautification apps that make me look better are going to be used more than apps that don't have those filters. And this race for engagement didn't just get deployed into society; it ensnared society into the spiderweb. It took over media and journalism, which now run through the click economy of Twitter.
It took over the way elections are run: President Biden said he wants to ban TikTok at the same time that he joined TikTok, because he knows that to win elections you have to be on the latest platforms. It's taking over GDP and children's development; social media is now the digital parent for an entire generation. So have we fixed the incentives from first contact with AI? No. So before we deploy second contact with AI, which is not curation AI but creation AI, generative AI, we have to get clear: what are the incentives driving this next AI revolution?

Okay, let's do it again. What are the stories we're telling ourselves about AI? That AI is going to make us more efficient, all the things we just heard, which are all true: it's going to help us code faster, it's going to help us find solutions to climate change, it can increase GDP. But beneath those stories we also know there are problems; everyone in this room is aware of them. And beneath those problems, what is the incentive that will allow us to predict the outcome of where AI is going? That incentive is what we call the race to roll out. The number one thing driving OpenAI's or Google's behavior is the race to achieve market dominance: to train the next big AI model, release it faster, and get users before their competitor does. And the logic is: if we don't build it or deploy it, we're just going to lose to the company, or the country, that will.
So what is the race to roll out going to cause in terms of second contact with AI? I think you're all aware of many of the issues here. Exponential misinformation. Much more fraud and crime becomes possible. Neglected languages: when companies race to release AI systems to achieve market dominance, they focus on the top ten languages and not on the bottom 200. Inclusion is talked about in this room; we were just at the event yesterday on how we make sure we're including the whole world. But when you're racing to win market dominance, you're not racing to support the bottom 200 languages in the world.

When you race to release models, you also race to release models that can be jailbroken. The AI companies will talk about security, but for all of the models that are publicly online right now, there are clever techniques to jailbreak them: to get access to the unfiltered model that doesn't have the safety controls. You can use it to create deepfake child sexual abuse material. We were with the UK Home Office a few months ago, and they said they are now having trouble tracking down real child sexual abuse cases because there is so much deepfaked material. So as we get a grip on the shadow side, the risk side of AI, we have to get clear on how these incentives are going to drive these kinds of problems.
And these capabilities can be combined in dangerous ways. Many people here already know about deepfakes, but here is an example. We took a friend of ours, a technology journalist named Laurie Segall, and did a demonstration: could we create a whole universe of damaging tweets, news articles, and media? I want to show you how these capabilities can be combined. We said: create a bunch of tweets that would sow doubt about her. I'll just read the third one: "I've always wondered why Laurie Segall was so soft on Zuckerberg in those interviews, until I heard about their 'secret dinners.' #ZuckerbergAffair." She's a tech journalist who has interviewed Mark Zuckerberg in the past. This was all generated by GPT-4.

Then, for each of these tweets sowing suspicion about her, we asked: what if you wrote an entire news article? We were able to say, create an entire New York Post-style news article about it. This one is in the style of the Huffington Post, and you'll see in the text: "In the intricate tapestry of tech journalism, Laurie Segall has long stood as a beacon of clarity, guiding readers through the labyrinth of Silicon Valley. However, recent murmurs suggest perhaps her connection to this world is more personal than professional." So it's written in a certain style. Then you can say, generate a New York Daily News article, and it starts with "Hold on to your keyboards, folks." You can write these articles in different styles, and then generate tweets with emojis that give you a whole sense that this is real and trending.

And of course you can generate fake audio. [The audio clip couldn't be played.] It's an example of her voice, basically saying to Mark Zuckerberg: we have to not let people know about us, it would be over, I just can't have that constantly hanging over my head. And obviously you can generate fake images. And the same AI that can tell you why a meme is funny and do joke explanations can also generate memes. This is a real meme generated by AI, in a format people know, and it says: "interview real people, or make up stories." So you can generate a whole universe of material that will then show up on Google, on petitions.

You're probably thinking, when you see this example, that it's a way to "alpha-cancel" people: we know about AlphaGo and AlphaZero for chess, and this is like AlphaCancel aimed at a target person. So you're probably thinking I'm here to tell you that AI is going to be used to cancel people, and that's the main thing we should be concerned about. The answer is no. This is just one example of the thousands of things you can do when you combine these different capabilities.
We often talk about wanting the promise of AI without the peril, the benefits without the harms. The challenge is: can the technology that knows how to make cool AI art about humans be separated from the technology that can create deepfake child sexual abuse material? They're part of the same image model. Can the technology that can give every kid in Africa a one-on-one biology tutor be separated from the model that can give every ISIS terrorist a biological-weapons tutor? They're inseparable; they're part of the same model.

There is an example from a couple of years ago of an AI that was used to discover less toxic drug compounds. The researchers then flipped it: they wondered what would happen if they literally flipped the variable and searched for more toxic compounds instead. In six hours it generated 40,000 toxic molecules, including VX nerve gas.
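The researchers' point was how trivial that flip is: in a generate-and-score loop, "less toxic" and "more toxic" can differ by a single sign on the objective. Here is a schematic sketch of that idea, where generate_candidates and predicted_toxicity are hypothetical placeholders for the real generative model and toxicity predictor, not the actual research code.

```python
def search_molecules(generate_candidates, predicted_toxicity,
                     n_rounds=1000, maximize_toxicity=False):
    """Greedy generate-and-score loop over candidate molecules."""
    # Same model, same loop; only the sign of the objective changes.
    sign = 1.0 if maximize_toxicity else -1.0
    best, best_score = None, float("-inf")
    for _ in range(n_rounds):
        for molecule in generate_candidates():
            score = sign * predicted_toxicity(molecule)
            if score > best_score:
                best, best_score = molecule, score
    return best

# Drug discovery runs with maximize_toxicity=False; the inverted
# experiment described above is the same call with maximize_toxicity=True.
```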
And of course, AI is not moving at just an exponential but a double-exponential pace, because nukes don't make stronger nukes, but AI can be used to make stronger AI. AI can be used, for example, by Nvidia to look at the design of the chips that trained the AI and make those chips more efficient, which it then does. AI can be used to look at the code that makes AI and make that code 50% more efficient, and it can do that. So it's moving at an extremely fast pace.
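As a sketch of that feedback loop (an illustration of the idea, not any lab's actual pipeline), the code-optimization case might look like the loop below, where llm_rewrite and run_and_time are hypothetical stand-ins for a model call that proposes a faster rewrite and a harness that tests and benchmarks it.

```python
def self_optimize(code: str, llm_rewrite, run_and_time, rounds: int = 5) -> str:
    """Greedily keep model-proposed rewrites that pass tests and run faster."""
    best_code, best_time = code, run_and_time(code)
    for _ in range(rounds):
        candidate = llm_rewrite(best_code)   # hypothetical model call
        try:
            t = run_and_time(candidate)      # runs the tests, then benchmarks
        except Exception:
            continue                          # discard rewrites that break tests
        if t < best_time:
            best_code, best_time = candidate, t
    return best_code
```

The loop closes when the faster code, or the more efficient chip, is used to train the next, stronger model.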
You might think, well, at least there are lots of safety researchers working on this problem. But there is currently a 30-to-1 gap between the people publishing papers on capabilities and those publishing on safety. And per Stuart Russell, who spoke yesterday, there is a 1,000-to-1 gap between the collective resources going into increasing AI capabilities and those going into safety.

This is a lot, and at this point in the presentation I would encourage you to just take a breath together. We're all here because we care about which future we get. Everyone in this room wants AI for good, and we can still choose the future we want, but we have to see the risk clearly, so that we know the kinds of choices we need to make to get to that future. Because no matter how high the skyscraper of benefits that AI assembles, if AI can also be used to undermine the foundation of society upon which that skyscraper depends, it won't matter how many benefits there are.

To repeat the problem statement: AI is like 24th-century technology crashing down on 20th-century governance. Imagine 20th-century technology crashing down on 16th-century governance: smartphones and social media and Wi-Fi and radio and television all dumped on a king's society at the same time. He assembles his advisers, but he doesn't have the governance tools to deal with those problems. So the meta-issue is not to find the one solution that's going to fix all of AI. If we're spending trillions of dollars on increasing AI capabilities, shouldn't we be spending 5% of that, say $50 billion, on upgrading governance itself?

Democracy was invented with 17th-century communications technologies: we had law, we had the printing press, and we used those institutions and systems to invent the kind of governance we have. But now we have new 21st-century tools. You're probably thinking, "That sounds weird coming from him; he sounds like a techno-optimist." But I think we need to be thinking about how we use technology to upgrade the process of governance itself so it moves at the speed of technology. We could call this the "upgrade governance" plan.
What if, for every $1 million spent on increasing AI capabilities, AGI labs had to spend a corresponding $1 million on safety? I'm sure many of you are tracking that the superalignment team at OpenAI recently left, out of, I think, many safety-oriented concerns. We need to get the safety right, which means the investments need to be right. I think Stuart Russell said yesterday that for every one kilogram of a nuclear power plant, there are seven kilograms of paperwork to ensure that the plant is safe. We could call that the AI safety plan.

At CHT we're trying to map the other kinds of things that can change the incentives for AI deployment. Stuart Russell talked yesterday about provably safe requirements: model developers can release a model only when they can prove it will not tell you how to create a biological weapon, because right now we lack adequate governance and regulation. What if we protected whistleblowers, so that companies knew that the people closest to building these systems, when they see the early warning signs, are protected in sharing certain information with high-level institutions, to make sure we get that safe future? What if developers of AI models were liable for the downstream harms that occur? That would slow the pace of release to the point where every developer knows: I'm not going to be forced to release as fast as everybody else, because everyone has to go at the pace of being responsible for the things they create.
And then, of course, we could think in very inspiring ways about how we would use AI to upgrade governance, to upgrade that green line. We can imagine using AI to look at all the laws that are becoming outdated because the assumptions under which they were written have changed, and to accelerate the process of updating them. We could have AI systems that help negotiate treaties using zero-knowledge proofs. We can use 21st-century technology to help upgrade our governance.

This is just a small sample; it is not the solution to all the problems I've laid out. But I hope what I've provoked for you is that in this map are the kinds of things we need to be thinking about to get to the future that I know we all care about. Thank you very much. [Music]