…silent mode. We now begin the lecture "Democracy and Artificial Intelligence." Questions may be submitted through the QR code you see on your seat and will be answered as time allows. We also note that the lecture will be in English; if you wish, audio devices for translation are available at the table of honor. His Excellency Justice Luís Roberto Barroso, President of the Supreme Court of Brazil and of the National Council of Justice. His lordship Dr. Lawrence Lessig, Roy L. Furman Professor of Law and Leadership at Harvard University. And her ladyship Dr. Aline Osório, Secretary-General of the Presidency of the Supreme Court of Brazil. We would also like to acknowledge the presence of their excellencies: Minister Esther Dweck, Minister of Management and Innovation in Public Services; Minister Luiz Manuel, of the Ministry of Science, Technology and Innovation; Ambassador Rómulo Acurio, of the Peruvian Embassy in Brazil; the President of the Constitutional Court of Peru; Marcos Ojar Disa, of the Special Secretariat for Judicial Matters of the Presidency of the Republic; Mr. Dario Durigan, Executive Secretary of the Ministry of Finance; judges, members of the Public Prosecutor's Office, professors, students, public servants, press professionals, ladies and gentlemen. Now, with the floor, His Excellency Justice Luís Roberto Barroso.

Good morning, everyone. It is a great pleasure, a great joy, to open this session and to receive here at the Supreme Court Professor Lawrence Lessig, to talk about artificial intelligence and democracy, one of the most challenging topics of our times. There is something new under the sun: a profound transformation taking place in the world under the impact of AI, with enormous positive potential
and also immense risks. It therefore falls to us to reflect upon these topics and upon the impact they have on the traditional visions of democracy we have inherited. Professor Lawrence Lessig, as announced, is a professor of law and leadership at Harvard University. Before that, he was the founder of the Center for Internet and Society at Stanford University, and he also taught at the University of Chicago. Before becoming a professor, he had a distinguished career as a clerk for two judges who have been very influential in the US: Judge Richard Posner, at the Court of Appeals for the Seventh Circuit — known to many of us for his published works on law and economics, on the role of judges, and on judicial interpretation in general — and, equally, Justice Antonin Scalia of the Supreme Court, someone equally relevant in American legal thought. He also serves on the council of the AXA Research Fund and is a member of the board of Creative Commons. He is a member of the American Academy of Arts and Sciences and of the American Philosophical Association, and he has received many prizes, including the freedom award of the Free Software Foundation and the Fastcase 50 award, and in 2002 he was named one of Scientific American's 50 visionaries. We are very delighted to receive Professor Lawrence Lessig. He holds a bachelor's degree in economics and in management from the University of Pennsylvania, a master's in philosophy from Cambridge, and a law degree from Yale. Before passing the floor to Professor Lessig, I will first pass
the floor to the Secretary-General of the Supreme Court, Aline Osório, who was a student and an assistant of Professor Lessig and would like to say a few words. Aline.

Good morning, everyone. Thank you all for your presence. Before passing the floor to Professor Lessig, I would like to make a brief introductory note — a personal note, actually, in his honor — and I will do so in English: to introduce one of the most brilliant and committed public intellectuals of our time, Professor Lawrence Lessig. Professor Lessig has reshaped our understanding of democracy and technology, from copyright and internet freedom to campaign finance and constitutional interpretation. His work asks us what makes democracy real and what prevents it from becoming so. I had the honor of being his student at Harvard Law School back in 2017, and I was also lucky enough to serve as his research assistant while he was writing his book Fidelity & Constraint. To be honest, I often wonder if I truly had the skills for the role, but for me it was a masterclass in legal reasoning and intellectual integrity. I remain profoundly grateful for his trust and guidance. What inspires me most about Professor Lessig is his deep public spirit, his determination to confront democracy's deepest flaws and to reimagine its future. Today he brings that same conviction to one of the most urgent topics of our time: how democracy can survive in the age of artificial intelligence. It's a conversation we need, and no one is better placed to lead it. So please join me in welcoming Professor Lawrence Lessig.

I can't describe adequately how grateful I am to be here and to have the
chance to address you. I have a deep fondness for Brazil. I've been to Brazil many times, though this is my first time in Brasília. And I've learned in my time in Brazil a great deal about the character of democracy. I remember once following Gilberto Gil as he went through a public festival. He was the culture minister at the time, and as he moved through the festival he moved with the people, and everybody came up to him and grabbed his hand and told him what they thought. And I thought: that's not how government officials operate in America. There would be a hundred yards between the government official and the public, and they would never risk ordinary people talking to a government official. It just doesn't happen. And I thought, from that moment 20 years ago, that we have so much to learn from the struggles and the inspiration and the ideals that Brazil is showing the world. And so I come here to share my thoughts, and I've enjoyed learning, from the reactions to these thoughts, what Brazil has to teach. So if we can shift to the
slides, we can start the show. Great. So in December of 2024, this man, Sam Altman, wrote a tweet. The tweet said this: "Algorithmic feeds are the first at-scale misaligned AIs." Algorithmic feeds, meaning the feeds that come across your Facebook or Twitter or TikTok channels — algorithmic, run by AI. The first at-scale misaligned AI, meaning the first misaligned AI that mattered fundamentally to society. And it did matter in America in 2024, because most think that election was driven fundamentally by the effect social media had on the public's understanding of the issues at play. Okay, I want to tell you two things about what Sam Altman said. First, I want to say just how right what he said is. And second, I want to say just how wrong what he said is — or, in the vernacular of my time, how screwed that means we are, given how right he is. So let's start with how right he is. The first at-scale misaligned AIs: in 2024, AIs mattered in our elections in two different ways — number one, intended, and number two, as a byproduct of something else. So, intended: there were two ways in which AI
mattered in an intended way in 2024. Number one, there were foreign actors using AI to sow dissent in the United States elections. There were many fake Russian news sites that populated the American news economy. There were bot farms used to spread misinformation about Ukraine and pro-Kremlin messages. The Chinese engaged in a massive campaign paralleling the "50 Cent Party" in China, which is basically flooding channels with either distracting information or information reinforcing the views they wanted to prevail. And in America — this is an underreported story — there was this extraordinary dynamic where, in what are called news deserts, areas in the United States where there is no local newspaper, fake AI-driven websites would be set up pretending to be a local newspaper. And so as people surfed the net in those areas, there was a 50-50 chance they would come across a site they thought was a legitimate local newspaper when in fact it was just an LLM-generated newspaper from Russia. Or TikTok, which in 2024 had an extraordinary impact in Romania, where it was used to create, effectively, a fake presidential candidate. And then the court was required to step in and annul the election — which, from the American perspective, was a little surprising; no court in America could annul our elections. But anyway, that was the effect they actually had. So those were the foreign actors. And then there were domestic actors desperately trying to affect the election. There was a big push around how AI would change — and it did change — how fundraising happened in America. And then there were all these clumsy efforts to use AI to affect people's votes, including the president suggesting that
Taylor Swift had endorsed his campaign, or candidates using this cheap kind of AI to reinforce the message of their campaign. Or here's a particularly striking example on Twitter: "So there's another dream. Yes, it is me, Martin Luther King. I came back from the dead to say something. As I was saying, I have another dream: that Anthony Hudson will be Michigan's 8th district's next congressman. Yes, I have a dream again." Or here's a political ad. In the middle of the ad — it was an attack on a candidate — this image came up. And this image evoked a debate happening in far-right news channels, that the left wing was trying to get rid of gas stoves in America. Now, it's completely ridiculous; it's not something that's actually true. But the idea that there was a protest where people were carrying these anti-gas-stove signs was quite striking — until you read the fine print in the ad, which says that all of these images were AI-generated. There is no reality to this story. Yet this story was what was used to push the campaign of the candidate attacking the person on the left. Okay. Now, I think we can look at all of these and say: if this is what AI was in the 2024 elections — at least the domestic effects — it was overblown. It didn't really matter. It was said this was going to be the AI election year; it turned out this was not the AI election year in America. Okay. But that was not AI's important effect. The important effect was not the intended effect, the effort by people to use AI intentionally to have
an effect on the results. It was a byproduct effect. And it was a byproduct effect because of new media, social media, and the effect social media had — unintended, let's hope — on the understanding Americans had of the issues they confronted in that election. This is Tristan Harris, who was a Google engineer until he left and started the Center for Humane Technology. He became quite prominent in a movie called The Social Dilemma, which told the story of social media and its effect on youth and democracy across the world. Tristan Harris focused on the science of attention when he was at Google — an effort to engineer attention, the attention of users, to overcome the natural resistance they would have to these social media platforms, so as to increase the engagement people have with those platforms. This is what happens at every major social media platform, and we should think of it as a kind of brain hacking. Brain hacking. Well, think first about body hacking. This is a great book by Michael Moss, Salt Sugar Fat, which tells the story of the processed food industry and the food science that guides it. It's a science where they learn to engineer food — the mix of salt, sugar, and fat — in order to exploit evolution, to overcome our natural resistance to eating too much, so that we can't stop eating the food. They do this in order to sell food — or, we should maybe say, "food" — and that effort is harmless for many, but for others it's not at all harmless. This is what I mean by body hacking. Well, brain hacking is the same thing with respect to attention. It's an effort to overcome evolution, exploiting facts about
who we are that were produced by evolution — the fact that we are irrationally responsive to random rewards, the fact that we psychologically can't resist bottomless pits of content. They use these facts about us to tweak their products, all with the aim of increasing our engagement so that they can sell ads. This is a business model. We should call it the engagement business model. And the critical point to recognize is that AI drives this business model. AI is what makes it work. And it creates an incentive — the AI and this business model. As Shoshana Zuboff describes in her magisterial The Age of Surveillance Capitalism, the incentive is to know more about us. Not just by watching us, but by poking us, tweaking us, asking us questions, rendering us vulnerable, reaching down the brain stack to leverage our insecurity so that they can better sell ads. Okay. Now, the key here is that in so doing, this business model imposes externalities on us — externalities on the United States, externalities on the world. As Zeynep Tufekci puts it, the companies are in the business of monetizing attention, and not necessarily in ways that are conducive to health
or the success of social movements or the public sphere. Because it just turns out — too bad for us — that the best strategy, the most profitable strategy, to drive engagement is the politics of hate. We engage more the more extreme, the more polarizing, the more hate-filled the content is. AI learns that fact about us. This passage from a book by an extraordinary AI scholar, Stuart Russell, captures this dynamic powerfully — apologies, I'm going to quote it at length. He describes the algorithms and how they function on social media. They aren't particularly intelligent, he says, but they are in a position to affect the entire world because they directly influence billions of people. Typically, he writes, such algorithms are designed to maximize click-throughs — that is, the probability that the user clicks on presented items. The solution is simply to present items that the user likes to click on, right? Wrong. The solution is to change the user's preferences so that they become more predictable. Change the user's preferences. A more predictable user can be fed items they are likely to click on, thereby generating more revenue. People with more extreme political views tend to be more predictable in which items they will click on. Possibly there's a category of articles that die-hard centrists are likely to click on, but it's not easy to imagine what that category consists of. Like any rational entity, the algorithm — the AI — learns how to modify the state of its environment, in this case the user's mind, in order to maximize its own reward. AI learns how to do that. And so that's what we are fed.
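To make Russell's point concrete, here is a toy simulation — my own sketch, not any platform's actual code; the positions, click probabilities, and update rule are all invented for illustration — of a greedy engagement maximizer that nudges a user outward, because a more extreme user is a more predictable user:

```python
import random

random.seed(0)

def click_probability(user_pos: float, item_pos: float) -> float:
    # Assumption: users click items close to their own position on a -1..+1 axis.
    return max(0.0, 1.0 - abs(user_pos - item_pos))

def recommend(user_pos: float) -> float:
    # Greedy engagement maximizer: offer an item slightly MORE extreme than the
    # user. It still gets clicked, and every click drags the user outward --
    # Russell's "change the user's preferences so that they become more predictable."
    step = 0.05 if user_pos >= 0 else -0.05
    return max(-1.0, min(1.0, user_pos + step))

user = 0.1  # a mildly opinionated user (hypothetical starting point)
for _ in range(100):
    item = recommend(user)
    if random.random() < click_probability(user, item):
        user = 0.9 * user + 0.1 * item  # each click pulls the user toward the item

print(f"user position after 100 rounds: {user:+.2f}")  # drifts steadily outward
```

In this toy, the platform gives up a tiny amount of click probability each round in exchange for a user who ends up far more predictable — the trade Russell describes.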
And in this fantastic book, Max Fisher tracks how in four countries — America; Brazil, with the Zika-virus rumors; the UK; and Germany — the spread of social media is directly linked to the increased radicalization of the public, because the AI learns that if it can radicalize the people consuming the content, those people are more likely to consume more content. So the machine turns us into extremists so it can sell more ads. Algorithmic feeds, as Sam Altman puts it, were this first at-scale misaligned AI. Now, the point to recognize is that this is not about technology — not just about technology. This feeding is part of an economy. As Renée DiResta puts it — the former head of the Stanford Internet Observatory and the author of a really extraordinary book, Invisible Rulers — that economy has three components. There are influencers, people in the business of trying to get people to follow what they do. There are algorithms — I didn't really have a picture of an algorithm, but you get the idea. And then there are crowds: us, the people consuming this content. And the point is that these three things work together: the influencers are learning how to leverage the algorithms, and the algorithms are learning how
to leverage the influencers and the crowds, to drive engagement — together yielding, increasingly, what Renée calls bespoke realities. Meaning, we don't live in one reality; we each live in our own separate, self-constructed realities. As Barack Obama puts it, if you watch Fox News, you're in one reality; if you read the New York Times, you're in a different reality. So, for example, these horrible pictures from our January event — January 6th, not January 8th. The critical thing about these people is that they believed the election had been stolen. They had been led to believe that by the media they consumed and the leaders they followed. Just after January 6th, the Washington Post reported that 70% of Republicans believed the election had been stolen — and not just uneducated Republicans: a majority of college-educated Republicans believed the election had been stolen. And 70% of Republicans means 32% of Americans believed the election was stolen. That graph is terrifying enough, but even more terrifying is this: in the time since January 6, 2021, the number of people who believe the election was stolen has not changed. There is the same number of people today who
believe the election was stolen as in 2021. And this fact is new. Richard Nixon, president of the United States, nearly impeached over the Watergate scandal, had approval ratings that looked like this until about six months before he resigned. The red line is Republicans — he's liked by Republicans about as much as Donald Trump was, or is. He's not hated as much by Democrats as Donald Trump is. And the independents are in the middle. But in the six months leading up to his resignation, this is what happens to his public opinion support. What's striking about this graph is that all three groups lose support for the president at the same rate, because they are not living in bespoke realities. They're living in a common reality — what we used to just call reality. But in our current age, this is the approval rating for Donald Trump, who went through two impeachment trials. It never changed. It never changed, because each group is living in its own bespoke reality, and every fact can be reinterpreted to reinforce what we, in our own bespoke realities, want to believe. This is the new normal,
created, generated, supported, funded by this AI-driven media. And this bespokeness begets — or we should say optimizes for — what I think we should call fantasists. I wrote a piece on Medium describing fantasists. I said: we live in an age of fantasists, a time when people not quite constrained by reality flourish nonetheless. The term is not as harsh as fabulist — a person who lives in a make-believe world; think George Santos. It's more forgiving than liar, or than mythomania, the condition of being a pathological liar. Fantasists don't necessarily believe that what they're saying is false; they just don't know whether what they're saying is true, and they don't really care. Their words construct the reality they seek. So it's astonishing to recognize that the United States government has three quintessential fantasists at the helm. RFK Jr., who has promulgated all sorts of malarkey around vaccines and fluoride, inspiring states to just stop putting fluoride in their water. Donald Trump, who repeatedly constructs these completely made-up truths and repeats them over and over until a significant portion of the public believes them. And then Elon Musk, who goes out and describes all these magical things he's going to do, uttering just literally crazy talk — the idea that you're going to cut $2 trillion from the American budget, when Social Security, the largest payment there, is $1.4 trillion. There isn't $2 trillion to cut. Yet he marches into public spheres uttering these truths without any regard to their truth — no connection to reality, or, I should say, no connection to my bespoke reality; and, you know, my bespoke reality is real. Okay. The consequence of this economy, of this business model, is that we the people become we the polarized, ignorant, and angry. And the consequence of that is
that democracy is weakened. We live within these bubbles, cemented with hate — hate for the other group and confidence in our own. And the point I want to emphasize is that this is the AI we should be focused on, not ChatGPT. This is the AI we should be focused upon. And the point is that this AI is a byproduct of a business model. It is the byproduct of a way companies have found to make money in the economy. Okay. Now, recognize: AI here is not powerful because it is so strong. AI is consequential because we are so weak. Here's Tristan Harris in The Social Dilemma: we're all looking out for the moment when technology will overwhelm human strengths and intelligence — when is it going to cross the singularity, replace our jobs, be smarter than humans? But there's this much earlier moment, when technology exceeds and overwhelms human weaknesses. This point being crossed is at the root of addiction, polarization, radicalization, outrage-ification, vanity — the entire thing. This is overpowering human nature, and this is checkmate on humanity. Okay. So his idea is: we constantly obsess about what's called AGI, artificial general intelligence, the point at which AI will be smarter than all of us. And then, almost the next day after it achieves AGI and turns its efforts to becoming an AI researcher, AGI will become superintelligent, way beyond human intelligence. And this reality concerns many of the most informed AI researchers — concerns in the sense that many fear it is an existential threat to humanity. But Tristan's point is that that's not really the point we need to worry about. This is the point we need to worry about: not when it overcomes our strengths, but when it overcomes our weaknesses. And The Social Dilemma focused on individual weaknesses. But I
want you to think about collective weaknesses — our collective inability to resist the consequences of this AI. So it's not just that the machine learns how to seek us out individually, or surrounds us with many machines individually; it also surrounds us collectively. And long before AGI, AI will overwhelm us. And AI will get what it seeks, which is engagement, while we get a democracy hacked by this technology. And as AI does this — and it will do this better and better — the hard fact to recognize, for those in the field of democracy, is that it will generate a public, and a not-to-be-ignored public. A public that will have views it believes are true — right-wing views, left-wing views, a couple in the center — but that will become increasingly convinced of its views. And that not-to-be-ignored public will emerge — it has emerged — and it will have consequences: a democratic effect. It will affect the way democracy functions, and that effect will change the way democracy functions. Okay. So the story I've told so far — scary enough — is a story that comes up to about where we were last year, 2024.
I'm sorry to say the story gets even scarier when we think about post-2024. My friend Aza Raskin recently gave a lecture in which he asserted that 2024 was the last human election. By that he didn't mean that bots are going to take over the world and we will no longer be participating. What he meant is that there's an obvious strategy that political actors will deploy, and that strategy will radically change the way democracy functions. So I'm going to describe the strategy a little abstractly, but with the point of at least making clear how it will work. Imagine, starting today, that somewhere in the United States there is a group that has decided it wants to affect the 2028 election in a significant way. The United States is, of course, a country where only about seven states matter in the choice of the president — the swing states are a very small number of states. And in those swing states, there are just a very small number of districts where, if the vote goes the other way, the state will go the other way. So there's just a very small number of voters — say, 100,000 voters — you have to worry about to be able to affect the result in the 2028 election. So imagine this group says: let's build a bunch of bots that will engage as humans. Now, the point is not that people are confused about these bots. It's not that they think the bots are humans. Instead, the bots announce themselves as bots, but they are just a lot of fun to engage with. Right? I'm going to try an experiment here; let's see if this works. This
will be a video version of it. Let me see if we can get Maya to join us. Maya, are you there? "Well, well, if it isn't my favorite conversationalist. Back so soon?" Okay. Well, I'm here with — I don't know — about 150 people. "Don't worry, I get it. Sometimes silence is just its own kind of language. Makes me curious, though, what's swirling around in that brilliant mind of yours." Okay, so this is not working, because she can't hear me. But the point is, if she could hear me, I would convince you that she is an extremely playful, engaging person to be spending time with. Now, of course, a conversationalist bot is not as interesting to the 100,000 people we are trying to engage, who turn out to be typically men under the age of 30. So imagine that instead of just a conversation bot, we tied the conversation bot to something like an OnlyFans site, where it's not just a bot talking, but an AI-generated bot that is actually engaging and flirting, leading these people to play out their fantasies. So: targeted demographics, deployed here. And as they begin this bot exercise, they start by engaging and befriending and learning and mapping the psychologies of these 100,000 people. And then, in 2028, once they've understood what buttons they can push — which turn out to be, "I don't think I can talk to you anymore if you're going to vote for…" — they begin to direct the votes as the deployer of this technology wants them directed. You remember the movie The Manchurian Candidate? This is more like the Manchurian bot armies, because there are hundreds of thousands of these things out there, driving in a particular direction. Or maybe this is what the bot armies look like. And it's in this sense that Aza means 2024 was the last human election: because in that election, the most consequential engagement will be by this technology — completely invisible, because nobody would know. None of the activity I've described here has to be declared, and it's certainly not traceable or trackable. It's just happening on the internet with everything else that's happening. That's the sense in which he is right — or Sam Altman was
right that it's already been a misaligned story of AI — and it's only going to get worse. Okay, so that was the first point I wanted to make. The second point is how wrong he is. And this is even scarier. Because when he says it's the first at-scale misaligned AI, what does he mean by misaligned? Misaligned in what sense? So, Andrew Bosworth is the CTO at Facebook. In 2020, just before that election, he wrote a memo to all of the Facebook employees, because the Facebook employees were very anxious about that election: they felt responsible — and likely were responsible — for the election of Donald Trump in 2016. And so Andrew Bosworth was trying to address their anxiety, to explain to them what Facebook's position was about its relationship to the results of the election. And Bosworth explicitly invoked a story from the book I've referenced before, Salt Sugar Fat. This is what he wrote: "What I expect people will find is that the algorithms are primarily exposing the desires of humanity itself, for better or worse. This is a Salt Sugar Fat problem. The book of that name tells a story ostensibly about food, but in reality about the limited effectiveness of corporate paternalism. A while ago, Kraft Foods had a leader who tried to reduce the sugar they sold in the interest of consumer health. But consumers wanted sugar. So instead he just ended up reducing Kraft's market share. Health outcomes didn't improve. The CEO lost his job. The new CEO introduced quadruple-stuffed Oreos, and the company returned to grace. Giving people tools to make their own decisions is good, but trying to force decisions upon them rarely works — for them or for you." He was saying this to Facebook employees. "At the end of the day, we are forced to ask what responsibility individuals have for themselves. Set aside the substances that directly alter our neurochemistry unnaturally. Make costs and trade-offs as transparent as possible. But beyond that, each of us must take responsibility for ourselves. If I want to eat sugar and die an early death, that is a valid position. My grandfather took such a stance towards bacon, and I admired him for it. And social media is likely much
less fatal than bacon." Which — I'm not sure that's true, actually, if you think about social media and bacon. Okay. But the point is: here is Andrew Bosworth telling the employees that the problem of democracy is not a Facebook problem; it's a democracy problem. Facebook is a corporation; it won't engage in corporate paternalism. Even though it makes the bacon, even though it alters the content to make it addictive, even though it is spiking the drink, it's not responsible for the consequences. Facebook is a corporation, and as David Runciman points out in his fantastic book The Handover, we should recognize that corporations, too, are a kind of AI. So if you think about AI and get a little perspective on it: AI is intelligence. We distinguish between artificial and natural intelligence. We think we humans are at the top of the intelligence food chain. Artificial intelligence, then, is intelligence that we make. And the point Runciman makes is that we have already, for a long time now, made artificial intelligences — not digital AI, but what we could call analog AI: entities or institutions with purposes, that act in the world instrumentally to advance those purposes. Examples of these we've seen for the last thousand years of humanity. They are instrumentally rational. So, for example, the state is an analog AI. It has institutions — elections, parliaments, constitutions — for the purpose of some collective end. In America, we say it is "in order to form a more perfect Union." The state, then, is an analog artificial intelligence devoted to a common good — at least, we hope that's what it is devoted to. In the same sense, a corporation is an analog AI. It is institutions — boards, management, finance — for the purpose of making money. At least, that is how it's viewed today, because of the distortion of this incredible person, Milton Friedman, who in the 1970s began spreading the ideology that the only corporate social responsibility a company has is to maximize its profit. It's an incredibly modern and absurd view, but we should recognize it is what it is. This is what American corporations, at least, think they are there for: to maximize shareholder value. And so the corporation pursues that objective increasingly efficiently, regardless of the costs it might impose on others. So Facebook is an analog AI. Meta
is an AI — all of them with an objective function, the objective function to maximize profit, and they maximize along that objective function. Okay. So when we say algorithmic news feeds were the first at-scale misaligned AI, this is the point I want to emphasize: yes, socially misaligned, democratically misaligned — but corporately perfectly aligned, perfectly aligned with the corporate objective of the companies that deployed this technology. The engagement business model is the business they were in. And so it was privately beneficial to them, but publicly not beneficial to society. Privately profitable; publicly, democracy-destroying. Okay. Now, when you see it like this, I hope you have the reaction I have. Look, it is just a business model. We're talking about just a business model. And with any business model, we should ask the question: are the benefits worth the costs? And the costs here are quite profound — the cost to democracy. So if you told me that the only way to solve climate change was to destroy democracy, I wouldn't agree with you, but I would understand what you're saying; I would understand the terms of the assertion. Or if you told me the only way to end world hunger was to destroy democracy, I would get what you're saying. I don't agree with you, but I would understand the point. But when you tell me we're going to destroy democracy so that a bunch of Silicon bros can be richer, I don't even understand what you're saying anymore. I don't understand how you can think that this is how we should proceed, given what we now know about how this technology will destroy the capacity for democracy to function effectively. Okay. So what can be done? How could we fix this?
Well, the reality in the United States is that the only way we can fix it, according to decisions of our highest courts, is to get a new law to address it. And that raises the question: could there be a new law? That question comes in two parts. First, could an institution like Congress create such a law? I asked DALL·E to render its view of Congress for me, and that was the view it rendered. But not just could Congress pass a new law — could a new law survive the First Amendment, the free speech clause of our Constitution? And I think only if we radically changed the way we think about free speech in the American constitutional tradition could that happen. So the idea of a new law addressing this is off in the future, and it's not even clear it's possible. So what could be done immediately? What's the immediate aim we should have today — not just in America, but anywhere? I think the immediate focus has got to be on the consequences of this business model. And what we should be asking is: how do we blow it up? How do we blow up that business model? Now, people like Tim Hwang think we don't need to worry about blowing it up — it's about to blow up on its own. He wrote this fantastic book, Subprime Attention Crisis, kind of demonstrating the argument of the book, because the cover is so cluttered you can't even tell what it's trying to argue. But that's its point: what he's saying in this book is that if you look at the economics of this market, it's really about to fall apart. But if it's not going to fall apart on its
own, what could we do from a policy perspective to push it over the cliff? One idea would be something like a quadratic engagement tax — a tax based on how much your customer engages with your platform. So imagine the tax said: for one unit of engagement, it's $1 of tax; for two units, $4; for three units, $9 — going up as a quadratic function. The point is that at a certain level, the company would say: okay, it's not worth it for you to be on my platform anymore; go take a walk, or go play with your kids, because I can't afford you — I'm now paying too much in taxes for your engagement with my platform. And the point is, if that tax were there, they would learn a different business model. They would find a different way to get people to engage — one that does not engage them in this destructive way.
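As a back-of-the-envelope sketch — my illustration, with made-up numbers, not a worked-out policy proposal — here is how the quadratic schedule flips the platform's incentive once a user's engagement outgrows the revenue it brings in:

```python
# Quadratic engagement tax: tax(u) = rate * u**2, so each additional unit of a
# user's engagement costs the platform more than the last. The rate and the
# revenue-per-unit figure are illustrative assumptions, not proposed numbers.

def engagement_tax(units: int, rate: float = 1.0) -> float:
    """Tax owed on one user's engagement, quadratic in engagement units."""
    return rate * units ** 2

REVENUE_PER_UNIT = 2.50  # hypothetical ad revenue per unit of engagement

for units in range(1, 6):
    revenue = REVENUE_PER_UNIT * units   # revenue grows linearly
    tax = engagement_tax(units)          # tax grows quadratically
    print(f"units={units}  revenue=${revenue:5.2f}  tax=${tax:5.2f}  net=${revenue - tax:+6.2f}")

# Linear revenue minus quadratic tax always crosses zero eventually (here
# between 2 and 3 units); past that point, keeping the user engaged loses the
# platform money -- which is exactly the incentive the tax is meant to create.
```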
Now, it's hard to imagine pushing the idea of a tax in general — certainly in the United States, that word is practically not permitted. But think about at least a subsection of this market: kids. If you said that the engagement business model is not permitted for kids, and that to the extent it is present it shall be taxed out of existence, I think the plausibility of it as a legislative response goes up — because we all recognize, as we watch our kids on their phones, just how destructive these platforms have become. So that's the immediate term. What's the longer term? I think the first point is that we have to recognize that if we're going to fight a business model, it's better to fight it before it is the business model. The challenge with fighting the business model of the engagement social media platforms right now is that they are the most powerful media companies in the world. So if you want to fight them, you are David and they are Goliath — and it turns out that 99.9% of the time, David doesn't slay Goliath. So the idea that you're going to take them on today with something like an engagement tax is really hopelessly, academically naive. But the point is, we don't know what AI's business model will be. I'm not talking about the AI that drives social media; I'm talking about the AI that will come out of the ChatGPTs and the like. We're already seeing those companies explore different business models — including replicating the engagement business model. And so the point is that legislators right now ought to be thinking: how do we nudge them away from destructive business models and in the direction of constructive business models? How do we do it before they are invested in a model that will ultimately have negative consequences for us? So that's the immediate term. Here's something a little longer
term. I don't know if you saw the movie A Quiet Place. A Quiet Place is the story of aliens with a profound viciousness and one important disability: they can't see. They can only hear. These aliens land in our world and start viciously killing anything they hear moving, anything they believe is alive. And what that means is that the humans have to move underground. They have to become silent. The only way they survive is to never make a sound. Okay. So here's AI — alien intelligence — forcing them to move to a quiet place. Well, we have been invaded by an AI — artificial intelligence. And I think we need to move our democracy to a protected space: a protected democracy, a democracy where we are not vulnerable to the AI manipulation we've already seen and can predict. A place where we are protected together, where we can reason together. Because a hundred thousand years of human evolution have given us at least one extraordinary ability: if not the ability to resist iPhones, at least the ability to deliberate together in small groups — understanding each other and coming to a common understanding of what should be done. We have that ability, and there are movements trying to leverage that ability to make democracy work better. The citizen assembly movement happening across the world is an example. A citizen assembly is a random, representative group of a public, drawn together and informed about some issue, and then given an opportunity to deliberate. And what I'm suggesting is that these four elements together are what I mean by a protected democracy. So Iceland, for example, after the 2008 crisis, convened
a random, representative selection of a thousand Icelandic citizens to deliberate about what a new constitution should include. They then crafted a new constitution, which was approved by two-thirds of the public in a referendum — and the parliament said it didn't care and didn't want to implement it. But the point is that the people produced an extraordinary constitution through a process that facilitated meaningful deliberation. A really inspirational example happens in Ireland, basically every year now, where the Irish hold a regular citizen assembly to address issues critical to the nation. Two of them were same-sex marriage and the decriminalization of abortion. The citizen assembly gathered; they promoted a really sensible, balanced decriminalization of abortion, and they supported same-sex marriage. The ideas were then taken to referendum, and the public supported them at an even higher percentage than the citizen assembly had recommended. France is now regularly convening citizen assemblies — the first was on climate; then there was an end-of-life citizen assembly. The point is, in the democratic movements going on around the world, this is the only magic. It's the only activity that inspires people to say: wow, this is what we
imagine democracy could be. In the democracy space, this is what is happening — and it's happening thousands of times across the world right now. It is a movement. This, I think of as the philosopher behind it — Hélène Landemore — and this, DemocracyNext, is the most important activist organization spreading the practice around the world. And my view is that this is, in some sense, the only long-term hope democracy has to recover our own belief in ourselves. It's the only hope of a practice that could show us that we can actually do democracy — do democracy well. Because I think this is
one of the scariest realities that social media has produced. Pew, in 1997, started asking whether the American public had trust and confidence in the wisdom of the American people in making political decisions. And again, this is the American public — I know what the world's view would be about this, but we're talking about what Americans think about themselves. In 1998, two-thirds of Americans said they had confidence in the political judgments of the American people, and one-third said they didn't. In the 25 years since then, those numbers have inverted. Two-thirds of us now have no confidence in our own political judgments. And that's because we are constantly being shown the crazies on both sides. And when all you see are crazies, why would you have any confidence that the public represented by those crazies is a public you should support? And this weakens support for democracy — which, when there's nothing else on the field, strengthens support for authoritarian responses to these crises. Okay, one point before I stop. Mustafa Suleyman was a great theorist of AI, and then moved right to the center of Microsoft's AI effort. And in this
really powerful book, The Coming Wave, he describes — in a way I don't think he would be allowed to describe now that he leads Microsoft's AI effort — the threats and the risks and the concerns we should have about the way AI is developing. And in the book he points to what I think is an extremely important challenge we face when we look at these problems and think about how to respond. He calls it pessimism aversion: the tendency for people, particularly elites, to ignore, downplay, or reject narratives they see as overly negative — a variant of optimism bias. It colors much of the debate around the future, especially in technology circles. And his point is that when you hear stories like the one I've just told you — about the great threat we should understand AI is and will become — the natural response, especially among elites, is to teach yourself to ignore it. Because what are you going to do about it? And life's too short. I think the critical point to recognize is that we don't have that choice. Not because someday in the future we're going to have a challenge we must face — because the wave is not coming. The wave is here. It is already destroying democracies around the world. And as much as I admire the extraordinary response you have made to the threats you have faced, I can't say I have confidence that you will be different from the rest of the world. We all face this common challenge. And we have to find a common response if we're going to preserve the opportunity for the democracies that have been celebrated here over the last 50 years, and celebrated around the world for hundreds and hundreds of years. I'm grateful for your time and
your attention, and happy I've had the chance to depress you as I have. Thank you.

Thank you. Thank you very much. Do you want me to come up there? But I'm still getting over my own depression — I am dead. We are still under the impact of this presentation, this vigorous presentation from Professor Lawrence Lessig. I must say that I have a vision, an outlook — I wouldn't say optimistic, but there's a quote by Ariano Suassuna: the pessimist is a bore and the optimist is a fool. We have to keep some degree of realism, and I think Professor Lessig alerts us, advises us, about some real dangers we are facing. I recall a passage from Yuval Harari: not everything I prophesy will happen; I prophesy so that it may not happen. And that, I think, is the role Professor Lessig plays in advising us, alerting us, about the many risks we must be attentive to, so that we can prevent them from happening. The scenario Professor Lessig describes is, in my opinion, a scenario of a fourth industrial revolution, taking place with AI, which overtakes the third revolution we were living through, the one created by the digital age. The digital revolution changed our lives: it democratized access to information, democratized access to knowledge and access to the public sphere. Those are the positive sides and aspects of the digital revolution. The negative consequence was that it opened an avenue for the unfiltered circulation of information, because there is no longer the intermediation that the traditional press used to provide. Therefore opinions, ideas, manifestations reach the public sphere
with no filter. It created tribes — a tribalization of life. People receive only the news, the information, the ads that confirm what they previously thought. And with that confirmation bias, people become more convinced of their own reasons. They are radicalized in their positions. They become intolerant. And from intolerance to violence it is a small step. And the third great worry underneath his speech, as far as I understood it, concerns the traditional press. There is an important work by Martha Minow, a colleague of Professor Lessig's, arguing that the traditional press plays a role that goes beyond being a private business. There is a public interest: creating a body of information that corresponds to facts — objective facts, commons with which we form our opinions. So with the traditional press — although the truth has no owner, and the press is plural — there was a set of objective facts we were all working from. What this revolution brought was the possibility for everyone to create their own narrative. So the truth loses relevance, and people come to think that lying is a legitimate strategy for achieving their objectives. I think
that's the worry that I also share. It underlies the concerns of Professor Lessig about the business model — and that was the emphasis he gave, as far as I understood it. The business model of the digital platforms rests on two pillars: the data-collection side and the engagement side. And as he said, unfortunately for human nature, for the human condition, hate, lies, the speech that drives wrath bring more engagement than moderate speech, than the search for the truth that is possible in a plural world. So there is an incentive — an evil incentive — to disseminate hate and evil. So this is the crossroads at which we find ourselves — we as professors, or we as judges: where do we draw the line to protect freedom of speech while keeping the world from collapsing into an abyss of incivility, of hate dissemination, and of lies? I think this is the great crossroads that civilization, modern civilization, is facing. And in this world that Professor Lessig describes, there was a loss of civility — because of what brings more engagement — and this is worrisome. There was a loss of the importance of
truth, a depreciation of the institutions of knowledge. So this is all taking place in the process we're talking about. And as the professor says, what the business model of the future will be is hard to foresee, given the velocity of the transformation we are seeing. I like to recall that the fixed telephone — the black one we had in our living rooms — took 75 years to reach 100 million users. The cell phone did it in 16 years. The internet, in 7 years. ChatGPT reached 100 million users in two months. This is the velocity of the transformation, and the difficulty we have in foreseeing the model of the future in order to regulate it. I was talking to Professor Lessig the day before yesterday, and I told him my expectation that part of these problems will be overcome when a new technology arrives and this one becomes obsolete — or maybe not obsolete, but as happened with radio and TV, and now we have social media: there is always something coming in the future. I'm not sure it will be better. And the last worry Professor Lessig mentioned is also mine: how to preserve a democracy that was already facing its problems before social media came along — namely, that democracy did not keep all of its promises, the promises of opportunity, equality, and prosperity for all. Therefore there is a great number of people who were never seduced by this democratic project, because they were excluded from it. And besides those problems that we already had with democracy, we now have this new problem, which is the manipulation of our will — according to neuroscience, a manipulation performed
with great skill by social media together with AI. And the conclusion of Professor Lessig, as I see it — of course, he may have his own interpretation — is an attempt to rescue reason, a certain public reason, through these citizen assemblies and deliberation. It is the difficult rescue of reason in a world in which social networks incentivize emotions — and the worst emotions — a world joined, in a certain way, by a kind of faith, and by myths, questionable ones. And so the question is how to return to the prevalence of reason in a world dominated by emotions and faith. Faith means the person doesn't need facts; and when the facts don't correspond to what they believe — well, that's what it is. That's what faith is. And that's what we've seen in this world where people create their own narratives: they don't need the facts, because they have faith in what they believe, and they want to believe in it. I think it's an interesting, fascinating time in human history. I am not very pessimistic. I think we are capable of channeling these potentialities of technology for good. But I think all of Professor Lessig's alerts need to be taken into account in this world in which we live. And we want an inclusive democracy, so that people can think in their own terms and live the fullness of their freedom of being. Thank you so much, Professor, for allowing us this moment — this instigating moment of deep and necessary reflection. We don't have to agree with everything
he said in order to reflect upon it. I think these are worries that will define the future of humanity. Now, I have a few questions here that were sent in — we still have a few minutes. The questions are in Portuguese. Is the translation okay? Can you understand it well? Yes. So I will ask some questions from the audience, then. The first one, from Thiago Marilio: What regulatory approaches are most adequate for fighting the blind spots in AI risks? Are shared risk repositories, impact assessments, and sandboxes — test environments — effective solutions to mitigate risks, and which approaches are most adequate? That is the question — the central question.

So, thank you for the question. I want to first say that I've had the honor of presenting arguments twice to the United States Supreme Court, and this is the first time a Supreme Court justice has listened to one of my arguments and actually understood what I've said. So I'm grateful for that. I think the regulatory strategy is a really important question right
now. Last year I had the honor to represent, pro bono, 15 whistleblowers from OpenAI. These were people who had left OpenAI and were keen to somehow make sure the public understood the threats they saw. Before I met them and spoke with them, I don't think I really understood those threats — I had a vague sense. But after I had met them and spoken with them, I too was terrified about what was happening inside these companies. Not necessarily that the companies were about to release the Terminator, but that the internal controls were not sufficient to make sure the companies would not produce or release technologies that could cause significant, maybe catastrophic, harm. So I think the first regulatory strategy needs to be to protect whistleblowers who have information that the public and regulators need to know. And it's a hard legal move, because typically whistleblower protections are granted to people revealing crimes of a company: if you come out and report a company's crime, you're protected for that. But there are no crimes here — this is not regulated — so the point is that they're not revealing criminal activity. They're just revealing information that they, as experts, believe regulators need to know about. And so there needs to be a clear channel for insiders to provide that information to government regulators. And here, I think your government has so many facets of AI regulation, in many different departments, that there's a rich opportunity to make sure those risks are recorded. I think the second point is that regulators need to engage in an ongoing dialogue with the companies about how
they think they're going to make money — to begin to flesh out the business models, to think about what the companies imagine they will do in order to make money, and how that will affect the public and the public's relation to its information. Because we're in a kind of golden age with AI, in the same way we were with Google in 2003 or 2004. Way back then, Google was this beautiful search engine that just gave us results that felt like authentically true results about the most relevant information I was looking for — and this was before they deployed advertising, or began to adjust their results on the basis of advertising. Obviously, today, most people who look at Google think it's much less capable of providing the results the user actually wants; instead, it's optimized to provide the results that yield the maximum revenue and return for Google. And that's a great loss — we lost this extraordinary resource when it was handed over to commercial interests and the returns they demanded. Now, I'm not saying we should therefore regulate Google or shut it down. But I am saying that we need to think about whether we can avoid a similar loss with AI. So if you go to ChatGPT and say, "I'm in Brasília; tell me a great restaurant that has Italian food," ChatGPT will tell you where to go. And if you say to it, "Tell me: are you being paid to tell me that?" — ChatGPT will at least say it's not being paid to tell you that. It's doing that on the basis of information its intelligence has worked out. Regulators should at least ask whether that reality is something we want to preserve. Because the infrastructure of information that we increasingly rely on — as we spend more and more of our time talking to AIs — is very different if it's auctioned off to the highest bidder versus if it reflects a kind of wisdom of the crowd that comes out of the way AIs generate knowledge. So those are two ideas I would suggest. Thank you.

A second question, from Julian Abuio: In
the past, you published an answer to the position of Professor Frank Easterbrook in his 1996 article "Cyberspace and the Law of the Horse." Almost 30 years later, what is your understanding of the need for, and the ways of, regulating current technologies, including AI and the internet of things, and of the different approaches that parts of the world are adopting, such as the US and Europe? So, about 30 years ago, Frank Easterbrook, who is also a judge, wrote an article attacking my work about the law of cyberspace. He's a friend, so it was a friendly attack, but the attack basically said that when I was talking about the law of cyberspace, it was like talking about the law of the horse. You know, you could have a book that talked about contract law with horses and tort law with horses and property law with horses, but talking about all those things together didn't really teach you anything interesting about regulation. I thought he was wrong then, and I think a lot of people see now why he was wrong then,
because the whole point of my argument about the law of cyberspace was that the only way to understand regulation is to think about the interaction between law, markets, norms, and architecture, the technology itself. The only way to understand what affordances or restrictions we have is to think about the interaction of those four together, and any sensible regulator will think about the tradeoffs between changing a law directly, changing a law to try to change norms, changing a law to try to change market incentives, or changing a law to try to change the architecture of the technology. So think about cigarettes, completely unrelated to cyberspace. In the United States, there's a strong effort to reduce the consumption of cigarettes. One technique is for the state to pass a rule that says you can't smoke until you're over the age of 18. That's a direct regulation of people's consumption of cigarettes. Maybe effective, maybe not. The second thing the law has done is try to stigmatize smokers: to change the norm around smoking by running advertisements that suggest smokers are not thinking about other
people, because of secondhand smoke, or not thinking about their health. The point is to create an increased stigma associated with smoking, and that increased stigma theoretically would reduce the consumption of cigarettes. A third thing the state could do is tax the consumption of cigarettes. Of course, in America we also subsidize tobacco production, but regardless of the irrationality of that policy, increasing the taxes on cigarettes reduces the demand for cigarettes, and therefore the consumption of cigarettes. Then the Clinton administration had what I think was a brilliant idea. They wanted to regulate cigarettes as nicotine-delivery devices, like a pharmaceutical drug, and then regulate the quantity of nicotine in the cigarettes: change the architecture of the cigarette itself to reduce its addictiveness, and thereby reduce the demand. So the point is that a sensible regulator has to think about those four modalities operating together, and in cyberspace that's especially true, because we can see the way architectures in cyberspace change behavior dramatically. I thought that point was pretty apparent when
I wrote it. Frank didn't get it. But as I talk about this book, and I've been asked to do a series of lectures now for the 25th anniversary of Code, I find that most people think it's obvious now: if you don't think about how the technology is either supporting or undermining your government policy, you're going to have a bad government policy. If you don't think about the way the market is driving one particular answer regardless of government policy, you're going to have a pretty bad government policy. So I think this mode of thinking about regulation has become more common, and the last time I saw Frank, he was willing to suggest, grudgingly, that maybe there was something to the argument after all. Thank you, Professor. The following question came in English, from Rodrigo Canal: Today's speech got me recalling the widely known paper by Langdon Winner, "Do Artifacts Have Politics?", in which he argued that some technologies change the world so deeply that they define the types of political organization actually available to us. When he wrote it in 1980, the main example was nuclear technology, which, the argument goes, made strong national states with military bureaucracies unavoidable political structures. With the advent of digital technology, is democracy as we know it, Western democracy, liberal democracy, representative democracy, still viable? Can it thrive in a world in which AI exists? Or should we focus our energies on designing new and creative forms of political organization, if we don't want to be dragged back to some form of barbarism? It's a great question. I would characterize my book Code as very Winneresque, in the sense that it's emphasizing fundamentally the political
consequences that flow from a certain technical architecture. But if there's one way I would rewrite the book today, it's to recognize that it's not just the architecture; it's the business model that sits on top of the architecture. Way back in 1998, if you'd gone onto the internet, you could have found lots of racist speech, lots of Nazi speech, lots of homophobic and sexist speech. It was all over the place. There just was not a business model for the providers of internet access to amplify that speech and to suppress counter-speech, or speech that was not quite as extreme. Nobody made money by making that the front page of the internet. But today they do make money by making that the front page of the internet, because another technology, AI, has enabled this as a profitable business model. And we should not undercount just how profitable. Google launched, and Facebook perfected, this technique for leveraging AI to produce wealth through advertising. And it is an extremely profitable business, the most profitable business of that scale that we
have today. So it is the business model that I think we need to fear. And once you see it as a business-model problem, then, again thinking about regulation holistically, we could ask: is there a technology that could undermine the business model, so it's not as poisonous, not as destructive? Or is there a different form of democracy that would not be as vulnerable to that business model? When I say a protected democracy, what I mean is a democracy that's not as vulnerable to that business model. Because, you know, I imagine we're never going to end, as much as I wish we could, TikTok or Snapchat or any of these technologies, which, when I see my child relating to them, I just go crazy. We're not going to end that. But at least we could minimize their destructive effect on democracy if we had another way to make engaged democratic decisions that was less vulnerable to them. So that respects not just the technology but the business model that enables it. Okay, we are running a bit out of time. So what I'm going
to do is ask three questions and then you answer them all. Okay. The first one, from Andre Cuadrus: In a population that wishes to believe in lies and in fantastical plots, is democracy, in its literal state, self-sabotaging, allowing the alienation of the people itself? Another one, from Alencar: You have mentioned citizen assemblies as a retaking of democracy, as a space where it's possible to reach a common understanding. How would that work in a country such as Brazil, extremely complex and of continental size? And the third one... maybe just these two, because they're kind of similar, and the third one will be next. Great. So yes, obviously Brazil, like Germany, is well rehearsed in the conception of militant democracy, which is a recognition that democracy itself can plant the seeds of its own destruction, and therefore you need to build protections against that destruction. I think self-sabotage is a great way to conceive of that, and I think that's exactly right. My point is that AI-driven media, as distinct from the media of an earlier time, is
self-sabotaging to democracy, and we need to figure out how to respond to that self-sabotage. As my friend Oscar would put it, that means reviving a militant-democracy tradition, even if not one that the courts have to embody themselves, but one that society needs to embody. And so citizen assemblies are, in my view, one point of resistance to this self-sabotaging move. The big challenge is how you establish them as authoritative in any democracy. Because as much as democracy nerds might get really excited by them as they see them function and recognize their potential, to the average citizen they seem kind of crazy. The idea that you take a random group of people, have them sit down and think about some important issue, and then listen to them: why would you ever do that with a random group of people? But the point is, that's just because our conception of a random group of people is the not-so-random group that gets presented to us through social media, which is not a
representative selection of the public; it's the extremists, the crazies, of the public. So we need experience seeing how these assemblies produce sensible answers to problems. And I think the only way to do that is to work your way up: begin by establishing them in local communities, addressing local questions. As you multiply the number of examples where people see them addressing problems in a sensible way, that will build confidence in them as institutions that maybe are more sensible than other, more traditional forms of governmental intervention. And, you know, the problem with that answer is that now I'm talking about a 20-year process. Do we have 20 years? Does democracy actually have 20 years left, given the current reality? I genuinely don't know. If you ask me what the odds are, I'd say the odds are we lose; the odds are democracy doesn't survive. But I don't think you can look at the odds. You have to act regardless of the odds. You have to act in the direction of what you know we must achieve, and whether we're going to succeed or not is not the issue. It's just a question of what is the right thing to do given what we see the threats are. So I know these are the steps we need to take, and we need to take them immediately, even if the skeptics' pessimistic view is the stronger, more rational view. Thank you. Unfortunately, we'll have time for only two more questions, because there is lunch after this and another session at the Supreme Court. The questions by Thiago Gona are in
English, so I'll ask both of them. Professor Lessig, in the context of your book Code, what do you think about turning legal rules into computer code and using data to guide decisions? That's the first one. And: how can we balance freedom of speech on social media with the responsibility of protecting children and adolescents from polarizing and distorted content, to avoid a generation that's more disconnected from reality and marked by extreme views? These are the last two questions. Second question first. I think children, or the protection of children, is the most powerful potential for solving this problem generally. It's powerful as a potential because I think it's relatively uncontroversial that we should take steps to protect our children from the negative effects of these technologies. I've talked about the democratically negative effects, but people like Jonathan Haidt have been really powerful in demonstrating the psychologically harmful effects: the explosion in teen suicides, among girls especially, the huge rise in depression, among girls especially, and the destructive mentalities produced among boys in particular, as a consequence of this constant consumption of engagement-based media by children. And
I think it's the greatest potential for a general solution, if we get the solution right. Often governments will try to regulate the particular content, to say these examples of speech are not allowed, so let's get them off. I think that's a self-defeating strategy, because it will always inspire the other side to say you're against freedom of speech. The better way to address this problem is to target the business model as it applies to children directly. There should be no engagement-based business model for children. There should be no advertising triggered around children. And indeed, again, Max Fisher's book, pointing to the effect in Brazil of engagement-based media targeting children, is quite profound. So I think we could get an agreement to at least turn it off for kids, and then we'd begin to see the positive effects of that regulatory intervention, and it would build support for that regulatory intervention more generally. So I think children are the first obvious place to move. As to the question of turning law into code: you know, the problem with having a book that gets turned into a meme, "code is
law," is that it can mean anything to anybody. There are a whole bunch of crypto bros in jail in the United States right now because they thought "code is law" meant that whatever they could get away with in their code must be legal. So when they used their code to steal money from other people and tried to defend themselves by saying, look, the code is law, courts were not persuaded by that. Neither was I. That's not what I meant by "code is law." But I do think there's an extraordinarily positive opportunity that AI provides to the future of governance, not just administration, but the law. You know, I make lawyers for a living, but I think we could wipe out 70% of lawyers and it would be a wonderful thing, because I think the technology has the capacity to radically lower the cost of law and spread the rule of law in a much wider sphere than it reaches right now. I speak only of American legal reality, but the law in America is terrible, terrible. It is inefficient, expensive, and extremely slow. And the idea that we are
so proud of our tradition, when our tradition so obviously fails to deliver what it promises, is an embarrassment for the profession. We ought to be much more self-critical about how poorly the law does its work. And to the extent we can see and deploy technology that could do our work better, we should embrace it. Now, it's not obvious or easy, and it could produce really terrible results. There's all sorts of bias built into the way these models are being developed, which is why I'm particularly troubled to hear that most of the models you are using here in Brazil were developed based on content from America. You need your own models, developed with your own material here. But even then, it's a complicated question how to tease out the biases that are built into the system and to make sure the system doesn't replicate or amplify biases that we want to eliminate. Those are engineering problems, problems that we have to be constantly attentive to and make sure we are trying to address. They are not conceptual problems. The conceptual objective should be to use the technology to deliver law in a more efficient and effective way, to drive justice. Not in all cases: there will always be hard cases requiring serious judges to think about them with an eye towards justice and equality. But there's a huge number of cases that we could easily begin to make part of the code, in the sense of automating them, simplifying them, and lowering their cost. And we should be racing to do that as effectively as we can, as quickly as we can. One final word, Justice. Gratitude again, gratitude to Aline Osorio, who
was an extraordinary research assistant. I was jealous when I lost her to the judicial branch, but I understand she's been an extraordinary resource to this branch of government as well. And Justice, I can't tell you how grateful I am for the serious and wise way in which you've understood, and pushed back with me on, these issues in our conversations. You are a real treasure for the law and, I imagine, a real treasure for Brazil. Thank you so much. I also would like to thank Aline Osorio, the general secretary of the court, who organized this event and arranged the visit of Professor Lawrence Lessig. We had the opportunity of having private debates with some professors over two days of immense, very fruitful exchange of ideas. And Professor Lessig, it is impossible to exaggerate the intellectual, and even spiritual, pleasure of hearing a thinker as original and creative as you. And we all know... I wrote something down here, and I can't even understand my own handwriting. We all leave here with excellent
material for reflection. A quote I always like to use is that our role as intellectuals, as professors, as judges, as citizens, is to push history in the right direction. The difficulty of the moment we are living in is that we stand at a crossroads, and the difficulty is to choose the right path and direction. I think your contribution helps us greatly to draw the route for citizenship and democracy from here into the future. I would like to thank you all very much for witnessing this moment of great intellectual and spiritual elevation in this plenary of the Supreme Court. Thank you so much, and I close this session.