Why AI is a threat--and how to use it for good | John Tasioulas | TEDxAthens

35.08k views · 2,720 words
TEDx Talks
AI is a radically transformative technology that makes it even more urgent for us to address Socrates' question: how should one live?
Video Transcript:
Well, I'm a philosopher in Athens in 2024, talking to you about artificial intelligence. That is a great thrill and a great honor. But you might ask: what does the ancient discipline of philosophy have to say about the AI revolution? I've got fifteen minutes to make the case. Let's go back to Socrates. In the Republic he says the question we're dealing with is not a trivial question; it is the question of how one should live. Now, this question of how we should live is a central philosophical question, but it's also a question each one of us
has a responsibility to answer for themselves. That's why Socrates conducted his dialogues in the Agora in Athens with ordinary citizens. And AI makes it all the more urgent to ask this question again, because this technology is so revolutionary that it could transform our lives both for the better and for the worse. So we need to ask this question again in light of these new technological developments that Socrates could not have foreseen. But not only does AI make this question of how we should live more urgent; it creates problems for us in addressing it, new problems,
new threats to our ability to answer Socrates's question. So today I'm going to talk about three of these threats. The first threat is that AI threatens to distort our self-understanding, our understanding of what it is to be human. All the big tech corporations say they have the same goal: to create artificial general intelligence. Artificial general intelligence is intelligence that replicates the entire spectrum of human intelligence, so it is able to write a poem, make a cancer diagnosis, make a hiring decision, do everything that a human can do. But if we pursue
AGI as our aim, there will be a temptation, an incentive, to imagine that AGI has been achieved because we've moved the goalposts: we've changed our own understanding of what humans are like, blurring the distinction between humans and machines. But we need to preserve this distinction. So what are the differences? Well, one fundamental difference relates to understanding. An AI system like the GPT-3 large language model operates on the basis of statistical correlations between data. These statistical correlations do not give it a genuine understanding, as a human would have, of what a cat is or what an electron
is. Humans do have that understanding, partly because we're embodied agents engaging with a physical reality. And one way this difference shows up is that there is a quality most humans have, common sense, which has proved very elusive for AI systems. In other words, AI systems make spectacular mistakes that no human would ever make: they confuse a cat with a skateboard, or a human with a gorilla. No human would ever make these mistakes, because humans have a rooted understanding of the world in a way that a machine operating on correlations between data points
does not. So understanding is a big difference, but there's another big difference, and that is that humans have a capacity for rational autonomy. We have the capacity to choose our goals, to decide: do we want to be a doctor or an actress? Do we want to be a lawyer or a musician? And to choose these goals in light of all the reasons pro and con for each option. An AI system cannot do this. It has a goal programmed into it, and it optimizes for the fulfilment of that goal. Now you might say, if we start
to blur the distinction between human and machine, what difference does it make? Why is this important? Why is it a threat? The reason it's a threat, I think, is the reason given by Aristotle, which is that if we want to know what a good life for a human being is, we need to know what the distinctive capabilities of a human are: these capabilities for reasoning, for communicating, for social engagement that manifest themselves in all sorts of valuable pursuits, like scientific research, or sustaining a friendship, or fighting for justice. If we lose our vivid sense of the capabilities
we have as humans, the risk is that we will have an impoverished ethics, an impoverished sense of the values that matter. What will that impoverished ethics look like? Well, one form it will take is a kind of consumerism that says what matters is your gratification as a consumer, which can be just a passive matter of receiving pleasure rather than exercising your capabilities. Or there's another kind of ethics that's become very popular in Silicon Valley, which says the pathway to a better life is transhumanism: to transcend our human nature, to live in
virtual reality, to become cyborgs. Now, on the Aristotelian view, according to which a fulfilling life is exercising your human capacities, transhumanism, which is taken seriously in these powerful circles, is not a path to utopia; it is a path to species suicide. Another threat: astronomical sums of money are being invested by the private sector in the development of artificial intelligence. In 2025 it is projected that $200 billion will be invested in developing AI technologies, and there will be a profound economic incentive for these companies to tell us that these AI systems can solve problems better
than humans can. And that will create another distortion, a distortion whereby increasingly, maybe unconsciously, we start to change the nature of our problems, the way we understand our problems, to make them more suited to being handled by AI systems. Let me give you an example of this: risk-assessment tools based on AI technology that are already used in criminal justice. One question in criminal justice is the question of bail: should someone who has been accused of a crime be allowed to go free awaiting their trial, or should they be kept in
custody? Now, there are a lot of very prominent thinkers who say AI systems are better at this kind of decision than human judges. The argument is in two parts. One part is that the bail decision is fundamentally a prediction: will this person commit an offence if they are released? And the second claim is that AI systems are free from certain biases that humans have, and so they are better at making these predictions. Now, that second step in the argument is very controversial; it's not at all clear, far from it, that AI systems are better at making
the prediction. But I want to challenge the first step of the argument, the way the problem is understood. Is the bail decision exclusively a decision about whether this person will commit an offence? Clearly not. There are other considerations that are relevant, like how serious was the offence that they committed, what will be the impact on their children and family if they're kept in custody, what's the capacity of the prison system to house this person. So when we think about this bail decision, it's not a one-dimensional decision based on a prediction; it requires balancing
a whole series of factors, and there may not be one single correct way to balance all those factors. There's no way you could say, "I'm going to give all these factors a numerical score and then choose the option that gets the highest number"; that's not how the situation works, but that is how an AI system would work. So the worry here is that instead of using AI to address our problems, we change the nature of our problems, that AI starts dictating to us, as it were, what the problems are that we're confronting.
We're getting things back to front. A third threat I wanted to talk about is to our values themselves. People who now push AI systems, especially as replacements for human effort, tend to focus on two things: one, that these AI systems will produce valuable outcomes, a correct cancer diagnosis, a good hiring decision, or good journalistic copy; and second, that they'll produce these good outcomes efficiently, faster and at less cost than a human. But this relentless focus on valuable outcomes produced efficiently ignores a whole range of values that we care about, values that I'm going
to call process values: values about the way in which we produce valuable outcomes. And I think this is basically Cavafy's wisdom in the poem "Ithaka": that what matters to us is not just reaching the final destination by the quickest route; we also care about reaching that destination in the right way, about how we reach it. Arriving in Ithaca in my private helicopter is very different from arriving there by a hero's journey, encountering man-eating giants, the Cyclops, and angry Poseidon. And sometimes the journey is more important than the destination. Remember the line: and if you find
her poor, Ithaka won't have fooled you. Now, how does this work out in real life? How is it that we care about process and not just outcome? Well, think about loving relationships. Yes, we seek loving relationships, but it makes all the difference if we're in the relationship through a free choice that expresses our personality and our tastes, and if we also assume the risk of making a terrible mistake; that's part of the value, as opposed to being assigned to a relationship by an algorithm that has been configured to optimize for human
compatibility. Or think about work. At work we want to produce valuable outcomes, valuable goods and services; we want the outcome of a decent salary. But we also care about how these things are produced. We want to be able to work in a way that involves the exercise of skill, the exercise of judgment, the possibility of real cooperation. These are process values, and they will never be fully compensated for by the idea of a universal basic income to compensate people for jobs lost through automation. Or take a final example: the case
of the judge. We want good legal decisions, decisions that are correct, but we also want decisions that are made for the right reasons. There's an AI system called Lex Machina, not Ex Machina, and Lex Machina can predict the outcome of legal cases in US patent law just as well as a top US patent attorney. Sounds impressive. But then I'll tell you the way it does this: not by looking at the law and making a prediction. It looks at the name of the judge, it looks at the amount of money at stake in the case,
it looks at the law firms involved in the case. These are irrelevant to what the correct outcome should be. So you get the correct decision but the wrong reasoning. It's like the AI system that was able to distinguish pictures of huskies, a certain kind of dog, from wolves. It was very successful, but it was using the presence of snow in the picture to make the distinction. Right outcome, wrong reasoning. We also want a judge who, if they're going to make a decision that affects our life and liberty, can take personal responsibility for that decision.
AI systems, lacking rational autonomy, cannot be held responsible in the same way. And this is very important, because one of the things we need judges for is the dreadful job of taking responsibility for decisions that have a huge impact on human life. But it's also a human tendency to try to avoid responsibility where possible, and AI systems are going to be a new way in which we can shirk our responsibility for the terrible consequences we visit on our fellow citizens. And finally, there's the point about empathy. Even if the AI judge is merciful to me
and gives me a reduced sentence because of the hardship I was facing that led me to commit the crime, that same sentence will not have the same meaning as if it were delivered by a human judge, because the human judge can deliver that sentence expressing a compassionate reaction to the situation I confronted. He could say, there but for the grace of God go I. The AI system cannot say that; it doesn't share a human life with us. I've talked about the risks AI poses, but you might now ask: what are the
solutions? And I think, on any realistic view, there is no single, simple solution. For example, in my institute we work on things like the need for a new right, the right to a human decision, a special new right for the age of AI. But if there is one more fundamental solution, I think the answer lies in democracy, because the historical record shows that technological innovation does not automatically bring benefits for everyone; it only brings prosperity and other benefits if it's subject to robust democratic control. Democracies are better at producing good decisions
than non-democratic systems, and democracies also have the kind of process value I talked about: if we deliberate together as free and equal citizens about the common good and reach a decision together, we respect each other's dignity in a way that other systems don't. OK, but of course we know that we live at a time of democratic crisis, of loss of faith in democracy. We live at a time when there's a rise of technocracy, with more and more decisions being taken away from democratic publics and given to experts, judges, bankers. We live in a time of populism, where
people look for a strongman ruler who can act outside democratic structures. And insofar as AI is involved in all this, a lot of people think AI is toxic for democracy, because it enables the spread of misinformation and disinformation and it foments political polarization. I want to end with a more positive story, and that is that although all of that is true, there's also another vision that's possible for AI, which is to enable a more participatory form of democracy that is also better informed. Now, the obvious objection that people make to participatory democracy is that it
was fine for ancient Athens but not for modern states, given their size and their pluralistic character. But I think the real top-of-the-agenda item for us now should be using AI and digital technology to enable this more radical participatory democracy that we need. So you could imagine AI tools providing citizens with information tailored to their specific learning style. You could imagine them bringing together random samples of affected populations and promoting and moderating deliberation and debate. And they could circulate rival proposals and identify points of consensus, not just a simple majority
but points of consensus that hold across different demographic groups. Now, you might say that sounds like a philosopher's dream, but in fact it already exists. In Taiwan, for example, there was a grassroots movement that became the government, and this grassroots movement used an online platform called Polis, I wonder where they got that name from, and Polis enabled citizens to come online to engage in deliberation and debate about questions like what our policy should be towards Uber, and their deliberations actually fed into final legislation. So I think we're living at a time in
which people feel increasingly disempowered, subject to forces they can't control or even understand. And I think there's a serious threat that AI will exacerbate that condition of disempowerment by creating a dehumanized world in which the exercise of our distinctive human capacities is pushed to the margins, and all of this will be done under the happy banner that it's satisfying our consumer preferences, with powerful economic interests behind it all. It's not going to be easy to fight back, but the fightback has to begin by thinking about what our deepest values are that
are at stake here. The fightback has to begin by going back to Socrates's question. Thank you.