The Future of AI | Peter Graf | TEDxSonomaCounty

TEDx Talks
Video Transcript:
Transcriber: Daniel Marques | Reviewer: Nikolina Krasteva

Hello, everyone. Good afternoon. This is us.
A room full of Homo sapiens, the wise men and women. We are so excited about our brains that we use them to define our entire species, and while I'm speaking and while you're watching, up to 70 billion neurons are firing in your head, trying to make sense of it. What a wonderful meat machine.
And that brain made us the most intelligent thing around, more intelligent than anything we've encountered. And then along comes AI, artificial intelligence, and with it the question: should we be worried?
Are you worried? I am worried. I am worried.
I'm worried not because I'm afraid of robots taking over or a superintelligence ruling the world. I am worried because so many people are so willing to give away that superpower of decision-making that makes us human. So, I started in AI in the early '90s, and I'm dating myself here. It wasn't cool yet back then, and I had to wait 30 years to get on this stage.
Can you imagine? I've used it to prove mathematical theorems. I've used it to help find roofs without solar systems so we could sell them some.
And at the company where I work right now, Genesys, a leader in contact centers and experience orchestration, we're using it to create better customer experiences. Today, AI is everywhere, and you know that you're using it every day: to recognize faces with your phone, to recognize objects, speech, all these kinds of things.
Maybe you're using it to write that poem about the US Constitution in the voice of Dr. Seuss. In business, AI is also everywhere, and it delivers incredible benefits to organizations. They're using it for things like supply chain optimization, financial analysis, planning and forecasting, or to optimize contact centers.
Before we go deeper, there's one thing to remember: AI is just a tool. AI actually functions very, very simply.
So let me give you the 30-second crash course on how AI works. Modern AI uses what is called deep learning, which essentially mimics a brain in software.
It literally creates neurons in software, in code, and it uses many, many of those neurons, more and more of them, and then it trains them. It's like putting training wheels on an AI, because an AI is not programmed in the usual way. There's no one telling the AI, 'If this happens, then that should happen.' The way an AI is trained is purely by pumping data into it, usually historical data, where you know what an intelligent output would look like.
And you're doing this not just with a little bit of data; you're using huge amounts of data. In fact, ChatGPT was just mentioned: there, you're using the entire Internet as input to train the system. So, that's how it works.
And once it's been trained, the training wheels get taken off, and what we do now is expose it to new data and expect an intelligent output. That's how it works.
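To make that loop concrete, here is a minimal sketch of it in Python with NumPy. Everything in it is invented for illustration (the task, the network size, the numbers); a real deep learning system uses vastly more neurons and data, but the shape of the process is the same: train on historical examples, then judge new ones.

```python
# Minimal sketch of the crash course above: software "neurons" trained
# purely on historical data, then exposed to new data. Everything here
# (the task, the sizes, the numbers) is invented for illustration.
import numpy as np

rng = np.random.default_rng(0)

# "Historical" data where we know what an intelligent output looks like:
# the label is 1 exactly when the two inputs sum to more than 1.
X = rng.random((200, 2))
y = (X.sum(axis=1) > 1.0).astype(float).reshape(-1, 1)

# Two layers of neurons with random starting weights.
W1, b1 = rng.normal(0.0, 1.0, (2, 8)), np.zeros(8)
W2, b2 = rng.normal(0.0, 1.0, (8, 1)), np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# Training: no if-this-then-that rules anywhere. We just nudge the
# weights so the output drifts toward the known answers (backpropagation).
lr = 2.0
for _ in range(3000):
    h = sigmoid(X @ W1 + b1)                      # hidden-layer activations
    out = sigmoid(h @ W2 + b2)                    # the network's current guess
    g_out = (out - y) * out * (1 - out) / len(X)  # error signal at the output
    g_h = (g_out @ W2.T) * h * (1 - h)            # error pushed back one layer
    W2 -= lr * (h.T @ g_out); b2 -= lr * g_out.sum(axis=0)
    W1 -= lr * (X.T @ g_h);   b1 -= lr * g_h.sum(axis=0)

# Training wheels off: new data in, intelligent output expected.
new_point = np.array([[0.9, 0.8]])  # sums to 1.7, so the right answer is 1
print(sigmoid(sigmoid(new_point @ W1 + b1) @ W2 + b2))  # should be near 1
```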
So, this machine has no conscience. It has no feelings. It has no agenda, at this time. It's just computation. And that's what you need to understand in order to take away the most important sentence that I'm going to say today: 'AI can be wrong in the most mysterious ways and be completely unaware of it.' And be completely unaware of it. And if we just hand over lots and lots of decisions to AI, we expose ourselves to unintended consequences that we cannot grasp right now, because we can't even imagine how far we as a species are going to take this.
Let me give you a couple of real-world examples of what happened when people put AI to work. One company used AI to sift through many, many applicants and find the ones most suited to a specific job. And what happened was that the AI algorithm preferred men.
Well, I told you that these things are trained on historical data, and that's what used to happen. So, another one: an AI algorithm was trained to steer a car. It was great at evading white people, because that's how it was trained, and it failed at evading people of color.
Another one: an algorithm was trained to recognize specific animals in photos. You would show it a picture of a husky and it would insist it's a wolf, because in most of the training data the wolves had snow in the background, and that picture of a husky also had snow in the background.
Those are the kinds of mistakes that happen. And they happen because AI is ignorant: it will always fall back on the patterns it saw in the training data, be that for good or for bad reasons, because it needs that kind of bias in order to come to a decision at all. But if it's making those decisions based on the wrong aspect of the data, you'll get a bad decision.
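Here is a hedged sketch of how that husky-and-wolf failure arises. The 'snow' and 'snout' features and every number are invented, but the mechanism is the one just described: a spurious correlation in the historical data becomes the pattern the model falls back on.

```python
# Sketch of the husky/wolf mistake: a classifier trained on photos where
# "snow in the background" almost always co-occurs with the wolf label.
import numpy as np

rng = np.random.default_rng(1)
n = 1000

# Ground truth for the training photos: wolf (1) or husky/dog (0).
is_wolf = rng.integers(0, 2, n)

# In the historical photos, wolves were almost always shot against snow.
snow = np.where(is_wolf == 1, rng.random(n) < 0.95, rng.random(n) < 0.05)

# A genuine animal feature (say, snout shape), but noisy and only weakly
# informative in this invented dataset.
snout = 0.3 * is_wolf + rng.normal(0.0, 1.0, n)

X = np.column_stack([snow.astype(float), snout])
w, b = np.zeros(2), 0.0
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# Plain logistic regression trained by gradient descent on that data.
for _ in range(3000):
    p = sigmoid(X @ w + b)
    w -= 0.5 * (X.T @ (p - is_wolf)) / n
    b -= 0.5 * (p - is_wolf).mean()

print("learned weights [snow, snout]:", w)  # the snow weight dominates

# Now a husky photographed in the snow: the model insists "wolf".
husky_in_snow = np.array([1.0, 0.0])        # snow present, dog-like snout
print("P(wolf):", sigmoid(husky_in_snow @ w + b))  # high, for the wrong reason
```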
The other aspect is that, in addition to being ignorant, AI is a black box. Even the people who train the AI do not know how it actually makes its decisions. It's like me trying to figure out what you're thinking by looking at your brain. It doesn't work. Somewhere in there, there's a node that must be firing and making this decision. If only I knew which one.
So, AI is a black box. And I've played around with this for a long time; most recently, I went to ChatGPT. If you haven't tried it, you've got to try it.
It's amazing. And I asked it to count down from 5 to 10. That was a trick question, right?
And ChatGPT said that it can't be done, because five is smaller than ten: 'I can't count down from 5 to 10.' And then I became sneaky. I said, count down from 40 to 60, and you can try that tonight. And ChatGPT goes 40, 39, 38, 37, and so on. I stopped it at -200. And then I asked it why it didn't reach 60, and it couldn't really answer. So: AI algorithms are trained on data to produce a specific output.
And that's one other challenge. So, it's a black box. The third issue is: who is accountable for the decisions that an AI makes? For example, imagine a self-driving Uber has an accident. Who is accountable?
The manufacturer of the car? Uber, as the owner of the car? Or the passenger in the car?
Those are severe legal issues, and they are completely, completely unresolved at this time. So, what I'm talking about, if you haven't noticed, is what's called ethical AI. How do we use AI in a way that works for us, and ethically? Ethical AI is being grappled with; people are trying to define what it means.
The European Union has definitions out. The United Nations has definitions out. There's the Partnership on AI, in which many companies are participating, including my company, Genesys.
But here’s the thing. Who of you was aware that this stuff existed? Who of you cares about this stuff?
Let me tell you a story. This was probably 15 years ago, and I knew about climate change. You know, it's bad. You know, I shouldn't drive around that much. And then I watched 'An Inconvenient Truth' by Al Gore. And that was my holy-moly moment: I need to take this seriously. I wish I could contribute a little bit to your individual 'An Inconvenient Truth' moment around AI, because we need to insist on three things when it comes to AI. Let me break them down for you.
Number one: because AI is ignorant and will always perpetuate the patterns it has learned, we must insist that the training data is without bias, because otherwise we're just going to automate the mistakes of the past.
And there are ways to do that: put a diverse team on the training, test the systems for bias, and so on. Number two: because AI is a black box, we need to insist that AI explains its decisions, because that's the only way we can catch it when it's wrong. 'This is a wolf because there is snow in the background' sounds like a strange justification for identifying a picture of a husky.
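One simple way to force an explanation out of a black box is to probe it: scramble one input at a time and watch which one actually moves the answer. Below is a self-contained sketch of that idea; the stand-in classifier and the feature names are hypothetical, not any product's real interface.

```python
# Probing a black box for an explanation: shuffle one feature at a time
# and count how often its answer flips. Whatever it leans on is exposed.
import numpy as np

rng = np.random.default_rng(2)

# A stand-in "black box": secretly, it answers wolf (1) whenever column 0,
# snow_in_background, is present. We pretend we cannot read its insides.
def black_box_predict(X):
    return (X[:, 0] > 0.5).astype(int)

X = rng.random((500, 2))          # columns: [snow_in_background, snout_shape]
baseline = black_box_predict(X)

for i, name in enumerate(["snow_in_background", "snout_shape"]):
    Xp = X.copy()
    rng.shuffle(Xp[:, i])         # destroy just this one feature
    flipped = (black_box_predict(Xp) != baseline).mean()
    print(f"{name}: {flipped:.0%} of answers flip when shuffled")

# Only the snow column moves the answer. The box has, in effect, confessed:
# "it is a wolf because there is snow in the background."
```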
And last, and probably most importantly: because AI is a tool, we need to use that tool to help people make better decisions, but not replace the decision-making altogether, specifically when we're talking about really important decisions. Right now, AI is being used to find the right agent for that customer so we maximize the profit in the contact center. Great!
Or who gets a tax audit? Fine! Who gets a discount online? Okay. But as we move on into the future, we will be tempted to hand it more and more decision power. Who gets this lifesaving organ as a transplant?
What would be an appropriate military response? And I really want people to make these decisions: people with a conscience, people with critical thinking, people with awareness, people with empathy, and people whom you can hold accountable for the decision.
AI is just a tool; you can't hold the hammer accountable for hitting your thumb. There's a person who did that. So, in conclusion:
The future of AI is in our hands. We need to wake up to what's happening and not let it just happen to us, but demand, as a society, a seat at the table. For data that's not biased to train these systems. For explanations of the decisions that these systems make. And most importantly, we need to have a conversation about which decisions we want to give into the hands of AI, and which decisions are so near and dear to our hearts that we must insist humans make them: humans who can be held accountable, and not some machine that doesn't have a conscience.
Thank you.