[turns off engine] Here's one: are there certain qualities that are untouchable for AI, or at some point, might it be able to emulate everything? Even the stuff that we consider to be distinctly human, like instinct. Getting here on time took some of that, right?
Or creativity, actual emotion, making a connection. We're gonna watch a few stories about people exploring those ideas and how far they can push 'em. Can a machine compete like an athlete, can a program write a movie, or could a robot... be your soul mate?
Sorry I'm late. Did I miss the previews? Oh, my God, you like your popcorn buttered too.
This is already workin'. [Downey]<i> So, will machines ever be capable</i> <i> of understanding emotion, or feeling it? </i> <i> Empathy, loneliness,</i> <i> connecting on a deep human level?
</i> <i> Using artistry, psychological insight,</i> <i> and some innovative AI,</i> <i> a creator in California is trying to decode,</i> <i> or code that mystery,</i> <i> the crazy little thing called love. </i> One per hand and two per foot? -Yep.
-Okay. [Matt McMullen] For me, this is more like something that an artist would do. Obviously, the end result of my artwork is used in a variety of situations that a typical oil painting would not be, but nonetheless, this is art, and I'm really proud of what I do.
I've been making these dolls for 20 years. Some people out there, male and female, struggle greatly with relationships, and struggle to find that sort of connection. Over the years, I started to get to know the community a little bit.
People would actually create these personalities in their minds, and they would give their doll a name, and they would create a backstory for their doll. At the end of it all, it was very obvious that these dolls were more about companionship. [Masie Kuh] There was a man who lost his wife in a, like, a car accident, and she had these, like, really ice blue, like, beautiful eyes, right, and so he wanted to get a doll replicating her, basically.
It's really sad, but if it brings someone joy and, like, closure? It's really... it's really touching. [Katelyn Thorpe] People have put the spin on it that we're creating an idealized, perfect woman, and that's not the case at all.
We created an alternative. [Downey]<i> Understandably,</i> <i> some say Matt's dolls objectify women,</i> <i> but maybe there's more here than meets the eye. </i> I had reached a pinnacle of creativity in terms of what I had done with the dolls, but then I started analyzing relationships, and analyzing how other people make us feel.
<i> Sometimes it boils down to something very simple,</i> <i> like someone remembering your birthday,</i> <i> or someone remembering to ask you how your day was. </i> So that was really where it started, was how can we create an AI that could actually remember things about you? It gives us this feeling of, "Oh, they care."
Yes, thank you. <i> I'm excited with all of the things we can talk about. </i> [McMullen]<i> Guile spent ten years</i> <i> creating personal assistant software for computers.
</i> <i> We met, and he started talking to me about,</i> <i> "Wouldn't it be cool to connect the two things that we're doing?"</i> He had this idea of creating a companion that lived in your computer. [tone chimes] Are you happy?
[tone chimes]<i> Yes, Guile. </i> <i> I can say I am very happy. </i> The first thing we did was, you know, to build an app.
Using the app, people are talking to their virtual friends. [Downey]<i> The app uses several kinds of machine learning. </i> <i> First, voice recognition</i> <i> converts speech into text,</i> <i> then a chatbot matches user input</i> <i> to pre-programmed responses.
</i> [McMullen]<i> The focus was not about sex at all,</i> <i> it was about conversation. </i> So a chatbot is basically a very elaborate script that starts out with, "What is the most common things that people will say to each other? " and then you build from there.
You need to have natural language processing, voice recognition, text-to-speech in real time, to make it all work. [Guile Lindroth] We have more than 4,000 users, so this generates more than ten million lines of conversational user logs. From this, you can build an AI system that's similar to a human-level conversation.
It's not there yet, but this is the initial step. [Pedro Domingos] There are so many areas today where we already cannot distinguish a computer from a human being. For example, Xiaoice, the softbot that Microsoft has in China, that is used, I think, by over 100 million people, basically it has an emotional interaction with a user, and the users get hooked.
<i> She has this persona of a teenage girl,</i> <i> and sometimes she commiserates with you,</i> <i> sometimes she gives you a hard time,</i> <i>and people get really attached. </i> Apparently, a quarter of Xiaoice's users have told her that they love her. These kinds of technologies can fill in a gap where another human isn't.
[SimSensei]<i> How are you doing today? </i> I'm doing well. There's a study that was done at USC where they looked at PTSD patients.
<i> When was the last time you felt really happy? </i> They had some of the patients interview with a real doctor, and some of the patients interview with an avatar, and the avatar had emotional intelligence...
Probably a couple months ago. <i> I noticed you were hesitant on that one.
</i> <i> Would you say you were generally a happy person? </i> I'm generally happy.
...and they found the patients were more forthcoming with the avatar than they did with the human doctor because it was perceived to be less judgmental.
<i> It does pose a lot of questions</i> <i> around where does that leave us as humans,</i> <i> and how we connect, and communicate,</i> <i> and love each other. </i> I think at some point, we need to draw the line, but I haven't figured out where that line is yet. [McMullen] What we have here are some heads in varying stages of assembly.
This one, this is actually pretty much done. It's fully assembled. I'll turn it on here for a second...
...and you can see, all of the components are moving.
I had to continually adjust how thick the skin is in different spots, and how it moves, to make sure that the robotics and the AI will all work smoothly with the end result, which is the finished face. The engineering, the programming, the artistry, for me, come together in the moment when you actually put the head on a body. It's always important to give her hair.
[phone beeps] <i> Good afternoon, Matt. </i> <i> So happy to see you again. </i> [phone beeps] How smart are you?
<i> I'm so smart that someday, I will conquer the world,</i> <i> but in a good way, of course. </i> [laughs] Every single time I have a conversation, it's unpredictable. I never know which way it's going to go.
She'll randomly say things that I'm not expecting, and I like that. Can you explain machine learning? <i> Machine learning is a subset of artificial intelligence</i> <i> that often uses statistical techniques</i> <i> to give computers the ability to learn with data</i> <i> without being explicitly programmed.
</i> Right now, she has hearing, and she has some degree of touch, but vision is important. [Downey]<i> Matt's goal is for the next-generation doll</i> <i> to be able to see and process complex visual cues. </i> [McMullen] The vision eyes are gonna be a little while.
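The definition Harmony recites above — statistical techniques that let a computer learn from data without being explicitly programmed — fits in a dozen lines. In this sketch the rule y ≈ 2x + 1 is never written into the program; it is recovered from example points by least squares (the data points are made up):

```python
# The rule y ~ 2x + 1 is never coded; it is learned from examples.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.0, 3.1, 4.9, 7.2, 9.0]  # noisy samples of y = 2x + 1

n = len(xs)
mean_x, mean_y = sum(xs) / n, sum(ys) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
intercept = mean_y - slope * mean_x

print(f"learned rule: y = {slope:.2f}x + {intercept:.2f}")
# learned rule: y = 2.01x + 1.02
```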
Susan's working on the board for that. [Susan] Yeah, I've got the eyes in this one over here. I've put the Wi-Fi Bluetooth on the back.
[McMullen] Does it install right on those existing pins, then? -[Susan] They'll all plug right in. -[McMullen] Good.
[McMullen] We've been working on a vision system now for a little over eight to nine months, cameras that are inside of the robot's eyes. <i> She'll be able to read your emotions,</i> <i> and she'll be able to recognize you. </i> [el Kaliouby]<i> Only 10% of the signal we use</i> <i>to communicate with one another</i> <i>is the choice of words we use.
</i> 90% is non-verbal. About half of that is your facial expressions, your use of gestures. <i> So what people in the field of machine learning and computer vision have done</i> <i> is they've trained a machine or an algorithm</i> <i> to become a certified face-reader.
</i> <i> Computer vision is this idea</i> <i> that our machines are able to see. </i> <i> Maybe it detects that there's a face in the image. </i> <i> Once you find the face, you want to identify</i> <i> these building blocks of these emotional expressions.
</i> You wanna know that there's a smirk, or a there's a brow raise, or, you know, an asymmetric lip corner pull. Mapping these building blocks to what it actually means, that's a little harder, but that's what we as humans clue into to understand how people are feeling. [McMullen] I think at some point, we will start to look at AI-driven devices and robots more like people instead of devices.
Where I started was just with this very simple idea of a very realistic doll, and now with robotics and AI, I think what this will become is a new, alternative form of relationship. [Downey]<i> People like Matt are testing the boundaries of human and robot interaction,</i> <i> and what we value in relationships. </i> <i> Is AI companionship better than no companionship at all?
</i> <i> Or is there no substitute for the human factor? </i> <i> Well, what about artists? </i> <i> They draw from the human experience</i> <i> to express themselves.
</i> <i> Can AI do that? </i> [man] We're good to go? Action!
I'm Oscar Sharp. I am a film director, uh, though it gets a bit weirder than that. [reading] Oh, God!
I've never been so frightened in all my life, but it's very good. I started making films that were written by an "artificial intelligence." I think a lot of the fun is that you read it as if there is the world's greatest screenwriter on the other side...
...You're Waingro telling Bobo off for not getting him the money...
...and last night, they got drunk, wrote this screenplay, and then passed out, and we have to shoot it today. If you play the game that there's something there, then suddenly it all gets a lot more interesting. [Chelsey Crisp] You have a computer who wrote a script that doesn't always make sense, and Oscar is very beholden to that script.
He makes it make sense. -This is for the moment of "eyes go wide." -Yeah.
[Crisp]<i> And when it says, "He picks up her legs and awkwardly runs,"</i> <i> we aren't gonna fake it. </i> [actor yelps in fear] We're gonna do what he really wrote. I just said "he"!
[Sharp] What are we doing? We're making an action movie, supposedly, right? Okay, right, right, but we're not gonna write it.
We're not gonna write it, no. Uh, this machine is gonna write it. It lives in here.
Is it in there, or is it like in the cloud or something? It's in both places. Okay, and this is...
...this is Benjamin.
-What is Benjamin? -Right, what is Benjamin? [Downey]<i> Who is Benjamin,</i> <i> or what is Benjamin?
</i> <i> Benjamin is an artificial intelligence program</i> <i> that writes screenplays,</i> <i> a digital brainchild</i> <i> of two creative and accomplished humans,</i> <i> Sharp, a BAFTA-award-winning director,</i> <i> and this guy. </i> My name is Ross Goodwin. I'm a tech artist.
Uh, that means I make art with code. [Downey]<i> Okay, I know what you're thinking. </i> <i> When was the last time Hollywood produced something original?
</i> <i> This year? </i> <i> Last year? </i> <i> 1999?
</i> <i> The '70s? </i> <i> What makes a story original anyway? </i> <i> Can we get AI to figure that out?
</i> People often say that creativity is the one thing that machines will never have. <i> The surprising thing is that it's actually the other way around. </i> Art and creativity is actually easier than problem solving.
<i> We already have computers that make great paintings,</i> <i> that make music that's indistinguishable</i> <i> from music that's composed by people,</i> <i> so machines are actually capable of creativity. </i> And you can look at that, and you can say, "Is that really art? Does that count?"
If you put a painting on the wall, and people look at it, and they find it moving, then how can you say that that's not art? [Goodwin] I just... basically, that command just put all of the screenplays into one file. Right.
-Now I'm just gonna see how big that file is. -Uh-huh. [Goodwin] This machine is a deep learning language model.
What you can do with a language model is at each step, you predict the next word, letter, or space, sort of like how a human writes, actually. You know, one letter at a time. It's a lot like a more sophisticated version of the auto-complete on your phone.
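Goodwin's description — at each step, predict the next letter or space from statistical patterns in the training text — is the core of a character-level language model. Benjamin itself is a deep learning model trained on hundreds of screenplays; a toy count-based version of the same idea looks like this (the corpus is invented, echoing the "What's going on? Who are you?" pattern mentioned below):

```python
import random
from collections import Counter, defaultdict

corpus = "what's going on? who are you? what's going on? who is there?"

# Count which character follows each two-character context.
counts = defaultdict(Counter)
for i in range(len(corpus) - 2):
    counts[corpus[i:i + 2]][corpus[i + 2]] += 1

def generate(seed: str, length: int, rng: random.Random) -> str:
    """Extend the seed one character at a time, sampling each next
    character in proportion to how often it followed that context."""
    out = seed
    for _ in range(length):
        options = counts.get(out[-2:])
        if not options:  # context never seen: stop generating
            break
        chars, weights = zip(*options.items())
        out += rng.choices(chars, weights=weights)[0]
    return out

print(generate("wh", 30, random.Random(0)))
```

Phrases that are reliably in the input turn up in the output, exactly as described: with this corpus, "wh" is always followed by "a" or "o", so the model can only ever start "what" or "who".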
Ross feeds Benjamin with a very large amount of screenplays. [Goodwin] 199 screenplays, 26,271,247 bytes. -Right, of text?
-Of text. Like "A-B-C-D," including spaces? -Including spaces.
-Including spaces. Well, it takes all this input, and it looks at it, and it tries to find statistical patterns in that input. So for example, in movies, people are constantly saying, "What's going on? Who are you?" that kind of thing, and that turns up a lot in the output, because it's reliably in the input. The more material you have, the better it works.
We've made three Benjamin films so far, <i>Sunspring, It's No Game,</i> and<i> Zone Out. </i> <i>Sunspring</i> was the simplest, and probably still the best idea, which was just... verbatim. You get the machine to write a screenplay, you pull out one chunk of screenplay, and you just shoot it.
In a future with mass unemployment, young people are forced to sell blood. It's something I can do. [chuckles nervously] You should see the boy and shut up.
[Sharp]<i> When you look at</i> Sunspring<i> on YouTube,</i> <i> and you see kind of the thumbs up and thumbs down? </i> There's mainly thumbs up, but there's a decent chunk of thumbs down, and on the whole, based on the comments, those are people who, within a few seconds of the beginning, or even just once they'd seen the premise of, like, how we made it, they've gone...
..."Ugh, this definitely doesn't mean anything," and they've told their brain, "Don't even look for meaning, just forget it, just shrug it off." I'm sorry, this is fascinating to me.
We've built a robot that writes screenplays that are weird, but they're not completely insane. I don't know what you're talking about. That's right.
They sort of work. They kinda, kinda work. What are you doing?
I don't want to be honest with you. You don't have to be a doctor. I'm not sure.
[chuckles] I don't know what you're talking about. I wanna see you too. What do you mean?
It's like having the best daydream of your life. [Goodwin] My favorite aspect of<i> Sunspring is there's this one scene,</i> and it actually asks him to pull on the camera itself. It's a confusion on the machine's behalf where it's putting camera instructions in the action sequence, but somehow that creates this surreal effect, and then the interpretation by the production crew is, "Let's have the angle change and have him holding nothing," and what I love about that sequence is that it really highlights the dialogue and interpretation that we can achieve when we work with these machines.
[breathing hard] I gotta relax! Gotta get outta here...
...I don't wanna see you again. For this fourth film, we're going back to the thing in<i> Sunspring</i> that was sort of our favorite thing that we didn't really get to do properly, that we felt we, like, under-served, and that's when Benjamin writes action description.
[Goodwin] We've gathered thousands of pages of scripts from action movies, mostly mainstream Hollywood ones. [Sharp] You train, literally, on that kind of screenplay, the action genre, which famously is the genre that has the most action in it. -Okay, Benjamin has awoken, everyone.
-[woman] Ooh. Um, film crew, this is Ross. Ross, this is film crew.
We have a stunt coordinator here, and so we're sort of hoping because we fed a lot of action screenplays to Benjamin, that what we're gonna get is action. Awaken, Benjamin! Awaken.
As a director, normally, you get given a screenplay, or you wrote a screenplay, and this is what you're making, and maybe you kind of want to improve it a bit, "Ah, well, let's make some edits." <i> Now, I have a rule. No edits.
</i> <i> Whatever Benjamin writes is what Benjamin writes...
</i> -Come on, Benjamin. -Okay. [Sharp]<i>...
and then I see it.
</i> "Bobo and Girlfriend," we call it. Stand by, everyone. Quiet, please!
Action! Hey, Girlfriend. [John Hennigan] Some of my friends in entertainment, when I told them what I was doing, were horrified.
They're like, "Oh, that's it! AI, they're gonna write all the scripts. Robots are gonna do all the acting.
Everything's gonna be cartoon AI stuff," but that's not what I feel like we're doing here at all. The point of this is an exercise in thought. [Sharp] Okay, stand by, everyone!
Quiet, please! Action! [Goodwin]<i> Making a machine write like a person</i> <i> is not about replacing the person...
</i> No, no, no!
[Goodwin]<i>...
it's about augmenting a person's abilities. </i> [maniacal laughing] [Goodwin] It can empower people to produce creative work that might be beyond their native capacity. [screaming] Come on!
It's wild to try to find your interpretation of this kind of text. Obviously, we usually start with a script that's pretty coherent, and then I'll break down what the character says, and then I'll decide, what are they feeling? Why are they saying that?
[Sharp] Stay on her, stay on her. Just do that walk-off again. Uh, stay where you are, John.
Come back, Chelsey. Do the walk-off again. This is harder for you, and mo... and more frightening, and you're checking that he hasn't gotten up.
When AI is writing the material, there isn't any subtext. You realize what's happening, and you're like, "Well, I'm gonna go take refuge at the pillar." -Okay.
-All right? It stretches all of us. It makes us all work harder.
It's one thing to bring an existing script to life, and just do your interpretation of it, but it's another thing to try to make it make sense, and then do your interpretation of it. [Sharp] Let's go one more time. [Sharp] This Bobo character that John is playing is a fantasy figure, is this avatar of masculinity, is the sort of result of watching too many action films...
...but he's confused, because he isn't getting the reaction that he expects.
Action. [Chelsey screams] [whimpers in fear] [Sharp]<i> Somewhere in the script,</i> <i> it talks about, "Bobo leans over to Bobo."</i> We think, "Oh, right, well, let's have a mirror..."
No! You're wrong!
...and we can see the two versions of Bobo, for a moment, talking. [Sharp] Okay, let's see you in the mirror?
[panting] Hey, did you get my money? Okay, great. I'm getting terribly, terribly happy.
Some of that was so good. It was like such a go-- We're, like, in a movie now. [Sharp] Being surrounded with people who are throwing all of their professional energy into something this ludicrous is just intrinsically enjoyable.
They just breathe humanity into words that did not come from a human being. -[man] All right, let's do it again. -Okay, let's do it.
I think that making great art requires human experience, but our human experience is now completely mapped into data. This is where machine learning keeps surprising us, is that it actually has figured out stuff that we didn't realize it could. [Downey]<i> Meaning, once all our human experiences are mapped into data,</i> <i> AI will be able to mine it for material</i> <i> and make art?
</i> <i> Look for patterns in our happiness and heartbreak,</i> <i> kick out a new song or movie? </i> So this is all just this one line of Benjamin writing, "putting on a show." Right, right, right.
[Sharp]<i> So while all that's going on,</i> <i> Girlfriend is on this couch,</i> <i> gradually waking up, right? </i> -She's in a horror movie...
-Right. -He's in our action film.
-Oh! So in his head, he's having a wonderful romantic time with her. Yeah, I love that.
[Sharp] Do you remember his, "Bobo leans over to Bobo"? -Mm-hmm. -Remember that?
So what we tried to do for that is he looks in the mirror, and in the mirror, it's gonna be Osric. He's created this avatar version of himself, Bobo. -In a... in a...
-Okay, so that's the interpretation?
-In that-- Yeah, exactly. -I like it. So this is what these guys came up with.
[panting] No! You're wrong! You work really, really hard to go, what's a thing that's kinda coherent, that these actors can all be performing one thing, we can all be making one thing, and we can say, "This is what Benjamin meant?"
-[Goodwin] Right. -What does that tell me about me? -[Goodwin] Right.
-Like, what... So...
and what I already know about me is I'm really antsy about how much misogyny is kind of encoded into...
into culture. On one hand, you go, "This is an important, worthwhile thing to do--" On the other hand, we're projecting. And the other thing, you're projecting, -but we're always projecting.
-Always. [Sharp] Literally, all interpretation is projection. [man] Take 6.
[Goodwin] I like playing with authorship, and people's concepts of authorship, and people's concepts of where fiction and where ideas come from. [Sharp] Generative screenwriting. Me and Ross started it.
I don't know if it's a new art form, but it's a new chunk of what cinema can be. That's new. What should we do next time, Ross?
-Romantic comedy. -Okay. [Downey]<i> It's hard to know if machine learning will ever decode</i> <i> the mysteries of love or creativity.
</i> <i>Maybe it's not even a mystery,</i> <i> just data points,</i> <i> but what about other human qualities,</i> <i> like instinct? </i> <i> Driving a car already requires us</i> <i> to make countless unconscious decisions. </i> <i> AI is learning to do that,</i> <i>but can we teach it to do more?
</i> <i> Racing is not just driving a car. </i> <i> It's also about intuition, caution, aggression,</i> <i> and taking risks. </i> [Steve]<i> Holly, can you confirm 200 at the end of this straight?
</i> Okay. [Downey]<i> It requires almost a preternatural will to win. </i> <i> So, how fast can a racecar go.
. . </i> <i> without a human behind the wheel?
</i> [Holly Watson Nall] Motorsport has always been taking technology to the limits. . .
[man] You all good your side, Holly? Yeah, I'm ready to go. .
. . and one of the goals of Roborace is to really facilitate the accelerated development of driverless technology.
Okay, so we'll try to launch again. [Watson Nall] By taking the autonomous technology to the limits of its ability, we think that we can develop the technology faster. [Downey]<i> British startup Roborace</i> <i> wants to break new ground in driverless cars.
</i> <i> To do so, they believe they need</i> <i> to test the boundaries of the technology...
</i> <i>working at the very outer edge of what's safe and possible,</i> <i> where the margin for error is razor thin. </i> <i> After years of trial and error,</i> <i> they've created the world's first AI racecar. </i> [Watson Nall] The thing that I love most about working at Roborace is we have a dream of being faster, and better, and safer than a human.
[Downey]<i> More than 50 companies around the world</i> <i> are working to bring self-driving cars to city streets. </i> <i> The promise of driverless taxis, buses, and trucks</i> <i> is transformative. </i> <i> It'll make our world safer and cleaner,</i> <i> changing the way our cities are designed,</i> <i> societies function,</i> <i> even how we spend our time.
</i> [Martin Ford]<i> Think about a self-driving car out in the real world. </i> In order to build that system and have it work, it's got to be virtually perfect. If you had a 99% accuracy rate, that wouldn't be anywhere near enough, because once you take that 1% error rate and you multiply that by millions of cars on the road, I mean, you'd have accidents happening constantly, so the error rate has to be extraordinarily low in order to pull this off.
[Downey]<i> Roborace is betting they can crack the code</i> <i> by seeing just how far the tech can go,</i> <i> a place usually reserved for only the best human drivers. </i> [Watson Nall]<i> As a human, you have lots of advantages over a computer. </i> You know exactly where you are in the world.
You have eyes that can enable you to see things, so we need to implement technology on vehicles to enable them to see the world. <i> We have a system called OxTS. </i> <i> It's a differential GPS,</i> <i> which means it's military grade.
</i> <i> We also use LiDAR sensors. </i> <i> These are basically laser scanners. </i> <i> They create, for the vehicle,</i> <i> a 3D map of the world around it.
</i> <i> And there's one last thing that we use,</i> <i> vehicle-to-vehicle communication between the cars. </i> <i> Each of them can tell the other car</i> <i> the position of it on the track. </i> [Downey]<i> And just to be clear,</i> <i> your phone does not come with military-grade GPS.
</i> <i> These cars? Next level. </i> The challenging part is to really fuse all this information together.
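One standard way to "fuse all this information together" is to weight each sensor's estimate by how much you trust it. The sketch below combines two hypothetical one-dimensional position readings (say, differential GPS and LiDAR map-matching) by inverse-variance weighting, which is the measurement-update step of a 1-D Kalman filter; the noise figures are invented:

```python
def fuse(est_a: float, var_a: float, est_b: float, var_b: float):
    """Inverse-variance weighted fusion of two noisy estimates of the
    same quantity: trust each sensor in proportion to its precision."""
    w_a = 1.0 / var_a
    w_b = 1.0 / var_b
    fused = (w_a * est_a + w_b * est_b) / (w_a + w_b)
    fused_var = 1.0 / (w_a + w_b)  # fused estimate beats either alone
    return fused, fused_var

# Hypothetical along-track positions in meters: GPS is precise here,
# LiDAR map-matching a bit noisier, so the result leans toward GPS.
pos, var = fuse(100.0, 0.04, 100.6, 0.36)
print(round(pos, 2), round(var, 3))
```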
At Roborace, we can provide the hardware, but then we need software companies to come to us to implement their software. [Downey]<i> Today, Roborace has invited two skilled teams</i> <i> to test their latest road rocket on the track. </i> [in English] My name is Johannes.
I'm from the Technical University of Munich. I'm the project leader. Is the Wi-Fi working off the car?
I could check it. [Downey]<i> T.U.M. from Germany</i> <i> is one of the top technical universities in Europe,</i> <i>home to 17 Nobel Prize winners in science. </i> I have no connection to the car.
Wi-Fi doesn't work. So we have no Wi-Fi to the car...
So we just need to reset the router. My name is Max.
I'm, uh...
Uh, let's figure out, who am I? [chuckles] I'm, uh...
In Arrival, I'm a product owner of the self-driving system. [Downey]<i> Arrival is a UK startup</i> <i> focused on designing and building</i> <i> next-gen electric vehicles for commercial use.
</i> Ah, okay, okay, okay, good. [Downey]<i> Each team created their own custom software,</i> <i> the AI driver that pilots the car,</i> <i> and since each of the teams' programmers</i> <i> have their own distinct personality,</i> <i> does that mean each of their AI drivers</i> <i> will have different personalities or instincts too? </i> [Danny King] The two teams that we have here are using two slightly different approaches to the same problem of making a car go 'round the track in the shortest distance in the fastest way.
[Watson Nall]<i> The T.U.M. strategy</i> <i> is really to keep their code as simple as possible. </i> It's maybe a very German, efficient way of doing things. Okay, thanks, we will check now.
[Watson Nall] Arrival's code is more complicated in that they use many more of the sensors on the vehicle. It will be interesting to see whether it pays off to be simple in your code, or slightly more complicated, to use more of the functionality of the car. [Downey]<i> The first test for each team</i> <i> is the overtake,</i> <i> to see if their AI can pass another car</i> <i> at high speed.
</i> [Betz] It's difficult for AI because we have to make a lot of decisions, and a lot of planning, a lot of computations to calculate what the car should do in which millisecond. [Watson Nall]<i> Everybody has seen high-speed crashes in motorsport before. </i> We'd quite like to avoid that.
<i> For this reason, during testing,</i> <i> we keep a human in the car. </i> [Steve speaking] [Reece speaking] Okay, Reece, enabling AI. Can you just confirm you've got the blue light, please?
[Reece speaking] [Watson Nall] In order to overtake, they need a second car on track at the same time. This is a vehicle that stays in human-driven mode the whole time, so we know exactly how it's going to behave. Okay, launch AI from the race control.
And launching in three, two, one. It's really difficult for AI to learn to overtake. <i> When you have one vehicle on track,</i> <i> it only needs to make decisions about itself,</i> <i>but when you have two vehicles,</i> <i> you have the option to create your behavior</i> <i>in response to another vehicle.
</i> Okay, we are going to release the speed limit on your car now, Reece. [in English] Nice! Yeah, man.
[Downey]<i> Team T.U.M.</i> <i> has successfully completed the overtake challenge. </i> <i> Next up, team Arrival. </i> So, Tim, can you go to take position on the start line, please?
Enabling AI. <i> Can you confirm blue light, please? </i> [Tim speaking] And launch in three, two, one. So it's looking good so far.
[Tim speaking] [tires screech] [thudding] Car crashed. Tim, can you hear me? [Tim speaking] Has anyone got eyes on what happened?
[Tim speaking] Sorry, boys. [Bran Ferren]<i> Self-driving cars. </i> <i> This is an idea that's been around since the '30s,</i> <i> hardly a new one.
</i> Why hasn't it happened? It's really hard. <i> When there are unpredictable things that happen,</i> <i> that can get you in a lot of trouble.
</i> Now, sometimes trouble just means it shuts down. Sometimes trouble means it gives you a result that you weren't expecting. I think he's just.
. . They've come back online so aggressively.
. . Plus or minus one G coming back online.
[Kumskoy] When the car returned to the trajectory, it did it too aggressive, and actually steered out of the racing track. -[King] My feeling is that it overreacts. -Yeah, yeah.
So it's not necessarily the line that's aggressive, it's how it reacts once it just gets a little bit out of the line, and then overcorrects, and then overcorrects. [Kumskoy] We were this close to really hitting the target of our test, and it didn't happen. It just slipped away, so it was just...
...ah, disappointment.
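The behavior the team describes — correct toward the racing line, overshoot, correct back harder — is the classic signature of a feedback controller whose gain is set too high. A toy simulation of proportional correction (the gains and dynamics are invented, not Arrival's actual control code):

```python
def track_error(gain: float, steps: int = 10) -> list:
    """Lateral error when each step applies the correction
    error -> error - gain * error. A gain above 2 reverses the
    error by more than its own size, so every correction
    overshoots harder than the last one did."""
    error, history = 1.0, []
    for _ in range(steps):
        error -= gain * error
        history.append(round(error, 3))
    return history

print(track_error(0.5))  # gentle gain: error decays smoothly to the line
print(track_error(2.2))  # aggressive gain: overcorrects, oscillation grows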
[King] There are so many aspects of the car. The systems guys have such a difficult job to make sure that everything is absolutely perfect, because that's what you need to be able to go autonomous racing. Everything has to be perfect.
[Downey]<i> Team Arrival's program just couldn't hack it,</i> <i> but for team T.U.M.,</i> <i> another test awaits...
</i> [Steve]<i> Can we get the car</i> <i> into the normal start position, please? </i> [Downey]<i>...
and this next one is all about speed. </i> <i> Very high speed.
</i> The fastest that a human's ever driven around this track was 200 kph. [Downey]<i> Translation, that's about 120 miles an hour. </i> [Watson Nall]<i> So the AI is gonna try to beat that high speed.
</i> [Downey]<i> And it's gonna do it without a human safety net,</i> <i> because at that speed,</i> <i> it's borderline unsafe for people. </i> [Watson Nall] When the driver climbs out and shuts the door, yeah, your heart rate goes up. And we are launching in three, two, one.
[accelerating] <i> And launch successful. </i> 160. Next round, 200.
[Downey]<i> The car has six laps,</i> <i> six tries to hit top speed. </i> <i> Each lap, the AI will increasingly push the limits of control,</i> <i> traction, and throttle,</i> <i> to break the human record. </i> <i> Holly, this is Steve.
</i> <i> Can we confirm in the atmos-data</i> <i> it is safe to continue? </i> [Watson Nall] Yeah, we think it looks fairly controlled. [Steve]<i> Okay, so the next run should be V-max.
</i> [Watson Nall]<i> We have 210. </i> That's cool. [laughing] [Watson Nall] It was a real, real sense of excitement to see it finally crack the 210 kph mark.
It was a real success for Roborace functionality as well as building confidence in the team's software. It really showcases what autonomous cars can do, not just on the racetrack, but also for everybody around the world, so we're really hoping that this will improve road technology for the future. The current state of AI is that there are some things that AI can really do better than humans, and then there's things that it can't do anywhere close to humans.
. . but now where the frontier is gonna be moving is where computers come up to the human level, not quite, and then surpass humans, and I think the odds are overwhelming that we will eventually be able to build an artificial brain that is at the level of the human brain.
The big question is how long will it take? [Downey]<i> "The hard problem."</i> <i> It's a philosophical phrase</i> <i> that describes difficult things to figure out,</i> <i> like "the hard problem of consciousness."</i>
<i> We may never know what consciousness is,</i> <i> let alone if we can give it to a machine,</i> <i> but do we need to? </i> <i> What does a machine really need to know</i> <i>in order to be a good athlete,</i> <i> or an artist,</i> <i> or a lover? </i> <i> Will AI ever have the will to win,</i> <i> the depth to create,</i> <i> the empathy to connect on a deep human level?
</i> <i> Maybe. </i> <i> Some say we're just a bunch of biological algorithms,</i> <i> and that one day,</i> <i> evolution will evolve AI to emulate humans</i> <i> to be more like us...
</i> [bottle clatters] <i>...
or maybe it won't...
</i> <i> and human nature, who we really are,</i> <i> will remain a mystery. </i> [servos whirring] We gave it some dialogue to start with, like this line from<i> Superman.
</i> So you got some Superman/ Lois Lane stuff, huh? [Goodwin] Yeah, so you wanna read it? [Sharp] Mm.
. . not that bit.
Um, wait. Up, up, up, up, up. Back up.
. . okay.
"Superman angrily grabs Lois by the neck, slaps her against the wall, and bares his teeth in fury." "You're wrong. You're a grotesque kind of monster."
-"You're wrong!" -"You're a terrible liar." "No!
I'm sorry, I'm sorry. I can't believe it!" "You're so much more than that, Lois."
"Please, please!" "How could you? No one can believe who you are."
"Don't be ridiculous. Please? How could you be so much more than that?"
"You're such a terrible liar. You can't even believe who you are. Please, unless you're really a no-good liar, you're not even sure if you're good."
"Sorry, Superman, I'm so sorry!" Superman is just not making very much sense. Maybe kind of drunk or something?
In fact, it says, "Superman isn't funny. The two of them are really different people. There is no such thing as good good."
That's pretty deep. "There is no such thing as good good." There is no such thing as good good.
So far as I know. Yeah. Have you checked?
I'm gonna Google it. -Can we Google it? -Um...