It's the DNA of the next tech revolution. This is such a huge data set that there's no way that a human, or even a team of humans, can look at all of it. The race for artificial intelligence is on. In quantum, you have this thing called a qubit, which can be zero and one at the same time, and that's where the power comes from. AI might make our lives better. The life expectancy for human civilization might then easily measure in billions of years. But could it destroy the human race? You ought to be really concerned about the strong AI that has guns on it. As the host of Cyberwar, I've traveled the world to talk to brilliant hackers, scientists, and programmers, and many of them have told me they're thinking about artificial intelligence. Today we're in the middle of an AI boom. Huge advancements in artificial intelligence have made things like Siri and self-driving cars possible. AI is in video games, security surveillance systems, smart home devices, and advanced weapons systems. But as researchers race to make the next breakthrough, a lot are warning that we haven't actually thought this through. And it's
not necessarily the Terminator scenario they're afraid of, where an AI system becomes self-aware and decides to kill us all. The real threat could lie in the unintended consequences no one sees coming. I'm at Stanford University, where researchers are designing AI that comes in a very non-threatening package. Meet JackRabbot, a cute little machine that's built to navigate and move through crowds of humans. This is a robot that's programmed to learn on its own through example. This technique, called deep learning, has been fueling a lot of recent AI breakthroughs. Alexandre Alahi and Alexandre Robicquet are part
of a team that built JackRabbot, and they say their little creation is more advanced than a self-driving car. So the robot enters the scene, analyzes it, and then navigates through it according to all the data that we gathered. So it's just watching people, looking at people and what they're doing, and judging, you know, the space and the time? Yeah, and also trying to understand how they interact with each other. The main aspect would be to understand what is the safety distance that I keep with someone else, so how do I accelerate
or decelerate when I get close to someone, and react accordingly. So how do you gather that information? Do you just look at some sort of surveillance footage? So last summer our team spent about two months gathering daily, at rush hour, top-view data of the Stanford crowd, so we can actually understand how they avoid each other, how they behave, and how they navigate in such a crowd. Let's go, robot. So wait, he's being controlled by you right now? Yeah. But does he have a fully autonomous mode? Yeah, he could.
And what does he do, does he just follow? No, actually, if you map the place and the area, he can go from point A to point B, with just simple collision avoidance right now, but he'll literally avoid every single obstacle and go where you want him to go. Can we drive him around a little bit? Yeah, absolutely. And that spinning thing, that's just like a 3D sensor, right, just constantly spinning? Right, so we're capturing depth data and visual data, and we're combining all these sensors to detect humans, understand the surroundings, locate itself, and also predict where people will go.
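The episode doesn't show JackRabbot's actual navigation code, but the simple collision avoidance the team describes can be sketched with a toy potential-field step: the goal pulls the robot forward while nearby obstacles push it away. The function name, parameters, and numbers below are invented for illustration, not taken from the Stanford system.

```python
# Toy collision avoidance: goal attraction plus repulsion from nearby obstacles
# (e.g. pedestrians detected from the robot's depth data). Not JackRabbot's code.
import numpy as np

def avoidance_step(position, goal, obstacles, max_speed=1.0, safety_radius=1.5):
    """Return a 2D velocity toward `goal` that steers away from any obstacle
    closer than `safety_radius`."""
    to_goal = goal - position
    attraction = to_goal / (np.linalg.norm(to_goal) + 1e-9)      # unit pull toward the goal

    repulsion = np.zeros(2)
    for obs in obstacles:
        away = position - obs
        dist = np.linalg.norm(away)
        if dist < safety_radius:                                  # only nearby obstacles matter
            repulsion += (away / (dist + 1e-9)) * (safety_radius - dist)

    velocity = attraction + repulsion
    speed = np.linalg.norm(velocity)
    return velocity / speed * max_speed if speed > max_speed else velocity

# One step: robot at the origin, goal straight ahead, a pedestrian just off the path.
print(avoidance_step(np.array([0.0, 0.0]),
                     goal=np.array([5.0, 0.0]),
                     obstacles=[np.array([1.0, 0.3])]))
```

A system like the one the team describes would go further, replacing hand-tuned repulsion with behavior learned from the crowd data they collected.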
Do people freak out when they see it? No, they like it a lot. They come hug it, they talk to it. He doesn't, he cannot talk yet. Is that what you hope the rise of the machines is, a friendly rise of the machines? For that one, yes, absolutely. JackRabbot's creators hope that as AI advances, machines like this will be built to carry luggage through airports or help the blind navigate through pedestrian traffic.
The Stanford team is working to get AI to the point where JackRabbot can one day do what most people can just do by nature. But today's computers are already performing tasks no human is capable of performing, even if it's not quite AI. That technology continues to evolve, and the so-called supercomputer is at the cutting edge. Bryan Biegel works in NASA's Advanced Supercomputing Division, where they're working with machines that can actually help us see into the future. Let's go to the vis lab and I'll show you some of the things that our users do with our supercomputers. Yeah.
Using code so advanced it took 15 years to write. So this is actually a simulation of our oceans. It uses measured data from NASA satellites and puts it into a really huge computer model on the supercomputers, and predicts the ocean behavior. This single simulation actually used up to 70,000 processors and generated over three petabytes of data; that's like 10,000 times what you would have on your entire laptop. Let me show you this in full scale. This is actually scaled down so you can see it on the monitors, but if I go to the next one, you can see what the full-scale version of this simulation is. For example, this is the ice over the Antarctic Ocean, and you can see where it cracks and sends ripples out into the ocean. They're still continuing to advance the accuracy of this model so they can predict decades into the future.
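NASA's ocean code itself isn't shown, but the basic idea of a computer model that absorbs satellite measurements and is then stepped forward in time can be sketched in a few lines. Everything below, the grid, the observation values, the nudging scheme, is a made-up stand-in rather than the NASA model.

```python
# Toy "ocean" model: a line of sea-surface temperatures that diffuses heat each
# step and is nudged toward sparse "satellite" measurements, then run forward.
import numpy as np

temps = np.full(100, 15.0)              # degrees C along a strip of ocean
satellite_obs = {20: 18.0, 70: 12.0}    # grid index -> measured temperature

def step(temps, diffusion=0.1, nudge=0.05):
    left, right = np.roll(temps, 1), np.roll(temps, -1)
    new = temps + diffusion * (left + right - 2 * temps)   # heat spreads to neighbors
    for i, obs in satellite_obs.items():
        new[i] += nudge * (obs - new[i])                    # pull toward the measurement
    return new

for _ in range(365):                     # simulate a year, one step per "day"
    temps = step(temps)
print(temps[15:25].round(2))             # forecast near one of the observation points
```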
Would you say this is artificial intelligence? It seems like it's getting there, right? It's so complex. But we have programmed, in this case, every single line of this code. It's a little different than artificial intelligence, which is even more of a black box, where we don't know exactly how it's going to work and how that artificial brain is going to evolve. Why would NASA be interested in artificial intelligence? Well, there are a few reasons. Of course, NASA is trying to send artificial probes further and further out into the solar system, and eventually beyond. We can't program those for all of the things they're going to encounter, so we need them to be able to do their job without us constantly telling them everything to do. Even in interpreting data like this: this is such a huge data set that there's no way that a human, or even a team of humans, can look at all of it. Computing is going to continue to expand dramatically, even over the next 10 years, and so we'll be able to do an even better job of modeling, you know, the history of the universe, the future of the universe, the history and future of our planet, and go further in exploring our universe. The key to artificial intelligence could be creating systems that take cues from the way our own brains work, using a process called deep learning. I'm here at Berkeley to meet one of the leading minds in the development of AI. Stuart Russell has been at this for more than 20 years. He co-wrote the definitive textbook on AI, and he understands both its promise and its dangers. So the brain has an enormous network of neurons. A neuron is a cell that has these long, thin connections to other neurons, so it kind of looks like a big tangle of electrical spaghetti. And we have tens of billions of neurons. The deep learning networks are much, much simpler, and what's similar about them is that both the brain and these deep learning networks can learn how to perform a given function, and they learn by being given lots of examples. That deep learning network, for example, if you want it to learn to recognize cats and dogs and candles and staplers and things like that, you can show it millions of photographs of these things with labels saying this is what they are.
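To make "learning from labeled examples" concrete, here is a minimal supervised training loop along the lines Russell describes, using PyTorch with random tensors standing in for the labeled photographs. The network size, data, and class names are placeholders, not any system mentioned in the episode.

```python
# Minimal supervised learning: show the network labeled examples and adjust its
# weights so its predictions match the labels.
import torch
from torch import nn

num_classes = 4                                  # e.g. cat, dog, candle, stapler
images = torch.rand(256, 3, 32, 32)              # stand-ins for labeled photographs
labels = torch.randint(0, num_classes, (256,))   # "this is what they are"

model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(3 * 32 * 32, 128), nn.ReLU(),
    nn.Linear(128, num_classes),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(10):                          # repeated passes over the examples
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)        # how wrong the current guesses are
    loss.backward()                              # compute how to nudge each weight
    optimizer.step()                             # apply the nudge
    print(epoch, round(loss.item(), 3))
```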
So on tasks like recognizing a wide range of categories, the competitions they run now have a thousand different categories of objects, and in the last five years we've gone from systems that might get 5% accuracy on that thousand-category task to systems that are getting 98% accuracy. So you've been at this for a while. I mean, how does it make you feel that the progress is now becoming exponential? The exponential word is a dangerous one, because it tends to suggest that things will continue to accelerate without end, but it may turn out that it will plateau, and that for other tasks we need new breakthroughs. Quantum computing? Quantum computing, possibly, but that's kind of a cheat, in some sense, rather than really understanding, you know, how the brain manages to do these amazing tasks with what is really not that much hardware. And so I almost hope that quantum computing doesn't happen, or happens a long time in the future, and, you know, gives us a chance to keep working on finding the secrets of intelligence, the things that make us smart,
and gain real understanding, because just brute force isn't really understanding what's going on. Quantum computing may be the next giant leap in building the artificial intelligence of the future. It's a technology so powerful that we really could use it to develop AI without having to understand how human learning works. And though Stuart Russell hopes that quantum computing won't happen for a long time, Rupak Biswas can't wait. He runs the Quantum Artificial Intelligence Lab at NASA's Ames Research Center, where scientists are working on a powerful experimental computer called the D-Wave. It's built on the principles of quantum mechanics, which I was hoping Rupak could explain to me. If someone tells you that they'll explain to you what quantum mechanics is, you should run away from them, because no one really understands this field. It's a very complex field. Rupak agreed to show me NASA's quantum computer. This is, you know, the D-Wave quantum annealing system. What you'll see here is basically a black box, and that is where the D-Wave processor is. This is the Star Trek computer that's, like, a hundred million times
more powerful than anything Microsoft's got? Yeah, well, sure, for certain classes of problems, yes. This is something that is really more powerful, and we expect to get a lot of research done on this. NASA is hoping to use the quantum computer to develop quantum algorithms, and algorithms are a key component of the code running AI. In layman's terms, an algorithm is a kind of recipe for solving a problem; basically, it's the set of step-by-step instructions given to a computer to help it accomplish a task.
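A classic example of such a recipe, not from the episode, just an illustration of step-by-step instructions, is Euclid's algorithm for finding the greatest common divisor of two numbers.

```python
def greatest_common_divisor(a, b):
    """Euclid's algorithm: repeat two simple steps until the answer falls out."""
    while b != 0:              # Step 1: if b is zero, a is the answer
        a, b = b, a % b        # Step 2: replace (a, b) with (b, remainder of a divided by b)
    return a

print(greatest_common_divisor(1071, 462))  # -> 21
```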
The question here, though, is whether the algorithm would be different on a supercomputer than on a quantum computer, even though you're trying to solve the same problem. So then what is the difference? Because, you know, to a lot of people, when you say supercomputer and quantum computer, aren't they all supercomputers? What's the difference? Right, so a supercomputer is basically based on the transistor, and the transistor, in layman's terms, is basically a very small switch, so it's either zero or one. In a traditional computer that's called a bit: it's either zero or one. Whereas in quantum, you have this thing called a qubit, which can be zero and one at the same time, and that's essentially the difference between a supercomputer, or what we would consider classical computing, and quantum computing, where the quantum computer allows you to be in two states at the same time, and that's where the power comes from.
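The "zero and one at the same time" idea can be shown with a tiny classical simulation of a single qubit as a two-entry state vector: a Hadamard gate puts the definite 0 state into an equal superposition, and the squared amplitudes give the measurement probabilities. This is only an illustrative sketch, not D-Wave's annealing model.

```python
# One qubit simulated as a state vector: after a Hadamard gate it is in an
# equal superposition of 0 and 1 until it is measured.
import numpy as np

ket0 = np.array([1.0, 0.0])                          # the definite "0" state
hadamard = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # gate that creates superposition

qubit = hadamard @ ket0                              # "zero and one at the same time"
probabilities = np.abs(qubit) ** 2
print(probabilities)                                 # -> [0.5 0.5]

# Measuring collapses the qubit to a definite 0 or 1.
print(np.random.choice([0, 1], p=probabilities))
```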
A quantum computer could help solve problems that no technology can solve today: finding cures for diseases or designing space colonies. But what if we choose to use that awesome computing power for destructive purposes? You know, if you think about nuclear energy, you can use nuclear energy to solve the world's energy problems, but you can also use it for bad things. It's all of these things: if they are used improperly and get in the wrong hands, it could lead to trouble. I'm in Oxford to meet the founding engineer of Skype. Since leaving the company, Jaan Tallinn has become one of the most prominent voices in the field of artificial intelligence. He sees great advantages in developing AI, but he also warns of the risks of making the technology more powerful without understanding its potential harms. Once you have systems that are basically smarter than humans when it comes to developing further intelligent systems, then you have intelligent systems developing intelligent systems, which in turn go on to develop even more intelligent systems, and you have this intelligence explosion. He thinks the next step in AI is the creation of so-called general intelligence. What's the difference between what we're producing now and the general intelligence of the future? If you think about the chess-playing computer, it's just modeling the chess board in its memory and looking at the scenarios of how the game can play out and what actions it can take on the chess board. So it actually would be good, from a chess-playing perspective, to not only model the chess board but also what's going on in the brain of your opponent. So it's one thing for a computer to understand the game of chess, but if it can look up from the board and understand me, that's a whole new level of scary. But I'm not the only one that's worried. In 2015, a group of prominent thinkers signed an open letter warning researchers of the risks of making AI more powerful without understanding its potential harms. The signatories include Stuart Russell, Jaan Tallinn, Stephen Hawking, and Elon Musk. Nick Bostrom also signed the letter. He's a Swedish philosopher who's concerned that a mega-powerful AI capable of fulfilling the goals we give it could cause our extinction. What if we don't understand the full consequences of what we're asking it to do? What if we leave a crucial detail out? Take the myth of King Midas. You know, he asked that everything he touches should be turned into gold, which sounds like a great idea, because you'll be very wealthy if you can turn your coffee mugs into gold. Then he touches his food, it turns into gold. He touches his daughter, she turns into a gold sculpture. So, not such a cool idea. It turns out that it's actually quite difficult to write down some objective function such that it would actually be good if that objective function were maximally realized.
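Bostrom's point about objective functions can be made concrete with a toy example: an optimizer told only to maximize gold happily picks the action that also destroys everything Midas actually valued, because that detail was left out of the objective. The action names and numbers are invented for illustration.

```python
# A mis-specified objective: the score counts only gold, so the optimizer
# chooses the Midas-style action with the worst side effects.
actions = {
    "turn coffee mugs to gold": {"gold": 5,   "things_we_value_lost": 0},
    "turn the food to gold":    {"gold": 50,  "things_we_value_lost": 5},
    "turn everything to gold":  {"gold": 999, "things_we_value_lost": 100},
}

def midas_objective(outcome):
    return outcome["gold"]                      # the crucial detail is left out

best = max(actions, key=lambda name: midas_objective(actions[name]))
print("Chosen action:", best)                   # -> turn everything to gold

# Only when the objective also counts what we care about does the choice change.
def better_objective(outcome):
    return outcome["gold"] - 100 * outcome["things_we_value_lost"]

print("Chosen action:", max(actions, key=lambda name: better_objective(actions[name])))
```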
It seemed to me that this could be the most important thing in all of human history. What happens if AI succeeds at its original ambition, which has all along been to achieve full general intelligence, where we have potentially artificial agents that can strategize, can deceive, that can form long-term goals and find creative ways of achieving them? At that point, that kind of AI is not necessarily best thought of as merely a tool, merely another gadget. At that point, we're really talking about creating another intelligent life form. So could AI potentially be the last human invention? Yeah, so once you have general intelligence at the human or superhuman level, then it's not just that you have made a breakthrough in AI; you have indirectly made a breakthrough in every other area as well. So the AI can do research, science, development, all the other things that humans do. So potentially what you have is a kind of telescoping of the future, where you get all those possible technologies that, you know, maybe we would have developed given 40,000 years to work on them: space colonization, cures for aging, all these other things. So in other words, we could be this hyper-invincible, space-hopping species with AI, but we also could go extinct by it? Yeah, I think that machine superintelligence is, depending on how optimistic you feel on a given day, the keyhole through which Earth-originating intelligent life has to pass. We could crash into the wall instead of actually going through it, but if we make it through, then the life expectancy for human civilization might then easily measure in billions of years. The scientists and philosophers working on artificial intelligence are sure that AI can improve our lives, yet the same people worry that it could destroy us. But how would that happen? Hi, I'm Ben, nice to meet you.
Ben, nice to meet you. Nice to meet you. What if the AI we build is actually designed to kill? Heather Roff has testified before the UN as an expert on autonomous weapons. I once sat next to a grad student on a plane, and he told me that he was an AI researcher, and that's why I was very interested in asking, what is it that you work on? And he said, well, I work on image recognition. And I said, really? Tell me more about that. And he said, well, I'm looking at, you know, how
we identify different birds and how we identify different types of coral and starfish. And I said, who funds your research? And he said, the Office of Naval Research. And I said, do you really think that your research is going to be confined to birds and starfish? And he didn't actually have an answer for me. He didn't really think through the next step. How bad would it be if superintelligence is developed by the military? So I think if a superintelligence emerges, we're all in trouble, for a variety of reasons. One of the things that we know about military applications is that they're not for the benefit of humanity, right? They're directed towards offensive harm. We're not talking about creating an AI that's going to be trying to solve climate change or create poetry; we're not worried about those types of AI applications. We could be worried about those ones becoming superintelligent, but you ought to be really concerned about the strong AI that has guns on it. What does a superintelligent weapon look like? A superintelligence could be connected to everything. If it has a network, if it has the capability of being connected through Wi-Fi, or it could figure out new ways of connecting itself. If you shut off that Wi-Fi, it could propagate itself and its software onto different servers and different things, so you could never really, truly get rid of it. It could hook itself into missile defense systems and nuclear arsenals, and it could do whatever it liked. I mean, that's the whole thing about being a superintelligence: you're everywhere. You know, you think Skynet, but scarier. Well, I can't wait. Thanks.
Thanks. So are all those AI researchers getting that? Well, I mean, the AI researchers, I think it's not fair to blame them. I think they're trying to do things that are good with AI. It's the moment someone takes a scalpel from a surgeon and makes it a knife for killing. But others think the greatest dangers will come from unintended consequences, as Jaan Tallinn explained to me. I don't think it's correct to say that AI is a technology just like any other technology, or a tool like any other tool. No, it's
a technology that can potentially create new technologies itself. Now, there are a lot of smart people, as we know, looking into this problem, into this issue. Not enough, though? Not enough, no. Like, imagine that we are building a spaceship that's able to carry the entire humanity, and the boarding has already begun, and children are already on board, and then there's a small group of people, who used to be completely ignored, who are saying, look, we're going to need some steering on this ship. And now people are going, like, oh, wait a minute, yeah, steering might come in handy. But in this case, steering is programming the AI to make sure that it doesn't kill us all. I think the more general point is that whenever you're a technology developer, you have the responsibility of thinking through the consequences of your actions. It might be an incredibly subtle process which eventually ends up with the human race becoming sort of enfeebled and dependent on machines in ways that leave us vulnerable to any kind of unexpected event. Is it possible right now we're almost tripping over ourselves, in that we're coming up with discoveries about AI, we don't really realize the full implications of what we've just discovered, and we just use it? You hear that it'll take centuries, or millennia, or that it'll never happen, you know, so we don't have to worry, it's just impossible. In the history of nuclear physics, there was a speech by Rutherford, the guy who split the atom, on September 11th, 1933, in which he said there is no possibility that we'll ever be able to extract energy from atoms. Less than 24 hours later, Szilard invented the neutron-based nuclear chain reaction and instantly realized what it would mean in terms of the ability to create nuclear explosions. So it went from never to 24 hours. The scientist who split the atom didn't understand that his discovery would so quickly lead to the atom bomb, but the same discovery later gave us the energy that still provides electricity to millions around the world. When it comes to AI, it seems like we're about to split the proverbial atom again. The future of humanity could be at stake, and how we build AI now, what we design it to do, could make the difference.