♪♪♪ (beep) Hello Adam. This is your Central Monitor. Sorry to bother you on your walk, but I need to let you know that your vital signs are outside normal parameters. You are showing signs of ST-elevation tachycardia and high blood pressure. I suggest you slow your walk until I can do a full analysis of your cardiac health. I will be sending your vital signs to your doctor in just a moment. Please stay on the line... There have been significant societal disruptions throughout history, particularly in communication. The invention of writing really allowed for the creation of what we now think of as organised society, in terms of the empires of the past, because you could then actually govern at a distance, you could communicate at a distance. The printing press, the telephone, the telegraph, broadcast media, all of that stuff was extremely important. And each one, in its own way, created a revolution in society. I think we are in the midst, or on the verge, of one, beginning with the communication revolution that we are experiencing in the internet. And now, with the ability for machines to become smart and learn, we are on the cusp of another
amazing transformation. This is a perfect futures topic, because nobody knows what's going to happen. ♪♪♪ The brain is what makes us everything that we are; you know, without it, we're really nothing. So everything we do, everything that we think, and everything that we remember, is all done by our brain. So, you know, people might think the brain is just your intellect, but your brain is everything: your brain is what allows you to catch a ball, or dig a hole, or watch TV, as well as do science or write a novel or, you know, gaze at the stars. All these things happen from our brain. The human brain consists of about 100 billion neurons, and each of those neurons makes about 10,000 connections. And so there's a vast amount of complexity there, and we're really only scratching the surface in understanding how it works. So a lot of older research focused on the things that we find hard: reasoning, and logical thinking, and so on. But now, I think, we understand better that a lot of the computational complexity of the brain is actually required just to solve very simple tasks, which sound simple but turn out not to be. So for instance, just to understand a visual scene, about a third of your cortex is devoted to processing visual information, and yet you have no conscious awareness of what you're doing; it just happens. ♪♪♪ One of the things that makes it so complex is that it's complex on so many levels. If you look at a single neuron, a single neuron is an amazingly complex little machine, basically a nano-machine doing incredible things. And then you have small networks of neurons connected together. And then you have large systems in
the brain, where billions of neurons are connected through massive neural pathways, and then you get the brain overall. And then you get the body that the brain interacts with, and then you get the environment and the social world. So across all these levels, we have so much to discover about the brain, and we know so little, and they all interact in ways that we can't even yet fathom. ♪♪♪ We have only a small understanding of what the single cells are doing, right? You might think we understand really well what the cells do, but in fact we don't, even at the single cell level. Of course, there are billions of these cells, and they're connected in networks, alright? And there are billions and billions of connections in your brain. Now, we have a better understanding of single connections, how they operate, but we really don't understand how all of this operates together as a network. ♪♪♪ In nervous systems, both in humans and in other animals, the basic structure has been conserved for a very long time, perhaps 500 million years. And so neurons communicate by sending spikes of electrical activity to each other. And
these neurons are organized into groups, and also into regions of the brain. And different regions of the brain are specialized for different kinds of tasks, but they all have to work together seamlessly. We've certainly discovered a lot in the last few decades; there have been a lot of new developments in technology which have helped us understand, particularly at the cellular level, how the brain works. I think one of the really big questions is how the brain is working at a more systems level: how networks of neurons interact, and how the activity of those neurons leads to our thoughts and our behavior. And that's really one of the frontiers of neuroscience at the moment. So we're using zebrafish as a model system to understand how patterns of neural activity develop in early life. And the great thing about the zebrafish is that the young zebrafish is transparent. And we can insert a gene which means that neurons glow when they're active. So we convert the electrical activity of neurons into an optical signal that we can record under a microscope. So we take a young fish (when I say young, I mean just a few days ago it was a single cell), embed it in a gel under a microscope, and then we can image the activity of every neuron in the brain as it sits there thinking its fishy thoughts. So what we want to do is understand two things: firstly, how normal brain development occurs. So, understand how a nervous system becomes wired up during development, how patterns of activity emerge which appropriately represent and process information. I think that's important for understanding our biology, obviously, but it's also important for developing new forms of artificial intelligence. In all artificial intelligence at the moment, the hardware is designed by humans and put together, you know, by humans, but in the long term one can imagine artificial intelligence that grows itself more organically, perhaps inspired by the kinds of things we're discovering about how real nervous systems are built. ♪♪♪ "...we could also try to fuse the depth image...." ♪♪♪ The brain is an amazing device; it's the most complex thing we know of in the universe, and it has had millions of years to perfect what it's doing. And so it's only natural that we look to the brain as an inspiration for
the robots and the artificial intelligence systems that we develop. Copying the brain sounds great, in theory, but doing it in practice is much more difficult than I think we all hoped it would be. There's a few problems. First of all, we need to know what's actually happening in the brain. And cracking open the lid, and looking inside and observing what's happening with all of the neurons and cells in our brain is quite challenging. And we have to use a lot of guesswork to fill in all the gaps about the things we don't know. Once we've
got an idea of what happens in the brain, we then have to actually reconstruct it in software, for example, and that is also challenging. So we have to do things like create artificial neural networks in software that run on a computer, or in the cloud. And there's a lot of engineering and tinkering that's needed to get those things to work as well. An artificial neural network is very simple at its core. It's a representation in software of what we think goes on in the brain. It consists of artificial neurons, or units, or cells, depending on what you'd like to call them. And they represent, abstractly at least, the neurons that occur in the brain. But it's not enough to just have neurons in this model; they need to be connected together. So the second key component is connecting together these neurons in the artificial neural network. And that's where the real magic of artificial intelligence occurs. The brain is very different from a computer in the way it's structured. In a computer, basically, the CPU is separate from the memory, and connecting the CPU with the memory you have this thing called the bus, the memory
bus. And the memory bus is working full time, continuously, when a computer is turned on. And it's actually a bottleneck. So the CPU can be very powerful, and the memory can be huge, but you're limited as to how much information you can transfer between the two. And that is a very limiting factor in the overall power of the standard computer. The brain, on the other hand, works in a massively parallel fashion; every single neuron is doing the best it can all the time. Even the current best AI that we have is still very, very different to the brain. You might say it's brain inspired, but it's not copying the brain. In the brain there are massive amounts of feedback connections. So obviously, when we process sensory input, that comes up into higher brain regions, and gets further processed and abstracted from the original input that we see. But there are also massive amounts of feedback coming from those higher regions back to the perceptual areas. And this feedback directs where we look, and it gives us expectations of what we might see. And when those expectations are violated, when something unusual
happens, you know, we attend to it, we are forced to pay attention to it. So a typical neural network will have a stereotypical structure. You'll feed some sort of input into the network: that could be imagery, it could be videos, it could be the sound of your voice. And then that data will go through many layers of neurons within the neural network, sometimes hundreds or even thousands of these layers, and through all the connections between those layers. Those connections will gradually get changed over time, and that's the training or learning process for the network. And at the end, you'll spit out something like a classification, where the network tells you what it thinks it's hearing, or what it thinks it's looking at. Scientists and engineers have invented a whole myriad of ways to train these neural networks, but the basic premise typically revolves around feedback. So you feed some sort of data into the network and you look at what the network, for example, classifies it as. Now the network, at the beginning, might not be very good at doing it; it might get it wrong. So you then give it some feedback about what it got wrong, and how badly it got it wrong. And then the network will subtly alter the connections within it, until it does the thing correctly. ♪♪♪ "Orange." "Mandarin?" ...Oh! She's got it! Both the human brain and the brains of other animals are very good at solving these tasks, like processing sensory information, for instance visual information. And that has been a great inspiration to artificial intelligence in terms of the kinds of problems it is trying to solve. So in the early days of artificial intelligence there was a focus on reasoning problems, which
feel like hard work to humans. But now people have become more interested in trying to build that higher level intelligence from this, what you might call low level intelligence, which is still extremely complicated. And one of the big inspirations for recent developments in neural network versions of artificial intelligence is the hierarchical structure of the brain. So for instance, to process visual information, there are several layers of cells in your retina, and they send connections more centrally into the brain, and then there are several stages of processing, arranged fairly hierarchically. And so the most popular kinds of neural networks these days are arranged in this hierarchical form. So each layer of neurons, both artificial and real, extracts more complicated properties from the input. And that turns out to be a very good way to decompose these kinds of computational problems. There have been two major breakthroughs in the last few years: deep learning and reinforcement learning. And both of those were inspired by biology. Deep learning refers to networks which consist of many layers of artificial neurons. So information flows through those many layers,
each of those layers extracts more complex information than the layer below. And we had people develop learning algorithms for such networks a few decades ago, but it wasn't really until the past 10 or 15 years that the computing power, and the amount of data we had, were enough to be able to demonstrate the amazing power of these learning algorithms. In reinforcement learning, the idea is how you learn from rewards. So life is one big set of rewards and punishments, and so it's very important for the nervous system to be able to decide what actions to take to maximize the reward and minimize the punishment. Through many decades of experiments on animals, classical conditioning experiments going all the way back to Pavlov's experiments with his dogs, it's been possible to develop some very powerful mathematical principles for how any kind of learning system can learn from these kinds of rewards, including rewards which are delayed into the future. So it's one problem to learn from rewards presented immediately, and another problem to learn from a reward which you might not achieve for, you know, days or weeks or months, or even years. Yet we now understand the mathematics of how that can happen. And an amazing result is that about 20 years ago, people discovered that that mathematics is essentially implemented in the brain, in the form of signalling by certain molecules like dopamine. So dopamine actually implements a reward signal in the brain in much the same way as the mathematics says a reward signal should be implemented. I think when it comes to AI there are two sides to it. There's obviously a negative side, and there's a positive side as well. If you look at the positives, it's amazing for
humanity in the long run. We can see the advancement in the medical field and technology: in terms of... there's a car outside that drives itself, that's incredible, and right there, just behind where you're sitting, there is a little tiny robot that solves Rubik's cubes. We're just at the beginning of what's going to be a long journey of amazing advancements for humanity, and I genuinely believe to always look at the glass half full when it comes to it. It's going to be great. (Computer game noises) Going back to 2018, there was an online league game, DOTA 2, which accepted into its ranks of players a new type of team: an artificial intelligence team called OpenAI Five. OpenAI Five not only won The International World Championships; it's gone on, in April of this year, 2019, to expand its availability for gameplay. Any person who might want to play DOTA 2 can challenge OpenAI Five's bot team. We have all of the humans against five very special bots from OpenAI Five. Now why are they so successful? I think that's a good question to ask. When we consider how fast a human can learn, we're bound by our ability to process information over time. AI changes the speed of learning, and when we consider reinforcement learning techniques, the DOTA 2 AI bots have accumulated the equivalent of 45,000 human years of training. And that's why they're so successful. Obviously, gameplay is a very interesting area of development for AI, but it's actually only one small area, and the applications of AI are far more diverse than that, in fact. Artificial intelligence now has the ability to utilize information in a way that wasn't previously possible. Data acquisition using, for example, the World Wide Web or, alternatively, when we think about smart cities, the amount of data that you can collect through utilities usage, means that we have the possibility now to see real world applications where massive learning can take place over very brief periods of time, to create a scenario where solutions are found to problems that humans have struggled with, actually, for quite a long time. My idea about artificial intelligence is: it's a promising approach for the future. However, it requires a jump in the technology. At the moment it has the problem of accuracy, and how reliable is the output
of this artificial system? Is there any way to have some kind of feedback about the output, whether it's the correct decision or not? And I think the future of artificial intelligence is going that way: towards understanding and reasoning about artificial intelligence outcomes, and the reasons behind the decisions from artificial intelligence. So how do we copy what's happening in the brain and put it in a neural network? Well, one of the easiest areas to start with is choosing a concrete, tangible process that we all do, and that's navigation. So all animals, all humans, find their way around, and we have maps in our brain that tell us where we are and where to go. And scientists have found all sorts of beautiful, navigationally relevant neurons in the brain. And we can literally just copy what those neurons do directly into our software models in order to create, for example, robots or autonomous vehicles that can also find their way around. ♪♪♪ Artificial intelligence plays a key role in many other large technology arms races. One of the most visible is autonomous cars, self driving cars, or robotic cars, depending on which term you prefer. Now
we've seen a lot of progress made over the last 10 years, and there are dozens if not hundreds of major companies and startups developing autonomous cars around the world. Now we have cars already, and have had for 10 or 20 years, that can drive pretty well autonomously on the highway. The real trick has been solving that last 5 or 1% of driving conditions. So driving in complex urban environments, when it's raining or snowing, and knowing how to react when that person looking at their mobile phone jumps out in front of the car. And this is the area where a lot of the state of the art AI development is actually happening: how machines like autonomous cars interact with us humans, complex, unpredictable, and very vulnerable humans. And this is one of the areas where, if the AI becomes good enough to work out what to do under all situations, at least as well as we human drivers do, you will see autonomous cars everywhere. If they can't solve that, then expect it to be much more subdued. ♪♪♪ The challenge right now, of course, is that autonomous vehicles aren't as good as us in all situations. And for autonomous vehicles to really be widely commercially viable, they have to be deployed everywhere. A car that you drive yourself, and then goes autonomous on the highway, is cool and useful to some people. But it's not this multi trillion dollar market that everyone is imagining. So in order for them to really roll out at a wide scale, and potentially save lives, you really still have to solve these remaining problems of how to drive in complex city situations, with pedestrians and cyclists all over the place. Two years ago, the general consensus out there amongst people working in the industry and in the scientific research, you know, was very bullish on having these cars out and working in the real world within a couple of years. Take Waymo, which is the Google self driving car initiative, which was, you know, one of the biggest, probably still the biggest in the world. They were incredibly bullish on this technology five years ago, but the CEO has actually come out and said it looks like we'll never actually have fully self-driving cars, ever... right? What he said was, "There's always going to be constraints". Now, what that really means, in terms of, you know, what we've been discussing, is that you need AGI: to do everything that a human driver does, simply driving a car around the streets, it needs to be as smart as a human to do that. ♪♪♪ The interesting things we are dealing with are perhaps groups of robots working together, although it's not as disruptive as some things in the past. You can't predict when fundamental changes in the science will occur, and the next
big one may be around the corner, or it may not be. But substantial progress, particularly in the study of human-robot interaction, has been going on as well. You get a better understanding of how it is that we relate to each other through robots, and how we can do better in respect to that. A social robot is a robot that can communicate and interact with people or other machines. It's completely different to traditional types of robotics, where robots have been designed not to have any interaction with people, for example in car manufacturing plants. Social robotics is specifically designed to have communication or engagement with people. Artificial intelligence is playing a really big role in being able to empower the way that social robots can understand, behave and communicate with people in their daily life. I see humanoid robots being developed a lot more in future. The world is currently designed for humans. So if you're entering through a door, it's designed to be pushed using your hand, with the amount of force a human has. The same goes if you're tidying up a particular workspace, or you're walking around a city exploring different sights and sounds. The world is ultimately built for humans. So designing robots that have either a humanoid form, or an understanding of how the human world works, makes it easier for those robots to integrate into society, but also to create some value and benefit, without having to restructure buildings or tasks or the way that the world is designed around humans. ♪♪♪ The question is, can we develop a disembodied AI, an AI in a box or a computer, or will the AI have to have some sort of experience like
we do, some sort of feedback from the world, and some sort of way of modifying the world? The theories at the moment are that, simply, the AI needs a way of perceiving a world, whether it's real or simulated; perceiving itself in that world, and being able to make a distinction, as we were talking about before, between self and everything else; and being able to manipulate objects, and have, basically, a causal influence itself in the world. "...laser scanner. So it's using the depth scanner to position itself on the map..." It's really important to explore artificial intelligence in the sense of embodiment, and to have a look at robots as a way to create that wealth of knowledge, because until you're out and moving around in the world, like a robot would do, you're not seeing the world for what it really is. If you're training through different data sets or information that's been provided to you, it's a very contained way of learning knowledge of how the world operates, how we engage with each other, and how we go about achieving tasks on a day-to-day basis. But in terms of embodiment, if a robot is then going out and exploring the way that the world works, learning for itself through trial and error, or through observing other people's behaviour, or through simply asking the nearest person "what should I do in this situation?", I believe that's when we'll start to see a big burst towards creating a robot that can understand human experience and then be able to integrate into society. The thing I focus on is evolutionary learning. And evolutionary learning is quite a bit different. What I do is based on, well, genetics; it's based on Darwinian evolution. And it's based on competition.
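The kind of evolutionary loop described here can be sketched in a few lines of code. This is a toy illustration only, not the group's actual field-robotics software: the genome (a small vector of numbers), the target vector, the fitness function, and all the parameters are invented for the example.

```python
import random

# Toy evolutionary learner showing the three ingredients of evolution:
# heredity (children are copies of parents), variation (random mutation),
# and selection (only the fittest individuals reproduce).

TARGET = [0.5, -1.2, 3.0, 0.0]  # hypothetical ideal parameter vector

def fitness(genome):
    # Higher is better: negative squared distance to the target.
    return -sum((g - t) ** 2 for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.1):
    # Variation: give each gene a small random nudge.
    return [g + random.gauss(0, rate) for g in genome]

def evolve(pop_size=50, generations=200, seed=0):
    random.seed(seed)
    # Start from a population of random genomes.
    population = [[random.uniform(-5, 5) for _ in TARGET]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the fittest half as parents.
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]
        # Heredity + variation: children are mutated copies of parents.
        children = [mutate(p) for p in parents]
        population = parents + children
    return max(population, key=fitness)

best = evolve()
print(best, fitness(best))
```

No gradients or explicit instructions are involved: the population simply drifts toward high-fitness genomes because fitter individuals get more descendants, which is what makes the approach attractive for problems where you can score a behaviour but can't easily specify it.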
So for evolution to occur, you need three things: you need selection, you need variation, and you need heredity. And this is true in nature, it's true for computer programs, it's true for robots. It's really, really powerful; it's creative, it creates novelty, diversity, all of these things that are really useful when you think about them in the context of learning, right? For myself, being in a group that does field robotics, where we're trying to take robots and use them to solve real problems outdoors in these nasty environments, it's really about taking that analogy, that link to natural evolution. Natural evolution creates life that survives in a huge variety of really challenging environmental conditions, and what we're trying to do is distil the things that allow that to work successfully and apply them to designing robots. One thing we do here is deploy robots into, for example, a rainforest to perform biodiversity studies. We don't really know the conditions in that rainforest, and it would be good to have robots that can not only adapt how they behave, but have a process that adapts their bodies
as well. And when we start to do that, we're heading into the realms of this thing called embodied cognition. Embodied cognition is basically the opposite of Descartes' 'I think, therefore I am'. So we could consider maybe deep learning to be that 'I think therefore I am', where I am a system and I just learn things, and then I've learnt that thing. What embodied cognition is saying is that to be intelligent in that sense, in the embodied sense, you need a body and you need a brain, and you need to act in an environment, and if you've got all those things together, it's the interactions between the body and the brain, and the body and the environment, that generate these really useful, rich behaviours that can help us solve really tricky problems. One way of doing that is evolution. So we can use evolution to design robot bodies, and we can use evolution to learn the controllers that operate those bodies, that read in sensory information and push out commands to the wheels or to the legs to move them around. And obviously they're situated in this environment, so the environment plays a key role. If you imagine having a legged robot trying to walk through the jungle, it's going to need to behave very differently than walking across an ice rink, and that's where the environment really critically links in to how we generate these behaviours. Robots will eventually start to integrate more closely into society. They'll learn to operate around people, and then it becomes a very seamless integration. It's no longer a human and, separately, a robot; it becomes a symbiotic relationship, where you're providing information or objects for a robot to carry for you, and the robot's providing a service
or perhaps support back to you. (crowd noises) I think that in its current state, AI proves to be quite a useful tool for science and for advancements in fields like robotics and automation. I think that in the future AI will be a very big part of our lives, but it's not going to have the effect that the media tends to sensationalise, that it's going to take over and become some all-powerful intelligence. Yeah, it's gonna help us in many other ways. The one thing I don't think we've seen quite yet, and we may, is the ability to handle non-routine problems, sometimes called 'wicked problems', where creativity and the human capacity for ingenuity and discovery are absolutely crucial to being successful. Now one question, of course, is: are there enough jobs in the world to support a workforce of 4 or 5 billion people where creativity and ingenuity are the core of it? But at least for right now, that is the path to success. One has to be able to handle a job, whether it's as an entrepreneur, as a sole proprietor, or even working for a big company; you have to be a unique person.
Tom Peters called it 'brand you'. You have to offer a value-add that nobody else can. So prepare to be unique; prepare not just to be one of the mass, because the mass is going away, the machines are taking over mass jobs. The Chinese took over mass jobs to start with, and now, in the long run, it's automation and machines that will be doing most of the routine work in society. If you look at the history of work, up until almost the present, routine jobs were the bread and butter of the American middle class. Sales people, office people, factory people, construction people: they went to work and did largely routine things most of the time. They didn't have to be creative; in fact, they were discouraged from being creative. There's going to come a time when machines take all of those jobs, and the only way to earn a decent living is to be creative. So the holy grail for many researchers is investigating the possibility of developing AGI, or Artificial General Intelligence. There are a lot of definitions floating around for exactly what that means. The one that I like is that you
create a machine, or an agent, that has the broad intellectual capability of a competent adult human. It can do everything we can do. It can learn how to do everything that we learn how to do, and it can carry out all the tasks we do on a daily basis without thinking. General artificial intelligence is the idea that you can create learning algorithms that can essentially learn in any situation. We've been incredibly successful in the last few years at developing learning algorithms which can, for instance, learn how to play Go. Just recently one learnt to play Quake, a version of Quake; that paper was just published a couple of weeks ago. And so these are very impressive learning procedures, but they happen in very confined domains. So general artificial intelligence is the idea that in just any domain, you can apply this learning approach to rapidly figure out what's the appropriate thing to do in different situations. One of the tests that people derived, or designed, was the Turing test. And the idea of this test is very simple: can an artificial intelligence agent, talking to you through a computer screen or perhaps over a phone line, fool you into thinking it's a human? Now, the validity of the test, in terms of being a true test of genuine artificial general intelligence, is controversial. And it's a case of: if you can fake it, does it really mean you're intelligent? And indeed, some of the initial approaches to doing well on that task have been systems that have faked intelligence really, really well. And then you get down to some deeper philosophical questions, which is: if it can fake it so well it seems intelligent, is it actually intelligent? Or is it just
a shell pretending to be intelligent? And that is very much the realm of philosophers, I think. There was a case where someone got a computer to simulate a foreign boy of about 12 or 14 years of age, or something. But they put these constraints on it. They made him foreign, or made it foreign, so its level of English didn't need to be that high. And it was constrained to be a boy, so you wouldn't expect to have in-depth conversations about a lot of difficult subjects, like politics or science or whatever. And in that case there were some people who couldn't detect that they were talking to a computer. And that was claimed to be passing the Turing Test, but most people are saying no, it didn't; so we haven't passed the Turing test. When we consider the history of artificial intelligence, the sort of questions we've asked about whether a machine is intelligent have changed considerably over the course of the last 60 or so years. Fundamentally, at the beginning, the questions were reasonably simple. We have come now to the conclusion that it's not enough just to mimic intelligence, but to actually attain a state of consciousness. And how we would be able to subjectively realise another person's, let alone a machine's, subjective consciousness is philosophically a very difficult question to answer. Consider the way that people approach, for example, gameplay: originally the questions were, could the machine play a game of checkers and win against a human opponent? And then we considered whether it could play chess. And then, of course, Go. The machines have been able to not only succeed at playing these games but to beat their human opponents quite successfully. The AI Effect
is this experience that we're finding as the technology advances and we're capable of answering so many more questions about what is possible. It becomes mundane, and we start to have a reductive analysis. So that achievements are simply relabelled as computation. So the AI effect means that we say 'that's not intelligence anymore'. We actually change our minds and have decided that it has to be something more than that. It has to be something more than a simple computation. We've changed the goalpost over time. What we were asking questions about historically, those questions may have been answered,
but as they have been answered we've reduced them to this idea of 'it's just a computation'. And we've expanded our ideas of what's possible, and we start to ask more of AI. One of the big barriers to achieving this goal is the idea of common sense. It's very difficult to teach a robot or machine what exactly common sense is. If you were to try to script a variety of different rules for how a robot should behave in a societal setting, saying what is appropriate behaviour and what isn't appropriate behaviour is arguably endless. There's a variety of circumstances, factors and variables that would influence the way a person engages with the world, and translating those societal norms, behaviours and approaches into a robot is enormously difficult. So if we can find a way to achieve this idea of common sense with robotics, it helps to break down some of the biggest barriers we have, which is when robots need to be able to create behaviour for the final 10% or 5% of scenarios, where something completely unexpected happens. And we need to have some idea of how the
robot could provide some action or task or information in that setting. AI: it's happening whether we like it or not. That's the big thing. The question is, what are we gonna do with it? Are we gonna push it for the benefit of everybody, or for the benefit of a few? I'm really hoping for everybody, but I put as little AI into these as possible, because they're dangerous, and I don't like dangerous AI. So, of course, artificial intelligence is permeating more and more parts of our lives. And it's becoming something that we are starting to rely on more and more. So, for example, I don't need to have an encyclopedic knowledge of the changes in traffic conditions as the day goes on, because I can just look at Google Maps, and it will tell me the best routes at any particular time of day. And I think there'll be an increasing trend of that: relying on AI to help advise us on decisions. Obviously, that has to be very carefully managed, so that the AI does not start making decisions which it thinks are sensible but which we know are not, because there are many, many examples where that's happened in the past. Right now AI and robotics is almost completely a private sector activity. Which is fine; the private sector is ingenious, and it's where the money is to be made. But these companies are becoming larger. I've just heard a new acronym, GAFA: Google, Apple, Facebook, and Amazon. The GAFA companies now have such enormous power that they can use it to make money for themselves and to help their customers, but it can also be abused. And so when do those morph over into a public utility? When does Facebook become a public utility like the telephone company or the pipeline company, a common carrier, where the public has a say in it? We can have this discussion on radio programs and on blogs and video, which we are doing here. But at the end of the day these decisions are being made in boardrooms and laboratories where we don't know what's going on. And so in that sense I think we can have the discussion, but what's going to happen is largely outside of the civic realm.
When we consider the future of artificial intelligence and neural networks, I think at this point in time it's important to realize that where we invest our money really counts. This is fundamentally an economic question. Government at the moment is enormously invested in this idea of innovation. And you'll find that there are a lot of startups arising from university culture, for example, and engineering firms are changing and challenging what they've done previously, because mechatronics engineering is becoming a more common, and not only viable but sustainable and financially rewarding, element of their businesses. Money's coming from different directions. And governments are also funding this through military budgets, of course: both government and non-government military spending on artificial intelligence and mechatronic-type research, in my understanding, now exceeds $8 billion annually across the globe. This is an enormous investment of money. From a community perspective, this is changing the way in which we see robotics moving forward. Artificial intelligence is one of these potentially transformative technologies. And like all new transformative technologies, it acts as a sort of force multiplier: if we do the right things with it, it will have great outcomes for society. But if we're not careful and we
do some of the wrong things with it, it could be an overall negative for society. One of the particular areas where people are concerned about artificial intelligence and the related technologies of automation and robotics is future employment. And it's a valid concern, because artificial intelligence is able to do some of the things that we do in our jobs every day, and sometimes do them better than us. It's worth noting that right now AI cannot do all of almost anyone's job; it can only do part of it. And it's also worth looking at historical precedent. If we go all the way back to things like the Industrial Revolution, there were new transformative technologies that were incredibly disruptive, for a generation indeed. But over the long term, it can be argued that they resulted in an overall increase in the quality of life. There was temporary disruption, though. Yes, robots do pose a threat to employment, as automation did in the days of the industrial revolution as well. How severe is it? That is unclear. Will new jobs be created instead? Absolutely! The fundamental issue is, can society provide a safety net as we move through this new industrial revolution?
Or this robotics revolution that's occurring. If society can provide that, then OK. If we just turn a blind eye to the changes that are going to occur, then it is quite worrisome. One could argue that no job is safe, even that of a professor. In terms of work, let's start where everyone wants to start: if the robots become the workers, what do we do? And that's happened already; it started in the 1970s and the 1980s with very simple machines. If you used to go to a restaurant, people had to take orders and had to cook and all this stuff. You go to McDonald's and there's somebody who might not even have a high school education, and they are punching buttons and the machine is doing all the rest. So it's the deskilling of society. Now that has been... OK, so if you don't get an education you have to work at McDonald's. Now you have machines that know how to design cars, that know how to build bridges, that know how to do accounting and finance - that's the deskilling of the professional class. And then what do we all do? How do
we prepare for all this? We have to ask companies with this kind of enormous societal influence to be more transparent about what they do, what their plans are, what their algorithms are, and how they are approaching these kinds of things. As we do with the military, as we do with the police - who also have tremendous power at their disposal, but for the most part they try, and are supposed, to be transparent. Now we have private businesses that do not have to be transparent: "Oh, it's all proprietary and there's nothing we can do about it." That's a dangerous situation. So we can have all the discussion we want, but it's in the hands of people who don't have to say what they are doing and can do basically whatever - until something bad happens, and then, oh, Facebook gets a red face and they have to change and tweak this and tweak that, and away they go. But for the most part it's in their hands. Society is always concerned about how things will play out. One of the barriers with AI technology is that a lot of it is very complex, very sophisticated,
and much of it is locked behind proprietary company doors. So making sure that everyone in society, not just the technologists developing it, has a sufficient understanding of what it can actually do, what it will be able to do in the near future, and what is hype and what is reality, is critically important, because we want everyone to be able to voice an informed opinion on how AI should be rolled out and deployed in society. I see robotics as another support tool that will help with what we're currently trying to achieve, which is to advance civilization in different ways. I see robots as another technology that will help us achieve our goals, enhancing efficiency and productivity. I don't think this is an argument of robots versus humans; I think it's really important to look at what exactly robots are doing to help support human life, what they're doing to help enhance productivity and efficiency, whether it's helping to harvest more food, helping to move goods in a way that supports people by getting those goods to them sooner, or being able to provide support in education or healthcare scenarios.
So I really see this as a civilization that can work well together, if robots are created in a way that's effective and acceptable, and deployed in a way that... ♪♪♪ So there's a lot of focus on when artificial general intelligence will happen, if ever. I don't know. I know that it feels like we aren't anywhere close now, as opposed to other, more mundane goals where it might be 5 or 10 years. All I can say is that there is definitely a lot of uncertainty. We've seen with things like the AI beating the Go player that our ability to predict when these events will happen is very much uncertain. And so we should treat general intelligence with the same, I guess, respect in terms of unpredictability. I've always been saying somewhere between 50 and 200 years looks like a reasonable timeline for that to happen. And that's for artificial general intelligence, which I guess you would say is intelligence comparable to human intelligence. The other idea is the singularity, right? You might hear something about that. The idea is the point when machine intelligence exceeds human intelligence. So it's important to
understand that AI has already surpassed human intelligence in many domains. So for instance, a computer first beat the best chess player in the world in 1997, when Deep Blue beat Garry Kasparov. And what happens is, there's this idea of these challenges: if a computer could do that, it would be really intelligent. But as soon as that happens, people reclassify what it means to be really intelligent. So the humans always come out on top. We had chess, and then more recently we've had Go. And so I see it not so much as convergence towards a singularity as just a succession of advances in particular domains. And we will learn to live with these advances. And often we can embrace those advances and use them to enrich our lives. So for instance, the world of chess has not collapsed because computers are now much better at chess than humans; rather, humans now use the computers to help them study their own games. And it enriches humans' understanding of the game, the kind of insights that these computers can
now bring. You really need to think about this timeframe, 50 to 200 years, when we will have machines that are as smart as people. You know, what will that really mean for society? That's an amazing question. Will people just not need to work? Will everything be able to be done by machines, so you can work if you want to, but you won't have to? But if you're not working, what are you doing? Just playing games? I mean, you might get bored with that. Then there's the possibility of uploading minds. A lot of this stuff sounds like science fiction, but the fact is, if we have artificial general intelligence, and we have superintelligence, and we pass that singularity, uploading a mind is probably going to be possible. And, you know, as bizarre as it sounds, as bizarre as it seems, most people say that will happen within 200 years. And these are not crazy people; these are the experts in the field. So given how far we've come in the last 200 years, from the pre-industrial age to now, to having smartphones and talking about AI as a real possibility, in the next 200 years,
given the pace of change, anything could happen! This is a different way of looking at technology. Historically, machines have been inanimate objects. They haven't had this idea of self-awareness, of moral reasoning, the capacity for language or interaction. So what we're looking at is a different way of utilizing technology, and of technology interacting with us. That doesn't mean that we have to lose control. What it means is that we have a creative impetus, a capacity to express not only our interest in, and I think fascination with, what it means to be conscious, but a very real way that this can be explained back to us. It's a collective endeavor; it's not about handing authority over to some sort of hypothetical machine overlord. Arnold Schwarzenegger played the Terminator, and I think we all enjoyed those movies; I personally did. But at the end of the day, that's not the future that we're working towards right now. It's far more about collective integration and understanding of what, in fact, it is to be human. What we'll definitely see over the next five or 10 years is continuing steady progress in some areas
of AI and its rollout, often invisible to you and me, into the things that we do every day: from buying a coffee in the morning, to getting in a car, to booking a restaurant to go to that night, we'll see AI playing an increasing role in our everyday lives. In terms of significant advances - AI that's smart enough to drive autonomous vehicles around us, or AI that can act as a personal assistant and help us in our everyday lives - that's still a little more uncertain. It may happen soon, or it may take us much longer to create technology that performs well enough to be useful. In terms of the long term, the general intelligence? Well, we really don't know. But there will probably be some flags raised along the way, in terms of milestones, that will preface getting to that artificial general intelligence. I think we've got a long way to go. If you look at self-driving cars, for instance, they've been just around the corner for quite a long time now. And these are just very, very difficult computational problems. And so I think we've still got a bit
of time. There's a lot of hype in the media about these technologies, but I think there is still a way to go. It will require a lot of involvement from different technology companies, as well as regulators and people involved in making policy and economics, roboticists themselves, ethicists, sociologists, anthropologists - it's really going to be a huge collective endeavor when we're looking at placing robots and different kinds of systems in environments and societies. So I think this is a conversation we all need to be having right now: how exactly can we do this to make sure that everyone receives the greatest benefit from using robotic systems, and that they support and help people in a way that benefits everyone as a civilization? My goal, which is unlikely to happen during my lifetime, is to consider robots as a species - a species that is selected using a Darwinian mechanism and that can create itself, in a way, using 3D printing. Not in the sense of, you know, the Terminator, or self-reproducing machines or anything like that; we very carefully don't do that. But what I'm saying is, we could have a
small factory that has all the components to build a load of robots, and we could deploy it, you know, on an asteroid, in the deep sea, in deep space, anywhere we can't get humans easily or don't want to put humans, right? And this factory can learn, using the ability to create robots, about the best types of robots to use in a certain situation. If we create superintelligent AI that is then able to design its successor - even more capable or even smarter AI - that successor will not necessarily have the same safeguards that we built into the original AI. Well, you know, I guess, if we've done our job correctly, then that superintelligent AI that we've created will then have the desire, just as we did, to build the same safeguards into anything that it designs. You know, I guess you could ask, how many iterations can that actually go on for before it goes, 'Well, actually, maybe this isn't such a good idea'? That's really speculative, you know. So we're talking about the existential threat of AI and
how much of a threat AI is to us, or could be in the future. And yes, there are some people who are quite concerned about this - about superintelligent AI running amok and, for whatever reason, wiping out the human race. Most people who work in the field are not so concerned about that. I think the general feeling is that by the time we can construct an AI such as that - which is obviously some way off; we're nowhere near it at the moment - we'll also understand ways to control it or contain it, or at least to render it harmless, whether it's controlled or contained or not, to make it benign. The more real concern of people in the field doing research on this is not the malicious AI; it's basically, well, our own incompetence. So the unintended side effects of superintelligent AI. It's not going to, you know, just decide to wipe us out because it deserves to live and we don't - that's not really the concern. The concern is that we'll just make an AI that we lose control of. It will be doing something that
we want it to do, but it'll be doing it in a way that we don't want it to do it. ♪♪♪ When we look at the future of AI, and the singularity that people think might be a likely next step in the way this technology is moving forward, well, there are a lot of ideas about what could reasonably happen. Unfortunately, most of those hypotheticals just won't be. A lot of these ideas actually could be very problematic for us, for our future and for our safety. And whilst I do think it's worth considering them, essentially it's just science fiction. At this point in our lives, I think it's far more worthwhile considering science fact: where are we right now? What are we investing in? And what do we expect to get from this technology? If and when a singularity arises, it will be a single, linear trajectory; it won't be a hypothetical. And so when we spend time considering all of these ideas about what's possible, we can lose sight of what's really happening now, which is an incredibly exciting moment in our collective history. ♪♪♪ Hello Adam, I have completed a full analysis of your
vital signs and have verification from your doctor. There is a 75% chance that you are having a mild heart attack. Please do not be alarmed. An autonomous vehicle has been dispatched and should intercept your position in approximately 10 minutes. Please walk slowly in your current direction. There is an open landing pad in 300 meters. Please wait there. I will continue to monitor you. Please stay on the line. ♪♪♪