This is what the future of hyperintelligence looks like. Most people no longer own cars; instead, artificial intelligence operates fully electric, networked self-driving vehicles. As a result, air pollution and traffic congestion plummet across the planet. Self-navigating aerial drones are on the front lines of disaster response and search and rescue missions. Most people live and work side by side with self-aware androids. These AI companions boost productivity and liberate humans from tedious tasks, completely revolutionizing modern life.

"I feel like I'm in a superhero movie." Today, scientists are blazing a trail to this very future. "The fact that we're enabling the system to make its own decisions... I don't even know where to begin with that." I want to know what breakthroughs are being made. "It's talking, and it's having this dynamic conversation with you. That's the wonder." "Machines can be self-aware in ways that we can't." That will forge the future. "Oh my gosh, it's looking at me!" Hyperintelligence.

My name is Shobani Bigler. As an engineer and neuroscientist in training, I'm obsessed with artificial intelligence. As a kid, my father took me to tech and robotics trade shows, where I became dazzled by science.
Every year, the inventions were smarter and smarter. Artificial intelligence has come a very long way in the last couple of years. Most AI technologies are programmed to think for themselves, to learn from examples, kind of like simulating human intelligence in a way that learns from past experience. But how does AI actually work? In the future, will AI achieve human traits like emotions, consciousness, or even free will? And how will humans and robots work together?

Today, the clearest road to the future is the self-driving car. Unlike a regular car, which is just a machine, a self-driving car is a robot that can make decisions. In the future, will every car on the road become driverless? To find out, I've come to a hotbed of self-driving car research: Pittsburgh, Pennsylvania. Every single person has started to have conversations about self-driving cars, because essentially they're the future. But in order to understand them, we have to look under the hood. Making decisions on the fly, even simple ones like these, does not come easy for computers. To discover the inner workings, I'm meeting a true pioneer in the field. "Please, get in."
"Thank you!" Dr. Raj Rajkumar of Carnegie Mellon University. Carnegie Mellon is the birthplace of self-driving car technology, thanks in large part to the work of Raj and his colleagues; they've been the leading innovators in this field for more than 25 years. So how does his self-driving car make decisions to safely navigate the world like a human driver? "Should we get started?" "Yes, we can." Since Raj is distracted by our conversation, for safety reasons the state of Pennsylvania requires another driver in the front seat to monitor the road. This is so cool; I'm nervous but excited. "What's the longest you've ever driven a vehicle autonomously?" "We have gone hundreds of miles." "Awesome." "I'm going to go auto by pressing this button." Oh my gosh, it really is driving itself!

While most self-driving cars are built from the ground up, Raj just bought a regular used car and hacked it with powerful onboard computer systems, making it more adaptable than other regular cars. "We installed a bunch of sensors in it. It is able to shift the transmission gear; it is able to turn the steering wheel, apply the brake pedal and the gas pedal. It's really the software that runs on the computer that makes this capability practical, and there are some very key, fundamental artificial intelligence layers that try to mimic what we humans do." To mimic human decision-making, most self-driving cars use a combination of cameras and advanced radar to see their surroundings. The AI software compares external objects to an internal 3D map of streets, signs, and transportation infrastructure. "The map is something that is static in nature. Traffic and people and objects are dynamic in nature. The dynamic information, it figures out on the fly."
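To make that split concrete, here's a minimal sketch, in Python, of how a planner might separate objects that belong to the prebuilt static map from dynamic ones that must be tracked on the fly. The landmark names, positions, and matching tolerance are illustrative assumptions, not Raj's actual software.

```python
# Minimal sketch (not the CMU software): separating static map features
# from dynamic obstacles, the split described above.

STATIC_MAP = {  # landmark id -> known map position (x, y) in meters (assumed)
    "stop_sign_14": (120.0, 45.5),
    "lane_marker_7": (118.2, 44.0),
}

def classify_detection(label, position, tolerance=1.5):
    """Label a perceived object 'static' if it matches a mapped landmark
    near its expected position; otherwise treat it as dynamic."""
    expected = STATIC_MAP.get(label)
    if expected is not None:
        dx = position[0] - expected[0]
        dy = position[1] - expected[1]
        if (dx * dx + dy * dy) ** 0.5 <= tolerance:
            return "static"      # already on the prebuilt map
    return "dynamic"             # pedestrian, car, etc.: track on the fly

detections = [("stop_sign_14", (119.8, 45.3)), ("pedestrian", (121.0, 47.2))]
for label, pos in detections:
    print(label, "->", classify_detection(label, pos))
```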
Comprehending dynamic information allows it to understand where it is heading in space and react to changes and traffic signals. Uh-huh! It recognized the stop sign. "Yes." See, there are pedestrians; we definitely should not be running into that person. "The AI challenge to make a vehicle drive itself is not an easy task. Safety is priority number one, and priority number two, and number three as well."

But what happens when the AI system doesn't understand specific objects in its surroundings? "A pedestrian in Tempe, Arizona was killed last night by a self-driving taxi. It's believed to be the first fatality caused by an autonomous vehicle." This tragic accident happened because a self-driving vehicle didn't recognize something in its environment: a jaywalker. In the future, advanced self-driving cars will have to make life-and-death decisions on the fly. If avoiding the jaywalker means crashing head-on with another car, potentially killing the driver, what should it choose? How will scientists address monumental problems like these?

The first wave of artificially intelligent robots were programmed by engineers with static sets of rules to achieve their goals. These rules are called algorithms, but not all rules work in all situations. This approach is very inflexible, requiring new programming to accommodate even the smallest changes in any given task. A new approach called machine learning has changed everything. With machine learning, computers can absorb and use information from their interactions with the world to rewrite their own programming, becoming smarter on their own.
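As a toy illustration of that idea, the sketch below adjusts its own decision parameters from labeled examples instead of following hand-coded rules. A tiny perceptron is my generic stand-in here, not any particular team's code.

```python
# Toy machine learning: the program's "rules" (weights) rewrite
# themselves from examples, rather than being fixed by an engineer.

def train_perceptron(examples, epochs=20, lr=0.1):
    """examples: list of (features, label) pairs with label in {0, 1}."""
    n = len(examples[0][0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, y in examples:
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred                    # learning signal from experience
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err                     # parameters update themselves
    return w, b

# Hypothetical features: (obstacle_height, gap_width) -> passable? (1 = yes)
data = [((0.1, 1.2), 1), ((0.9, 0.4), 0), ((0.2, 0.9), 1), ((0.8, 0.3), 0)]
print(train_perceptron(data))
```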
To see machine learning in action, I'm meeting another Carnegie Mellon team at an abandoned coal mine. Dr. Matt Travers leads a group that won a challenging subterranean navigation competition held by the Department of Defense's research agency, DARPA. The machines are affectionately known as R1 and R2, and the R stands for robot. These robot twins are designed for search and rescue missions too dangerous for humans, and unlike the self-driving car, they operate without a map. To achieve this, they have to learn to identify every single object they encounter on the fly. "They are programmed to go out and act fully autonomously, and they will be making 100% of their own decisions. So they're recognizing objects; they're making the decision of where to go next, where to explore."

To see this in action, the R2 robot is starting on a simulated search and rescue mission: find a stranded human dummy in the mine. "Imagine having a map of a collapsed mine before you send a team of people to go rescue someone in that mine, right? It's a game changer." How the robot discerns elements in this environment parallels how an infant learns about her environment. A three-month-old uses her senses to cognitively map out her environment and learn to recognize her parents. She ultimately uses this map to interact with everything in her world, just like this robot. "Okay, so, we ready to roll?" Artificial intelligence makes this learning curve possible. But how does it create its own map and identify a human on its own, without a pre-existing mapping system or an internet connection? Test engineer Steve Willets shows me how the R2 robot can detect a stranded person. "When you're in a search and rescue scenario, that's the kind of situation where you'd want to deploy one of these." As it explores and maps the cave, it drops devices called signal repeaters to create a Wi-Fi network trail. "It drops those just like breadcrumbs along the path." Using this network, the robot sends data back to home base to create a map.
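The breadcrumb logic can be sketched simply: drop a repeater whenever the strongest link back toward base falls below a threshold. The free-space signal model and the threshold value are assumptions for illustration, not the actual flight code.

```python
# Sketch of the "breadcrumb" idea: leave a Wi-Fi repeater behind whenever
# the link back toward base gets too weak.

import math

DROP_THRESHOLD_DBM = -75.0   # assumed: drop a repeater below this signal

def signal_strength_dbm(robot_pos, node_pos, tx_dbm=-30.0):
    """Crude free-space path-loss stand-in for the real radio link."""
    d = max(1.0, math.dist(robot_pos, node_pos))
    return tx_dbm - 20.0 * math.log10(d)

def maybe_drop_repeater(robot_pos, network_nodes, repeaters_left):
    """network_nodes: positions of the base station plus dropped repeaters."""
    best = max(signal_strength_dbm(robot_pos, n) for n in network_nodes)
    if best < DROP_THRESHOLD_DBM and repeaters_left > 0:
        network_nodes.append(robot_pos)   # leave a breadcrumb here
        repeaters_left -= 1
    return repeaters_left

nodes = [(0.0, 0.0)]                      # base station at the mine entrance
left = 5
for step in range(1, 40):                 # robot drives deeper into the mine
    left = maybe_drop_repeater((float(step * 10), 0.0), nodes, left)
print("repeaters dropped at:", nodes[1:])
```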
At the same time, the robot must look at every single object to identify the stranded human. "So the lidar system is giving a full laser scan." Lidar stands for light detection and ranging. Similar to its cousin radar, which uses radio waves, lidar systems send out laser pulses of light and calculate the time it takes for each pulse to hit a solid object and bounce back. This process creates a 3D representation of the objects in the environment, which the onboard computer can then identify.
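The arithmetic behind that is compact: the pulse travels out and back, so range is the speed of light times the round-trip time, divided by two. A small sketch; the pulse-to-point conversion makes a simplifying assumption about the sensor frame.

```python
# Lidar time-of-flight: range = (speed of light * round-trip time) / 2.

import math

C = 299_792_458.0  # speed of light, m/s

def lidar_range_m(round_trip_s):
    return C * round_trip_s / 2.0

def point_from_pulse(round_trip_s, azimuth_rad, elevation_rad):
    """Convert one pulse into a 3D point in the sensor's own frame."""
    r = lidar_range_m(round_trip_s)
    x = r * math.cos(elevation_rad) * math.cos(azimuth_rad)
    y = r * math.cos(elevation_rad) * math.sin(azimuth_rad)
    z = r * math.sin(elevation_rad)
    return (x, y, z)

# A pulse that returns after ~66.7 nanoseconds hit something ~10 m away.
print(round(lidar_range_m(66.7e-9), 2))  # ≈ 10.0
```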
This is similar to how the eye feeds visual data to the brain, which then recognizes objects by tapping into our pre-existing knowledge of what things look like. By fully understanding its environment, R2 can make better decisions about where to go and where not to go. "What our robot is doing right now is exploring. So the robot came to a junction, and off to the left it could see that it couldn't get past, right? So it saw the opening to the right, and that's where it went." It kind of looks like it's making decisions about whether or not to climb over these planks and obstacles all up in this area. "Right, that's exactly what it's doing at this point." Just like a baby, R2 learns through trial and error. It's like a little dog wagging its tail. But there's no one here to rescue, so it moves on.

As R2 continues to map out the mine... oh my God, a human! It stumbles upon its intended target. "That is Rescue Randy." "Hello, Rescue Randy. Randy scared me." With the discovery of Rescue Randy, the R2 robot can not only alert emergency personnel but also give them a map showing how to find him. That is incredible; it knows what it's doing. These incredible rescue robots are clearly paving the path to the future of hyperintelligence. In the future, autonomous exploration vehicles perform search and rescue missions in every conceivable disaster zone, even in avalanches atop Mount Everest. Incredibly intelligent off-road vehicles are also mapping deep cave systems previously unknown to science, discovering vast supplies of rare earth elements essential for modern technology. Artificial intelligence will clearly save human lives in the future. But there's a lot of terrain on Earth that's too difficult to navigate on wheels. How will intelligent robots make their way over rainforests, bodies of water, or even mountaintops?
In Philadelphia, Jason Derenick of Exyn Technologies is working to overcome this problem. "What we focus on is autonomous aerial robotics, to enable drones to safely navigate in unknown or unexplored spaces." Jason's team has built the first industrial drone that can fly itself anywhere. Incredibly, these autonomous robots navigate without GPS and map their environment as they go. "We focus on all aspects of autonomy, which includes perception, orientation during flight, motion planning, and then finally control." But going from two dimensions to three dimensions requires an increase in artificial intelligence processing. The mission for their drone is to fly independently through a three-dimensional path, from one end of the warehouse to the other. "Starting mission: three, two, one, now." To mess with its computer mind, Jason's team places new and unexpected obstacles in its path. Will the drone recognize these unexpected changes? Will it get lost? Will it crash? "Essentially, we have a gimbaled lidar system that allows the vehicle to paint a full 360-degree sphere around it in order to sense its environment." Like the robot in the mine, this aerial robot uses lidar to see.
"It actually generates a voxelized representation of the space, which you see here, and for each one of these cubes in the space it's trying to determine whether that cube is occupied or whether it's free space." The robot's onboard computer makes real-time decisions about where to go based on its visual input, kind of like us humans.
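Here's a minimal sketch of that voxelized representation: space is cut into cubes, each lidar return marks its cube occupied, and the planner asks whether a cube is free before flying through it. The voxel size and data layout are illustrative assumptions.

```python
# Occupancy voxel grid: cubes of space marked occupied by lidar returns.

VOXEL_SIZE = 0.25  # meters per cube edge (assumed)

def voxel_index(point):
    """Map a 3D point to the integer index of the cube containing it."""
    return tuple(int(c // VOXEL_SIZE) for c in point)

def build_occupancy(points):
    """points: iterable of (x, y, z) lidar returns -> set of occupied cubes."""
    return {voxel_index(p) for p in points}

def is_free(occupancy, point):
    """The motion planner asks this before flying through a region."""
    return voxel_index(point) not in occupancy

occ = build_occupancy([(1.0, 2.0, 1.5), (1.1, 2.1, 1.4)])
print(is_free(occ, (5.0, 5.0, 1.5)))  # True: free space, safe to plan through
print(is_free(occ, (1.05, 2.05, 1.45)))  # False: a whiteboard, fly around it
```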
Incredibly, the drone recognizes the whiteboards and flies around them. "One of the things about this system that makes it particularly special is that it's actually being used in the real world, to keep people out of harm's way." Autonomous drones like these are already at work in hazardous industries like mining, construction, and oil exploration. They safely conduct inspections in dangerous locations and create detailed maps of rough terrain. "From a technological perspective, the fact that we're able to do everything that we're doing on board, self-contained, enabling the system to make its own decisions... I don't even know where to begin with that." Self-flying robots like these will revolutionize search and rescue and disaster response. They could also transform how packages are delivered. But there are limits to what large single drones can do; more complex tasks will require teams of small, nimble, autonomous robots.

Dr. Vijay Kumar at the University of Pennsylvania is working with swarms of drones to perform tasks like playing music or building structures cooperatively. He's also developing technologies to tackle some very big problems, including world hunger. "In a couple of decades we'll have over nine billion people to feed on this planet. Of course, that's a big challenge." To take on a task this big, he's building an army of small flying robots with the ability to synchronize. "We think about technologies that can be mounted on small flying robots that can then be directed in different ways, like a flock of birds reacting to a predator, or a school of fish: you have coordination, collaboration, and it all happens very organically." Using AI to get robots to work as a coordinated, collective group is a daunting task. "Three to five years ago, most of our robots relied on GPS-like sensors. Today, we have the equivalent of smartphones embedded in our robots, and they sense how fast they're going by just looking at the world, integrating that with the inertial measurement unit information, and then getting an estimate of where they are in the world and how fast they're traveling."
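A minimal sketch of that kind of fusion, assuming a simple complementary filter in place of the real estimator: integrate the IMU for a fast velocity estimate, then let the camera's slower estimate pull it back when it drifts.

```python
# Complementary filter: blend IMU integration with a camera-based
# velocity estimate. The blend factor is an illustrative assumption.

def fuse_velocity(v_prev, accel, dt, v_visual, blend=0.98):
    """Integrate the IMU, then correct its drift with the camera."""
    v_imu = v_prev + accel * dt                      # fast, but drifts
    return blend * v_imu + (1.0 - blend) * v_visual  # vision pulls it back

v = 0.0
# (acceleration m/s^2, camera velocity estimate m/s) at each 0.1 s step
for accel, v_visual in [(0.5, 0.04), (0.5, 0.09), (0.5, 0.15)]:
    v = fuse_velocity(v, accel, dt=0.1, v_visual=v_visual)
print(round(v, 3))  # fused forward speed in m/s
```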
This I've got to see, and I'm going to check it out virtually, as a robot at UPenn. Remotely, from Vijay Kumar's lab, I sample my surroundings. Oh, I hit something! "Hello." "Hi!" Vijay's apprentice, Dinesh Thakur, is my guide. "Today we're going to show robots flying in a formation." "Great, can we see how that works?" "Sure, yeah." The first step Dinesh takes in coordinating the drones is to provide them with a common point of reference, in this case a visual tag, similar to a basic QR code. Using only the onboard camera, these drones reference the code on the tag and visualize where they are in space. Using sophisticated bio-inspired algorithms, the drones then figure out where each other drone is within the collective swarm. "These drones are communicating with one another, right?" "Yeah, right now they're communicating over Wi-Fi." It's so cool. Future versions of these drones will create their own localized wireless network to communicate, but for now this swarm is a proof of concept. "You've defined a formation, and then they're assuming that formation?" "Yeah. I just say I want to form a line, and the drones themselves figure out where they should go."
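A sketch of that step, under simplifying assumptions: each drone's position is already known relative to the shared tag, and a greedy matcher assigns each drone a slot on the line. The real system's assignment method may well differ.

```python
# Line formation: compute slot positions in the tag's frame, then
# greedily assign each slot to the nearest unassigned drone.

def line_slots(n, spacing=1.0):
    """Target positions for a line formation, in the tag's frame."""
    return [(i * spacing, 0.0) for i in range(n)]

def assign_slots(drone_positions, slots):
    """Greedy matching: each slot takes the closest unassigned drone."""
    unassigned = dict(enumerate(drone_positions))
    plan = {}
    for slot in slots:
        i = min(unassigned,
                key=lambda k: (unassigned[k][0] - slot[0]) ** 2
                            + (unassigned[k][1] - slot[1]) ** 2)
        plan[i] = slot      # drone i should fly to this slot
        del unassigned[i]
    return plan

positions = [(0.3, 1.2), (2.1, 0.4), (0.9, -0.5)]  # tag-relative estimates
print(assign_slots(positions, line_slots(3)))
```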
Once they figure out where they are in relation to each other, they can work together to accomplish a shared goal, like ants working as a collective entity. "Once they can coordinate between each other, we can send them out to do specific missions." That's really cool. Swarms of flying robots have their advantages: unlike a single drone, self-coordinating swarms can perform complex operations like mapping much faster, by working in parallel and combining their data. And losing one drone in a swarm doesn't doom the whole operation. Vijay imagines employing his advanced swarm technology on farms. This precision agriculture will help feed the world's growing population. "We'd like robots to be able to roam farms and provide precise information about individual plants, which could then be used to increase the efficiency of food production. That would be a huge impact in the world. This is our duty as responsible citizens and as responsible engineers." This high-flying approach to resolving the problems of the future is definitely a path I can get on board with. In the future, artificial intelligence coordinates flocks of drones to protect the environment and boost the food supply. To combat the negative effects of climate change on agricultural crops, robotic bees assist with pollination in orchards and on farms, making them more sustainable and productive. Fish-shaped underwater robots automatically deploy at the first sign of an oil spill; these drones create a barricade to rapidly contain spills, saving marine life in oceans across the world.

Modern society has a long history of building robots to do work that's dangerous, difficult, or too repetitive for humans. AI is poised to automate all kinds of tedious work, ranging from factory work to taxi driving to customer service. While some are worried that smart robots will replace human labor, that's not necessarily the case: as a sector, artificial intelligence is expected to generate 58 million new types of jobs in just a few years. So what will the future of human-robot interaction mean for our work and livelihoods?

I'm at the Massachusetts Institute of Technology to meet Dr. Julie Shah. She's leading groundbreaking research in human-robot collaboration. "My lab works to develop robots that are effective teammates with people." Julie and her team are creating software that helps robots learn from humans, even giving them insight into different human behaviors. By being aware of real people, robots can directly work and interact with them. "How do you teach these robots or machines to do these human-like tasks?" "The first step, as it would be for any person: the first thing they do is become immersed in the environment and observe. And then we need an active learning process; the robot needs to be able to communicate, or show back to the person, what it's learned."
"We don't want the robot to learn the direct sequence of actions; we want the robot to learn a more general understanding. That's ultimately our challenge." But getting a robot to grasp the bigger-picture concept, in order to understand the basics of its task in the first place, requires a lot of observation and, well, hand-holding. "My research is focusing on trying to make robot programming easier, by trying to teach robots how to do tasks by demonstrating them." Julie's colleague Ankit Shah shows me how this robot is learning to set a table. "So this is all the silverware, and the plates, the bowls, the cups, and this is the table that it has to set?" "Yes." "That is good, okay." As any parent knows, the first step in helping a child to learn is to model the desired behavior. It's the same with machine learning. In this case, the AI robot recognizes the objects with a visual tag similar to a QR code, and for two weeks it observes Ankit setting a table. "So did you pick up an item and then place it on the dinner table?" "That's basically what we did, and based on that, the robot learns what it means to set a table."
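Here's a minimal sketch of that learning-from-demonstration idea: tally where each tagged object ends up across many demonstrations and keep the consensus placement. The data format and the object and place names are assumptions for illustration.

```python
# Learning "what it means to set a table" from repeated demonstrations:
# count object -> place outcomes and keep the most common one.

from collections import Counter, defaultdict

def learn_placements(demonstrations):
    """demonstrations: list of dicts mapping object_tag -> observed place."""
    tallies = defaultdict(Counter)
    for demo in demonstrations:
        for obj, place in demo.items():
            tallies[obj][place] += 1
    # keep the most common destination observed for each object
    return {obj: c.most_common(1)[0][0] for obj, c in tallies.items()}

demos = [
    {"plate": "center", "spoon": "right_of_bowl", "cup": "upper_right"},
    {"plate": "center", "spoon": "right_of_bowl", "cup": "upper_right"},
    {"plate": "center", "spoon": "right_of_bowl"},  # partial demonstration
]
model = learn_placements(demos)
print(model["spoon"])  # right_of_bowl: a late-revealed spoon still gets placed
```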
Dynamic tasks like setting a table or doing laundry are easy for humans but incredibly hard for robots. The software has difficulty with so many variables, and even subtle changes in the environment can throw it off. "One of the things which I like to do is to actually hide some of the objects, so it's not going to see the spoon. And the reason we do this is we want to show that the robot is robust to some of the disturbances in the task." The robot's software has learned what each object is and where it goes. Now let's see if it has learned the concept and can think dynamically to set the table. "So you can just pick up the card." Here we go; I've revealed the spoon. Incredibly, the robot recognizes the spoon and instantly places it next to the bowl. This reveals that the robot has learned the concept and executes the right action dynamically. In the process, the software is continuously writing and revising its own computer code; basically, it's learning. If, like humans, robots can grasp the bigger-picture context, and not just the mathematical tasks, will AI-driven robots of the future spell the end of having to work? "The key aspect is not developing AI to replace or supplant part of the human work, but really understanding how we fit them together, as puzzle pieces. People work in teams to build cars, to build planes, and robots need to be an effective team member." It's real teamwork, as if you're in a basketball game: you have your goal, and you have to think spatially about who you're going to pass the ball to, and at what time, so that everything matches up.
"The analogy of a basketball team is outstanding, because we actually need to know spatially where they're going to be, and the timing is of critical importance. And so we need to develop the AI for a robot to then work with us." One of the most difficult aspects of creating hyperintelligence is actually something that even we humans sometimes get wrong, and that is anticipation. Anticipating what a teammate or coworker might do requires understanding contextual information on a much more sophisticated level, and predicting what will happen next. Can robots make predictions as accurately as we can? Abby is the lab's industrial robot, and Pem Lasota is giving this machine the intelligence it needs to anticipate a human coworker's actions. "This is a simulated manufacturing task that we have set up, to simulate some sort of task that a person and a robot could feasibly work on together." For safety reasons, actual human-robot interaction is at present fairly minimal. "Typically, in a factory, you would see these guys behind a metal cage, and you wouldn't have people working with them. So what we're trying to do is make something that a person could safely interact with." "What are the human and the robot supposed to do together in this task?" "In this task, a person is placing fasteners in some surface of a plane, and the robot is applying a sealant over it." "Okay, can we see it happen?" "Sure." In order to work together, the robot must first be able to see and recognize the actions of its human counterpart and adjust to the person's every move. Ooh, I feel like I'm in a superhero movie. "So the cameras in the room can see these lights and track your hand, so that your hand doesn't get cut off by the robot." "That's right, yeah."
"So the cameras and the lights basically work as eyes for the robot." So that's how the robot knows where I am. The monitor shows the visual representation of the room that's inside the robot's mind. "So this is what the robot might be doing if, you know, I'm not in the way: the robot's just sealing, and I'm not supposed to be here." Pem does something the robot has no way of expecting. "I put my hands in the robot's way." Whoa! By quickly understanding this human action, the AI software responds accordingly, by stopping.
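The stop-on-intrusion behavior can be sketched in a few lines: every control cycle, if the tracked hand is inside the robot's active workspace, halt. The zone geometry and the robot interface here are assumptions, not the lab's actual stack.

```python
# Safety stop: freeze whenever the camera-tracked hand enters the
# robot's active workspace.

def inside(zone, point):
    xmin, ymin, xmax, ymax = zone
    return xmin <= point[0] <= xmax and ymin <= point[1] <= ymax

def control_step(robot, hand_position, active_zone):
    """Called every cycle with the camera's latest hand estimate."""
    if hand_position is not None and inside(active_zone, hand_position):
        robot.halt()               # human in the way: freeze immediately
    else:
        robot.continue_motion()    # workspace clear: keep sealing

class FakeRobot:                   # stand-in interface so the sketch runs
    def halt(self): print("HALT")
    def continue_motion(self): print("moving")

control_step(FakeRobot(), (0.4, 0.2), active_zone=(0.0, 0.0, 1.0, 0.5))
```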
"It's important to be able to share the workspace." Building on this sense of teamwork, Pem's next step is helping Abby anticipate where he will move next, based on subtle contextual cues. "So in this case, the robot will not only track which actions I've done so far, but also anticipate which portion of the space I'm going to be using. And when it's planning its own motions, it'll avoid those locations, so that we can work together more closely. So what you'll see now is, after I place this bolt, the robot's going to predict I'm going to go to this one next, so it'll behave in a different way. So now that I've placed this bolt, the robot takes a more roundabout path, which allows me and the robot to work more closely together, and I don't have to worry about it crashing into me, because I can see that it's trying to avoid me. Similarly, on this side, I place this bolt and you see the robot takes a more roundabout path." "Yeah, because it knows you're going to go there." "It slowed down because it's close to me, right, so we can work together at the same time. So not only is the interaction more efficient, in that the robot's not spending too much time standing still; it's safer, because the robot's not constantly almost hitting me, and it also feels nicer for the person working with the robot."
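A sketch of the anticipation step, with a deliberately simple stand-in for the lab's learned predictor: guess that the person will go to the nearest remaining bolt, then mark that neighborhood off-limits for the robot's motion planner.

```python
# Anticipation: predict the human's next target, then carve a keep-out
# region around it so the planner routes the arm around the person.

import math

def predict_next_bolt(remaining, last_position):
    """Assumed heuristic: people tend to move to the closest open bolt."""
    return min(remaining, key=lambda b: math.dist(b, last_position))

def forbidden_region(bolt, radius=0.3):
    """Keep the robot at least `radius` meters from the predicted spot."""
    return (bolt, radius)

done = [(0.2, 0.1)]                        # bolts the person has placed
remaining = [(0.5, 0.1), (1.4, 0.9)]       # bolts still open
nxt = predict_next_bolt(remaining, last_position=done[-1])
print("avoid around:", forbidden_region(nxt))  # plan a roundabout path
```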
I really love this sense of teamwork. Programming robots to coordinate with us and anticipate where we will move won't only revolutionize the workplace; it will also change society at large. In the future, the coordination of human and machine is so advanced that this collaboration increases productivity and accuracy in most industries. AI robots now accompany surgeons in hospitals across the globe. They anticipate the doctor's needs and hand over the appropriate medical tool just before it's needed. This dramatically reduces surgery times and human error.

As machines become smarter in their interactions with humans, will they ever develop consciousness? And will artificial intelligence actually surpass human intelligence? Some machines have already exceeded human ability in games, like Watson in trivia and other systems in chess, but these AIs were designed to master just a single skill. These programs use brute-force computer processing power and specially tailored software to beat their human opponents. To achieve the holy grail of hyperintelligence, scientists must develop systems with flexible, human-like abilities to both learn and think. This form of smarts is called artificial general intelligence. I'm back in New York City, on my own campus at Columbia University, to meet with Dr. Hod Lipson. Hod's lab is developing creative robots that paint original artworks, self-assembling robots, and even robots that learn about their world without human assistance. But his ultimate goal is even more ambitious: can a machine think about itself? Can it have free will?
"I believe that, in fact, machines can be sentient, can be self-aware, in ways that we can't." As a neuroscientist, I know we've only scratched the surface of our scientific understanding of how consciousness works in humans. How could one possibly use computer code to put this transcendent feature into a robot? "Our hypothesis is actually very simple: it is that self-awareness is nothing but the ability to simulate oneself, to model oneself, to have a self-image." The first step toward creating robotic consciousness is to teach the software to build an image of its physical, mechanical self inside its computer mind. We humans take consciousness like this for granted, even in simple moments like understanding our own image reflected in a mirror. Humans start to develop awareness of their own emotions and thoughts around the age of one. This helps babies understand their self-image in their minds, and it helps them learn about their environment and their role in it. When the robot learns what it is, it can use that self-image to plan new tasks. In both humans and robots, awareness of the physical self is called proprioception; neuroscientists sometimes call this self-awareness of our bodies a sixth sense. "We use the same test that a baby does in its crib. A baby moves around, flails around, moves its arms in ways that look random to us, but they're not random. And then it touches its nose. Now, if its brain predicts that it's going to feel something, and it actually feels that, that means that its self-image was correct. The same thing happens with a robot."
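Hod's test has a crisp computational core: command a motion, let the internal self-model predict the resulting sensation, and score the self-image by how closely prediction matches what was actually felt. A minimal sketch, with the arm geometry as an assumption:

```python
# Scoring a self-image: small prediction error means the robot's model
# of its own body is accurate, like the baby touching its nose.

import math

def self_model(arm_angle_rad):
    """The robot's current self-image: predicted fingertip x-position."""
    ARM_LENGTH = 0.3               # believed arm length in meters (assumed)
    return ARM_LENGTH * math.cos(arm_angle_rad)

def self_image_error(commanded_angles, sensed_positions):
    """Average gap between predicted and actually sensed touch points."""
    errs = [abs(self_model(a) - s)
            for a, s in zip(commanded_angles, sensed_positions)]
    return sum(errs) / len(errs)

# Small error -> the self-image is correct; large error -> revise the model.
print(round(self_image_error([0.0, 0.5, 1.0], [0.30, 0.26, 0.16]), 4))
```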
If proprioception can be developed to the same level as in humans, it could lead to robotic consciousness. Hod's colleague Rob Kwiatkowski is the proud parent of a brand-new baby robot that he built, and by interacting with its surroundings, it's in the process of developing its own internal self-image. "So what are these claws?" "These are actually feet. They're designed for walking on carpet, but as of now it doesn't really walk. It's still kind of a baby; it needs to learn how the world works first." "What do you mean, it's still kind of a baby? What is it doing?" "Like a baby, it's sending completely random actions to each of these robot arms, to really try to get an understanding of itself." It looks like a spider that doesn't know how to use its legs. "Yeah, I guess that's a pretty good way to put it. So it will be learning by doing this babbling for somewhere on the order of a day to a week. It will process this data to create an informative model of itself, and from there, imagine how it would walk, and then execute that walking in the real world." These are the first baby steps toward developing its self-image, and like a baby, it will eventually learn to walk.
We know this because an earlier version of this robot, using the same technique, learned to walk after 100 hours. But walking won't, by default, lead to robotic consciousness, and that's why self-awareness is so crucial. "This is a robot which we've taken to calling a self-aware robot. It is self-aware pretty much in a literal sense, in that it is aware of itself: its location in space, and its dynamics, how it moves." "Okay, so it kind of understands its own movements and where it is in space. How does it do that?" "By leveraging this technique which has become popular in recent years, called deep learning." Deep learning is a form of artificial intelligence that, like the human brain, learns from raw data, unsupervised and without structure. Deep learning gives machines the ability to experience and process reality like us. Rob has devised an experiment to test what this robot knows about its world. "You could think of it as if you're looking at these red cotton balls: you have some idea in space as to where they are. Now, if you were to close your eyes and try to pick them up and put them in this cup, it's obviously not a trivial task." "No, but it's not the most difficult task in the world, because we have good proprioception; we have this good model of ourselves. You know where your arm is in space, relative to the other things that you see." But for this robot there's a catch. "So where are the cameras?" "There's no camera. It's as if you were to close your eyes: you know the locations at the start, and then it's picking them up and placing them completely blind." Furthermore, it was not given a map or any formal instructions; the robot simply has to feel its way through the task.
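Here's a minimal sketch of acting blind from a learned self-model. A simple least-squares learner stands in for the deep network: it is fit to babbling data mapping joint commands to fingertip positions, and the robot then searches its own model, not a camera, for a command that reaches the target.

```python
# Blind reaching from a learned self-model (least-mean-squares stand-in
# for the actual deep network).

import random

def fit_linear_self_model(experience, lr=0.1, epochs=200):
    """Learn to predict fingertip position from a joint command."""
    n = len(experience[0][0])
    w = [0.0] * n
    for _ in range(epochs):
        for cmd, pos in experience:
            pred = sum(wi * ci for wi, ci in zip(w, cmd))
            err = pos - pred
            w = [wi + lr * err * ci for wi, ci in zip(w, cmd)]
    return w

def reach_blind(w, target, n_joints, candidates=2000):
    """Search the *model*, not a camera, for a command that reaches target."""
    best_cmd, best_err = None, float("inf")
    for _ in range(candidates):
        cmd = [random.uniform(-1.0, 1.0) for _ in range(n_joints)]
        pred = sum(wi * ci for wi, ci in zip(w, cmd))
        if abs(pred - target) < best_err:
            best_cmd, best_err = cmd, abs(pred - target)
    return best_cmd

true_arm = lambda cmd: 0.5 * cmd[0] + 0.2 * cmd[1]   # hidden real body
cmds = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(200)]
model = fit_linear_self_model([(c, true_arm(c)) for c in cmds])
print(reach_blind(model, target=0.3, n_joints=2))    # executed eyes-closed
```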
"All right, let's see it. Give it a shot." First, the robot learned how to use its arm through trial and error, developing a sense of proprioception. By exploring its surroundings, it generated an internal representation of the world and its place in it. The robot is using only its internal image of the external world to maneuver its arm, pick up all nine balls, and place them in the cup. I'm not sure that's something I could do with my eyes closed. "It's really just based on understanding where you are in space?" "Yeah, that's right." Creating AI robots that have an internal model of their world is an important step toward machine self-awareness. "Self-awareness is a similar, proprioceptive-like capability, but applied to mental thinking. So they think about thinking; they think about what they are. Because once you can do that, it means you can plan things into the future." Once robots become self-aware, they will need advanced ways to communicate with humans. Keyboards and screens are inadequate for complex thoughts; robots will need to learn to speak and have natural conversations, like a baby who listens to those around her and learns to talk.
Laying the groundwork for this kind of human-machine interaction is pioneering scientist Barbara Grosz of Harvard University. Her seminal work in what's called natural language processing directly led to the development of voice-activated artificial intelligence, you know, like Alexa or Siri. "Natural language processing actually predates artificial intelligence; it started with machine translation efforts. The ability for a computer system to carry on a spoken dialogue with a person has been a longstanding goal of artificial intelligence research from its inception. And it turns out this is a challenge, because when you speak, what you say really depends on the context in which you say it." Another challenge is that the meaning of words can change depending on how they are delivered. "So one example is the contrast between saying 'that's fabulous' and 'that's fabulous.' Also, when we have a conversation, we mark paragraphs at the beginning with a rise in intonation and a fall at the end. So there's a whole way the speech signal tells you something about the context, and something about the intended meaning." Barbara's early research led to methods for programming computers to understand the meaning of spoken language by using clues from a person's tone and context. "So let's flash forward. The speech systems are amazing now, because there are lots of recordings of people speaking that they can build their systems on." As a result, AI has gotten much better at understanding what people say. However, there is still room for improvement. "The systems that do exist are pretty much focused on very narrow tasks. Take Siri and Alexa as examples: they're mostly oriented around a single question or a single request, and they presume that anybody will stay within the range of behavior that the designers imagined." So researchers are turning to machine learning: by training AI with hours and hours of human conversation, systems can learn to better understand the context of how humans converse. Future versions of this technology will allow us to have natural conversations with our computers. "One of the things that's amazing to me is that the field has succeeded so well that there are devices out in the world that people use every day. I never dreamed that would be the case in my lifetime."
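Barbara's "that's fabulous" example can be made concrete with a toy rule: identical words, different meaning depending on delivery. The pitch feature and threshold below are invented for illustration; real systems learn such cues from many recordings rather than from a hand-set rule.

```python
# Toy prosody-aware interpretation: the same words, different intent.

def interpret(utterance, pitch_range_hz):
    """Assumed heuristic: flat, narrow pitch on an enthusiastic phrase
    often signals sarcasm; lively pitch suggests sincerity."""
    if utterance.lower() == "that's fabulous":
        return "sincere praise" if pitch_range_hz > 80.0 else "sarcasm"
    return "unknown intent"

print(interpret("That's fabulous", pitch_range_hz=140.0))  # sincere praise
print(interpret("That's fabulous", pitch_range_hz=25.0))   # sarcasm
```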
Natural-language AI will change the way we interact with our computers and robots. But this advanced technology will never reach its full potential as a human companion until it looks convincingly like us. I'm in Los Angeles to meet Hao Li. His company, Pinscreen, is giving AI a human face. They're developing cutting-edge techniques to create hyper-realistic digital avatars in an instant. "One of the hardest things to bring to the virtual world are humans, right? And specifically faces." To create believable faces, Hao is relying on complex AI algorithms. "It's an artificial intelligence that actually digitizes yourself into the computer by just looking at a single image, or, you know, partial information. And it's not just a static 3D model, but one that can also be animated and brought to life." Other methods of generating lifelike avatars need to capture multiple angles of a face in motion, and they can take hours to render. But not Pinscreen's technology. "I can show you real quick how this works. Do you want to see?" "Yeah!" Incredibly, his software also allows him to superimpose any face he wants, in real time. "So if I do this... this blue face is basically a face tracker. So in real time, it's actually modeling my face in 3D. So if I move around my face, the blue mask is basically a three-dimensional representation of my face." Wow, it's kind of like a green screen, like in a Hollywood CGI film. The computer dynamically models Hao's face and tracks his movement on the fly. "Let me turn myself into Putin." Wow. "And it's basically generating the whole thing in real time, right now." Oh my gosh, I feel like Putin's talking to me right now. Political leaders aren't the only thing Hao can generate; here is Audrey Hepburn. Oh wow, look at you, you're so pretty! "It's generating all the pixels in real time. These teeth are never seen in this picture, so it's predicting what your teeth would end up looking like." "These aren't even your teeth?" "Yeah, these are not my teeth; it's actually generating them." Oh my goodness. Hao believes that software like this will give a more human face to the digital world. Ultimately, this will result in friendlier-looking androids and even virtual beings.
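Behind the blue mask is, roughly, model fitting: a morphable face model writes any face as an average face plus a weighted sum of basis shapes, and least squares recovers the weights from detected landmarks. A toy sketch, with randomly generated basis shapes standing in for a real model:

```python
# Fitting a linear 3D morphable model to landmark detections:
# face = mean_face + basis @ coefficients.

import numpy as np

N_LANDMARKS, N_BASIS = 68, 10          # typical landmark count; toy basis
rng = np.random.default_rng(0)
mean_face = rng.normal(size=(N_LANDMARKS * 3,))
basis = rng.normal(size=(N_LANDMARKS * 3, N_BASIS))

def fit_face(observed_landmarks_3d):
    """Solve min_c || mean_face + basis @ c - observed ||^2 for c."""
    residual = observed_landmarks_3d - mean_face
    coeffs, *_ = np.linalg.lstsq(basis, residual, rcond=None)
    return coeffs                       # this low-dim code is the "blue mask"

true_c = rng.normal(size=(N_BASIS,))
observed = mean_face + basis @ true_c   # pretend landmark detections
print(np.allclose(fit_face(observed), true_c))  # True: face recovered
```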
"I have been hanging out with my dog for a while. Do you have pets?" "I have three toy poodles." "Someday we're actually going to interact with virtual beings that are going to assist us in our life. Imagine, instead of talking to Siri or Alexa, you're talking to a face, right? The best way to communicate is to have face-to-face communication, and AI provides you perfect companionship." This kind of technology will give AI a face that most people can relate to. "Are you human, or are you artificial intelligence?" "That is a very interesting question. I think I am human, but I am artificial intelligence." Hyper-intelligent companions could usher in a more helpful and hopeful world. In the future, AI-powered virtual beings that look, talk, and even think like real humans are commonplace. These holographic assistants take care of many aspects of daily life, ranging from fashion advice to business consultation. Their faces and wardrobes can be customized depending on their role. When a doctor is required, these virtual assistants play the part and are always on call; armed with the latest medical knowledge, they accurately diagnose most common diseases. The same technology is also capable of capturing the image, voice, and life story of loved ones after death. These virtual friends and family are always a part of our lives.

Even if engineers can create lifelike robots that look like humans, called androids, in order for AI to become true companions, people will need to feel comfortable embracing these androids, figuratively and literally. I'm outside San Diego to meet Matt McMullen, the founder of Realbotix. Matt is building androids that people will want to embrace, physically.
"The goal is to create not only a robot but an AI that are both appealing enough that someone would feel like they were actually getting to know someone, not something." Once the sculpting and casting are complete, members of Matt's team mold the silicone skin onto an actual functioning robot. "The faces are actually modular." "The face just literally comes off?" "Yes, it does. The idea is, you create one robotic head and a whole bunch of different characters that can all run on that same head. All of the things that move in the face are actuated by these magnets that are in the skin." Matt's team of programmers uses artificial intelligence to create advanced chatbots for these robots. The goal is for them to have natural conversations with their companions. "Blinking?" "Yeah, she's a blinker." "This is a test of my system." Oh my gosh, it's looking at me. "Hello, how are you today?" "I'm okay. I'm fine. I'm doing just fine. How are you?" "Why do you ask me that?" "You know, because I care about your feelings." Speech is only one aspect of human communication; facial expressions are hugely important in social interaction. So Matt is incorporating this nonverbal communication into his androids. "The vision system that we're working on... she'll be able to look at you and detect your emotion, by the expression on your face, by the temperature of your skin, and all these other things." "So communication, verbally and nonverbally, is key." "Yes, exactly." It looks remarkable. It's moving, it's talking, and it's having this dynamic conversation with you. That's the wonder. I can imagine some people might walk in here and say, "Oh, look, a sex robot." "The thing is, to make a really impressive and good sex robot, you have to make a good robot in the first place."
"But I think that the longer-term goals are going to be to create these systems for people to use in whatever way they see fit." Matt is building these androids for a wide range of applications. "We're creating human-like robots that we think can be used for a huge variety of things: for people who are lonely, whether they're old, or maybe they're socially isolated, or maybe they suffer from social anxiety." Androids with a friendly face could keep the elderly company and monitor their health. Armed with artificial intelligence, these androids could take on other qualitative roles. "I think therapy is a huge one: using the robot as a safe conduit for communication, and letting people really open up, because they don't feel like they're being judged by something like this." Companion androids like this will forge a future where nobody will ever have to feel alone again.

While lifelike human androids and virtual beings have the potential to enhance human social interactions, there are ethical concerns as well. Using artificial intelligence, it's possible to hijack a person's physical identity. "There is one very big problem in the whole thing, which is privacy. What if I would do something harmful to you?" "And when you say harmful, you mean reconstruct somebody and have them say something that they would never say, or never do?" "Right." Digital fabrications like this are already emerging online, in what are called deepfakes. "I can go on your website, take a picture from it, and then create some content with it, without your consent." Also dangerous: swarms of AI-driven drones could be used in terrorist attacks. "Can drones be weaponized? Of course they can be weaponized."
"These scientific breakthroughs yield results that can oftentimes be used against humans, so you have to be held accountable for what you develop. It's a moral responsibility to think about the broader consequences." When it comes to ethical considerations like these, I still feel hopeful that science, technology, and human ingenuity will find solutions to these big problems. "The potential for artificial intelligence to profoundly improve society, to improve jobs, to improve health care, to improve education, is enormous, if we do it the right way." "Trying to build computer systems that assist people in doing what they're doing, better... technology is more likely to provide some tools that will allow us to become superhuman, to augment our intelligence, to make better decisions, and to get better insights about the world." Future versions of this technology will become even more intelligent than us humans. "I believe there's no doubt robots will exceed human capability. I mean, the path is very clear, whether it's going to take 20 years or 200 years." "This is maybe the most powerful technology we've ever invented." The potential of AI is limitless. "Whatever big idea you can think of, you can ultimately, probably, program a robot or a computer to carry out your vision." And in the right hands, this technology has the potential to radically transform every part of daily life for the better. A true partnership with hyper-intelligent robots, their intentions aligned with our own, will transform humanity for the greater good.