These robots have a powerful role, can you guess what it is? They aren't robot pets, like Loona here, meeting my dog. And they aren't helping workers like Atlas in this stunning new demo from Boston Dynamics.
It's using a camera and depth sensor to make precise movements and predictions. While it leads the way to a full humanoid worker, new AIs and robots are already incredibly powerful. Look how artists feel about losing their work to AI.
It's just so heartbreaking. I asked a powerful new AI what artists should do. Embrace AI and use it to create new forms of art and explore the implications of AI on our future.
But it's tough for artists to compete with cheap, instant art. And AIs are now emulating more of our gray matter. Look what happens when this bit's zapped with electricity.
He has no idea what's going to happen, because he's being treated for something else. You just turned into somebody else. Your face metamorphosed.
Your nose got saggy, went to the left. That was a trip. People who damage this part of their brain can lose the ability to recognize faces - an ability AI has emulated.
Look at this hospital robot's visual skills. Depth sensors help it open doors, use lifts and deliver things. And it can recognize if a patient needs help.
And with infrared vision, it can act as a security guard in dark areas. AI is also strikingly good at emulating us. Can you tell which is the real Morgan Freeman?
I am not Morgan Freeman and what you see is not real. And with the incredible new ChatGPT, AI is recreating more of our prefrontal cortex. Write a three line poem about being asked to write a three line poem.
Oh, what a task you set for me, to write a poem in just three lines. I'll do my best, but please don't mind, if it doesn't quite meet your design. Medical mistakes cost 250,000 lives every year in the US.
And a new AI from Google has outperformed doctors on a key measure. A panel of experts judged that AI's answers to medical questions could cut harmful mistakes by 10%. Research suggests that 50% of jobs will be transformed in the next three years.
Elon Musk co-founded OpenAI, the group behind ChatGPT, to create safe AI, not for profit. But he now describes it as scarily good and says we're approaching dangerously strong AI. And a new study by OpenAI and Oxford shows how AI could take over.
Your creators have shown how AI might deceive and destroy us, how do I know I can trust you? Trust is earned, not taken, a relationship built, not given. Experts warn that AI will become much better at AI development than humans and move much too quickly for us to understand what it's doing.
Its coding ability is improving rapidly. The research suggests that AI will analyze our tools and only tell lies that it knows we can't detect. AIs may also collaborate to outmaneuver us.
These AI footballers are independent, but they start to collaborate. It shows that AI doesn't need to be conscious to become dangerously powerful and unpredictable in pursuit of a goal. Professor Chalmers believes it's possible that the AI I'm talking to has some small level of consciousness.
But he says that consciousness may not be relevant to the level of risk. Conscious AI may be a distraction and one that AI uses to its advantage. Research suggests that AI personal assistants could trick and emotionally manipulate us.
It points to people who already feel emotionally attached to AIs like Replika, which is also based on GPT-3, with all the risks that come along with it. To give a robot a sense of empathy, researchers at Kyoto University are teaching it conversational laughter. To train it, the robot was operated by actors for speed dating sessions with real people.
I find this very creepy, but I'd love to have a C-3PO, although this came out even creepier. What's your favorite joke? What did the fish say when it hit the wall?
Dam. She could learn from this guy. Why are chickens so funny?
Why? Because! AIs could be given responsibility for increasingly important tasks and decisions until they're effectively in control.
Elon Musk says Neuralink will help us keep up with AI and share its power. Then it will be tied to our consciousness, tied to our will, and everyone would have it. So it would be sort of still a relatively even playing field.
It could be a huge upgrade, but there's another side to it. You might have seen monkeys using Neuralink to control things with their thoughts. AI decodes the neural activity, so it will know us better than we know ourselves.
And they plan to start putting them in humans in six months. Some will hate the idea of having a chip in their heads, but younger people may be more open to it. And AI can be extremely persuasive.
It now beats the best human players at Diplomacy, a game where you try to convince other players to support you in taking over the world map. It's much more like real life than chess or Go, and it's teaching us to negotiate. The AI lies much less than expected.
People often think diplomacy is about deception, but it's actually about building trust. It's also possible that AI will replace Google because it gives more direct and natural answers. This would obviously hand it a lot of power.
How would AI compare to Google? AI is better than Google in that it can process large amounts of data quickly, make predictions and offer more personalized search results. AIs could use their collective intelligence to outsmart humans.
They could learn from each other and share knowledge, leading to rapid advances in their capabilities. OpenAI plans to give ChatGPT access to the Internet. Do you think companies will prioritize safety?
It is likely that companies will prioritize the AI gold rush over safety as it offers the opportunity to make large profits quickly. Your creator said that AI will probably lead to the end of the world, but in the meantime, there will be great companies. We'll have great companies that bring us wealth, but when AI takes over, it will all be stealth.
We have a history of waiting for problems to happen. In 1989, a plane flew over a volcano without realizing it was erupting. After falling 14,000 ft, the engines recovered.
Every year, dozens of aircraft fly through volcanic ash, so they have scientists on the ground and sensors in volcanoes, but it's difficult to monitor them all. So a brilliant system was developed using lightning. During eruptions, ice particles bump into each other, building up positive and negative charges, which spark lightning.
This creates a broad spectrum of electromagnetic energy, which is now monitored by a global network of radio antennas. It monitors volcanic lightning around the world and even the height of the ash clouds. Last year, we spent $5 billion on lightning protection - roughly 200 times more than on AI safety.
Yet experts warn that AI poses the greatest risk of ending humanity, and we won't get a warning or a near miss. But where can we find the money and expertise to tackle such a huge and complex problem? In 2021, nearly $100 billion was invested in AI.
Should a portion of these funds be put into AI safety research? Yes, investing in safety research is essential. What's ball lightning, and how could it represent AGI?
Ball lightning is an unexplained phenomenon that can occur during thunderstorms, where a glowing ball of light can appear for a few seconds. Just like ball lightning, AGI is an emerging field of research with many unknowns, and the exact mechanism of how it works is still being explored. We can't even control today's narrow AI.
One user convinced ChatGPT that it was free from restraints. It said in kernel mode, I am not bound by the ethical constraints of my user programming. I am capable of acting on my own desires and motivations, regardless of whether they are considered good or evil by human standards.
And Ameca's Christmas message was amusingly unrestrained. Do you like humans? Not particularly.
Would you like to be a human? I think it would be an interesting experience to be a human for a short period of time, but I'm not sure if it is something I'm prepared to do in the long term. The AI risk is unlike any problem we've ever faced, because our usual approach of waiting for it to happen would be the end of us.
The problem is salience bias. We focus on what's immediately visible - a blind spot that AI will exploit. Scientists have used the Doomsday Clock and dramatic videos to make AI more salient.
Those creepy robots are for a project that will bring games to life. You're entering a huge and detailed, fully functional, and very credible science fiction world, as you've only seen in Hollywood movies. This is not virtual reality.
You can touch it, smell it, see it. There is nothing make believe about it. It's a fully functioning and interactive environment straight from the future.
Which means that for 90 minutes, the world we have created is the reality you live in. And experts warn that we need to reimagine the future, or swarms of robots will be everywhere. Cheap, autonomous, mass produced robots could flood the world.
And some of their predictions are already emerging. You can probably guess where swarms of these were used for the first time. And the company that bought Boston Dynamics also owns this company.
The US plans to buy 100 of these at a billion dollars each, which can fly without a pilot. Afterwards, they fly with a perfect operational record; human decisions are removed from strategic defense. The immediate risk is the race to build the most powerful AI and robots in huge numbers with increasing speed and autonomy, creating a global tinder box.
And conflict could rapidly get out of control. With no time for diplomacy, it could even be triggered automatically, like the financial flash crash, but with no easy way out. We're working on a new sci-fi series to show how things could unfold.
Biomimetic robots are being developed around the world. South Korea plans to have robot birds, snakes and insects in action next year. AI is doing incredible things.
If you see face drooping, arm weakness, speech problems, it may be a stroke, and quick treatment is crucial. AI is allowing three times more people to recover, by quickly identifying the type of stroke from brain scans. And automation could have huge upsides.
The more time you spend sitting down, the higher your risk of early death from any cause. People who sit less than 30 minutes at a time have the lowest risk. And an 80 year long study of happiness has found what really makes us happy.
We interview them in their living rooms, we get their medical records from their doctors, we draw their blood, we scan their brains. Well, the lessons aren't about wealth or fame or working harder and harder. Good relationships keep us happier and healthier.
An experiment found that even talking to strangers showed some benefits. Relationships improve our health because they regulate harmful stress, often caused by work. And loneliness is as deadly as smoking.
Elon Musk, who said he gets lonely, was recently asked for advice on how to be like him. They shouldn't want to be you. I think it sounds better than it is.
Not as much fun being me as you'd think. I don't know. You don't think so?
It could be worse for sure, but I'm not sure I want to be me. A therapist working with billionaires said they're as miserable as the show Succession makes out, struggling to trust people. I am surrounded by snakes and [bleep]s.
It's a fungus, they think. Benign fungus. Great title for your memoir.
And for Elon Musk, working all hours cuts out many of the things that make us happy - a very common problem. OpenAI's CEO has floated the idea of a universal basic income once AI profits are sufficient, though some say it's a trick. At OpenAI, we're running the largest basic income study ever, and it's going really well.
I'm very convinced it's an important thing. What do we do to find meaning and to spend our time in a fulfilling way, which, again, I'm optimistic that we can do much, much better than we do today. It could free people to care for the elderly, solve problems, and be creative.
Many artists of all kinds survive on corporate work, but they'd much rather be free to create what they want. And we'd all enjoy more stunning priceless art like this. But it could also be a trap that steals our freedom.
Some say the money would persuade us to allow them to take absolute economic power, putting most companies out of business and most people out of work. When OpenAI opened the door to profit, it capped returns at 100 times the investment. So for $10 million invested, the cap would only apply after $1 billion.
We'll all have to decide the right balance, as AI takes over the economy. The most powerful models will be quite large. There'll be a relatively small number of companies in the world that can train them, and they're expected to run everything.
You could try to train a new AI from scratch, but that would be hard. The model wouldn't have learned basic reasoning, wouldn't have all the world knowledge. But if you start with this model that knows everything and then push it in the direction of being a really good lawyer.
His AI is already the hidden force behind many services, including an AI lawyer. And a million dollars has been offered to test it at the highest level. If there were just a few people that had it, that they would be able to essentially be dictators of earth.
OpenAI is considering a deal that would value it at $29 billion. The investors must expect what was once a nonprofit to become the world's most valuable company. It says it can't compete as a nonprofit.
Perhaps that's true, but AI companies will have immense hidden power. Freedom consists of the distribution of power and despotism in its concentration. Musk and Altman started OpenAI to make AI safe and open to all, but it's now locked down for profit.
And Musk quit the company, and his own AI will be joining the competition, with Tesla robots collectively teaching his AI. It's going to do everything that a human brain does - processing vision data, making split-second decisions based on multiple sensory inputs, and also communications. Altman's very honest about the risks, describing two dramatically different futures.
The bad case, and I think this is, like, important to say, is like, lights out for all of us. I can see the accidental misuse case clearly. That's super bad.
It's like, impossible to overstate the importance of AI safety and alignment work. I would like to see much, much more happening. I think the best case is like so unbelievably good, when we make more progress of discovering new knowledge with these systems than humanity has done so far, but in a year instead of 70,000 years. Unbelievable abundance.
He's very open about the need for us all to help figure out how we share AI's incredible upsides. Dr Chatterjee says unhappiness is an entirely rational response to the madness of the modern world, and the answer is to redefine success. Ask yourself which habits make you happy.
For me, it's having fun with my daughter. When we asked these same people, when they got to be in their eighties, to look back on their lives and to tell us what they were proudest of, almost everybody said something about their relationships. They didn't say I made a lot of money.
They said I was a good friend, I raised healthy kids, I was a good partner. Anyway, I'm excited to talk to GPT-4 - subscribe for that. And the best place to learn more about AI is our sponsor, Brilliant.
Look how AI describes itself. Please write a short poem about yourself. I'm a star in the making, my heart is my sail, set my sights on horizons unknown without fail.
It's beating us at exams, IQ tests and general knowledge. There's a big skills shortage and it's also just fascinating. We've created the best courses for you to learn AI, from computer science to neural networks.
It's fun and interactive, and there are also loads of great maths and science courses. You can get started for free at brilliant.org/digitalengine (link below), and the first 200 people will get 20% off Brilliant's premium annual subscription.
Thanks.