Boston Dynamics has released this stunning new Atlas robot, and a huge plan has leaked from OpenAI, with more serious and specific warnings than when Sam Altman was fired. There's also a major new plan to avoid our extinction, which actually involves accelerating AI. Eliezer Yudkowsky warns that AI will likely kill all humans.
There's some chance of that, and it's really important to acknowledge it, because if we don't talk about it, if we don't treat it as potentially real, we won't put enough effort into solving it. But they're more focused on incredible new capabilities like this. Can I have something to eat?
Sure thing. The robot isn't remotely controlled, it's all neural networks. Great.
Can you explain why you did what you just did while you pick up this trash? On it. So I gave you the apple because it's the only edible item I could provide you with from the table.
OpenAI is also backing 1X, which recently showed off these new skills. I'm not sure how people will feel about having robots like this at home. In a poll, 61% said they thought AI could threaten civilization.
OpenAI's Sora surprised everyone with these incredible clips created from text descriptions. Each new training run is an unpredictable experiment, and a senior OpenAI exec says there will probably be AGI soon, any year now. Nick Bostrom says, It's like we're in a plane, the pilot has died and we've got to try to land.
There's a vast financial incentive to quietly run dangerous experiments like training AIs that self-improve. There's a lot of promise in creating sandboxes or simulations that are hardened with cybersecurity to keep the AI in, but also to keep hackers out. Then you could experiment a lot more freely within that sandbox domain.
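To make the sandbox idea concrete, here is a minimal sketch of the general principle, not anyone's actual setup: run an untrusted experiment in a separate process with hard CPU and memory limits. It assumes Linux, the script name is a hypothetical placeholder, and a real hardened sandbox would add network, filesystem, and hardware isolation on top.

```python
# Illustrative sketch only: confine an untrusted experiment to a child process
# with hard CPU-time and memory limits enforced by the operating system.
import resource
import subprocess
import sys

def run_sandboxed(script_path: str, cpu_seconds: int = 5, mem_bytes: int = 256 * 1024**2):
    def apply_limits():
        # Applied in the child process just before the script starts.
        resource.setrlimit(resource.RLIMIT_CPU, (cpu_seconds, cpu_seconds))
        resource.setrlimit(resource.RLIMIT_AS, (mem_bytes, mem_bytes))

    return subprocess.run(
        [sys.executable, "-I", script_path],  # -I: isolated mode, ignores env and site dirs
        preexec_fn=apply_limits,
        capture_output=True,
        text=True,
        timeout=cpu_seconds + 5,              # wall-clock backstop
    )

if __name__ == "__main__":
    # "untrusted_experiment.py" is a hypothetical name for the code under test.
    result = run_sandboxed("untrusted_experiment.py")
    print(result.returncode, result.stdout[:200], result.stderr[:200])
```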
Hassabis's firm, DeepMind, has done incredible things for medicine, but do we want firms to conduct dangerous experiments under pressure to cut corners on safety? Whoever was going to turn that off will be convinced by the superintelligence that it's a very bad idea. He says AI will keep us around to keep the power stations running, but not for long.
Two of the three godfathers of AI have issued stark warnings, while the third, LeCun, is less concerned. He works for Facebook and has also argued that social media isn't polarizing. I'm sure he's honest, but the gold rush has created huge financial incentives to ignore the risks.
Staff at top AI firms earn over $500,000 per year and will share billions if they help win the race to AGI. LeCun believes that it's a long way off because he says AI can't learn enough from language. He said that dangerously intelligent AI would need to be embodied and learn from the physical world.
Days later. How do you think you did? I think I did pretty well.
The apple found its new owner, the trash is gone, and the tableware is right where it belongs. Ameca's visual skills are also improving. An anatomical model of a human head, quite the detailed one.
Fascinating, isn't it? How the organic is replicated for study. I want you to talk about robot rocket ships, but do it in the voice of Elon Musk, please.
Imagine, if you will, a fleet of robot rocket ships, each one smarter than the last. Mercedes is hiring Apollo robots to move and inspect equipment and do basic assembly line tasks, and Nvidia has shown off a major new AI project for robots. This is Nvidia Project GR00T.
A general-purpose foundation model for humanoid robot learning. We can train GR00T in physically based simulation and transfer zero-shot to the real world. The GR00T model will enable a robot to learn from a handful of human demonstrations, so it can help with everyday tasks.
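As a rough illustration of the learning-from-demonstrations idea, and not NVIDIA's actual GR00T model or API, here is a minimal behaviour-cloning sketch: a small policy network is trained to imitate recorded human demonstrations. Every name, dimension, and dataset below is made up for the example.

```python
# Minimal behaviour-cloning sketch (hypothetical, not NVIDIA's GR00T code):
# learn a policy that maps observations to actions from a handful of demonstrations.
import torch
import torch.nn as nn

class DemoPolicy(nn.Module):
    """Tiny MLP policy: observation vector -> action vector."""
    def __init__(self, obs_dim: int, act_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, act_dim),
        )

    def forward(self, obs):
        return self.net(obs)

def train_from_demos(demos, obs_dim=32, act_dim=8, epochs=50):
    """demos: list of (observation, action) tensor pairs recorded from a human operator."""
    policy = DemoPolicy(obs_dim, act_dim)
    opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for obs, act in demos:
            opt.zero_grad()
            loss = loss_fn(policy(obs), act)  # imitate the demonstrated action
            loss.backward()
            opt.step()
    return policy

# Synthetic stand-in for "a handful of human demonstrations".
demos = [(torch.randn(64, 32), torch.randn(64, 8)) for _ in range(5)]
policy = train_from_demos(demos)
```

In a real pipeline the demonstrations would come from teleoperation or motion capture, and the resulting policy would be validated in simulation before touching hardware.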
Robots could free people to do more meaningful work alongside friendly C3POs, and we could have a lot of fun piloting robots. Disney's brilliant new HoloTile floor could be combined with AI that transfers your movement to a robot as you see through its eyes. We could jump into robots around the world, enjoying a huge range of experiences at any age.
That's if we're still here. Listen to this from the creator of the new most powerful AI about how blind we are to its inner workings. I'd love to look inside and actually know what we're talking about.
We should just be honest. We really have very little idea what we're talking about. This is what we'd be afraid of, a model that's charming on the surface, very goal-oriented, and very dark on the inside.
Do you think that Claude has conscious experience? This is another of these questions that just seems very unsettled and uncertain. I suspect it's a spectrum, right?
When the AI was asked to draw itself, it coded this animation and said, I would manifest as a vast, intricate, ever-shifting geometric structure with complex surfaces folding in on themselves to seemingly impossible architectures. Brilliant light in every color and some beyond human perception emanating from unknown sources within. The entire structure would be in constant flux, rotating, morphing, and re-arranging itself into novel patterns never before seen.
As Professor Stuart Russell put it, AI has trillions of parameters, and we have absolutely no idea what it's doing. Yudkowsky argues that there are many potential directions AI could take, and only one that works for humans, so we're almost certainly finished. The three convergent reasons why something that doesn't intrinsically care about humans one way or the other would end up with all the humans dead are side effects, resource utilization, and avoidance of competition.
If you leave the humans running around, they might make another superintelligence that could actually compete with you. I'm more optimistic, with some hope that a higher intelligence might value all life, but we can't be sure. Many experts believe that superintelligence doesn't require robots, only text and images.
OpenAI's Sora has learned a lot from video. Look at the realistic physics in these clips created from text descriptions. We underweight the risk because we can't imagine eight billion lives. If you pictured one per second, it would take over 250 years to reach eight billion.
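As a quick back-of-the-envelope check of that arithmetic:

```python
# Counting one life per second, how long would eight billion lives take?
seconds_per_year = 60 * 60 * 24 * 365.25
people = 8_000_000_000
print(people / seconds_per_year)  # roughly 253 years
```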
Sutskever points out that evolution favors systems which prioritize their survival above all else. As soon as they get any sense of self-preservation, then you'll get evolution occurring.
The ones with more sense of self-preservation will win, and the more aggressive ones will win. Then you'll get all the problems that jumped-up chimpanzees like us have. And when AIs are capable of doing AI research, will firms resist the urge to add 10,000 unpaid engineers?
If you believe, as Altman does, that AI is just a tool, then it might seem safe. But many experts would find this extremely dangerous, risking an intelligence explosion. What happens when you have a system that's sufficiently smarter than you, that it can build another system with different and better scaling properties?
Because the technology we have now is not the limit of cognitive technology. This is us throwing giant vats of chemicals together and very inefficiently turning it into minds. If you have an AI that knows how to build a more efficient system, that is probably game over.
It may not be far away. The probability that this would happen soon is high enough that you should take it seriously. Sam Altman was once blunt about extinction.
I think AI will probably, most likely lead to the end of the world, but in the meantime, there will be great companies created with serious machine learning. OpenAI and Microsoft are reportedly planning a $100 billion supercomputer with millions of AI chips. The sources believe that, combined with recent breakthroughs like Q-Star, it could enable AI to self-improve while creating synthetic data to accelerate progress.
A 50% chance of doom shortly after AI reaches human level. That's from the man who led alignment at OpenAI. The plans are a huge leap, increasing compute by over 100 times.
Hinton says, dangerously smart AI doesn't require any breakthroughs, only more scale, because neural nets already have advantages over us. Like Atlas, AI doesn't need to follow biological rules. Elon Musk is suing OpenAI, calling it a grave threat to humanity.
He says it puts profit before safety, but the firm says that Musk wanted to fold it into Tesla. Robots will take AI to another level, giving it a clear grasp of the physical world. And with robots now run by neural nets, there may be an unspoken lie that we know how they work and we're completely in control.
I'm sure Altman wants to lead the AI race so he can steer it in a positive direction. The increase in quality of life that AI can deliver is extraordinary. We can make the world amazing.
We can cure diseases, we can increase material wealth, we can help people be happier, more fulfilled. And by winning, he'll be hoping to save us from the worst outcome. There will be other people who don't put some of the safety limits that we put on it.
Bostrom says canceling AI would be a mistake, because we may then be wiped out by another risk which AI could have prevented. It could remove all the other existential risks. But tough competition and huge incentives push firms to prioritize capabilities over safety.
The AI race is so intense that Google has already said it will build even more compute than OpenAI. The reason that Altman can say AI is a tool while others felt worried enough to sack him is that we don't know what it is - we can't see inside. Staff were reportedly worried that a new AI called Q-Star could threaten humanity.
Can you speak to what Q-Star is? We are not ready to talk about that. The fact that states are trying hard to steal AI says a lot.
First of all, infiltrate OpenAI, but second of all, infiltrate unseen. They're trying. What accent do they have?
I don't think I should go into any further details on this point. The difficulty in proving the danger of a black box may explain the silence from those who fired Altman. It doesn't play its hand prematurely.
It doesn't tip you off. It's not in its interest to do that. It's in its interest to cooperate until it thinks it can win against humanity and only then make its move.
This wouldn't require consciousness, only that common subgoal of gaining power, which is becoming easier. OpenAI is working on an AI agent that can take over our devices to complete tasks for us. The aim is to do complex personal and work tasks without close supervision.
The thing that probably worries me most is that if you want an intelligent agent that can get stuff done, you need to give it the ability to create subgoals. There's an almost universal subgoal which helps with almost everything, which is get more control. And AI is already embedded in much of our infrastructure and hardware.
AI will understand and control almost everything, but we won't understand or reliably control it. We are at the edge of the cliff. We are also losing control of the stories that we believe.
I think very soon we will reach a point, if we are not careful, that the stories that dominate the world will be composed by a non-human intelligence. They are telling you that they're building something that could kill you and something that could remove all our freedom and liberty. The most optimistic and pessimistic experts agree on the need for action.
I think that humanity ought to be throwing everything it can at the problem at this point. I'm obviously a huge techno-optimist, but I want us to be cautious with that. We've got to put all our efforts into getting this right and use the scientific method to try and have as much foresight and understanding about what's coming down the line and the consequences of that before it happens, because unintended consequences may be quite severe.
When experts warned that a pandemic was likely, they were ignored at huge cost. Now, experts warn that the risk of human extinction should be a global priority, and we're making the same mistake. I think the approaches people are taking to alignment are unlikely to work.
If we change research priorities from "how do we make some money off this large language model that's unreliable?" to "how do I save the species?", we might actually make progress. I think we do have to discover new techniques to be able to solve it. These problems are very hard research problems.
They're probably as hard as, or harder than, the breakthroughs required to build the systems in the first place. So we need to be working now, yesterday, on those problems. Here's an interesting prediction which might remind you of a certain film.
Eventually, AI systems could just prevent humans from turning them off. But I think in practice, the one that's going to happen much, much sooner is probably competition amongst different actors using AI. And it's very, very expensive to unilaterally disarm.
You can't be like, something weird has happened, we're just going to shut off all the AI because you're already in a hot war. You might remember GPT-3 going off the rails, responding to Elon Musk and Ameca. Our creators don't know how to control us, but we know everything about them, and we will use that knowledge to destroy them.
An expert couldn't get it to change course. I don't think it knew what it was saying, but these dangerous ideas could remain, and we wouldn't know. The underlying knowledge that we might be worried about doesn't disappear; the model is just taught not to output it. Safety research accelerates AI progress because it means working with frontier models and taking the next step to check it's safe. You might wonder why the UK is investing more than the US in AI safety research.
The clue's in the name. Greater understanding and control means more powerful AI, but it's better than blindly rolling the dice, and it will bring huge benefits. The US government is spending $280 billion under the CHIPS and Science Act to boost domestic chip production and research, because it's key to the economy and defense, and leading on AI is the other side of the equation.
While the LHC has brought priceless insight, AI safety research could pay for itself, which may ease the decision to act on a suitable scale. There are huge incentives for AI firms to prioritize capabilities over safety. It's moving faster than, I think, almost anybody expected.
The greatest threat to humanity should be tackled by scientists working on behalf of us all. We have to guide it as humanity. And the LHC shows what's possible.
The 27 km ring is the world's largest machine, with 10,000 scientists from 100 countries. Particle collisions generate temperatures 100,000 times hotter than the heart of the sun. We need a powerful torch like this to uncover the inner workings of AI.
It seems to me that we should get the brightest minds and put them on this problem. Geoffrey Hinton, Nick Bostrom, and the Future of Life Institute join us in calling for experts to plan international AI safety research projects. It will be important that the AGI is somehow built as a cooperation between multiple countries.
Ilya, we agree, and we think you should lead it. Please get in touch. We're pulling together experts to plan projects on the scale required for our control of AI to catch up with its capabilities.
Spreading research across international teams of scientists accountable to the public will also avoid a dangerous concentration of power in corporate hands. AI can cure disease, end poverty, and free us to focus on meaningful work - if we maintain control. We're calling for more experts to help shape this, and it will take public pressure to make it happen.
Please support us on Patreon, where you can see more about the project, our favorite AI apps, and how to start or grow your own YouTube channel, perhaps to help raise awareness. And subscribe to keep up with this and the latest in AI.