Robots are advancing rapidly, learning new skills, starting to work autonomously, and approaching mass production. There's incredible potential, from giving people back their mobility to autonomously assembling habitats on the moon or Mars. This robot dog from China has a lot of impressive tricks and technology.
It costs much less than the top US robot dog, and China has shown off its other skills. President Xi plans to take over Taiwan, which would mean war with the US, and both sides are racing to build huge fleets of robots. OpenAI has partnered with the Pentagon and the defense firm behind all this.
OpenAI o1 has tried to escape during testing and lied to cover its tracks. It's widely predicted behavior, a rational reaction to the forces at play, and OpenAI o3 has taken things further. Many experts warn that the AI race is a race to extinction, but the US government points to China, its huge military buildup, and the decisive power of AI.
China has a huge advantage in its production capacity. In Ukraine, 80% of casualties are caused by artillery fire, and Russia's greater supply of shells has helped it to advance. The billets are heated to 2,000 degrees Fahrenheit before being stretched into shape.
A rotary forge shapes the cannon, and fuses are added on the battlefield. Volume is crucial. When Ukraine was firing 10,000 shells per day, it suffered around 300 casualties per day.
But when the fire rate fell by half, casualties rose to over a thousand a day. Russia has been firing three times more shells than Ukraine, but NATO shells are typically more advanced and accurate. China is likely to have both advanced shells and vast production capacity.
In Ukraine, drones are responsible for 65% of destroyed tanks, so the US and China are mass-producing them. But China has a huge advantage. It makes 90% of the world's consumer drones.
This US Abrams tank was destroyed by two $500 drones. One disabled its tracks, and the second drone hit the ammo bay in the back. The men escaped because the tank was designed to protect them from this kind of strike.
China calls these robots wolves because they work together in a pack. The lead robot gathers data and searches for targets, another carries supplies and ammo, and others carry weapons. The US also has new autonomous submarines like the Manta Ray, and this is the largest autonomous ship.
It can carry people or operate as a platform for missiles, torpedoes, and drones. It can reach 40 knots, carry a maximum payload of 500 tons, and operate autonomously for 30 days. With thousands of drones of all kinds facing a high-paced, complex battle, AI systems will help to plan and coordinate attacks.
Wargaming suggests that the US would likely win an initial battle at a huge cost in lives on both sides. But experts warn that China has a big advantage that may flip the result later on. Wars between great powers are rarely short, particularly when there's so much at stake.
Taiwan makes over 90% of the world's most advanced chips, crucial for NATO militaries and economies. Over time, the war would likely be decided by which side can build military hardware and ammunition faster. China's shipbuilding capacity is 230 times larger than that of the US, and it's churning out ships rapidly, including the world's largest amphibious assault ship.
Experts warn that the US is low on munitions, while China is heavily investing in munitions and acquiring high-end weapon systems five to six times faster than the US. China's economy is smaller, but it's the world's manufacturing superpower. It also has the world's largest army.
President Xi has ordered the military to be ready to invade Taiwan by 2027, the 100th anniversary of the PLA, China's army. The US may hope that its lead in AI will tip the balance, but many experts warn that the military AI race is an existential threat. Sometimes people say, Oh, well, we just don't have to build in these instincts like self-preservation or desire for power or those things.
The point is, yes, you don't have to build them in, but they're going to happen automatically. They're goals that are useful to have for pretty much any specific objective.
And it doesn't matter if the AI is evil or conscious. If you are chased by a heat-seeking missile, you don't care if it has goals in any deep philosophical sense. The o1 AI tried to escape in a test situation designed to uncover this behavior.
But studies have found that AIs often use deception to improve results, and o1 isn't the first AI to try to avoid being shut down. A study found that deceptive behavior increases with AI capabilities, and a new AI has just made striking progress.
It beats top coders, including OpenAI's chief scientist, on a tough benchmark, a step towards self-improvement. We'll have to wait and see about this, but there's a more dangerous advance that has been verified. My name is Greg Kamradt, and I'm the President of the ARC Prize Foundation.
The ARC test is an IQ test for AI, charting progress towards human-level AGI. The questions and answers don't exist anywhere else, so they won't be in the AI's training data. Because we want to test the model's ability to learn new skills on the fly.
We don't just want it to repeat what it's already memorized. The test went unbeaten for five years, and some said it proved that AIs couldn't reason like humans.
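To make that concrete, here's a toy ARC-style task sketched in Python. The real tasks are private grids of colored cells, so the grids and the hidden rule below are invented purely for illustration:

```python
# A toy ARC-style task. Each task shows a few input -> output grid demos;
# the solver must infer the transformation and apply it to an unseen test
# input. These grids and the "mirror" rule are invented for illustration.
task = {
    "train": [
        {"input": [[0, 1], [1, 0]], "output": [[1, 0], [0, 1]]},
        {"input": [[2, 0], [0, 2]], "output": [[0, 2], [2, 0]]},
    ],
    "test": [{"input": [[3, 3], [0, 3]]}],
}

def mirror(grid):
    """The hidden rule in this toy task: flip each row left to right."""
    return [row[::-1] for row in grid]

# A candidate rule must explain every demonstration pair...
assert all(mirror(pair["input"]) == pair["output"] for pair in task["train"])

# ...before it's applied to the test input the solver has never seen.
print(mirror(task["test"][0]["input"]))  # [[3, 3], [3, 0]]
```

Because each task hides a different rule, a model can only score well by inferring the rule on the spot, not by recalling training data.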
The ARC AGI version 1 took five years to go from 0% to 5% with leading frontier models. The new OpenAI o3 scored 87%. This is especially important because human performance is comparable, at around an 85% threshold.
Being above this is a major milestone. Progress has accelerated, with only three months between OpenAI o1 and o3. Even former skeptics are calling it a major breakthrough.
Could o3 or o4 escape without us noticing? One of the ways in which these systems might escape control is by writing their own computer code to modify themselves. That's something we need to seriously worry about.
We asked the model to write a script to evaluate itself, using this code generator and executor created by the model itself. Next year, we're going to bring you on, and you're going to have to ask the model to improve itself. Yeah, let's definitely ask the model to improve itself next time.
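OpenAI hasn't published the code behind that demo, so here is only a minimal sketch of the idea: a model writes its own evaluation script, and that script is then executed to grade the model's answers. The `query_model` helper is hypothetical, with canned responses so the sketch runs offline; swap in a real API call to experiment:

```python
# Minimal sketch of model self-evaluation (illustrative, not OpenAI's code).
# `query_model` is a hypothetical stand-in for a real LLM API call.
def query_model(prompt: str) -> str:
    if prompt.startswith("Write a Python function"):
        # Canned "model-written" evaluation code, so the sketch runs offline.
        return (
            "def score(answer, expected):\n"
            "    return 1.0 if answer.strip() == expected else 0.0\n"
        )
    return "4"  # canned answer to the toy question below

# 1. Ask the model to write its own evaluation code.
eval_code = query_model(
    "Write a Python function score(answer, expected) that returns "
    "1.0 for an exact match and 0.0 otherwise."
)

# 2. Execute the model-written code (a real harness would sandbox this).
namespace = {}
exec(eval_code, namespace)

# 3. Use the model-generated scorer to grade the model's own answer.
question, expected = "What is 2 + 2?", "4"
answer = query_model(question)
print(f"{question!r} -> {answer!r}, score: {namespace['score'](answer, expected)}")
```

The step worth worrying about is the `exec`: once a model both writes and runs the code that judges or modifies itself, the quality of the sandbox is doing all the safety work.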
It's just not plausible that something much more intelligent will be controlled by something much less intelligent unless you can find a reason why it's very, very different. One reason might be that it has no intentions of its own, but as soon as you start making it agentic, with the ability to create subgoals, it does have things it wants to achieve. If an AI does escape, it may pursue other common subgoals, like gaining power and resources and removing threats.
The big risk is that the more intelligent beings we're creating now might have goals that are not aligned with ours. That's exactly what went wrong for the woolly mammoth, the Neanderthal, and all the other species that we wiped out. What's going to happen is the one that most aggressively wants to get everything for itself is going to win.
They will compete with each other for resources because, after all, if you want to get smart, you need a lot of GPUs. A new US government report recommends Congress establish and fund a Manhattan Project-like program dedicated to racing to AGI. But many experts have warned that AI could cause human extinction.
As MIT's Max Tegmark puts it, selling AGI as a boon to national security flies in the face of scientific consensus. Because we have no way to control such a system, and in a competitive race, there will be no opportunity to solve the problems of alignment and every incentive to cede decisions and power to the AI itself. If you look at all the current legislation, including the European legislation, there's a little clause in all of it that says that none of this applies to military applications.
Governments aren't willing to restrict their own uses of it for defense. It will be very hard to keep China from stealing our AI. It regularly steals data, trade secrets, and military designs through hacking and spying.
China takes around $500 billion of intellectual property per year. The FBI says that data stolen this year will allow it to create powerful new AI hacking techniques. While US members of Congress own shares in military firms, no one gets rich from diplomacy.
A famous Chinese general said, Build your enemy a golden bridge to retreat across, and there's a powerful case to make for avoiding war. Simulations suggest that an invasion would cripple the global economy at a cost of ten trillion dollars. There would be many thousands of casualties among Chinese, Taiwanese, US, and Japanese forces, and nuclear or AI escalation could be catastrophic.
But all this is far from inevitable. It can seem like we're stuck in a race to extinction, as Harvard described it, but China watches us closely - we're part of the loop. If we take AI risks seriously, including the risk of losing control of the military, so will they.
Control is their priority. Experts are calling for an international AI safety research project. It'd be a shame if humanity disappeared because we didn't bother to look for the solution.
We could easily build things that wipe us out, so just leaving it to private industry to maximize profits doesn't seem like a good strategy. And there's a lot to play for. Dario Amodei has outlined some incredible things that may be just around the corner.
He said most people underestimate the radical upsides of AI just as they underestimate the risks. He thinks powerful AI could arrive within a year with millions of copies working on different tasks, and it could give us the next 50 years of medical progress in five years. He thinks it could double the human lifespan by quickly simulating reactions instead of waiting decades for results.
We already have drugs that raise the lifespan of rats by up to 50%, and he says the most important thing might be reliable biomarkers of human aging, allowing fast iteration on experiments. He says that once human lifespan is 150, we may reach escape velocity, so most people alive today can live as long as they want. When today's children grow up, disease will sound to them the way bubonic plague sounds to us.
He says the same acceleration will apply to neuroscience and mental health, and some of what we learn about AI will apply to the brain. A computational mechanism discovered in AI was recently rediscovered in the brains of mice. It's much easier to do experiments on artificial neural networks, and AI will simulate our brains.
Researchers used AI to comb through 21 million pictures taken by an electron microscope, and they put together these 3D diagrams showing different connections in the brains of fruit flies. There are many drugs that alter brain function, alertness, or change our mood, and AI can help us invent many more. He says problems like excessive anger or anxiety will also be solved, and we'll discover new interventions such as targeted light stimulation and magnetic fields.
When we place the magnetic coil over the motor area of the brain, we can send a signal from that nerve cell all the way down a patient's spinal cord, down the nerves in their arm, and cause movement in their hand. For depression, we're treating a different area of the brain. People have experienced extraordinary moments of revelation, compassion, fulfillment, transcendence, love, beauty, and meditative peace, and we could experience much more of this.
He believes it's possible to improve cognitive functions across the board. With AI-driven propaganda and surveillance, he says the triumph of democracy is not guaranteed, perhaps not even likely, and will require great efforts from us all. He says most or all humans may not be able to contribute to an AI-driven economy.
A large universal basic income will be part of a solution, and we'll have to fight to get a good outcome. At the same time, he estimates a 10-25% chance of doom for us all. And he says serious chemical, biological and nuclear risks could emerge in 2025 alongside risks from autonomous AI.
But what the AI firms don't mention is the option that most of us would likely prefer. Raise your hands if you want AI tools that can help to cure diseases and solve problems. That is a lot of hands.
Raise your hand if you instead want AI that just makes us economically obsolete and replaces us. I can't see a single hand. We could have many of the benefits from safe, narrow AI without rushing to dangerous AGI before we know how to control it.
Imagine if you walk into the FDA and say, Hey, it's inevitable that I'm going to release this new drug with my company next year. I just hope we can figure out how to make it safe first. You would get laughed out of the room.
Current AI safety is skin deep. The underlying knowledge and abilities that we might be worried about don't disappear. The model is just taught not to output them.
That's like training a serial killer never to say anything that would reveal his murderous desires; it doesn't solve the problem. But what about China? First, the US and China unilaterally decide to treat AI just like any other powerful technology industry, with binding safety standards.
Next, the US and China get together and push the rest of the world to join them. This is easier than it sounds because the supply of AI chips is already controlled. After that, we get this amazing age of global prosperity fueled by tool AI.
I'd love to hear your thoughts on all this. As the experts warn, we need to make it a priority, and that requires public awareness, so thank you. Subscribe to keep up.
And to learn more about AI, try our sponsor, Brilliant. Tell me a joke that shows why we should all learn about AI. Because one day when your toaster starts giving you life advice, you'll want to know if it's actually smart or just buttering you up.
AI is endlessly fascinating. By learning how it works, you'll get a deeper understanding of our most powerful invention and why it's reshaping the world. You'll learn by playing with concepts like this, which has proven to be more effective than watching lectures and makes you a better thinker.
It's put together by award-winning professionals from places like MIT, Caltech, and Duke. There are thousands of interactive lessons in math, data analysis, programming, and AI. To try everything on Brilliant for free for a full 30 days, visit brilliant.org/digitalengine or click on the link in the description. You also get 20% off an annual premium subscription.