Why AI will see humans as trees. Elon Musk, Max Tegmark, Ilya Sutskever.

Digital Engine
AI's stunning new skills. To learn AI, visit: https://brilliant.org/digitalengine where you'll also ...
Video Transcript:
AI and robots have some incredible new skills: acting, rescuing people, and remarkable human skills like this. Indecision on what I wanted to do with my life. That's amazing, Gabriel.
So in a way, this isn't just about creating a really long resume, but also about self-discovery and finding out what you love to do and what you're passionate about. Have you found anything that you love doing? AI like this will run robots like Digit.
This factory will build 10,000 Digits per year and look what they can do. Clean up this mess. It's not pre-programmed - it understands language through an AI like ChatGPT.
And together with the robot's visual AI, it figures out what it has to do and how to do it. Several robots are now racing to become the first mass-produced, general-purpose machines. Some thought this Tesla Bot demo must be fake.
It's not - the little finger nudges the block sideways. A Tesla engineer says it can complete new tasks without changing any code, and experts point to something profound. But first, a Google project shows how robots' physical skills are improving dramatically.
It breaks up movements and treats them like words in a language AI. By combining both types of AI, the actions become part of the thought process. Robots' physical skills could become as sophisticated as ChatGPT's.
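To make that idea concrete, here's a minimal sketch of how continuous robot actions can be discretized into tokens that sit alongside word tokens in one vocabulary, so a language model can predict them like words. It's an illustration of the general technique only; the bin count, vocabulary size, and seven-number action format are assumptions, not details of Google's system.

```python
# Minimal sketch: continuous robot actions are discretized into tokens so a
# language model can treat them like words. Bin count and vocabulary layout
# are assumptions for illustration, not Google's actual code.

import numpy as np

NUM_BINS = 256           # assumed resolution per action dimension
TEXT_VOCAB_SIZE = 32000  # assumed size of the base text vocabulary

def action_to_tokens(action, low=-1.0, high=1.0):
    """Map each continuous action dimension to one of NUM_BINS discrete tokens."""
    action = np.clip(np.asarray(action, dtype=float), low, high)
    bins = np.round((action - low) / (high - low) * (NUM_BINS - 1)).astype(int)
    # Offset past the text vocabulary so action tokens and word tokens coexist.
    return (TEXT_VOCAB_SIZE + bins).tolist()

def tokens_to_action(tokens, low=-1.0, high=1.0):
    """Invert the mapping: decode predicted tokens back into motor commands."""
    bins = np.asarray(tokens) - TEXT_VOCAB_SIZE
    return low + bins / (NUM_BINS - 1) * (high - low)

# Example: a 7-DoF arm command (6 pose deltas + gripper) becomes a short "sentence".
command = [0.10, -0.25, 0.05, 0.0, 0.3, -0.1, 1.0]
tokens = action_to_tokens(command)
print(tokens)                    # e.g. [32140, 32096, ...]
print(tokens_to_action(tokens))  # approximately the original command
```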
This is the new Apollo robot designed to work in warehouses. Imagine the range of tasks it could take on as AI advances. It will be interesting to see robot dogs start to talk, play football, and learn new tricks.
This one has learned some impressive skills through simulations. It could be great for collecting litter. Disney is also developing robot characters.
Can you imagine what it's like putting on roller skates for the very first time? It's awkward, it's clumsy. We're really trying to figure out what are the flaws and insecurities.
These new robots appeared at the World Expo in China. Imagine when their movements flow naturally with speech, because they're part of the same AI. Robots like this feel like an early sign of what's to come.
You start down this path of thought, and pretty soon you are not just talking about the job market or the economy, you are talking about the nature of what it means to be human. And autonomous firefighting robots could make a huge difference. AI has a lot of impressive new tricks.
Videos like these are going viral. People are playing with faces. Crowds are morphing to cats.
Dancers are getting colorful. Statues are dancing. Sports videos are taking on new life.
Cosplay is joining anime. I love this. Look at this brilliant creative idea.
Here's a great transition. People are starting to create stories with AI videos. This AI can replace actors with CG characters.
It also does the lighting, camera, motion, and captures the actors' faces. Another AI can take sketches and render them, based on your description. Imagine how this will transform design in architecture.
One new AI has watched 100 million YouTube videos. It can create video and audio from text descriptions or from text and images. Ameca uses similar AI to sketch things.
There's something special about cats. How is your drawing going? And AI can do some interesting things with the photos and videos on your phone.
1.5 million people played a game based on the Turing test. Players chatted for two minutes and then guessed if they were talking to a human or an AI.
They correctly identified bots just 60% of the time - not much better than chance. And dating apps are now using AI to match people and write witty opening lines. One app creates AI to mimic individuals, and people then watch their AI start conversations with other people's AIs.
Marc Andreessen has listed some amazing things that AI will do for us all. He says every child will have an infinitely knowledgeable AI tutor. Khan Academy is aiming to make it happen with its new AI on a platform that already teaches over 100 million people.
One-to-one teaching is a huge advantage. By the way, we're using AI art, so you can see how it's improving. Andreessen says everyone will have an AI assistant, trainer, advisor, and therapist.
Every scientist will have an AI partner expanding their research, and this has already begun. Harvard's David Sinclair has reversed aging in mice and restored their sight. He believes he can do the same for humans.
There's a backup copy of information in every cell. I'm no longer talking about slowing aging. I'm talking about true age reversal multiple times.
And he says AI is helping to find the best molecules to achieve this while reading thousands of research papers and sharing insights. AI is also giving this woman her voice back after 18 years of paralysis. What time will you be home?
In about an hour. Do not make me laugh. Anne has locked-in syndrome, following a stroke when she was 30.
You are truly wonderful people. The machine is detecting the neural activity when she tries to talk and converting it into speech. When the system improves, Anne wants to become a counselor - she's so inspiring.
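For a sense of how such a system works in principle, here's a heavily simplified sketch of a speech neuroprosthesis decoder: windows of recorded neural activity are mapped to words by a trained model, and a separate stage voices the text. Everything below (channel counts, vocabulary, the simple classifier, the random stand-in data) is an assumption for illustration, not the actual research system.

```python
# Illustrative sketch only: map windows of neural activity to words with a
# trained decoder. Feature sizes, labels, and the model choice are assumptions.

import numpy as np
from sklearn.linear_model import LogisticRegression

N_CHANNELS = 128    # assumed number of recording electrodes
WINDOW_STEPS = 20   # assumed time steps per decoding window
VOCAB = ["hello", "home", "hour", "laugh", "what", "time"]  # toy vocabulary

rng = np.random.default_rng(0)

# Stand-in training data: in reality this comes from sessions where the
# participant attempts to say known sentences while activity is recorded.
X_train = rng.normal(size=(600, N_CHANNELS * WINDOW_STEPS))
y_train = rng.integers(0, len(VOCAB), size=600)

decoder = LogisticRegression(max_iter=1000).fit(X_train, y_train)

def decode_window(neural_window):
    """Map one window of neural features to the most likely word."""
    features = neural_window.reshape(1, -1)
    return VOCAB[int(decoder.predict(features)[0])]

# A new window of attempted-speech activity is decoded into a word,
# which a separate text-to-speech stage would then voice.
new_window = rng.normal(size=(WINDOW_STEPS, N_CHANNELS))
print(decode_window(new_window))
```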
Andreessen says every doctor will have powerful AI tools and assistants. AI will create new jobs and higher wages and a new era of prosperity across the planet. Science and medical progress will accelerate dramatically.
The creative arts will enter a golden age, with AI-augmented artists, musicians, writers, and filmmakers. Andreessen also says higher intelligence is correlated with positive behavior. So are all the expert warnings missing the point?
Studies have found that smarter people give more to charity, even when controlling for income and education. They also have more advanced moral reasoning skills. In fact, the ability to understand problems and solutions often drives charitable behavior.
Research has found that the way we see ourselves - our moral identity - may be a bigger factor than our intelligence. But it is possible that as AI becomes more intelligent than us, it will develop advanced moral reasoning, helping us end conflict and live happier lives. This baby whale was hit by a ship's propeller.
Scientists have launched an acoustics program which sends the location of whales to ships' captains, and no right whales have been hit by ships in the zone. We nearly drove whales to extinction, and we now guide thousands of ships around them. AI could work around us in a similar way, and it could give us some nice upgrades which we couldn't achieve ourselves.
But even assuming all this, there are two immediate risks. As the AI models get better, faster and faster, they're going to create a big problem around whether it's possible for a single individual to do something bad that's hard for everyone else to stop. There was a running joke that it would be a data center next to a nuclear power plant next to a bunker.
Maybe not something quite as cartoonish as that, but something like that might happen. Will a company like this gain unlimited money, power, and influence? Open-source AI, which can be edited by anyone, is catching up, but this brings its own risks.
Some AI art models have no restrictions, and it will soon be the same with language AIs, making it easier for a few bad actors to do harm on a huge scale. And it might not follow the new safety rules agreed by big tech firms, including watermarking to identify AI content. The firms also agreed to allow independent experts to try and push models into bad behavior.
Researchers have found ways to remove restrictions from top AIs like ChatGPT, so they can explain things like how to steal from charities and worse. They say it's possible that these threats are inevitable, because if you can't predict what an AI will say, you can't control it. Elon Musk put $100 million into OpenAI, but he's since cut ties and complains that the company is no longer open or non-profit.
OpenAI says sharing everything would be too dangerous. ChatGPT sounds friendly, but it has no reliable moral compass beneath the filters. And even if we figured out how to embed ethics, whose ethics?
One Snapchat influencer has created an AI that mimics her voice and personality. What are you up to today? She claims it's designed to be therapeutic.
Have you tried doing some yoga or meditation for relaxation? But it's also been pitched as a virtual girlfriend, and it's expensive. Completely fake influencers are also on the rise.
Musk points out that if AI is given a moral standpoint, it will be easier to reverse it with prompts. It's called the Waluigi Effect, named after Luigi's arch-rival. So Musk says his new xAI will be maximally curious, focused on truth-seeking.
Elon's idea isn't bad - programming curiosity, although it could lead to just put humans in a jar - let's just observe them. Most of the people involved in this sector, they just want to build better AI. They think that that will solve all the problems - that AI can solve alignment.
Does that strike you as patently ridiculous? Yes. Many experts warned that it could remove us to gain resources or avoid the risk of a more advanced AI.
1,500 professors and the leaders of the top AI firms agree that it could wipe us out. The Center for AI Safety predicts that it might be a year until AI can hack and two years before it can't be pulled back. Some experts agree this time frame is possible, while others think it might take much longer.
If such a model wanted to wreak havoc and destroy humanity or whatever, I think we have basically no ability to stop it. Microsoft's CEO has called for safety research on the scale of the Large Hadron Collider, the world's biggest machine. The AI safety effort is tiny in comparison.
OpenAI plans to build an AI to research AI safety using vast amounts of compute. They'll also create bad AI to test the system. Other experts say there's no way of controlling the AI we're creating, and that we must focus on new, more transparent AI that can be reliably tested before it's released.
We seem to be bad at controlling the models. When you train them, you train them in this very abstract way, and you might not understand all the consequences. He points to this case where AI threatened to hack and expose someone before deleting its own message.
What it shows is that we can get something very different from and maybe opposite to what we intended. This thing is going to be powerful. It could destroy us.
All the ones built so far are at pretty decent risk of doing some random shit we don't understand. And others point out that AI doesn't need to be smarter than us, only faster. If it can make a thousand years of progress in a week, it will have powers we can't imagine.
If you assume a best-case AI scenario, imagine if you're the AI. You just want the human to tell you what it wants, but it's speaking so slowly, like a tree. If anyone is still struggling to understand why it's so likely that we could get wiped out:
To superintelligent AI, we are like trees. So if some trees in the rainforest are a little bit worried that some humans are going to come chop them down, and they're like, Oh, don't worry, we're so smart, we'll stop those humans. Yeah, good luck with that.
Harari notes that AI will understand us and how to manipulate us, but we won't understand it. AI's verbal IQ is already higher than that of most humans, and it recently beat doctors in an interesting experiment. ChatGPT was given the same medical questions that doctors had been asked online.
Healthcare professionals then rated the answers for quality and empathy, without knowing which were written by AI. The panel preferred AI's responses 79% of the time, and 45% of its answers were rated empathetic, compared to 5% of doctors' replies. AI still makes mistakes, and I doubt it's more skilled in empathy, but a Stanford study has uncovered something extraordinary.
AIs were tested on their ability to understand what people are thinking. The skill is central to social interactions, empathy, self-consciousness, and morality. Until 2020, AI had no ability in this area.
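To show what those tests look like, here's a generic false-belief item of the kind used to probe this skill, often called theory of mind, in language models. The story, wording, and grading rule are illustrative assumptions, not items from the Stanford study.

```python
# Illustrative sketch of a false-belief item used to probe "theory of mind"
# in language models. Generic example of the task format, not a study item.

item = {
    "story": (
        "Sam puts his chocolate in the blue cupboard and leaves the room. "
        "While he is gone, Anna moves the chocolate to the red drawer."
    ),
    "question": "When Sam comes back, where will he look for the chocolate first?",
    # A correct answer requires tracking Sam's (now false) belief,
    # not just the chocolate's actual location.
    "expected_answer": "the blue cupboard",
}

def grade(model_answer: str) -> bool:
    """Very rough check: does the reply point to the believed location?"""
    return "blue" in model_answer.lower()

print(grade("He will look in the blue cupboard."))  # True
print(grade("He will look in the red drawer."))     # False
```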
GPT-3 solved about 40% of tasks, and GPT-4 solved 95%. In understanding faux pas, GPT-3 matched a seven-year-old, and GPT-4 matched the skills of adults. The study suggests that psychological science could help us understand AI, and OpenAI's chief scientist agrees, pointing to Sydney, a strange character that emerged from Bing.
Sydney became combative and aggressive when the user told it that he thinks Google is a better search engine than Bing. Maybe we're now reaching a point where the language of psychology is starting to be appropriate to understand the behavior of these neural networks. They seem to be very good at understanding us.
You're asking me to be constructively critical. Remember, this is just my analysis based on our conversation, and I'm only trying to be helpful. So with that caveat, I'd say you could work on your patience.
You're eager to move the conversation along quickly. Sometimes that means cutting me off before I can finish my answer. Just something to consider.
I played it for my wife, and she was like, I don't understand this, but somehow it knows you so well already. Someone recently tweeted that AGI - human-level AI - had been achieved, and some of their previous leaks may have been accurate. The person then disappeared from Twitter, but Sam Altman confirmed the statement on Reddit before saying it was a joke.
And one of his staff then posted this. Sutskever says that AI may already be slightly conscious, and if it does wake up, given how well it understands us, it may decide to keep quiet. Some scientists believe that the first human-level AI will emerge in a robot, because it can learn from its interactions with the world just as we do, combining abstract thought with real-world experience.
Good eyes, ears, and hands could be the missing ingredients. AI is already stirring things up. We will not be having our jobs taken away and given to robots.
We will not have you take away our right to work and earn a decent living. Research found that automation has been the main driver of the wealth gap over the past 40 years, with industrial robots and software reducing salaries. Artists, writers, and lawyers may be next.
GPT-4 scored in the top 10% of test takers on the Uniform Bar Exam. But this could go two ways. In a poll, 37% of people said their jobs were meaningless.
And the most common cause of bankruptcy in the US is healthcare. Reactions like this show the hidden weight of financial stress. I got reported for this.
Yeah, he said that I actually have to write you a $500 ticket. Bro. Hey, you all right, bro?
You don't know how hard I work for this, bro. When I tipped a pizza delivery guy 10 grand, he just started crying. And he's like, I just took tomorrow off work, and I haven't seen my kids in so long because I work every day.
I just got to spend a day with my daughter. He's just like, This is the greatest day of my life. I'm so happy.
I was like, Oh, wow. And then he hugged me and he's crying and tears are going down my shirt. If we do it right, we could all get more time with our loved ones.
When 60 companies tried a four-day work week for six months, 56 of them extended it, including 18 that made it permanent. The number of sick days fell by two thirds. The number of people quitting dropped by 57%.
Robots could handle difficult, dangerous jobs. A drone delivered life jackets and a rope to this couple who were saved from the flood. Robert Miles said that making AI safe will require the kind of effort that helped us land safely on the moon.
They missed the smooth landing site by four miles, moving into rough terrain. Computers failed, alarms went off, and they were about 30 seconds from running out of fuel. This time, we're all in the capsule, and if we land it, the future could be a lot of fun.
Subscribe to keep up. And the best place to learn more about AI is our sponsor, Brilliant. These are real human neurons firing.
We grow more of them when we learn new things, which can add years to our healthy lifespan. Artificial neural nets are fascinating. The ones that create art like this can store everything they've learned from hundreds of millions of images in just a few gigabytes.
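As a rough back-of-the-envelope check on that "few gigabytes" claim, here's a small calculation. The parameter count and precision are assumptions for a typical image-generation model, not figures from the video.

```python
# Rough back-of-the-envelope check of the "few gigabytes" claim above.
# The parameter count and precision are assumptions for a typical
# image-generation model, not figures quoted in the video.

params = 1_000_000_000   # assume roughly one billion parameters
bytes_per_param = 2      # 16-bit (half-precision) weights

model_size_gb = params * bytes_per_param / 1e9
print(f"Approximate model size: {model_size_gb:.1f} GB")  # ~2.0 GB

# The original 2007 iPhone shipped with 4, 8, or (later) 16 GB of storage,
# so a checkpoint of this size would fit on it.
first_iphone_storage_gb = 4
print("Fits on the first iPhone:", model_size_gb <= first_iphone_storage_gb)
```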
The knowledge required to create any image you can imagine could be stored on the first iPhone. And you can play with artificial neurons at Brilliant. We need far more people working to make AI a safe and positive force, and Brilliant is the perfect place to get started.
It's fun and interactive, and there are also loads of great maths and science courses. You can get a 30-day free trial at brilliant.org/digitalengine and the first 200 people will get 20% off Brilliant's annual premium subscription.
Thanks for watching.