We may look on our time as the moment civilization was transformed, as it was by fire, agriculture, and electricity. In 2023, we learned that a machine taught itself how to speak to humans like a peer, which is to say, with creativity, truth, error, and lies. The technology known as a chatbot is only one of the recent breakthroughs in artificial intelligence—machines that can teach themselves superhuman skills.
We explored what's coming next at Google, a leader in this new world. CEO Sundar Pichai told us AI will be as good or as evil as human nature allows. The revolution, he says, is coming faster than you know.
The story will continue in a moment. Do you think society is prepared for what's coming? You know, there are two ways I think about it.
On one hand, I feel no, because the pace at which we can think and adapt as societal institutions compared to the pace at which the technology is evolving seems to be a mismatch. On the other hand, compared to any other technology I've seen, more people are worried about it earlier in its life cycle, so I feel optimistic. The number of people who have started worrying about the implications, and hence the conversations are starting in a serious way as well.
Our conversation with 50-year-old Sundar Pichai started at Google's new campus in Mountain View, California. It runs on 40 percent solar power and collects more water than it uses, high tech that Pichai couldn't have imagined growing up in India. With no telephone at home, his family was on a waiting list for a rotary phone for about five years, and he can still vividly recall the day it finally arrived.
It changed their lives. To him, it was the first moment he understood the power of access to technology, and it probably led him to what he's doing today.
What he's been doing since 2019 is leading both Google and its parent company, Alphabet, valued at $1.3 trillion. Worldwide, Google runs 90 percent of internet searches and 70 percent of smartphones.
But its dominance was attacked this past February when Microsoft linked its search engine to a chatbot. In the race for AI dominance, Google just released its own chatbot, named Bard. "We're really excited about it. It's really here to help you brainstorm ideas, generate content like a speech, or a blog post, or an email."
We were introduced to Bard by Google Vice President Sissie Hsiao and Senior Vice President James Manyika. The first thing we learned was that Bard does not look for answers on the internet the way Google search does. "So, I wanted to get inspiration from some of the best speeches in the world."
Bard's replies come from a self-contained program that was mostly self-taught. Our experience was unsettling—confounding, absolutely confounding. Bard appeared to possess the sum of human knowledge, with microchips more than 100,000 times faster than the human brain.
So, we asked Bard to summarize the New Testament. It did, in five seconds and 17 words. We asked for it in Latin; that took another four seconds.
Then we played with a famous six-word short story often attributed to Hemingway: "For sale: baby shoes, never worn." Wow. The only prompt we gave was, "Finish this story," and in five seconds, holy cow!
"The shoes were a gift from my wife, but we never had a baby." From the six-word prompt, Bard created a deeply human tale with characters it invented, including a man whose wife could not conceive and a stranger, grieving after a miscarriage, longing for closure.
I am rarely speechless. I don't know what to make of this. We asked for the story in verse; in five seconds, there was a poem written by a machine, with breathtaking insight into the mystery of faith. Bard wrote, "She knew her baby's soul would always be alive." The humanity at superhuman speed was a shock.
How is this possible? James Manyika told us that over several months, Bard read most everything on the internet and created a model of what language looks like. Rather than search, its answers come from this language model.
So, for example, if I said to you, "Scott, peanut butter and...," it learns to predict: okay, "peanut butter" is usually followed by "jelly." It tries to predict the most probable next words based on everything it's learned. So, it's not going out to find stuff; it's just predicting the next word.
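Manyika's "peanut butter and jelly" example can be made concrete with a toy next-word predictor. The sketch below is a drastic simplification (a bigram counter over a made-up corpus, nothing like Bard's neural language model), but it shows the same principle: predict the most probable next word from what was seen in training.

```python
from collections import Counter, defaultdict

# A toy corpus standing in for "most everything on the internet".
corpus = ("peanut butter and jelly . peanut butter and jelly . "
          "peanut butter and honey .").split()

# Count how often each word follows each other word (bigram counts).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(predict_next("peanut"))  # butter
print(predict_next("and"))     # jelly (seen twice, vs. honey once)
```

The model never "looks anything up"; it only replays the statistics of its training text, which is Manyika's point.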
But it doesn't feel like that. We asked Bard why it helps people, and it replied, "Because it makes me happy." Bard, to my eye, appears to be thinking, appears to be making judgments.
That's not what's happening. These machines are not sentient; they are not aware of themselves. They can exhibit behaviors that look like that because, keep in mind, they've learned from us. We are sentient beings. We have feelings, emotions, ideas, thoughts, perspectives. We've reflected all of that in books, in novels, in fiction, so when they learn from that, they build patterns from it. It's no surprise to me that the exhibited behavior sometimes looks like maybe there's somebody behind it. There's nobody there. These are not sentient beings.
Zimbabwe-born, Oxford-educated James Manyika holds a new position at Google: his job is to think about how AI and humanity will best coexist. "AI has the potential to change many ways in which we've thought about society, about what we're able to do, the problems we can solve. But AI itself will pose its own problems."
Could Hemingway write a better short story? Maybe. But Bard can write a million before Hemingway could finish one.
Imagine that level of transformation. Automation across the economy means a lot of people can be replaced by this technology. Yes, there are some job occupations that will start to decline over time; however, there are also new job categories that will grow over time.
But the biggest change will be in the jobs that will be transformed—something like more than two-thirds will have their definitions change, not go away but change, because they are now being assisted by AI and by automation. This is a profound change that has implications for skills. How do we assist people in building new skills to learn to work alongside machines, and how do these complement what people do today?
This is going to impact every product across every company, and that is why I think it's a very, very profound technology. We are just in the early days. Every product in every company? That's right. AI will impact everything.
For example, you could be a radiologist. If you think about five to ten years from now, you’re going to have an AI collaborator with you. It may triage your cases when you come in the morning.
Let's say you have 100 things to go through; it may say, "These are the most serious cases you need to look at first." Or, when you're examining something, it may pop up and say, "You may have missed something important." Why shouldn't we take advantage of a super-powered assistant to help you across everything you do?
You may be a student trying to learn math or history, and you will have something helping you. When we asked Sundar Pichai what jobs would be disrupted, he said knowledge workers—people like writers, accountants, architects, and ironically, software engineers. AI writes computer code today.
Sundar Pichai walks a narrow line; a few employees have quit, some believing that Google's AI rollout is too slow, others that it's too fast. And there are some serious flaws. James Manyika asked Bard about inflation; it wrote an instant essay in economics and recommended five books. But days later we checked, and none of the books is real. Bard fabricated the titles. This very human trait, error with confidence, is called hallucination in the industry.
Are you getting a lot of hallucinations? Yes, which is expected; no one in the field has yet solved the hallucination problem. All models have this issue.
Is it a solvable problem? It's a matter of intense debate. I think we'll make progress toward curing hallucinations.
Bard features a "Google it" button that leads to old-fashioned search. Google has also built safety filters into Bard to screen for things like hate speech and bias. How great a risk is the spread of disinformation?
AI will challenge that in a deeper way. The scale of this problem is going to be much bigger. Bigger problems, he says, with fake news and fake images.
It will be possible with AI to create a video easily, where it could be Scott saying something, or me saying something, that we never said. It could look accurate, and at a societal scale, it can cause a lot of harm. Is Bard safe for society? The way we have launched it today, as an experiment in a limited way, I think so. But we all have to be responsible in each step along the way. Pichai told us he's being responsible by holding back advanced versions of Bard for more testing; those versions, he says, can reason, plan, and connect to internet search.
You are letting this out slowly so that society can get used to it. That’s one part of it. One part is also so that we get user feedback and can develop more robust safety layers before we deploy more capable models.
Of the AI issues we talked about, the most mysterious is called emergent properties. Some AI systems are teaching themselves skills that they weren’t expected to have. How this happens is not well understood.
For example, one Google AI program adapted on its own after it was prompted in Bengali, the language of Bangladesh, which it was not trained to know. We discovered that with very few prompts in Bengali, it can now translate all of Bengali. So now, all of a sudden, we have a research effort where we're trying to get to a thousand languages.
There is an aspect of this which we, in the field, call a "black box." You don't fully understand it, and you can't quite tell why it said this or why it got something wrong. We have some ideas, and our ability to understand this gets better over time, but that's where the state of the art is.
You don't fully understand how it works, and yet you've turned it loose on society? Let me put it this way: I don't think we fully understand how a human mind works either. About that black box, we wondered how Bard came up with a short story that seemed so disarmingly human, discussing the pain that humans feel and talking about redemption. How did it do all of those things if it's just trying to figure out what the next right word is? I mean, I've had these experiences talking with Bard as well. There are two views of this. There are a set of people who view this as just algorithms, merely repeating what they've seen online. Then there is the view that these algorithms are showing emergent properties: being creative, reasoning, planning, and so on. Personally, I think we need to approach this with humility. Part of the reason I think it's good that some of these technologies are getting out is so that society, people like you and others, can process what's happening, and we begin this conversation and debate.
I think it's important to do that. When we come back, we'll take you inside Google's artificial intelligence labs, where robots are learning. The revolution in artificial intelligence is the center of a debate ranging from those who hope it will save humanity to those who predict doom.
Google lies somewhere in the optimistic middle, introducing AI in steps so civilization can get used to it. We saw what's coming next in machine learning at Google's AI lab in London, a company called DeepMind, where the future looks something like this.
Look at that! Oh my goodness, they've got a pretty good kick on them! Good game!
A soccer match at DeepMind looks like fun and games, but here's the thing: humans did not program these robots to play; they learned the game by themselves. They're coming up with these interesting, different strategies, different ways to walk, different ways to block, and they're doing it; they're scoring over and over again. Raia Hadsell, vice president of research and robotics, showed us how engineers used motion-capture technology to teach the AI program how to move like a human.
But on the soccer pitch, the robots were told only that the object was to score. Self-learning programs spent about two weeks testing different moves. They discarded those that didn't work, built on those that did, and created All-Stars.
There's another goal! And with practice, they get better. Hadsell told us that, independent of the robots, the AI program plays thousands of games from which it learns and invents its own tactics.
Here, I think that red player is going to grab it, but instead, it just stops, hands it back, passes it back, and then goes for the goal. And the AI figured out how to do that by itself. That's right, that's right.
And it takes a while. At first, all the players just run after the ball together, like a gaggle of six-year-olds the first time they're playing ball. Over time, what we start to see is, "Ah, what's the strategy? You go after the ball; I'm coming around this way. Or we should pass, or I should block while you get to the goal." So we see all of that coordination emerging in the play. This is a lot of fun, but what are the practical implications of what we're seeing here?
This is the type of research that can eventually lead to robots that can come out of the factories and work in other types of human environments. Think about mining, think about dangerous construction work, or exploration, or disaster recovery. Hadsell is among 1,000 humans at DeepMind.
The company was co-founded just 12 years ago by CEO Demis Hassabis. "If I think back to 2010, when we started, nobody was doing AI. There was nothing going on in industry. People used to eye-roll when we talked to investors about doing AI. So we could barely get two cents together to start off with, which is crazy if you think about the billions now being invested in AI startups." Cambridge, Harvard, MIT: Hassabis has degrees in computer science and neuroscience.
His PhD is in human imagination. And imagine this: when he was 12, in his age group, he was the number two chess champion in the world. It was through games that he came to AI.
"I've been working on AI for decades now, and I've always believed that it's going to be the most important invention that humanity will ever make. Will the pace of change outstrip our ability to adapt? I don't think so.
I think that we, um, you know, we're sort of an infinitely adaptable species. You know, you look at today—us using all of our smartphones and other devices—and we effortlessly sort of adapt to these new technologies. And this is going to be another one of those changes like that.
Among the biggest changes at DeepMind was the discovery that self-learning machines can be creative. Hassabis showed us a game-playing program that learns. It's called AlphaZero, and it dreamed up a winning chess strategy no human had ever seen. But this is just a machine; how does it achieve creativity? "It plays against itself tens of millions of times, so it can explore parts of chess that maybe human chess players and programmers who program chess computers haven't thought about before. It never gets tired, it never gets hungry; it just plays chess all the time. It's kind of an amazing thing to see, because you set off AlphaZero in the morning, and it starts off playing randomly. By lunchtime, it's able to beat me and beat most chess players, and by the evening it's stronger than the world champion." Demis Hassabis sold DeepMind to Google in 2014.
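Hassabis's account of AlphaZero, random in the morning and superhuman by evening, describes self-play reinforcement learning. The toy sketch below is my own illustration, not AlphaZero's actual method (which pairs deep neural networks with Monte Carlo tree search); it learns the much simpler game of Nim the same way, by playing itself and keeping a table of how often each position leads to a win.

```python
import random

random.seed(0)

START, MOVES = 10, (1, 2)  # Nim: remove 1 or 2 stones; taking the last stone wins
value = {}                 # value[s]: learned chance that the player to move at s wins

def pick(s, explore=True):
    """Choose a move: mostly greedy, occasionally random so new lines get explored."""
    moves = [m for m in MOVES if m <= s]
    if explore and random.random() < 0.2:
        return random.choice(moves)
    # Leave the opponent in the position that looks worst for them.
    return min(moves, key=lambda m: value.get(s - m, 0.5))

def self_play():
    """Play one game against itself, then update the value table from the result."""
    s, path = START, []
    while s > 0:
        path.append(s)
        s -= pick(s)
    result = 1.0  # the player who moved last took the final stone and won
    for pos in reversed(path):
        old = value.get(pos, 0.5)
        value[pos] = old + 0.1 * (result - old)
        result = 1.0 - result  # alternate perspective for the other player

for _ in range(20_000):
    self_play()

# Nim theory: positions divisible by 3 are lost for the player to move.
print(round(value[10], 2), round(value[9], 2))
```

After 20,000 self-played games the table rates position 9 (a theoretical loss for the player to move) well below position 10 (a win), with no Nim knowledge programmed in; only the rules and the win signal were supplied.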
One reason was to get his hands on this: Google has the enormous computing power that AI needs. This computing center is in Pryor, Oklahoma, and Google has 23 of these, putting it near the top in computing power in the world. Two advances make AI ascendant.
First, the sum of all human knowledge is online; second, brute-force computing that very loosely approximates the neural networks of the brain. "Things like memory, imagination, planning, reinforcement learning, these are all things that are known about how the brain does it, and we wanted to replicate some of that in our AI systems." Those are some of the elements that led to DeepMind's greatest achievement so far: solving an impossible problem in biology. Proteins are the building blocks of life, but only a tiny fraction were understood, because 3D mapping of just one could take years. DeepMind created an AI program for the protein problem and set it loose.
Well, it took us about four or five years to figure out how to build the system; it was probably the most complex project we've ever undertaken. But once we did that, it can solve a protein structure in a matter of seconds, and over the last year, we did all 200 million proteins that are known to science. How long would that have taken using traditional methods?
Well, the rule of thumb I was always told by my biologist friends is that it takes a whole PhD, five years, to do one protein structure experimentally. So if you think 200 million times five, that's a billion years of PhD time it would have taken. DeepMind made its protein database public; a gift to humanity, Hassabis called it.
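Hassabis's back-of-the-envelope figure checks out:

```python
proteins = 200_000_000  # protein structures DeepMind predicted over the year
years_each = 5          # rough PhD time to solve one structure experimentally

total = proteins * years_each
print(f"{total:,} years")  # 1,000,000,000 years, i.e. a billion years of PhD time
```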
How has it been used? It's been used in an enormously broad number of ways, actually, from malaria vaccines to developing new enzymes that can eat plastic waste to new antibiotics. Most AI systems today do one or maybe two things well.
The soccer robots, for example, can't write up a grocery list, book your travel, or drive your car. The ultimate goal is what's called artificial general intelligence: a learning machine that can score on a wide range of talents. Would such a machine be conscious of itself?
So, that's another great question. You know, philosophers haven't really settled on a definition of consciousness yet, but if we mean by sort of self-awareness and these kinds of things, I think there's a possibility AIs one day could be. I definitely don't think they are today, but I think, again, this is one of the fascinating scientific things we're going to find out on this journey towards AI.
Even unconscious, current AI is superhuman in narrow ways. Back in California, we saw Google engineers teaching skills that robots will practice continuously on their own. "Push the blue cube to the blue triangle." They comprehend instructions. "Push the yellow hexagon to the yellow heart." And they learn to recognize objects.
What would you like? How about an apple? "On my way. I will bring an apple to you." Vincent Vanhoucke, senior director of robotics, showed us how Robot 106 was trained on millions of images. "I am going to pick up the apple." It can recognize all the items on a crowded countertop. "If we can give the robot a diversity of experiences, a lot more different objects in different settings, the robot gets better at every one of them." Now that humans have plucked the forbidden fruit of artificial knowledge (thank you), we start the genesis of a new humanity.
AI can utilize all the information in the world, more than any human could ever hold in their head, and I wonder if humanity is diminished by this enormous capability we're developing. I think the possibilities of AI do not diminish humanity in any way. In fact, in some ways, I think they actually raise us to even deeper, more profound questions.
Google's James Manyika sees this moment as an inflection point. "I think we're constantly adding these superpowers or capabilities to what humans can do, in a way that expands possibilities as opposed to narrowing them. I don't think of it as diminishing humans, but it does raise some really profound questions for us.
Who are we? What do we value? What are we good at?
How do we relate with each other? Those become very, very important questions that are constantly going to be, in one sense, exciting, but perhaps unsettling too." It is an unsettling moment.
Critics argue the rush to AI comes too fast, while competitive pressure, among giants like Google and startups you've never heard of, is propelling humanity into the future, ready or not. "But I think if I take a 10-year outlook, it is so clear to me we will have some form of very capable intelligence that can do amazing things, and we need to adapt as a society for it." Google CEO Sundar Pichai told us society must quickly adapt, with regulations for AI in the economy, laws to punish abuse, and treaties among nations to make AI safe for the world.
"These are deep questions, and we call this alignment. One way we think about it: how do you develop AI systems that are aligned to human values, including morality? This is why I think the development of this needs to include not just engineers, but social scientists, ethicists, philosophers, and so on. I think we have to be very thoughtful.
I think these are all things society needs to figure out as we move along. It's not for a company to decide." We'll end with a note that has never appeared on 60 Minutes, but one that, in the AI revolution, you may be hearing often: the preceding was created with 100 percent human content.