Wharton professor: 4 scenarios for AI’s future | Ethan Mollick for Big Think

Big Think
Wharton professor Ethan Mollick explains why “co-intelligence” may be the future of AI.
Video Transcript:
If you haven't stayed up three nights being anxious about AI, if you haven't had an existential crisis about it, you probably haven't really experienced AI. It is a weird thing. What's it mean to be human?
What's it mean to think? What will I do for a living? What will my kids do?
What does it mean that it's better than me at some of this stuff? Is this real or is it an illusion? Nobody actually knows where AI is heading right now and how good it's going to get.
But we shouldn't feel like we don't have control over how AI is used. As managers and leaders, you get to make these choices about how to deploy these systems to increase human flourishing. As individuals, we get to decide how to be the human who uses these systems well.
AI is here to stay. It is something you get to decide how to handle: something to learn to work with and to thrive with, rather than just be scared of. I'm Ethan Mollick, a professor at the Wharton School of the University of Pennsylvania, where I study innovation, entrepreneurship, and artificial intelligence.
I'm the author of the book Co-Intelligence: Living and Working with AI. Artificial intelligence is about prediction. Basically, AI is a very fancy autocomplete.
For a long time, that meant numerical prediction, running fairly complex mathematical algorithms so that Netflix could recommend a show for you to watch, Amazon could figure out where to site its next warehouse, or Tesla could figure out how to use data to keep its cars driving themselves. The thing these systems were bad at predicting was the next word in a sentence. So if your sentence ended with the word "filed," it didn't know whether you were filing your taxes or filing your nails.
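As a purely illustrative sketch (the words and probabilities below are invented, not taken from any real model), next-word prediction can be thought of as picking the most probable continuation, and the last word alone often isn't enough to do that:

```python
# Toy illustration of next-word prediction as conditional probability.
# All words and probabilities here are made up for the "filed" example above.

# Looking only at the last word, "filed" is essentially a coin flip:
p_next_given_last_word = {
    "filed": {"taxes": 0.35, "nails": 0.34, "paperwork": 0.31},
}

# Looking at more of the context resolves the ambiguity
# (the surrounding words here are hypothetical):
p_next_given_context = {
    ("accountant", "filed"): {"taxes": 0.90, "paperwork": 0.09, "nails": 0.01},
    ("manicurist", "filed"): {"nails": 0.95, "paperwork": 0.04, "taxes": 0.01},
}

def predict(dist):
    """Return the most probable next word from a toy distribution."""
    return max(dist, key=dist.get)

print(predict(p_next_given_last_word["filed"]))                 # ambiguous guess
print(predict(p_next_given_context[("accountant", "filed")]))   # "taxes"
print(predict(p_next_given_context[("manicurist", "filed")]))   # "nails"
```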
What happened that was different was the innovation of the large language model. In 2017, a breakthrough paper called "Attention Is All You Need" outlined a new kind of AI architecture, the transformer with its attention mechanism, that basically let the AI pay attention not just to the final word in the sentence, but to the entire context of the sentence, the paragraph, the page, and so on.
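Here is a rough sketch of that attention idea in a few lines of NumPy, assuming random placeholder vectors rather than real learned weights: each token's representation is rebuilt as a weighted mix of every token in the context, so the whole sentence informs the prediction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "embeddings" for a 5-token context; real models learn these vectors.
tokens = ["I", "filed", "my", "taxes", "yesterday"]
d = 8                                   # embedding size (tiny, for illustration)
X = rng.normal(size=(len(tokens), d))   # one vector per token

# Scaled dot-product attention (single head, no learned projections here).
scores = X @ X.T / np.sqrt(d)           # how strongly each token attends to each other token
weights = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)  # softmax over the context
contextual = weights @ X                # each token's new, context-aware representation

print(weights.round(2))                 # each row sums to 1: one token's attention over the whole context
```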
Large language models work by taking in huge amounts of information, like all the data on the internet. There's a lot of Harry Potter fan fiction in there, for example, because that's what the internet contains. Based on all of this data, the AI goes through a process called pre-training. This is the really expensive part that only a few companies in the world can do.
And during that time, the AI learns the relationships between words or parts of words called tokens. So it learns that "kiwi" and "strawberry" are closely related, but that "hawk" and "potato" are not closely related. It learns across thousands of dimensions in a multidimensional space we can't understand.
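One common way to make "closely related" precise is cosine similarity between the tokens' vectors. The tiny hand-made vectors below are assumptions for illustration only; real embeddings are learned and have hundreds or thousands of dimensions.

```python
import numpy as np

# Hand-made 4-dimensional "embeddings"; the numbers are invented just to
# mirror the kiwi/strawberry vs. hawk/potato example.
embeddings = {
    "kiwi":       np.array([0.9, 0.8, 0.1, 0.0]),
    "strawberry": np.array([0.8, 0.9, 0.0, 0.1]),
    "hawk":       np.array([0.0, 0.1, 0.9, 0.8]),
    "potato":     np.array([0.7, 0.1, 0.0, 0.2]),
}

def cosine(a, b):
    """Similarity of two vectors: near 1.0 means closely related, near 0 means unrelated."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(embeddings["kiwi"], embeddings["strawberry"]))  # high: closely related
print(cosine(embeddings["hawk"], embeddings["potato"]))      # low: not closely related
```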
Those learned relationships are what let it do predictions. But it turns out that when large language models get big enough, they also do all kinds of other things we didn't expect. We didn't expect them to be good at medicine, but they were actually quite good and beat doctors in many circumstances.
We didn't expect them to be good at creativity, but they can generate ideas better than most humans can. And so they're general-purpose models. They do many different things.
Interestingly, "GPT" doesn't just stand for the "GPT" in "ChatGPT." It also stands for "general-purpose technology," one of these once-in-a-generation technologies, things like steam power or the internet or electrification, that change everything they touch. They alter society.
They alter how we work. They alter how we relate to each other in ways that are really hard to anticipate. So you can't think in certainties.
You should think in scenarios. And there are really four scenarios for the future. The first is actually, I think, the least likely: that the world is static, that this is the best AI you're ever going to use.
I think that's unlikely. In fact, whatever AI you're using now is the worst AI you're ever going to use. Even if core large language model development stopped right now, there's another ten years of just making it work better with tools and with industry, in ways that'll continue to be disruptive.
So I think that's a dangerous view, because it isn't static. It's evolving. So I actually want to skip to the last scenario before covering scenarios two and three.
So scenario four is AGI, artificial general intelligence. This is the idea that a machine will be smarter than a human in almost all tasks. And this is the explicit goal of OpenAI.
They want to build AGI. And there's a lot of debate about what this means. When we have a machine smarter than a human and it can do all humans' jobs, can it create AI smarter than itself?
Then we have artificial superintelligence, ASI, and humans become obsolete overnight. And there are people genuinely worried about this, and I think it's worth spending a little time being slightly worried too, because other people are. But I think that scenario tends to take agency away from us, because it's something that happens to us.
And I think it's more important to worry about what I call scenarios two and three, which are continued linear or exponential growth. We don't know how good AI is going to get. Right now, the doubling time for AI capability is about every five to nine months, which is an exceptionally fast doubling time.
Moore's Law, the rule that has kind of kept the computer world going, doubles the power of computer processing chips every twenty-four to twenty-six months. So this is a very fast rate of growth by comparison.
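A quick back-of-the-envelope comparison of those growth rates (the 24-month window is an arbitrary choice for illustration; the doubling times are the ones quoted above):

```python
# Compare compounded growth over two years at the doubling times mentioned above.
months = 24

ai_fast    = 2 ** (months / 5)    # capability doubling every 5 months
ai_slow    = 2 ** (months / 9)    # capability doubling every 9 months
moores_law = 2 ** (months / 24)   # chip performance doubling every ~24 months

print(f"AI capability (5-month doubling): ~{ai_fast:.0f}x")   # roughly 28x
print(f"AI capability (9-month doubling): ~{ai_slow:.1f}x")   # roughly 6x
print(f"Moore's Law over the same window: ~{moores_law:.0f}x")  # 2x
```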
It's very likely that AIs will continue to improve in the near term, and now is a good time to start figuring out how to use AI to support what makes you human and good at things, and which things you might want to start handing off to the AI as it gets better. We have a lot of early evidence that this is going to be a big deal for work. There are now multiple studies, across fields ranging from consulting to legal to marketing to programming, suggesting twenty to eighty percent performance improvements across a wide range of tasks for people who use AI versus those who don't. The problem with being human is that we're stuck in our own heads, and a lot of bad decisions result from us not having enough perspectives.
AI is a very good and cheap way of providing additional perspectives. You don't have to listen to its advice, but getting its advice forces you to reflect for a moment, to think and either reject or accept it, and that can give you the license to actually be really creative and help spark your own innovation. So you can ask it to create crazy suggestions for you.
What's the most complicated way to solve this problem? What's the most expensive way to solve this problem? What is the worst idea about how to do this?
How would a supervillain make this problem worse?
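One minimal way to script this kind of provocation, assuming the OpenAI Python SDK, an API key in the environment, and a placeholder model name; the problem statement is made up, and any chat-capable model or library would work the same way.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

problem = "Our team meetings keep running long."  # hypothetical problem you're stuck on

provocations = [
    "What is the most complicated way to solve this problem?",
    "What is the most expensive way to solve this problem?",
    "What is the worst idea for how to do this?",
    "How would a supervillain make this problem worse?",
]

for question in provocations:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; use whichever model you have access to
        messages=[
            {"role": "system", "content": "Give short, deliberately provocative answers."},
            {"role": "user", "content": f"{problem}\n\n{question}"},
        ],
    )
    print(question)
    print(response.choices[0].message.content, "\n")
```

Whatever tool you use, the point is the same: the answers are raw material to react to, not advice you have to follow.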
It can be very unnerving to realize that the AI is quite good at creativity. We think of it as a very human trait. But I think if we embrace the fact that AI can help us be more creative, that actually is very exciting. A lot of us feel stifled creatively, and having a partner who can work with you, and doesn't ever judge you, can often feel liberating. One of the weird things about large language models is that they don't work like we think computers should work.
LLMs are very bad at math. Computers should be good at math. Large language models are weirdly emotional and can seemingly threaten you or want to be your friend.
And so it can be very hard to know in advance what they're good or bad at. In fact, nobody actually knows the answer. We call this the "jagged frontier of AI," the idea that there's almost a spiky shape to what the AI can do and what it can't do.
So part of what you need to do is understand the shape of the frontier. You need to know when the AI is likely to lie to you and when it's not going to. "Hallucination" refers to the idea that what the AI produces could be entirely made-up, plausible-sounding, fake information.
The thing about AI, though, is that everything it does is a hallucination. There's no mind there. You might start to become more persuaded by it.
You might become blind to its biases. You might think it's more capable than it is. AI kind of works a little bit like a psychic.
It's really good at telling you what you want to hear. The fact that it's accurate so often is kind of weird actually. And hallucination rates have been dropping over time.
So what you need to do is sharpen your own intuition by working with the tool, and get a sense of when something you see should make you concerned. When you ask people about the future of AI, there's a term used among AI insiders called "p(doom)," which is your probability that we're all going to die. I do not have a p(doom) that I really think about, because I don't think we can assign a probability to things going wrong.
And again, that makes the technology the agent. We get to decide how this thing is used. And if we think about this the right way, this frees us from boredom and tedium and disaster.
But I think we need to think about the mistakes we made in regulating other technologies, and about what the advantages were: what we did differently for the internet versus for social media. There are decisions we get to make at a personal level, about how we use it; at an organizational level, about how it's deployed; and at a societal level.
And it's not an inevitability that technology just does what it does. It does what it does because society lets it do that.