How AI Got a Reality Check

Bloomberg Originals
Two years ago, OpenAI’s ChatGPT became the tech industry’s biggest product in years. Now, leading de...
Video Transcript:
It broke the internet overnight. A new artificial intelligence tool is going viral. It's called ChatGPT.
It can answer follow-up questions. It can admit its own mistakes. When ChatGPT came out, it really quickly gained traction.
You might ask it, 'Write a poem about unicorns.' And it might spit out something that sounds just like it was written by a person. That really struck a chord with a lot of people.
It also struck a chord with investors. Tech companies and their investors have sunk billions of dollars into building AI systems with a bet that these tools will keep getting more sophisticated, and one day prove wildly profitable. There's this concept in AI that if you just sort of keep feeding an AI model more and more data, more and more compute power, that it will start to just sort of teach itself and become smarter and smarter.
But progress seems to be getting a little bit trickier for some of these big companies. As expenses skyrocket, an industry known for moving fast and breaking stuff could be slowing down. I think the progress is going to get harder.
When I look at 2025, the low-hanging fruit is gone. They're trying to figure out how to make these models, which are getting increasingly expensive and eating up increasing amounts of computing power, worth that trade-off, right? If you're not getting this incredible boost in performance, what do you do?
AI really started in the 1950s, with Alan Turing being one of the really early academics. You might be familiar with the Turing test? Since then there have been a bunch of periods of AI innovation, followed by what you might call an AI winter.
But fast forward to today. In the past couple of years, this intense pickup in AI came from the breakthroughs that we saw with OpenAI's ChatGPT. Search as we know it is going to change, and the web as we know it is going to change.
I was very skeptical, like I did not expect ChatGPT to get so good. When ChatGPT came out, it looked like wow, overnight we had this huge advancement - it sort of came out of nowhere for most people. I think it really kickstarted a lot of interest in innovating in AI, and also investing in AI, and companies trying out generative AI.
ChatGPT and AI systems like it work by using massive software engines called Large Language Models, or LLMs. A Large Language Model is an AI system that's trained on lots of data, sourced from the internet typically, and it can respond to written prompts with text that sounds like it was really written by a human.
To do this, these models use an algorithm, along with billions of learned parameters, to process any given prompt, and this is how that unicorn poem comes into being. And people keep expecting these companies to keep rolling out better and better models that are more and more capable. What if I were to say that you are related to the announcement, or that you are the announcement?
Me? The announcement is about me? Well, color me intrigued.
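To make that description a little more concrete, here is a deliberately tiny, toy sketch of the core idea behind a language model: learn from example text which word tends to follow which, then generate new text one word at a time. Everything here is simplified for illustration; real LLMs use neural networks with billions of parameters rather than a word-pair table, but the generate-one-word-at-a-time loop has the same basic shape.

```python
from collections import Counter, defaultdict

# "Training" data: a tiny stand-in for the huge text corpora scraped from the web.
corpus = "the unicorn runs in the forest and the unicorn sings in the night"

# Count which word tends to follow which.
next_word_counts = defaultdict(Counter)
words = corpus.split()
for current, following in zip(words, words[1:]):
    next_word_counts[current][following] += 1

def generate(prompt_word, length=6):
    """Greedily pick the most common next word, one step at a time."""
    output = [prompt_word]
    for _ in range(length):
        candidates = next_word_counts.get(output[-1])
        if not candidates:
            break
        output.append(candidates.most_common(1)[0][0])
    return " ".join(output)

print(generate("the"))  # -> "the unicorn runs in the unicorn runs"
```

The point of the toy is the loop, not the quality: a real model replaces the word-pair table with a learned statistical engine so large that its outputs start to read like human writing.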
For a while these models have been getting better quickly. But right now at least three of the top companies, OpenAI, Anthropic, and Google, are having issues training their models to the level where they would like them to be. The easy gains, they are gone.
We will keep scaling them, but it will not be at the same rate of pure scale that we've seen in recent years. So we're working on new things too. Good curated data sets that are created by humans are increasingly scarce.
I mean, the internet at this point has been largely scraped by these companies. So if you want to make models better than what they are now, you need yet more data. That's harder and harder to find.
Some companies are going as far as paying people who have advanced degrees to help train their models, so that they can draw on this expertise to get better data. So how do you continue to teach an AI? Where do you get that data from?
Especially once this AI is getting so smart that it needs expert-level data, it needs the kind of data that a PhD student or a Nobel Prize winner would be able to give you. How do you get all of that? How do you feed all of that when it's not as simple as just scraping the web anymore?
Some projects are experimenting with what's called Synthetic Data, which includes training AI with content that is itself AI generated. Synthetic data basically means you take the output of an AI model and actually use that to start training more AI models, right? But of course that has its challenges and that's still a technique that's being tested, right?
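As a rough illustration of that loop, here is a toy, runnable sketch in which every name is invented for the example: a "teacher" stand-in produces candidate training examples, a simple quality filter throws away the weak ones, and the survivors are mixed back into the human-written data that would train the next model. Real labs do this with large neural models and far stricter filtering.

```python
import random

# A handful of human-written examples, standing in for curated training data.
human_data = [
    ("What is the capital of France?", "Paris"),
    ("What is 2 + 2?", "4"),
]

def toy_teacher(prompt: str) -> str:
    """Stand-in for an existing trained model: sometimes answers well, sometimes badly."""
    known = {"What is the capital of Japan?": "Tokyo", "What is 3 + 3?": "6"}
    return known.get(prompt, "I am not sure.") if random.random() > 0.3 else "I am not sure."

def quality_check(answer: str) -> bool:
    """Stand-in for filtering: drop empty or evasive outputs."""
    return bool(answer) and "not sure" not in answer.lower()

# Generate synthetic examples and keep only the ones that pass the filter.
synthetic_data = []
for prompt in ["What is the capital of Japan?", "What is 3 + 3?", "What is the meaning of life?"]:
    answer = toy_teacher(prompt)
    if quality_check(answer):
        synthetic_data.append((prompt, answer))

# The next model would be trained on the mix of human and synthetic examples.
training_set = human_data + synthetic_data
print(f"{len(human_data)} human examples + {len(synthetic_data)} synthetic examples")
```

The open question is whether the filter is good enough, because a sloppy one simply feeds a model's own mistakes back into it.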
And we don't know how much companies can really rely on synthetic data, or whether they will need to continue to also get human-created, higher-quality data. While we have seen meaningful new products and advances this year, tech companies are struggling to clear the high bar for improvements that would justify the tremendous amount of money they're spending. Because let's be real, AI is not cheap.
The CEO of Anthropic, which is a leading AI lab and probably the number two competitor to OpenAI in the startup world, has said it costs a hundred million to train a new AI model, and in the coming years, that could increase to a hundred billion. As we're spending more and more money, the engineering complexity of getting it right is increasing, and that's why companies need to be larger, need to have more talent. It's hard to say if anyone's making money off of AI right now.
We do know that OpenAI has quite a number of paying business customers, but we don't know how that breaks down in terms of the exact number of companies that are paying for it, or what kind of benefits these companies are actually seeing. ChatGPT is one of the most quickly growing consumer software products of all time, but when its revenue will start to match the costs is in question. But it's not all about the money, is it?
Let's keep in mind, OpenAI, for example, was established as a nonprofit research company focused on advancing AI for the benefit of humanity. Of course, it plans on shifting to a for-profit model, but the timeline on that is unclear. And still, despite the wintry forecasts, the money keeps pouring in.
They have plenty of money at the moment, but it's not totally clear if companies are going to be using it long-term, or what kind of return they're getting on their investment. These companies are having to do bigger and bigger fundraising rounds, billions and billions of dollars, and at a certain point it's unclear where that money's actually going to come from if it's not coming from customers. Investors may be doubling down because of the promise of even more mind-boggling advancements just around the corner.
OpenAI came out with a new reasoning-based model recently. And the idea there is that if you sort of give these AI models more time to sit and think about a problem, they can reason their way into giving a more accurate or more intelligent answer. Another breakthrough that may be happening is around things like agents, which are a kind of AI that can not just talk to you like a chatbot can, but actually complete tasks for you; like, let's say, book travel or actually integrate code into an application.
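The agent idea can be sketched as a loop. The following toy, runnable example is purely illustrative, with made-up tool names and a trivial rule-based policy standing in where a real agent would put an LLM: the system picks a tool, runs it, looks at the result, and repeats until the task is done.

```python
def search_flights(destination: str) -> str:   # hypothetical tool
    return f"Cheapest flight to {destination}: $420 on 12 May"

def book_flight(offer: str) -> str:            # hypothetical tool
    return f"Booked: {offer}"

TOOLS = {"search_flights": search_flights, "book_flight": book_flight}

def choose_next_step(goal: str, history: list) -> tuple:
    """Stand-in policy. In a real agent, an LLM would decide which tool to call and with what input."""
    if not history:
        return "search_flights", goal.split()[-1]   # first, find a flight
    if len(history) == 1:
        return "book_flight", history[-1]           # then book the offer we found
    return "done", None                             # nothing left to do

def run_agent(goal: str) -> list:
    """Loop: decide, act, observe, repeat."""
    history = []
    while True:
        tool_name, tool_input = choose_next_step(goal, history)
        if tool_name == "done":
            return history
        history.append(TOOLS[tool_name](tool_input))

print(run_agent("book a trip to Lisbon"))
```

The hard part, and the reason agents are still described as a breakthrough that may be happening, is making that decision step reliable when a model, not a hand-written rule, is choosing what to do next.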
Perhaps the most exciting thing for AI enthusiasts, and the most horrifying for its detractors, is the prospect of artificial general intelligence, or AGI. AGI would be a system that can reason and think like a human, applying its synthetic brain across disciplines. Not just reacting to prompts with poems or six-fingered hands, but accomplishing complex tasks independent of us humans and maybe even surpassing us.
Could we one day be working for them? No one exactly agrees on when we're going to reach AGI or what exactly it will look like. A lot of people are predicting that it could come very quickly.
You see other people saying it could take decades, it could take a century, it could even never happen. And some of these setbacks that we're seeing give people reason to maybe reassess, or to say, "Hey, this path to AGI may not be as simple as some wanted."