(light upbeat music) - Welcome to "The So What from BCG," the podcast that explores the big ideas shaping business, the economy, and society. I'm Georgie Frost. In this episode, AI agents are everywhere, but are they really the game changer for business that they're hyped up to be?
We bust some myths with Nicolas de Bellefonds, BCG's Global Leader for AI. Nico, what's your so what? - Everyone's been talking about agents these last six months, I would say, and people hesitate between "This is just more buzzwords that will not provide any value" and "This is going to, again, revolutionize the world." What we're gonna do is look at a series of myths and realities around the topic of AI agents and what true value companies can get from them. - With the rise of generative AI, businesses are racing to adopt AI agents. But to unlock real value, businesses must rethink their processes, manage risk, and set realistic expectations about what these systems can and can't do.
It's time to stop falling for myths that could lead to wasted investment and missed opportunities and start making the most of this transformative technology. Nico, welcome. There's been, as you said, lots of buzz around AI agents, particularly over the last six months.
If you would, just tell us: what are they? How long have they been around? - So first, let's start with this: AI agents are not new.
Just as an example, we were building AI agents 18 months ago and published an open source toolkit to build agents 12 months ago already. Now what are they? They are machines that can reason, break down a problem into sub-problems, access systems and tools to solve these problems, and get a feedback loop on their action and remember their actions in order to further improve.
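The loop Nico describes, reason, break a goal into sub-problems, act through tools, observe the result, and remember it, can be sketched in a few lines. This is a minimal illustration, not any real framework; `llm`, `tools`, and the plan format are all invented for the example.

```python
# A minimal sketch of the agent loop described above. All names are
# illustrative: `llm` stands in for any model that returns a plan,
# `tools` for the systems the agent can access.

def run_agent(goal, tools, llm, memory, max_steps=10):
    """Iteratively reason, act via a tool, and record feedback."""
    for _ in range(max_steps):
        # 1. Reason: ask the model for the next sub-problem and tool call,
        #    given the goal and everything remembered so far.
        plan = llm(goal=goal, memory=memory)
        if plan["done"]:
            return plan["answer"]
        # 2. Act: invoke the chosen tool (a system the agent can access).
        result = tools[plan["tool"]](**plan["args"])
        # 3. Feedback loop: remember the action and its outcome so the
        #    next reasoning step can improve on it.
        memory.append({"action": plan, "observation": result})
    return None  # gave up within the step budget
```

The key property is the last step: unlike a one-shot LLM call, each action's outcome feeds back into the next round of reasoning.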
- Can you give me some examples? - You could have a customer relationship, CRM, agent that would listen to a customer's request, automatically figure out from the different systems the profile of that customer, the past history of, for example, their transactions, and how to solve the customer's problem. And then, let's imagine the customer has lost their credit card: automatically request and ship a new credit card to that customer, log the result of the action into the CRM system, and close the conversation.
A series of activities that would take a regular human multiple minutes, would require accessing probably five to 10 different subsystems, and could be prone to errors is now done by a single machine in one go. - Some businesses seem to be calling every automated tool an AI agent. Is that accurate? Is that what they are?
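The lost-card example above can be written out as the sequence of tool calls an agent would orchestrate. Every system and function name here is invented for illustration; real CRM, billing, and shipping APIs will differ.

```python
# A hypothetical sketch of the lost-card workflow: one routine touching
# several subsystems that a human would otherwise work through by hand.
# The `crm`, `billing`, and `shipping` interfaces are illustrative only.

def handle_lost_card(customer_id, crm, billing, shipping):
    profile = crm["get_profile"](customer_id)           # look up the customer
    history = billing["get_transactions"](customer_id)  # pull past activity
    billing["block_card"](profile["card_id"])           # stop the lost card
    new_card = billing["issue_card"](customer_id)       # request a replacement
    shipping["send"](new_card, profile["address"])      # ship it out
    crm["log"](customer_id, "card replaced", history)   # record the outcome
    crm["close_conversation"](customer_id)              # close the ticket
    return new_card
```

The point of the agent is that it decides and sequences these calls itself from the customer's request, rather than a human clicking through five to ten screens.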
- No, of course not. And if we go back to the definition, you know, agents are machines that reason, access multiple systems, and then learn from memory. This means that this notion only applies when these characteristics are combined.
They don't apply to retrieving knowledge from a database. They don't really apply to synthesizing data. They don't apply to predicting, for example, your price or your stock.
Now, an agent can access other systems that do these things. So you could imagine that I have a supply chain agent that accesses other AI systems that predict my stock or that synthesize the latest information about my financial forecasting in a nice report. - I guess another myth, or maybe this is wishful thinking, is that companies are believing that AI agents will solve all of their problems, they're investing heavily in this space.
Is that just wishful thinking, though? - Of course, unfortunately, it is. I would say some things are simpler with agents in the sense that, you know, they can solve more complex problems that were difficult for, let's say, LLMs to resolve alone, and especially, they can help automate multi-step processes.
But there are some difficulties that remain that I would say are even amplified. One of them is the strength and quality of underlying data and technology. Agents will access multiple sources of data and multiple systems to perform tasks.
But if the data in these systems is incorrect, if the systems are not functioning properly, then the agents by definition will not perform the right task. Also, agents will usually follow a process. Now, if we use agents to automate broken processes, they're still broken.
And finally, we tend to talk about the 10/20/70 rule at BCG: of the total effort, 10% is on the algorithms, 20% on data and technology, and 70% on change management: adoption, upskilling, process redesign. Well, with agents that is even more true, because agents will even more profoundly change the way people work: they automate or augment not tasks, but end-to-end processes and workflows.
They impact people even more than task-specific generative AI applications or predictive AI models. - Nico, what have you seen, you know, from your experience that happens when companies deploy AI agents without really first rethinking their processes? - Agents will unlock value if you transform the gains that you could have in productivity or efficiency or effectiveness into real organizational change, real, you know, cost savings in certain cases.
And so that only happens if you change the way work is organized. Let me just give you one very simple example. We built a simple agent for a consumer goods company to help them automate the way they do media reporting and optimization.
The starting point is that you had roughly half a dozen people involved in every single marketing campaign to retrieve data, synthesize it, take the right recommendations out of it, transmit it to the right folks internally and externally to then take action, and optimize the campaign for the next week. Today, that process is only one person, taking 10% of the time it used to take and doing everything on their own without any handoffs. The data is retrieved automatically, the synthesis is produced automatically, the recommendations are produced automatically; the person just has to validate them, and then it's sent automatically to all of the systems in order to optimize the campaigns.
But that fundamental redesign of the process, and of the organization around that process, is what allows them to really get rid of the unnecessary tasks and redirect the time saved toward other types of activities. - So instead of seeing AI agents as a silver bullet then, what should leaders really be focusing on? You mentioned the 10/20/70 rule there.
Where should the focus lie when we think about AI agents? - You start from what we call the reshape opportunities. What are the functions that I can fundamentally transform in order to gain in efficiency, in speed, in effectiveness?
Or what are the new services, the new customer experiences that I want to build, what we call the invent type of opportunities? And these opportunities will then be unlocked by a combination of predictive AI to optimize decision-making, generative AI, and LLMs to help with new content creation, let's say knowledge extraction and synthesis, and conversations, and by agentic AI and agents to simplify end-to-end processes and workflows. And yes, working on the total system engineering.
So there is, you know, quite a bit of engineering involved, and I mean software engineering, in order to construct the right agent; it's not plug and play. - I wanna talk a little bit more about that, because it's not just a question of building these AI agents; you also need to take a really good look at and improve your data infrastructure.
So where do you start doing that? - If I take another example: I've been working quite a bit in R&D in the consumer goods space. We've been working for one of the largest consumer goods companies on accelerating their development lifecycle.
One of the topics was that they had bad quality in all of their formulation data, and they had been working for years on making that formulation data better, with some progress but not fast enough. But then, once we started building for them and with them an agent that accelerates their formulation lifecycle, it created the momentum to dramatically accelerate the improvement in the structure and quality of their performance data. It also helped focus on the type of data that was necessary to achieve the target outcome, as opposed to trying to work on every kind of data without a clear view of what will be useful, which is of course an endless process.
- Nico, what are the biggest challenges that companies are facing when they integrate AI agents with their enterprise systems? What have you seen? - One of them is cost.
You know, in the short term, agents are going to increase the tech costs of companies, and you could even imagine a world where tech costs grow much faster than any other line in the P&L, up to the point that they become the biggest part of the P&L of a company. So thinking proactively about how to manage that cost down, building flexible architecture so you can switch to the least costly models, and having a lot of strategic autonomy over some of these costs is quite important. So anticipate the cost impact; otherwise, you know, you might still have the same cost structure and the same organization while also adding the tech cost of agents.
- I'm wondering, Nico, if you're seeing that among leaders the approach is perhaps still a bit too conservative? Is the thought that the focus is much more on cost management than it is on value creation or creating competitive advantage with these tools? - The cost saving part is easier to see directly.
I think, you know, creating new services, new experiences requires a stronger leap in imagination. But some examples are out there, you know, if you look, for example, L'Oréal released its Beauty Genius to the public at the end of last year. This is a virtual beauty agent that serves any beauty needs of consumers from inspiration to product recommendations, beauty routines, you know, product usage, etc.
And this, in a way, is for them a new way to engage with consumers, and it's a new go-to-market model. But I would say there is also a second reality when you talk about conservatism. A lot of leaders are thinking, "Is this really working?
And, you know, is it not just something that everyone is talking about, and especially tech companies, essentially to sell licenses?" So they'd rather wait a little bit. And I think what we are observing is that leaders tend to stay in the lead. Yes, you can wait for as long as you want, and the technology becomes more mature and simpler and more effective.
But what does not change is the difficulty of your data and tech foundations. And what does not change is the difficulty of transforming the organization and upskilling people. And these two things are incompressible in terms of the time they take.
And so, the more you wait, the longer it will take. This is a technology that continues to evolve and continues to mature and by the way will not be stable or fully mature in the short term. And there is real value in starting fast.
- Is it just a stepping stone, a leap onto something next? And if so, what is that next step? - You are right that this is just one step in an evolution.
Some of the next steps could be much more autonomous AI agents. The more they evolve, the more they have the ability to reason and break down problems into sub-problems, the more they have access to systems and tools, the more they can be open-ended. So that's one path.
Another path, completely different, is that one of the next waves is going to be embodied AI, meaning AI, and in particular agentic AI, embedded into robots, because that would unlock a whole new set of opportunities. We would go outside of pure knowledge work and into the physical world, which of course is where the vast majority of jobs are today. - So, if the real shift is coming when AI agents move from automation to self-directed intelligence, how close are we to that?
- Some of the things that we predicted two years ago would be 10 years out are actually live right now. For example, realistic voice bots: essentially machines that can speak like you and me and could be having this conversation right now. Or realistic video creation, or being able to solve PhD-level problems. I mean, a couple of years ago we believed these things were at best five years, if not 10 years, out.
They are live today. So when precisely, I don't know, but for sure much faster than we think. - There's a big assumption that the risks around AI agents are very similar to LLM risks.
Is that the case? And if so, how are they different? How should you be approaching risk management when it comes to AI agents?
- If you think about LLMs, LLMs were harder to manage from a risk standpoint than predictive AI. Predictive AI is a bit quote, unquote "easy" because you have a set of numbers, you make a prediction, you can check the validity of that prediction. So you understand whether you have something robust or not.
LLMs are more of a black box: you have an input, you have an output, and you're not always sure what happens in between. And, you know, if you ask the same question 1,000 times, you're not necessarily going to get the same answer 1,000 times. Agents involve multiple steps, each of them accessing different sets of systems, sometimes different LLM models.
And so they compound the level of uncertainty between input and output. And so measuring the quality of the answers, doing some quality assurance, doing some testing is exponentially harder than anything we had before. And so, if you imagine a world where you have hundreds of agents running around in your company, this can be somewhat scary if you don't have the right set of controls in place.
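The compounding Nico describes can be put in rough numbers: if each step of a multi-step agent is independently right with some probability, end-to-end reliability decays geometrically with the number of steps. The 95% per-step figure below is purely illustrative, not a BCG statistic.

```python
# Back-of-envelope model of compounding uncertainty in a multi-step agent.
# Assumes (illustratively) that steps fail independently with the same
# per-step success probability p_step.

def end_to_end_reliability(p_step, n_steps):
    """Probability that every one of n_steps succeeds."""
    return p_step ** n_steps

# A single 95%-reliable step is fine; chain 30 of them and the whole
# workflow succeeds only about a fifth of the time.
print(round(end_to_end_reliability(0.95, 1), 2))   # prints 0.95
print(round(end_to_end_reliability(0.95, 10), 2))  # prints 0.6
print(round(end_to_end_reliability(0.95, 30), 2))  # prints 0.21
```

This is why quality assurance and testing get exponentially harder: each extra step multiplies in another source of error.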
- Okay, Nico, we've covered the so what, we've busted some myths, and now we're gonna talk about the now what: the immediate steps that you think business leaders should be taking right now. - So I think the first thing is to think about the priorities where you should focus when it comes to AI.
And these priorities would have a few simple characteristics. One, they need to align with the strategic agenda of the company. AI should be an accelerant to your strategic agenda.
And two, they should be focused on the biggest internal transformation opportunities or the biggest customer-facing business opportunities, new services, new customer experiences. So that's the first thing. The second thing is you need to, in these opportunities, look at the totality of, let's say, AI technologies in order to solve for that opportunity and get to that outcome.
So agents are just one component of the solution. But three, you need to start building an experimentation muscle around agents in these opportunity spaces because agents are really an impact multiplier. - Nico, thank you so much, and to you for listening.
We'd love to know your thoughts. To get in contact, leave us a message at thesowhat@bcg.com.
And if you like this podcast, why not hit subscribe and leave a rating wherever you found us? It helps other people find us too. And if you'd like to dive deeper into AI agents, check out the link in our show notes.