Two years ago, I did a video on cybersecurity trends for 2023. Then last year I did another one for 2024. Well, let's dust off the crystal ball and take a look and see what I'm seeing for 2025 and beyond.
But before we do that, let's take a quick look at what I predicted last year and see if it came true or not. That way you can decide whether you want to believe this YouTube prophet or not. So how did I do on last year's predictions?
The first one was about the adoption of pass keys over passwords, moving from passwords to doing this more sophisticated, security conscious pass key technology from FIDO. Past Keys. In fact, we found there was a password management company that particularly pointed out that they saw 4.
2 million pass keys saved in their software over the course of the last year. That's a big improvement. A big uptick.
They found that it was 1 in 3 users are now storing pass keys and hopefully using them as well. And that, in fact, they saw twice as many companies, in other words, websites that were accepting pass keys as an option. So I would say that's a big improvement.
That one definitely came true and I expect to see that one continue even more as we go forward. Now, my next prediction had to do with AI. phishing.
In other words, using generative AI in order to generate phishing emails. We've in fact seen this occur as well. There was an email security company that said they are now seeing these perfectly crafted and legitimate sounding phishing emails that look better than anything we've seen before.
In fact, these things are highly personalized. In fact, we could use information that's available on the Web in order to make them even more personalized and more targeted and therefore more believable. And that whole business of looking for grammar errors and spelling errors and phishing emails, that's slowly going away because generative AI.
doesn't make that mistake. So we're in fact seeing and have already seen that AI is improving phishing attacks. Now we need to do something about the defense as well.
Okay. Deep fakes. What's happening in that case?
It turns out almost two months after I recorded last year's video, there was a an attack where a deep fake was able to. Emulate and impersonate the CFO, the chief financial officer of a company, and convince an employee to wire $25 million out of that company into the attacker's account. All using a deepfake in a video call.
So the employee thought for sure they were talking to the CFO and therefore following those instructions. In fact, it was a deep fake. It was an AI generated impersonation of the actual person and they lost $25 million in that particular case.
We also saw another example in the US, the presidential election run up in the early part of 2024. Again, just a few months after I made this prediction about deep fakes in the New Hampshire primary. For the Democratic primary, there was a deepfake robo call of Joe Biden's voice calling people and telling them they didn't need to vote in the primary.
They could just save their vote for the general election. So these things have in fact occurred and they started occurring almost instantly after I referred to them as a prediction. How about hallucinations.
So Generative AI continues to have some issues with the truth. Sometimes it's not well grounded in the truth. Sometimes it does amazing stuff.
But just to give you an example, I did one really recently. A friend of mine who is a runner was quoting to me what her time was on a run that she did recently, and she's not from the US, so she quoted me her time as 5. 45 per kilometer.
And I thought, well, I don't think in kilometers, so I need to convert that into a per mile pace. And so I went to a chat bot, a very popular chat bot, and asked it what does that convert to? 5.
45km. What is that pace? And Miles, if someone was running it and you know what it said.
Said it was a 3. 43 which congratulations to her. She would have literally broken the world record by more than 10s if that had been the case.
It wasn't true. I went to the chat bot and said, That's not right. That's literally all I said.
And it said, yeah, well let me correct my numbers. Actually, that would have come out to a 9. 15 pace per mile.
Well, that's a big difference. That's not a world record. That's respectable.
Not a world record. So all I did was just prompt it and say. Tell me again.
Try again. And then all of a sudden, it got it right. So we're still having hallucination problems.
It's getting better, but it's not solved yet. And then the last prediction I made had to do with the use of cybersecurity needing to secure AI. In other words, companies are going to be deploying AI, and they're going to be wondering, how can I use cybersecurity technologies to make sure those deployments can't be attacked, that they're robust?
That has, in fact, turned out to be the number one question I get when I'm out meeting with clients. This is the reason, for the most part, they're bringing me in to have conversations. I talk about a lot of other things, but this is the number one concern for all the clients I've seen virtually in the last year.
How am I going to secure my AI deployment? Now, I think there's also this other part where I which I made a prediction about, and we're seeing this happen also, and that is how can we use AI to do a better job of cybersecurity? Well, one of the thing is we could use this to create essentially an online Q&A type chat bot.
So in other words, if we had a chat bot that didn't hallucinate, that was grounded in the facts and we could do that with something like retrieval, augmented generation, RAG technology and things like that, it could do a better job of answering questions for cybersecurity analysts just go in and ask questions in natural language and get responses back. We're starting to see that technology make its way to the market. And another one is cases being able to look at incidents and things like that and be able to track them.
Look at all the indicators of compromise and give a summarize version of a particular case. Because one of the things that generative AI is good at is generating summaries as well. And those summaries can be helpful when you need to pass off an incident or a case to someone else who now is going to pick up the ball and run with it.
So overall. I think we did pretty good. Okay.
Enough of living in the past. Old man. Let's get rid of those.
And now we're going to take a look at 2025 and beyond. I don't know exactly what year all of these things will happen. So we're just looking toward the future in general.
And even though they say history repeats itself, actually, Mark Twain said it doesn't repeat itself, but it often rhymes. So we're going to see some of the same trends that we saw before that will continue maybe in a little bit different form. And not surprisingly, AI is going to be a big part of everything that happens in technology, and cybersecurity is no different in that regard.
We're going to see it give us some pluses and minuses, some pros and cons, some things where it'll help us and some things where it might help us. I'm going to start with some of the things where it necessarily will not be helping us. And that's, first of all, a prediction about shadow AI.
That is, this stuff is so good and everyone is going to want to do it and everyone is going to do it. And not all of those AI deployments will in fact be authorized, will be the ones that are approved by the organization. So we could have, for instance, in some places that somebody goes into a cloud instance, pulls down a model and stuff starts running away.
And that shadow AI could present a problem for the organization. Other examples on mobile phones. So people are using AI is being built into mobile phone operating systems and we're going to see more and more of that.
If that's not handled well, it could be a source of data leakage. It could be a source of misinformation. So that's this kind of sort of unapproved shadow AI.
Is going to represent a particular problem for us, and I expect to see that grow as we go into the future. What else? Deepfakes.
I mentioned that one before and that one's not going away. In fact, Deepfake technology is only going to be getting better and there are going to be implications to business. I gave an example of that where an organization was basically swindled out of or convinced to send $25 million.
There was another case a few years ago where $35 million was sent as a result of a deepfake call, just an audio call, and someone followed those instructions. So it's going to effect business. It's going to affect governments as someone puts out a deep fake of a head of state or something like that.
Then if we don't have reliable sources for that, people are going to see those messages and some portion of the people will believe it because some portion of belief will of the people will believe anything. So how are we going to make sure that what we're seeing are, in fact, the real leaders and not deepfakes? And think about law.
The legal aspect of this, we look at evidence and we take those into court, a video of someone committing a crime. What if it was a deepfake and it wasn't really that person committing the crime? Or by the same token, what if it was an actual video?
And the defense just argues that it's a deepfake and now that creates some sort of reasonable doubt. So there are going to be implications to all of this, and we have not figured out yet. Our legal system, government systems and so forth, have not figured out what all the implications of those will be.
The bad guys will continue to use these in ways that will represent a threat to us. Another one is exploits. And writing malware.
In fact, we know that generative AI is able to write code. Well, why wouldn't it be able to write malware? In fact, it can.
In fact, there was one study that was done that found that one of the very popular generative AI chat bots was able to, when given an adequate description of a zero day vulnerability, it was able to generate exploit code 80% of the time, 87% of the time. That's really good. That means a bad guy doesn't even have to know how to write code.
They just need to know how to take the information about the description of the problem, put it into the right chat bot, and now they get their exploit and they can launch that. Well, that's a theoretical threat. Has it actually happened?
In fact, it has already. We've started to see this already. One major online retailer that you're all familiar with has already reported they've seen a seven fold increase in the last six months in their attacks.
The number of attacks coming in and they attribute some large percentage of that. They believe that that's not a coincidence, that it's gone seven X in the last six months. They believe generative AI has a is a big part of that, that attackers are starting to use this technology more and more.
That trend, I expect, will continue. How about the attack surface? Well, every time we add a new piece of componentry into a system, it extends the attack surface.
It's one more thing that a bad guy can use to break into. So the attack surface now includes AI So the shadow AI that's out there. Any of these other technologies could potentially be things that someone will exploit.
So here I was talking more about breaking into the existing IT infrastructure using generative AI In this case, I'm saying the use of the AI itself will become something that gets attacked. And if someone is able to poison that, it's going to mess up the operations of the business. They may be able to pull data out of it.
And we have a data loss of some sort. Another one, that's a big concern. In fact, I've talked about this one before and done a couple of videos on this topic.
Prompt injection. Generative AI is subject to some of the same failings that humans have. That is, it believes a lot of things and it can be naive and it can be socially engineered in essence.
And that's what a prompt engineering prompt injection attack does. We tell it to do things that the originators of the technology did not intend it to do, and that way the bad guys will continue to figure out ways to get this thing beyond and break out of its guardrails. To do anything now, as it's been referred to.
And we are going to need to be able to have better and better defenses against these kinds of attacks because those in fact, the OWASP organization Open Worldwide Application Security Project says this is the number one attack type against large language models, which are what the generative AIs are based on. So we haven't seen a solution to that. I'm sure we'll see more of it.
What else? Well, those are a lot of negatives that we have to face. How about at least one positive in here?
And I mentioned a little bit of this before, and that is using AI to improve cybersecurity. How can we do a better job of cyber now that we have this AI tool? It's not just an attack surface.
It's not just a negative, but let's leverage that tool as well. Well, what we've seen a lot so far is using generative AI in a more passive role where it's doing analysis and things like that. But what if it gets involved a little bit more in response?
Now, maybe it doesn't automate the response. In fact, I'd be very cautious about doing that because we still have elucidation problems and I don't want it hallucinating. What the answer should be to a particular break in, but giving advice and saying here within the understanding of this AI, we believe this is the most likely response you should take.
Then here's our our confidence in that and here's the next most likely thing that you should do and the next priority thing that you should do. So giving us that kind of expert advice, or at least for an expert to look at, here's a bunch of suggestions, and now I can decide which ones I want to do and which ones I want to discard. So that's a potential positive use of generative AI in doing cybersecurity.
And then one thing that's not related to I just saw that, you know, not everything is about AI, and that is quantum computers, quantum safe cryptography. Quantum computers are going to do some amazing stuff. But one of the things they're going to do, we wish it didn't, is it's going to be able to break our cryptography at one point.
We don't know when this will be. Maybe five years, maybe ten years could be tomorrow. Someone is going to be able to read all the encrypted messages that we have put out by using a quantum computer to break that.
Now, the quantum computers will do some wonderful things. That's not one of them. But we're going to need to start moving and we need to have already started moving toward these new quantum safe or post quantum crypto algorithms, the ones that will not be susceptible to attack and vulnerable to quantum attacks.
A lot of people are still sitting in the starting blocks and have not begun this activity and they need to because of a thing called harvest now decrypt later I make a copy of your data right now, and then I wait for a quantum computer to get strong enough and then I can read what your stuff was. And that could be a problem, especially if we're talking nation states where some of the information will be classified for generations to come. So we really need to start working on projects to convert to this new quantum safe cryptography.
And I expect we will see more organizations realizing that and starting to do that. Okay. So those are a few of my predictions for 2025 and beyond.
And by the way, I've got videos on the IBM Technology Channel on virtually every one of these topics, including this one. So go check out those on the IBM Technology Channel. If you want to see a deeper dive into each one of these subjects.
But enough about my predictions. How about your predictions? What does your crystal ball show you?
Go ahead and put in the comments section, what your predictions are so that we can all benefit from your wisdom and.