Cybersecurity Architecture: Five Principles to Follow (and One to Avoid)

IBM Technology
Video Transcript:
With the rise of cyberattacks and data breaches, it's never been more important to make sure that your organization is protected against hackers. This series is about cybersecurity architecture, and we're going to talk about two different areas. Fundamentals, where we're going to go through and discover what some of the principles of cybersecurity are that need to be applied to everything that you do.
And then the second part is on various cybersecurity domains. Here we're going to explore how to identify vulnerabilities, implement best practices, and defend against a wide range of cyber threats through an all-encompassing cybersecurity architecture. By the way, I'm an adjunct professor at NC State University.
And there I teach a 400-level course on enterprise security architecture. This video series is based upon that course. The bad news is you won't get college credit for watching these videos.
The good news is no homework and no exams. So yay. All right, let's get started with cybersecurity fundamentals.
All right, we want to start with five security principles that you absolutely should do and one that you never should do. So stay tuned to the end to find out what that one is. The first one we're going to talk about is this notion of "defense in depth".
Defense in depth is trying to create an obstacle course, a difficulty for the bad guy. So if we take a look at an old security model, the castle. Well, the castle was designed with thick, tall walls to keep the good guys on the inside, the bad guys on the outside.
And it worked pretty well until you realized the good guys sometimes needed to come out. And therefore, we needed to put a door on this thing. Well, the door then became a vulnerability.
And so we would try to reinforce it. And then maybe put a moat around the whole thing, because that made it even harder. And then we replaced the door with a drawbridge.
So now we've got a moat, which is harder to cross. We've got the thick, tall walls. And maybe we even added an angry dog on this side.
Together, these give you a system of security mechanisms, because defense in depth is all about not relying on any single security mechanism to keep the system safe. Now, let's transition to a modern security example. Here, we've got a user on a workstation who's going to go across a network to get to a web server, which is going to hit an app server, which is ultimately going to hit a database.
Now, what would we do for defense in depth in this example? Well, one thing I might do here is add multifactor authentication (MFA). That is a system where I make sure this user is who they claim to be by asking them for something they have, something they are, something they know--some combination of those kinds of things.
Now, how about over here? If it's a mobile device or an endpoint of some sort, I might add mobile device management (MDM)--endpoint management software that makes sure the security policy we have set for the organization is in fact followed on this device: it's got the right patches, it's got a password of sufficient length, and things like that. We might also add something like an EDR--an endpoint detection and response capability, which is sort of a next-generation antivirus--to make sure that this platform is secure.
Then from a network standpoint, I'm going to add in firewalls to keep the web server secure from the outside and also allow only the traffic I choose to get back to these more sensitive zones. And then for the app server and the web server, I might do some vulnerability testing, so that I make sure those systems are not vulnerable to attack.
And then ultimately, I'm going to take the data back here and I'm going to encrypt it. Lock it up, put access controls around it. So you can see what I've done here, is there's no single security mechanism that is protecting this thing.
If any one of these fails, the rest of the system still works. And that's the idea that we're after here. So if you think about it this way: we've got no single point of failure.
We're trying to avoid single points of failure. And we want a system that ultimately, if it fails, it fails safe. That's what we're trying to get. And that's what the old model and the new model of security were designed to do.
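To make that layering concrete, here's a minimal sketch in Python--all the names and checks are hypothetical, not a real product API--showing how a request has to clear several independent controls, so no single mechanism is a single point of failure:

```python
# Minimal defense-in-depth sketch: a request must clear several
# independent controls. All function names here are hypothetical.

def verify_mfa(user) -> bool:
    """Something you know plus something you have (stubbed checks)."""
    return user.get("password_ok", False) and user.get("totp_ok", False)

def endpoint_compliant(device) -> bool:
    """MDM/EDR posture check: patched, agent running, and so on."""
    return device.get("patched", False) and device.get("edr_running", False)

def firewall_allows(src_zone: str, dst_zone: str) -> bool:
    """Only web -> app and app -> db traffic is permitted."""
    allowed = {("web", "app"), ("app", "db")}
    return (src_zone, dst_zone) in allowed

def handle_request(user, device, src_zone, dst_zone) -> str:
    # Each layer can independently reject; a failure "fails safe".
    if not verify_mfa(user):
        return "denied: MFA failed"
    if not endpoint_compliant(device):
        return "denied: endpoint out of policy"
    if not firewall_allows(src_zone, dst_zone):
        return "denied: network path not allowed"
    return "granted (data still encrypted at rest)"

print(handle_request(
    {"password_ok": True, "totp_ok": True},
    {"patched": True, "edr_running": True},
    "web", "app",
))
```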
The second principle we're going to review is the "principle of least privilege". The principle of least privilege basically says I'm only going to give access rights to people who are authorized, who need them in order to do their job and can justify it, and only for as long as they need that access.
For instance, in this example, I've got three users. This guy does not really have a business need, so we don't give it to him. The other guys get the access right.
They can prove their need. And the other thing is, even for these guys, the clock is ticking. I'm not going to give them this access right in perpetuity, forever.
We're going to constantly be going back and making sure that they still need that capability. If they don't, we're going to remove it from them as well. Now, another notion in the principle of least privilege is hardening a system.
Let's say we've got a web server, and the web server out of the box--default configuration--runs HTTP, of course, because we need that in order to do web traffic. But let's say it also turns on an FTP service and an SSH service so that I can log in remotely. Well, there are some things I might look at and say, "Okay, do I really need this FTP service?" If it turns out I'm not going to use it, I should remove that service entirely. Same with SSH: if I'm not planning to use it, remove it entirely. Because every single one of these services is potentially expanding our attack surface and making us more vulnerable.
Another example of hardening is to remove all of the unnecessary IDs that are on the system and change the names of the IDs that we do keep from their default. So, for instance, if the administrator ID on this system--out of the box as it's configured--is admin, let's change that. Let's make it something more specific.
And I'll name it after me, or give it some other name. Change all the default passwords. We don't want this thing in just a vanilla configuration because the bad guys will know what that is and they'll know how to break in.
Now another example is this idea of privilege creep. Let me illustrate that. Let's say there are two people who work for the company, and these are the access rights each of them has.
So this guy is able to do these things, and this guy can do the same things because they perform essentially the same role. Now, this guy gets a promotion--a new job and new responsibilities.
Well, he goes to the administrator and says, "Okay, now I'm doing my new job. I need you to add to my capabilities, and these are the things I need."
The administrator gives him those and then also says, "You know what? Just in case, I think you're probably going to need this. Let me give you that as well. That way you won't have to come back and ask again." Or, come back and bother me, is what he really means. The problem with this is, just-in-case is just the opposite of the principle of least privilege.
In fact, what we should be doing is running an annual recertification campaign. At least annual. Some organizations do it more frequently than that.
And in re-cert, I go back and look at all of my users and all of their access rights and make sure they still have a justified need. So this person is still doing the same job, still needs all of that. Great, they keep it.
This guy, though, no longer needs this capability because his new job doesn't require it. So we're taking it away. And this thing that he got just in case, we're taking that away, too.
So what we're trying to do with the principle of least privilege is give only the access rights you need, for as long as you need them. We're going to harden systems, eliminate privilege creep, and eliminate the just-in-case principle.
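Here's a minimal sketch of that re-cert sweep, assuming a made-up grant record format: walk every access right, and keep only the ones that are still justified and still within their time window.

```python
# Re-certification sketch: revoke grants that are expired or can no
# longer be justified. The grant records here are hypothetical.
from datetime import date

grants = [
    {"user": "alice", "right": "db_read",   "expires": date(2026, 1, 1), "justified": True},
    {"user": "bob",   "right": "db_admin",  "expires": date(2024, 1, 1), "justified": True},   # expired
    {"user": "bob",   "right": "fund_xfer", "expires": date(2026, 1, 1), "justified": False},  # "just in case"
]

def recertify(grants, today=None):
    today = today or date.today()
    kept, revoked = [], []
    for g in grants:
        # Least privilege: a grant survives only if it's still needed
        # AND still within its time window.
        (kept if g["justified"] and g["expires"] > today else revoked).append(g)
    return kept, revoked

kept, revoked = recertify(grants)
for g in revoked:
    print(f"revoking {g['right']} from {g['user']}")
```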
Our third principle to keep in mind with cybersecurity is this notion of separation of duties. That is, we won't have any single point of control. In fact, what we're trying to do is force collusion by two bad actors--or more--in order to compromise the system.
But no single person can create the compromise. So, an example, in the physical world, would be if I had two people here and I've got a door with two locks on it. And this guy has a key to this lock and this guy has a key to this lock.
Now, if he uses his key on his lock, he still can't open the door. He can't open the door alone. But the two of them together, cooperating, can in fact open the door.
So there's no single point of control. Therefore, we have a separation of duties. Now, taking a look at another example here, let's say in an IT case, here's a requester.
And this user wants access to this database. So he's going to ask for that. He's going to send in his request.
But then there's an approver who's going to have to take action on it and say yes or no based upon whether we think they should have it or not. Then, if the request is approved, they're given whatever it is they asked for--the funds transfer, the access to the database, the package delivered, whatever it happens to be.
But notice the point here. This person, the requester, is not the same as the approver. They cannot be the same person.
Because if it was, if I could request and approve my own request, then there is no separation of duties. So again, what we're trying to do with this is create a necessary case for collusion, which is hard to do because it's hard for lots of people to work together and keep a good secret. And what we're trying to avoid is this single point of control.
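Here's a minimal sketch of that rule in code--the workflow is invented for illustration--where the approval step simply refuses to proceed if the requester and the approver are the same identity:

```python
# Separation-of-duties sketch: an access request must be approved by
# someone other than the requester. All names are hypothetical.

class SeparationOfDutiesError(Exception):
    pass

def approve_request(requester: str, approver: str, resource: str) -> str:
    if requester == approver:
        # Single point of control: requester approving their own request.
        raise SeparationOfDutiesError("requester cannot approve their own request")
    return f"{approver} approved {requester}'s access to {resource}"

print(approve_request("alice", "bob", "customer_db"))   # OK: two parties involved
# approve_request("alice", "alice", "customer_db")      # raises: no separation
```

The same check generalizes to funds transfers or package deliveries: the enforcement point just compares the two identities before any action is taken.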
The fourth security principle that we're going to talk about is secure by design. In other words, security shouldn't be an afterthought that we bolt on. Think of it this way: if you were designing a building in an earthquake zone, you want to make this building able to withstand the shaking.
So, you don't go build the building, and then after it's all done, go back and say, "Now let's make it earthquake-proof". No, you want to do that from start-to-finish, all the way from design through completion. So let's take a look at an IT example.
So when we have a project, we tend to start off with a requirements stage. Then we go into design, we code the thing, we install whatever it is we've written, we test it out, and then we promote it to production. And then, in theory, we feed that loop back and continue the continuous development process that way.
Well, what we don't want to do--but what too many people do in these cases--is wait until about this last phase to do security, once it's already out there. Security can't just be a bolt-on that you do at the end. In fact, it needs to be something that we're doing throughout, pervasively.
We look at the security aspects of the requirements. We build security into the design. We are thinking about secure coding principles all along the path.
When we install, we do it on a secure system. We're testing and guarding that test data. And then in production, obviously, we keep testing.
So security is something we do throughout, but it doesn't begin here. It begins in these phases. That's what we're really looking for here.
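One way to picture "security throughout" is as a gate on every phase rather than a single check at the end. Here's a toy sketch--the phases and gates are illustrative, not a real pipeline tool:

```python
# Secure-by-design sketch: every SDLC phase carries its own security
# activity; nothing ships if any gate is missing. Purely illustrative.

PIPELINE = [
    ("requirements", "security requirements reviewed"),
    ("design",       "threat model completed"),
    ("code",         "secure-coding checks / static analysis run"),
    ("install",      "deployed on a hardened host"),
    ("test",         "security tests pass, test data protected"),
    ("production",   "continuous monitoring enabled"),
]

def release_allowed(completed_gates: set) -> bool:
    # Security is pervasive: a single skipped gate blocks promotion.
    return all(gate in completed_gates for _, gate in PIPELINE)

done = {gate for _, gate in PIPELINE} - {"threat model completed"}
print(release_allowed(done))  # False: security was skipped at design time
```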
Now, if you think about another example, let's say: Whose job is security? Well, it's really everyone here. We have a designer, an administrator and a user.
So really, all of them are responsible for security in one way or another. But who does the job begin with? Well, it begins with this guy right here.
We need to make sure that he is designing security in. In other words, what we're trying to do is make security start-to-finish. And "secure by design" means it's secure out of the box.
That's the way we'd like it to be. Now, sometimes we're going to have to do some configuration changes to make it more secure. But this is the goal that we're trying to shoot for--secure by design, secure out of the box.
Our fifth security principle is the "K.I.S.S. principle". It stands for "Keep It Simple, Stupid".
In other words, we don't want to make it harder than necessary because that will make it easier for the bad guys and harder for the good guys. To give you an example: we're trying to create some level of complexity so that it's not easy for the bad guy to get in. But a lot of times the security department will create this complex maze of things that the good guys essentially have to go through.
And what happens in that case is, I start in here, okay, I log in. Now I have to traverse the maze, and eventually I'm like, "Oh, I'm at a dead end". Okay, maybe let's try this again.
You know what? It's too much trouble to do what the security department has asked me to do. I'm just going to subvert this, and I'm going to end up doing it that way, which is, of course, not what we're after.
So the lesson here is, if we make it harder to do the right thing than it is to do the wrong thing, people are going to do the wrong thing. So we need to be able to make the system secure, but also as simple as possible. So keep it simple, stupid.
Here's an example of how we do this in security departments, for real. We'll come up with password rules. So we'll say this is your password and it equals this.
And it's this because we created a complex set of rules that say you have to start with an uppercase, then you follow with a lowercase, then you need a special character, then you need to throw in some numbers, and then you have to have some mixture of upper and lower case and special characters and all this kind of stuff. And it has to be really long. And by the way, we need lots of these.
You're going to have a different one on every system, and I'm going to make you change it on a frequent basis. That--what the user sees--is a complex maze, and they're going to find a way around it. What they're going to do is pick one password, write it down, and set all their systems to the same thing, which is, again, not what we were after.
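To make the contrast concrete, here's a toy comparison of a maze of composition rules versus one simple length rule. The thresholds are arbitrary, for illustration only, not a recommendation:

```python
# KISS sketch: a pile of composition rules vs. a single simple rule.
# Thresholds are arbitrary and purely illustrative.
import re

def complex_policy(pw: str) -> bool:
    # The "maze": 12+ chars, starts uppercase, needs lower, digit, special.
    return (len(pw) >= 12
            and pw[:1].isupper()
            and re.search(r"[a-z]", pw) is not None
            and re.search(r"\d", pw) is not None
            and re.search(r"[^A-Za-z0-9]", pw) is not None)

def simple_policy(pw: str) -> bool:
    # Simpler for the good guys, still hard to brute-force: just length.
    return len(pw) >= 16

print(complex_policy("P@ssw0rd2024!"))                # True, yet predictable
print(simple_policy("correct horse battery staple"))  # True, and memorable
```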
So what we want to do is understand that complexity is the enemy of security. We want to make the system just complex enough to keep the bad guys out, but not so complex that it's hard for the good guys to do what they need to do. Now, you might have noticed a potential conflict with defense in depth, which I talked about up here.
There we're trying to put a system of security mechanisms in place as an obstacle course for the bad guy. But we want that obstacle course to be for the bad guy, not for the good guy. All right, now we've gone over five security principles that you should always observe.
And now the big reveal, the security principle you should never observe, and that is security by obscurity. That is, relying on some sort of secret knowledge in order to make the system safe. It turns out that secrecy and security are not the same thing.
In fact, what we want is a system that is open and observable. A guy called Kerckhoffs came up with what's now known as Kerckhoffs' Principle, which basically describes that. He was specifically talking about a cryptosystem.
And he said, basically, a cryptosystem should remain secure even if everything about it is known except the key. In other words, the key is the only secret in the whole system. Now, why would this be an issue?
Well, it turns out a lot of people do exactly this. And whenever you hear somebody say, "I've invented a proprietary crypto system: you feed your clear text into my algorithm along with a key, and it produces ciphertext", you should run--not walk--away. Okay, great. The problem is this is a black box.
I can't see how it's working. And if the inventor says, "It's unbreakable--I've been hacking at it for weeks, months, years", all that means is the inventor couldn't find a way to break it.
But that's not the same thing as saying the whole world, given access, couldn't break it. In fact, given time, they will, even if it is a black box.
History has shown that to be the case. So what we want is not black box security; we want glass box security. In this case, the clear text goes into a crypto algorithm that we understand, because it's been published.
In fact, if you look at the good crypto algorithms we rely on today--things like AES, the Advanced Encryption Standard, and RSA--anyone who wants to know how they work can look them up and see. So the security doesn't come from some secret knowledge of how the algorithm works.
It's able to produce ciphertext from clear text without having to keep that part secret. The only secret is the key. And that's what we want.
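To see the glass box in practice, here's a short sketch using Python's third-party cryptography package (that dependency is an assumption--install it with pip install cryptography). AES-GCM is a fully published algorithm; the key is the only secret:

```python
# Kerckhoffs' principle sketch: AES-GCM is a published, fully public
# algorithm; security rests entirely on keeping the key secret.
# Assumes the third-party "cryptography" package is installed.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)  # the ONLY secret in the system
aesgcm = AESGCM(key)
nonce = os.urandom(12)                     # public, but must never repeat per key

ciphertext = aesgcm.encrypt(nonce, b"clear text goes in", None)
print(aesgcm.decrypt(nonce, ciphertext, None))  # b'clear text goes in'
```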
We do the same kind of things when we talk about secure operating systems or secure applications or things like that. As long as the security is based on secrecy, it's not really something that we can rely on. Thanks for watching!
Before you leave, don't forget to hit subscribe. That way you won't miss the next installment of the cybersecurity architecture series.