Using the Principles of the CIA Triad to Implement Software Security

Summary:

If you're starting or improving a security program for your software, you probably have questions about the requirements that define security. Data need to be complete and trustworthy, and also accessible on demand, but only to the right people. The CIA triad defines three principles—confidentiality, integrity, and availability—that help you focus on the right security priorities.

Several years ago, I worked with my employer to start a software security program. We truly started from the ground up, with no dedicated security development team. Testing is a related discipline, with relevant skills in investigation, troubleshooting, and bug reporting, so we started there. I’m still a tester, but now I focus on security first and foremost. With the new systems in place, I took some time to look back on our process and lay out an example of what worked for us.

While we were building our team, I frequently heard two questions: Why do we need to care about security? And how do we even start thinking about it?

The first question is the easier one to answer. No company wants to star in the next data breach headline or upset its user base with an embarrassing privacy slip. That sort of mistake costs big money, but even in the absence of an attention-grabbing breach, developing without security in mind carries hidden costs, and those costs can be addressed by changing the way we think about what constitutes security.

This article is an attempt at answering the second question. There’s a common perception that security is all about protecting your system from malicious users. That’s a good start, but it isn’t enough. Relying on the single question “Does an attacker care about this?” both underestimates the creativity of a hacker with a goal and, more importantly, limits the effectiveness of the security program.

What Does It Mean to Be Secure?

Security isn’t a feature of a piece of software; it’s a property of the entire system, including its users.

First, let’s define the user. Human users are a good start, but they aren’t sufficient for figuring out the security a system requires. I had a lightning-bolt moment the day after one of our products failed, when another product that relied on a file delivered by the downed system also errored out. My first thought was, “We’ve just failed our users!” No human was angry at us yet, but the second system had been affected all the same. Broadly, anything relying on your software is a user, whether it’s a machine or a human customer. I now rely on this definition when writing and executing my tests.

The definition of security was our next big hurdle. While everyone would like to claim that their product is secure, the truth is that perfect security isn’t possible in the real world. No set of rules is unbreakable, and no software system is truly unhackable. Instead of “How do we get to perfection?” we asked, “How do we get secure enough?” Then, we had to figure out what “secure enough” meant. Some pieces were obvious, as they were required by regulators or our partners. Some were less obvious and came up only as we started to dig into what the most visible aspects actually meant in implementation.

We found right away that the big requirements had hidden underpinnings. Keeping our users’ data safe also meant knowing which data mattered and how safe it ought to be. Making decisions based on that data meant the data had to be complete and trustworthy, and also accessible on demand, but only to the right people. Fortunately, a framework exists to help define those baseline requirements.

To get closer to the true goals of security, we decided to model our measures on what’s known as the CIA triad: confidentiality, integrity, and availability. The terms are simple, but their definitions do not match the common usage of the words; a quick redefinition is necessary to get the full benefit of the model.

Confidentiality: The Right Data Going to the Right Users

Confidentiality is about more than keeping information private; it’s about keeping the right information, whatever that may be, from being exposed to the wrong people. The “right” information is anything sensitive or crucial to system operation. For example, a revealed server name is unimportant to most users, but it’s a roadmap for an attacker, and many forms of personally identifying or financial information can be sold or exploited for profit.
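To make the principle concrete, here is a minimal sketch, in Python, of what confidentiality can look like in code: sensitive fields are returned only to roles explicitly allowed to see them, and internal details such as server names are kept out of user-facing error messages. Everything in this example (the record, FIELD_ACCESS, view_for_role, safe_error) is a hypothetical illustration, not part of any real system.

# A minimal sketch of the confidentiality principle in application code.
# All names here are hypothetical illustrations, not a real API.

# A user record mixing harmless, sensitive, and internal fields.
USER_RECORD = {
    "username": "jdoe",
    "email": "jdoe@example.com",   # personally identifying information
    "card_last4": "4242",          # financial information
    "home_server": "db-prod-03",   # internal detail: a roadmap for an attacker
}

# Explicit allow-list of fields per role; anything unlisted stays hidden.
FIELD_ACCESS = {
    "support_agent": {"username", "email"},
    "billing":       {"username", "email", "card_last4"},
}

def view_for_role(record: dict, role: str) -> dict:
    """Return only the fields the given role is permitted to see."""
    allowed = FIELD_ACCESS.get(role, set())  # unknown roles see nothing
    return {key: value for key, value in record.items() if key in allowed}

def safe_error(internal_detail: str) -> str:
    """Keep internal details out of user-facing error messages."""
    print(f"[internal log] {internal_detail}")  # stand-in for real logging
    return "Something went wrong. Please contact support."

if __name__ == "__main__":
    print(view_for_role(USER_RECORD, "support_agent"))
    # -> {'username': 'jdoe', 'email': 'jdoe@example.com'}
    print(safe_error("connection to db-prod-03 refused"))
    # The user sees only the generic message; the server name stays internal.

The key design choice is default deny: a role missing from the allow-list sees nothing, so an unconfigured new role fails safely toward confidentiality rather than toward exposure.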
