Using the Principles of the CIA Triad to Implement Software Security

Summary:

If you're starting or improving a security program for your software, you probably have questions about the requirements that define security. Data need to be complete and trustworthy, and also accessible on demand, but only to the right people. The CIA triad defines three principles—confidentiality, integrity, and availability—that help you focus on the right security priorities.

Several years ago, I worked with my employer to start a software security program. We truly started from the ground up, with no dedicated security development team. Testing is a related discipline whose core skills of investigation, troubleshooting, and reporting bugs carry over well, so that's where we started. I'm still a tester, but now I focus on security first and foremost. With the new systems in place, I took some time to look back on our process and lay out an example of what worked for us.

While we were building our team, I frequently heard two questions: Why do we need to care about security? And how do we even start thinking about it?

The first question is the easier one to answer. No company wants to star in the next data breach headline or upset its user base with an embarrassing privacy slip. That sort of mistake costs big money, but even without an attention-grabbing breach, developing without security in mind carries hidden costs, and many of them can be addressed by changing the way we think about what constitutes security.

This article is an attempt at answering the second question. There’s a common perception that security is all about protecting your system from malicious users. That’s a good start, but it isn’t enough. Relying on the single question “Does an attacker care about this?” both underestimates the creativity of a hacker with a goal and, more importantly, limits the effectiveness of the security program.

What Does It Mean to Be Secure?

Security isn’t a feature of a piece of software; it’s a property of the entire system, including its users.

First, let’s define the user. Human users are a good start, but that isn’t sufficient for figuring out the security a system requires. I had a lightning bolt moment when, the day after a product failed, another product that relied on a file delivered by the downed system also errored out. My first thought was, “We’ve just failed our users!” No humans were angry with us yet, but a downstream system had already been affected. Broadly, anything relying on your software is a user, whether it’s a machine or a human customer. I now rely on this definition when writing and executing my tests.

The definition of security was our next big hurdle. While everyone would like to claim that their product is secure, the truth is that perfect security isn’t possible in the real world. No set of rules is unbreakable, and no software system is truly unhackable. Instead of “How do we get to perfection?” we asked, “How do we get secure enough?” Then, we had to figure out what “secure enough” meant. Some pieces were obvious, as they were required by regulators or our partners. Some were less obvious and came up only as we started to dig into what the most visible aspects actually meant in implementation.

We found right away that the big requirements had hidden underpinnings. Keeping our users’ data safe also meant knowing which data were important and how safe they ought to be. Being able to make decisions on that data meant it had to be complete and trustworthy, and also accessible on demand, but only to the right people. Fortunately, a framework exists to help define those baseline requirements.

To get closer to the true goals of security, we decided to model our measures on what’s known as the CIA triad: confidentiality, integrity, and availability. The terms are simple, but their definitions do not match the common usage of the words; a quick redefinition is necessary to get the full benefit of the model.

Confidentiality: The Right Data Going to the Right Users

Confidentiality is about not just keeping information private, but also keeping the right information, whatever that may be, from being exposed to the wrong people. The “right” information is anything that is sensitive or crucial to system operation. For example, a revealed server name is unimportant to most users but is a roadmap for an attacker, and many forms of personally identifiable or financial information can be sold or exploited for profit.

The wrong users, then, are any people or systems not authorized to have access to the data. This is defined by the user’s role. A trusted employee who isn’t an administrator probably shouldn’t have access to some data, a thief shouldn’t have any confidential data at all, and a content scraper should only see public information. Privacy concerns nearly always map neatly to confidentiality problems.

On the other hand, not all data is the right data. Things the public can easily find out usually aren’t confidential, although there are exceptions. This will depend on your system and what it handles, as well as any legal or regulatory requirements affecting your projects.
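To make this concrete, here is a minimal sketch of what a role-based visibility check might look like in code. The role names, record fields, and the VISIBLE_FIELDS mapping are all hypothetical, invented for illustration; they are not a prescription for how your system should classify its data.

```python
# A minimal sketch of role-based access to a record's fields.
# Role names, field classifications, and data below are hypothetical.
from dataclasses import dataclass

# Which fields each role is allowed to see. Anything not listed stays hidden.
VISIBLE_FIELDS = {
    "public":   {"display_name"},
    "employee": {"display_name", "email"},
    "admin":    {"display_name", "email", "ssn", "billing_account"},
}

@dataclass
class CustomerRecord:
    display_name: str
    email: str
    ssn: str
    billing_account: str

def view_record(record: CustomerRecord, role: str) -> dict:
    """Return only the fields the caller's role is authorized to see."""
    allowed = VISIBLE_FIELDS.get(role, set())  # unknown roles see nothing
    return {field: value for field, value in vars(record).items() if field in allowed}

record = CustomerRecord("Ada Lovelace", "ada@example.com", "123-45-6789", "ACCT-0042")
print(view_record(record, "employee"))  # {'display_name': 'Ada Lovelace', 'email': 'ada@example.com'}
print(view_record(record, "public"))    # {'display_name': 'Ada Lovelace'}
```

The point of the sketch is that confidentiality decisions live in the intersection of two lists: which data is sensitive, and which roles are entitled to it. Getting either list wrong exposes the right data to the wrong users.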

Integrity: Good Data from Trustworthy Sources

In order for the system to have integrity, the data must be valid, come from a trusted source, travel through secure means that don’t allow it to be intercepted or tampered with, and be stored where it can’t be viewed or altered by the wrong users. In effect, if your data can’t be tampered with in motion or at rest but it came from who-knows-where, you probably still shouldn’t trust it or make business decisions based on it.

This is a wider definition of integrity than is usual for the CIA model because it takes into account whether the data is trustworthy to begin with. Thus, it defines “Can I make a good decision on this?” as part of information security. Bad decisions cost resources, from the small example of having to make an extra phone call to the large examples that make the news every day.
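As one illustration of checking both origin and tamper-resistance, here is a minimal sketch using a keyed hash (HMAC) from Python's standard library. The shared key, payload, and function names are placeholders; a real system also needs careful key management and usually relies on established mechanisms such as TLS and signed messages rather than hand-rolled checks.

```python
# A minimal sketch of verifying data integrity and origin with an HMAC.
# The shared key and payload are placeholders; key management is out of scope here.
import hmac
import hashlib

SHARED_KEY = b"replace-with-a-real-secret"  # known only to the trusted sender and the receiver

def sign(payload: bytes) -> str:
    """Sender side: attach a MAC so the receiver can verify origin and integrity."""
    return hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()

def verify(payload: bytes, received_mac: str) -> bool:
    """Receiver side: recompute the MAC and compare in constant time."""
    expected = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, received_mac)

payload = b'{"order_id": 42, "amount": "19.99"}'
mac = sign(payload)

assert verify(payload, mac)                     # untouched data from the trusted source passes
assert not verify(payload + b" tampered", mac)  # any alteration in transit fails the check
```

A check like this answers both halves of the wider definition at once: only a holder of the shared key could have produced the MAC, and any change to the data after signing makes verification fail.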

Availability: Keeping the Data Flowing

Availability is probably the most straightforward measure used by this framework. A system is available when its data are accessible to the right people when they need it. This can be easily expanded to include considerations of load: “When they need it” can also mean “as fast as needed, as often as needed.” While this is a slight extension of the CIA framework, I feel it’s a logical enough leap that it does not stretch the model past its original intent.
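Here is a minimal sketch of how a tester might probe that expanded definition: fire a batch of concurrent requests and count how many succeed and how many come back slower than the agreed threshold. The URL, request count, and threshold are placeholder values, and a real load test belongs in a dedicated tool rather than a handful of threads.

```python
# A minimal sketch of probing availability under concurrent requests.
# The URL, request count, and latency threshold are placeholders for illustration.
import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen
from urllib.error import URLError

TARGET_URL = "https://example.com/health"  # hypothetical health endpoint
REQUESTS = 50
MAX_ACCEPTABLE_SECONDS = 1.0

def probe(_: int):
    """Return the response time in seconds, or None if the request failed."""
    start = time.monotonic()
    try:
        with urlopen(TARGET_URL, timeout=5) as response:
            response.read()
        return time.monotonic() - start
    except (URLError, TimeoutError, OSError):
        return None

with ThreadPoolExecutor(max_workers=10) as pool:
    results = list(pool.map(probe, range(REQUESTS)))

successes = [t for t in results if t is not None]
print(f"{len(successes)}/{REQUESTS} requests succeeded")
if successes:
    slow = sum(1 for t in successes if t > MAX_ACCEPTABLE_SECONDS)
    print(f"{slow} responses exceeded {MAX_ACCEPTABLE_SECONDS}s")
```

Even a rough probe like this turns “available” from a yes-or-no claim into numbers you can compare against what the business actually needs.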

All three of these factors affect each other, and while the words are simple, the implementation is as complex as the system under test. The real value of the CIA framework lies in expanding the definition of “secure enough” to also include “Can the business trust this data to be valid and accessible for making critical decisions?”

The CIA Triad in Practice

We discovered several gaps in our development systems using the CIA model. Even though we wanted to have secure products, there were pieces missing from our requirements and, thus, from our development and tests. High load was a good example; we discovered the hard way that certain pieces of software would not stand up to much more traffic than they ordinarily received. As a software tester, one of the best tools I have to get a discussion started is to log a bug or a missed requirement.

Our information security team also started conversations with the product planning team about security requirements. When the questions started coming in, we found good external training resources on how to reduce vulnerabilities in code and gave them not only to our developers, but also to our testers, so that we could come at the problems from multiple angles.

The downside of the CIA framework is that it looks only at data. Stopping there doesn’t do justice to the full complexity of any given system. A full list of aspects covered by other frameworks and how we addressed them is well out of scope for this article, but the usual tools of gap analysis, good research and planning, and strong teamwork can overcome those obstacles as well.

When more people started using the CIA framework and asking questions about the software, we saw a sudden and dramatic increase in the quality of our work. If you need to implement or improve a security program for your software, think first about these three crucial principles: confidentiality, integrity, and availability.
