In his Behaviorally Speaking series, Bob Aiello discusses hands-on software configuration management best practices within the context of organizational and group behavior.
Target’s well-publicized disclosure that customers’ personally identifiable information (PII) had been compromised is the latest software “glitch” to get a fair amount of attention. Target is not alone: other retailers are stepping forward and admitting that they, too, have been victims of a cyber-attack. Target CEO Gregg Steinhafel called for chip-enabled credit cards while admitting that there was malware on his point-of-sale machines. The malware was not discovered on the retailer’s machines despite the use of malware scanning services, and why would it have been? The fact that retailers rely upon security software to detect the presence of a virus, Trojan, or other malware is exactly what is wrong with how these executives are looking at the problem. Read on if you would like to know how to secure your systems in the coming year without relying upon security scans that only detect a problem after it is already on your server.
Malicious hackers do not give us a copy of their code in advance so that vendors can build security products capable of recognizing the “signature” of an attack. This means we are approaching security reactively, after the malicious code is already on our systems. What we need to be doing is building secure software in the first place, and to do that you need a secure, trusted application base, which, frankly, is not all that difficult to accomplish.
Creating secure software has more to do with the behavior of your development, operations, information security, and QA testing teams than with the software or technology you are using. We need to be building, packaging, and deploying code in a safe and trusted way so that we know exactly what code should be on a server and can detect unauthorized changes, whether they occur through human error or malicious intent. The reason so much code is not secure and reliable is that we aren’t building it to be secure and reliable, and it is about time we fixed this readily addressable problem.
Whether your software system is running a nuclear power plant, grandpa’s pacemaker, or the cash register at your favorite retailer, software should be built, packaged, and deployed using verifiable automated procedures that have built-in tests to ensure that the correct code was deployed and that it is running as it should be. In the IEEE standards, this is known as a physical and functional configuration audit and is among the most essential configuration management procedures required by most regulatory frameworks—for very good reason. If you use Ant, Maven, Make, or MSBuild to compile and package your code, you can also compute cryptographic hashes of your artifacts and sign them with a private key, a technique commonly known as asymmetric cryptography.
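The hashing half of that step can be sketched in a few lines. This is a minimal illustration, not tied to any particular build tool: it walks a build output directory and records a SHA-256 digest for every file, producing the manifest that a physical configuration audit would later verify. Signing that manifest with a private key, as described above, would be an additional step using an asymmetric-crypto library (for example the `cryptography` package) and is not shown here; the directory and file names are hypothetical.

```python
import hashlib
from pathlib import Path

def build_manifest(artifact_dir):
    """Record a SHA-256 digest for every file under the build output.

    The resulting manifest is the trusted baseline a configuration
    audit compares against; signing it with a private key would then
    prove who produced it.
    """
    manifest = {}
    for path in sorted(Path(artifact_dir).rglob("*")):
        if path.is_file():
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            manifest[str(path.relative_to(artifact_dir))] = digest
    return manifest
```

A natural place to call this is immediately after the packaging step of the build, storing the manifest alongside the release record so that every deployment can be checked against it.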
This isn’t actually all that difficult to do; many build frameworks have these functions built into the language, and there are many reliable free and open source libraries available to help automate these tasks. It is unfortunate, not to mention rather costly, that many companies don’t take the time to implement these procedures and best practices, rushing their updates to market without the most basic security built in from the beginning of the lifecycle.
We have had enough trading firms, stock exchanges, and big banks suffer major outages that impacted their customers and shareholders. It is about time that proper strategies be employed to build in software reliability, quality, and security from the beginning of the lifecycle instead of trying to tack them on at the end if there is enough time.
The HealthCare.gov website has also been cited as having serious security flaws, and there are reports that the security testing was skipped due to project timing constraints. The DevOps approach of building code through automated procedures and deploying to a production-like environment early in the lifecycle is essential in enabling information security, QA, testing, and other stakeholders to participate in building quality systems that are verifiable down to the binary code itself. If you have put in place the procedures needed to detect any unauthorized changes then your virus detection software should not need to detect the signature of a specific virus, Trojan, or other malware.
Using cryptography, I can create a secure record of the baseline that allows me to proactively ascertain when a binary file or other configuration item has been changed. When I baseline production systems, I sometimes find, to my surprise, that files are changing in the environment that I do not expect to be changing. Often there is a good explanation. For example, some software frameworks spawn additional processes and related configuration files to handle increased volume. This is particularly common with rapid-development frameworks that generate files at runtime.
These frameworks are often very helpful, but they are not always completely understood by the technology teams using them. Baselining your codelines will actually help you understand and support your environment much better as you learn what is occurring on a day-to-day basis. There is some risk of false positives, where you suspect a virus or other malware but can determine that there is a logical explanation for the changed files (and that information can be stored in your knowledge management system for next time).
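The detection described above amounts to re-hashing the deployed environment and diffing it against the stored baseline. A minimal sketch, assuming a baseline dictionary of relative path to SHA-256 digest like the one a build process would have recorded (the function name and return shape are illustrative):

```python
import hashlib
from pathlib import Path

def audit_baseline(baseline, deployed_dir):
    """Compare a deployed directory against its recorded baseline.

    Returns (changed, unexpected, missing) lists of relative paths.
    Anything 'unexpected' was not placed there by the deployment
    process and warrants investigation, whether it turns out to be
    a framework-generated file or planted malware.
    """
    changed, unexpected, seen = [], [], set()
    for path in Path(deployed_dir).rglob("*"):
        if not path.is_file():
            continue
        rel = str(path.relative_to(deployed_dir))
        seen.add(rel)
        digest = hashlib.sha256(path.read_bytes()).hexdigest()
        if rel not in baseline:
            unexpected.append(rel)   # file not in the trusted baseline
        elif baseline[rel] != digest:
            changed.append(rel)      # file altered since deployment
    missing = [rel for rel in baseline if rel not in seen]
    return changed, unexpected, missing
```

Run on a schedule, a report like this turns "the scanner didn't flag anything" into the far stronger statement "nothing on this machine differs from what we deployed."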
The Target point-of-sale (POS) devices should have been provisioned using automated procedures that could also immediately identify any code on a machine (or networking device) that was not placed there by the deployment team. Identifying malware is great, but identifying that your production baseline has been compromised is a lot better.
When companies start embracing DevOps best practices, large enterprise systems will become more reliable and secure, all while helping the organizations that run them achieve their business goals.