When it came to security testing, Sylvia Killinen noted that her company's most frequent difficulty wasn't the testing itself. Instead, it was the communication that caused problems, in part because of the words used to explain what would be performed. If you take care with how you describe your process, you may get more support while executing tests and repairing systems.
With frequent, high-profile data breaches making the news, hackers on the big screen, and cybersecurity terms like phishing and identity theft entering the household vocabulary, the testing community has a perfect opportunity to discuss security testing.
There are several steps to a security test at my organization. The first is obtaining sign-off that the test is OK to perform, with permission from the product owner as well as the environment manager for whichever system is under test. This is best done in a document, because it's easy to forget a conversation or lose an instant message, so I fill out a statement of intent for each system under test. In that document, I describe at a high level what sort of testing will be performed, when it will happen, and what impacts may result. Because any good security test aims to find and exploit vulnerabilities, impacts can include anything from system downtime to loss of data. I use the same language to describe tests as I would to describe vulnerabilities, because the statement of intent will also serve to guide development of test cases.
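To make that concrete, a statement of intent might be outlined as follows. The fields shown are my own illustration of the high-level items described above, not a mandated format:

```
Statement of Intent: Security Test
  System under test:    (name and environment)
  Requested by:         (tester)
  Approved by:          (product owner and environment manager)
  Test window:          (dates and times)
  Testing performed:    (high-level description, e.g., input validation,
                         authentication checks, unusual load)
  Possible impacts:     (e.g., degraded performance, temporary downtime,
                         loss of test data)
  Recovery plan:        (who restores the environment, and how long it takes)
```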
My company has had security testing in place for some time, but we're still introducing new people to the process. It quickly became apparent that our most frequent difficulty was not acquiring or using tools, knowledge, or even time. Instead, it was communication that caused a constant stream of problems, in part because of the words used to explain what would be performed.
Here’s a real story: A new product required security testing because it handled sensitive data. When I was assigned, the first few questions I asked were the standard ones we used at the time: What was the scope of the product? What sensitive information did the application under test handle? If I fouled up and knocked over a piece of the test environment, how long would it take to put it back up, and how much impact would the project expect? After that last question, the process came to a grinding halt. “Wait a minute,” said the team. The idea that this kind of testing could damage something seemed to be entirely new to them.
I tried to explain that I test very carefully and don’t use destructive techniques where a gentle one could show the same vulnerability, but it was too late. After the meeting, the product owner went to his manager and mine to say that he couldn’t possibly allow testing that could endanger the deadlines and the test environment. I had some explaining to do, so I reworded everything I’d previously said in more acceptable terms. I explained the risks in terms of impact and likelihood, the standard language of risk, and we came to an agreement. But the initial conflict could have been avoided if I had been more careful with my language. The experience gave me a trigger list: a set of words and phrases that mean something different to other people and can even cause allergic-like reactions.
Even though I use hacking tools and techniques, I don’t generally refer to my work as hacking; “security testing” or “penetration testing” gives a better idea that I am following the same sort of guidelines as the rest of our testing teams and am working to the benefit of our products and our company.
Loading is an alternative to stressing, for the same reason. I may place a system under load to simulate a denial-of-service (DoS) attack. Rather than using “denial of service” or “DoS” in my documentation or conversation, I will instead describe “unusual load,” typically borrowing language from the service-level agreements used in performance testing.
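As a minimal sketch of what "unusual load" can mean in practice, the following Python script fires concurrent requests at a test endpoint and reports failures and the slowest response. The URL, worker count, and request count are hypothetical illustrations, not values from any real service-level agreement:

```python
# Sketch: place "unusual load" on a hypothetical test endpoint and
# record how the system behaves, rather than framing it as a DoS attack.
import concurrent.futures
import time
import urllib.request

TARGET_URL = "http://localhost:8080/health"  # hypothetical system under test
WORKERS = 20      # concurrent clients, chosen to exceed the expected SLA peak
REQUESTS = 200    # total requests to send during the test window

def timed_request(_):
    """Send one request; return (succeeded, elapsed seconds)."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(TARGET_URL, timeout=5) as resp:
            ok = resp.status == 200
    except OSError:
        ok = False  # connection refused, timeout, or other network failure
    return ok, time.monotonic() - start

def run_load():
    with concurrent.futures.ThreadPoolExecutor(max_workers=WORKERS) as pool:
        results = list(pool.map(timed_request, range(REQUESTS)))
    failures = sum(1 for ok, _ in results if not ok)
    slowest = max(elapsed for _, elapsed in results)
    print(f"{failures} failed responses; slowest took {slowest:.2f}s")

if __name__ == "__main__":
    run_load()
```

Framed this way, the results read like a performance report (failure counts and response times against agreed thresholds) rather than an attack narrative, which matches the SLA language stakeholders already know.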