Has Continuous Deployment Become a New Worst Practice?

Summary:
Software development has been moving toward progressively smaller and faster development cycles, and continuous integration and continuous deployment are compressing delivery times even further. But is this actually good for businesses or their users? Just because you can deploy to production quickly and frequently, should you?

From waterfall to RAD to agile, software development has been moving toward progressively smaller and faster development cycles. Continuous integration and continuous deployment are compressing delivery times even further, which is good for business and good for users—or is it?

In the ’90s, we had automated nightly builds where all code that was checked in would be built and ready for deployment. Then came continuous integration, where a build was done as soon as the code was checked in and automated unit tests could be run on this new build. Now comes continuous deployment, which allows us to painlessly deploy as often as needed.

The ability to deploy to production easily and rapidly is nice to have. We’ve all been in a situation where we need to make an emergency fix due to a major production bug, or lived through a painful release process. However, the ability to deploy quickly and frequently, while very appealing to companies, is often far less appealing to users.

I recently heard people from one agile company boast about how they “deploy software to production one hundred to two hundred times a week.” Based on a five-day workweek, this averages twenty to forty deployments per day—or, based on a ten-hour work day, two to four per hour. What could the downside of this be? From a user’s perspective, here is an example.

As a web-based Microsoft Team Foundation Server user, I can recall several times where the look and feel of TFS changed without notice, which threw everyone off for a while. Making matters worse, the “enhancements” were questionable at best, because they usually involved more clicks to perform the same function.

However, the real danger with continuous deployments is that some companies are moving away from having their software tested by professional software testers. If there’s no pain or cost for deploying software, who cares if it’s buggy? “We do continuous deployment, so we’ll just push out more fixes,” these software development managers say.

In my experience, testing has never been fully appreciated, and continuous deployment may reduce this appreciation even further. Testers are often the low man on the totem pole, and in tight economic times, they are usually the first ones to be let go—with the assurance that the developers will just have to test more, and that the rest of the testers’ tasks will be replaced with test automation. Automated tests run like magic after each build, and they run quickly, too.

But before considering automated testing as a replacement for professional testers, answer these questions:

  • Are your automated tests being checked for false negatives or, worse yet, false positives?
  • Are you periodically injecting data that should cause your automated tests to fail and verifying that they are, indeed, failing?
  • Is your automation fully exploring the product, seeking to learn and adjust as tests are run?
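The second question above amounts to fault injection against the test suite itself: feed it data that should fail and confirm it does. A minimal Python sketch, where `validate_order` and the test around it are hypothetical stand-ins rather than real project code:

```python
# Sketch: audit an automated test by injecting known-bad data.
# `validate_order` is a toy stand-in for the code under test.

def validate_order(order):
    """Toy rule: an order must have a positive quantity."""
    return order.get("quantity", 0) > 0

def order_test_passes(order):
    """The automated 'test' whose verdict we want to audit."""
    return validate_order(order)

# Inject data that SHOULD make the test fail...
bad_order = {"quantity": -5}
assert not order_test_passes(bad_order), \
    "Test passed on known-bad data: a false positive in the suite"

# ...and data that should pass, to guard against false negatives.
good_order = {"quantity": 3}
assert order_test_passes(good_order), \
    "Test failed on known-good data: a false negative in the suite"
```

If the first assertion ever trips, the suite is vouching for software it never really checked, which is exactly the failure mode the question is probing for.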

Remember, not all tests can be automated. Tests based on timing can be difficult to automate, if not impossible.

Some business people and development managers may think, “Okay, our QA person found some bugs that automation did not, but would a ‘real user’ find them? Would they care? Let’s just deploy it and see what happens.” They may see users as unpaid beta testers or crowdsourced testers. 

But how many users will take the time to report a bug? How well will their bug reports be written? How easy will the company make it to report bugs, and how responsive will communications between the company and bug reporter be? Also, there are certain kinds of critical bugs that should never happen in production. Are you confident that your test automation is sufficient to find them?

Test automation focuses on testing functionality but has no clue about usability. Who is using your program—another program or a person? If it’s a person, then you will want a professional tester exercising that application with a keen eye on usability. An alternative to having a professional tester do this is having a group of users participate in a formal usability test. This is a good idea, but in all my years of testing, not a single application I’ve tested has ever gone through a formal usability test. I wish more companies would perform usability testing, but it is faster and cheaper to use a knowledgeable tester.

The need for speed in software development and continuous deployment has caused some traditional practices, like user-visible software versions and release notes, to fall by the wayside. For companies that are deploying ten updates a day to an application, how will users know that changes have been made without release notes or versioning? In my experience, testers were usually the ones who put together release notes, and when a user reported a problem, testers were the ones who tried to reproduce the bug. With no software version being reported, how will users or testers know if the bug has been fixed? Agile is based on feedback, and cutting out versions and release notes hinders feedback. It will also limit your analysis if you’re collecting metrics.
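One lightweight way to keep versioning alive under rapid deployment is to stamp every build with an identifier that users can quote in bug reports and testers can compare against the build that contains a fix. The following Python sketch is purely illustrative; the names and the version scheme are invented for this example:

```python
# Sketch: stamp a deployable with a build version so bug reports
# and fixes can be tied to a specific build. Names are illustrative.

APP_VERSION = "2.14.307"   # e.g., written by the CI pipeline at build time

def about_string():
    """Text a user could copy into a bug report."""
    return f"MyApp {APP_VERSION}"

def is_fixed_in(reported_version, fix_version):
    """Has the build a user reported against received the fix yet?"""
    parse = lambda v: tuple(int(part) for part in v.split("."))
    return parse(reported_version) >= parse(fix_version)

print(about_string())
print(is_fixed_in("2.14.307", "2.14.310"))  # the user's build predates the fix
```

With something this simple in place, “is the bug I reported fixed yet?” becomes a comparison of two strings instead of guesswork about which of today’s ten deployments a user happened to be running.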

For the company that boasted about deploying to production up to two hundred times a week, is this good, bad, or ugly? I don’t know the number of applications involved, so I can’t say. If they have ten apps and we assume half the deployments are new or improved features and half are bug fixes, we have five new features and five bug fixes per app per week at the low end and ten of each at the high end. If this is a new application, then adding five new features a week may be reasonable, but after six months, this would be excessive. Might this be an indication of feature creep or gold-plating?

The same figures apply to bug fixes. Five bug fixes a week may be very reasonable in the beginning, but if this figure remains steady, then it may be a good indication of inadequate testing.

We’ll have to wait and see if continuous deployment has a positive or negative impact on software quality in the long run. If companies feel they can replace professional software testers completely with automated tests, I think we know the answer. While continuous deployment has the potential to be a very positive trend, never underestimate people’s propensity to misuse a good thing.

User Comments

Justin Thomsen

Hey John, great to consider how continuous deployment can affect QA. One of the things I think should be called out is that continuous deployment does not necessarily mean continuous release. With concepts like A/B testing, feature toggling, and dark launches, it's possible to continuously deploy, even to production, while staying safe for your users.

When done in an unsafe way, continuous deployment absolutely is a disaster. Making your deployments safe includes thinking about how to do so while still getting the benefit of seeing how that small snippet of code affects people when it's actually released.

As far as release notes go, I agree: for feature releases, depending on your business, you should take good care to communicate the features that have a big impact on your users. For every deployment, though, especially when things aren't toggled on, release notes aren't necessary, and you can use the version control comments to understand what may have changed between push A and push B.

September 25, 2017 - 9:43am
John Tyson

I wasn't familiar with the term 'dark launch,' even though I've been in shops that use this technique: in my case, rolling out minor or partial features without turning them on.

I see that people talk of enabling these new features for a subset of users so they can get their feedback. So it sounds like the users know of the change (are warned of it) and were asked to give feedback, or at least know how to give it. Or are people just looking at analytics like "Did they bail out of this page?", which could be open to interpretation? Implicit feedback can be tricky: did I stay on the page ten minutes because I loved it, because I didn't understand what I should do, because the doorbell rang?

If the changes are being tested, that's great.  And it's people analyzing what other people did, which makes for a good complement to test automation.

September 25, 2017 - 3:48pm
Oliver Erlewein

Indeed, what gets me most is that many don't distinguish deployment from release. To me, release is the exact moment it changes hands: it moves from being an IT project to being a business responsibility. The business is responsible for how and when it goes to production, based on the input they have from handover.

One other option is to use APM tools in PROD to monitor your dark launch: what are people doing in the areas that have changed? But that necessitates that you know your application very well and that you have enough instrumentation to see it.

As for testing (QA is something else entirely), it is always good to read the resources around "checking vs. testing." Even if you don't use the terms, realising what's what is essential. Automation has some serious limitations and heaps of misconceptions surrounding it. If you don't know of those, there is a high chance you'll get it wrong, with all the consequences.

October 2, 2017 - 3:15pm
