Building a Continuous Deployment Environment: An Interview with Jared Richardson

[interview]

Josiah Renaudin: Welcome back to another TechWell interview. Today I am joined by Jared Richardson, a principal consultant and a member of the core team at Agile Artisans. He'll be giving a tutorial at our Better Software West conference titled, “Continuous Testing to Drive Continuous Integration and Deployment.” Jared, thank you very much for joining me today.

Jared Richardson: Hey, thanks for having me, Josiah.

Josiah Renaudin: Absolutely, and first, before we get rolling into the content, tell us a little bit about your experience in the industry.

Jared Richardson: Oh, wow.

Josiah Renaudin: Big question, I know.

Jared Richardson: Let's see. Well, I was in college back in the late '80s, early '90s, and sold my first program, which came out of one of my data structures classes, back in '91, so I've been writing software for a while. I got started in the Smalltalk arena, moved from Smalltalk to Java, Java to Ruby on Rails, and landed in the right place at the right time, with a few different opportunities. Now I'm a full-time agile coach, working with Andy Hunt on the GROWS methodology.

Josiah Renaudin: Yeah, and when we get into topics like the continuous integration and continuous testing, sometimes we forget to even define exactly what these terms are. For you, can you quickly define both CI and continuous testing?

Jared Richardson: Well, CI, I like to call it the gateway drug for a lot of the agile technical practices. It's usually a software process that watches your code repository. It works with Git, Subversion, whatever you happen to be using. When anybody touches the code, developer, tester ... I've had documentation teams break code before. When anybody touches it, for any reason, it's going to notice. Every five minutes, ten minutes, it's going to look, see the change, check out the code, and build it, just to make sure that what we're taking to production still compiles.

If you're on a progressive team that's really maturing, you'll then have a series of unit tests that can be run. For years, that was the gold standard. Let's make sure it compiles, and if we have unit tests, let's see if they run. Eventually, people figured out the end users don't really care if it compiles. They don't care about unit tests particularly. They care if the product can be installed and run.

When you talk about continuous deployment, and continuous testing, that's the process of taking that successfully compiled code, and running the installer in an automated fashion, installing it to something that looks like production, and then running your integration tests against it.

We get that feedback to those testers and developers. Rather than waiting days, weeks, or months, you get it in tens of minutes, twenty minutes, under an hour, and you know if everything works in your app. It's a very, very efficient way of getting feedback into the hands of the people doing the work.
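The poll, build, deploy, test loop Jared describes can be sketched in a few lines. This is an illustrative sketch, not any particular CI product's API; the stage callables are hypothetical stand-ins for your real build and test commands.

```python
# Minimal sketch of a CI/CD feedback loop: each stage runs in order and
# the pipeline stops at the first failure, so the team hears about a
# break within minutes instead of weeks. The stage names and callables
# here are illustrative, not a real CI tool's interface.

def run_pipeline(build, unit_tests, deploy_to_staging, integration_tests):
    """Run each stage in order; report the first stage that fails."""
    stages = [
        ("build", build),                         # does it still compile?
        ("unit tests", unit_tests),               # do the small tests pass?
        ("deploy", deploy_to_staging),            # does the installer work?
        ("integration tests", integration_tests), # does the installed app work?
    ]
    for name, stage in stages:
        if not stage():
            return f"FAILED: {name}"
    return "OK"

# All stages passing yields "OK"; a broken build stops the line immediately.
print(run_pipeline(lambda: True, lambda: True, lambda: True, lambda: True))
```

A real CI server wraps this loop in a poller that watches the repository (Git, Subversion, whatever you use) and kicks off the pipeline on every change it sees.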

Josiah Renaudin: How do continuous integration and continuous testing both lead to a continuous deployment environment? To branch off of that, too, are these feedback loops only really found in agile teams, or can they be found everywhere?

Jared Richardson: I have brought continuous integration into mainframe teams; I've brought it into waterfall teams. This type of feedback loop is very valuable no matter the methodology. I like to tell people, I've had clients before that tell me, "Hey, we're not doing agile here anymore. The command has come down from on high, senior VP, C-levels have said, ‘No agile,’ so we can't work with you." My response is always, "I don't care if we do agile, I care if we drive solid engineering practices. Now I happen to pull a lot of those from the agile space, and here's one. This isn't agile, or not agile. This is just rapid feedback for the people writing code, and so it fits in any methodology that I'm aware of."

Josiah Renaudin: Whenever I talk about open source on any of these interviews, I feel like Jenkins is one of the main names that comes up, so why has Jenkins become such a popular and standardized open source continuous integration tool? What really makes it so effective? What's pulling so many people to it?

Jared Richardson: Well, ten years ago, CruiseControl was the CI tool anywhere you went. You didn't ask if they were running CI, you asked if they were running CruiseControl. I contributed to that. I wrote some of the threading code for CruiseControl, so I really loved CruiseControl, a great community, a great tool. Then along came Jenkins, and it had a few key improvements that just really took over. These days very few people even know what CruiseControl is. Jenkins had a very nice plugin architecture. They had a very nice open approach to adding plugins. They provided a GUI for configuring the job, so everything can be done from a web page. You don't have to touch a config file, you don't have to touch an XML file. You can even install those plugins from the management GUI.

You put this tool in place. It's got a built-in app server. It's lightweight, but it works for most scenarios. You want to install a plugin, you click through a web page. You want to upgrade Jenkins itself, click through a web page. You want to add users, click through a web page. I've reached the point where I've had clients that use version control tools I've never heard of, and there's a Jenkins plugin for them. I mean, it came through Sun, and then for a while Oracle, before it really broke out. It had that big corporate backing, so people were not afraid to contribute the time to write the plugins to support this little niche scenario, or that particular tool.

Now, when you bring up a brand-new Jenkins install and browse the available plugins, I want to say it's in the neighborhood of two thousand to three thousand. Most of what you want to do with your build, someone's already written a plugin for it. If they haven't, they've written one similar enough that you can copy it.

Josiah Renaudin: I'm a writer and editor. In the writing world, very often, you have people who, they get edits, and they look at it and say, "Yeah, this is better," and they don't really look at, "Why is this better?" or "How do I take this in and make sure I don't make this same mistake in the future? How can I improve my own craft?" In the world of software development, how important is it for a developer to not only spot issues early on, but actually to learn from the experience? Do we often just spot a problem, fix it, and then move on with really not understanding why that problem first started?

Jared Richardson: That is a great question. Here's what most software teams do. They come in in the morning, they sync up yesterday's code, and they notice login isn't working, or they notice this particular thing is broken, and so they fix it. Most teams assume it was a merge issue, or whatever, and move on. Now look at a team that's using continuous integration, good solid unit tests, and defect-driven testing, find a bug, write a test, continuous testing. Somebody breaks login, and the developer who broke it gets the email. Why should I do your work for you, and why should I fix your bugs? If I'm fixing what you accidentally break, you're not learning from that.
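Jared's "find a bug, write a test" pattern has a simple shape in practice. The `authenticate` function and the empty-password bug below are hypothetical, just to illustrate a defect-driven regression test:

```python
# Defect-driven testing sketch: when a bug is found, a test reproducing it
# goes into the suite alongside the fix, so the same mistake can never
# silently return. `authenticate` is a hypothetical stand-in for real code.

def authenticate(user, password):
    # The fix: empty passwords used to be accepted (the original defect).
    return bool(user) and bool(password)

def test_empty_password_rejected():
    # Written the day the bug was found; it now runs on every CI build.
    assert authenticate("alice", "") is False

def test_valid_login_accepted():
    assert authenticate("alice", "s3cret") is True
```

Once that test lives in the suite, whoever breaks login next gets the email within minutes, not a bug report weeks later.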

I mean, I tell developers, I've worked with tech leads that said, "Oh yeah, the team stops working at 4:00 or 5:00, I work until 6:00 or 7:00. I fix what they broke every night." My response to that is, "You have chosen as a career path to be a software pooper scooper." The more poop you scoop, the more poop they'll dump. I mean, if I'm fixing your mistakes, I'm learning from your mistakes. That's really good for me, but as a team, it's really bad for you, because you're going to keep making the same mistakes.

The best teams I've ever worked with are the ones where the people who make the mistake get the email, learn from it, and quite often don't make the mistake again. That's a completely different scenario from having one group cover a product through maintenance instead of the team that wrote it. If the same people who write the code, with its problems, mistakes, and issues, are the team that fixes it, they become a much better team. When you have anybody else cover that, you're cheating the team, and you're enabling those long-term dysfunctional behaviors to continue. A good continuous integration system, with a good automated testing suite, gets you that feedback while you still remember what you were coding.

Josiah Renaudin: Absolutely, and you used the word maintenance. Maintenance is really important when someone is trying to consider using continuous testing. How much time can your average development team save when you're using continuous testing to cut back on your code maintenance? I mean, you've got to go down to, "What could this do for my team?" What could continuous testing do for a team to really help improve it?

Jared Richardson: It varies by team, but I find almost every team out there could benefit greatly, to the tune of upwards of half their time. I've seen teams spend more than half their time fixing issues that a CI/CT system would eliminate. The problem is they've become acclimated to the pain. They think it's normal to merge in everybody else's changes and have things break. They don't realize how many things they're breaking. They don't realize how many things their coworkers are breaking.

I'll first just come in and bring in the CI system. Let's just do compiles. Then everybody realizes how often they're breaking the compile, and you reach a point where the compile stays stable. We've eliminated an entire class of problems. Then we start bringing in some basic unit tests. Again, we eliminate another class of problems.

Where a CI system, and a continuous deployment/continuous testing system, really shines, though, is when you're ready to ship. So many teams will work for six months and then end up taking another three months to harden it all. If you have a good test system in place, where the test suites cover the major scenarios, you can't break something at the two-month mark and then discover it at the four-month mark because QA was busy.

You'll know as soon as you break something. Every automated test suite out there records the timing. If something that used to take thirty seconds now takes seven minutes, you know. You can spot performance issues. You can spot functional issues. When you reach a point where release is a non-event, because it's really just a boring thing, tell me how much time you spent on your last project where you didn't have that. That's how much time you can save.
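The timing check Jared mentions can be as simple as comparing each test's recorded duration against a baseline. The threshold factor and the durations below are arbitrary illustrations, not values from any particular tool:

```python
# Flag tests whose runtime has blown up relative to a recorded baseline.
# Durations are in seconds; factor=3.0 is an arbitrary illustrative threshold.

def timing_regressions(baseline, current, factor=3.0):
    """Return the names of tests that now take more than factor x baseline."""
    return sorted(
        name for name, secs in current.items()
        if name in baseline and secs > factor * baseline[name]
    )

baseline = {"login": 30.0, "checkout": 12.0}
current = {"login": 420.0, "checkout": 11.5}  # login: thirty seconds -> seven minutes
print(timing_regressions(baseline, current))  # -> ['login']
```

Run after every build, a check like this surfaces a performance regression the same day it's introduced, alongside the functional failures the suite already catches.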

Josiah Renaudin: Something that's really become synonymous with you at this point is the GROWS method, which would be interesting to touch on before we end this. What exactly is the GROWS method? Why did you and Andy Hunt create it? To add to that, to make this even a bigger question, why is it relevant to continuous integration?

Jared Richardson: The GROWS method is something Andy Hunt, as you mentioned, and I are working on. It is a new agile methodology, and it's based around several different key concepts, one of which is the Dreyfus model of skill acquisition. Without going too deep down that path, it says that beginners need steps. Experts, advanced folks, can start to get into ideas and concepts, things like code smells, but beginners need a flat-out series of do this, do this, do this, do this. That's why practices like Scrum are so popular: they say meet this many times, answer these three questions in this meeting.

As we mapped out what the GROWS methodology would cover, we looked at practices like continuous integration, and we made them core. Stage one of the model says, "Here are your common practices for your team. Here are your core practices for your developers, for your executives." Tools like continuous integration, we recognized, don't intrude on how the team works. CI doesn't come in and say write a test first, which is a valuable practice but sometimes a difficult sell. A tool like CI sits outside of what they're doing, but still changes behavior by spotting mistakes as soon as they happen.

We have a lot of practices that fit in like that, but CI is one of the core ones. We are making sure that if you're going to call yourself a mature GROWS team, CI is going to be core to what you're doing, along with test automation and continuous testing.

Josiah Renaudin: Jared, more than anything, what central message do you want to leave with all those who attend your tutorial?

Jared Richardson: Whether you are using continuous integration, continuous deployment, or continuous testing, everything you do in life, but especially in a software development lifecycle, is a feedback loop. Everything you do, A leads to B, whether you want to call it karma or call it continuous integration. As you're writing software with continuous integration as a focal point, look at how tight you can make those feedback loops.

There are feedback loops between our customers, our product owners, maybe our Scrum masters, and our tech leads. And from the time you touch the keyboard and type something, whether you're fixing a bug or adding a new piece of functionality, how quickly can I get feedback into your hands that what you wrote is functioning the way you intended it to?

It might be right, might not be right, may or may not be what the customer wants, but it's what you intended. If we can create that feedback for you, anybody who comes on your team after you, anyone who touches the code, fixes a bug when you're not around, will get that same level of feedback, and will keep the product rock solid. Tighten up those feedback loops, and everything will be a lot smoother sailing.

Josiah Renaudin: Absolutely. Well, thank you so much, Jared. I really appreciate you talking to me today, really insightful. I'm looking forward to hearing more from you at Better Software West.

Jared Richardson: Thanks. I'm looking forward to being there, and hopefully a few of the people listening will come out and say hi.

About the author

Principal consultant and a member of the core team at Agile Artisans, Jared Richardson is a process coach who works with software teams to help them build excellent software. He sold his first software program in 1991 and has been immersed in software ever since. Jared helped create the GROWS methodology and has authored a number of books, including the best-selling Ship It! A Practical Guide to Successful Software Projects and Career 2.0: Take Control of Your Life. He is a frequent speaker at software conferences and a thought leader in the agile space. Jared lives with his wife and children in North Carolina where, quite by accident, they became backyard chicken farmers.
