Automating CM or Application Lifecycle Management

Summary:
Automation is at the heart of excellence in the field of configuration management. Unless a wider definition of CM is used - that of application lifecycle management - automation will fall far short of the mark. As we enter this still-young millennium and look back at the progress of CM, it's clear that the industry has, for the most part, been creating tools to fight fires and to avoid them. Spot solutions originally dealing with version control, change management, problem/issue/defect tracking, etc., have given way to more integration. Yet as a whole, the industry still falls well short of what CM, or ALM, automation requires.

What does automation mean? To some, it's being able to do nightly compiles/builds automatically so that when the team arrives in the morning all is ready. To others, it's the ability to take the test results and automatically verify that all of the requirements have been met. But automation has to go far beyond these examples, useful as they are.

The concept of a unified process is important, so long as it is one that can evolve easily and be molded to the requirements of the organization. A rigid UP, or one that requires the organization to line up with its proclamations, can actually lead to less automation. Process is important because it basically tells us who is responsible for what, when, and how the responsibilities are to be addressed to ensure that requirements are met. Automation requires well-defined processes. It does not require rigid processes. Processes need to be fluid in order to conform to a changing environment, changing standards and changing technology.

There are a few factors which have traditionally inhibited automation:

  • The approach to integration of lifecycle applications
  • The models used for process automation
  • The lack of capability to rapidly evolve the automation of process
  • The complexity of views and terminology

Let me give some examples to clarify.

Integrating Lifecycle Applications
The traditional approach to integrating lifecycle applications was to perform three basic functions:

  • Identify the means that each application provides for receiving and processing external requests
  • Identify the means that each application provides for requesting external information or changes to the information
  • Identify the set of data in each application that needed to be exchanged with other applications

When this was completed, the level of integration capability could be determined and assessed against the requirements of the integrated applications. The key flaws here are that the assessment is always subjective, usually tainted by exposure to other solutions, and that it is done at a single point in time, so it cannot easily take evolving process requirements into account.

This traditional approach to integration fits into the 2nd generation of CM capabilities. What is needed in a 3rd generation of CM capabilities is really nothing more than common middleware, and an underlying common repository, that can serve both applications. If either application does not sit on the same middleware and repository, integration will be limited. Why? Because specifications will have to be developed for what the applications communicate and how, and because capabilities in one application won't match those in the other.

Middleware is a big component. It has to have all of the bases covered: how process is described and referenced; data query, search and update layers that go beyond basic SQL capabilities; GUI specification and implementation capabilities so that users don't have to learn two applications (just two application processes); and so on. So, for example, when I say that my ALM solution supports multiple-site operation, it doesn't mean that one application supports it sort of and the other better, or differently. It means that my solution supports multiple-site operation: I enable it and it works. It recovers from network outages in the same way for all applications, without complex re-training of my admin staff for each application.
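
To make the middleware idea concrete, here is a minimal sketch in Python of what a common layer might look like, with every lifecycle application built on the same query, process and multi-site services. The names and interfaces are purely illustrative assumptions, not any vendor's actual API.

    # A minimal sketch of "common middleware": every lifecycle application sits on
    # the same services, so behaviors such as querying and multiple-site operation
    # are implemented once and inherited by all of them. Names are illustrative.
    from abc import ABC, abstractmethod


    class ALMMiddleware(ABC):
        """Common services shared by every application in the suite."""

        @abstractmethod
        def query(self, object_type: str, **criteria) -> list[dict]:
            """Repository-wide query layer, shared by all applications."""

        @abstractmethod
        def transition(self, object_id: str, new_state: str, user: str) -> None:
            """Process engine: apply a state transition under the common rules."""

        @abstractmethod
        def replicate(self) -> None:
            """Multiple-site synchronization, including recovery from outages."""


    class ProblemTracking:
        """An application built on the middleware rather than beside it."""

        def __init__(self, mw: ALMMiddleware):
            self.mw = mw

        def outstanding_problems(self, product: str, release: str) -> list[dict]:
            # The query behaves identically for every application on this middleware.
            return self.mw.query("problem", product=product,
                                 release=release, status="open")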

Models Used for Process Automation
Many believe that process modeling basically ends once you've selected a tool that can express processes cleanly and clearly. In fact, it's not even sufficient that such a tool can export process data in a form the application can interpret. There are some basic needs.

Processes are fluid, ever changing to adapt to newly uncovered requirements and to better ways of working. It's not sufficient that I can model a process. The process needs to evolve easily. I may even need several revisions of a process instantiated at the same time (e.g., the process used for release 1 vs. release 2 vs. release 3).
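
As a small illustration, a process engine might keep each revision of the process keyed by the development stream that uses it, so that release 1 and release 2 can run under different revisions at the same time. The Python sketch below is hypothetical; the names are mine, not those of any particular tool.

    # Hypothetical sketch: each development stream runs under the process revision
    # it was created with, so several revisions can be live at the same time.
    PROCESS_REVISIONS = {
        "release1": {"problem_states": ["open", "fixed", "closed"]},
        "release2": {"problem_states": ["open", "in_review", "fixed",
                                        "verified", "closed"]},
    }


    def allowed_states(stream: str) -> list[str]:
        # Release 1 keeps its original states; release 2 uses the evolved process.
        return PROCESS_REVISIONS[stream]["problem_states"]


    print(allowed_states("release1"))   # ['open', 'fixed', 'closed']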

Smart but otherwise ordinary process managers will be evolving the process, not just a single specialist. Most of them will not want to learn yet another application just so that they can change the process. Process tools must be intuitive.

All process data needs to be part of the solution. It needs to be in the repository. It needs to change in the same way as other data changes. In a multiple site solution, a change to the process at one site changes it globally. People can migrate from one site to another without having to re-learn how applications work.

So the process models need to include these concepts of fluidity, simplicity and ubiquity. When this happens, processes really do evolve successfully. The other thing to remember about process models is that a state-transition diagram does not a process model make. It is just one aspect.

Another component cuts across different applications and deals with role definition. The definition of roles extends down to permissions for state transitions, user interface components and data access. This part of the model is crucial to ease of use, security and successful workflow. It determines which user interface the user sees, which options on dialog boxes, which menu items, and even which menus. It is recommended that one of the roles even deal with the process itself. Process models should define specific roles, one or more of which can be assigned to each user of the application suite.

Yet another component focuses more on the project management application: the task-based process model. It works with the state-transition component and the roles component to spawn tasks into the appropriate in-baskets. Tasks are typically created dynamically as a result of some condition being met. A system integration test task may be spawned based on the creation of a new build. Some process models will allow for recurring tasks, rather than spawning new ones all the time. For example, the system integration test task may be specific to developing a release and may roll in and out of the test team's in-basket as new builds are created and subsequently integration tested.
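
The sketch below, in Python with purely illustrative names, shows how these two pieces might fit together: a role table that gates state transitions and menus, and a task-based component that spawns an integration-test task when a new build appears. It is a sketch of the concept, not a real product's process engine.

    # Illustrative sketch only: a role table gating transitions and menus, and a
    # task-based component that spawns work when a condition is met.
    ROLES = {
        "developer": {"transitions": {"open->fixed"}, "menus": {"checkin", "report"}},
        "tester":    {"transitions": {"fixed->verified"}, "menus": {"test", "report"}},
    }


    def may_transition(role: str, transition: str) -> bool:
        # The role definition decides which state transitions a user may perform.
        return transition in ROLES.get(role, {}).get("transitions", set())


    def on_new_build(build_id: str, in_baskets: dict) -> None:
        # Task-based component: a new build spawns an integration-test task
        # into the test team's in-basket.
        task = {"type": "integration_test", "build": build_id, "status": "to do"}
        in_baskets.setdefault("test_team", []).append(task)


    baskets: dict = {}
    on_new_build("build_142", baskets)
    print(may_transition("developer", "fixed->verified"))   # False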

Evolving a Process
How a process evolves is not an exact science. What is important though, is that evolution be supported in a controlled fashion. First of all, it's a good idea to split process into two or three levels of definition. Some parts of the process have to be cast in concrete.

For example, one must not be allowed to change source code without properly approved traceability data (e.g., an approved problem report or feature); otherwise, there is no traceability for the source code change. This might be referred to as a rule. Perhaps a policy is whether to use features (i.e., product requirements) or assigned tasks (typically software requirements) as the unit of traceability. Assuming software requirements are spawned from (and traced back to) product requirements, traceability to requirements is covered either way. Perhaps an access capability determines whether or not a function appears in a user's application interface. This might be based more on user maturity or other factors than on hard process rules. So, for example, perhaps only designated senior designers will have the capability to sign off on a peer review.

When discussing changes to a process, classifying each change as a hard rule, a policy or an access capability sets the tone for how critical its implementation is. When a rule is not implemented, there is typically hard data loss. When a policy is implemented, there is typically some data that can be transformed from the non-policy period, or the application can bridge across old and new policy. When an access capability for a particular function is not available, a user typically has to get implicit approval by finding a person who does have the capability to execute the desired function.

Your customization capabilities should be able to keep in lock-step with these levels. Perhaps a rule can only be implemented with special application modification permissions, or perhaps even by changing the application code itself. A policy should be more flexible and easier to implement by the person who owns a particular part of the process: perhaps a project manager, product manager or CM manager. Likely a policy change is implemented through scripting or by modifying some control data. It might also be implemented by adding new options, defaults or fields to the data schema and to forms and dialogs. An access capability might be implemented by changing role assignments, by changing the definition of a role, or by changing the GUI menu scripting so that the functionality appears for the designated person or role.
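
As a rough illustration of the three levels, the Python fragment below shows where each kind of change might live: a hard rule enforced in application code, a policy held as control data, and an access capability expressed as a role assignment. All names and structures are assumptions for the sake of the example.

    # Illustrative only: where each level of customization might live.

    # Hard rule: enforced in application code; changing it means changing the code.
    def check_in(change: dict, files: list) -> None:
        if not change.get("traceability"):   # approved problem report or feature
            raise PermissionError("source changes require approved traceability")
        # ... the actual check-in would proceed here ...

    # Policy: control data (or a small script) owned by the CM or project manager.
    POLICY = {"traceability_unit": "feature"}   # could be switched to "task"

    # Access capability: a role assignment decides whether a function even
    # appears in a given user's interface.
    ROLE_CAPABILITIES = {"senior_designer": {"signoff_peer_review"}}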

Again, if customization is difficult, process evolution will progress slowly or at a great cost. Errors in process implementation or judgment will be more difficult to roll back and, hence, will have a greater impact.

An Example: CM+
In the CM+ architecture, we have rules and we have policy. Some of the rules are hard coded into the CM+ application, while others are scripted or configured. For example, in CM+, a baseline cannot be modified once it is permanently frozen. Our policy is that it is permanently frozen when we say "freeze configuration". However, the hard rule is that it can be re-opened after a freeze configuration operation as long as a subsequent candidate configuration has not yet been opened in the development stream. This allows a change of policy that lets a team re-open and re-freeze baselines if necessary, so long as the next configuration has not yet been opened. Such a change would presumably be accompanied by an additional state for a baseline indicating when it is permanently frozen, and references to the baseline would have to be tentative until the permanently frozen designation was achieved. I don't necessarily recommend such a change in policy, and would argue that it really couldn't be called a baseline until permanently frozen, but the fact that CM+ would allow it is important for corporate processes that might require it. At the same time, the fact that CM+ will not allow a change to a permanently frozen baseline is also a very good thing. It allows the definition of true, immutable baselines.
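
To illustrate the rule just described, here is a small Python sketch (my own illustration, not CM+'s actual implementation): a frozen baseline can be re-opened only while no subsequent candidate configuration has been opened in the stream; after that it is treated as permanently frozen and immutable.

    # Illustration of the rule, not CM+'s actual code: a frozen baseline may be
    # re-opened only while no later candidate configuration has been opened.
    class StreamHistory:
        def __init__(self):
            self.baselines = []   # ordered oldest-to-newest: {"id": ..., "frozen": bool}

        def freeze(self, baseline_id: str) -> None:
            self._find(baseline_id)["frozen"] = True

        def reopen(self, baseline_id: str) -> None:
            baseline = self._find(baseline_id)
            if baseline is not self.baselines[-1]:
                # A subsequent candidate configuration exists: permanently frozen.
                raise PermissionError("baseline is permanently frozen")
            baseline["frozen"] = False

        def _find(self, baseline_id: str) -> dict:
            return next(b for b in self.baselines if b["id"] == baseline_id)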

Promotion of a problem report's status to "fixed" in CM+ when the associated change is checked in is a policy decision. Some shops want an independent review of the change before giving it the designation "fixed". The default, out-of-the-box behavior assumes most problems are fixed by a single change, and so the designation is good. The same default behavior also has a verification state to which the problem is promoted after the fix has been verified. In contrast, a feature referenced by a change is not promoted to "implemented" when the change is checked in, because most features, at least in an agile shop, will require more than one change for implementation.

So what's the difference between these two cases? Why are they not treated the same? The goal is to minimize manual actions and increase data accuracy. If 92% of problems but only 24% of features are implemented by a single change, automatically promoting problems but not features helps achieve that goal. But we can do better, so we take the remaining 8% and say: don't promote the problem to fixed if there's another change against the problem which has not yet been checked in. This gives us another 2% or 3% of data accuracy. Policy decisions often deal with increasing data accuracy. It doesn't mean that we don't need a stricter process to improve it further. But it does mean that we can improve it further with fewer resources. Whether or not we do this automated promotion is a policy decision. In CM+, such a policy decision would typically be implemented as part of a trigger, whether a state transition trigger or a trigger on another type of user action.
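
A check-in trigger implementing this policy might look something like the Python sketch below. It is only an illustration of the logic (promote a problem to "fixed" unless another change against it is still outstanding, and never auto-promote features); it is not CM+'s actual trigger code.

    # Sketch of the trigger logic only (not CM+'s trigger language): promote a
    # problem to "fixed" at check-in unless another change against it is still
    # outstanding; changes traced to features are never auto-promoted.
    def on_checkin(change: dict, all_changes: list[dict], problems: dict) -> None:
        problem_id = change.get("problem")
        if problem_id is None:
            return   # the change is traced to a feature: no automatic promotion
        still_open = any(c.get("problem") == problem_id and c["status"] != "checked_in"
                         for c in all_changes if c is not change)
        if not still_open:
            problems[problem_id]["status"] = "fixed"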

Complexity of Views and Terminology
The final area I'd like to discuss is that of complexity. It's very easy to make a simple process complex. In the 1980s, the term Artificial Intelligence, or AI, came into vogue and was presented as a technology breakthrough. I couldn't see it. It didn't look different from what we had traditionally done. And it wasn't. Apart from neural network technology, there wasn't anything new; it was simply a phrase that caught on well. I didn't like it because it made me think that I was missing something. To me this was an added complexity, until I figured out that AI was an encapsulation of existing techniques under an umbrella - not a breakthrough.

With process it's the same. Don't throw new terms at your audience for no reason. If your corporate culture has always called a problem report a DPAR, continue to do so. Everyone's familiar with the term already. Over the long term you can still gradually adopt industry terminology. But do so carefully.

We have bigger problems in terminology in the CM and ALM worlds. The word baseline is used when it's not immutable. The phrase "check out" is used by some even when you're not checking out (i.e., borrowing, reserving, locking, ...) a file. A product is often referred to as a project. The concepts of task and change package are sometimes confused. Is a delta report the same as a difference report?

So the task is difficult. Use the most accepted terminology in your environment, and lean toward the CM industry's terminology where the industry agrees on a term. Don't invent new terms. For automation to be achieved, everyone must have a common understanding.

Context Views
There's plenty of data in an ALM environment. Automation isn't just performing automatic data transitions. It involves presenting an intelligent set of alternatives in the right order. For example, if I perform an operation that isn't launched from a selected object, say a check-in operation, I want the CM tool to pop up the most likely change, not just an arbitrary list. Context views can help here. An ALM environment must allow you to identify the product and development stream/release/build that you're focused on. When you do, the tool can narrow down the choices for you and present the most recently used, or otherwise most likely, objects as the default.

The goal is to reduce the complexity by setting a context. Within a context, the parallel development world can look like a single stream of development for the most part. When you ask for outstanding problem reports and your view is release 2 of product ABC, you see only those problem reports outstanding for product ABC, release 2. When you look at your source tree, it reflects release 2 of ABC, from both a structural and a revision context perspective.
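
A context view can be thought of as a filter plus a memory of recent activity, as in the hypothetical Python sketch below. The class and field names are assumptions used only to illustrate how a product/stream context narrows queries and supplies a likely default at check-in.

    # Hypothetical context view: a product/stream filter plus recent activity.
    class Context:
        def __init__(self, product: str, stream: str):
            self.product, self.stream = product, stream
            self.recent_changes: list[str] = []   # most recently used first

        def outstanding_problems(self, all_problems: list[dict]) -> list[dict]:
            # Only problems open against this product and stream are shown.
            return [p for p in all_problems
                    if p["product"] == self.product
                    and p["stream"] == self.stream
                    and p["status"] == "open"]

        def default_change_for_checkin(self):
            # Offer the most likely change rather than an arbitrary list.
            return self.recent_changes[0] if self.recent_changes else None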

Summary
Automation is not a simple process in a CM or ALM environment. It takes discipline. It requires throwing away old concepts to move forward. You may have to replace branch-based promotion with change-state-based promotion. You may have to introduce new technology. You have to focus on reducing complexity.

CM can be automated to the extent that only Change Management need be performed. Will that put CM managers out of a job? No. It will allow them to focus on process improvement. It will allow them to focus on pushing the state-of-the-art forward. Don't sell yourself short - start with adequate technology. Raise the bar for your tool integrations. Move to a centralized repository, but also to common middleware. This is where CM is headed. 
