Integrated Tools Enhance Distributed Development

When I look at the prospect of a distributed development effort, it scares me. So much depends on having the right people and good communicators, all in the right places. It also depends on successfully merging cultures. Yet more and more distributed development is taking place.

So how do you increase your chances for success? I'd like to focus on four key areas:

  • Process and management
  • Communication and meetings
  • Reviews and approvals
  • Tools and data repository

Why only these areas? Because I feel the rest of it is not a whole lot different than for a single site. Ideally, the goal should be to make these four areas behave as if the distributed development were being done in a single site.

Process and Management
Management deals with process, and having managers who are not territorial and who communicate easily is key. Having effective, well-defined processes matters even more. Good management happens when good processes are defined and followed. The difference between a poor manager and a great one is far smaller when there are good processes and good tools to support them.

Processes may be tailored to time zone differences, but they should generally be universal.  They should apply across all sites involved in the development. Universal processes make management easier.

Ideally, your process will work equally well for a single site as for distributed development. A good tool will help you to support and enforce your process. If your process is defined by repository data, you should be able to update your process at all sites at the same time through a normal data update transaction, depending on the type of repository solution you have in mind.
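
To make that concrete, here is a minimal sketch of the idea, using an in-memory SQLite table as a stand-in for the repository. The table, field, and process names are invented for illustration and are not taken from any particular CM tool:

    # Hypothetical sketch: a process definition stored as repository data,
    # so that one update transaction changes the process everywhere the
    # repository is visible (or replicated).
    import json
    import sqlite3

    conn = sqlite3.connect(":memory:")  # stand-in for the shared/replicated repository
    conn.execute("CREATE TABLE process (name TEXT PRIMARY KEY, definition TEXT)")

    # Initial process definition, expressed as data rather than code.
    review_process = {"states": ["open", "in-review", "approved", "closed"],
                      "required_approvals": 1}
    conn.execute("INSERT INTO process VALUES (?, ?)",
                 ("code-review", json.dumps(review_process)))
    conn.commit()

    # Tightening the process later is just another data update transaction;
    # a replicated repository would carry it to every site automatically.
    review_process["required_approvals"] = 2
    conn.execute("UPDATE process SET definition = ? WHERE name = ?",
                 (json.dumps(review_process), "code-review"))
    conn.commit()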

Communication and Meetings
Communication across distance is essential, even if it's just to put a voice to a name. If you're doing distributed development these days, you will certainly want some video-conferencing capability, even if it's just from your desktop. Putting a face to a name is better than just a voice. It's important to hear someone's tone, but watching someone's reaction is also a huge part of communication. If you don't believe me, ask your spouse. For this reason, I'm really surprised video phones never caught on - maybe they will after the VoIP revolution.

Good telecommunications tools are a big help, but your repository and process tools can also help immensely with communication. If all of your data and your processes are defined through your tools, your communications and meetings can focus on non-compliances and variations. Reports and status information are available before the formal communication begins. Watch your tool selection carefully - it should be the hub of your communications. Track meetings, or store video and messenger clips directly in your tool. Being absent from a meeting is a lot worse across distance, so do all you can to let your participants time-shift their participation.

Reviews and Approvals
Reviews and approvals are a special form of communication. Especially where there is a significant time zone shift, your tools must provide help here. To-do lists, electronic approvals, document repositories, and change-package-based code reviews help ensure that a review can be done as necessary - even without holding a meeting. Written reviews are always recommended: they are more objective, leave an audit trail, and provide a checklist for post-review actions. Even better, they work well within the constraints of distributed development. When a change-package-based delta can be generated from your CM tool, a single change identifier can be used to identify the code review task, and code reviews can be done on-line by anyone at any time.
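
As an illustration only, here is a rough sketch of that change-package idea in Python. The tiny in-memory "repository" and the change identifier are invented; a real CM tool would supply the before-and-after revisions for each file in the change package:

    # Illustrative sketch: generating a reviewable delta from a single
    # change identifier. The in-memory "repository" below is invented.
    import difflib

    repository = {
        "change-1042": {
            "src/parser.c": (["int parse(void) {\n", "  return 0;\n", "}\n"],
                             ["int parse(void) {\n", "  return validate();\n", "}\n"]),
        }
    }

    def review_delta(change_id):
        """Build a unified diff covering every file touched by one change package."""
        chunks = []
        for path, (before, after) in repository[change_id].items():
            chunks.extend(difflib.unified_diff(before, after,
                                               fromfile=path + " (before)",
                                               tofile=path + " (after)"))
        return "".join(chunks)

    print(review_delta("change-1042"))  # one identifier -> one reviewable delta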

Good tools should not eliminate oral communication. Some things will always have to be discussed, even if it’s just to clear up the misinterpretations read into the tone of a less objective review. Significant issues, or issues that go back and forth a few times, should be resolved through a meeting or tele-conference.

Tools and Data Repository
Tools interact with the data repository (or repositories). Looking at the previous three areas, it is clear that tools play a critical role in distributed development. The key, from my perspective, is ensuring that all sites, and the tools at each site, have access to all of the data, not just a portion. Data, along with good tools, is the key to making good decisions. OK, good judgment and experience may play a part as well.

There are three basic models for distributed development when it comes to data:

1. Central repository with distributed access, through the web or otherwise
2. Distributed repository with appropriate pieces at appropriate sites
3. Replicated repository with local access at each site

I strongly recommend the third model, but if your tool sets require a number of different repositories, the recommendation is less strong: it's not easy, and it's risky, to build distributed solutions on tools that don't share a common repository.

A central repository has its place, for example, if there are only a few users outside the main site. Solutions involving a thin client (e.g., remote desktop) with workspaces hosted at the main site can work reasonably well. But move the workspace (or the tools) out to the remote site and some delay problems appear. Try doing a workspace differences operation against your repository over a 100 Kbps link as opposed to a 100 Mbps LAN. That's not a big problem if your workspace holds only a few megabytes.
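
For a rough feel of the gap, here is a back-of-the-envelope calculation, assuming an ideal link with no protocol overhead and a modest 10 MB workspace:

    # Back-of-the-envelope transfer times for a workspace comparison that has
    # to pull data across the link (ideal throughput, no overhead assumed).
    workspace_bytes = 10 * 1024 * 1024          # a modest 10 MB workspace

    for label, bits_per_second in [("100 Kbps WAN link", 100_000),
                                   ("100 Mbps LAN", 100_000_000)]:
        seconds = workspace_bytes * 8 / bits_per_second
        print(f"{label}: {seconds:,.1f} s")
    # Roughly 14 minutes versus under a second for the same operation.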

A distributed repository has applications where it is necessary to keep data away from some of the sites. This is often the case in a contractor/sub-contractor model, where the sub-contractor is only supposed to see a specific piece of the puzzle. But it's difficult to decide how to partition the data. What data goes where? How often is it synchronized back to the main site? How do I capture a consistent backup? More often, though, we're dealing with organizations that want to share information across sites. Each site needs to make timely decisions, even when the other sites have gone to sleep for the night. In some organizations, the user's workspace is on his or her laptop, and the user moves from site to site.

A replicated repository strategy meets these needs. The repository looks the same from any site, and the tools work the same at every site. At a minimum, a master site (or a quorum of sites) is required to sequence transactions so that all sites process them in a consistent order, with the same results.
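
Here is a minimal sketch of that sequencing idea, with invented class and site names. The point is only that the master assigns a global order, while every site applies transactions in that order and so converges to the same state:

    # Minimal sketch: the master (or quorum) only assigns a global order;
    # each site applies transactions in that order, even if they arrive
    # out of order, so every site ends up with the same repository state.
    import itertools

    class Master:
        def __init__(self):
            self._seq = itertools.count(1)

        def sequence(self, transaction):
            return (next(self._seq), transaction)   # stamp, then distribute to sites

    class Site:
        def __init__(self, name):
            self.name, self.next_seq, self.pending, self.state = name, 1, {}, []

        def receive(self, stamped):
            seq, txn = stamped
            self.pending[seq] = txn
            # Apply strictly in sequence order.
            while self.next_seq in self.pending:
                self.state.append(self.pending.pop(self.next_seq))
                self.next_seq += 1

    master = Master()
    sites = [Site("boston"), Site("bangalore")]
    for txn in ["create change-1042", "check in parser.c", "promote build 7"]:
        stamped = master.sequence(txn)
        for site in sites:
            site.receive(stamped)

    assert sites[0].state == sites[1].state   # every site converges to the same order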

A replicated repository has other benefits as well. Each site acts as a backup of all of the other sites. This also forms an important part of any disaster recovery planning. And in the case of a disk crash, or any other interruption of service to the local repository, it's easy enough for a user to connect to a remote repository, even if at a slower speed.

Choice of Tools
When it comes to distributed development, there's too much focus on the version control repository. That's important, sure, but it doesn't contain the key information. It's the management information, not the code base, that's going to make distributed development work: the process and the communication channels. A version control repository does little to help here.

Management data spans from requirements through test data, from documents through customer tracking data. It's not easy to cover that span if you're planning on gluing a bunch of tools together, especially if they rely on different repositories.

Editors, compilers, and the other personal tools are not so much an issue with distributed development. The key reason is that they generally don't share corporate data; instead, they work within the workspace or sandbox.

Other Cautions
So what else is there to look out for?

Well, first of all, is there a lot of administration involved in setting up a new site? If so, something can and generally will go wrong, and coordinating complex administration across long distances is tricky. Not only do you want low administration, you also want flexible timing. Can I create a synchronization point on day 1 and not deploy the site until day 14, when I'm physically there? Is it easy to add a new site a few weeks or even months down the road without having to create a new synchronization point?
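
To illustrate what "create the synchronization point now, deploy later" might look like, here is a hypothetical sketch based on a snapshot plus a transaction log. The function names and data structures are invented, not any vendor's API:

    # Sketch (invented names): record a snapshot and the log position on day 1,
    # then bring a new site up to date on day 14 by loading the snapshot and
    # replaying everything logged since.
    import copy

    transaction_log = []          # every transaction, in the global order
    repository_state = []

    def apply(txn, state):
        state.append(txn)

    def commit(txn):
        transaction_log.append(txn)
        apply(txn, repository_state)

    def create_sync_point():
        return {"log_position": len(transaction_log),
                "snapshot": copy.deepcopy(repository_state)}

    def deploy_site(sync_point):
        state = copy.deepcopy(sync_point["snapshot"])
        for txn in transaction_log[sync_point["log_position"]:]:  # catch up
            apply(txn, state)
        return state

    commit("baseline release 1.0")
    day1 = create_sync_point()                   # day 1: cut the sync point
    commit("fix defect 311")                     # work continues for two weeks
    commit("add feature 27")
    new_site = deploy_site(day1)                 # day 14: site comes up current
    assert new_site == repository_state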

Next, does the solution continuously verify that the data is synchronized? What happens when your repository or tools are upgraded? If someone makes an upgrade mistake at one site, the data might drift out of sync. How soon will you find out? And even more important, how easy is it to recover? Recovery is likely easier if you find out sooner, and because you're dealing with multiple sites, an automatic recovery feature is a definite bonus.
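
One way such verification could work, sketched here with invented data structures, is for each site to publish a digest of its repository state after every transaction and compare it against the master's:

    # Sketch of continuous synchronization checking: every site publishes a
    # digest of its repository state; any mismatch flags drift immediately
    # instead of weeks later. The structures here are purely illustrative.
    import hashlib
    import json

    def digest(state):
        return hashlib.sha256(json.dumps(state, sort_keys=True).encode()).hexdigest()

    master_state = {"parser.c": "rev 3", "main.c": "rev 7"}
    site_state   = {"parser.c": "rev 3", "main.c": "rev 6"}   # a missed update

    if digest(site_state) != digest(master_state):
        print("site out of sync - trigger recovery / resend missing transactions")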

Also consider whether your solution requires that each update or transaction be processed at the master before it reaches the slave sites. If so, a large transaction can impose significant delays on the slave sites. The master should only have to order the transactions, and perhaps help ensure they are distributed to all sites for asynchronous execution at each site.

Another factor: How difficult is it to upgrade your repository and tools? If new release upgrades are painful, then a replicated data solution will have painful periods. This is complicated further if your solution is a composite of tools, even if they run on a single repository.

The Payoff
If you identify a single-repository, end-to-end integrated solution with low administration and automatic problem detection and correction, what you'll notice is that distributed development is easier than you thought. Yes, you still have to pay attention to management and processes, but your solution gives you significant support here. Yes, you still have to ensure you have good communication, but again your solution will give you great support, especially if it supports the review process and electronic approval.

If you have reasonable bandwidth between sites, you may also find that you can connect to a remote site without realizing that you're not connected locally.

A warning, though: don't take technology at face value, especially when it's the face of a salesman. Bring it in and evaluate your distributed development scenario on your local network. If that is a difficult process in itself, then beware. Test out some real use cases, from all sites. Pay attention to administrative needs, to scalability, and to reliability. Force errors and check out the recovery procedures. Load in 10,000 files and see what happens.

There are always going to be issues:  very large files across slow links, network outages, etc. But in my experience, distributed development with a single repository and an integrated management suite can be pleasant, and is well within the reach of today's technology.

The measure of success is always going to be: How different is this from a single site scenario?

If you need a recommendation, don't hesitate to ask. I've been using this technology for years and would be more than willing to discuss it with you. Even though I'm biased, that doesn't mean I can't be objective. I'd also welcome your own recommendations and comments as replies to this article.

P.S. Have you taken the time to look at the Spotlight Reviews?
