Thinking about interacting with the customer at the start of the project? Who would argue against that? Well, it depends on what you call it. It also depends on whether you then do it without the benefit of the rest of the project team. Here, Ulrika Park helps us see what an agile approach to thinking about the requirements might look like.
In the beginning of software development and requirements, there was big design up front.
This way of approaching requirements is still the most common way I see organizations work when it comes to deciding what features software projects should deliver.
The concept of big design up front represents both the way requirements analysts (or other people on the client side) try to design every feature up front down to the smallest detail, and the way software and business architects try to design all parts of the architectural or domain model before any developers have started.
In this previous way of working, after a formal handoff to developers when a project starts, developers are expected to execute the pile of design documents and translate them into code, no matter what they discover or misinterpret along the way.
Now, we have agile.
The agile methods, such as Scrum and XP, put a huge focus on removing the problem with big design up front. Instead of organizing roles in a software project to work separately from each other at different times, only communicating by one-way documentation, these methods emphasize the necessity for designers (business analysts, interaction designers, domain experts, and architects), developers, and testers to work together—at the same time, within the same team, in the same project.
This way of addressing design work, which helps to avoid misconceptions and raise productivity, is nothing new in other product and service development businesses, so IT seems to be catching up with more mature industries.
In projects that claim to be agile, requirements are expected to be a collection of lightly expressed user stories gathered in a backlog for the team and customer to iteratively work on, one story at a time.
While agile may have taken away the “evil phenomenon” of big design up front, I think we have gone from one extreme to another. Instead of doing all design up front, people responsible for agile projects are now doing no design up front, or rather no thinking up front.
A study cited by Craig Larman in 2005 showed that only about 20 percent of requested features developed in software projects are actually used. Big design up front was part of the reason. Trying to figure out all details up front either makes designers, business analysts, and architects overdo it and define a lot of functions no one actually wants, or makes the development team overdo it because they don’t know the scope or priority.
So, by removing big design up front and instead creating an agile backlog with loosely defined requirements and design from the start, should you get better results?
I strongly believe, from what I’ve seen in different agile projects during my fifteen years in the software industry, that this is not true. We do get better results by scoping how much is delivered and having access to a customer representative who can prioritize. But by removing all thinking up front, we are still building features that are not used—features that are more or less randomly selected by the product owner or client and are not necessarily of much value to the user. Project teams don’t understand what problem they’re working to solve; they’re just enabled to solve it quicker.
My recommendation to all software project teams out there is that whether you use an agile method or not, start doing more thinking up front.
This is different from big design up front.
What we should do, instead of trying to do all design up front or removing all thinking up front, is have a well-founded understanding of the problem our software product or service is supposed to solve before trying to solve it. I have had good experience from some assignments where we actually did this.
I’ll give you a short summary of one agile prestudy I was a part of before any money was invested in the actual technology or development work.
In this instance, the client’s department was about to do either a costly upgrade of a tool for PDF publishing or replace it with another tool (which is always costly, right?). The client needed to know what option he should choose in the short term while also getting some idea of a long-term solution.
Over a period of about three months, I did my best to help the client focus on understanding the problem rather than documenting all possible function specifications. Here are the steps I took to get results.
1. Study the History
Conduct interviews with users and current stakeholders to answer the following questions.
- What problem did the current publishing tool solve?
- Why was this tool chosen?
- When was it installed, and by whom? Is that person still at the company?
- Why do we really need to upgrade now? Why can’t we go on with the current model?
Then, visualize the current solution with present user scenarios.
2. Statistics and Facts
Gather statistics from web, BI, and CRM analysts for more data.
- Who is currently using the information the tool produces?
- What do we know about them? How often do they use it? What target group do they (mostly) belong to? Is this a prioritized target group? How many new registrants do we have?
- Why do they use this tool?
- What other media are available for them to get the same information?
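The questions above can often be answered with a very small amount of analysis once you have a usage log from your web or BI analysts. As a minimal sketch, here is how you might quantify who is using the tool’s output and how often; the log format, field names, and target groups are invented for illustration, not taken from the prestudy described here:

```python
# Hypothetical sketch: summarizing who uses the tool's output and how often,
# given a simple usage log. In practice these records would come from
# web, BI, or CRM analytics; the field names here are made up.
from collections import Counter

# Each record: (user_id, target_group) for one access of the published content.
usage_log = [
    ("u1", "subscriber"), ("u2", "internal"), ("u1", "subscriber"),
    ("u3", "subscriber"), ("u4", "prospect"), ("u1", "subscriber"),
]

def usage_summary(log):
    """Return accesses per target group and per-user access counts."""
    by_group = Counter(group for _, group in log)
    by_user = Counter(user for user, _ in log)
    return by_group, by_user

by_group, by_user = usage_summary(usage_log)
total = sum(by_group.values())
for group, count in by_group.most_common():
    print(f"{group}: {count}/{total} accesses ({100 * count / total:.0f}%)")
```

Even a rough breakdown like this tells you whether the tool mainly serves a prioritized target group or an audience nobody is actually trying to reach.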
3. Try It Yourself
One way of understanding an unfamiliar tool is to use it yourself.
- Register as a customer or reader to get the experience your users get.
- Try out different user scenarios and user roles.
4. Go See!
In all of the requirement work I have done during my career in software, this is the one that always gives the most results when it comes to understanding a problem.
Go for study visits at the “genba,” a Japanese term meaning “the real place”—where all the action happens. In our case, one genba was where the publishers worked. Being at their desks, sitting next to them, and observing how they used the current publishing tool provided a lot of context about the problem, such as answers to the following questions:
- What challenges do the users have using the tool today?
- What do they seem to appreciate about the current solution and tool?
- Who are they actually interacting with?
- What other tools are they using to produce the information? How are they using them?
5. Environmental Surroundings Analysis
Gather more facts to build a picture of the problem domain. This can be done extensively through process analysis, domain analysis, and so on. In this prestudy, I did the following basic items, which I also recommend to you:
- Do concept modeling with a diversified group of stakeholders and users.
- Visit seminars in the market domain.
- Get demos of similar tools and the upgrades of your current tool to get a broad overview. (Remember, a demo gives very shallow insight into how a tool works.)
- Do some marketing analysis with the help of marketers. What are the trends when it comes to publishing this kind of content?
Results from the Problem Study
By focusing on understanding the problem, we concluded in our prestudy that we should not do the expensive upgrade or replace one tool with another. We realized that the problem had changed since the publishing tool was first installed many years ago. The original purpose of the project was to replace or upgrade a tool for publishing PDFs with a specific type of content. By focusing on understanding the problem rather than the tool to be replaced, we gained valuable insight: a possible solution to the problem might not involve much PDF publishing at all. The short-term solution we chose was to implement a much simpler administrative tool rather than buying the upgrade or replacing the current tool with another complex tool capable of publishing big PDFs.
The point of doing all the fact gathering is to give you a direction toward a future solution.
By focusing 80 percent of your efforts, time, and money on describing how the problem is solved today, with the current application, and only 20 percent on describing the future solution, you will have a good chance of moving in the right direction. When you can see and clearly describe the system, solution, or business process you have right now, insights for the future solution will emerge. Many business analysts and requirements people do exactly the opposite. I did, too, until I experienced the effect of putting my effort into actually describing the present.
Pia Gideon, a board member of several companies, once described this truth to me: “Just bring out a map of your city. You want to go to point B. But if you have no idea where your starting point is, or cannot spell it out clearly—where point A is on the map—how can you ever make a reality-based plan for getting to B and get others to understand it?”
Often when it comes to systems solutions—especially more complicated ones—point A on the map is very unclear. We often don’t know exactly what our current applications do or even what problem they were originally created or bought to solve. If we, as business and requirements people, start focusing our efforts on identifying where we are today and knowing what the current solution does for whom, when, how, and why, then my experience tells me you will be more likely to target the product features that will actually be used.
The payoff for that effort is much higher than trying to analyze the future solution in detail or just jumping into creating a rough backlog of user stories.
It takes a lot of courage to change the way we work with requirements and design. I have often been met with skepticism and a fear of wasting time when focusing on the things mentioned in this article.
The bottom line for everyone who works with software development, whether as a client, business analyst, developer, or tester, is that you must do some more thinking up front to be able to solve the right problems—for customers, end users, and the business.
Reference: Craig Larman, Agile and Iterative Development, 2005.