Some lessons are so important that we get opportunities to learn them again and again. In this week's column, Payson Hall attends a project meeting where he relearns an important communication lesson about the meaning of words.
My role as a project consultant draws heavily upon my interpersonal communication skills. While communication improves with attention and practice, I recently rediscovered the importance of establishing a common vocabulary and the danger of assuming that one already exists.
I was consulting on a project that involved the implementation of a sophisticated middleware layer to support an application migration from a mainframe to a Web-server environment. The development portion of the project was a bit behind schedule, and I was concerned about potential performance issues with unproven aspects of the architecture being implemented.
The system hadn't been integrated yet, let alone performance tested, but I didn't want to put the team on the spot by calling this out directly in the project meeting. Instead, I tried to give them an opportunity to naturally explain the situation by asking, "When do you currently envision performance testing will occur, and how long do you anticipate it will take?"
The architect's response stunned me. "We've done performance testing, and everything went fine," he said.
I wasn't trying to trap him, but his answer made me very uncomfortable. I knew that key parts of the system were not yet developed and were therefore unavailable for testing. I also knew that there were unresolved questions about the system's performance targets. He couldn't be telling the truth!
I probed more specifically about how testing was progressing without key bits of infrastructure being complete. I was told "full" performance testing couldn't be done until those parts of the system were available, but in the interim several team members had tested the throughput of some key components independently using drivers on their workstations and the components all seemed to perform "reasonably well."
What surprised me during his explanation was that the architect seemed to be offering this clarification in good faith, though I could easily have interpreted what I was hearing as a very misleading, weasel-like answer to my question, if not an outright lie. It sounded to me like he was describing unit testing or, more generously, limited integration testing. I didn't want to quibble with the architect in public in what might be mistaken for an attempt to challenge him or undermine his credibility, so I decided to follow up offline.
I was a bit gruff when I approached my friend the team leader over coffee later. "What the hell was he talking about?" I asked. "We both know that there hasn't been any performance testing on this project!"
"Depends upon your definition of 'performance testing'," she answered earnestly. "I don't think he was lying to you. I don't think people in this organization mean the same thing that you do when they talk about 'performance testing'."
She was right. I had forgotten that this group had no experience building applications as sophisticated as the one this project was attempting. While this organization had experience developing and deploying technical solutions, that experience was limited to building small, standalone applications and deploying well-established technologies. It was not only possible but likely that "performance testing" had historically been used in this organization to describe a very different activity than it describes in software engineering organizations that routinely build complex systems. The fact that my definition of performance testing differed from the team's definition didn't mean they were misleading me. It didn't even necessarily mean they were wrong. It meant we were not operating with a common vocabulary.
Achieving a shared objective, particularly a complex one, requires that people agree on the definitions of key terms related to achieving that objective. The next time you cannot believe what someone is telling you, pause before assuming deception; consider the possibility that the two of you are not operating with a common vocabulary.