Lisa Crispin explains the concept of "group hugs," in which the whole team, or a subset of it, joins in for testing. Consider group hugs if you need to explore how your software behaves with concurrent users.
When our team is preparing a major new release, we sometimes organize “group hugs”: collaborative testing sessions. Sometimes only testing team members are involved, sometimes the entire product team joins in, and sometimes it’s something in between; regardless of who joins, the exercise is always useful.
Concurrency Group Hug: Testers Only
Our newest set of features is about real-time updates, in which one person can see the updates as another types them. With a new feature to automatically save some changes, testing concurrent changes was a must. We’d tested quite a lot individually, and our team has been “dogfooding” the feature in production (it’s not turned on for anyone else). Only we testers were involved in this group hug, time-boxed to an hour.
We prepared some test charters and scenarios in advance (mostly, I must confess, by my teammates). These usually started with a persona doing something our users really do. For example, we envisioned designers working on a new feature, each adding two file attachments at the same time. Or we envisioned team members in an iteration-planning meeting, simultaneously updating a story description. The scenarios also captured worst-case situations: I am editing a story while someone else deletes it.
We each chose a different browser and started self-organizing. Sometimes all three of us coordinated to act out a scenario, and sometimes a pair did one while the third person pursued an interesting avenue. There was plenty of room for creativity. I used two machines with two different users to add to the concurrency angle. We tested with shortcut keys and tried other “unexpected” actions.
We typed notes on the bugs we found and other observations in a shared Google document. Seeing a bug that one person found often inspired me to think up another test to try. Some bugs we found weren’t related to the theme we were testing, but hey, it’s always a nice bonus.
Here’s a sampling of issues that would have been much harder to find testing individually:
- Changing an epic’s label while someone else has the epic open: the other user sees the new label, but the user who made the change sees an error, and the label goes blank.
- Starting or moving a story while someone else is updating its description: the description gets overwritten and its history disappears.
One interesting discovery was that we couldn’t reliably reproduce some problems, which pointed to timing issues. This was an area for us to explore more later to see if there was an issue with how the feature was implemented or if we just needed better ways to test.
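The overwriting bug above is a classic “lost update” race, and races like it are exactly the timing issues that are hard to reproduce by hand. Here is a minimal sketch of the pattern, using a hypothetical in-memory store rather than our product’s real API, that shows how two concurrent editors who read the same base text can silently clobber each other’s changes:

```python
# Hypothetical stand-in for a story store with NO concurrency control.
# This is an illustration of the lost-update pattern, not our product's code.

class StoryStore:
    """Naive store: last write wins, with no conflict detection."""

    def __init__(self, description):
        self.description = description
        self.history = [description]

    def read(self):
        return self.description

    def write(self, new_text):
        # A safer store would compare the editor's base text against the
        # current value and reject stale writes (optimistic locking).
        # This one silently overwrites whatever is there.
        self.description = new_text
        self.history.append(new_text)


store = StoryStore("original")

# Interleave two edits the way two concurrent users might:
base_a = store.read()            # user A reads the description
base_b = store.read()            # user B reads it before A's write lands
store.write(base_a + " +A")      # A saves their edit
store.write(base_b + " +B")      # B saves, unknowingly discarding A's edit

print(store.description)         # A's "+A" edit has been lost
```

The interleaving here is forced to be deterministic for clarity; in the real feature the same read-read-write-write ordering only happens when the timing lines up, which is why the bugs reproduced unreliably.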
The group hug confirmed our suspicions that the feature wasn’t ready for prime time. We found several significant bugs and had feedback about design, such as when certain buttons should appear or change color. Our team needed to do some redesign, coding changes, and more exploring before we could consider releasing the feature for beta test.
During the group hug, we realized we still had questions about the desired behavior of the autosave feature. This led to further discussions within the whole team. Within a few days, design improvements and development stories were underway. We’ll plan another group hug, maybe involving more of the team, when those are complete.
The iOS Group Hug
Recently we released some new features to our iOS app. We asked the team for volunteers for a group hug. Developers, testers, and marketing folks joined in. We used a shared Google doc for the session, noting which device and version each participant was using, along with problems we already knew about.
Our team is geographically distributed, so we used a video-conference meeting to communicate during the testing. People in each of our office locations gathered in one room, while remote people joined individually. We find it’s helpful to be able to talk to each other. For example, if more than one person finds the same bug, we can avoid duplication in reporting it. Also, as we talk through what we’ve tried, it gives other team members ideas for interesting tests.
During the group hug, we found some new bugs as well as some usability issues.
But generally, the feedback was positive, and we were able to release within a few days.
Involving multiple team members in one testing session is expensive. We’ve only needed to do it for major, risky new features where concurrency is critical. Generally, one group hug is enough to provide necessary feedback.
The Place for Group Hugs
We build quality into our product with test-driven development, acceptance test-driven development, and constant pairing and collaboration. We have multiple suites of automated regression tests providing continual feedback in our CI. And we spend lots of time doing exploratory testing on each new feature and release. The group hugs are a quick way to get information that we can’t get in our normal process.
Consider group hugs if you need to explore how your software behaves with concurrent users. Use them judiciously. Add only as much structure as you need. Sharing a document for everyone to record their results is helpful, but in some cases it’s too heavyweight. The same goes for preparing charters in advance. You’ll learn from each one, so you can improve your next group testing session and maximize its value.