By Aaron Ruhnow
I am the technical lead (and now also ScrumMaster) on a development team that spent most of 2006 integrating various agile practices into our daily routine. We had traditionally followed a waterfall process, so at first it was a little difficult to get the team members (and management) to digest some of the new agile methods (Two-week iterations? Developers writing tests? No Gantt charts?)—we truly were stepping through the looking glass.
With all these new practices came a lot of uncertainty. We had new tools—new to us, at least—such as FitNesse and NUnit. We had new practices, including pattern-based design, TDD, and automated builds. We had new processes and even a strange, new rhythm. Many of us were used to being assigned a set of tasks and then spending weeks writing requirements. Some time later we would code them. (We may or may not have been involved in estimating how long these tasks would take to complete.) Making the shift to short iterations was quite a change. Having the developers themselves write up and estimate the tasks in half-day to two-day increments was also a shift, but it was a bit easier to digest because the developers quickly found they could make more accurate estimates this way. It often was difficult to focus on the essentials in the midst of all this newness.
To gain this focus, the team started to use a task board and task cards to track progress within each iteration. However, we often struggled to understand when a task, and then a story, was really done. We had learned of the “done, done, done” mantra (coded, tested, approved by the product owner), but that wasn’t specific enough for us. We needed more guidance to know when we were actually there. For instance, in our daily standups, a typical discussion would go something like this:
Dev 1: “I finished coding the ‘Copy’ function for the story.”
Dev 2: “Did you unit test it?”
Dev 1: “Uh…I think I wrote a few.”
Dev 3: “Are the Fit tests done?”
Dev 1: “Oh yeah, I forgot about Fit.”
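The Fit tests in this exchange are tabular acceptance tests: business-readable tables that a fixture class executes against the code. As a rough illustration only (the table below is the classic division example from the Fit documentation, not one of our team's actual tests), a Fit column-fixture table looks like this, with input columns on the left and expected-output columns marked by trailing parentheses:

```
|eg.Division           |
|numerator|denominator|quotient()|
|10       |2          |5         |
|12.6     |3          |4.2       |
```

Fit runs each row through the fixture, compares the actual result of `quotient()` to the expected value, and colors the cell green or red—which is why "Are the Fit tests done?" is such a concrete question to ask in a standup.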
We needed a better definition—one we could all agree on and understand.
About that time, I attended a Certified ScrumMaster class. During the class, another student mentioned that her team had a list of criteria defining done. Intrigued, I asked if she could email it to me. She told me they didn’t have it written down; it was just understood within the team. (Guess they were already in “Done Nirvana.”) After some more prodding, she did agree to write the criteria down just for me. From her list we created the first draft of our definition of done:
A story is complete when:
- Coded/implemented
- Peer reviewed (pair programming counts as peer review)
- Code is run against current version in source control
- Code is commented in source control and checked in
- Code is commented with VB Commenter on Public/Friend methods
- Story/use case manual test plan updated
- Fit test written (with help of SQA person)
- UML diagram updated
- Unit tests written and passing
- 90 percent code coverage achieved
- Build and package changes are communicated to build master (e.g., introducing a new file)
- Task list hours are updated and task is closed out
- All to-do items in code are completed
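To make the “Unit tests written and passing” item concrete: our team worked in VB.NET with NUnit, but the same checks translate to any xUnit-style framework. Below is a minimal, hypothetical sketch in Python’s built-in `unittest`, imagining the “Copy” function from the standup as a record-duplication routine (the `copy_item` function and its behavior are illustrative assumptions, not our actual code):

```python
import unittest

def copy_item(item):
    """Hypothetical 'Copy' function: duplicate a record, clearing its
    identifier so the copy can be saved as a new row."""
    duplicate = dict(item)
    duplicate["id"] = None
    return duplicate

class CopyItemTests(unittest.TestCase):
    """The kind of checks 'unit tests written and passing' asks for."""

    def test_copy_clears_id(self):
        original = {"id": 42, "name": "widget"}
        self.assertIsNone(copy_item(original)["id"])

    def test_copy_preserves_other_fields(self):
        original = {"id": 42, "name": "widget"}
        self.assertEqual(copy_item(original)["name"], "widget")

    def test_original_is_untouched(self):
        original = {"id": 42, "name": "widget"}
        copy_item(original)
        self.assertEqual(original["id"], 42)

if __name__ == "__main__":
    unittest.main()
```

Small, behavior-focused tests like these are also what makes the “90 percent code coverage” item achievable: each branch of the function under test gets its own assertion, and a coverage tool run over the suite reports which lines were never exercised.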
We changed her list somewhat to emphasize several coding qualities that needed reinforcement within our team. During iteration planning and task generation, we now make separate task cards for steps like writing unit tests (before coding, of course), making Fit tests, and so on.
The list is still posted in our project room, but I guess we, too, have now reached “Done Nirvana,” because the definition of done is now in our heads and, for the most part, built into individual task cards. The list now makes good wallpaper—and serves mainly to look good for managers who happen to enter our project room. I suppose it may also be helpful for newbies.
Maybe our list, as my fellow student’s did, can start your team down the path to “Done Nirvana.” But for now, this story is done.