Online Scrum Software Development Blog


Tuesday, September 7, 2010

Scrum in a Fixed-Date Environment

By Paul Given

I recently worked as the project leader (ScrumMaster) on a project trying to transform its ongoing work stream. Historically, our teams were structured very tightly along roles, in typical waterfall fashion. We produced two software releases a year that aligned with major releases of software from partner financial associations. These two releases spanned up to twelve different systems and development teams. The releases had hard dates. Requirements and test data trickled in and changed until thirty days prior to the go-live date. These last-minute changes, along with the difficulty of managing the test environments, were causing tremendous spikes in hours expended at the end of our projects. Something had to change.

Our situation

Our work stream was full of repetition and disconnects that contributed to our time-crunch issues. The project lifecycle usually began when the business systems analyst (BSA) received documents from a partner financial association. The BSA would review these documents with the business customer to create business requirements. The BSA would then review the business requirements with the design leads to create systems requirements.

In turn, the design leads would create both high-level and detailed designs. The developers would subsequently develop and write unit tests according to the detailed design. The design lead would trace the unit tests to the design, and then trace the design to the system requirements. The BSA would trace system requirements to business requirements, and then trace business requirements to the association documents.

Meanwhile, the testing team would write system and end-to-end test cases. The testing team would trace test cases back to system requirements and then to business requirements. Eventually, the developer would turn his code over to testing.

Without fail, two weeks before go live, a tester would find an issue—and no one would know how to follow the tracing to determine the cause of the issue. There would be a mad scramble to fix the problem, hence the spike in hours. Meanwhile, our business customers couldn’t validate results because they had not had any contact with the project since the BSA reviewed requirements with them. All along the way, the project manager hounded everybody for document approvals.

Solution

You can see why we needed to try something different, something leaner. We scrapped our old process and applied Scrum. During the first thirty days of the project we completed the following tasks:

  1. Reviewed documents from the financial associations as a team (two BSAs, seven design leads, three test leads, and three customers).
  2. Developed the system requirements as a team and integrated these requirements into the association documents.
  3. Built a systems testing environment and an end-to-end testing environment.
  4. Using the system requirements (and eliminating the high-level and detailed designs), we defined the necessary changes to system interfaces, built the stubbed/shell version of the systems interfaces, and released them to the system test environment.
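The stubbed/shell interfaces in step 4 can be sketched as follows. This is a minimal illustration, not the project's actual code: the interface, class names, and fields (a hypothetical `PaymentAuthorizationStub` returning canned responses) are assumptions. The point is that the contract is fixed and released to the system test environment early, so other teams can integrate against it while the real implementation is still being built.

```python
# Hypothetical sketch of a "stubbed/shell" system interface. All names and
# fields are illustrative, not from the project described in this post.

class PaymentAuthorizationInterface:
    """Agreed interface contract, released to the system test environment."""

    def authorize(self, account: str, amount_cents: int) -> dict:
        raise NotImplementedError

class PaymentAuthorizationStub(PaymentAuthorizationInterface):
    """Shell version: returns canned responses until the real system lands."""

    def authorize(self, account: str, amount_cents: int) -> dict:
        # A canned approval lets end-to-end tests exercise the call path
        # before the real authorization logic exists.
        return {
            "account": account,
            "amount_cents": amount_cents,
            "status": "APPROVED",
            "stubbed": True,
        }

if __name__ == "__main__":
    stub = PaymentAuthorizationStub()
    print(stub.authorize("ACCT0001", 500)["status"])
```

Once the real implementation is ready, it replaces the stub behind the same interface, so consuming teams need no changes.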

After the first thirty days, we divided the remaining system requirements into four sets of iterations. An iteration was considered “done” when the code was turned over to the systems and end-to-end test environments.

Outcome

The work was finished and systems testing completed forty-five to sixty days prior to the release, at a sustainable pace and with high quality.

At that point, the associations announced a significant interface change. Still using Scrum, the teams handled the interface change, the associated business functions, and the testing in thirty days and were ready to go when test data was available from the associations.

Analysis

The Scrum process we instituted worked well. Several factors contributed to our success: team collaboration, multi-level Scrum, and visual tools.

While the teams were not co-located, we were able to accomplish team collaboration through twice-weekly Scrums between the teams. We let teams manage their own work (some offshore, some in-house, some vendor-supplied changes), and we set aside time for inter-team issues twice a week. Because the business customers were continually asked to validate functions in the end-to-end test environment, they stayed engaged throughout the process, which helped eliminate unnecessary work.

The team collaboration also engaged technical resources in the requirements-gathering process early. This involvement eliminated all of the effort we used to expend tracing requirements to everything else. Because the team understood the goals and needs of the business, it could suggest simpler ways to deliver features. Finally, the impacts of any changes were better understood by all.

Another key factor in our success was multi-level Scrum. Specifically, we defined the interfaces “behind” which each team could work, allowing the teams to proceed independently. This approach helped ease the tension between “decide as late as possible” and “define it and build it as quickly as possible.” Multi-level Scrums also allowed the business to be on call for issues, providing insight without having to be present 100 percent of the time.

The final key factor in our success was our use of Scrum’s visual tools. We used burndown charts to show the feature milestones being produced (with our common definition of “done”). These charts showed a sustainable but focused effort. The test status dashboards were helpful in giving business a good indicator of quality and progress.
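The burndown mechanics behind those charts are simple enough to sketch. The sprint length, committed points, and daily completion figures below are purely illustrative (not data from this project); the chart plots remaining work per day against an ideal straight-line burn to zero.

```python
# Illustrative burndown computation. The numbers are made up for the sketch;
# a real chart would be fed by the team's tracking tool.
sprint_days = 10
committed_points = 40
completed_per_day = [4, 3, 5, 0, 6, 4, 5, 3, 6, 4]  # points finished each day

remaining = committed_points
burndown = [remaining]            # remaining work at the start of each day
for done in completed_per_day:
    remaining -= done
    burndown.append(remaining)

# Ideal line: a straight burn from committed_points down to zero.
ideal = [committed_points - committed_points * d / sprint_days
         for d in range(sprint_days + 1)]

for day, (actual, target) in enumerate(zip(burndown, ideal)):
    status = "behind" if actual > target else "on/ahead"
    print(f"day {day:2d}: remaining={actual:3d} ideal={target:5.1f} ({status})")
```

Comparing the actual line against the ideal line each day is what makes a sustainable-but-focused effort (or a looming end-of-sprint spike) visible at a glance.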

Of course, as in all endeavors, there is still room for improvement. We’d like to move to daily Scrums. We need to find a way to link the offshore and vendor systems into the iterative process, to reduce confusion about who is doing what in which sprint/iteration. We should create even more transparency by having individual teams make greater use of visual tools. Finally, we want to make better use of automated tests. Doing this would allow testing to be repeated easily and avoid time-consuming manual regression testing with each sprint/iteration.
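The kind of automated regression check we have in mind can be sketched as a plain assertion-based test that runs every sprint. The record layout and field names below (`parse_settlement_record`, a fixed-width settlement record) are hypothetical, chosen only to show the shape of such a test.

```python
# A minimal sketch of a repeatable regression check. The interface format
# and all field names are hypothetical, for illustration only.

def parse_settlement_record(line: str) -> dict:
    """Parse a fixed-width settlement record into named fields (illustrative)."""
    return {
        "account": line[0:8].strip(),        # 8-char account id
        "amount_cents": int(line[8:18]),     # 10-digit zero-padded amount
        "currency": line[18:21],             # 3-char ISO currency code
    }

def test_parse_settlement_record():
    record = "ACCT0001" + "0000012345" + "USD"
    parsed = parse_settlement_record(record)
    assert parsed["account"] == "ACCT0001"
    assert parsed["amount_cents"] == 12345
    assert parsed["currency"] == "USD"

if __name__ == "__main__":
    test_parse_settlement_record()
    print("regression check passed")
```

A suite of checks like this, run automatically at the end of each sprint, is what would let the teams skip repeating the same manual regression passes by hand.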

Overall, though, we have found that Scrum worked well—even in a fixed-date, fluid requirement environment like ours.