Joel Robinson wrote: Our team uses 2-week sprints, and so far the pattern has been that more features get implemented than get fully tested and signed off as shippable.
Part of this was a staffing issue, which we've been actively addressing, but even with an equal ratio of programmers to testers it's still a challenge to keep up. It's especially difficult when features aren't testable along the way and only become testable at the end of a sprint.
Some solutions that have been tossed around include:
--status quo but try harder (LOL, yeah, not a good plan)
--add test stories/tasks to later sprints so that QA work starts after the feature is code complete (this is probably what we'll do; experiments so far have been pretty successful)
--run entirely separate development and QA sprints (this solves the timing issue, but I fundamentally don't like having dev and QA separated like this)
One problem I don't have a good answer to is what to do about bug fixes. When we do simultaneous programming and testing in the same sprint, it's usually fairly easy to identify story-blocking bugs that need to be fixed ASAP, and most of these have been manageable without throwing off time estimates too wildly.

However, if we decouple programming and QA, the number of bugs the QA team will find in any given sprint is unknown, so the programmers are hesitant to stop current sprint work to fix bugs from previous sprints for fear of getting off track. On top of that, the mental context-switching between current feature work and bug fixing has an efficiency cost. Of course we can implement policies about the priority of bug fixing vs. current feature work, but I'm looking for insight from others about processes that have worked with less reliance on heavy-handed policies.