JavaRanch » Java Forums » Engineering » Agile and Other Processes

How to incorporate QA into an agile project?

Joel Robinson
Greenhorn

Joined: Jul 02, 2010
Posts: 2
I'd like to get some real-world examples, both successful and not, of how people have incorporated QA into an agile process.

Our team uses 2 week sprints and so far the pattern has been that more features get implemented than get fully tested & signed off as shippable. Part of this was a staffing issue which we've been actively addressing but even with an equal ratio of programmers and testers, it's still a challenge to keep up. This is especially challenging if the features aren't testable along the way & only become testable at the end of a sprint. Some solutions that have been tossed around include:
--status quo but try harder (LOL, yeah, not a good plan)
--add test stories/tasks to later sprints so that QA work starts after the feature is code complete (this is probably what we will do, experiments so far have been pretty successful)
--run entirely separate development and QA sprints (this solves the timing issue but I fundamentally don't like having dev/qa separated like this)

One problem I don't have a good answer to is what to do about bug fixes. If we're doing simultaneous programming & testing in the same sprint, it's usually fairly manageable to identify story-blocking bugs that need to be fixed ASAP, and most of these have been handled without throwing off time estimates too wildly. However, if we decouple the programming & QA efforts, then because the # of bugs the QA team will find is unknown, the programmers are hesitant to stop current sprint work to fix bugs from previous sprints, for fear of getting off-track on current work. On top of that, the mental context-switching between current feature work and bug fixing has an efficiency cost. Of course we can implement policies about the priority of bug fixing vs. current feature work, but I'm looking for insight from others about processes that have worked with less reliance on heavy-handed policies.

Thanks in advance for your input!

Joel
Tim Ottinger
author
Ranch Hand

Joined: Jan 26, 2011
Posts: 46

Forgive a long answer, because I have a lot of passion about this.

The quick summary is that this succeeds if you can substantially change how the team works.
It can fail for political reasons. It can fail because the answer _feels_ wrong, even though it's right.
I've seen it work when a company reached appropriate levels of frustration and desperation to try change.

Joel Robinson wrote:Our team uses 2 week sprints and so far the pattern has been that more features get implemented than get fully tested & signed off as shippable.


You know that those don't count as "done", right? There is only one team, which includes QA, coding, product manager/analyst/customer, etc. Anything that isn't "done done" isn't "done" at all. Not only is that an Agile mantra, it's also basic Theory of Constraints. Before I could get to the next step as a coach, I would be wondering why the developers are able to go do more work when their last sprint isn't done. I don't see any value to the customer in having pent-up code that can't be released.

How is the testing being done? What split of automated to manual testing? Are testers automating tests or performing to script? How many hours per day are they spending alongside programmers testing things as they are being finished, as opposed to being somewhere else trying to catch up?

Is QA part of the process, or pretty much caught up in tail-end QC? And are defects being turned around quickly?

My first instinct says that programmers are not doing enough to make testing easy, not automating enough testing, and not involving you early enough in the process. Likewise, I suspect you are not up at the head of the process with the customer/analyst/product-owner but rather receiving product to test at the tail end, or things would be going differently.

Finally, am I right to assume you have a legacy product that was built originally in a non-agile way? And that automated test coverage is low?

Joel Robinson wrote: Part of this was a staffing issue which we've been actively addressing but even with an equal ratio of programmers and testers, it's still a challenge to keep up. This is especially challenging if the features aren't testable along the way & only become testable at the end of a sprint.


Joel, when you are at the 50% mark of a sprint, on the first Friday, are at least 40% of the stories finished and testable? You should not have all the features coded at the end of the sprint unless the developers are doing something nutty, like each one taking a full load of work at the start of the sprint. It sounds like your stories are too big, and maybe the programmers aren't pairing. I could be wrong, but that's how it sounds.

Joel Robinson wrote: Some solutions that have been tossed around include:
--status quo but try harder (LOL, yeah, not a good plan)
--add test stories/tasks to later sprints so that QA work starts after the feature is code complete (this is probably what we will do, experiments so far have been pretty successful)
--run entirely separate development and QA sprints (this solves the timing issue but I fundamentally don't like having dev/qa separated like this)


"Bring in a coach" is a good idea. If you bring in someone with a lot of Lean/TOC experience, they will tell you that the smartest thing to do is recognize that the bottleneck is in testing. This is some hard news to hear, but here you go: http://www.ciras.iastate.edu/publications/CIRASNews/winter98/toc.htm

Joel Robinson wrote: One problem I don't have a good answer to is what to do about bug fixes. If we're doing simultaneous programming & testing in the same sprint, it's usually fairly manageable to identify story-blocking bugs that need to be fixed ASAP, and most of these have been handled without throwing off time estimates too wildly. However, if we decouple the programming & QA efforts, then because the # of bugs the QA team will find is unknown, the programmers are hesitant to stop current sprint work to fix bugs from previous sprints, for fear of getting off-track on current work. On top of that, the mental context-switching between current feature work and bug fixing has an efficiency cost. Of course we can implement policies about the priority of bug fixing vs. current feature work, but I'm looking for insight from others about processes that have worked with less reliance on heavy-handed policies.


Some teams (rightly, I think) will not take on any new features while defects are outstanding. I think it's a good practice, because stockpiling defects doesn't do the customer any good. It is political suicide in companies where customers have been overpromised (there is a death spiral in that), but worth pushing for. If defects have piled up, and features have piled up, and nothing is getting out the door... forgive me if this sounds like a no-brainer.

You should know that this is always hard. I've tried, sometimes unsuccessfully, to turn around teams with this set of problems. There is real pressure from outside forces to "get things done", and that often causes groups to claim "done" on things that are not, and report them as if they were. Up the hierarchy, there are people invested in believing that things are really getting done at a higher rate than they are. It takes a lot of political coin (respect) to sell the fact that things are not in fact done, and are not getting done. It takes a bit more to keep the situation from turning into "blame the victim." Sometimes the problem gets framed as "how to _make_ QA work harder," when that is typically not the problem. The real problem is how to give QA more capacity to get things done, and how to stop stockpiling untested features. Where I fail is when people don't want to hear that and refuse to consider "going slower" (which, if they'd hear it, would actually be no slower at all) as a solution.

Frankly, if you are testing manually, then every test cycle includes every test you've ever run until now (full cost) plus all the tests added for new features. This sits in the short cycle (the critical loop), so the total effort grows more like O(n^2), if you know what I'm getting at. That means it doesn't matter how many testers there are, because there can never be enough: if the inflow is greater than the outflow, the queue grows without bound.
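To make the arithmetic behind that queuing argument concrete, here is a tiny sketch (the per-sprint numbers are hypothetical, not from this thread): each sprint adds new manual tests, and every regression pass re-runs everything accumulated so far, so total runs grow quadratically while capacity grows at best linearly.

```java
// Illustrative only: shows why manual regression cost grows like O(n^2).
public class ManualTestCost {
    public static void main(String[] args) {
        int newTestsPerSprint = 10;   // hypothetical rate of new manual tests
        int totalTests = 0;
        int cumulativeRuns = 0;
        for (int sprint = 1; sprint <= 10; sprint++) {
            totalTests += newTestsPerSprint;
            cumulativeRuns += totalTests; // a full regression pass each sprint
        }
        // After n sprints: cumulative runs = newTestsPerSprint * n*(n+1)/2.
        System.out.println("Tests after 10 sprints: " + totalTests);     // 100
        System.out.println("Cumulative manual runs: " + cumulativeRuns); // 550
    }
}
```

Ten sprints of 10 new tests each means only 100 tests exist, yet 550 manual executions have been paid for; adding testers scales the denominator linearly while the workload scales quadratically.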

You are working on "need more time" which really has no give to it at all. What you need to work on is "need less to do."

In agile practice, we set up a system where those specifying features do so in automated tests, with the help of QA. Programmers program to make those tests pass, using test-first or test-driven development. When repairing a bug, a programmer first writes a test that exposes the defect and then writes code to cure it. All of the automated tests are run all of the time, both before and after checking code into version control. This way, most defects are caught by programmers before they can move downstream. In addition, programmers pair, so that it is harder for defects to escape the editing phase. These automated tests run in seconds or minutes, and don't have to be repeated by human QA personnel.
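The bug-repair step above ("first write a test that exposes the defect, then cure it") can be sketched as follows. The discount calculator and its off-by-one defect are hypothetical, invented for illustration; the point is that the defect-exposing check is written first, fails against the buggy code, and then pins the behavior down forever.

```java
// Minimal sketch of test-first bug repair (hypothetical example).
public class DiscountRepair {
    // Fixed implementation: orders of 10 items or more get the 10% bulk discount.
    // The original defect used "quantity > 10", silently excluding exactly 10.
    static double price(int quantity, double unitPrice) {
        double total = quantity * unitPrice;
        return quantity >= 10 ? total * 0.9 : total;
    }

    // Plain check so the example runs without the JVM's -ea flag.
    static void check(boolean condition, String message) {
        if (!condition) throw new AssertionError(message);
    }

    public static void main(String[] args) {
        // The defect-exposing test, written before the fix was made:
        check(Math.abs(price(10, 4.0) - 36.0) < 1e-9,
              "10 items should get the 10% bulk discount");
        // Existing behavior still holds:
        check(Math.abs(price(5, 4.0) - 20.0) < 1e-9,
              "no discount below 10 items");
        System.out.println("All tests pass");
    }
}
```

Because the test stays in the suite and runs on every check-in, the defect cannot quietly return downstream.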

QA then spends its time specifying and helping the development group. Manual testing becomes creative work: finding ways to break things, looking for ways to misunderstand them, etc.
Ultimately, only automated testing can scale, and only humans can find new, untested defects.

We do see situations where automated testing offsets human testing costs substantially. James Grenning tells of a company that had a formula by which it knew how many defects to expect in a release. The company held up a product because testers found only a fraction of that number of defects. Eventually they had to relent and let the release go out: agile's reliance on automated tests had set the defect level far below anything they had ever witnessed before.

I guess I didn't really plug the book in this answer. I'm sorry for that and sorry that your position isn't politically pleasant.

Tim Ottinger
Jeff Langr
author
Ranch Hand

Joined: May 14, 2003
Posts: 762
Greetings Joel,

Tim asks, "When you are at the 50% mark of a sprint, on the first Friday, are at least 40% of the stories finished and testable?" Good question.

A classic pattern in agile adoption, one that results in no end of quandaries, is that the team works on many stories at once, typically one story per developer (or pair). That's not collaborative, and it's not the best way to approach agile.

An ideal approach:

  • The customer works with QA to help flesh out a feature. QA helps define (not implement, yet) acceptance tests, and may work with the developers to do so.
  • The entire team begins work on the feature. They may need to communicate with QA to refine tests during this time. Both parties need to remain close.
  • The team gets the feature working, ideally within a day or two (that, of course, means that the features need to be very small).
  • No one moves on to the next feature until the current one is truly complete ("done done"--no defects, all tests passing, tests cover all requested behaviors).


Obviously this ideal is not always possible, particularly with larger teams. It is considered a little extreme: it will initially slow you down, and it may appear to outside observers that your team has no clue what it's doing. In reality, that's probably true with either approach--it's just that your current way of working makes it seem like positive forward progress is always being made. The best thing here is that the problems are exposed, giving you an opportunity to correct them.

Sure, you may need to have two stories in process, and maybe even a third at times. But the idea is to always push for less, and to produce truly completed work sooner. The same goes for story size--most of the time, you can break stories up (and we have a card for that, too, in Agile in a Flash, as well as a couple of cards on lean manufacturing concepts).

Long term, however, minimized work in process will help you keep a consistent pace, minimize risk and slippage, bring you together as a team, keep your daily standups meaningful and lively, make it easier to estimate how large a feature is, and so on.

I've tried/seen the other arrangements. Slipping iterations creates more chaos as defects come back to haunt the development team. Since defects often take longer to fix the later they are discovered, the overall time to build the product increases as a result. Your overall cycle really becomes three iterations that overlap each other: an initial development iteration, a testing iteration, and a fixing iteration. Either that, or every iteration's plan gets corrupted as you report defects and interrupt the developers (and the interruptions themselves increase overall time).

One other mentality to have: view the tests as a form of specification, or at least a form of documentation, of the system's capabilities. If no tests exist for a feature, the development team should not build that feature. Thus tests have to be defined (scripted, minimally) first--that's the "test-driven" part of this approach. If you can get into that rhythm, you will have no worries about falling behind on acceptance testing. It will also keep the development team from guessing how a feature should work--guessing generates waste in the form of occasional rework (when they misunderstand a verbal description, for example).
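As a sketch of "tests as a form of specification": the free-shipping rule below is a hypothetical customer request, and the checks are written (and agreed on) before the feature is built, so they double as the spec the developers program against.

```java
// Hypothetical executable specification: orders of $50 or more ship free.
public class FreeShippingSpec {
    // The implementation the specification drives out:
    static double shippingFor(double orderTotal) {
        return orderTotal >= 50.0 ? 0.0 : 5.99;
    }

    // Plain check so the example runs without the JVM's -ea flag.
    static void check(boolean condition, String message) {
        if (!condition) throw new AssertionError(message);
    }

    public static void main(String[] args) {
        // Each line is a requested behavior; if it isn't written here, it isn't built.
        check(shippingFor(50.0) == 0.0, "orders of exactly $50 ship free");
        check(shippingFor(49.99) == 5.99, "orders under $50 pay flat-rate shipping");
        check(shippingFor(120.0) == 0.0, "large orders ship free");
        System.out.println("Specification satisfied");
    }
}
```

The boundary case ($50.00 exactly) is pinned down in the test rather than left to a developer's reading of a verbal description, which is precisely the guessing-driven rework the paragraph above warns about.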

Regards,
Jeff


Books: Agile Java, Modern C++ Programming with TDD, Essential Java Style, Agile in a Flash. Contributor, Clean Code.
     