What, in your opinion, is the best way to engage the QA team in a project using an Agile process? Is there a process that works best if your company has a separate QA team? If there is a minimum of documentation in the project, how does the QA team work out the test cases?
Ideally, you won't have a separate QA team, but QA expertise integrated into your development team. In an Agile team, they will typically help the customer side to define their requirements and express them in executable acceptance tests.
If you have a separate QA team, the goal of the development team should be to ship a bug-free system to QA.
The soul is dyed the color of its thoughts. Think only on those things that are in line with your principles and can bear the light of day. The content of your character is your choice. Day by day, what you do is who you become. Your integrity is your destiny - it is the light that guides your way. - Heraclitus
Great question. I'm a little leery of the word "best," as every team is different, and you should achieve better results by trying things and adapting to your actual project than I will sitting here in my chair.
That said, I have been experimenting for several years with a variety of approaches to integrating QA and testers into agile teams, and I've found something that seems to work pretty well. This is the approach we put into the book. Warning: this post is a bit long and I'm afraid it just scratches the surface.
First, the goal of the team should be to produce software with no bugs. This is an ideal, but successful XP teams have come close to this ideal. It's entirely feasible to get down to a few new bugs per month.
To achieve this goal, the whole team--testers, customers, and programmers alike--needs to take responsibility for quality. The goal is to reduce the number of bugs that escape the team rather than the number of bugs found by testers. There are a number of practices to help you get there, covered in the "No Bugs" section of our book. Briefly, they are:
Practices to reduce programmer error:
Test-Driven Development - note that this is done by programmers and, when done well, results in a comprehensive regression test suite.
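To make the TDD point concrete, here's a minimal sketch of one red/green cycle (my own illustration, not from the post; the `parse_price` function and its behavior are invented for the example). The test is written first, fails, and then drives the implementation:

```python
import unittest

def parse_price(text):
    """Convert a price string like "$1,250.00" to cents.
    Written only after the test below existed and was failing."""
    cleaned = text.replace("$", "").replace(",", "")
    dollars, _, cents = cleaned.partition(".")
    return int(dollars) * 100 + int(cents or 0)

class ParsePriceTest(unittest.TestCase):
    # These tests came first; each one failed before the code made it pass.
    def test_converts_formatted_price_to_cents(self):
        self.assertEqual(parse_price("$1,250.00"), 125000)

    def test_handles_whole_dollars(self):
        self.assertEqual(parse_price("$3"), 300)

if __name__ == "__main__":
    unittest.main()
```

Accumulated over months, tests like these become the comprehensive regression suite mentioned above, run on every build.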
80% of the defects are found in 20% of modules, according to Boehm. These practices help eliminate technical debt and eradicate bug breeding grounds:
Incremental Design & Architecture
Reflective Design & Refactoring
Collective Code Ownership
Even when programmers work perfectly, bugs can result from programmer misunderstandings. These practices help prevent requirements defects:
Active Customer Review
"Done Done" checklist
So these are all things that the whole team is doing to prevent bugs rather than find them. Testers bring a unique combination of customer-centricity and detail-oriented technical knowledge to the table. They provide both requirements-oriented and technically-oriented assistance.
On the requirements side, they help customers articulate their requirements and they identify gaps in the customers' statements. They help create customer tests, which are automated examples of how the business domain works.
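As a sketch of what a customer test can look like (my own hypothetical example; the free-shipping rule, threshold, and rate are invented), the business rule is written as plain assertions the customer can read and confirm:

```python
# Hypothetical business rule agreed on with the customer:
# "orders over $100 ship free; everything else pays a flat rate."
FREE_SHIPPING_THRESHOLD = 100.00
FLAT_SHIPPING_RATE = 7.50

def shipping_cost(order_total):
    """Return the shipping charge for an order, per the rule above."""
    return 0.0 if order_total > FREE_SHIPPING_THRESHOLD else FLAT_SHIPPING_RATE

# Concrete examples the customer signed off on, kept as executable checks:
assert shipping_cost(150.00) == 0.0   # over the threshold ships free
assert shipping_cost(100.00) == 7.50  # exactly at the threshold still pays
assert shipping_cost(20.00) == 7.50   # small orders pay the flat rate
```

The value isn't the trivial code; it's that an ambiguity (does exactly $100 ship free?) gets pinned down as an executable example before the feature is built.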
On the technical side, they act as technical investigators for the team. They help provide answers to questions like, "How scalable is our software?", "Is the software fast enough?", and "Is our software stable under load?" These questions are answered by creating long-running automated tests.
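The shape of such a test can be sketched like this (an illustration of mine, not from the post; a real version would drive the actual system for hours, while here a stand-in function and an invented latency budget are used):

```python
import time

def operation_under_test():
    # Stand-in for a real request to the system; name is illustrative.
    sum(range(1000))

def average_latency(runs=1000):
    """Time many repetitions and return the average seconds per call."""
    start = time.perf_counter()
    for _ in range(runs):
        operation_under_test()
    return (time.perf_counter() - start) / runs

LATENCY_BUDGET_SECONDS = 0.01  # assumed performance budget for this sketch
assert average_latency() < LATENCY_BUDGET_SECONDS, "operation too slow"
```

Because it's automated and asserts against an explicit budget, it can run continuously and flag a performance regression the day it's introduced rather than just before release.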
All of these practices together should lead to a team that produces nearly bug-free code. To check if this is true, testers also engage in exploratory testing, a technique for rapidly creating and executing test plans. Each test plan builds on the knowledge gained from the previous. The purpose of exploratory testing isn't to create a regression suite (TDD provides that) but to rapidly explore and test a wide variety of scenarios.
Exploratory testing is very good at identifying unexpected problems. The purpose of exploratory testing in this process is not to find bugs, though, but to determine whether the team's process is working. When a bug is found, it's fixed, of course, but the team also conducts root-cause analysis to determine how the bug slipped through the safety net provided by the practices. Then the team adjusts its practices, using tools like retrospectives, to close the hole.
When this is working well, the team no longer requires manual testing before release. It has practices that lead to nearly bug-free software and it has confidence (thanks to exploratory testing) that those practices are working. As a result, any build can be pushed to production without further testing.
For people who haven't experienced this level of confidence, the idea of pushing software to production without further testing seems very scary. Remember, though, the team is testing, and testing thoroughly. Mature teams will have thousands of unit and integration tests and hundreds of customer tests. When done well, these tests run very quickly, so that the software can be built, tested, and deployed in less than ten minutes. (!)
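The deploy decision described above can be modeled in a few lines (my own sketch; the suite names and timings are invented): a build is deployable only if every automated suite is green and the whole pipeline stays fast.

```python
def run_pipeline(suites):
    """Run each (name, passed, minutes) suite in order; stop at the first
    failure. Returns (deployable, elapsed_minutes)."""
    total_minutes = 0.0
    for name, passed, minutes in suites:
        total_minutes += minutes
        if not passed:
            return False, total_minutes  # a red build never reaches deploy
    return total_minutes <= 10.0, total_minutes  # green, and fast enough

# Illustrative numbers only -- a mature team's suites might look like:
suites = [
    ("unit and integration tests", True, 4.0),  # thousands of fast tests
    ("customer tests", True, 3.0),              # hundreds of examples
]
deployable, minutes = run_pipeline(suites)
assert deployable and minutes < 10.0
```

The point is that "push any build to production" isn't recklessness; it's a gate that no build passes without the full automated safety net running first.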
To sum up, testers have an integral role on the team, but it's a very different role than seen in phase-based processes. Testing is done concurrently with development, not just before release or the end of the iteration, and it's quite thorough and rigorous.
James Shore, coauthor of <a href="http://www.amazon.com/Art-Agile-Development-James-Shore/dp/0596527675" target="_blank" rel="nofollow">The Art of Agile Development</a>. Website and blog at <a href="http://www.jamesshore.com" target="_blank" rel="nofollow">http://www.jamesshore.com</a>.
A few years ago my team had a troubled relationship with a separate QA group, so we set a goal to put them out of business. If they found no bugs for a few releases, surely we could make them go away. We didn't quite make it, but we did cut down defects by better developer testing.
Then a more cooperative manager came along and we brought them in as full team members. A big step was involving QA early in the story process. The customer, developer, and QA sat down together to write the test cases. In college it's cheating to steal the questions before the test, but in software development it's a great way to make sure everybody understands the requirements.
A good question is never answered. It is not a bolt to be tightened into place but a seed to be planted and to bear more seed toward the hope of greening the landscape of the idea. - John Ciardi
Thanks everyone for the thoughtful answers. As a consultant I work in various companies, and I have found that many have a separate QA team and want to do system testing after the iteration has completed. These same teams say they want to develop software in an agile manner.
I have found that the need to do full system testing and regression testing by QA at the end of an iteration greatly interferes with the agile process.
I like the ideas that have been expressed here and I will try to take them to my next project!