Question for Authors - How does QA fit in?

Posts: 5
What, in your opinion, is the best way to engage the QA team in a project using an Agile process? Is there a process that works best if your company has a separate QA team? If there is a minimum of documentation in the project, how does the QA team work out the test cases?
Posts: 14112
Ideally, you won't have a separate QA team, but QA expertise integrated into your development team. In an Agile team, they will typically help the customer side to define their requirements and express them in executable acceptance tests.
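To make "executable acceptance tests" concrete, here is a minimal sketch in plain Java (no test framework). The ShippingCalculator class and its free-shipping rule are invented purely for illustration:

```java
// Hypothetical example of an executable acceptance test, written so the
// customer can read each rule. ShippingCalculator and its free-shipping
// threshold are invented for illustration only.
class ShippingAcceptanceTest {
    public static void main(String[] args) {
        // Each check states an example the customer agreed to.
        check(ShippingCalculator.shippingFor(100.0) == 0.0, "$100 order ships free");
        check(ShippingCalculator.shippingFor(99.99) == 5.95, "$99.99 order pays flat rate");
        System.out.println("All acceptance examples pass");
    }

    static void check(boolean ok, String example) {
        if (!ok) throw new AssertionError("Failed: " + example);
    }
}

class ShippingCalculator {
    // Assumed business rule: orders of $100 or more ship free,
    // anything smaller pays a flat $5.95.
    static double shippingFor(double orderTotal) {
        return orderTotal >= 100.0 ? 0.0 : 5.95;
    }
}
```

The point is that the customer can read each example and confirm it matches their intent, while the team can run the same examples on every build.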

If you have a separate QA team, the goal of the development team should be to ship a bug-free system to QA.
Posts: 46
Hi Cathy,

Great question. I'm a little leery of the word "best," as every team is different; you'll get better results by trying things and adapting to your actual project than I will sitting here in my chair.

That said, I have been experimenting for several years with a variety of approaches to integrating QA and testers into agile teams, and I've found something that seems to work pretty well. This is the approach we put into the book. Warning: this post is a bit long and I'm afraid it just scratches the surface.

First, the goal of the team should be to produce software with no bugs. This is an ideal, but successful XP teams have come close to this ideal. It's entirely feasible to get down to a few new bugs per month.

To achieve this goal, the whole team (testers, customers, and programmers alike) needs to take responsibility for quality. The goal is to reduce the number of bugs that escape the team rather than the number of bugs found by testers. There are a number of practices to help you get there, covered in the "No Bugs" section of our book. Briefly, they are:

Practices to reduce programmer error:

  • Test-Driven Development - note that this is done by programmers and, when done well, results in a comprehensive regression test suite.
  • Pair Programming
  • Energized Work
  • Coding Standards
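As a tiny illustration of how TDD builds that regression suite, here is a sketch (the LineWrapper example is invented, not from the book): the failing test is written first, just enough code is written to make it pass, and the test then stays in the suite for good.

```java
// Invented micro-example of the TDD cycle. The test below would be
// written first (and fail); wrap() is then implemented to make it pass,
// and the test remains in the growing regression suite afterwards.
class LineWrapperTest {
    public static void main(String[] args) {
        check(LineWrapper.wrap("word", 10).equals("word"), "short text is untouched");
        check(LineWrapper.wrap("abcdef", 2).equals("ab\ncd\nef"), "long text is split");
        System.out.println("All tests pass");
    }

    static void check(boolean ok, String name) {
        if (!ok) throw new AssertionError("Failed: " + name);
    }
}

class LineWrapper {
    // Break text into lines no longer than 'width' characters.
    static String wrap(String text, int width) {
        StringBuilder out = new StringBuilder(text);
        for (int i = width; i < out.length(); i += width + 1) {
            out.insert(i, '\n');
        }
        return out.toString();
    }
}
```

Multiply this by every behavior in the system and you get the comprehensive, fast-running safety net the team relies on.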

80% of the defects are found in 20% of modules, according to Boehm. These practices help eliminate technical debt and eradicate bug breeding grounds:

  • Simple Design
  • Incremental Design & Architecture
  • Continuous Design
  • Reflective Design & Refactoring
  • Slack
  • Test-Driven Development
  • Pair Programming
  • Collective Code Ownership

Even when programmers work perfectly, bugs can result from programmer misunderstandings. These practices help prevent requirements defects:

  • On-Site Customers
  • Active Customer Review
  • Customer Tests
  • Iteration Demos
  • "Done Done" checklist

So these are all things that the whole team is doing to prevent bugs rather than find them. Testers bring a unique combination of customer-centricity and detail-oriented technical knowledge to the table. They provide both requirements-oriented and technically-oriented assistance.

On the requirements side, they help customers articulate their requirements and they identify gaps in the customers' statements. They help create customer tests, which are automated examples of how the business domain works.
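One common shape for such customer tests is a table of examples the customer fills in, which the team then automates. A sketch in plain Java, with an invented DiscountRule and made-up percentages standing in for the real business domain:

```java
// Customer tests are often table-driven: the customer supplies example
// rows, the team automates the check. DiscountRule and its thresholds
// are made up for illustration.
class CustomerTests {
    public static void main(String[] args) {
        // {quantity, expected discount %}, as agreed with the on-site customer
        int[][] examples = { {1, 0}, {9, 0}, {10, 10}, {49, 10}, {50, 20} };
        for (int[] row : examples) {
            int actual = DiscountRule.discountPercent(row[0]);
            if (actual != row[1]) {
                throw new AssertionError(
                    "qty " + row[0] + ": expected " + row[1] + "%, got " + actual + "%");
            }
        }
        System.out.println("All customer examples pass");
    }
}

class DiscountRule {
    // Assumed rule: 10% off for 10+ items, 20% off for 50+.
    static int discountPercent(int quantity) {
        if (quantity >= 50) return 20;
        if (quantity >= 10) return 10;
        return 0;
    }
}
```

Because the examples are data, the customer can add a new row (a new edge case) without touching any code.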

On the technical side, they act as technical investigators for the team. They help provide answers to questions like, "How scalable is our software?", "Is the software fast enough?", and "Is our software stable under load?" These questions are answered by creating long-running automated tests.
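The shape of such a test might look like the sketch below. Everything here is invented (the workload, the stand-in handleRequest(), the 10-second budget); real versions exercise the actual system under realistic load for much longer periods:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of an automated performance check. The workload, the stand-in
// handleRequest(), and the 10-second budget are all invented; a real
// test would drive the actual system under realistic load.
class ThroughputCheck {
    public static void main(String[] args) {
        int requests = 100_000;
        List<String> responses = new ArrayList<>();
        long start = System.nanoTime();
        for (int i = 0; i < requests; i++) {
            responses.add(handleRequest(i)); // stand-in for the real system call
        }
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        System.out.println(requests + " requests in " + elapsedMs + " ms");
        // Fail loudly if throughput regresses past the agreed budget.
        if (responses.size() != requests || elapsedMs >= 10_000) {
            throw new AssertionError("performance budget exceeded");
        }
    }

    static String handleRequest(int i) {
        return "ok:" + i; // placeholder for real application logic
    }
}
```

Wiring a check like this into the build turns "is it fast enough?" from an opinion into a regression test.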

All of these practices together should lead to a team that produces nearly bug-free code. To check whether this is true, testers also engage in exploratory testing, a technique for rapidly creating and executing test plans. Each test plan builds on the knowledge gained from the previous one. The purpose of exploratory testing isn't to create a regression suite (TDD provides that) but to rapidly explore and test a wide variety of scenarios.

Exploratory testing is very good at identifying unexpected problems. Its purpose in this process is not to find bugs, though, but to determine whether the team's process is working. When a bug is found, it's fixed, of course, but the team also conducts root-cause analysis to determine how the bug slipped through the safety net provided by the practices. Then they adjust their practices, using tools like retrospectives, to close the hole.

When this is working well, the team no longer requires manual testing before release. It has practices that lead to nearly bug-free software and it has confidence (thanks to exploratory testing) that those practices are working. As a result, any build can be pushed to production without further testing.

For people who haven't experienced this level of confidence, the idea of pushing software to production without further testing seems very scary. Remember, though, the team is testing, and testing thoroughly. Mature teams will have thousands of unit and integration tests and hundreds of customer tests. When done well, these tests run very quickly, so that the software can be built, tested, and deployed in less than ten minutes. (!)

To sum up, testers have an integral role on the team, but it's a very different role than in phase-based processes. Testing is done concurrently with development, not just before release or at the end of the iteration, and it's quite thorough and rigorous.
(instanceof Sidekick)
Posts: 8791
A few years ago my team had a troubled relationship with a separate QA group, so we set a goal to put them out of business. If they found no bugs for a few releases, surely we could make them go away. We didn't quite make it, but we did cut down defects through better developer testing.

Then a more cooperative manager came along and we brought them in as full team members. A big step was involving QA early in the story process. The customer, developer, and QA sat down together to write the test cases. In college it's cheating to steal the questions before the test, but in software development it's a great way to make sure everybody understands the requirements.
Cathy Bryant
Posts: 5
Thanks, everyone, for the thoughtful answers. As a consultant I work in various companies, and I've found that many have a separate QA team and want to do system testing after the iteration has completed. These teams also say they want to develop software in an agile manner.

I have found that the need for full system and regression testing by QA at the end of an iteration greatly interferes with the agile process.

I like the ideas that have been expressed here and will try to take them to my next project!