From the preface of Release It! that Michael posted:
Too often, project teams aim to pass QA's tests, instead of aiming for life in Production (with a capital P).
I think this is an interesting point, although it raises the question of just what we are testing in QA. Does it refer to QA that is pure functional testing? Some organizations do performance testing, scalability testing, security validation/audits, etc., while in QA. While I'm sure this doesn't catch everything, it gets somewhat closer.
What other things would people like to see covered in requirements and QA, rather than letting them slip through to production?
I would love to see more production-readiness work done in QA!
It's been my experience that QA tends to be limited to functional testing. When there is any amount of stress or performance testing, I haven't seen the QA group performing these tests. You may have been fortunate to work with more sophisticated QA groups than I have; my experience has been that QA groups tend to be oriented around clicking through applications. (I've rarely even found groups that believed in automated regression testing.)
There is also a whole class of problems that are difficult or impossible to observe in QA. I discuss a number of these in the book, along with some recommended strategies for avoiding them.
For example, QA environments tend to have just one or two servers. This is enough to demonstrate that the application functions when deployed redundantly. It does not demonstrate, however, what happens when those servers access a shared resource like a lock server, cluster manager, mailer daemon, etc. When you roll out to production with ten or twenty application servers, all accessing the same shared resource, the increased burden can put stability or performance at risk.
This is like the IT version of the square-cube law that explains why we won't see an ant the size of a school bus.
It's impractical to build a QA environment with ten or twenty servers, so this type of problem (which is an example of the "Scaling Effects" antipattern) needs to be examined, analyzed, and designed out. It can't practically be tested out of the system.
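To make the arithmetic behind this concrete, here is a hypothetical back-of-the-envelope sketch (not from the book; the call counts are invented for illustration). Point-to-point links among n servers grow as n(n-1)/2, and the load on a single shared resource like a lock server grows linearly with server count, so a two-server QA environment dramatically understates production pressure.

```python
# Rough sketch of the "Scaling Effects" idea: costs that look trivial
# with the 2 servers typical of QA grow quickly at production scale.
# The calls_per_server figure is a made-up illustrative number.

def point_to_point_links(n):
    """Every server talks to every other server: n(n-1)/2 connections."""
    return n * (n - 1) // 2

def shared_resource_load(n, calls_per_server=50):
    """All n servers hit one shared resource: load grows linearly."""
    return n * calls_per_server

for env, servers in [("QA", 2), ("Production", 20)]:
    print(f"{env:10} servers={servers:2} "
          f"links={point_to_point_links(servers):3} "
          f"shared-resource calls={shared_resource_load(servers)}")
```

With two servers there is one link and a trivial load on the shared resource; with twenty servers there are 190 links and ten times the shared-resource traffic, which is exactly the kind of jump QA never exercises.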
So, to sum up, I'd say that I'd love to see QA taking a more active role in ensuring production-readiness. In addition, there are whole classes of problems that have to be addressed through good design and architecture because they are difficult or impossible to find in QA.
Michael T. Nygard<br /><a href="http://www.michaelnygard.com/" target="_blank" rel="nofollow">http://www.michaelnygard.com/</a><br /> <br />Release It! Design and Deploy Production Ready Software<br /><a href="http://pragmaticprogrammer.com/titles/mnee/index.html" target="_blank" rel="nofollow">http://pragmaticprogrammer.com/titles/mnee/index.html</a>
Our QA group does just functional testing too. However, we do a stress/load test in QA before, after, and in between those functional passes, so it does happen. You raise a good point about it being impossible to set up QA to match production in terms of server count and real-life scenarios.
I think there might be some conflation of testing and QA going on here too. In the QA literature, QA is described as being able to make the go/no-go decision and as looking at things beyond functionality. Then in the real world, most "QA" groups are functional test groups.
And of course, if you are being measured on functional defects, that's where the focus goes from a development point of view. As a developer, I've taken to logging some of these other things in our defect list (security defects, usability issues, etc.). At least that gets them discussed and on the radar.
In fact, the best definition of QA is "defect prevention", right? (Cf. the literature on statistical process control, TQM, Lean manufacturing, the Toyota Production System, etc.) Defect discovery is just the most basic level of maturity for quality assurance.
As we've seen with the move to unit testing, engineering quality into a system isn't the exclusive province of the designated QA group. Part of what I want to do with my book is help build a quality-oriented mindset in the earliest stages. That way, we can avoid defects, especially the ones that are really hard to find or damaging to experience, rather than just finding them.
Originally posted by Michael Nygard: In fact, the best definition of QA is "defect prevention", right? [...] to help build a quality-oriented mindset in the earliest stages. That way, we can avoid defects, especially the ones that are really hard to find or damaging to experience, rather than just finding them.
That's a good way to look at it. I need to keep this in mind more when I have my developer hat on.
In fact, the best definition of QA is "defect prevention", right?
That's an interesting discussion. In our old waterfall model we had QA at the end, and we didn't give them any more builds after that. All QA could do was catalog defects, and all the customer could do was live with them or defer deployment. I don't see any "prevention" there.
We still show QA, UAT and Training at the end, but builds continue right up until we commit an EAR to the deployment guys. At least we publicly admit that we might fix something.
Most recently we've started to get the QA folks partnered with the BSA and the developer for the whole story lifecycle. All three meet very early to plan the test cases and again to review them. With this in place, QA has found only zero, one, or two defects in each of the last few releases. Now we're talking prevention.
But when defects stay that low for long enough, someone will ask why we have a QA phase on the schedule. And then ask what the QA folks are contributing to those meetings. And then ... I'd be studying up on Java or searching for another really sick project if I were them.
BTW: Read James Bach for a while to see his opinion of automated QA testing. I'm inclined to believe him regarding professional testers, but I haven't quite reconciled his opinions with the value Agile folks find in test-first development and regression testing.
A good question is never answered. It is not a bolt to be tightened into place but a seed to be planted and to bear more seed toward the hope of greening the landscape of the idea. John Ciardi