Our team is transitioning from a waterfall-style methodology to agile development. The team is composed of experienced manual testers plus a periodic influx of interns who have programming training but no testing experience.
Our management and other internal consumers are used to receiving typical QE metrics such as number of test cases, % of test cases run, % of test cases passed, and so forth.
These metrics seem less useful for the kind of testing we will need to perform in an agile environment, other than as a way to keep tabs on the existing automated regression suite.
How do we most effectively measure (and present metrics for) the kind of testing that needs to be performed in an agile environment? Can you cite some examples?
We have a section on metrics. Some of the ones you have mentioned still exist in agile projects, but we encourage teams to really think about the purpose of the metrics they are gathering and what they hope to change as a result. One example is counting the number of automated tests. When a team is first starting its automation effort, this is a very powerful metric, and the trend keeps going up and up (hopefully). After a while, though, it loses its effect, so you stop tracking it. Make sure the metrics you use are important to the whole project team.
Of course there might be metrics that upper management requests, but I suggest looking at what they are being used for. You may want to request a change.
Burndown charts (check out Scrum websites for examples) are a great way to watch story completeness (which includes testing).
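To make the idea concrete, here is a minimal sketch of the numbers behind a burndown chart. The sprint size and the points closed per day are made-up illustration data, not anything prescribed by Scrum:

```python
# A tiny illustration of the numbers behind a burndown chart:
# for each day of the sprint, track total story points remaining.
# (A story counts toward "done" only when it is coded AND tested.)

def remaining_points(total, completed_per_day):
    """Return the points still open at the start and after each day."""
    remaining = [total]
    for done in completed_per_day:
        remaining.append(remaining[-1] - done)
    return remaining

# Hypothetical 40-point, 10-day sprint:
burndown = remaining_points(40, [0, 5, 3, 8, 0, 6, 5, 7, 4, 2])
# burndown -> [40, 40, 35, 32, 24, 24, 18, 13, 6, 2, 0]
```

Plotting those values against the ideal straight line from 40 down to 0 is the chart itself; the team watches whether the actual line tracks the ideal one.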
I've heard of a team that reached 100% code coverage with their unit tests - but the tests didn't actually test anything: there wasn't a single assert in them!
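For anyone who hasn't seen this failure mode, here is a minimal sketch of how it happens (the function and test are hypothetical examples, not from any real project):

```python
import unittest

def discount(price, is_member):
    """Hypothetical function under test: members get 10% off."""
    if is_member:
        return price * 0.9
    return price

class TestDiscount(unittest.TestCase):
    def test_discount(self):
        # Both branches execute, so a coverage tool reports 100%...
        discount(100, True)
        discount(100, False)
        # ...but with no assert, this test can never fail, even if
        # discount() is completely broken.

if __name__ == "__main__":
    unittest.main()
```

Coverage measures what code was executed, not what behavior was verified, which is exactly why it makes a poor target on its own.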
Hopefully that example is an exception - it sounds a lot like an urban myth. Teams should have self-monitoring controls to prevent that kind of misuse of metrics.
Michael is correct that metrics are often misused, and I agree that they need to measure something the team finds useful. Numbers for the sake of numbers are not useful, but watching trends can be rewarding (if they are going the right way) and should be a trigger for change if they are not. Numbers are only a piece of the puzzle and, as Michael points out, can cause damage within a team if they are not understood.
As I read his blog, I get the feeling he considers all control metrics bad, and I don't necessarily agree with that. There are many reasons to use metrics - some good, some bad. It's all in how you use them.
Well, it's quite easy to show that, in an environment where you can't measure everything that's critical for success, using metrics in a motivational way will invariably lead to dysfunction (if they are effective at all).
There is also some strong evidence that extrinsic motivation is quite effective at destroying intrinsic motivation - and the latter is much more effective and sustainable.
So, as far as I can tell, motivational metrics (and I guess that's what was meant by control metrics) *are* a bad idea.