How do you refactor test code? I don't think the double-entry bookkeeping analogy holds up under scrutiny, because the tests and the code are not equal and opposite: the tests check the code, but the code was not written to check the tests. For example, if you delete a test by mistake, nothing will tell you that it's gone. When refactoring test code (which you need to do frequently), how do you prevent tests from being mistakenly deleted?

David
Originally posted by David Peterson: When refactoring test code (which you need to do frequently) how do you prevent tests from being mistakenly deleted?
You are right that the double-entry bookkeeping analogy fails in this regard. But there are some things you can do to prevent this mistake from happening:

- Refactor in very small steps. The smaller the steps, the easier it is to keep track of what to do to conserve test coverage.
- Work in a pair. I often find it easier to keep track of a refactoring when I'm not the one sitting at the keyboard.
- Use coverage measurement tools. They are available in many different forms. Use them with caution, though - 100% coverage is not necessarily what you want to achieve.
- Probably most important: reflect on the mistakes you make and learn from them.

Does that help?
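The "very small steps" advice can be made concrete. Here is a minimal sketch (the `Account` class and test names are invented for illustration) of extracting duplicated test setup into a helper one tiny step at a time, re-running the suite after each step so the assertions, and therefore the coverage, are never silently lost:

```python
import unittest

class Account:
    """Toy class under test (invented for this example)."""
    def __init__(self, balance=0):
        self.balance = balance

    def deposit(self, amount):
        self.balance += amount

class AccountTest(unittest.TestCase):
    # Refactoring in small steps:
    #   step 1 - add the helper alongside the duplicated setup; run: green
    #   step 2 - switch one test over to the helper; run: green
    #   step 3 - switch the other test; run: green
    # No assertion is ever touched, so test coverage is conserved throughout.
    def make_account(self):
        return Account(balance=100)

    def test_deposit_increases_balance(self):
        account = self.make_account()
        account.deposit(50)
        self.assertEqual(account.balance, 150)

    def test_zero_deposit_leaves_balance_unchanged(self):
        account = self.make_account()
        account.deposit(0)
        self.assertEqual(account.balance, 100)

# Run the suite after every step to confirm nothing was lost.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(AccountTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

The point is not the helper itself but the rhythm: each step is small enough that a dropped test would be noticed immediately, either by the runner's test count or by a coverage tool.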
Originally posted by Ilja Preuss:
Does that help?
That's good advice, thanks. Do you ever write tests for your tests? I've been thinking that maybe I should, because I find that I refactor the tests just as much as I refactor the code, but I don't have quite the same safety net. Perhaps I worry too much.

David
Originally posted by David Peterson: That's good advice. Thanks.
You're welcome! Two more points:

- Even if a coverage metric shows you 100% coverage, that doesn't mean you wrote the right tests... So I would advise you to use these tools to point you to interesting parts of your code - but don't trust them blindly!
- Unit tests should generally be simpler than the code they are testing. If you regularly get them wrong, you should check your testing strategy.
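One way to read "tests should be simpler than the code they test": the code under test may branch, but each test should be straight-line, with one input and one obvious expected value. A hypothetical sketch (the `shipping_cost` function and its pricing rules are invented for illustration):

```python
def shipping_cost(weight_kg, express=False):
    """Invented example: code with branching logic."""
    if weight_kg <= 0:
        raise ValueError("weight must be positive")
    # Flat rate up to 1 kg, then 2.0 per additional kg; express doubles it.
    base = 5.0 if weight_kg <= 1 else 5.0 + 2.0 * (weight_kg - 1)
    return base * 2 if express else base

# Each test is simpler than the code: no loops, no branches, no arithmetic
# that mirrors the implementation - just a literal input and a literal answer.
assert shipping_cost(1) == 5.0
assert shipping_cost(3) == 9.0
assert shipping_cost(1, express=True) == 10.0
```

If a test needs its own loops or conditionals to compute the expected value, that is often the "check your testing strategy" signal Ilja mentions: the test has started to re-implement the code instead of pinning down examples.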
Do you ever write tests for your tests?
No. Well, if you are doing full XP, yes, indirectly: you are writing acceptance tests, defined by the customer, so if a bug slips through your unit tests, you have an additional safety net. *I* am still struggling with establishing unit tests at my workplace, though...
David Peterson wrote: Do you ever write tests for your tests?

Not at the moment, but I have been toying with trying something like Jester. It's smarter than plain code coverage: it's a "mutation" tester - it changes chunks of your code to see if a test fails. If no test fails, you get an error!