test first

paul wheaton
Trailboss

Joined: Dec 14, 1998
Posts: 20730

I'm a few months into XP now. I'm trying to do more test first stuff, but ....
Test first makes a lot of sense. You know what you need to do, so you write the tests
for this thing that does not exist yet. The tests fail. You write the code that makes
the tests pass.
For tiny brain dead things, this always works without a hitch.
For anything significant, about 80% of my tests get thrown out because I decided to do it
a different way. Some people on my team choose to leave their code the "crappy way"
because it's too much work to rewrite the tests.
I find that when I'm left on my own, I still write stuff the old-fashioned way, and when
I feel good about the way things are looking, I will then write tests.
It seems a lot of respectable folks chant "test first", so I thought I would come here
and try to find the vibe on this. Do I just need more experience or are there times that
the tests can wait a bit?
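For anyone who hasn't seen the cycle spelled out, here is a rough sketch of red/green in JUnit; the PriceCalculator class and its discount rule are invented purely for illustration.

    import junit.framework.TestCase;

    // Step 1: a test for code that doesn't exist yet - it fails (red).
    public class PriceCalculatorTest extends TestCase {
        public void testTenPercentDiscountAtOneHundredOrMore() {
            PriceCalculator calc = new PriceCalculator();
            assertEquals(90.0, calc.discountedPrice(100.0), 0.001);
            assertEquals(50.0, calc.discountedPrice(50.0), 0.001);
        }
    }

    // Step 2: the simplest code that makes the test pass (green); refactor
    // afterwards with the test as a safety net.
    class PriceCalculator {
        double discountedPrice(double price) {
            return price >= 100.0 ? price * 0.9 : price;
        }
    }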


Frank Carver
Sheriff

Joined: Jan 07, 1999
Posts: 6920
I guess everyone has their own experiences, but for me I have noticed that the more I do "test first" (and particularly Beck's "test driven development", TDD) the quicker I develop things and the "better" my code gets (in that "lots of small stuff" way).
There are a few "mindset" things, of course. And there are some tools which make things easier.
The first mindset issue is that you have to go into it prepared to refactor all the time. Refactoring is one of those techniques which gets disproportionately easier the more frequently you do it, as the changes to be made are typically simpler and more localized.
The "tests" in test first really serve two aims: (1) to let you know when something works, so you can stop adding new code and (2) to keep an eye on the code while you refactor and make sure you don't inadvertently break it again.
One of the biggest misunderstandings with test-first occurs if you forget about the second goal, and stop coding as soon as you get the test working. A bit of thought would tell you that merely having a test says nothing about code quality or maintainability, just whether it passes or fails some tests. Full TDD emphasises the second goal much more, by recommending that you write any old code at all, even cut-n-paste (!), to get a test working as soon as possible, then refactor until you have eliminated any "duplication" in the code or the tests.
This approach would likely help in your situation. Encouraging refactoring to be an essential rather than optional step for every test/code cycle, and treating the tests as an equal partner in the refactoring should mean that any code API change should only really affect one place in the tests. If it affects more than one, that's duplication - take a moment or two to refactor the tests while the system still works as before, and then make the change.
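As a rough sketch of what I mean (the Account class and its methods here are invented for the example): if every test builds the object under test through a single helper, a constructor change touches exactly one place in the tests.

    import junit.framework.TestCase;

    public class AccountTest extends TestCase {
        // The only place in the tests that knows how to construct an Account;
        // if the Account API changes, only this helper needs editing.
        private Account newAccount() {
            return new Account("test-owner");
        }

        public void testDepositIncreasesBalance() {
            Account account = newAccount();
            account.deposit(50);
            assertEquals(50, account.getBalance());
        }

        public void testWithdrawDecreasesBalance() {
            Account account = newAccount();
            account.deposit(50);
            account.withdraw(20);
            assertEquals(30, account.getBalance());
        }
    }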
As for tools:
  • A good, quick test harness is essential (JUnit is the most popular, of course, but plenty of other approaches would work too).
  • A version control system which lets you frequently checkpoint the state of a whole working system and doesn't get in the way of refactoring is also a big help. I still don't know of one which is ideally suited, but several interesting attempts are in the works, and some IDEs seem to provide a just about acceptable wrapper round the likes of CVS.
  • An editor or IDE which helps with refactoring will make things smoother and (once you are up to speed) let you concentrate on the impact of changes rather than the mechanics of making them.


Thoughts?


    Ilja Preuss
    author
    Sheriff

    Joined: Jul 11, 2001
    Posts: 14112
    Frank, I agree.
    Paul, can you post an example of a test that later got "thrown out because you decided to do it a different way"? I think that would help a lot in discussing the issue. Thanks!


    Shubhrajit Chatterjee
    Ranch Hand

    Joined: Aug 23, 2001
    Posts: 356
    In our company ... we have our own defined process models ... we are a mature organization at CMM 5 ... and all models follow the philosophy that the test cases must be designed independent of the code ... especially the unit test cases. The system/integration test cases may be designed later ... taking a cue from the design of the system ... and I feel that this is the kind of logic that is corroborated by Pressman too...
    My arguments for independent test cases ...
    1. Testing serves the purpose of validation of the design ... so test cases can be prepared when the design of the system is set.
    2. If we design the test cases after we code, there will be a tendency to orient our test cases unconsciously towards our code ... I am not saying that everyone does it ... but there is a definite chance, and it will be a bad management practice ... it might leave a loophole through which projects might fail.
    3. Refactoring the code for maintainability is a very good idea ... I agree ... but we might also think that doing things properly the right way ... may also be another option ...
    4. Finally ... from a developer's point of view, there is nothing wrong in developing test cases late ... but it is a bad management practice ... if you do it ... proceed with caution

    Shubhrajit
    Frank Carver
    Sheriff

    Joined: Jan 07, 1999
    Posts: 6920
    Interesting response. Your CMM 5 seems quite different to the XP-like approaches I tend to use these days. I'm intrigued to compare how they differ and their strengths and weaknesses.
    Addressing your points:
    1. Testing serves the purpose of validation of the design ... so test cases can be prepared when the design of the system is set.
    In general, I agree. Except that you seem to imply that the whole system should be designed before any test cases are written. Can you clear up at what point(s) in your process and/or at what scale your design and unit testing happens? Do you design everything before you write/run any of your test cases? Do you design before you write all your code? Do you interleave writing tests, design sessions and coding, and if so, typically how often (several times an hour, day, week, month, etc.?) and at what level do you write/run tests, design and code before moving on to the next (each line, method, class, package, API, product ...)?
    2. If we design the test cases after we code, there will be a tendency to orient our test cases unconsciously towards our code
    I definitely agree with this. But I think that the same applies to the design/test interface. I tend to recommend writing each test before any design for that feature is done, and using that test to "drive" the design to meet it.
    3. Refactoring the code for maintainability is a very good idea ... I agree ... but we might also think that doing things properly the right way ... may also be another option ...
    How do you know/check that you have done your code "properly the right way"? This is one of the largest problems I have faced in the past. There are probably thousands or millions of different ways of writing code to pass any given test case. What metrics do you apply to your code to help you know whether the one you have is acceptably "right"? You also seem to imply that your process has some sort of proactive way of ensuring such code quality. I'd love to know more about this.
    4. Finally ... from a developer's point of view, there is nothing wrong in developing test cases late ... but it is a bad management practice ... if you do it ... proceed with caution
    Again, essentially I agree, except more so! I find that writing tests first is invaluable in driving the design and helping to ensure that the final product is not full of guesses at the future which are never properly used or tested.
    The only situations where I will routinely write tests after design and coding are:
  • to put a unit test "box" around some third party or legacy code which I don't trust or don't understand.
  • when a fault is discovered. In this case I first add and run a new test case which fails because of the fault, then code a fix and re-run all the tests to make sure I've fixed the new fault and not broken anything else, then refactor to make sure the new system is as simple as possible (and still passes all the tests after each change).
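    A rough sketch of that fault-first sequence (the TextUtil class and the null-handling fault are invented for illustration):

    import junit.framework.TestCase;

    // Step 1: a new test that fails because of the reported fault.
    public class SafeTrimRegressionTest extends TestCase {
        public void testTrimOfNullReturnsEmptyString() {
            assertEquals("", TextUtil.safeTrim(null));
        }
    }

    // Step 2: the fix that makes the new test (and all the old ones) pass.
    class TextUtil {
        static String safeTrim(String s) {
            return (s == null) ? "" : s.trim();
        }
    }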

    Ilja Preuss
    author
    Sheriff

    Joined: Jul 11, 2001
    Posts: 14112
    Thanks for joining the discussion!

    Originally posted by Shubhrajit Chatterjee:
    In our company ... we have our own defined process models ... we are a mature organization at CMM 5 ... and all models follow the philosophy that the test cases must be designed independent of the code ... especially the unit test cases. The system/integration test cases may be designed later ... taking a cue from the design of the system ...

    That certainly is one way to do it, yes.
    and I feel that this is the kind of logic that is corroborated by Pressman too...

    Are you referring to http://www.rspa.com/ ? From that webpage you are probably right.

    My arguments for independent test cases ...
    1. Testing serves the purpose of validation of the design ... so test cases can be prepared when the design of the system is set.

    That is one way to do it, but not the way of test first/test driven. When working test first, testing serves the purpose of *forming* the design. The design isn't set in advance, it is an ongoing activity while coding.

    2. If we design the test cases after we code, there will be a tendency to orient our test cases unconsciously towards our code ... I am not saying that everyone does it ... but there is a definite chance, and it will be a bad management practice ... it might leave a loophole through which projects might fail.

    OTOH you also can't write all-embracing test suites without intimate knowledge of the tested code, as things like special cases and boundary conditions might heavily depend on implementation details.
    Therefore you will need both types of tests: black box and white box. In XP, the black-box tests are the Acceptance Tests defined by the Customer in advance; the tests written by the developers in a test-first manner can be thought of as a kind of white-box test.
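    To illustrate (the RingBuffer class and its fixed capacity are invented for this sketch): only someone who knows the implementation would think to test the wrap-around boundary at the ninth element.

    import junit.framework.TestCase;

    // White-box boundary test: the interesting case is the ninth element,
    // which forces a hypothetical fixed-capacity buffer of eight entries to
    // evict its oldest one - a case no black-box spec would single out.
    public class RingBufferTest extends TestCase {
        public void testNinthElementEvictsOldest() {
            RingBuffer buffer = new RingBuffer(8);
            for (int i = 0; i < 9; i++) {
                buffer.add("item" + i);
            }
            assertEquals(8, buffer.size());
            assertEquals("item1", buffer.oldest()); // "item0" was evicted
        }
    }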

    3. Refactoring the code for maintainability is a very good idea ... I agree ... but we might also think that doing things properly the right way ... may also be another option ...

    Yes, you might think so.
    There are many people who tried both ways and to their own surprise discovered that working test first - that is, writing a small test, making it pass as quickly as possible and then refactoring to the best possible design, doing this in very small increments of a couple of minutes each - leads them to better designs faster.
    There are some forces at work here that make test first development so efficient, which we might discuss in detail if you are interested.

    4. Finally ... from a developer's point of view, there is nothing wrong in developing test cases late ...

    IMO there *is* *much* wrong from a developer's point of view with late testing - it delays very critical feedback.
    Regards, Ilja
    Shubhrajit Chatterjee
    Ranch Hand

    Joined: Aug 23, 2001
    Posts: 356
    Originally posted by Frank Carver:
    Interesting response. Your CMM 5 seems quite different to the XP-like approaches I tend to use these days. I'm intrigued to compare how they differ and their strengths and weaknesses.
    Addressing your points:
    1. Testing serves the purpose of validation of the design ... so test cases can be prepared when the design of the system is set.
    In general, I agree. Except that you seem to imply that the whole system should be designed before any test cases are written. Can you clear up at what point(s) in your process and/or at what scale your design and unit testing happens? Do you design everything before you write/run any of your test cases? Do you design before you write all your code? Do you interleave writing tests, design sessions and coding, and if so, typically how often (several times an hour, day, week, month, etc.?) and at what level do you write/run tests, design and code before moving on to the next (each line, method, class, package, API, product ...)?

    1. When you are following a non-recursive model, your design is complete before development. In case there is a design change, it is generally called a design BUG. To fix it, corresponding changes are made in the test cases and any other related documents/work products.
    2. Design scale depends on the nature of the project, and cannot be generalized. We have a set of process models to address each type of project. On a Java project, unit testing happens at the scale of one class or piece of functionality. In the case of COBOL, it may be at the scale of one program. Developers do their unofficial testing to check functionality during coding, but the official set of test cases is run at the end of coding of the module.
    3. Automated tests are not yet a part of our process models, though I think they should be. Definitely it should happen soon. Our senior process architects are also looking at XP-like processes to make them a part of our process model. We do have a recursive model in place, but it has a small constraint ... the high-level design is not allowed to recurse.
    4. I agree with you that the detailed design of the system is an ideal case, but with sufficient knowledge of the business domain, user profile, and technology, we can move close to the ideal scenario.

    2. If we design the test cases after we code, there will be a tendency to orient our test cases unconsciously towards our code
    I definitely agree with this. But I think that the same applies to the design/test interface. I tend to recommend writing each test before any design for that feature is done, and using that test to "drive" the design to meet it.
    I see your point. I personally feel the same, but it is not possible to get ALL the validation rules from the requirements alone.

    3. Refactoring the code for maintainability is a very good idea ... I agree ... but we might also think that doing things properly the right way ... may also be another option ...
    How do you know/check that you have done your code "properly the right way"? This is one of the largest problems I have faced in the past. There are probably thousands or millions of different ways of writing code to pass any given test case. What metrics do you apply to your code to help you know whether the one you have is acceptably "right"? You also seem to imply that your process has some sort of proactive way of ensuring such code quality. I'd love to know more about this.
    This is the current goal of modern quality management. Modern quality paradigms like Six Sigma try to address this issue. I am not saying that I do things right the first time. I am just stating that we should try to do things the right way, and refactor when it is not possible due to any constraints.
    4. Finally ... from a developer's point of view, there is nothing wrong in developing test cases late ... but it is a bad management practice ... if you do it ... proceed with caution
    Again, essentially I agree, except more so! I find that writing tests first is invaluable in driving the design and helping to ensure that the final product is not full of guesses at the future which are never properly used or tested.
    The only situations where I will routinely write tests after design and coding are:
  • to put a unit test "box" around some third party or legacy code which I don't trust or don't understand.
  • when a fault is discovered. In this case I first add and run a new test case which fails because of the fault, then code a fix and re-run all the tests to make sure I've fixed the new fault and not broken anything else, then refactor to make sure the new system is as simple as possible (and still passes all the tests after each change).


    I like your idea.
    Shubhrajit Chatterjee
    Ranch Hand

    Joined: Aug 23, 2001
    Posts: 356
    Thanks for joining the discussion!
    My pleasure, sir.
    --------------------------------------------------------------------------------
    In our company ... we have our own defined process models ... we are a mature organization at CMM 5 ... and all models follow the philosophy that the test cases must be designed independent of the code ... especially the unit test cases. The system/integration test cases may be designed later ... taking a cue from the design of the system ...
    --------------------------------------------------------------------------------
    That certainly is one way to do it, yes.
    --------------------------------------------------------------------------------
    and I feel that this is the kind of logic that is corroborated by Pressman too...
    --------------------------------------------------------------------------------
    Are you referring to http://www.rspa.com/ ? From that webpage you are probably right.

    Yes, Roger Pressman is very highly esteemed in all major software development companies in India.

    --------------------------------------------------------------------------------
    My arguments for independent test cases ...
    1. Testing serves the purpose of validation of the design ... so test cases can be prepared when the design of the system is set.

    --------------------------------------------------------------------------------
    That is one way to do it, but not the way of test first/test driven. When working test first, testing serves the purpose of *forming* the design. The design isn't set in advance, it is an ongoing activity while coding.
    Why should it be so? We can always have a design and a testing artifact. If we need to revisit design while coding, then it is a design BUG. It is normal that it should be done a certain no of times, but beyond a certain point it becomes a cause of concern.
    At CMM 4 or 5 companies there are benchmarks for defect density, based on metrics collected organization wide. If the defect density goes outside upper benchmark, then it is a cause of concern.

    --------------------------------------------------------------------------------
    2. If we design the test cases after we code, there will be a tendency to orient our test cases unconsciously towards our code ... I am not saying that everyone does it ... but there is a definite chance, and it will be a bad management practice ... it might leave a loophole through which projects might fail.
    --------------------------------------------------------------------------------
    OTOH you also can't write all-embracing test suites without intimate knowledge of the tested code, as things like special cases and boundary conditions might heavily depend on implementation details.
    Therefore you will need both types of tests: black box and white box. In XP, the black-box tests are the Acceptance Tests defined by the Customer in advance; the tests written by the developers in a test-first manner can be thought of as a kind of white-box test.
    Nice point. I should have thought about this too. I agree with you.
    --------------------------------------------------------------------------------
    3. Refactoring the code for maintainability is a very good idea ... I agree ... but we might also think that doing things properly the right way ... may also be another option ...
    --------------------------------------------------------------------------------
    Yes, you might think so.
    There are many people who tried both ways and to their own surprise discovered that working test first - that is, writing a small test, making it pass as quickly as possible and then refactoring to the best possible design, doing this in very small increments of a couple of minutes each - leads them to better designs faster.
    There are some forces at work here that make test first development so efficient, which we might discuss in detail if you are interested.
    At the lowest level it does not matter. I am not saying that you can write code with no syntax errors or performance bottlenecks at the coding level - that would be silly. I meant that when you complete your coding stage and put out your work product (code) for the subsequent stage, it should ideally not need any refactoring, and any refactoring should be considered a bug.

    quote:
    --------------------------------------------------------------------------------
    4. Finally ... from a developer's point of view, there is nothing wrong in developing test cases late ...
    --------------------------------------------------------------------------------
    IMO there *is* *much* wrong from a developer's point of view with late testing - it delays very critical feedback.
    We developers think that we will do things right! In fact, I have known developers who believe that independent testing is a waste of time!!
    Ilja Preuss
    author
    Sheriff

    Joined: Jul 11, 2001
    Posts: 14112
    Originally posted by Shubhrajit Chatterjee:
    That is one way to do it, but not the way of test first/test driven. When working test first, testing serves the purpose of *forming* the design. The design isn't set in advance, it is an ongoing activity while coding.
    Why should it be so? We can always have a design and a testing artifact. If we need to revisit design while coding, then it is a design BUG. It is normal that it should be done a certain no of times, but beyond a certain point it becomes a cause of concern.

    The point is, the best way to learn about the quality of a design is to try to implement it - no amount of abstract thinking and reasoning can make up for that. Therefore XP strives to shorten the feedback, starting as early as possible with coding the design, so that you can put what you learned back into the design.
    Martin Fowler wrote a good article on the subject: http://martinfowler.com/articles/designDead.html

    At CMM 4 or 5 companies there are benchmarks for defect density, based on metrics collected organization wide. If the defect density goes outside upper benchmark, then it is a cause of concern.


    I am not clear about the connection of defect density to design - could you please elaborate?


    At the lowest level it does not matter. I am not saying that you can write code with no syntax errors or performance bottlenecks at the coding level - that would be silly. I meant that when you complete your coding stage and put out your work product (code) for the subsequent stage, it should ideally not need any refactoring, and any refactoring should be considered a bug.


    Yes, I agree. Ideally refactoring isn't something you do in its own phase, after coding. It is something that you do continuously *while* coding, as *part of* coding. As Ron Jeffries puts it: if you can tell how much of your time you spend refactoring, you aren't doing enough of it.

    We developers think that we will do things right! In fact, I have known developers who believe that independent testing is a waste of time!!

    Well, the goal of a well-functioning XP team in fact *is* to make independent testing a waste of time - by getting really good at internal testing. There are teams reporting less than one bug escaping the team every couple of months...
    But I guess that differs somewhat from the intention of the developers you quoted...
    Shubhrajit Chatterjee
    Ranch Hand

    Joined: Aug 23, 2001
    Posts: 356
    quote:
    --------------------------------------------------------------------------------
    Originally posted by Shubhrajit Chatterjee:
    That is one way to do it, but not the way of test first/test driven. When working test first, testing serves the purpose of *forming* the design. The design isn't set in advance, it is an ongoing activity while coding.
    Why should it be so? We can always have a design and a testing artifact. If we need to revisit design while coding, then it is a design BUG. It is normal that it should be done a certain no of times, but beyond a certain point it becomes a cause of concern.

    --------------------------------------------------------------------------------
    The point is, the best way to learn about the quality of a design is to try to implement it - no amount of abstract thinking and reasoning can make up for that. Therefore XP strives to shorten the feedback, starting as early as possible with coding the design, so that you can put what you learned back into the design.
    This is not new ... it has already been proposed in many evolutionary process models. It definitely helps if the technology is not new, but strong domain knowledge will help in creating a good design.
    I am not in a position to comment on XP as I have only rudimentary knowledge of XP. However, I find it quite revolutionary to do design while coding.

    Martin Fowler wrote a good article on the subject: http://martinfowler.com/articles/designDead.html
    Nice article. I gained some knowledge, and I might use some of the ideas myself, even if I am not using an XP process.
    quote:
    --------------------------------------------------------------------------------
    At CMM 4 or 5 companies there are benchmarks for defect density, based on metrics collected organization wide. If the defect density goes outside upper benchmark, then it is a cause of concern.

    --------------------------------------------------------------------------------
    I am not clear about the connection of defect density to design - could you please elaborate?
    If the design changes at a later stage in the process (in non-iterative models), then it is a design bug. Too many design changes will raise the defect density.
    Ilja Preuss
    author
    Sheriff

    Joined: Jul 11, 2001
    Posts: 14112
    Originally posted by Shubhrajit Chatterjee:
    Nice article. I gained some knowledge, and I might use some of the ideas myself, even if I am not using an XP process.

    Too many design changes will raise the defect density.

    Possibly.
    OTOH, too few design changes will increase defect density, too! That is, every time you see how you could improve the design, but don't do so (because you fear introducing a bug, for example), the design will get more complicated than necessary. And a more complicated design will almost invariably also mean more bugs. This is especially (but not exclusively) true when trying to incorporate a feature that wasn't considered in the original design.
    So, instead of fixing the design as early as possible to prevent bugs introduced by changes, XP goes the opposite way: design changes are required when they make the design simpler. Bugs are prevented by
    - doing changes in very small steps
    - extensive and frequent testing
    - continuous peer reviewing through pair programming
    - and by getting good at it through lots of practice.
    There are many teams where this seems to work very well. Exciting, isn't it?
    Shubhrajit Chatterjee
    Ranch Hand

    Joined: Aug 23, 2001
    Posts: 356
    It seems we have come to a graceful conclusion to this thread (unless someone wants to add some new insights)!
     