too much time to write tests

enric jaen
Greenhorn

Joined: Oct 15, 2010
Posts: 25
I am using TDD to write J2EE code with Spring, StrutsTestCase, Hibernate and so on, and I find that writing the tests takes too much time (JUnit configuration, mock creation, writing the test, configuring the database, ...).
I realize that tests add quality to the code, but is it really agile? What am I doing wrong?
Gregg Bolinger
GenRocket Founder
Ranch Hand

Joined: Jul 11, 2001
Posts: 15299
Years ago, I had an architect over our team that was pretty horrible. But she said one thing to me that I've never forgotten.

Me: "I don't have time to write all these tests"

Her: "You don't have time not to write them"

Writing good tests takes practice and, oftentimes, more time than they appear to be worth. But appearances can be deceiving. You may not be doing anything wrong at all. I could write a small book about every time a test has saved my butt. I could write a similar book about every time not writing a test has come back to bite me in the butt.

If you want to get more specific about your environment and how you're writing your tests, maybe we can help you reduce the pain a bit.


GenRocket - Experts at Building Test Data
Duc Vo
Ranch Hand

Joined: Nov 20, 2008
Posts: 254
Well, it's hard to say if you are doing something wrong without seeing your work. However, you are right: once you feel that you've spent too much time writing tests, you should review your work. Here are a few tips:
(1) How well does your code adhere to good programming principles and practices like separation of concerns, the single responsibility principle, the open-closed principle, etc.?
(2) Are you writing unit tests or functional tests every time?
(3) Do your classes have so many dependencies that you have to mock too many of them?
(4) How often do you need to use persistent storage (i.e. a database or file system) to run your tests? It should be as rarely as possible; never is best (see the sketch below).

Well, right now I can only think of these four. Let me know if it's useful for you anyway.
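
On point (4): one common way to keep DAO tests away from a real database is an in-memory engine such as HSQLDB. A minimal sketch, assuming only that the HSQLDB jar is on the test classpath (the table and values are made up for illustration):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import org.junit.Assert;
import org.junit.Test;

public class UserDaoInMemoryTest {

    @Test
    public void savesAndLoadsUser() throws Exception {
        // In-memory HSQLDB database; it vanishes when the JVM exits.
        Connection con = DriverManager.getConnection("jdbc:hsqldb:mem:testdb", "sa", "");
        Statement st = con.createStatement();
        st.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name VARCHAR(50))");
        st.execute("INSERT INTO users VALUES (1, 'enric')");

        ResultSet rs = st.executeQuery("SELECT name FROM users WHERE id = 1");
        Assert.assertTrue(rs.next());
        Assert.assertEquals("enric", rs.getString("name"));
        con.close();
    }
}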


“Everything should be made as simple as possible, but not simpler.” Albert Einstein
enric jaen
Greenhorn

Joined: Oct 15, 2010
Posts: 25
Well, I am writing tests for an already existing, not-TDD-written J2EE web application (JBoss applications are horrible to test because of JNDI). I have written tests for each layer: DAO (JUnit + Spring), EJBs (JUnit + Spring), actions (StrutsTestCase + Spring) and web (Selenium). I have dozens of integration tests in each layer (dao, dao+ejbs, dao+ejbs+action), and often some tests become obsolete. Modifying the DAO layer means writing new tests for each of the four layers. Methods typically have 4 or 5 dependencies (having dependencies is IMO a good sign of low coupling). Writing DAO tests without persistent storage is not an option for me, as the DAO beans already contain Hibernate queries.

And sometimes I simply don't notice use cases that I haven't considered, so they never get tested.

I definitely like the TDD approach, but as I said, modifying a few lines of code in one layer may mean rewriting tests in each of the layers above, which takes time. I may sometimes spend an hour writing a single test because of unexpected surprises (e.g. learning to mock static method calls).

I have read that a unit test should be written in two minutes. But how much time should an integration test take?

I guess that becoming an expert at writing tests efficiently takes time and experience.
Duc Vo
Ranch Hand

Joined: Nov 20, 2008
Posts: 254
Hi Enric,

I understand that you are not looking for a theory or a full list of good practices on how to write tests; a forum would not be enough for that. So I'll just stick to your case.

In your list of technologies, there is no mention of any mocking framework. Are you using one? (you should)

It doesn't seem you are writing unit tests but rather functional tests, hence the issue of changing code in one layer forcing changes to tests in other layers. Such scenarios should only occur if you change the API between layers (i.e. refactoring signatures of interface methods, adding/removing interface methods, changing attributes of transfer objects, etc.).

The main idea of writing a unit test is that you test only, and just only, the section of code that you are about to change, without worrying about its dependencies. All dependencies should be controllable, i.e. created/managed with some mocking framework. For example, if method myMethod() of class A depends on classes B, C, D and E, you should test myMethod() of class A with mocked objects of B, C, D and E in controllable states, because you are not testing B, C, D and E. This also forces you to think about what you want that code section to accomplish, instead of what happens if the dependencies change their behavior.
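
To make that concrete, here is a minimal sketch with Mockito (which Enric mentions using below); the classes A, B and C and their methods are hypothetical, just to show the shape of such a test:

import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.*;
import org.junit.Test;

public class ATest {

    @Test
    public void myMethodDelegatesToItsCollaborators() {
        // Mock the collaborators so only A's own logic is under test.
        B b = mock(B.class);
        C c = mock(C.class);
        when(b.lookup("key")).thenReturn("value");

        A a = new A(b, c); // constructor injection of the (mocked) dependencies

        assertEquals("value", a.myMethod("key"));
        verify(c).audit("key"); // A is expected to notify its collaborator C
    }
}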

It's also important to write testable code: private methods are generally not testable; methods which depend directly on class attributes are generally hard to test; methods which use direct connections to persistent storage (i.e. files, JDBC) are generally hard to test. Also, if dependencies are automatically injected into the object under test (i.e. via JNDI, annotations, etc.), you may want to consider creating protected setter methods to inject the mocked dependencies and make it testable.
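
For instance, a protected setter that exists only so tests can slip in a mock might look like this (OrderService, MailSender and Order are made-up names):

public class OrderService {

    private MailSender mailSender; // normally injected by the container

    // Protected setter, present solely so a test can hand in a mock.
    protected void setMailSender(MailSender mailSender) {
        this.mailSender = mailSender;
    }

    public void confirm(Order order) {
        // ... business logic ...
        mailSender.send(order.getCustomerEmail(), "Your order is confirmed");
    }
}

A test can then call setMailSender(mock(MailSender.class)) without any container running.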

Now, it may be easier to follow with some code samples. How about you post some code that you find hard to test, along with your current approach to testing it? Let's see how we can improve it.

Regards,
enric jaen
Greenhorn

Joined: Oct 15, 2010
Posts: 25
Thanks for your suggestions Duc,

Yes, I have tried Mockito, PowerMock and EasyMock for unit testing.

But I realise that, as you noticed, I am doing too much functional testing and not enough unit testing. I am now trying to invert the effort.

Also, I have reorganized things and added a script to run the tests automatically, as until now I was launching the test cases by hand.
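
For reference, one small way to script JUnit 4 runs is a tiny runner class like this (a sketch; the two test class names are placeholders for real ones):

import org.junit.runner.JUnitCore;
import org.junit.runner.Result;
import org.junit.runner.notification.Failure;

public class RunAllTests {
    public static void main(String[] args) {
        // Run the listed test classes and report any failures.
        Result result = JUnitCore.runClasses(MyDaoTest.class, MyActionTest.class);
        for (Failure failure : result.getFailures()) {
            System.out.println(failure);
        }
        System.out.println(result.wasSuccessful() ? "All tests passed" : "There were failures");
        System.exit(result.wasSuccessful() ? 0 : 1);
    }
}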

I guess it's harder to test an existing application (which was not designed to be testable at all) than to start an application from scratch using tests.

I can't give you a specific piece of code that I can't test; I manage it in the end. I guess it's a question of learning to mock specific situations (static methods, void methods, threads) and, ultimately, of my own speed and ability to organize and write test code.
Chris Janicki
Greenhorn

Joined: Aug 30, 2006
Posts: 21

I've been programming Java since 1997. I don't like formal testing for my types of projects, and here's why. I did it once with a team of four developers and a large code base (roughly 250,000 lines). For us, it was a waste of time: almost six months to start from scratch and build a complete testing solution using JUnit, etc. for our existing code. We found one bug during the initial test writing, and another bug later on. Yes, it would have gone faster if we had been testing experts and had written tests from the very beginning, but the moral of my story is that a formal testing framework is not the only way to produce good code.

The reason so few bugs were found is that our normal routines prevented bugs. We generally tested any new code or changes by feeding test data into our system and verifying the results manually. Since the code was fresh in our heads, we had a good idea of how to stress the code in question. The occasional missed condition would quickly be caught unexpectedly during some other test: each end-to-end test touched so many components that we were effectively doing lots of unintentional testing. (One slight advantage of doing it this way is that the test conditions are more random.)

There are lots of arguments for automated testing, but I don't believe saving time or a superior product is always a valid reason, especially when compared to good manual testing on some projects.

I found out later that I was not alone in this belief. I read a book that described the early programming world. There were two grand experiments: in the U.S., people were moving towards formal unit testing. In Japan (more of a powerhouse programming nation then), they were using pragmatic end-to-end manual testing of their systems. The result? Both methodologies produced the same quality and time to completion. (The author explained the reasons for Japan falling out of the software race; they had nothing to do with its testing methodology.)

So my advice: do what feels right for your project. I'm sure very rigid projects will fit nicely into a JUnit kind of approach, but more fluid projects may benefit from hands-on simulation of use instead.

Remember, in the end you are judged on how well your code works today, and tomorrow. No customer really cares how you achieve that.
Jeanne Boyarsky
author & internet detective
Marshal

Joined: May 26, 2003
Posts: 30938
There are two benefits to having an automated test suite. One is quality/time to completion. The other is regression. I agree that for some projects - the kind that are quick and dirty and will be in production only a few months or so - testing has less value. However, for longer-lived projects, you have to measure quality and time to completion for later releases before claiming that testing doesn't have value for the project.


[Blog] [JavaRanch FAQ] [How To Ask Questions The Smart Way] [Book Promos]
Blogging on Certs: SCEA Part 1, Part 2 & 3, Core Spring 3, OCAJP, OCPJP beta, TOGAF part 1 and part 2
Jonathan MBaker
Greenhorn

Joined: Jul 08, 2003
Posts: 7
Another alternative that I don't think has been mentioned in this thread yet is automated system tests. Since the application you are testing is a web application, you should be able to use a tool like Selenium to build a nice suite of tests at the application layer.

Selenium has a Firefox plugin that allows you to record user actions (i.e. log in, click button A, type into box B, etc.). Once those actions are recorded, you can export them as JUnit tests that you can set up to run nightly, hourly, etc.
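
An exported test ends up looking roughly like this (a sketch against the Selenium RC Java client; the URL, locators and credentials are made up, and a Selenium server is assumed to be running on localhost:4444):

import com.thoughtworks.selenium.DefaultSelenium;
import org.junit.After;
import org.junit.Assert;
import org.junit.Before;
import org.junit.Test;

public class LoginSmokeTest {

    private DefaultSelenium selenium;

    @Before
    public void setUp() {
        selenium = new DefaultSelenium("localhost", 4444, "*firefox", "http://myapp.example.com/");
        selenium.start();
    }

    @Test
    public void userCanLogIn() {
        selenium.open("/login");
        selenium.type("id=username", "demo");
        selenium.type("id=password", "secret");
        selenium.click("id=loginButton");
        selenium.waitForPageToLoad("30000");
        Assert.assertTrue(selenium.isTextPresent("Welcome"));
    }

    @After
    public void tearDown() {
        selenium.stop();
    }
}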

I have used this technique on a large legacy code base that had no real tests and was architected in such a way that it was very difficult to test (for all of the reasons you cite above: tight coupling, numerous dependencies, etc.).

The framework works with Ajax-enabled interfaces in addition to standard ones. It took a minimal amount of modification to make the recorded scripts re-runnable, and the benefit was tremendous. In my case we ran a suite of read-only actions in production as part of our install procedure to verify the install was correct. (Not what I would recommend, but I wasn't the boss.) If you have a test environment available, read and write actions can be run there at whatever frequency suits your situation.

I did this over two years ago, so Selenium's feature set could be even richer now.
Peter Nilsson
Greenhorn

Joined: Nov 04, 2010
Posts: 1
Well, it's a question of whether the developer must invest this time himself or leave it to specialized testers... Automated tests are really necessary if you need to know how your application reacts to repeated execution of the same operations.

Agile Development Tools Review

Sonny Gill
Ranch Hand

Joined: Feb 02, 2002
Posts: 1211

Another interesting discussion about TDD that I came across a while ago: http://www.artima.com/forums/flat.jsp?forum=106&thread=216434


The future is here. It's just not evenly distributed yet. - William Gibson
Consultant @ Xebia. Sonny Gill Tweets
Tobias Prinz
Greenhorn

Joined: Dec 18, 2008
Posts: 2
Dear Enric Jaen,

That is a story pretty close to my heart: I spent half a year as "embedded QA", which means I was the developer writing and extending the testing framework for an already existing piece of software, and I faced the same problem. I got that job because, well, I was the TDD guy (and nowadays I still am, just as a normal developer).

Here is what I came to believe after doing this job:
TDD does not work well when extending systems that were developed without this methodology, because usually those are very closely coupled and, well, ugly (a technical term describing something worse than close coupling, in the "you call your uncle daddy" kind of way - you know, code that says "don't try mocking this, because half your code is in here and half my code is in your part"). In my case, even the most basic actions needed sessions or database connections or, later on, OSGi services to be present. If you start mocking stuff there, you'll get to the point where you do not know whether you are looking for bugs in your mocks or in the actual system.

So I currently advocate (sounds good - not that anyone is listening) a gentle CYA approach (if you do not want to call it that, call it "separation of concerns" or simply "at least it is not my bundle's fault") for legacy systems:
  • Do TDD for the stuff that you build anew. Build your testing framework up as far as you need to verify your own stuff. Don't spend much more on tests than the usually recommended 50% of your time.
  • Do opportunistic testing for newly occurring bugs in the legacy part: extend your own framework, if possible, to cover the system from the bottom up. If not, don't feel ashamed to do top-down stuff from the highest layer (with Selenium in my case) and live with the fact that while this gives you a lot of coverage pretty quickly, you'll have to search for the actual bug much longer than if you had a unit test pointing to the right place.
  • Don't go through all layers (so that tests for layer N involve layers 1 to N-1) like you mentioned. If you start wondering why a bug occurs in tests for layer N-1 but not in layer N (or, even worse, the other way around), you've got too much complexity in your tests. And a simple API change (or, since "API change" sounds so unprofessional, let's call it "API clarification") means changing a lot of test code. Otherwise, as you point out, you will spend too much time on that.

I don't feel too bad about advocating this: if you manage to work on a legacy project long enough to see problems like these, it is probably not completely broken. In my case, the system is gradually getting better; new bundles are surprisingly easy to test, sometimes even developed using TDD. The old code gets improved whenever there is a bug (and time... hahaha). And the software sells well and has outpaced most of its competitors, which seems a good indicator that this mix of methodologies is not too shabby.

I also believe from my own experience that a non-trivial software project can reach 90% code coverage if started with TDD, and that a legacy project can be happy to get to about 50%. After that, the utility of additional tests declines rapidly.
Gabor Liptak
Greenhorn

Joined: Jul 14, 2009
Posts: 8
enric jaen wrote: configure database

If your unit tests depend on the real database, that could be part of the issue.
You might consider looking into JEE-specific unit testing tools like Smokestack (a project I initiated).
Pradeep bhatt
Ranch Hand

Joined: Feb 27, 2002
Posts: 8919

Writing tests takes time, so you will have to convince management that more time is needed. Most managers are open to this nowadays.

As we add more code, maintaining test cases becomes a nightmare, particularly since we need to search the test case files to check whether a case is already covered.


Groovy
Jan Cumps
Bartender

Joined: Dec 20, 2006
Posts: 2510
Do you know a faster way to test?
(and to retest when needed)


OCUP UML fundamental and ITIL foundation
youtube channel
R. Duval
Greenhorn

Joined: Sep 16, 2008
Posts: 11
In my humble opinion, there is a little misunderstanding here about what Test Driven Development and test coverage are. TDD is a different way of thinking about the development process, and it goes far beyond just having a couple of tests for a class.

In TDD we write the tests as soon as we get the component's interface; it's a way to think in advance about the component's dependencies, what the methods' algorithms will be, and better ways to decouple and evaluate our code's granularity. Based on our tests and our mocks' expectations, we build the new component knowing what it will have to adhere to. It's like a contract, except that the contract is a test. That's why TDD doesn't rely on integration tests, although it's a good practice to have them AFTER you build your component. Again, in my opinion, that is the great difference: to use TDD we have to build the tests BEFORE.
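
A tiny illustration of that "contract first" idea (PriceCalculator is a made-up class; the test is written before the class exists and drives its design):

import static org.junit.Assert.assertEquals;
import org.junit.Test;

// Written BEFORE PriceCalculator is implemented: it fails to compile at first,
// then the class is written until the test goes green.
public class PriceCalculatorTest {

    @Test
    public void appliesTenPercentDiscountFromOneHundredUp() {
        PriceCalculator calc = new PriceCalculator();
        assertEquals(90.0, calc.priceWithDiscount(100.0), 0.001); // discounted
        assertEquals(50.0, calc.priceWithDiscount(50.0), 0.001);  // below threshold, unchanged
    }
}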

When we have a legacy system, all we can do is build up or increase those components' test coverage, but in most cases you are not rethinking how you are going to rebuild a class; you just want to be sure it's working as it's supposed to.

Cya,
R. Duval
Greenhorn

Joined: Sep 16, 2008
Posts: 11
Jan Cumps wrote: Do you know a faster way to test?
(and to retest when needed)


It depends on what you are trying to test. Is it decoupled code? If it is, you can use mocking tools like Mockito, JMock, EasyMock and so on. If it's not, you should probably use a context. If your project uses Spring, that's half the answer: you should give spring-test a try, even if your project doesn't otherwise use that DI container. I usually work with a set of tools to create integration tests - HSQLDB, DBUnit, Spring Testing, StrutsTestCase, etc.; it all depends on the project's scope and what should be tested.
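
For example, a spring-test based integration test typically looks something like this (a sketch; the context file name and UserDao are made up, and the test context would point the DAO at something like an in-memory HSQLDB):

import static org.junit.Assert.assertTrue;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(locations = "classpath:test-context.xml") // hypothetical test wiring
public class UserDaoIntegrationTest {

    @Autowired
    private UserDao userDao; // injected from the test context

    @Test
    public void startsWithAnEmptyDatabase() {
        assertTrue(userDao.findAll().isEmpty());
    }
}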
     