I have been tasked with creating a framework for testing the web services of a new project. Rather than buying an off-the-shelf solution such as LISA, my manager wants me to roll my own using JUnit and Ant.
One idea I have is to create JUnit TestCases which have the XML requests and SOAP calls as fixture objects, and test*() methods which make the requests and validate the responses. These test cases will be run by Ant from a <junit> task and the results aggregated into an HTML report via a <junitreport> task.
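To make the idea concrete, here is a minimal sketch of what one such test could look like, stripped of the JUnit and HTTP plumbing so it stands alone. The operation name (`GetQuote`), namespace, and response shape are all hypothetical placeholders, not anything from the question; in a real test*() method you would POST the envelope to your endpoint and run the extraction against the live response instead of a canned one.

```java
import java.io.StringReader;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.xml.sax.InputSource;

public class SoapSmokeTest {

    // Hypothetical operation/namespace; substitute your service's WSDL details.
    static String buildEnvelope(String operation, String symbol) {
        return "<?xml version=\"1.0\" encoding=\"UTF-8\"?>"
            + "<soap:Envelope xmlns:soap=\"http://schemas.xmlsoap.org/soap/envelope/\">"
            + "<soap:Body>"
            + "<" + operation + " xmlns=\"http://example.com/quotes\">"
            + "<symbol>" + symbol + "</symbol>"
            + "</" + operation + ">"
            + "</soap:Body></soap:Envelope>";
    }

    // Pull a single element's text out of the response XML for assertions.
    static String extractValue(String responseXml, String tag) throws Exception {
        DocumentBuilderFactory f = DocumentBuilderFactory.newInstance();
        f.setNamespaceAware(true);
        Document doc = f.newDocumentBuilder()
            .parse(new InputSource(new StringReader(responseXml)));
        return doc.getElementsByTagName(tag).item(0).getTextContent();
    }

    public static void main(String[] args) throws Exception {
        String request = buildEnvelope("GetQuote", "IBM");
        // In the real fixture you would POST 'request' with HttpURLConnection
        // (or a SOAP client) and read back the actual response body.
        String cannedResponse =
            "<soap:Envelope xmlns:soap=\"http://schemas.xmlsoap.org/soap/envelope/\">"
          + "<soap:Body><GetQuoteResponse xmlns=\"http://example.com/quotes\">"
          + "<price>42.50</price></GetQuoteResponse></soap:Body></soap:Envelope>";
        System.out.println(extractValue(cannedResponse, "price"));
    }
}
```

With the request-building and response-validation logic factored into helpers like these, the test*() methods themselves stay short: build, send, extract, assertEquals.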
Does this sound reasonable, or is there a better way / best practice for doing this? I am new to SOAP web services (I am used to plain XML over HTTP), so perhaps there are SOAP-related issues I'm overlooking.
Tests are often started from Ant, so what you describe sounds fine. If for some reason you need more flexibility in the Ant/test integration (since you say that using Ant is important), you could pass the operation to invoke, its parameters, and the expected result from the build file as parameters to the test class. You couldn't use the <junit> task in that case, though - you'd need to start the test using a regular <java> invocation.
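A sketch of what such a parameterized driver might look like, assuming the build file sets system properties on the <java> task (e.g. <sysproperty key="op" value="GetQuote"/>); the class name, property keys, and defaults are all illustrative, and the SOAP call itself is stubbed out so the sketch runs standalone:

```java
// Hypothetical driver launched from Ant via <java classname="SoapTestDriver"
// failonerror="true">, with the scenario supplied as -D system properties.
public class SoapTestDriver {
    public static void main(String[] args) {
        String op       = System.getProperty("op", "GetQuote");
        String param    = System.getProperty("param", "IBM");
        String expected = System.getProperty("expected", "42.50");

        // Here you would build the SOAP request for 'op', send it, and parse
        // the response; 'actual' stands in for the parsed result.
        String actual = expected; // placeholder so the sketch is self-contained

        if (expected.equals(actual)) {
            System.out.println("PASS " + op + "(" + param + ")");
        } else {
            System.out.println("FAIL " + op + ": expected "
                + expected + ", got " + actual);
            // A non-zero exit code lets failonerror="true" break the build.
            System.exit(1);
        }
    }
}
```

The trade-off is that you give up <junitreport>'s HTML aggregation, so the driver's own output (and exit code) becomes your reporting mechanism.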
Check out SoapUI. You can write semi-intelligent SOAP conversations (i.e. part of the response builds the next request; you can test the format and data in the result, pass/fail, etc.), and it can also run suites of tests from the command line and report the output. One thing that was missing - and I did log it with them - was the ability to integrate the results with JUnit/CruiseControl, but I haven't checked in a while to see if they did anything about it.