An HTTP page (say A) contains many links. Is it possible to process the links in A one after another, as follows: get the next link in A, say it is B. If B itself contains many links, can we open each link in B and write them to a separate file?
The program 'wget' will spider through all of the links on a page and download each one; it's the tool mirror sites use to update themselves. I've never seen a Java version of it. You might want to take a look at http://jakarta.apache.org/commons/httpclient.
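To sketch the idea in plain Java: the core of the task is extracting the href values from a page's HTML, then repeating the same step for each link found. The snippet below uses a naive regular expression rather than a real HTML parser, and the sample page content is hard-coded; in a real program you would fetch the HTML over the network (e.g. with java.net.URL.openStream() or the HttpClient library mentioned above) and write B's links out with a FileWriter. Treat it as a rough illustration, not a robust crawler.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class LinkExtractor {

    // Naive pattern matching href="..." attributes. A real crawler should
    // use an HTML parser; this is only enough to illustrate the approach.
    private static final Pattern HREF =
        Pattern.compile("href\\s*=\\s*\"([^\"]+)\"", Pattern.CASE_INSENSITIVE);

    // Returns every href value found in the given HTML text.
    public static List<String> extractLinks(String html) {
        List<String> links = new ArrayList<String>();
        Matcher m = HREF.matcher(html);
        while (m.find()) {
            links.add(m.group(1));
        }
        return links;
    }

    public static void main(String[] args) {
        // Hypothetical content of page A; in practice this string would
        // come from reading the page over HTTP.
        String pageA = "<a href=\"http://example.com/B\">B</a>"
                     + "<a href=\"http://example.com/C\">C</a>";

        for (String link : extractLinks(pageA)) {
            System.out.println(link);
            // For each link B: fetch B's HTML the same way, call
            // extractLinks on it, and write the results to a separate
            // file, e.g. with a java.io.FileWriter.
        }
    }
}
```

Following links recursively this way needs a visited-set to avoid loops, which is one of the things wget handles for you.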