I have been tasked with writing a process that will make multiple FTP connections to multiple servers.
1. The FTP process will connect to multiple servers
2. All connections will be kept alive at all times (if possible)
3. Each connection will require different logic when uploading files. For example, files sent to Server A will be uploaded into a temporary directory and then moved to a different directory, while files uploaded to Server B will have to be named in the format ["IBM343"][Date] etc.
4. Some servers will require uploads in binary format and some will require ASCII only
The options that I am thinking of are as follows.
1. Have one object that forks multiple threads. Each thread will manage a specific FTP connection, and the logic to handle that specific FTP server will live in the thread.
2. Have one object that manages several objects, each of which manages an FTP connection.
I am curious to hear how you would approach this. Can you think of other, more efficient design approaches that are better than those listed above?
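For what it's worth, option 1 can be sketched roughly like this. All names here are invented for illustration, and the real per-server FTP work is stubbed out; the point is just the shape of a manager that owns one worker per server:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Hypothetical sketch of option 1: one manager, one worker thread per server.
class FtpManager {

    // Starts one worker per server and waits for them to finish.
    // Returns the servers that were actually handled, so the effect is visible.
    static List<String> startAll(List<String> servers) throws InterruptedException {
        List<String> started = Collections.synchronizedList(new ArrayList<>());
        ExecutorService pool = Executors.newFixedThreadPool(servers.size());
        for (String server : servers) {
            pool.submit(() -> {
                // A real worker would connect here, keep the connection alive
                // (e.g. with periodic NOOPs), and process queued uploads using
                // that server's specific logic.
                started.add(server);
            });
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        return started;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(startAll(List.of("serverA", "serverB")));
    }
}
```

An `ExecutorService` keeps the thread management out of your code; the per-server logic would go inside each submitted task.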
Hmm, a shame I missed this earlier; it's a topic familiar to me (I've written an FTP JCA connector).
1.) Have you investigated using an open source library to do some of the heavy lifting here? E.g. the Apache Commons Net library.
2.) If you're running on a JEE/J2EE application server you could use one of the JCA FTP connectors that are out there.
That way much of the complicated threading and resource handling is done for you by the app server.
I will be using the org.apache.commons.net.ftp component to manage the FTP processing. The main issue I am struggling with is coming up with a design that would work well for multiple FTP servers with different requirements.
For example, Server A requires that files are "put" in the directory /ibm/incoming/, while Server B requires that files are put in a different directory. Server A requires processing of binary files whereas Server B will only require ASCII files. Files retrieved from Server A will usually be more than one file, whereas on Server B there will only be one file to pick up.
Obviously I will need some kind of generic class that handles the putting and getting, plus specific concrete classes that handle the logic dependent on which server is being processed. I am trying to come up with a solution that is modular, extensible and easy to maintain.
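As a rough sketch of that generic/concrete split (all class names are made up, the actual FTP calls are omitted, and Server B's naming format is assumed from the earlier description), a template-method style base class might look like:

```java
import java.time.LocalDate;

// Generic base: owns the common upload flow; subclasses supply per-server rules.
abstract class FtpJob {
    enum Mode { BINARY, ASCII }

    abstract String targetDirectory();
    abstract Mode transferMode();

    // Default: keep the local name; servers with naming rules override this.
    String remoteName(String localName) { return localName; }

    // The shared upload flow. The real FTP "put" is omitted here; we just
    // compute and return the remote path so the per-server logic is visible.
    final String upload(String localName) {
        return targetDirectory() + "/" + remoteName(localName);
    }
}

// Server A: binary transfers into /ibm/incoming.
class ServerAJob extends FtpJob {
    @Override String targetDirectory() { return "/ibm/incoming"; }
    @Override Mode transferMode() { return Mode.BINARY; }
}

// Server B: ASCII transfers, files renamed to [IBM343][Date]... (format assumed).
class ServerBJob extends FtpJob {
    @Override String targetDirectory() { return "/inbound"; }
    @Override Mode transferMode() { return Mode.ASCII; }
    @Override String remoteName(String localName) {
        return "[IBM343][" + LocalDate.now() + "]" + localName;
    }
}
```

Adding a new server then means adding one small subclass, while the connection handling and common flow stay in one place.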
It sounds reasonably sensible. What we have (a simplistic view) is a generic FTP connector that can perform all of the standard FTP commands and then a series of clients for that FTPConnector. Each client has its own specific behaviour, with any common behaviour put into a base class. We use standard naming and configuration to ensure a common way of configuring each FTP connection.
What immediately comes to my mind is the decorator pattern.
I.e., when you have a new kind of methodology to be used while uploading/downloading files, you can decorate the FTPHandler with a new "decorator" or combination of decorators. This way you add decorations at runtime rather than creating a new class for every new type of methodology that exists, which in turn would reduce code reusability and leave the classes tightly coupled to one particular kind of methodology.
Using a decorator also gives you the additional flexibility of making a change to one base class and having it reflected in all the decorated members as well. (You can use the java.io package as a reference for the decorator pattern, in case you are not familiar with it.)
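A minimal sketch of that decorator idea. The names here are invented, and `upload()` returns the final remote path rather than doing real FTP work, so the effect of each decoration is visible:

```java
// The component interface every handler and decorator implements.
interface FtpHandler {
    String upload(String name);
}

// The plain handler: uploads straight into the target directory.
class BasicFtpHandler implements FtpHandler {
    @Override public String upload(String name) { return "/incoming/" + name; }
}

// Base decorator: wraps any FtpHandler and delegates to it.
abstract class FtpHandlerDecorator implements FtpHandler {
    protected final FtpHandler inner;
    FtpHandlerDecorator(FtpHandler inner) { this.inner = inner; }
}

// Adds a server-specific prefix to the file name (Server B's rule).
class PrefixNaming extends FtpHandlerDecorator {
    private final String prefix;
    PrefixNaming(FtpHandler inner, String prefix) {
        super(inner);
        this.prefix = prefix;
    }
    @Override public String upload(String name) {
        return inner.upload(prefix + name);
    }
}

// Stages the upload via a temporary directory first (Server A's rule);
// here we just record the detour in the returned path for demonstration.
class ViaTempDir extends FtpHandlerDecorator {
    ViaTempDir(FtpHandler inner) { super(inner); }
    @Override public String upload(String name) {
        String tmp = "/tmp/" + name;  // real code would put here, then rename
        return inner.upload(name) + " (staged at " + tmp + ")";
    }
}
```

So `new ViaTempDir(new PrefixNaming(new BasicFtpHandler(), "[IBM343]")).upload("file.txt")` yields `/incoming/[IBM343]file.txt (staged at /tmp/file.txt)`, and any new behaviour is just another decorator composed at runtime.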