
backing up 0.5 GB over the internet

 
Ranch Hand
Posts: 618
We back up our web app's database and compress it all into a file that's about half a gig in size. We want to beam that file down to our office every night, and that Linux box is set up as an FTP server, so I figure we could just go out there and get it over FTP. The production machines supposedly have 3MB up and 3MB down, and our office here can supposedly get up to 1MB downstream. It seems like under ideal circumstances, it might only take about 2-3 hours to FTP it down to here every night. Does this sound about right? Is there a faster way?

I know how to use Apache's HTTPClient, and my understanding is that HTTP is faster than FTP, but wouldn't that Linux machine have to be set up as a webserver? I suppose there's also the possibility of writing some simple client/server programs in Java just to transfer the bytes across, but would that be faster than FTP? And wouldn't we have to open up ports that are probably closed down on the router/firewall guarding the production machines?

Thanks for any suggestions...
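For what it's worth, a back-of-envelope sketch of the transfer time, assuming "1MB downstream" means 1 megabit per second and guessing at maybe 50% effective throughput once protocol overhead and contention are factored in (the efficiency figure is a made-up assumption, not a measurement):

```java
// Rough transfer-time estimate for a nightly 0.5 GB backup pull.
public class TransferEstimate {
    // Estimated transfer time in seconds for sizeBytes over a link of
    // linkMbps megabits/second, derated by an efficiency factor in (0, 1].
    static double seconds(long sizeBytes, double linkMbps, double efficiency) {
        double bits = sizeBytes * 8.0;
        return bits / (linkMbps * 1_000_000 * efficiency);
    }

    public static void main(String[] args) {
        long size = 512L * 1024 * 1024;                                  // ~0.5 GB
        System.out.printf("ideal (100%%):  %.0f min%n", seconds(size, 1.0, 1.0) / 60); // ~72 min
        System.out.printf("derated (50%%): %.0f min%n", seconds(size, 1.0, 0.5) / 60); // ~143 min
    }
}
```

So an hour and change in the ideal case, and the 2-3 hour guess looks about right once the link falls short of its rated speed.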
 
Bartender
Posts: 9626

Originally posted by Stephen Huey:
HTTP is faster than FTP,


Overall, while FTP may be very efficient for large file transfers, it is not the best protocol for the transfer of short, simple files.


but wouldn't that Linux machine have to be set up as a webserver?


Correct.


I suppose there's also the possibility of writing some simple client/server programs in Java just to transfer the bytes across, but would that be faster than FTP


Don't reinvent the wheel. Use a standard protocol, be it FTP or HTTP. That makes your code smaller, less error-prone, and easier to maintain. Since FTP is already optimized to handle large files, any speed gain from a proprietary method would be negligible, and really unnecessary since there's nobody waiting around for the data to arrive.


and also, wouldn't we have to open up ports that are probably closed down on the router/firewall guarding the production machines?


Correct. And introducing security holes to work around established standards is a Bad Idea.
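To illustrate how little code the standard-protocol route takes: the JDK's built-in URL handler already speaks ftp://, so a nightly pull can be a few lines of stock Java with no custom server on either end. A minimal sketch (host, credentials, and paths below are placeholders, and plain FTP sends the password in the clear, so keep this inside a trusted network):

```java
import java.io.InputStream;
import java.net.URL;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

public class NightlyPull {
    // Builds an ftp:// URL from its parts (all values used here are placeholders).
    static String ftpUrl(String user, String pass, String host, String remotePath) {
        return "ftp://" + user + ":" + pass + "@" + host + remotePath;
    }

    // Streams the remote file to disk via the JDK's built-in ftp:// protocol handler.
    static void download(String url, Path dest) throws Exception {
        try (InputStream in = new URL(url).openStream()) {
            Files.copy(in, dest, StandardCopyOption.REPLACE_EXISTING);
        }
    }

    public static void main(String[] args) throws Exception {
        if (args.length != 4) {
            System.out.println("usage: NightlyPull <user> <pass> <host> <remotePath>");
            return;
        }
        download(ftpUrl(args[0], args[1], args[2], args[3]), Paths.get("db.tar.gz"));
    }
}
```

Drop that in a nightly cron job and the whole "faster way" question mostly goes away; a library like Apache Commons Net's FTPClient would be the next step up if you need resume-on-failure or passive-mode control.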
 