
Data corruption while using I/O streams and sockets in Java

 
Greenhorn
Posts: 2
The application I am developing is a typical client-server app using Java sockets over TCP/IP. I am using a BufferedInputStream and a byte array of size 1024, and I will be transferring multiple files one after the other. Information about each file is sent as a header whose fields are split by delimiters; this info is parsed to extract the file name and the file size. The header is built by the same convention on the sending side and written to the byte stream before the file itself is sent.
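A minimal sketch of the setup described above (class, method, and field names are mine, and in-memory byte streams stand in for the socket streams): the sender writes a delimiter-separated header of name and size, and the receiver parses the header and then reads exactly that many bytes, honoring the count returned by each read() call rather than assuming the buffer is filled.

```java
import java.io.*;
import java.nio.charset.StandardCharsets;

public class HeaderedTransfer {

    // Write a header of the form "name|size|" followed by the file bytes.
    static void sendFile(OutputStream out, String name, byte[] data) throws IOException {
        byte[] header = (name + "|" + data.length + "|").getBytes(StandardCharsets.UTF_8);
        out.write(header);
        out.write(data);
        out.flush();
    }

    // Read one delimiter-terminated header field, byte by byte.
    static String readField(InputStream in) throws IOException {
        StringBuilder sb = new StringBuilder();
        int b;
        while ((b = in.read()) != -1 && b != '|') {
            sb.append((char) b);
        }
        return sb.toString();
    }

    // Parse the header, then read exactly 'size' payload bytes.
    static byte[] receiveFile(InputStream in) throws IOException {
        String name = readField(in);                 // file name (unused here)
        int size = Integer.parseInt(readField(in));  // declared payload length
        byte[] data = new byte[size];
        int off = 0;
        while (off < size) {
            // read() may return fewer bytes than requested, especially over a
            // real network socket, so loop until the declared size is reached.
            int n = in.read(data, off, size - off);
            if (n == -1) throw new EOFException("stream ended before " + size + " bytes");
            off += n;
        }
        return data;
    }

    public static void main(String[] args) throws IOException {
        byte[] payload = new byte[3000];
        new java.util.Random(42).nextBytes(payload);

        // In-memory "wire" standing in for the socket's output/input streams.
        ByteArrayOutputStream wire = new ByteArrayOutputStream();
        sendFile(wire, "report.txt", payload);
        byte[] received = receiveFile(new ByteArrayInputStream(wire.toByteArray()));

        System.out.println(java.util.Arrays.equals(received, payload) ? "match" : "corrupt");
    }
}
```

With in-memory streams every read happens to fill the buffer, which may be why the single-machine runs look consistent; over a real socket the read loop above is what guarantees the declared byte count is actually consumed.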
When I run the application on a single machine it gives fairly consistent results regarding the file size, but running the client and server apps on different machines results in data corruption, with a significant increase in the number of bytes received. I cannot understand why this happens when the app is the same and presumably uses the same TCP/IP stack whether it runs on one machine or two. Can somebody suggest a solution?
 