Piped output stream hangs on flush

 
Ranch Hand
Posts: 30
Hello,

I am implementing the Unix shell pipe feature in Java.
These are multi-process pipes connected with the Java piped streams. Commands that expect to use an input stream (like more, grep, etc.) read from it; commands like ls that do not need the input stream ignore it and just write to the output stream provided (stdout / piped output).

The problem I'm facing now is that if ls output is huge (if the final string is about 2000+ characters), then while flushing to the piped output stream, my code hangs. This is the case only when ls is being piped to a command that does not read input.

So ls | more works fine, but ls | alias does not, since alias does not read the input stream that ls is writing to.

Is there a limitation to the number of bytes that can be written to the piped output stream that is not being read? Is there an easier solution to this?
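For reference, the setup described might look like this (a minimal sketch; the producer/consumer pair here is a hypothetical stand-in for ls and more):

```java
import java.io.*;

public class PipeDemo {
    // Transfer a producer's output through a Java pipe to a consumer.
    static String runPipeline(String producerOutput) throws Exception {
        PipedOutputStream out = new PipedOutputStream();
        PipedInputStream in = new PipedInputStream(out);

        // "ls"-like producer thread: writes its result into the pipe, then closes it.
        Thread producer = new Thread(() -> {
            try {
                out.write(producerOutput.getBytes());
                out.flush();
                out.close();
            } catch (IOException e) {
                throw new UncheckedIOException(e);
            }
        });
        producer.start();

        // "more"-like consumer: reads from the pipe on the current thread.
        ByteArrayOutputStream sink = new ByteArrayOutputStream();
        int b;
        while ((b = in.read()) != -1) {
            sink.write(b);
        }
        producer.join();
        return sink.toString();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(runPipeline("file1\nfile2\nfile3\n"));
    }
}
```

This works because the consumer keeps draining the pipe while the producer writes; the hang appears when no one reads.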

If anyone has faced this issue before or knows the right way of handling this, please do let me know.

Appreciate your help.

Harsha
 
Bartender
Posts: 9626

Originally posted by Harsha Pherwani:

Is there a limitation to the number of bytes that can be written to the piped output stream that is not being read?



Yes.

Linux pipes have a limitation on the size of a write before it blocks. The kernel-level buffer dedicated to each pipe is exactly 4096 bytes. A write of more than 4K will block unless a reader empties the pipe. In practice this is not much of a limitation, because the read and write actions are performed in different threads.



Pipes in Linux, Windows 2000, and Windows XP
 
Harsha Pherwani
Ranch Hand
Posts: 30
Thanks for your response Joe.

This means that whether or not a command actually reads its input, I have to make sure the piped reader end is somehow emptying the pipe contents.
 
Wanderer
Posts: 18671
A PipedInputStream uses a circular buffer internally, of size 1024 bytes. A PipedReader uses a buffer of 1024 chars. That doesn't mean that you can't write more than that - you can. But the write method will block until the excess bytes have been read. If nothing reads the input stream, it will block forever.

You could add a BufferedOutputStream or BufferedWriter in front of the output stream or writer to increase the capacity. But the fundamental problem remains: there is still a limit, and you can't write more than that limit unless something is reading. (Also, if you call flush(), the extra buffer won't help, as the method will block until all data has been written to the PipedOutputStream.)
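The buffer limit can be seen directly. A minimal sketch (the 1024-byte default buffer and the 500 ms wait are the only assumptions here):

```java
import java.io.*;

public class PipeBufferLimit {
    // Returns true if a writer thread is still blocked 500 ms after trying
    // to write one byte past `prefill` bytes into a pipe that nobody reads.
    static boolean writeBlocks(int prefill) throws Exception {
        PipedOutputStream out = new PipedOutputStream();
        PipedInputStream in = new PipedInputStream(out); // default buffer: 1024 bytes

        out.write(new byte[prefill]);   // does not block while the buffer has room

        Thread extra = new Thread(() -> {
            try {
                out.write(42);          // blocks if the circular buffer is full
            } catch (IOException e) {
                throw new UncheckedIOException(e);
            }
        });
        extra.start();
        extra.join(500);
        boolean blocked = extra.isAlive();

        in.read();    // free one slot so a blocked writer can finish
        extra.join();
        return blocked;
    }

    public static void main(String[] args) throws Exception {
        System.out.println("full pipe blocks: " + writeBlocks(1024));      // true
        System.out.println("half-full pipe blocks: " + writeBlocks(512));  // false
    }
}
```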

Offhand I think your best bet would be to create a thread which reads from the PipedInputStream and just ignores the content. At least, that seems to mimic standard unix behavior for tools that don't use the input stream. Personally I'd rather throw an exception in this situation, letting the user know that what they're asking for makes no sense at all. But if your goal is to emulate unix, I guess it's best to create a thread to clear the stream instead.
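A minimal sketch of such a drain thread (the `drain` helper name is mine, not from this thread):

```java
import java.io.*;

public class DrainThread {
    // Start a daemon thread that reads and discards everything from `in`,
    // so writers on the other end of the pipe never block.
    static Thread drain(InputStream in) {
        Thread t = new Thread(() -> {
            byte[] buf = new byte[1024];
            try {
                while (in.read(buf) != -1) {
                    // ignore the content
                }
            } catch (IOException e) {
                // pipe closed or broken; nothing left to drain
            }
        });
        t.setDaemon(true);
        t.start();
        return t;
    }

    public static void main(String[] args) throws Exception {
        PipedOutputStream out = new PipedOutputStream();
        PipedInputStream in = new PipedInputStream(out);
        drain(in);

        // Far more than the 1024-byte pipe buffer: would hang without the drainer.
        out.write(new byte[100_000]);
        out.flush();
        out.close();
        System.out.println("wrote 100000 bytes without blocking");
    }
}
```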
[ November 20, 2007: Message edited by: Jim Yingst ]
 
Harsha Pherwani
Ranch Hand
Posts: 30
Thanks Jim!

I created a dummy cleaner thread for the case where a command in the pipeline does not expect any input. It just keeps reading from the input stream so that the output stream connected to it does not block. And it works beautifully.
 
Jim Yingst
Wanderer
Posts: 18671
Glad to hear it. It occurs to me that if you have a reference to the previous command's OutputStream at the point where you detect that the next command ignores input, it would probably be more effective to disable the output so it's never written in the first place:
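A sketch of the idea (Jim's original code block isn't shown above; `NullOutputStream` here is a hypothetical stand-in, and since Java 11 the JDK provides `OutputStream.nullOutputStream()` for exactly this):

```java
import java.io.*;

public class DiscardOutput {
    // Hypothetical sink that swallows all writes. Since Java 11 the standard
    // library offers OutputStream.nullOutputStream() instead.
    static class NullOutputStream extends OutputStream {
        @Override public void write(int b) { /* discard */ }
        @Override public void write(byte[] b, int off, int len) { /* discard */ }
    }

    public static void main(String[] args) throws Exception {
        // If the next command ignores its input, hand the previous command
        // this sink instead of a PipedOutputStream: nothing is buffered,
        // so nothing can block.
        OutputStream out = new NullOutputStream();
        out.write(new byte[1_000_000]);  // completes immediately, no reader needed
        out.flush();
        System.out.println("done");
    }
}
```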


This way you don't need to allocate a thread or waste additional time processing bytes that will be ignored anyway.
 