Re: Pipes and fd question. Large amounts of data.

2005-01-31 Thread Oded Shimon
On Monday 31 January 2005 17:02, Chris Friesen wrote:
> Your other option would be to use processes with shared memory (either
> sysV or memory-mapped files). This gets you the speed of shared memory
> maps, but also lets you get the reliability of not sharing your entire
> memory space.
>
> If yo
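The shared-memory alternative Chris mentions can look roughly like the following. This is a minimal sketch, not code from the thread: it assumes a plain fork()ed parent and child sharing an anonymous MAP_SHARED mapping, and the BUF_SIZE name and the wait()-based synchronization are illustrative only (real code would use a semaphore or similar).

    /* Sketch: a buffer shared between parent and child via mmap(),
     * instead of sharing the whole address space with threads. */
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <sys/wait.h>
    #include <unistd.h>

    #define BUF_SIZE 4096

    int main(void)
    {
        /* MAP_SHARED | MAP_ANONYMOUS memory stays visible to both
         * processes after fork(). */
        char *buf = mmap(NULL, BUF_SIZE, PROT_READ | PROT_WRITE,
                         MAP_SHARED | MAP_ANONYMOUS, -1, 0);
        if (buf == MAP_FAILED) {
            perror("mmap");
            return 1;
        }

        if (fork() == 0) {              /* child: producer */
            strcpy(buf, "hello from the child");
            _exit(0);
        }

        wait(NULL);                     /* parent: consumer */
        printf("parent read: %s\n", buf);
        munmap(buf, BUF_SIZE);
        return 0;
    }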

Re: Pipes and fd question. Large amounts of data.

2005-01-31 Thread Chris Friesen
Oded Shimon wrote:
> On Sunday 30 January 2005 11:41, Miles wrote:
>> I'd say that this was one of the rare cases where a solution using
>> threads is not only superior to one using event-driven IO, but
>> actually required.
> Yeah, I reached just about the same conclusion. At first I thought
> only 2 threads

Re: Pipes and fd question. Large amounts of data.

2005-01-30 Thread Miquel van Smoorenburg
In article <[EMAIL PROTECTED]>, Oded Shimon <[EMAIL PROTECTED]> wrote:
> I have implemented this, but it has a major disadvantage - every 'write()'
> only writes 4k at a time, never more, because of how non-blocking pipes
> are done. At 20,000 context switches a second, this method reaches barely 10
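The behaviour Oded describes comes from the kernel pipe buffer: with O_NONBLOCK set, write() returns as soon as the buffer (4 KB on kernels of that era) is full, so large writes come back short and the relay pays a context switch per few KB. A sketch of that pattern, written here for illustration rather than taken from the thread (the helper names are made up):

    #include <errno.h>
    #include <fcntl.h>
    #include <unistd.h>

    /* Put an already-open FIFO fd into non-blocking mode. */
    int set_nonblocking(int fd)
    {
        int flags = fcntl(fd, F_GETFL, 0);
        return flags < 0 ? -1 : fcntl(fd, F_SETFL, flags | O_NONBLOCK);
    }

    /* Push as much of buf as the pipe will currently accept.
     * Returns bytes written (often far less than len), 0 if the pipe
     * buffer is full (EAGAIN), or -1 on a real error. */
    ssize_t write_some(int fd, const void *buf, size_t len)
    {
        ssize_t n = write(fd, buf, len);
        if (n < 0 && (errno == EAGAIN || errno == EWOULDBLOCK))
            return 0;
        return n;
    }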

Re: Pipes and fd question. Large amounts of data.

2005-01-30 Thread Oded Shimon
On Sunday 30 January 2005 11:41, Miles wrote:
> My suggestion would be to perform blocking writes in a separate thread
> for each of the two written-to fds. You can still use select/poll for
> the read side ... tho' once you're using threading on the write side it
> might be more straightforward to
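A minimal sketch of Miles's suggestion: one writer thread per output FIFO does blocking write()s, while the main thread keeps using select() on the read side. The one-slot handoff, the struct and function names, and the chunk size are my assumptions, not code from the thread; error handling is elided.

    #include <pthread.h>
    #include <string.h>
    #include <unistd.h>

    #define CHUNK 65536

    struct writer {
        int             fd;      /* output FIFO, left in blocking mode */
        char            buf[CHUNK];
        size_t          len;     /* 0 means "slot empty" */
        pthread_mutex_t lock;
        pthread_cond_t  ready;   /* signalled when a chunk is queued */
        pthread_cond_t  done;    /* signalled when the slot is drained */
    };

    void writer_init(struct writer *w, int fd)
    {
        w->fd  = fd;
        w->len = 0;
        pthread_mutex_init(&w->lock, NULL);
        pthread_cond_init(&w->ready, NULL);
        pthread_cond_init(&w->done, NULL);
        /* then: pthread_create(&tid, NULL, writer_thread, w); */
    }

    void *writer_thread(void *arg)
    {
        struct writer *w = arg;
        for (;;) {
            pthread_mutex_lock(&w->lock);
            while (w->len == 0)
                pthread_cond_wait(&w->ready, &w->lock);

            size_t off = 0;
            while (off < w->len) {           /* blocking writes */
                ssize_t n = write(w->fd, w->buf + off, w->len - off);
                if (n <= 0)
                    break;                   /* error handling elided */
                off += (size_t)n;
            }
            w->len = 0;
            pthread_cond_signal(&w->done);
            pthread_mutex_unlock(&w->lock);
        }
        return NULL;
    }

    /* Called from the select() loop after reading a chunk from an
     * input FIFO; blocks only if this writer is still busy. */
    void hand_to_writer(struct writer *w, const char *data, size_t len)
    {
        pthread_mutex_lock(&w->lock);
        while (w->len != 0)
            pthread_cond_wait(&w->done, &w->lock);
        memcpy(w->buf, data, len);
        w->len = len;
        pthread_cond_signal(&w->ready);
        pthread_mutex_unlock(&w->lock);
    }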

Pipes and fd question. Large amounts of data.

2005-01-30 Thread Oded Shimon
A Unix C programming question. It has to do mostly with pipes, so I am hoping I am asking in the right place. I have a rather unique situation. I have 2 programs, neither of which I have control over. Program A writes into TWO FIFOs. Program B reads from two FIFOs. My program is the middle step.
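As I read the setup, the naive middle program looks something like the sketch below: open A's two output FIFOs for reading and B's two input FIFOs for writing, select() on the read side, and relay each chunk. The FIFO paths are made up, and the blocking write() in the loop is exactly where this approach stalls one stream while the other backs up, which is the problem the rest of the thread discusses.

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/select.h>
    #include <unistd.h>

    int main(void)
    {
        int in[2], out[2];
        in[0]  = open("/tmp/a_out0", O_RDONLY);  /* written by program A */
        in[1]  = open("/tmp/a_out1", O_RDONLY);
        out[0] = open("/tmp/b_in0", O_WRONLY);   /* read by program B */
        out[1] = open("/tmp/b_in1", O_WRONLY);

        char buf[65536];
        for (;;) {
            fd_set rfds;
            FD_ZERO(&rfds);
            FD_SET(in[0], &rfds);
            FD_SET(in[1], &rfds);
            int maxfd = in[0] > in[1] ? in[0] : in[1];

            if (select(maxfd + 1, &rfds, NULL, NULL, NULL) < 0)
                break;

            for (int i = 0; i < 2; i++) {
                if (!FD_ISSET(in[i], &rfds))
                    continue;
                ssize_t n = read(in[i], buf, sizeof buf);
                if (n <= 0)
                    return 0;                    /* EOF or error: stop */
                /* Blocking write: if program B is slow on this stream,
                 * the whole relay stalls here. */
                write(out[i], buf, (size_t)n);
            }
        }
        return 0;
    }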