Regarding i/o buffering, as Rob discusses

On 1/30/13 2:40 AM, Rob van der Heij wrote:
>
> If the input files have a lot of 'chunks' that go to the same output
> file, it might be fairly easy to gobble up the ones that go together
> and write them in a single go. Based on more heuristics, you may be
> able to keep a few of those buckets to avoid appending one record at a
> time, disposing each bucket when it's full enough upon switch.
>

I'd suggest buffering isn't something you'd need to code manually.

Most application-level output machinery (e.g. C's stdio) automagically does
buffering for you anyway.  I know for certain Perl buffers output;
here's the FAQ on the topic:
http://learn.perl.org/faq/perlfaq5.html#How-do-I-flush-unbuffer-an-output-filehandle-Why-must-I-do-this-

It should buffer the input too, reading a big chunk at a time from the
input pipe.

Ergo, just saving filehandles and using Perl's standard input/output
routines should be a big win w.r.t. syscall overhead.

-- Pat

----------------------------------------------------------------------
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
----------------------------------------------------------------------
For more information on Linux on System z, visit
http://wiki.linuxvm.org/
