On Thu, Jan 31st, 2013 at 12:44 AM, John McKown wrote:

> Thanks to all for the input! I _tried_ to run the script over night. I
> added an echo to tell me which input file I was working on. I came in
> this morning. It had been running from 14:00 to 06:30 (16 1/2 hours)
> and was still on the first input file. That ain't gonna cut it. Time
> to rethink. Using a Perl hash to contain an open file handle seems
> logical. As does buffering multiple records per output file to do a
> single I/O to write them. But I may be forced into using C or C++ for
> speed. Too bad I'm not a very good C programmer.

Interesting timing. I was about to suggest you use your Perl skills.
Having originally ignored it, I now use awk extensively for text
parsing/reduction, but for *BIG* jobs, Perl is it.

But all that input I/O is going to be death whatever you choose.
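The approach John describes -- keep one open handle per output file in a
hash, and buffer multiple records so each file gets one large write
instead of one write per record -- can be sketched roughly like this.
(Python rather than Perl here, purely for illustration; the function and
parameter names such as `split_records` and `key_of` are invented, and a
real version would also need to watch the OS limit on open files.)

```python
import os
from collections import defaultdict

def split_records(input_path, key_of, out_dir, batch_size=1000):
    """Split one large input file into per-key output files.

    handles plays the role of the Perl hash of open file handles;
    buffers collects records per key so we issue one big write per
    batch_size records instead of one write per record."""
    handles = {}                  # key -> open file handle
    buffers = defaultdict(list)   # key -> pending records

    def flush(key):
        if buffers[key]:
            handles[key].write("".join(buffers[key]))
            buffers[key].clear()

    with open(input_path) as src:
        for line in src:
            key = key_of(line)
            if key not in handles:
                # Open each output file once and keep it open.
                handles[key] = open(os.path.join(out_dir, key + ".out"), "w")
            buffers[key].append(line)
            if len(buffers[key]) >= batch_size:
                flush(key)

    # Write out any partial batches and close everything.
    for key, fh in handles.items():
        flush(key)
        fh.close()
```

The same shape carries over to Perl almost line for line (a `%handles`
hash of filehandles, a `%buffers` hash of strings); the win comes from
opening each output file once and batching writes, not from the
language.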

Shane ...

----------------------------------------------------------------------
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
----------------------------------------------------------------------
For more information on Linux on System z, visit
http://wiki.linuxvm.org/