Hi,

They're all good questions.

The problem I see is that on z/OS today we have just one read for each record
on the input file (which is actually a long sequence of chained tape
datasets).  There are roughly ten million records to process in each daily
run (of that order of magnitude, anyway).

With zLinux named pipes I think there will be more than one read for each
record: one for the cat command and one for the pipe itself (perhaps two, if
the program then has to issue another read to get the data back out of the
pipe).  Assuming an extra two milliseconds for each I/O, every million extra
I/Os adds roughly another 33 minutes, so an extra ten or twenty million I/Os
adds somewhere between 5 and 11 extra hours of processing time.
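
Spelling the arithmetic out (the two milliseconds per extra I/O is only an
assumed figure, not something I've measured):

    /* Back-of-envelope only: the 2 ms per extra I/O is an assumed figure. */
    #include <stdio.h>

    int main(void)
    {
        const double ms_per_extra_io = 2.0;  /* assumed cost of one extra I/O */
        const double records = 10e6;         /* roughly ten million records/run */

        for (int extra = 1; extra <= 2; extra++) {
            double extra_seconds = records * extra * ms_per_extra_io / 1000.0;
            printf("%d extra I/O per record: about %.1f extra hours elapsed\n",
                   extra, extra_seconds / 3600.0);
        }
        return 0;
    }

That's where the 5-to-11-hour figure above comes from.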

The overhead matters most in elapsed time, as we need to stay within the
allowed batch window.  Running on an IFL, the CPU cycles won't matter as
much, though we naturally want to keep usage there as low as possible too;
elapsed time is still going to be a consideration on the IFLs, where SAS
isn't quite as efficient as it is on z/OS.

I certainly haven't given up on this; it's just looking a bit harder than I 
first expected.

cheers
Peter

Peter Bishop
HP Enterprise Services Asia Pacific South Mainframe Capability & Engineering  

+61 2 9012 5147 office | +61 2 9012 6620 fax | peter.bis...@hp.com
36-46 George St | Burwood | NSW 2134 Australia

-----Original Message-----
From: Linux on 390 Port [mailto:linux-...@vm.marist.edu] On Behalf Of Leslie 
Turriff
Sent: Monday, 16 November 2009 10:33 AM
To: LINUX-390@VM.MARIST.EDU
Subject: Re: weird(?) idea for an extended symlink functionality

On Sunday 15 November 2009 17:24:34 John McKown wrote:
> On Mon, 2009-11-16 at 08:22 +1000, Shane wrote:
>
> I think by I/O, the OP is saying that reading the file directly is done
> via a single read() or fread() or ... . Using a named pipe, the "cat"
> does this, but then does a write() or fwrite() to the pipe. And then his
> program does another read() or fread() to read the pipe. So, even if
> there is no physical I/O, there __is__ increased processor overhead in
> writing to and reading from the pipe. The question is "how much" extra
> overhead? With the program being SAS (which I think is CPU heavy most of
> the time), the percentage is likely small compared to the total CPU
> usage.
>
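
In concrete terms, the path being described looks something like this (a
rough sketch only; the file and FIFO names are made up, and in practice SAS
rather than a C program would be doing the reading):

    /* Rough sketch of the named-pipe path: cat reads the file and writes
     * each block into the FIFO, and the consumer issues its own read() to
     * get the data back out -- one extra write and one extra read per
     * buffer compared with reading the file directly.
     *
     * Producer side (shell):  mkfifo /tmp/feed.fifo
     *                         cat /data/bigfile > /tmp/feed.fifo &
     */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        char buf[65536];
        ssize_t n;

        int fd = open("/tmp/feed.fifo", O_RDONLY);   /* made-up name */
        if (fd < 0) {
            perror("open");
            return 1;
        }
        while ((n = read(fd, buf, sizeof buf)) > 0) {
            /* process the records in buf[0..n-1] */
        }
        close(fd);
        return 0;
    }
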
        I wonder how intelligent the Linux pipe mechanism is?  If the connection
works by something equivalent to QSAM's get/locate, put/locate, the overhead
would be minuscule; just passing pointers and reactivating the pipeline
stages?

Leslie
> --
> John McKown (from home)
> Maranatha! <><
>
