On Tue, Oct 06, 2020 at 03:34:51PM -0400, Joseph Reichman wrote:
> Seemed like I processed 100 files concatenated a lot quicker 
> 
> But I didn’t do any exact testing you may be right 

I'd get or build a subroutine that captures the current real and CPU
time (TIMEUSED macro?) and call it before/after each significant system call.

This would include dynamic allocation, open, full file read, close, 
and dealloc.

Then look at how much real and CPU time each one took; a rough sketch
of the idea follows.
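
Just to illustrate what I mean (a minimal sketch in C with a made-up DD
name, not a claim about how your program is put together): snapshot real
and CPU time around each call and print the deltas.  On z/OS the TIMEUSED
macro would give you finer CPU numbers than clock() does.

  #include <stdio.h>
  #include <time.h>

  typedef struct { time_t real; clock_t cpu; } stamp;

  static stamp snap(void) {
      stamp s;
      s.real = time(NULL);   /* wall-clock seconds                 */
      s.cpu  = clock();      /* CPU time used so far by this task  */
      return s;
  }

  static void report(const char *what, stamp b, stamp a) {
      printf("%-8s real=%5.0fs  cpu=%8.3fs\n", what,
             difftime(a.real, b.real),
             (double)(a.cpu - b.cpu) / CLOCKS_PER_SEC);
  }

  int main(void) {
      stamp t0 = snap();
      FILE *f = fopen("DD:INFILE", "rb");   /* DD name is made up */
      stamp t1 = snap();
      report("open", t0, t1);

      if (f != NULL) {
          char buf[32760];
          while (fread(buf, 1, sizeof buf, f) > 0)
              ;                             /* full file read */
          stamp t2 = snap();
          report("read", t1, t2);

          fclose(f);
          report("close", t2, snap());
      }
      return 0;
  }

In the real program the same pair of snapshots would go around the
dynamic allocation and deallocation as well, since those are the
likeliest suspects.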

You've said you have 4471 files, but not:

 How large each one is, and what type of device they are on (tape? disk? virtual?)

 > Huge amounts of DASD
 > 4,608 VB (huge) files

   How much is "huge"?  Given the total size, an estimate can be made
   of the time needed just to read that amount of data (rough
   arithmetic below)...

 Is this a one-time job, or one recurring every year, month, week, or day?
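
Back-of-the-envelope, with sizes that are pure guesses since we don't
know the real numbers: 4,608 files averaging 50 MB each is roughly
230 GB; at a sustained 100 MB/s that is about 2,300 seconds, call it
40 minutes, just to move the data, before any allocation overhead.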

I'd suspect that if the delay is in allocation/deallocation, those
requests ENQ while they are processed, so you'd only get one at a time
even if you try to do more than one in a single address space.

> Well this process is taking forever. I initially ran the program under
> TEST and it took 3 wall minutes to get to the 58th file

That's about 19 files/minute, so 4,600 files is about 4 hours.  Is this
really a problem?  How long do you have?

This might also depend on how your system's performance is tuned to
deal with longer-running TSO users (running as in not waiting on the
terminal).  I remember TSO response tuning used to be (still is?) set
up so that a session waking up from terminal wait got a quick response,
but if it didn't go back to terminal wait, its priority was pushed
down, since the work didn't seem to be getting done.  That helped
maintain quick response for the other users running shorter commands.
A batch job might see vastly different tuning/performance.
