Give us an idea of how big each file is. OPEN/CLOSE is expensive. QSAM with
large buffers (a BUFNO well above the default) should go pretty quickly.

GET in LOCATE mode instead of MOVE mode can speed things up when you are
reading: QSAM returns the address of the record in its buffer instead of
copying each record into your work area.

On a different note: I just compared EDIT macro performance versus
IPOUPDTE. IPOUPDTE was about 600 times faster.



On Wed, Oct 7, 2020 at 9:44 AM Farley, Peter x23353 <
0000031df298a9da-dmarc-requ...@listserv.ua.edu> wrote:

> Joseph,
>
> I agree with Michael: if you are trying to do this in a TSO session, then
> stop doing that.  Run it as a batch job.
>
> It still may not get done very quickly; it is common for the initiators
> that allow a programmer to run large-CPU / long-elapsed-time batch jobs
> to also be bumped way down in the performance settings.  That's just a
> fact of life for programmers.
>
> If you can get away with it, try to get the batch job bumped up in
> priority after it starts running, if you have any good friends in OPS.  Or
> you could try convincing your Capacity/Performance team that this really
> needs to get done and you need the performance boost to get it done in
> order to meet management's schedules.
>
> I would use the SORT utility at your shop as the main read/select process
> if at all possible, even if it means setting up E15 and/or E35 exits
> because the field selection or output re-formatting process needs more than
> SORT control cards can provide.  You may be surprised at how much SORT can
> do for you though.
>
> As a POC, I personally would set up the select-a-file-to-process logic
> in Rexx and have Rexx invoke SORT to select the records I want extracted.
> In my experience SORT has MUCH better I/O, record-selection, and output
> formatting performance than any other utility or custom program you can
> name.
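>
> A minimal sketch of that POC shape, assuming TSO/E REXX and DFSORT (or a
> compatible SORT).  The dataset names and the INCLUDE field are made-up
> examples; note that for VB input the data starts at position 5, after
> the 4-byte RDW:
>
>   /* REXX - select records from one VB dataset via SORT (sketch)  */
>   "ALLOC FI(SORTIN)  DA('MY.HUGE.VBFILE') SHR REUSE"
>   "ALLOC FI(SORTOUT) DA('MY.EXTRACT.FILE') LIKE('MY.HUGE.VBFILE')",
>         "NEW CATALOG REUSE"
>   "ALLOC FI(SYSOUT)  SYSOUT(A) REUSE"   /* msg class is site-specific */
>   "ALLOC FI(SYSIN)   NEW REUSE UNIT(SYSDA) SPACE(1,1) TRACKS",
>         "RECFM(F,B) LRECL(80) BLKSIZE(3120)"
>   queue "  SORT FIELDS=COPY"
>   queue "  INCLUDE COND=(5,8,CH,EQ,C'WANTED01')"  /* pos 5 = past RDW */
>   "EXECIO" queued() "DISKW SYSIN (FINIS"
>   address LINKMVS "SORT"     /* reads SORTIN and SYSIN, writes SORTOUT */
>   say "SORT RC="rc
>   "FREE FI(SORTIN SORTOUT SYSIN SYSOUT)"
>
> Wrap that in a loop over your list of dataset names and you have the
> select-a-file-to-process driver.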
>
> If that POC works for 100 datasets but still takes too much time, then
> re-code the select-a-file-to-process logic in COBOL (easier) or Assembler
> (a little harder) or your favorite compiled language (maybe Metal C?) and
> try that solution for 1000 files.
>
> Just saying how I would approach the problem.  I too have to deal
> regularly with very large VB files, so I do understand your predicament.
> SORT is your friend for such files.
>
> If you are up to learning yet another language, the z/OS port of the
> open-source Lua scripting language is purported to be a pretty
> high-performance tool.  I don't have any practical knowledge of it, just
> reporting what I have read.
>
> HTH
>
> Peter
>
> -----Original Message-----
> From: IBM Mainframe Discussion List <IBM-MAIN@LISTSERV.UA.EDU> On Behalf
> Of Michael Stein
> Sent: Tuesday, October 6, 2020 6:15 PM
> To: IBM-MAIN@LISTSERV.UA.EDU
> Subject: Re: dataset allocation
>
> On Tue, Oct 06, 2020 at 03:34:51PM -0400, Joseph Reichman wrote:
> > Seemed like I processed 100 files concatenated a lot quicker
> >
> > But I didn’t do any exact testing you may be right
>
> I'd get or build a subroutine which captures the current real and CPU
> time (TIMEUSED macro?) and call it before/after each significant system
> call.
>
> This would include dynamic allocation, open, full file read, close, and
> dealloc.
>
> Then look at how long real/cpu each took.
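>
> For a rough first cut under TSO/E REXX (much coarser than a TIMEUSED
> assembler subroutine - SYSCPU only reports to hundredths of a second -
> but no assembler needed), something like this, where the dataset name
> is just an example:
>
>   /* REXX - rough real/CPU snapshots around one call (TSO/E only)   */
>   call time 'R'                      /* reset the elapsed-time clock */
>   cpu0 = sysvar('SYSCPU')            /* session CPU seconds used so far */
>   "ALLOC FI(INDD) DA('MY.HUGE.VBFILE') SHR REUSE" /* call under test */
>   say 'elapsed:' time('E') ' cpu:' sysvar('SYSCPU') - cpu0
>   "FREE FI(INDD)"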
>
> You've said you have 4471 files, but not:
>
>  How large each is, what type of device they are on (tape? disk? virt?)
>
>  > Huge amounts of DASD
>  > 4,608 VB (huge) files
>
>    How much is "huge"?  Given the size, an estimate might be made
>    of the time needed just to read that amount of data...
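>
>    For instance, with every number below a made-up placeholder:
>
>      /* REXX - back-of-envelope read time; every input is a guess */
>      files = 4608; avgMB = 500; mbPerSec = 100
>      say 'roughly' files * avgMB / mbPerSec / 3600 'hours just to read'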
>
>  Is this a one time job or recurring every year, month, week, day?
>
> I'd suspect that if the delay is in allocation/deallocation, they will
> ENQ while processing and only allow one at a time if you try to do more
> than one in a single address space.
>
> > Well this process is taking forever. I initially ran the program under
> > TEST and it took 3 wall minutes to get to the 58th file
>
> That's about 19 files/minute, so 4600 files is about 4 hours.  Is this
> really a problem?  How long do you have?
>
> This might also depend on how your system performance is tuned to deal
> with longer-running TSO users (running as in not waiting for the
> terminal).  I remember TSO response tuning used to be (still is?) set
> to give a wake-up from terminal wait a quick response, but to push a
> session's priority down if it didn't go back to terminal wait, since it
> didn't seem to be getting done.  This helped maintain quick response
> for the other users running shorter commands.  A batch job might see
> vastly different tuning/performance.


-- 
Wayne V. Bickerdike

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
