As much to the point, why does this need to be 24-bit LSQA?

Cheers, Martin

Martin Packer

zChampion, Systems Investigator & Performance Troubleshooter, IBM

+44-7802-245-584

email: martin_pac...@uk.ibm.com

Twitter / Facebook IDs: MartinPacker

Blog: https://www.ibm.com/developerworks/mydeveloperworks/blogs/MartinPacker

Podcast Series (with Marna Walle): https://developer.ibm.com/tv/mpt/ or 
https://itunes.apple.com/gb/podcast/mainframe-performance-topics/id1127943573?mt=2


Youtube channel: https://www.youtube.com/channel/UCu_65HaYgksbF6Q8SQ4oOvA



From:   Barbara Nitz <nitz-...@gmx.net>
To:     IBM-MAIN@LISTSERV.UA.EDU
Date:   29/04/2020 08:21
Subject:        [EXTERNAL] Re: S0F9 and S0FD ABENDs and SVC dumps - oh my!
Sent by:        IBM Mainframe Discussion List <IBM-MAIN@LISTSERV.UA.EDU>



You say that the problem happens when all the tasks terminate. Your 
problem is not enough LSQA for termination. During termination, RTM 
getmains a number of RBs to handle it - like the RB your ESTAE gets 
control under (a PRB, IIRC), or a PURGEDQ SVRB. Depending on what your 
ESTAE does, you'll need still more LSQA on top of that. 

I don't have a rule of thumb for how much LSQA is needed per TCB. Given 
that you say you create 1000 TCBs, and each TCB creates at least one 
subtask, we're talking at least 2000 TCBs, plus their associated RBs and 
CDEs. I'd guess that you need at least 6MB of below-the-line storage 
reserved for LSQA, possibly more. The only way to do that is to write a 
custom IEFUSI exit that really reserves that much LSQA especially for 
your job, or to use the equivalent parmlib member.
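As a back-of-the-envelope check, the arithmetic above can be sketched as follows. The per-TCB overhead figure is an illustrative assumption, not a documented z/OS value:

```python
# Rough LSQA sizing sketch for the scenario described above.
# PER_TCB_LSQA_BYTES is an assumed figure for illustration only
# (TCB plus associated RBs and CDEs), not a measured z/OS number.

ATTACHED_TCBS = 1000           # tasks the job attaches directly
SUBTASKS_PER_TCB = 1           # each attaches at least one subtask
PER_TCB_LSQA_BYTES = 3 * 1024  # assumed LSQA footprint per TCB

total_tcbs = ATTACHED_TCBS * (1 + SUBTASKS_PER_TCB)
lsqa_estimate = total_tcbs * PER_TCB_LSQA_BYTES

print(f"{total_tcbs} TCBs -> ~{lsqa_estimate / (1024 * 1024):.1f} MB of LSQA")
# prints "2000 TCBs -> ~5.9 MB of LSQA"
```

At an assumed ~3KB per TCB, 2000 TCBs land right around the "at least 6MB below the line" guess; a larger real footprint per TCB would push the requirement higher.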

Remember that LSQA 'grows' downward from the top of the below-the-line 
region, while private storage 'grows' upward from the bottom of the 
region. So conditional getmains don't help here, IMHO. You would have to 
determine the current top of region programmatically, subtract 1-2MB for 
termination, and then check whether you still have enough room for your 
getmain.
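The check described above can be modeled conceptually like this. This is a sketch of the idea, not real z/OS services; the function name, addresses, and the 2MB cushion are illustrative assumptions:

```python
# Conceptual model of the below-the-line region: private storage grows
# up from the bottom, LSQA grows down from the top, and the free gap is
# whatever lies between the two high-water marks. A plain conditional
# GETMAIN only asks "is there room for my request?" - it does not hold
# back a cushion for the LSQA that termination will need.

def room_for_getmain(private_high_water, lsqa_low_water,
                     request_bytes, termination_cushion=2 * 1024 * 1024):
    """Return True if 'request_bytes' of private storage would still
    leave 'termination_cushion' bytes free for LSQA growth."""
    free_gap = lsqa_low_water - private_high_water
    return free_gap - request_bytes >= termination_cushion

# With a 9MB gap (private HWM at 3MB, LSQA down to 12MB) and a 2MB
# cushion, a 6MB request fits but an 8MB request does not:
print(room_for_getmain(0x300000, 0xC00000, 6 * 1024 * 1024))  # True
print(room_for_getmain(0x300000, 0xC00000, 8 * 1024 * 1024))  # False
```

The point of the sketch is the extra `termination_cushion` term: without it, a request can succeed and still leave termination with no LSQA to grow into.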

Anecdote: before IBM introduced command classes and all the messages that 
go with too many commands issued at the same time, there used to be 
regular wait states (wait state 07E, IIRC) due to LSQA exhaustion in 
*MASTER*. Commands execute in *MASTER* (for the most part). Too many 
commands at the same time generated the exact same situation you 
currently have - not enough LSQA left, which is really deadly when it 
happens in ASID 1. These days IBM allows only 100 commands per class; if 
more are issued, they are held back until there's 'room' again for them 
to execute.

Why do you need 1000 tcbs?

Regards, Barbara

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN




Unless stated otherwise above:
IBM United Kingdom Limited - Registered in England and Wales with number 741598.
Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU


