Elardus

> When VTAM is doing a GETMAIN for more buffers, all using VTAM need to wait 
> for that GETMAIN to complete. Not a problem, but observable ...

The trick here is to "tune" the dynamic buffering parameters. You can set the 
affected buffer pool to have sufficient buffers to cater for the "morning 
rush". To do that, you obviously enter the DISPLAY NET,BFRUSE command *after* 
the "morning rush" has happened.
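
For example - just a sketch - from the system console:

   D NET,BFRUSE,BUFFER=SHORT

The BUFFER=SHORT operand, if memory serves, suppresses the storage summary and 
shows just the buffer pool data. The counters of interest for each pool are the 
maximum-used high-water mark and the times-expanded count - "MAX USED" and 
"TIMES EXP" as I recall the column headings; check them against your own 
output.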

If that describes the "normal running" state, then that is rather obviously the 
way to go. If, however, there is some fluctuation subsequently and the "morning 
rush" - or any other event - is causing a peak, you then need to increase the 
expansion increment or the expansion point or both in order to avoid delays 
acquiring additional sets of buffers. Message IST561I[1] indicates that VTAM 
was obliged to wait while the expansion of a buffer pool completed. You should 
keep adjusting until no IST561I messages appear - without, of course, setting 
too large an expansion increment or "too early" an expansion point.
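
The values in question are the xpanno (expansion increment) and xpanpt 
(expansion point) positional operands of the buffer pool start options in 
ATCSTRxx. A sketch with purely illustrative numbers - not a recommendation:

   bpname=(baseno,bufsize,slowpt,F,xpanno,xpanpt,xpanlim)
   IOBUF=(400,508,19,,28,80)

Here 28 is the expansion increment and 80 the expansion point - the count of 
available buffers at or below which VTAM expands the pool. Naturally, check the 
exact operand positions in the SNA Resource Definition Reference before coding 
anything.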

Note that you should always round up whatever number you think may be 
appropriate for an expansion increment so that it fills an integral number of 
pages.[2]
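
By way of a made-up example: if each buffer occupies, together with VTAM's 
per-buffer overhead, roughly 576 bytes, then 7 buffers fit in a 4K page:

   floor(4096 / 576) = 7 buffers per page

so an expansion increment of 30 should be rounded up to 35, that is, 5 whole 
pages. The 576 is illustrative only; the actual buffer length plus overhead 
varies by pool and by release.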

It's best to concentrate your analysis of buffer use on the period just 
following an IPL. It's in the nature of the way the numbers are reported that 
you get the best information after a "reset".

An IPL causes a "reset", of course, but so does taking a snapshot dump under 
control of the SNAPREQ start option. Many years ago, I tried setting a very 
small value for the SNAPREQ start option - probably it was one of the ISTRACON 
module "replaceable constants" at the time - but it was "too much" for VTAM and 
I caused VTAM to abend!

-

>>In order to illustrate the point ...

What I forgot to emphasise here is that I have assumed half-duplex flow, so 
that the acknowledgements - at the request unit level - cannot flow while a 
unit of application data is flowing in the opposite direction. This is in 
contrast to 802.2 - to which the document from Gates's men refers - where the 
802.2 parameters can generally be set to permit continuous acknowledgement. For 
the LAN underpinning the systems of my Education/Test environment of long ago, 
I was able to judge that I was getting continuous acknowledgement with an 
outstanding acknowledgement limit (TW) of 12 to 13. Why not simply set the 
maximum of 128, I hear you ask. Well, a frame that is unacknowledged must be 
held in storage in case it needs to be resent. Need I spell it out? Again, it's 
just common sense.
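
For completeness, the judgement is just the "bandwidth-delay product" - how 
many frames fit "in flight" on the link while you wait for the first 
acknowledgement to come back. With purely illustrative numbers, nothing like a 
measurement of that old LAN:

   TW >= (link speed x round-trip time) / frame size
      =  (4 Mb/s x 48 ms) / (2000 bytes x 8 bits/byte)
      =  192 000 bits / 16 000 bits
      =  12 frames

Beyond that value, a larger TW buys nothing except more frames held in storage 
awaiting acknowledgement.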

-

[1] From z/OS Communications Server SNA Messages:

<quote>

7.97 IST561I

   IST561I STORAGE UNAVAILABLE: bp BUFFER POOL

Explanation: A VTAM request for storage from the buffer pool bp could not be 
satisfied because there was not enough available storage in the buffer pool.

bp is the name of the buffer pool. See the z/OS Communications Server: SNA 
Network Implementation Guide for an explanation and description of buffer pools 
and for general information on buffer pool specification and allocation.

...

</quote>

http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/F1A1C6B1/7.97

[2] This is all clearly stated in the manual in which you might expect to find 
it:

23.6.2.3 Guidelines for dynamic expansion

http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/f1a1b5b0/23.6.2.3

-

Chris Mason

On Tue, 25 Sep 2012 05:02:01 -0500, Elardus Engelbrecht 
<elardus.engelbre...@sita.co.za> wrote:

>Chris Mason wrote:
>
>>> 'Maximizing the SNA RU size reduces end-to-end acknowledgments at the 3270 
>>> application level.'
>>This is just common sense.
>
>Thanks. Agreed.
>
>>In order to illustrate the point I'm assuming you have to send 10K of data 
>>and the flow control parameters are such that each unit of data sent must be 
>>acknowledged - just to keep it simple.
>
>>- If you send 10 units of 1K you are obliged to wait while 10 
>>acknowledgements are returned.
>>- If you send 1 unit of 10K you are obliged to wait while 1 acknowledgement 
>>is returned.
>>QED!
>
>Thanks for this useful explanation.
>
>>I am a great advocate of regular monitoring of buffer pool patterns of use. 
>>First of all I advise that dynamic buffering should be used. Then I advise to 
>>guard against "too much" or "too little".
>
>This is what my VTAM and TCP/IP guys and gals are doing. When VTAM is doing a 
>GETMAIN for more buffers, all using VTAM need to wait for that GETMAIN to 
>complete. Not a problem, but observable especially when all and everyone is 
>logging on in the morning after the previous night's IPL. 
>
>Groete / Greetings
>Elardus Engelbrecht

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
