Martin, 

still not sure that I get your point :-)

>I'm wondering whether one wouldn't be better off making the structures 
>bigger and (perhaps) the buffer pools in the z/OS systems smaller.
>In other words, what are the trade offs for biasing towards space in the 
>CF list structures vs biasing towards space in XCF buffers in z/OS?

And here's why: I don't think XCF/XES work like this! 

Take the XCF signalling structures: you size them with the CFSIZER, which asks for the 
number of systems that will connect and for the CLASSLEN, which determines 
the maximum message length such a signalling structure can handle. That 
determines the number of 'needed' signalling paths for full connectivity. In 
reality, XCF uses your (INIT)SIZE specification and allocates the structure. 
Each 'signalling path', i.e. each connection from one system to another, uses 
one list in that list structure. The (INIT)SIZE specification basically 
determines how many lists with a certain entry-to-element ratio can be 
allocated in the structure. As far as I know, overspecifying the size in the 
policy basically wastes CF space, as those extra lists (aka signalling paths) are 
never used! 
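
If it helps, here is a back-of-the-envelope sketch of that arithmetic (a few lines 
of Python, purely illustrative; the function name is mine, and it ignores whatever 
lists XCF reserves for its own use):

    # Rough sketch: full signalling connectivity among N systems, assuming one
    # shared structure used as PATHIN/PATHOUT by everyone and one list per
    # one-way signalling path, as described above.
    def full_connectivity_paths(n_systems: int) -> int:
        # every system needs a one-way path to every other system
        return n_systems * (n_systems - 1)

    for n in (4, 8, 12):
        print(n, "systems ->", full_connectivity_paths(n), "lists (paths) needed")

    # Making SIZE/INITSIZE big enough for more lists than this buys nothing:
    # the extra lists just sit there unused.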

Lock structures are different in that the hashing algorithm used basically 
determines where the 'quantum jump' is, in terms of how many 'contentions' end up 
with the same size structure. I am assuming (Bill?) that the hashing algorithm used 
somehow influences the CFSIZER, which is why ISGLOCK shows different size 
numbers than IRLM lock structures when you input the contention information.
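
My guess at where that jump comes from, purely as an illustration (this is not the 
CFSIZER algorithm; the rounding rule is my assumption that the lock table entry 
count has to be a power of two):

    # Illustrative only: if the lock table entry count has to be a power of two,
    # then every request between 2**(k-1)+1 and 2**k entries lands on the same
    # structure size -- that would be the 'quantum jump'.
    def lock_table_entries(requested: int) -> int:
        # round up to the next power of two (hypothetical sizing rule)
        return 1 << (requested - 1).bit_length()

    for req in (30_000, 60_000, 70_000, 130_000):
        print(req, "requested ->", lock_table_entries(req), "lock table entries")
    # 70,000 and 130,000 both land on 131,072 -- same size structure.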

Now for cache structures, let's look at RACF :-) The nature of a cache structure is 
that it caches data, so I'd say biasing towards CF storage versus MVS storage 
depends on the application: what it caches, what type of cache is used, and what 
serialization protocol is used for updates. There may be situations where more 
CF storage offers an advantage over a bigger 'local' cache, even when 
serialization overhead is involved. You would probably know this for DB2 group 
buffer pools :-)
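
If you wanted to put very rough numbers on that trade-off, a generic storage-hierarchy 
cost model is one way to think about it. This is a back-of-the-envelope sketch with 
made-up access costs and hit ratios, not how RACF or DB2 actually behave:

    # Average read cost for a local buffer backed by a CF cache structure,
    # backed in turn by DASD. All costs are invented numbers (microseconds);
    # the only point is the shape of the trade-off.
    def avg_read_cost(local_hit, cf_hit, local_cost=1.0, cf_cost=20.0, dasd_cost=5000.0):
        local_miss = 1.0 - local_hit
        cf_miss = 1.0 - cf_hit
        return (local_hit * local_cost
                + local_miss * cf_hit * cf_cost
                + local_miss * cf_miss * dasd_cost)

    # same imaginary storage budget, spent two different ways:
    print(avg_read_cost(local_hit=0.85, cf_hit=0.50))   # bias towards local buffers
    print(avg_read_cost(local_hit=0.80, cf_hit=0.98))   # bias towards the CF structure

With these made-up numbers the CF-biased configuration wins by a wide margin because 
it avoids the DASD misses; flip the hit ratios around and the local bias wins instead. 
Which is just a long way of saying 'it depends on the application'.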

But (you knew there was a but, right?): we're back to signalling structures, 
and here the parm that influences the 'buffer pool size' owned by XCF is called 
MAXMSG. There's a section in the sysplex setup book talking about MAXMSG, which 
is the allowed upper limit for the 'pool'. Okay, that storage must be backed 
DREF (it can't be paged out), as the code actually accessing it runs disabled. Assuming 
a properly configured system, *if* that buffer fills to capacity (and hence impacts 
paging and performance), chances are something is wrong with either another 
system (because it is probably not reading out the appropriate list in the 
signalling structure, causing it to fill until no one can write into the 
structure anymore, and XCF has to retry the CF write operation until there's 
space) or your own system, because *someone* isn't doing the msgin service 
properly. Wouldn't you rather have the application unable to go on talking to 
other members in the group (i.e. msgout gets a return code) than have XCF 
organize huge storage areas and retry CF operations? And run 
the danger of losing not only this system, but other systems as well? (I certainly 
would, but then I have looked at SADUMPs where *something* was wrong :-) ). 
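
The behaviour I'd rather have (and, as I understand it, what the MAXMSG cap gives you) 
looks roughly like this. A toy sketch only -- the names and return codes are made up, 
this is not XCF code:

    # Toy model of a MAXMSG-capped outbound buffer: once the cap is reached,
    # the sender gets a return code instead of the pool growing without bound.
    RC_OK, RC_NO_BUFFER = 0, 8

    class OutboundPath:
        def __init__(self, maxmsg_kb: int):
            self.maxmsg_kb = maxmsg_kb      # upper limit for the buffer 'pool'
            self.in_use_kb = 0
            self.queued = []

        def msgout(self, msg_kb: int) -> int:
            # the caller has to handle RC_NO_BUFFER: back off, or accept that it
            # can't talk to the other group members right now
            if self.in_use_kb + msg_kb > self.maxmsg_kb:
                return RC_NO_BUFFER
            self.in_use_kb += msg_kb
            self.queued.append(msg_kb)
            return RC_OK

        def drain(self):
            # what happens once the other side finally reads out its list again
            while self.queued:
                self.in_use_kb -= self.queued.pop(0)

    path = OutboundPath(maxmsg_kb=750)
    codes = [path.msgout(64) for _ in range(15)]
    print(codes.count(RC_NO_BUFFER), "sends rejected once the cap was hit")
    path.drain()
    print("after the peer drains its list:", path.in_use_kb, "KB in use")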

So in essence, I think the answer is 'it depends'. Biasing towards CF structure 
size, in my opinion, won't buy you anything in terms of performance for list 
structures, at least not for signalling structures. For lock structures you may 
gain something if you're able to determine the point of the 'quantum jump'. 
And for cache structures it depends on how they are used. 

Okay, now over to others.....
Best regards, Barbara
