On Wed, 14 Apr 2010 17:39:53 -0500, Arthur Gutowski <aguto...@ford.com> wrote:

>In keeping with the consolidated response (if I'm not losing too much context):
>
>Mark,
>
>Yes, I recall the striping issue you had.  I think we can avoid that.
>Yes, the allocate/delete job is what I referred to.  Glad to hear that's not
>needed any longer.  One less hurdle for our Storage team.  Band-Aids can be
>tough to remove.  In which direction did you adjust the migration threshold?
>

First, let's rewind a little.  This thread was originally discussing
the issues related to LOGR in a sysplex with heterogeneous LPARs.  Our
sharing of LOGR volumes across SMSplexes was a requirement (although the
title of this thread is non-SMS LOGR).   Making our SVC dump pool shared
was done at my request. The development sysplex in this environment is 
pretty much all static data (SAP DB2 in one LPAR, WAS in the other) 
and I couldn't get the storage team to implement HSM in that sysplex
to manage the dumps.  So I pushed them to create the single pool
across SMSplex boundaries.  This worked, but since the HSM migrations
that "emptied" the dump volumes took place in another SMSplex, the
devl sysplex's SMS (COMMDS) didn't have an accurate picture of 
free space.   This is probably almost a worst case scenario for sharing
between SMSplexes because "huge" files were created in one plex
and deleted in another (as opposed to LOGR or "every day" allocations
from a batch pool).   But regardless, it worked well until I turned
on the striping.
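
To make that concrete, here is a toy sketch in Python.  It is my own
illustration of the behavior I saw, not how SMS or the COMMDS actually track
space; the plex names, sizes and the SharedVolume class are all made up for
the example.  Each plex keeps its own idea of how full a shared volume is and
only refreshes that idea when it allocates or deletes something there itself:

class SharedVolume:
    """Toy model of one shared DASD volume seen from two SMSplexes."""
    def __init__(self, capacity_mb):
        self.capacity = capacity_mb
        self.used = 0                          # the "real" space picture
        self.view = {"PROD": 0, "DEVL": 0}     # each plex's possibly stale picture

    def allocate(self, plex, size_mb):
        # A plex only selects the volume if *its own* picture says the space fits.
        if self.view[plex] + size_mb > self.capacity:
            return False                       # volume skipped, stale picture stays stale
        if self.used + size_mb > self.capacity:
            return False                       # genuinely full
        self.used += size_mb
        self.view[plex] = self.used            # this plex now sees reality
        return True

    def delete(self, plex, size_mb):
        self.used = max(0, self.used - size_mb)
        self.view[plex] = self.used            # only the deleting plex sees the space come back

vol = SharedVolume(capacity_mb=2800)           # roughly a mod-3
vol.allocate("DEVL", 2500)                     # huge SVC dump written from the devl plex
vol.delete("PROD", 2500)                       # HSM migrates it away over in the prod SMSplex
print(vol.allocate("DEVL", 2000))              # False - devl still "sees" a full volume
print(vol.allocate("DEVL", 200))               # True - it fit the stale picture, and now...
print(vol.view["DEVL"])                        # 200 - ...devl's picture is current again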

To answer your question, the high threshold was something like 85 and
ISTR the low being something like 75.  I set them to 95/10 and I think I also
changed "AUTO MIGRATE" to I.  "When AUTO MIGRATE is I, migration is 
done when the space used exceeds the half way mark between the HIGH 
and LOW threshold. " 
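
Just to put numbers on that (a quick sketch, assuming "halfway" means the
simple midpoint between the HIGH and LOW thresholds, which is how I read it):

def interval_trigger_pct(high, low):
    """Percent-used point where AUTO MIGRATE=I would start migrating."""
    return (high + low) / 2.0

for label, high, low in (("old 85/75", 85, 75), ("new 95/10", 95, 10)):
    print("%s: interval migration kicks in near %.1f%% used;"
          " new allocations stop at %d%% (MIGR HIGH)"
          % (label, interval_trigger_pct(high, low), high))
# old 85/75: kicks in near 80.0% used
# new 95/10: kicks in near 52.5% used, so the pool is kept much emptier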

But another factor here was the migration of all the volumes in the
pool to 3390-9.   The more free space on a volume and in the pool, the 
better for this sharing since the other SMSplex may not have an accurate
picture of the volume at allocation time.  My problems were due to large
dumps filling up a mod-3; they would then get migrated from the prod
plex, but the devl plex didn't know that space was available again (unless
a new allocation "fit" and that volume was selected).  So once the volumes
were all changed from mod-3 to mod-9 along with the threshold changes,
there was enough wiggle room for the allocations to take place without
running the job. 
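
For what it's worth, the device geometry is most of that wiggle room.  A
back-of-the-envelope sketch (standard 3390 track and cylinder counts; the
2 GB dump size is just an illustrative number, and real usable space is a
little less once you subtract the VTOC, index, VVDS, etc.):

BYTES_PER_TRACK = 56664                # 3390 track capacity in bytes
TRACKS_PER_CYL = 15

def gb(cylinders):
    return cylinders * TRACKS_PER_CYL * BYTES_PER_TRACK / 1000000000.0

mod3_cyls, mod9_cyls = 3339, 10017     # 3390-3 vs. 3390-9
print("3390-3: %.1f GB   3390-9: %.1f GB" % (gb(mod3_cyls), gb(mod9_cyls)))
# 3390-3: 2.8 GB   3390-9: 8.5 GB

dump_gb = 2.0                          # one "huge" SVC dump (illustrative)
print("after the dump: mod-3 has %.1f GB left, mod-9 has %.1f GB left"
      % (gb(mod3_cyls) - dump_gb, gb(mod9_cyls) - dump_gb))
# mod-3: about 0.8 GB left, mod-9: about 6.5 GB left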

I called it a "worst case" above because, even though I have never set up
shared "batch" volumes across the SMSplexes, I would guess it would work
better with lots of smaller allocations on large volumes happening often
from each plex.  Every time a file is allocated or deleted from a particular
plex, that plex would get a current picture of the space on that volume.
We have shared LOGR across SMSplexes from day 1 and never had any problems
at all, and that is probably why.

>>I just looked and all volumes in the LOGR pool are enabled on
>>both SMSplexes also.  There is a small development plex where
>>there is also 2 SMSplexes ... but it is only a single volume in
>>that pool shared between the 2 plexes. [...]
>
>I'm hoping we can "fence" allocations by system, and keep truly shared
>volumes (ENABLED everywhere) to one or zero.  Operlog is the only "problem"
>child, and depending on where we use it and what RACF can do for us, would
>be the only reason for a globally ENABLED volume.
>

It worked fine that way here before.  I think they are all enabled now
across the SMSplexes because there are only 4 3390-9s in that pool since
our last DASD migration.  There used to be 3 times as many volumes, so they
were easier to split up between the 2 SMSplexes.

Regards,

Mark
--
Mark Zelden - Zelden Consulting Services - z/OS, OS/390 and MVS       
mailto:mzel...@flash.net                                          
Mark's MVS Utilities: http://home.flash.net/~mzelden/mvsutil.html 
Systems Programming expert at http://expertanswercenter.techtarget.com/
