The technique I used from the outset of SMS (early 90s) was to deploy
QUINEW volumes either in the storage groups of interest, or in a
common/shared OVERFLOW storage group that your SG ACS routine would point
to for the storage groups of interest.
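
As a sketch (the SG, FILTLIST and dataset names below are invented, not
from any real site), the SG ACS routine fragment might look something
like this - SMS tries SGBATCH first, and only spills into the shared
SGOVFL group when it has to:

   PROC STORGRP
     /* Hypothetical filter for batch datasets */
     FILTLIST BATCHDSN INCLUDE(PROD.BATCH.**)
     SELECT
       WHEN (&DSN = &BATCHDSN)
         /* Overflow SG listed last, so it is only a spill target */
         SET &STORGRP = 'SGBATCH','SGOVFL'
       OTHERWISE
         SET &STORGRP = 'SGOTHER'
     END
   END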

This does rely on appropriate HI-ALLOC thresholds being set, the principle
being that as long as there are ENABLE volumes in an SG with enough free
space to satisfy allocations without breaching the high threshold, the
QUINEW volumes will never get touched.
Technically (and simplistically!), ENABLE volumes with utilisation under
the high threshold are placed on the primary volume selection list, and
will be 'first choice' for allocations.  QUINEW volumes, and any ENABLE
volumes over the high threshold, will be put on the secondary selection
list, and will only be used if the primary selection list cannot satisfy
the allocation.
So the QUINEW volumes effectively remain as a 'just in case' bucket of
space.
No automation needed.
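
For reference - and I'm quoting the syntax from memory here, so check the
MVS System Commands manual for your release - volume status is set and
checked with VARY SMS and DISPLAY SMS, e.g. with a made-up volser and SG
name:

   VARY SMS,VOLUME(OVF001,SGBATCH),QUIESCE,NEW   (make it QUINEW)
   VARY SMS,VOLUME(OVF001,SGBATCH),ENABLE        (bring it into use)
   DISPLAY SMS,STORGRP(SGBATCH),LISTVOL          (check volume statuses)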

But you do need sufficient spare DASD to be able to do this (although a
common OVERFLOW pool helps in that respect).

And I would recommend you are diligent in 'clearing up' any allocations
made to QUINEW volumes if they should get used: either move the data back
to ENABLE volumes, or ENABLE those volumes if the pool is truly running
short of space and validly needs to grow.  And then perhaps deploy
another QUINEW volume to replace it.
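
By way of illustration only (the job skeleton and volser are invented,
and you should check the DFSMSdss reference for the exact keywords), a
logical DFSMSdss COPY with DELETE will move datasets off a QUINEW volume
and let your ACS routines re-place them:

   //MOVEBACK JOB ...
   //STEP1    EXEC PGM=ADRDSSU
   //SYSPRINT DD  SYSOUT=*
   //SYSIN    DD  *
     COPY DATASET(INCLUDE(**)) -
          LOGINDYNAM((OVF001)) -
          DELETE CATALOG
   /*

With no output volumes specified, SMS re-drives the allocation through
the ACS routines, so the data should land back on ENABLE volumes in the
proper SG.
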
And you probably only want to restrict this principle to truly batch-only
storage groups, as you probably won't want files used by online systems
being put on QUINEW volumes; that would mean you cannot clean them up
easily, and then things get messy.
This becomes even more pertinent if you use a shared OVERFLOW storage
group.

Not saying this will suit every site, or every SG in any site, but I have
found it generally works OK in principle, and avoids having to worry
about automation.  It does need some diligence to clean things up on a
regular basis, and it does need appropriate HI-ALLOC thresholds defined
(which can sometimes be at odds with what you may wish them to be for
DFHSM processing - but hey, we don't live in an ideal world!)

On 4 January 2013 21:27, John McKown <john.archie.mck...@gmail.com> wrote:

> Perhaps he could have a "reserve" of volumes in each storage group
> which are DISNEW. When he got the message, he could do a VARY
> VOL(vvvvvv),ENABLE . Of course, there is now the problem of finding a
> volume in the appropriate SG which is in DISALL or DISNEW status. I
> would look at the DISPLAY SMS,SG(storgrup),LISTVOL output personally.
> But this is a MLWTO and might be difficult to capture and parse.
>
> On Fri, Jan 4, 2013 at 3:19 PM, Mike Wood <mww...@ntlworld.com> wrote:
> > Antonio, are you really on z/OS V1.2 ????
> > If you were on a supported release you would have some of the newer SG
> related capabilities in DFSMS to help - such as overflow and extend storage
> groups.
> > You would not need to automate - simply define the available volumes
> that you would add, and tell DFSMS how you want them used/added.
> >
> > Start here ... www.redbooks.ibm.com/redbooks/pdfs/sg246979.pdf
> >
> > Mike Wood
> >
>
>
>
> --
> Maranatha! <><
> John McKown
>
>

