Making everything bigger is not a good option.  Not everything "needs" to be 
bigger, and for some data sets even 40% extra won't be enough.  The ServerPac 
used to ship the JES spool/checkpoint and SMF data sets on the catalog volume, 
and I doubt anyone leaves them there, but the size of those data sets is a big 
issue.  For instance, if you wanted to use a full 3390-27 for the spool data 
set (not an unreasonable size), how would you do that using z/OSMF?  My 
assumption from your previous answers is that you can't.  It's not hard to 
change this later, but you have just made the installation process a LOT more 
difficult for people.  Some people will want a 3390-9 or a 3390-54, and that's 
just one data set; plenty of others will end up with exactly the same issue.  
I think creating z/OSMF product delivery without the ability to (easily) 
change the size and location of the data sets is a bad idea, to go with all 
the other "bad ideas" I have already identified in z/OSMF.  
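Just to illustrate how simple the manual path is: outside of z/OSMF you can 
allocate the whole pack with a one-step IEFBR14 job and then point JES2 at 
it, something along the lines of the sketch below (the data set name, volser, 
and cylinder count are purely illustrative):

//ALLOCSPL JOB (ACCT),'ALLOC SPOOL',CLASS=A,MSGCLASS=H
//STEP1    EXEC PGM=IEFBR14
//* SYS1.HASPACE is the conventional JES2 spool data set name.
//* A 3390-27 is 32,760 cylinders; leave a few for the VTOC/VVDS.
//HASPACE  DD  DSN=SYS1.HASPACE,DISP=(NEW,CATLG),
//             UNIT=3390,VOL=SER=SPOOL1,
//             SPACE=(CYL,(32700),,CONTIG)

Then make the volume known to JES2 through the spool definitions in the JES2 
parms.  That's the whole job; the problem is that z/OSMF gives you no 
comparable knob at install time.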

Brian


On Tue, 20 Jul 2021 12:47:54 -0500, Marna WALLE <mwa...@us.ibm.com> wrote:

>Hi Barbara (and others),
>Nice to see so many users of PDSEs!  We do not today have the capability to 
>switch from PDS to PDSE in z/OSMF, but we've got it in our backlog from 
>requests to have it.  If PDSE users would like to help us prioritize that by 
>describing the business impact of not having this capability, please feel 
>free to email me (mwa...@us.ibm.com).
>
>As for the sizes, for z/OSMF ServerPac we have increased the shipped free 
>space to 40% per data set, and the LNKLST data sets now have zero secondary 
>space.  This is an increase over the free space we used to provide, in the 
>hope that it will help for the time being.  It was done because we don't 
>have the ability to re-size in z/OSMF today.  
>
>Now...I would like to look at the data set size problem in a larger context, 
>in order to understand where to solve it.  More than ever, we have been 
>shipping Continuous Delivery PTFs.  Many of these PTFs are quite large, and 
>they arrive over the whole life of a release.  This puts a lot of pressure 
>on the target and DLIB data sets to accommodate those updates for every 
>service install episode.  I am wondering if it might be more useful to have 
>the capability to accommodate the need for more space in an ongoing manner.  
>Meaning: installing a release for the first time - even with the data sets 
>enlarged by some predictive percentage (50%, 100%, 200%?) - still doesn't 
>prevent some data sets, or even volumes, from continually running out of 
>space, and it could leave some data sets unnecessarily large.  Would it be 
>better if z/OS itself could assist in a targeted and timely fashion when the 
>problem actually occurred?  Do you feel that if z/OSMF Software Management 
>provided the ability to do a one-time increase in the size of the allocated 
>target and DLIB data sets, that would conclusively solve your space problems 
>for these data sets?
>
>-Marna WALLE
>z/OS System Install and Upgrade
>

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
