Re: ServerPac Installs and dataset allocations
I'm assuming that if I go the re-allocate-and-copy direction I can use IEBCOPY with the COPY statement for the PDSs (I don't really want to use COPYMOD, since it will re-block everything) and COPYGRP for the PDSEs?

George Dranes
Western Illinois University
University Information Management Systems
Manager - Technical Services
1 University Circle, Morgan Hall Room 121
Macomb, IL 61455-1390
email: [EMAIL PROTECTED]
tel: 309-298-1097 x261  fax: 309-298-1451

----- Original Message -----
From: Ken Porowski [EMAIL PROTECTED]
To: IBM-MAIN@BAMA.UA.EDU
Sent: Tuesday, September 11, 2007 8:37:09 AM (GMT-0600) America/Chicago
Subject: Re: ServerPac Installs and dataset allocations

There is a way in the ServerPac dialogs to globally alter the space allocations; I usually add 25-50% to the allocations. After my initial load of the datasets I compress them all, then reallocate as needed to a minimum of 50% free in a single extent. This is probably not needed, but I like all my system datasets, and anything in the LINKLIST or LPALIST, to be at least 50% free in one extent. I don't really worry if they are allocated with a secondary as long as I'm not using it, although for OEM LINKLIST datasets I will allocate zero secondary.

Ken Porowski
AVP Systems Software
CIT Group
E: [EMAIL PROTECTED]

-----Original Message-----
George Dranes

We just recently installed z/OS 1.8 using ServerPac and then applied all of the latest RSU maintenance. I noticed that many of the SYS1 datasets on our SYSRES were apparently allocated too small by the ServerPac jobs. I know some may just need to be compressed, but others, such as SYS1.SHASLNKE, which is a PDSE, are also in 5 extents. My question is: does anyone out there try to get these allocated into one extent, or do you just leave them as-is and not worry about it because of today's DASD? It just gets kind of irritating when many of these datasets are obviously allocated (in blocks, even) too small by the ServerPac allocate jobs.
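A minimal IEBCOPY sketch of that re-allocate-and-copy step might look like the following (DD and dataset names are made up; COPY moves a plain PDS as-is, while COPYGRP copies a PDSE's program objects together with their aliases):

```jcl
//COPYLIB  EXEC PGM=IEBCOPY
//SYSPRINT DD SYSOUT=*
//* Input/output libraries - names are examples only
//OLDPDS   DD DSN=SYS1.OLD.LINKLIB,DISP=SHR
//NEWPDS   DD DSN=SYS1.NEW.LINKLIB,DISP=OLD
//OLDPDSE  DD DSN=SYS1.OLD.SHASLNKE,DISP=SHR
//NEWPDSE  DD DSN=SYS1.NEW.SHASLNKE,DISP=OLD
//SYSIN    DD *
  COPY    OUTDD=NEWPDS,INDD=OLDPDS
  COPYGRP OUTDD=NEWPDSE,INDD=OLDPDSE
/*
```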
--
For IBM-MAIN subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html
Re: VIO/HCD Questions
--unsnip---

George, the device doesn't need to be online or available. By adding the VIO=YES parameter, you are telling the system which type of device to emulate for a UNIT=VIO request. This defines the geometry of the emulated device, so the various access methods using it are device-aware and the responsible I/O Supervisor routines can make any device-dependent decisions. It's a way of saying that VIO = 3390 geometry.

That's kind of what I thought. Thanks for all of your help!
WLM questions
I was recently looking at my classification rules for subsystem type STC, and they look as follows:

1 TN GRS ___ NO TRANSACTION
1 TN CATALOG ___ NO TRANSACTION
1 TNGONLPRD ___ NO REGION
1 TNGONLTST ___ NO REGION
1 TNGDBPROD ___ NO TRANSACTION
1 TNGDBTEST ___ NO TRANSACTION
1 SPMSYSSTC ___ NO TRANSACTION
1 SPMSYSTEM ___ NO TRANSACTION
1 TNGSTC3270 ___ NO REGION
1 TNGSTCHI ___ NO REGION
1 TNGSTCMD ___ NO REGION
1 TNGSERVERS ___ NO TRANSACTION
1 TNGMONITORS ___ NO TRANSACTION
1 TNGWEBPROD ___ NO REGION
1 TNGWEBTEST ___ NO REGION

I had never used F11 before, and found there is a field "Manage Region Using Goals Of". The only STCs I know of where we use transaction goals are CICS (SERVERS). I'm curious: is TRANSACTION the default? Should I change the rest of these to REGION? For example, DBPROD contains our production DB2 regions, which I would expect to be managed with region goals, but it's set to TRANSACTION, and I do have a default defined in the DB2 subsystem type, as Cheryl Watson has in her sample:

          Qualifier                  -------Class-------
  Action  Type   Name    Start       Service    Report
  DEFAULTS:                          NEWWORK    RDB2
   1      ___

The JES2 subsystem type is also TRANSACTION. I'm just curious if I should have all of these set to REGION except CICS, or if there is a reason to leave TRANSACTION for some of these, like JES2. WLM clueless. Thanks for the help!
PDS allocation in blocks
I have a quick question I was hoping someone could answer. This is probably documented out there somewhere, but I can't find it. When I allocate a PDS in blocks, for example with LRECL=80, BLKSIZE=27920 and SPACE=(27920,(6,2,10)), I notice that an extra couple of blocks (1 track) is added to the primary allocation. I'm guessing that when allocating a PDS in blocks, the directory blocks called for, in this case 10 (1 block, rounded up to fill out the track to 2 blocks), are added to the primary allocation separately. Is this correct? When allocating in tracks or cylinders, the directory blocks are allocated directly out of the primary allocation, and therefore you get exactly the allocation you asked for. Thanks for the help.
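A quick way to reproduce this is an IEFBR14 allocation like the sketch below (the dataset name is made up); comparing a 3.4 or LISTDS display of this against a TRK-allocated equivalent shows where the extra track comes from:

```jcl
//ALLOCPDS EXEC PGM=IEFBR14
//* Block-based request: 6 blocks primary, 2 secondary, 10 dir blocks
//NEWPDS   DD DSN=MY.TEST.PDS,DISP=(NEW,CATLG),UNIT=SYSDA,
//            SPACE=(27920,(6,2,10)),
//            DCB=(RECFM=FB,LRECL=80,BLKSIZE=27920)
```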
SMF Exits Question
I was doing some clean-up work in our SMFPRMxx parmlib member and I noticed we specify the following exits:

SYS(EXITS(IEFACTRT,IEFUJI,IEFU83,IEFU84,IEFU85,IEFUTL,IEFU29,IEFUJV))
SUBSYS(STC,EXITS(IEFACTRT,IEFUJI,IEFU29,IEFU83,IEFU84,IEFU85))

The only exit that we have actually altered from IBM's default is IEFACTRT. I'm assuming the other exits specified are using IBM's default modules in LPALIB, which probably don't actually do anything. Is this correct? If so, should I just be specifying IEFACTRT as the only exit? Any help would be appreciated. Thanks.
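If the other exits really are the do-nothing IBM defaults in your shop, the trimmed SMFPRMxx statements might look like this sketch (verify against your own exit inventory before cutting anything, since an exit dropped from EXITS() is no longer invoked at all):

```text
SYS(EXITS(IEFACTRT))
SUBSYS(STC,EXITS(IEFACTRT))
```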
Help w/HSM Autobackup and Concurrent Copy
I currently have concurrent copy preferred in all of my management classes. I noticed during autobackup that datasets in use receive an RC of 8 from concurrent copy, due to (I'm assuming) the TOL(ENQF) parameter not being included in the defaults for the dump command. HSM then issues an HBACK for the dataset and it gets put on an ML1 volume. Is there a way to change the default values HSM uses for ADRDSSU, and if there is, should I change them? What do others do? Thanks for the help.
Freevol on DASD BACKUP Volumes
We have a small LPAR on which we use only DASD for our daily backups, and we have no volumes defined as spill volumes. I would like to convert our current daily backup volumes from mod-3 3390s to mod-9s. I thought I could do a FREEVOL on my backup volumes, but FREEVOL appears to require spill volumes to be defined (or tape, which isn't available on this LPAR). What would be my best option? I thought I could ADDVOL some spill volumes and then do the FREEVOL, but I would have to keep those spill volumes around indefinitely (until the backups on them expired), correct? Is there another option available? All help is appreciated.
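If the spill route turns out to be the only way, the HSM command sequence might look roughly like this sketch (volsers are hypothetical: ADDVOL a temporary spill volume, FREEVOL the old daily volume, then let the spilled backups age off before removing the spill volume):

```text
ADDVOL SP0001 UNIT(3390) BACKUP(SPILL)
FREEVOL BACKUPVOLUME(BK0001)
```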
WLM and HTTP servers
We've been using a WLM service policy based on Cheryl Watson's sample for years now, and I just had a few questions about our HTTP server classifications. We are not using scalable servers, so I just need to manage the server address spaces. I currently have our two production servers running in the same service class as our production DB2 address spaces (the ONLPRD service class), where they are managed with a velocity of 50 and an importance of 1. We also put our session manager task in this class, since it needs performance just below that of VTAM and JES. Does this sound fine? Cheryl doesn't really have any recommendations for web servers, but it sure sounds like they would fit into ONLPRD. Thanks for any recommendations you could give me.
Component Traces
I've noticed IBM's default CTIxxxx parmlib members have TRACEOPTS ON. Should I be going into these members and turning TRACEOPTS off? I doubt I need all of the tracing overhead. What do others do? Thanks for any help.
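For reference, the change in a copied CTIxxxx member is just the one statement, something like the sketch below (the exact options vary by component, and some components expect their minimal trace to stay on, so check each component's documentation before switching it):

```text
TRACEOPTS
         OFF
```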
Couple Datasets and MAXSYSTEM in monoplex
We are running a couple of LPARs in MONOPLEX mode. We have XCF, LOGR and WLM couple datasets defined, and I was curious about the MAXSYSTEM value. The sample job provided by IBM for the WLM allocation uses 32 for MAXSYSTEM, since it's the maximum value, so I went ahead and used 32 for my XCF and LOGR datasets as well. Since we are in monoplex mode, will this value cause any problems? Should I redefine these datasets using a smaller MAXSYSTEM value, and if so, what should I use? IBM seemed to think leaving them at 32 is the way to go, but I would like some other opinions. Thanks for any help.
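For comparison, a format job for a smaller, monoplex-sized sysplex CDS might look like the sketch below (the sysplex name, dataset name, volser and ITEM counts are all made up; a reformatted CDS would have to be brought in as an alternate and switched, since MAXSYSTEM can't be lowered in place):

```jcl
//FMTCDS   EXEC PGM=IXCL1DSU
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  DEFINEDS SYSPLEX(PLEX1)
    DSN(SYS1.XCF.CDS01) VOLSER(CDSVOL)
    MAXSYSTEM(2)
    CATALOG
    DATA TYPE(SYSPLEX)
      ITEM NAME(GROUP) NUMBER(50)
      ITEM NAME(MEMBER) NUMBER(100)
/*
```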
Default System BLKSIZE for PDSE
I have a question I was hoping someone on the board would have some experience with. I'm converting many of our PDSs to PDSEs. Most of these datasets are LRECL=80 and RECFM=FB. The default BLKSIZE (which I typically take) for a PDS is 27920, while the default for a PDSE is 32720. I'm curious if it's fine to take the default of 32720, even though it's not using half-track blocking like the PDS. Is it safe going with the default, or am I wasting a bunch of space? Does it matter with PDSEs, since they are read and written to differently? Thanks for all of your help.
Command to clear Sysplex Dump Directory??
I know there is a command which clears all entries from SYS1.DDIR, but I can't find any docs for it. Could someone point me to the correct manual or give me the command? Our dump directory is nearly full, and I would like to clear it without going into IPCS and manually deleting each entry. I know I've entered the command before, but I've gone totally blank. Thanks for your help.
Defrag with Consolidate option
We use HSM's ARCMVEXT exit to kick off DEFRAGs on our SMS volumes, and I've always used the default options; my ADRDSSU SYSIN member is just DEFRAG with no options. I was curious about the CONSOLIDATE option. Would it be safe for me to stick this option in? Are there any gotchas, like performance issues (longer DEFRAG times), etc., that others have run into? Thanks for your help.
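If you try it, the SYSIN change is just adding the keyword, roughly like the sketch below (the DYNAM volser is hypothetical, and whether CONSOLIDATE is accepted at all depends on your DFSMSdss level):

```text
  DEFRAG -
    DYNAM(VOL001) -
    CONSOLIDATE
```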
Re: PDSE's in LINKLIST and LLA
I'm curious as to what the consensus on this subject is. If I convert my PDSs to PDSEs, which in our case would only be in use by LINKLIST/LLA, would space be reclaimed, or would I be forced to stop LLA and issue a SETPROG LNKLST,UNALLOC to reclaim it? There is a large number of adds and deletes to these datasets, and I'm really only doing the conversion to avoid compresses of the PDSs. While I'm on the subject of PDSEs: we are on z/OS 1.7 and currently take the defaults of PDSE_LRUTIME = 60, HSP_SIZE = 0MB and PDSE_LRUCYCLES = 15. We are converting more and more PDSs to PDSEs; for example, all of our application load libraries in the RPL list in our CICS regions are now PDSEs. Are these default SMS values safe to take, and if not, what would be some initial recommended settings? Thanks for all of your help.
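For reference, those values live in the IGDSMSxx parmlib member; the sketch below just spells the current defaults out explicitly (keyword spellings should be verified against the z/OS 1.7 initialization and tuning documentation before use):

```text
PDSE_LRUTIME(60)
PDSE_LRUCYCLES(15)
PDSE_HSP_SIZE(0)
```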
Re: PDSE's in LINKLIST and LLA
I must be thick-headed, but how then would I reclaim space in these PDSEs which are in LINKLIST/LLA? I'm assuming a typical MODIFY LLA,REFRESH isn't going to do it. Thanks for your help.

Quoting Tom Marchant [EMAIL PROTECTED]:

On Fri, 15 Sep 2006 06:56:05 -0500, George D Dranes [EMAIL PROTECTED] wrote:

I'm curious as to what the consensus on this subject is. If I convert my PDSs to PDSEs, which in our case would only be in use by LINKLIST/LLA, would space be reclaimed, or would I be forced to stop LLA and issue a SETPROG LNKLST,UNALLOC to reclaim it? There is a large number of adds and deletes to these datasets, and I'm really only doing the conversion to avoid compresses of the PDSs.

SETPROG LNKLST,UNALLOC only releases the ENQ. It does *not* cause the linklist to be unallocated or closed.

Tom Marchant
Re: PDSE's in LINKLIST and LLA
The only reason I was considering the conversion was to reduce compresses also. If this is true, there would be no reason for me to convert the LLA-managed load libraries to PDSEs. Does anyone know for sure? Also, we are at z/OS 1.7 and currently take the defaults of PDSE_LRUTIME = 60, HSP_SIZE = 0MB and PDSE_LRUCYCLES = 15. We are converting more and more PDSs to PDSEs; for example, all of our application load libraries in the RPL list in our CICS regions are now PDSEs. Are these default SMS values safe to take, and if not, what would be some initial recommended settings? Thanks for the help. I wouldn't know who to ask if it wasn't for the Listserv!

Quoting Patrick O'Keefe [EMAIL PROTECTED]:

On Thu, 14 Sep 2006 16:37:05 -0500, George Dranes [EMAIL PROTECTED] wrote:

... Is there any problem for me to convert these plain old PDS loadlibs to PDSEs, which would be in the LINKLIST and LLA? We currently have PDSEs in our CICS RPL concatenations and don't seem to have any problems. Thanks for your help. ...

The one thing I ran into several releases ago (maybe as far back as 2.10) was that freed space wasn't reclaimed as long as XCFAS and LLA had the dataset allocated (or maybe open?). And eliminating compresses was the only reason I had made the dataset a PDSE. :-( We had the dataset in the linklist on 6 or 7 LPARs. Going to the hassle of getting it freed was not worth the effort. It stayed full, with no members, while I STEPLIBed to a PDS until we IPLed the last LPAR. As each LPAR was IPLed, the PDSE was replaced with the PDS I was STEPLIBing to. As far as I know this is still a restriction of PDSEs in the linklist.
ACS Data Class Routine Question
I've been playing around with our ACS routines and I've run into a wall with the data class routine. I'm curious if there is a way to determine the RECFM value within the ACS routine. We have a data class called SRCVLIB for PDSs with RECFM=VB. The problem arises with our vendor products, which don't use our naming conventions. I would like to determine RECFM in the ACS routine, but it doesn't appear to be a read-only variable available to me in the routine. I'm wondering if there is some trick I don't know of that would allow me to access this value within the routine; it would allow me to assign the correct data class to these files (VB or FB). How does everyone else handle these situations? Thanks for your help.
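Since RECFM isn't among the ACS read-only variables, the usual workaround is filtering on dataset names or other available variables. A hypothetical sketch of a data class routine doing that (the FILTLIST masks are made up for illustration; only SRCVLIB is from the original post):

```text
PROC DATACLAS
  FILTLIST VBSRC INCLUDE(VENDOR1.**.SRCLIB,        /* hypothetical  */
                         VENDOR2.SRC.**)           /* vendor masks  */
  SELECT
    WHEN (&DSN = &VBSRC)
      SET &DATACLAS = 'SRCVLIB'
    OTHERWISE
      SET &DATACLAS = ''
  END
END
```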
Enlarged VVDSs causing problems
We recently enlarged the VVDS on 30 or so SMS volumes because they were extremely small. I ran DIAGNOSE against each of the volumes beforehand and everything checked out clean. To enlarge them we used a product from DINO-Software called T-REX (which has really been a great product), which exports the records, lets you delete and recreate the VVDS, and then imports the records back. I ran DIAGNOSE again after this was done and everything checked out fine (including VVDSFIX with the check option).

I have recently noticed some issues with these volumes, mainly from DFSMShsm's perspective (this is z/OS 1.7, by the way). When I do an HMIG on a dataset from one of these volumes (not all fail), it migrates fine but does not clean the VVDS entry out, so I have an orphaned entry. What is really weird is that it appears to update the space map, because when I attempt an IDCAMS DELETE NVR, the entry is not found; yet if I list the VVDS, it is there, and DIAGNOSE sees it as a problem. I am also receiving ARC0734I RC70 RSN2 messages from HSM during autobackup and automigration, yet I see no problems with any of the VVDSs when running the diagnosis tools against these volumes. If I use ISPF 3.4 to delete a file that failed migration from one of these volumes, the VVDS is updated correctly, so it appears possibly to be an HSM thing. I have IBM's tech support and DINO-Software working on it, but I thought I'd run it by you experts on the board. Thanks for your help. By the way, we have recently IPLed and it is still happening.
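For anyone wanting to check a VVDS the same way, a DIAGNOSE run against a single VVDS looks roughly like this IDCAMS sketch (the volser is hypothetical):

```jcl
//DIAGVVDS EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//VVDSDD   DD DSN=SYS1.VVDS.VVOL001,DISP=SHR
//SYSIN    DD *
  DIAGNOSE VVDS INFILE(VVDSDD)
/*
```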
Re: Enlarged VVDSs causing problems
It was really for consistency. We mainly have 15-track VVDSs, so we wanted to make them all 15 tracks. I've also always heard that you don't want your VVDSs to go into extents, for performance reasons.

Quoting Ted MacNEIL [EMAIL PROTECTED]:

We recently enlarged the VVDS on 30 or so SMS volumes because they were extremely small

VVDSs are allowed to take extents, and since they are VSAM (123 extents), I don't understand the issue. The default used to be 4 tracks per extent, but if you do your homework, you can do better than that. Also, your VVDS doesn't have to be very large.

-teD
Marching to the beat of a different flute
Re: IEFACTRT and routing questions
I was able to eliminate the IEF170I on STCs by setting the MCSFLAG field in IEEACTRT to the following:

MCSFLAG  DC  B'1000'          *0123456789ABCDEF

The WTO now goes to our consoles and to hardcopy, since our hardcopy is defined as ROUTCODE(ALL) in our CONSOLxx member. Thanks for the help.

Quoting David Andrews [EMAIL PROTECTED]:

On Mon, 2006-05-01 at 08:35 -0500, George Dranes wrote:

This output is only written to hardcopy, not the consoles.

In our exit we let the ROUTE codes and MCSFLAGS default. We use ROUTCDE=(13),DESC=(6) and then configure SYS1.PARMLIB(CONSOLxx) appropriately to route the messages where we want. (We don't use ROUTCDE=11 because this results in duplicate IEF196I messages when IEFACTRT runs under a WLM initiator.)

David Andrews
A. Duda and Sons, Inc.
[EMAIL PROTECTED]
Re: ML2 Datasets not Expiring
Thanks Chris, that was it. I think in this case, instead of changing EXPIREDDATASETS to SCRATCH, I'm going to change RETENTION LIMIT to 0 for that particular management class; I'm just not sure how many datasets, and which datasets, have expiration dates and would get deleted. Correct me if I'm wrong: changing the RETENTION LIMIT to 0 tells HSM not to pay any attention to the retention period entered by the user, but to use the management class settings instead. Once again, thanks for the help.
ML2 Datasets not Expiring
I have a management class DB2LOGMC defined as:

Expiration Attributes
  Expire after Days Non-usage . . 60       (1 to or NOLIMIT)
  Expire after Date/Days  . . . . 60       (0 to , /mm/dd or NOLIMIT)
  Retention Limit . . . . . . . . NOLIMIT  (0 to or NOLIMIT)

I've noticed secondary space management doesn't appear to be expiring these datasets. As an example, the ML2 TTOC record for dataset DB2DBD0.DBD1.ARCLG1.D05012.T0701067.B517 shows an Exp date of 05/02/09.

MCDS record:
  DSN=DB2DBD0.DBD1.ARCLG1.D05012.T0701067.B517
  MIGVOL=030186  DSO=PS  SDSP=NO
  LAST REF=05/01/12  MIG=05/02/08  TRKS=013  2K BLKS= ***
  TIMES MIG= 001  16K BLKS=08  LAST MIGVOL=*NONE*

BCDS record:
  DSN=DB2DBD0.DBD1.ARCLG1.D05012.T0701067.B517
  BACK FREQ = ***  MAX VERS=***
  BDSN=DFHSM.BACK.T023902.DB2DBD0.DBD1.J5013
  BACKVOL=030413  FRVOL=SMS140  BACKDATE=05/01/13  BACKTIME=02:39:00
  CAT=YES  GEN=000  VER=001  UNS/RET= NO  RACF IND=NO  BACK PROF=NO

The catalog record:
NONVSAM ------- DB2DBD0.DBD1.ARCLG1.D05012.T0701067.B517
  HISTORY
    DATASET-OWNER-----(NULL)   CREATION--------2005.012
    RELEASE----------------2   EXPIRATION------2005.040
    ACCOUNT-INFO-----------------------------------(NULL)
  SMSDATA
    STORAGECLASS ------DB2SC   MANAGEMENTCLASS-DB2LOGMC
    DATACLASS ---------DB2DC   LBACKUP ---.XXX.
  VOLUMES
    VOLSER------------MIGRAT   DEVTYPE------X'78048083'   FSEQN-
  ASSOCIATIONS--------(NULL)
  ATTRIBUTES

These are DB2 logs, and it does appear, looking at the catalog entry, that DB2 is specifying its own expiration date (28 days) for these datasets. Shouldn't secondary space management still be expiring these ML2 datasets? What am I doing wrong? Thanks for any help.