Re: SDSF syslog slow access, was: Dump management
We run an SDSF LOG command in batch periodically to try to reduce the long wait for the first log access.

Bill

On Thu, 24 Jul 2008 06:33:33 +0200, Barbara Nitz [EMAIL PROTECTED] wrote:

>> How much syslog do you keep in your spool (hours, days, weeks)? Also, that depends on how often someone does it and updates HASPINDX.
>
> 24 hours on every system. Which is why I'd rather use operlog when I know something stretches across midnight. I'll keep in mind that the long time for the first access is due to the HASPINDX update, should this become a general complaint around here... It always feels like forever, and yes, mostly on those systems nobody ever really has a need to look at.
>
> Best regards, Barbara

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html
Re: Dump management
On Wed, 23 Jul 2008 06:44:49 +0200, Barbara Nitz [EMAIL PROTECTED] wrote:

> When I established operlog here, I got temporary update access to all ISPF profiles for the company to change the default "log a" in SDSF to "log s" to not 'irritate everybody'.

Since it was too late for me to change the default in the SDSF parms, I used the ISFUSER exit instead of updating profiles.

http://bama.ua.edu/cgi-bin/wa?A2=ind0510&L=ibm-main&D=1&O=D&X=5C28D7764F4450405C&P=75777

--
Mark Zelden
Sr. Software and Systems Architect - z/OS Team Lead
Zurich North America / Farmers Insurance Group - ZFUS G-ITO
mailto:[EMAIL PROTECTED]
z/OS Systems Programming expert at http://expertanswercenter.techtarget.com/
Mark's MVS Utilities: http://home.flash.net/~mzelden/mvsutil.html
Re: Dump management
On Wed, 23 Jul 2008 06:44:49 +0200, Barbara Nitz [EMAIL PROTECTED] wrote:

> The thing that I really hate is that it also takes just about forever when I look into syslog - log s (even on the same system).

How much syslog do you keep in your spool (hours, days, weeks)? Also, it depends on how often someone does it and updates HASPINDX. Sometimes my first access in the morning is slow if it is one of the systems where operators don't look at the syslog (LPARs that don't run batch jobs, like SAP and WebSphere). Some of our systems keep 24 hours, others only 12 hours.

Mark

--
Mark Zelden
Sr. Software and Systems Architect - z/OS Team Lead
Zurich North America / Farmers Insurance Group - ZFUS G-ITO
mailto:[EMAIL PROTECTED]
z/OS Systems Programming expert at http://expertanswercenter.techtarget.com/
Mark's MVS Utilities: http://home.flash.net/~mzelden/mvsutil.html
Re: Dump management
I just have our scheduler system submit a batch SDSF job every morning at 07:00 that issues a LOG S followed by ++ALL, and that updates HASPINDX. No more waiting on SYSLOG when I get in each morning.

Dennis

-----Original Message-----
From: IBM Mainframe Discussion List [mailto:[EMAIL PROTECTED] On Behalf Of Mark Zelden
Sent: Wednesday, July 23, 2008 7:50 AM
To: IBM-MAIN@BAMA.UA.EDU
Subject: Re: Dump management

> On Wed, 23 Jul 2008 06:44:49 +0200, Barbara Nitz wrote:
>> The thing that I really hate is that it also takes just about forever when I look into syslog - log s (even on the same system).
>
> How much syslog do you keep in your spool (hours, days, weeks)? Also, it depends on how often someone does it and updates HASPINDX. Sometimes my first access in the morning is slow if it is one of the systems where operators don't look at the syslog (LPARs that don't run batch jobs, like SAP and WebSphere). Some of our systems keep 24 hours, others only 12 hours.
>
> Mark
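A batch job along the lines Dennis describes might look like the sketch below. This is a hedged sketch, not his actual job: the job name, job card, and class conventions are illustrative assumptions; only the LOG S / ++ALL input comes from his post. The SDSF batch interface is invoked with PGM=SDSF and reads its commands from ISFIN.

```jcl
//HASPIDX  JOB (ACCT),'PRIME HASPINDX',CLASS=A,MSGCLASS=X
//* Hedged sketch: run SDSF in batch so the slow HASPINDX
//* rebuild happens off-shift instead of penalizing the
//* first interactive syslog access of the day.
//SDSF     EXEC PGM=SDSF
//ISFOUT   DD SYSOUT=*
//ISFIN    DD *
LOG S
++ALL
/*
```

The idea is to schedule one such job per system shortly before people arrive: displaying the single-system syslog (LOG S) forces SDSF to bring HASPINDX up to date, so interactive users no longer pay that cost.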
Re: Dump management
Nice. I wouldn't want to set that up for 30 systems, though (even if I could schedule it myself). I'm not that impatient. :-)

Mark

--
Mark Zelden
Sr. Software and Systems Architect - z/OS Team Lead
Zurich North America / Farmers Insurance Group - ZFUS G-ITO
mailto:[EMAIL PROTECTED]
z/OS Systems Programming expert at http://expertanswercenter.techtarget.com/
Mark's MVS Utilities: http://home.flash.net/~mzelden/mvsutil.html

On Wed, 23 Jul 2008 09:31:20 -0500, Dennis Trojak [EMAIL PROTECTED] wrote:

> I just have our scheduler system submit a batch SDSF job every morning at 07:00 that issues a LOG S followed by ++ALL, and that updates HASPINDX. No more waiting on SYSLOG when I get in each morning.
>
> Dennis
Re: SDSF syslog slow access, was: Dump management
> How much syslog do you keep in your spool (hours, days, weeks)? Also, that depends on how often someone does it and updates HASPINDX.

24 hours on every system. Which is why I'd rather use operlog when I know something stretches across midnight. I'll keep in mind that the long time for the first access is due to the HASPINDX update, should this become a general complaint around here... It always feels like forever, and yes, mostly on those systems nobody ever really has a need to look at.

Best regards,
Barbara
Re: Dump management
I keep dumps for 6 months, as I recall reading that that is how long DAE remembers that an SVC dump occurred...

Bruce Hewson
Re: Dump management
> How old is the oldest dump you recall having to go back to?

That was the one occurrence, and since all dumps get filtered through me anyway, I *knew* there was a saved one that was about 3 weeks old. Anyway, it took me long enough to get consistent EREP saving set up here that I can now go back through EREP records for a while (15 months). As for dumps, the usual problem is that they weren't even written, or were suppressed by DAE, when we need one.

> I'm curious; are you using an automated process to get SYSLOG to tape?

Syslog gets assigned a class that is written to our default archive system, which also archives all joblogs and assorted other files, so in production, and in theory, I can go back ten years finding syslogs. I have been unable so far to convince the powers that be that operlog is much better, as it definitely shows more (everything after a $PJES2 during shutdown), so no one really cares about operlog.

Regards,
Barbara Nitz
Re: Dump management
On Tue, 22 Jul 2008 04:17:11 -0500, Bruce Hewson [EMAIL PROTECTED] wrote:

> I keep dumps for 6 months, as I recall reading that is how long DAE remembers an SVC dump occurred...

Sort of. DAE ignores entries that haven't been updated in 180 days. Sounds like a good reason to use 6 months, although I've never had to go back to one older than the 30 days we keep them in my 6 years here.

Mark

--
Mark Zelden
Sr. Software and Systems Architect - z/OS Team Lead
Zurich North America / Farmers Insurance Group - ZFUS G-ITO
mailto:[EMAIL PROTECTED]
z/OS Systems Programming expert at http://expertanswercenter.techtarget.com/
Mark's MVS Utilities: http://home.flash.net/~mzelden/mvsutil.html
Re: Dump management
On Tue, 22 Jul 2008 14:13:27 +0200, Barbara Nitz [EMAIL PROTECTED] wrote:

> I have been unable so far to convince the powers that be that operlog is much better, as it definitely shows more (everything after a $PJES2 during shutdown), so no one really cares about operlog.

That is a minor benefit. The big benefit is having all the logs for the sysplex combined in one place when trying to diagnose a problem. There is still a 9-system sysplex here that doesn't use it because they couldn't afford the extra CF activity in their DB2 data sharing environment. But that decision goes back to 9672 CFs. I don't think it would have been an issue with the z900-100 CFs, and it certainly wouldn't be an issue today with z9 CFs. We just haven't gotten around to it yet. Anyway... trying to look back at problems in that sysplex can be a real PITA, to say the least, if you don't know which system to start looking on.

Mark

--
Mark Zelden
Sr. Software and Systems Architect - z/OS Team Lead
Zurich North America / Farmers Insurance Group - ZFUS G-ITO
mailto:[EMAIL PROTECTED]
z/OS Systems Programming expert at http://expertanswercenter.techtarget.com/
Mark's MVS Utilities: http://home.flash.net/~mzelden/mvsutil.html
Re: Dump management
> That is a minor benefit.

Not to me, it isn't. To me this is a HUGE benefit. I am the one who is supposed to figure out why 'automation didn't work' during system shutdown, especially when the operator forced down JES2 after he couldn't see the jobs/STCs that hadn't come down *before* JES. Or when they say 'automation didn't terminate'. Try to find out why when there is no syslog, no automation log, and no joblog (the thing is started SUB=MSTR). They are especially fond of saying 'the system doesn't come down' on the monoplexes (where we didn't have operlog), until I got so fed up that I forced the setup of operlog even on the monoplexes.

> The big benefit is having all the logs for the sysplex combined in one place when trying to diagnose a problem.

*That* is considered a big drawback here! When I established operlog here, I got temporary update access to all ISPF profiles for the company to change the default "log a" in SDSF to "log s" to not 'irritate everybody'. I admit that I go to syslog, too, when I have to check for a problem that I know is limited to one system, mostly when that system doesn't output a lot of lines. Scrolling past all the stuff from other systems that output a lot is tiresome. (And yes, I know about the operlog viewer...) The thing that I really hate is that it also takes just about forever when I look into syslog - log s (even on the same system).

Barbara
Dump management
Quoting Mark Zelden:

> Why not migrate them directly to ML2 with HSM?

That is what we do. They are only on DASD for a short time. We use MPF to kick off a task that extracts the dump summary and creates an ISPF table I use to manage the dumps. The dump itself is flicked to ML2 - on virtual tapes these days, so I can get them back in next to no time if needed. They stay there till I decide otherwise - I prefer to have too many than to delete one I might need later on. My (manual) purge window is around the 3-month mark (picked out of the air, no science involved).

Shane ...
Re: Dump management
Shane Ginnane wrote:

> Quoting Mark Zelden:
>> Why not migrate them directly to ML2 with HSM?
> That is what we do. They are only on DASD for a short time. We use MPF to kick off a task that extracts the dump summary and creates an ISPF table I use to manage the dumps. The dump itself is flicked to ML2 - on virtual tapes these days, so I can get them back in next to no time if needed. They stay there till I decide otherwise - I prefer to have too many than to delete one I might need later on. My (manual) purge window is around the 3-month mark (picked out of the air, no science involved).

We also use the dynamic dump allocation process and send them to our TMM pool, where they sit for three days; HSM then migrates them to tape, where they automatically expire in six months, a date I also picked out of the air.

--
Mark Jacobs
Time Customer Service
Tampa, FL

When the Nazis came for the communists, I remained silent; I was not a communist.
When they locked up the social democrats, I remained silent; I was not a social democrat.
When they came for the trade unionists, I did not speak out; I was not a trade unionist.
When they came for the Jews, I remained silent; I wasn't a Jew.
When they came for me, there was no one left to speak out.
Pastor Martin Niemöller (1892–1984)
Re: Dump management
-----Original Message-----
From: IBM Mainframe Discussion List On Behalf Of Mark Jacobs

> We also use the dynamic dump allocation process and send them to our TMM pool, where they sit for three days; HSM then migrates them to tape, where they automatically expire in six months, a date I also picked out of the air.

Likewise: we send dumps to ML2 at night, where they stay until somebody gets around to deleting them. :-)

-jc-
Re: Dump management
On Mon, 21 Jul 2008 10:28:39 -0400, Mark Jacobs [EMAIL PROTECTED] wrote:

> We also use the dynamic dump allocation process and send them to our TMM pool, where they sit for three days; HSM then migrates them to tape, where they automatically expire in six months, a date I also picked out of the air.

I just looked at the dump MGMTCLAS and ours is set for 30 days. I can't imagine needing a dump 6 months after it happened, but tape is cheap. How old is the oldest dump you recall having to go back to?

Mark

--
Mark Zelden
Sr. Software and Systems Architect - z/OS Team Lead
Zurich North America / Farmers Insurance Group - ZFUS G-ITO
mailto:[EMAIL PROTECTED]
z/OS Systems Programming expert at http://expertanswercenter.techtarget.com/
Mark's MVS Utilities: http://home.flash.net/~mzelden/mvsutil.html
Re: Dump management
Mark Zelden wrote:

> I just looked at the dump MGMTCLAS and ours is set for 30 days. I can't imagine needing a dump 6 months after it happened, but tape is cheap. How old is the oldest dump you recall having to go back to?

We have had several occurrences of needing both dumps and archived syslog/operlog data from several months in the past. As you stated, tape is cheap, and I would much rather have it and not need it than the reverse.

--
Mark Jacobs
Time Customer Service
Tampa, FL
Re: Dump management
snip--------------------------------------------------------------
We have had several occurrences of needing both dumps and archived syslog/operlog data from several months in the past. As you stated, tape is cheap, and I would much rather have it and not need it than the reverse.
-----------------------------------------------------------unsnip

I'm curious; are you using an automated process to get SYSLOG to tape? We use an automatic command to close the current log and open a new one every night at midnight, but we've never been able to completely automate the process of saving it to tape. Also, we like to keep seven calendar days' worth on the spool, for SDSF searches, etc. If we could save it to tape in a completely automated fashion, and restore it to a sequential dataset when needed, our regulators and auditors would be ECSTATIC.
Re: Dump management
Rick Fochtman wrote:

> I'm curious; are you using an automated process to get SYSLOG to tape? We use an automatic command to close the current log and open a new one every night at midnight, but we've never been able to completely automate the process of saving it to tape. Also, we like to keep seven calendar days' worth on the spool, for SDSF searches, etc. If we could save it to tape in a completely automated fashion, and restore it to a sequential dataset when needed, our regulators and auditors would be ECSTATIC.

We use a combination of IBM's operlog write-to-dataset program (IEAMDBLG), which I modified slightly to send each day's operlog to DASD; HSM then automatically migrates it to tape based on its management class. Our automated operations package starts the process five minutes prior to midnight by issuing a start command for this procedure:

//OPERLOG  PROC
//SSS      EXEC PGM=IEAMDBLG,
//         PARM='COPY(0),DELETE(0),HCFORMAT(CENTURY)'
//STEPLIB  DD  DSN=TECHSVC.TECHNAPF.LOADLIB,DISP=SHR
//SYSLOG   DD  DSN=MVSISV.OPERLOG.DLYYMMDD.,
//         DISP=(NEW,CATLG,DELETE),
//         DCB=BLKSIZE=22880,
//         SPACE=(134,(2,1),RLSE),AVGREC=M,
//         DATACLAS=PSSTRIPE,
//         RETPD=180

The dataset gets allocated with today's date, but my modification to IEAMDBLG just added a ten-minute wait, so it actually begins writing yesterday's operlog data at 12:05 AM.

--
Mark Jacobs
Time Customer Service
Tampa, FL
Re: Dump management
Our automation starts the process at 23:59:59 with a W L (writelog) command, followed by starting a job with S CONSOLE, which writes the syslog to a tape GDG (30 levels), prints a copy for our report archival system (kept for a few years), and then copies the tape to a disk GDG (3 levels).

//CONSOLE  PROC
//*
//IEFPROC  EXEC PGM=IASXWR00,PARM=PL
//IEFRDER  DD  UNIT=TAPE,VOLUME=(,RETAIN,,2),DISP=(,CATLG),
//             DSNAME=CONSOLE.DAILY.OUT.TO.TAPE(+1)
//*
//PRINT    EXEC PGM=IEBGENER
//SYSPRINT DD  SYSOUT=*
//SYSIN    DD  DUMMY
//SYSUT1   DD  DISP=(OLD,PASS),UNIT=TAPE,
//             DSN=*.IEFPROC.IEFRDER
//SYSUT2   DD  SYSOUT=U        SYSOUT CLASS FOR OUR REPORT ARCHIVAL SYSTEM
//*
//COPY     EXEC PGM=IEBGENER,COND=EVEN
//SYSPRINT DD  SYSOUT=*
//SYSIN    DD  DUMMY
//SYSUT1   DD  DISP=OLD,DSN=*.IEFPROC.IEFRDER
//SYSUT2   DD  DISP=(,CATLG),DSN=CONSOLE.DAILY.OUT.TO.DISK(+1),
//             UNIT=DISK,SPACE=(TRK,(2400,100),RLSE),MGMTCLAS=MNIG9
//*

-----Original Message-----
From: IBM Mainframe Discussion List [mailto:[EMAIL PROTECTED] On Behalf Of Rick Fochtman
Sent: Monday, July 21, 2008 10:45 AM
To: IBM-MAIN@BAMA.UA.EDU
Subject: Re: Dump management

> I'm curious; are you using an automated process to get SYSLOG to tape? We use an automatic command to close the current log and open a new one every night at midnight, but we've never been able to completely automate the process of saving it to tape. Also, we like to keep seven calendar days' worth on the spool, for SDSF searches, etc. If we could save it to tape in a completely automated fashion, and restore it to a sequential dataset when needed, our regulators and auditors would be ECSTATIC.
Re: Dump management
On Mon, 21 Jul 2008 11:45:03 -0500, Rick Fochtman [EMAIL PROTECTED] wrote:

> I'm curious; are you using an automated process to get SYSLOG to tape? We use an automatic command to close the current log and open a new one every night at midnight, but we've never been able to completely automate the process of saving it to tape. Also, we like to keep seven calendar days' worth on the spool, for SDSF searches, etc. If we could save it to tape in a completely automated fashion, and restore it to a sequential dataset when needed, our regulators and auditors would be ECSTATIC.

Yes. Automation does the writelog command at midnight and starts an STC. That STC is an external writer that picks up the syslog from class L (which is unique to syslog output in the spool) and writes the syslog to a disk GDG. After the idle message from the external writer, automation stops the external writer. The next step in the STC copies the disk GDG to a tape GDG. In sysplex environments, one LPAR per sysplex is chosen to have additional steps that dump the operlog to disk and copy it to tape. The GDG limit controls the retention of the disk files; tape may be controlled by GDG limit and/or tape management system retention period control (it may not be the same everywhere). In a few of the LPARs, the disk GDG retention is set to more than 3 or 4 GDGs, but HSM migrates those to ML1 (the GDG limit is not high enough for them ever to end up on ML2). This is quicker than recall from virtual tape that is not in the buffer. Oh... in shared spool environments, the writelog / STC only executes on one LPAR, and I have a REXX exec I wrote that splits off the syslog into separate LPARs. This was so we didn't have to dedicate a different class per LPAR for the syslog. Back in the day, output classes were hard to come by.

Mark

--
Mark Zelden
Sr. Software and Systems Architect - z/OS Team Lead
Zurich North America / Farmers Insurance Group - ZFUS G-ITO
mailto:[EMAIL PROTECTED]
z/OS Systems Programming expert at http://expertanswercenter.techtarget.com/
Mark's MVS Utilities: http://home.flash.net/~mzelden/mvsutil.html
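As a hedged sketch, the nightly flow Mark describes might look like the procedure below. Every name in it is an illustrative assumption (only IASXWR00, the external writer program, and the class-L convention come from the posts); the automation product must still issue the W (WRITELOG) command at midnight, start this STC, and stop the writer when it posts its idle message before the copy step runs.

```jcl
//SYSLARCH PROC
//* Hedged sketch of the nightly syslog off-load described above.
//* Step 1: the external writer selects class L (the class
//* reserved for syslog output) and writes it to a new
//* generation of a disk GDG.
//IEFPROC  EXEC PGM=IASXWR00,PARM=PL
//IEFRDER  DD  DSN=SYSPROG.SYSLOG.DAILY(+1),DISP=(NEW,CATLG),
//             UNIT=SYSDA,SPACE=(CYL,(300,100),RLSE)
//*
//* Step 2: after automation stops the writer, copy the disk
//* generation to a tape GDG for long-term retention.
//COPY     EXEC PGM=IEBGENER,COND=EVEN
//SYSPRINT DD  SYSOUT=*
//SYSIN    DD  DUMMY
//SYSUT1   DD  DISP=OLD,DSN=SYSPROG.SYSLOG.DAILY(+1)
//SYSUT2   DD  DSN=SYSPROG.SYSLOG.TAPE(+1),DISP=(NEW,CATLG),
//             UNIT=TAPE
```

Within a single job or started task, the relative generation number (+1) refers consistently to the same new generation in both steps, which is what lets the copy step pick up the file the writer just created. The GDG limits on the disk and tape bases then control retention, as Mark notes.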
Re: Dump management
That sounds good, Mark, but I don't have the advantage of the level of automation you seem to have. If I can't get it via JES2 automatic commands or MPF exits, I just plain ain't got it. But an MPF exit to stop the external writer when it goes idle doesn't seem like much of a challenge. :-) I have one advantage: in my three-system basic sysplex, each system's SYSLOG goes to a different class (L, M, or N), so I can at least maintain some degree of separation.

Our job streams that might benefit from automation are constructed so that each job, when completed normally, submits the next job in the stream via IEBGENER from the JOBS PDS to the internal reader. Reports are routed to desktop printers as appropriate via TCP/IP. (I don't yet know all the details here, so forgive me for being a bit vague.) The first job in each stream is submitted by a JES2 automatic command that starts the process at a specific time. I'm doing part-time consulting in a university environment, in return for free access to the system for my own development work (on RACF reporting tools and ARCHIVER upgrades). But I can't be more specific than that, because of an agreement with management.

Rick

--snip------------------------------------------------------------
Automation does the writelog command at midnight and starts an STC. That STC is an external writer that picks up the syslog from class L (which is unique to syslog output in the spool) and writes the syslog to a disk GDG. After the idle message from the external writer, automation stops the external writer. The next step in the STC copies the disk GDG to a tape GDG. In sysplex environments, one LPAR per sysplex is chosen to have additional steps that dump the operlog to disk and copy it to tape. The GDG limit controls the retention of the disk files; tape may be controlled by GDG limit and/or tape management system retention period control. In a few of the LPARs, the disk GDG retention is set to more than 3 or 4 GDGs, but HSM migrates those to ML1 (the GDG limit is not high enough for them ever to end up on ML2). This is quicker than recall from virtual tape that is not in the buffer. Oh... in shared spool environments, the writelog / STC only executes on one LPAR, and I have a REXX exec I wrote that splits off the syslog into separate LPARs. This was so we didn't have to dedicate a different class per LPAR for the syslog. Back in the day, output classes were hard to come by.
----------------------------------------------------------unsnip

Query: why didn't Noah swat those two mosquitoes??