Re: how-to Sysplex? - the LOGR and exploiters part
(Friday question about priceplex) It happens that independent systems/applications are sysplexed (connected together) just to get savings on licenses etc. Q: why not put all the systems on one large CPC? From a licensing point of view it can be as effective as a parallel sysplex. From a RAS point of view - well, application A on CPC A and app. B on CPC B is not more secure, unless you really integrate it. -- Radoslaw Skorupka Lodz, Poland -- BRE Bank SA -- For IBM-MAIN subscribe / signoff / archive access instructions, send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO Search the archives at http://bama.ua.edu/archives/ibm-main.html
Re: how-to Sysplex? - the LOGR and exploiters part
On Tue, 2009-08-25 at 01:28 -0500, Chris Craddock wrote: > ... > So I'll throw the question to the assembled throng; should IBM abandon > sysplex as the centerpiece of their growth/automation/availability strategy > for z/OS workloads, or should they remove the barriers (real and imagined) > to genuine sysplex adoption and exploitation? But if you vote for abandoning > sysplex, what's your other plan? Back when the z9 announcement was made here, I asked the product manager whether the hoopla about consolidating everything back to one footprint indicated an abandonment of the sysplex dream. Mm - of course not! The z10 and even larger engine counts seem headed in the same direction. Personally, unless it is fully implemented, I don't see the point of parallel sysplex. In other words, clusters work, and cloud is well above the event horizon. Will z/OS still be a player, or will zLinux be the dominant force in this pond? Barbara won't be happy if it's the latter ... :0) Shane ...
Re: how-to Sysplex? - the LOGR and exploiters part
On Tue, 25 Aug 2009 00:38:48 -0500, Barbara Nitz wrote: >Don, >[...] >>OFFLOAD_SYSTEMS( >> [INCLUDE(sysname1 [,sysname2]...)] >> [EXCLUDE(sysname1 [,sysname2]...)] >>) > >This was written so nicely :-), I went and checked the books if it's a function >in 1.11. Pity it isn't. I like the idea, though. Now who is going to write the >requirement?!? Yep, he got me, too... Art Gutowski
Re: Sysplex dis/advantages, was: Re: how-to Sysplex? - the LOGR and exploiters part
- Original Message From: Barbara Nitz nitz-...@gmx.net > Isn't this a very human trait: Don't change the way I have always done this, > I > am used to it, I am getting older and it is harder to learn new things? Indeed, it is human. But this way of thinking can literally kill a company. My 2 cents, of course. Walter Marguccio z/OS Systems Programmer BELENUS LOB Informatic GmbH Munich - Germany
Sysplex dis/advantages, was: Re: how-to Sysplex? - the LOGR and exploiters part
Chris, >it doesn't really alter the point I was making >that sysplex design is predicated on sharing resources everywhere. That >design philosophy is what removes single points of failure and enables the >sysplex' marquee features of high availability and in theory, horizontal >scaling. It is equally obvious that that design philosophy is STILL (nearly >20 years on) at odds with the way people by and large continue to use those >systems. Don't I know it! And I agree that in theory sysplex is a good idea for horizontal growth and availability. And for sharing. In theory. I have learned the hard way (with a lot of scars still hurting!) when I left the IBM software support glasshouse that 'the real world' doesn't care about the theory. All they want is to go on the way they always did it. But they also want to use 'the good things' (namely the pricing advantages that were introduced to push the idea of sysplex forward). Use the new stuff, but don't change anything! Isn't this a very human trait: Don't change the way I have always done this, I am used to it, I am getting older and it is harder to learn new things? (I am sticking with analogue photography, I don't want to deal with the newfangled digital stuff!) >To be utterly blunt, there are many open >systems (and even windows systems) out in the big wide world >running >business critical workloads that absolutely run rings around typical z/OS >systems configured in 1989 fashion. And no, I am not kidding about that. >Trust me. I believe you, without seeing proof, and that's saying something! Unfortunately, it is very expensive to set things up sysplex-wise to be really really available and reconfigurable at the drop of a hat. One buys the high availability with a lot of redundancy. Only the very big installations can afford that these days, I think. >I will be the last person to defend software extortion but candidly the >complexity (and development cost) of these products is so high Are they? 
I have always found the 'sysplex primitives' very straightforward and easy to grasp. I have never really understood why others don't feel like I do :-( >and their addressable market is so small that now *that* I can really believe! >So I'll throw the question to the assembled throng; should IBM abandon >sysplex as the centerpiece of their growth/automation/availability strategy >for z/OS workloads, or should they remove the barriers (real and imagined) >to genuine sysplex adoption and exploitation? But if you vote for abandoning >sysplex, what's your other plan? I want to work in z/OS until I retire, so I opt for removing barriers by making sub-sysplexing more of a strategy. :-) As that is what the PHBs all want. But to me it looks like the future may hold a z-Box, but not necessarily z/OS. IBM appears to push for zLinux, and z/OS falls down. Or the other platforms win, making z obsolete, not because they're better, but because they're less expensive. But then, what was first - the high prices or the smaller market? Best regards, Barbara
Re: how-to Sysplex? - the LOGR and exploiters part
On Tue, Aug 25, 2009 at 12:38 AM, Barbara Nitz wrote: > >The inconvenient truth of all things sysplex is that sysplex is a "shared > >everything" architecture which means for it to work correctly, everything > >must be available everywhere in the plex. Subdividing the plex with old > >fashioned ideas like "Prod runs on SYSA, development runs on SYSB and the > >sysprog sandbox is SYSC" can only lead to operational inconvenience (at > >best) or disaster (worst) > >It isn't 1989 any more folks. It is time to shift gears. This stuff (only) > >works when you don't try to fight it. If you keep configuring systems like > >it is 1989, you will get operational results like it is 1989. > > Sorry Chris, but this is spoken like someone living in a cloud. > Unfortunately, > with the apparent exception of very few huge installations, money is the > biggest concern. And software pricing. And no amount of name-calling > "you're > living behind the times" is going to change *that* management approach (a > few enlightened individuals maybe excepted, but I haven't met any > personally.) > > Oh, and did I mention that response times for IMS shared queues are sooo > bad > (confirmed by IMS development) that we CANNOT do sysplex sharing (also > confirmed by IMS development)? What are we supposed to do? Pay twice (or > more) as much just because we cannot establish 'real' data sharing as > defined > by IBM? Not use sysplex pricing to reduce software costs in order to still > *have* a mainframe/MVS/zOS? > > Out in the real world we're stuck with the mindset 'SYSA is production, > SYSB is > development, SYSC is sysprog sandbox', and please keep them separated but > join them for pricing reasons! How do you suggest 'we not fight them'? > > Barbara, I completely agree that "must" was a poor choice of word, "should" would have been better. However, it doesn't really alter the point I was making that sysplex design is predicated on sharing resources everywhere. 
That design philosophy is what removes single points of failure and enables the sysplex' marquee features of high availability and, in theory, horizontal scaling. It is equally obvious that that design philosophy is STILL (nearly 20 years on) at odds with the way people by and large continue to use those systems. That's a big dilemma for everyone (IBM, ISVs and customers) because for all its warts and perceived flaws, sysplex is still the least bad of all the ways of dealing with multiple systems. In the absence of that "shared everything" configuration, horizontal scaling is obviously moot and the actual availability you can achieve is gated by the sum of the (planned or otherwise) down-times for the software stack supporting business functionality on any one system. In practice that turns out to be embarrassingly low. To be utterly blunt, there are many open systems (and even windows systems) out in the big wide world running business critical workloads that absolutely run rings around typical z/OS systems configured in 1989 fashion. And no, I am not kidding about that. Trust me. High end sysplex users typically get what they pay for in terms of availability and scalability. Whether or not all of the individual functions (e.g. IMS CQS) that are based on sysplex primitives actually perform as desired in that sort of configuration is a subject for a separate discussion - as is the software cost argument. I will be the last person to defend software extortion but candidly the complexity (and development cost) of these products is so high and their addressable market is so small that it is irrational to expect their prices to be anything but brutally expensive, and that has nothing at all to do with sysplex. We're well into the era where a single sales person could personally know all of the customers who account for 80% of IBM's (and everyone else's) revenue in the z ecosystem. 
So I'll throw the question to the assembled throng; should IBM abandon sysplex as the centerpiece of their growth/automation/availability strategy for z/OS workloads, or should they remove the barriers (real and imagined) to genuine sysplex adoption and exploitation? But if you vote for abandoning sysplex, what's your other plan? -- This email might be from the artist formerly known as CC (or not) You be the judge.
Re: how-to Sysplex? - the LOGR and exploiters part
Don, >You could request new DEFINE/UPDATE LOGSTREAM options to control which >systems are permitted to perform offloads. Perhaps something along the lines >of: > >OFFLOAD_SYSTEMS( > [INCLUDE(sysname1 [,sysname2]...)] > [EXCLUDE(sysname1 [,sysname2]...)] >) This was written so nicely :-), I went and checked the books if it's a function in 1.11. Pity it isn't. I like the idea, though. Now who is going to write the requirement?!? >The inconvenient truth of all things sysplex is that sysplex is a "shared >everything" architecture which means for it to work correctly, everything >must be available everywhere in the plex. Subdividing the plex with old >fashioned ideas like "Prod runs on SYSA, development runs on SYSB and the >sysprog sandbox is SYSC" can only lead to operational inconvenience (at >best) or disaster (worst) >It isn't 1989 any more folks. It is time to shift gears. This stuff (only) >works when you don't try to fight it. If you keep configuring systems like >it is 1989, you will get operational results like it is 1989. Sorry Chris, but this is spoken like someone living in a cloud. Unfortunately, with the apparent exception of very few huge installations, money is the biggest concern. And software pricing. And no amount of name-calling "you're living behind the times" is going to change *that* management approach (a few enlightened individuals maybe excepted, but I haven't met any personally.) Oh, and did I mention that response times for IMS shared queues are sooo bad (confirmed by IMS development) that we CANNOT do sysplex sharing (also confirmed by IMS development)? What are we supposed to do? Pay twice (or more) as much just because we cannot establish 'real' data sharing as defined by IBM? Not use sysplex pricing to reduce software costs in order to still *have* a mainframe/MVS/zOS? 
Out in the real world we're stuck with the mindset 'SYSA is production, SYSB is development, SYSC is sysprog sandbox', and please keep them separated but join them for pricing reasons! How do you suggest 'we not fight them'? Barbara
Re: how-to Sysplex? - the LOGR and exploiters part
>> The inconvenient truth of all things sysplex is that sysplex is a "shared >> everything" architecture which means for it to work correctly, everything >> must be available everywhere in the plex. While I respect CC, greatly, I disagree with "must". >>Subdividing the plex with old fashioned ideas like "Prod runs on SYSA, >>development runs on SYSB and the >> sysprog sandbox is SYSC" can only lead to operational inconvenience (at >> best) or disaster (worst) I have seen/worked with 'partitioned' SYSPLEX environments. I've had up to 5 images in a SYSPLEX, not as large as some, and more in a GDPS environment. We shared IMS & DB2 in up to three, and CICS with DB2 in up to two. I've partitioned off two MASPLEX environments, and had two separate development environments. I've also worked with a 'shared nothing' SYSPLEX, except for system, consoles, RACF, etc. I'm not trying to brag, rather to point out that I don't think "must" is accurate. SYSPLEX facilitates sharing, but it doesn't make it mandatory. >> >> It isn't 1989 any more folks. It is time to shift gears. This stuff (only) >> works when you don't try to fight it. If you keep configuring systems like > it is 1989, you will get operational results like it is 1989. >> Again, I disagree. SYSPLEX gives you many options. >> (Ooh, did I say that out loud? My bad!) >As a VSE shop migrating to z/OS I have little understanding of these things. >In VSE we have two development guests (under VM) and two production, and never >the twain shall meet. >In my VSE mindset I think that under z/OS we would have development LPARs >separate from production LPARs. That is a valid configuration. >Is this not the case? Yes. And, you can enforce it through security and configuration options. For example: 1. Only assign test classes to the development LPARs. 2. Don't start TSO on production. 3. Use your security product to restrict access to production. 4. etc. 
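To make point 3 above concrete, here is a minimal RACF sketch. The PROD high-level qualifier and the group names PRODBAT and OPSGRP are made-up examples, not anything from this thread; adjust to your shop's conventions:

```
/* Assume production data lives under a PROD HLQ                 */
ADDSD    'PROD.**' UACC(NONE)

/* Hypothetical groups: PRODBAT may update, OPSGRP may only read */
PERMIT   'PROD.**' ID(PRODBAT) ACCESS(UPDATE)
PERMIT   'PROD.**' ID(OPSGRP)  ACCESS(READ)

/* Activate and refresh generic data set profiles                */
SETROPTS GENERIC(DATASET) REFRESH
```

Combined with restricting job classes and not starting TSO on production (points 1 and 2), that gives you most of the separation without needing separate plexes.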
>Are there special "z/OS" things (I am not a Sysprog) that keep test jobs >nicely segregated from prod, even when running in the same LPAR? I would not run test & prod on the same LPAR, but that's me. - Too busy driving to stop for gas!
Re: how-to Sysplex? - the LOGR and exploiters part
>>> On 8/24/2009 at 10:10 AM, Chris Craddock wrote: > The inconvenient truth of all things sysplex is that sysplex is a "shared > everything" architecture which means for it to work correctly, everything > must be available everywhere in the plex. Subdividing the plex with old > fashioned ideas like "Prod runs on SYSA, development runs on SYSB and the > sysprog sandbox is SYSC" can only lead to operational inconvenience (at > best) or disaster (worst) > > It isn't 1989 any more folks. It is time to shift gears. This stuff (only) > works when you don't try to fight it. If you keep configuring systems like > it is 1989, you will get operational results like it is 1989. > > (Ooh, did I say that out loud? My bad!) As a VSE shop migrating to z/OS I have little understanding of these things. In VSE we have two development guests (under VM) and two production, and never the twain shall meet. In my VSE mindset I think that under z/OS we would have development LPARs separate from production LPARs. Is this not the case? Are there special "z/OS" things (I am not a Sysprog) that keep test jobs nicely segregated from prod, even when running in the same LPAR? 
-- Frank Swarbrick Applications Architect - Mainframe Applications Development FirstBank Data Corporation - Lakewood, CO USA P: 303-235-1403
Re: how-to Sysplex? - the LOGR and exploiters part
I agree that it is an inconvenient truth: IBM has designed the sysplex to work best when resources are equally accessible by every system. However, the real world does not seem to be symmetric. So forcing the square peg (i.e., the real world) into the round hole (sysplex) may not accomplish a customer's goals. Therefore, IBM's sysplex design may need some alterations as well. Don Williams -Original Message- From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On Behalf Of Chris Craddock Sent: Monday, August 24, 2009 12:10 PM To: IBM-MAIN@bama.ua.edu Subject: Re: how-to Sysplex? - the LOGR and exploiters part > The inconvenient truth of all things sysplex is that sysplex is a "shared everything" architecture which means for it to work correctly, everything must be available everywhere in the plex. Subdividing the plex with old fashioned ideas like "Prod runs on SYSA, development runs on SYSB and the sysprog sandbox is SYSC" can only lead to operational inconvenience (at best) or disaster (at worst) It isn't 1989 any more folks. It is time to shift gears. This stuff (only) works when you don't try to fight it. If you keep configuring systems like it is 1989, you will get operational results like it is 1989. (Ooh, did I say that out loud? My bad!) -- This email might be from the artist formerly known as CC (or not) You be the judge.
Re: how-to Sysplex? - the LOGR and exploiters part
The inconvenient truth of all things sysplex is that sysplex is a "shared everything" architecture which means for it to work correctly, everything must be available everywhere in the plex. Subdividing the plex with old fashioned ideas like "Prod runs on SYSA, development runs on SYSB and the sysprog sandbox is SYSC" can only lead to operational inconvenience (at best) or disaster (at worst). It isn't 1989 any more folks. It is time to shift gears. This stuff (only) works when you don't try to fight it. If you keep configuring systems like it is 1989, you will get operational results like it is 1989. (Ooh, did I say that out loud? My bad!) -- This email might be from the artist formerly known as CC (or not) You be the judge.
Re: how-to Sysplex? - the LOGR and exploiters part
You could request new DEFINE/UPDATE LOGSTREAM options to control which systems are permitted to perform offloads. Perhaps something along the lines of: OFFLOAD_SYSTEMS( [INCLUDE(sysname1 [,sysname2]...)] [EXCLUDE(sysname1 [,sysname2]...)] ) Don Williams -Original Message- From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On Behalf Of Barbara Nitz Sent: Monday, August 24, 2009 2:38 AM To: IBM-MAIN@bama.ua.edu Subject: Re: how-to Sysplex? - the LOGR and exploiters part >Barbara, we don't data share with DB2, so we don't see the problems with RRS >offloads. Does the introduction of the data sharing in a subplex force you to >share structures for the logstreams, even though they are sub-plexed? The problem in itself is NOT DB2 data sharing, it is the fact that DB2 uses RRS for some administrative tasks unequivocally. I am NOT a DB2 person. From what I understand, in order to see some of the current DB2 definitions, you need RRS to be active. Again, it is NO problem to separate RRS to use different log streams on different subplexes. It is the fact that RRS in turn uses the LOGR that causes the problems (from what I understand, in several different components). In all cases it is the design for LOGR offload to harden data to DASD at disconnect from the logstream. Short reminder of how the design currently works: Connector to logstream disconnects. LOGR issues an (XCF) signal to all remaining connectors to that logstream that offload is necessary. All remaining connectors enter the race condition to do the offload. At this point, things like number of CPUs and LPAR weight come into play as to *who* wins that race. In the case of operlog, it may be someone on a subplex that does not share the SMS constructs. Result: corrupted log stream. 
In the case of RRS, it may be that someone from another system (in another subsysplex) has just started to browse a log stream of 'the other group', creating a temporary connection to the log stream that is in the process of being offloaded. Again, offload can then happen in the wrong subplex. Result: corrupted log stream. In the case of SMF, offload can happen on a system that is itself in shutdown, so it may happen that the offload gets forcibly terminated by that system being wait-stated as a result of the V XCF,sysname,OFFLINE. Result: corrupted log stream. All of you going to share: Please raise a requirement against the LOGR offload process to be redesigned! (But please don't ask me how :-) ) >> simply connects to the operlog structure and reads it out. There is no >> check in place if operlog is actually *enabled* on that system! >And, that's how it should be. And *that* is probably well hidden in some manual. I had expected different behaviour, namely only the ability to browse the log stream on the systems where operlog is written to. Until I saw the corrupted operlog, I had no reason to assume that it is otherwise. We run both, with operlog being considered mostly a nuisance because it shows 'too much'. Most people prefer syslog, anyway. But they are not the ones who have to investigate shutdown and IPL problems when there is no JES2! In the meantime, 'I have adapted.' >FWIW, I submitted a SHARE requirement years ago to allow multiple >OPERLOG log streams in a sysplex. The intent was that each system could >substitute its XCF group name into the log stream name. Then you could >have an OPERLOG for each JESplex. I think THAT would be useful! Can I have the number, please? I would like to concur with that! (Mark will not want to configure his systems that way, but then at least you have a 'choice'.) That leaves the offload problem, though! >That's why (E)JES implements a feature called "Syslog auto cmd routing". 
Which is kinda available for S/MCS consoles as CONTROL V,CMDSYS=name. CMDSYS is also an attribute for an EMCS console, but I am missing how to specify that via SDSF (that controls how my SDSF console is set up). And while I can probably change the cmdsys setting via some sort of RACF command, I am probably not authorized to issue those RACF commands! Best regards, Barbara
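For what it's worth, here is where Don's suggested keyword might sit in an IXCMIAPU LOGR policy job. To be clear, OFFLOAD_SYSTEMS is exactly the keyword being requested - it does not exist today - and the structure, data class and system names below are invented for illustration:

```
//DEFLOGR  EXEC PGM=IXCMIAPU
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  DATA TYPE(LOGR) REPORT(YES)
  DEFINE LOGSTREAM NAME(SYSPLEX.OPERLOG)
         STRUCTNAME(OPERLOG_STR)
         LS_DATACLAS(LOGRDC)
         HLQ(IXGLOGR)
  /* Proposed, not real: limit offload to systems that share the */
  /* SMS constructs, so the offload race cannot be won by a      */
  /* system in the wrong subplex:                                */
  /*   OFFLOAD_SYSTEMS( INCLUDE(SYS1,SYS2) )                     */
/*
```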
Re: how-to Sysplex? - the LOGR and exploiters part
>The problem in itself is NOT DB2 data sharing, it is the fact that DB2 uses >RRS for some administrative tasks unequivocally. I am NOT a DB2 person. From >what I understand, in order to see some of the current DB2 definitions, you need RRS to be active. It uses RRS for DDF, for sure. I was involved in a DDF POC (2004) and RRS had to be 'customised'. It caused a political battle because RRS was considered part of the OS, which was supported by a different group than the one supporting DB2. If anybody's interested in the ins and outs, hit the archives. I posted the (then) current redbooks explaining the process. - Too busy driving to stop for gas!
Re: how-to Sysplex? - the LOGR and exploiters part
>Barbara, we don't data share with DB2, so we don't see the problems with RRS >offloads. Does the introduction of the data sharing in a subplex force you to >share structures for the logstreams, even though they are sub-plexed? The problem in itself is NOT DB2 data sharing, it is the fact that DB2 uses RRS for some administrative tasks unequivocally. I am NOT a DB2 person. From what I understand, in order to see some of the current DB2 definitions, you need RRS to be active. Again, it is NO problem to separate RRS to use different log streams on different subplexes. It is the fact that RRS in turn uses the LOGR that causes the problems (from what I understand, in several different components). In all cases it is the design for LOGR offload to harden data to DASD at disconnect from the logstream. Short reminder of how the design currently works: Connector to logstream disconnects. LOGR issues an (XCF) signal to all remaining connectors to that logstream that offload is necessary. All remaining connectors enter the race condition to do the offload. At this point, things like number of CPUs and LPAR weight come into play as to *who* wins that race. In the case of operlog, it may be someone on a subplex that does not share the SMS constructs. Result: corrupted log stream. In the case of RRS, it may be that someone from another system (in another subsysplex) has just started to browse a log stream of 'the other group', creating a temporary connection to the log stream that is in the process of being offloaded. Again, offload can then happen in the wrong subplex. Result: corrupted log stream. In the case of SMF, offload can happen on a system that is itself in shutdown, so it may happen that the offload gets forcibly terminated by that system being wait-stated as a result of the V XCF,sysname,OFFLINE. Result: corrupted log stream. All of you going to share: Please raise a requirement against the LOGR offload process to be redesigned! 
(But please don't ask me how :-) ) >> simply connects to the operlog structure and reads it out. There is no >> check in place if operlog is actually *enabled* on that system! >And, that's how it should be. And *that* is probably well hidden in some manual. I had expected different behaviour, namely only the ability to browse the log stream on the systems where operlog is written to. Until I saw the corrupted operlog, I had no reason to assume that it is otherwise. We run both, with operlog being considered mostly a nuisance because it shows 'too much'. Most people prefer syslog, anyway. But they are not the ones who have to investigate shutdown and IPL problems when there is no JES2! In the meantime, 'I have adapted.' >FWIW, I submitted a SHARE requirement years ago to allow multiple >OPERLOG log streams in a sysplex. The intent was that each system could >substitute its XCF group name into the log stream name. Then you could >have an OPERLOG for each JESplex. I think THAT would be useful! Can I have the number, please? I would like to concur with that! (Mark will not want to configure his systems that way, but then at least you have a 'choice'.) That leaves the offload problem, though! >That's why (E)JES implements a feature called "Syslog auto cmd routing". Which is kinda available for S/MCS consoles as CONTROL V,CMDSYS=name. CMDSYS is also an attribute for an EMCS console, but I am missing how to specify that via SDSF (that controls how my SDSF console is set up). And while I can probably change the cmdsys setting via some sort of RACF command, I am probably not authorized to issue those RACF commands! Best regards, Barbara
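The command association Barbara mentions can be driven with ordinary operator commands. SYS2 is a placeholder system name, and the text after the arrows is commentary rather than command syntax:

```
K V,CMDSYS=SYS2    <- CONTROL V,CMDSYS= associates this console's
                      unrouted commands with system SYS2
D C                <- DISPLAY CONSOLES shows the current CMDSYS value
RO SYS2,D IPLINFO  <- explicit one-shot routing via the ROUTE command
```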
Re: how-to Sysplex? - the LOGR and exploiters part
Tom Marchant wrote: On Fri, 21 Aug 2009 14:10:45 -0700, Edward Jaffe wrote: ... (E)JES implements a feature called "Syslog auto cmd routing". With this enabled, commands not explicitly routed are automatically routed to the system whose SYSLOG you are browsing. Wow. Cool feature! Thanks. We aim to please. 8-) -- Edward E Jaffe Phoenix Software International, Inc 5200 W Century Blvd, Suite 800 Los Angeles, CA 90045 310-338-0400 x318 edja...@phoenixsoftware.com http://www.phoenixsoftware.com/
Re: how-to Sysplex? - the LOGR and exploiters part
On Fri, 21 Aug 2009 14:10:45 -0700, Edward Jaffe wrote: >... (E)JES implements a feature called "Syslog auto cmd routing". >With this enabled, commands not explicitly routed are automatically >routed to the system whose SYSLOG you are browsing. Wow. Cool feature! -- Tom Marchant
Re: how-to Sysplex? - the LOGR and exploiters part
Mark Zelden wrote:
>On Fri, 21 Aug 2009 12:02:49 -0700, Edward Jaffe wrote:
>>FWIW, I submitted a SHARE requirement years ago to allow multiple
>>OPERLOG log streams in a sysplex. The intent was that each system could
>>substitute its XCF group name into the log stream name. Then you could
>>have an OPERLOG for each JESplex. I think THAT would be useful!
>
>That would be a nice option. But I like OPERLOG for the exact opposite
>reason. I can have a single log to look at with all systems (or a single
>different system with filtering) that is in a different JESplex within
>my sysplexes. I can't do that with SYSLOG if the spool isn't shared.

Of course, you would not have the all-systems-at-once OPERLOG option if my suggestion was implemented and you were configured to use it. The existing support is confusing for some users seeing job names and numbers from another JESplex in the log.

As I envision it, and as I would implement the browser support in (E)JES, you would still be able to connect to any OPERLOG log stream you wanted to see from anywhere in the SYSPLEX. Presumably, other OPERLOG browser products would follow suit. Filtering OPERLOG by JESplex requires a system naming convention that differentiates by JESplex.

JES3 customers are used to a merged SYSLOG for the entire JESplex. An OPERLOG per JESplex would provide them with 100% equivalent function. JES2 customers must browse one SYSLOG per system. Even that is sometimes confusing for them: e.g., browsing SYS2's SYSLOG while logged on to SYS1, issuing a system command without ROUTE, and having it execute on SYS1. That's why (E)JES implements a feature called "Syslog auto cmd routing". With this enabled, commands not explicitly routed are automatically routed to the system whose SYSLOG you are browsing.
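For illustration only, a sketch of what the proposed requirement might look like in a LOGR policy. The &XCFGRP. symbol does NOT exist today; it stands in for the "substitute the XCF group name into the log stream name" idea, and all names are invented:

```jcl
//DEFLOGR  JOB (ACCT),'PER-JESPLEX OPERLOG'
//STEP1    EXEC PGM=IXCMIAPU
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  DATA TYPE(LOGR) REPORT(NO)
   /* Hypothetical: one OPERLOG stream per JESplex. The &XCFGRP.  */
   /* substitution is the proposed function, not current support. */
  DEFINE LOGSTREAM NAME(SYSPLEX.OPERLOG.&XCFGRP.)
         STRUCTNAME(OPERLOG1)
         HLQ(LOGR) LS_SIZE(1024)
         HIGHOFFLOAD(80) LOWOFFLOAD(0)
/*
```

Each JESplex would resolve to its own stream name, so a browser could still connect to any of them from anywhere in the sysplex, as described above.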
--
Edward E Jaffe, Phoenix Software International, Inc
Re: how-to Sysplex? - the LOGR and exploiters part
On Fri, 21 Aug 2009 13:45:34 -0500, Mark Zelden wrote:
>All you have to do is define the volumes / pools (storage group) in the
>different SMSplexes that you want to share and then have consistent ACS
>routines. It's that simple.
>
To clarify that statement, I meant "consistent" only for those data sets you wish to SMS control across the SMSplexes. Not the entire set of ACS routines.

Mark
--
Mark Zelden
Sr. Software and Systems Architect - z/OS Team Lead
Zurich North America / Farmers Insurance Group - ZFUS G-ITO
mailto:mark.zel...@zurichna.com
z/OS Systems Programming expert at http://expertanswercenter.techtarget.com/
Mark's MVS Utilities: http://home.flash.net/~mzelden/mvsutil.html
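To make "consistent only for those data sets" concrete: a minimal sketch of the storage group ACS fragment that would be replicated verbatim in each SMSplex (HLQ and group name are made up; the rest of each plex's ACS routines can differ):

```
PROC STORGRP
  /* Identical fragment in every SMSplex sharing the pool:      */
  /* route the shared logger offload data sets to the storage   */
  /* group that is defined the same way in each SMSplex.        */
  FILTLIST LOGRDSN INCLUDE(LOGR.**)
  WHEN (&DSN = &LOGRDSN)
    SET &STORGRP = 'SGLOGR'
END
```

Only this fragment (and the matching storage group / volume definitions) needs to agree across the SMSplexes.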
Re: how-to Sysplex? - the LOGR and exploiters part
On Fri, 21 Aug 2009 12:02:49 -0700, Edward Jaffe wrote:
>FWIW, I submitted a SHARE requirement years ago to allow multiple
>OPERLOG log streams in a sysplex. The intent was that each system could
>substitute its XCF group name into the log stream name. Then you could
>have an OPERLOG for each JESplex. I think THAT would be useful!
>
That would be a nice option. But I like OPERLOG for the exact opposite reason. I can have a single log to look at with all systems (or a single different system with filtering) that is in a different JESplex within my sysplexes. I can't do that with SYSLOG if the spool isn't shared.

Mark
Re: how-to Sysplex? - the LOGR and exploiters part
Barbara Nitz wrote:
>We *thought* we were safe on this front. Until we found out that the
>operlog logstream gets corrupted on a regular basis because it gets
>offloaded on the wrong subsysplex where the offload datasets cannot be
>found from the 'other side'. One would have thought (and it came as a
>BIG surprise to me) that an SDSF session cannot access operlog if
>operlog is NOT enabled on that system. Boy, have I been wrong here! It
>is extremely easy to access the operlog from one subplex from the other
>side that actually should not access that. Just type a simple SDSF
>LOG O on the system that does NOT have operlog enabled, and you'll get
>it because SDSF simply connects to the operlog structure and reads it
>out. There is no check in place if operlog is actually *enabled* on
>that system!

And, that's how it should be. If you issue V SYSLOG,HARDCPY,OFF you can still view SYSLOG. The only difference is that the system is no longer writing to it. Likewise with OPERLOG. If you issue V OPERLOG,HARDCPY,OFF you can still view OPERLOG to see what was written up to the point where the VARY was issued. This becomes extremely important when OPERLOG gets unilaterally disabled by the system due to an error in LOGR--which I have seen BTW. Imagine if you have no hardcopy and can't view what happened just before it got disabled! You would be totally blind! Yikes!

FWIW, I submitted a SHARE requirement years ago to allow multiple OPERLOG log streams in a sysplex. The intent was that each system could substitute its XCF group name into the log stream name. Then you could have an OPERLOG for each JESplex. I think THAT would be useful!
--
Edward E Jaffe, Phoenix Software International, Inc
Re: how-to Sysplex? - the LOGR and exploiters part
On Fri, 21 Aug 2009 13:11:53 -0500, Arthur Gutowski wrote:
>Barbara, we don't data share with DB2, so we don't see the problems with RRS
>offloads. Does the introduction of the data sharing in a subplex force you to
>share structures for the logstreams, even though they are sub-plexed?
>
>As for Operlog, we have the same exposure with EJES (we have a lot of JES3
>images), but we live with it, too.
>
>Mark, I would really like to know how you share an SMS-managed pool outside
>of an SMSPlex. I have been grappling with this issue for a while now, and until
>now thought that not only was this not advisable, but that it was not even
>possible.
>
I'd talk to you about it at SHARE if you were there, but I don't think you are going. For background, we have different SMSplexes within a sysplex and we also share other data between sysplexes using MIM (MII). We have one SMSplex that spans sysplexes (a prod / devl LPAR). The apps folks want or need that for cloning SMS-controlled DB2 from prod to devl (SAP).

It really isn't rocket science and I wouldn't recommend it for any production volumes with lots of allocation activity. The volume space only gets updated in the COMMDS at allocate / delete time for a particular volume (that is, when a data set on that particular volume is created or deleted). So you can end up with a situation where SMS thinks there is more space than there is, or vice versa. But it works fine for our dump pool (shared across SMSplexes and SYSPLEXes) and LOGR pool (obviously sysplex in scope, but shared across SMSplexes). All you have to do is define the volumes / pools (storage group) in the different SMSplexes that you want to share and then have consistent ACS routines. It's that simple.

I did run into a problem with the dump pool when we turned on striping (we already had compression) because the volume selection rules change in regard to the high and low thresholds.
So those had to be tweaked and we started sharing the entire pool to fix the problems. Previously (against my wishes) the storage admin disabled certain dump pool volumes on certain SMSplexes, so they weren't really completely shared. It caused a problem because the HSM migration only happened in one SMSplex (one of the reasons we wanted to share this pool to begin with - no HSM in the other plex) and the plex without HSM thought the volumes were too full when they weren't. See my comment about that above and read the DFSMS manuals on volume selection for striped data sets for more info on that. Anyway, once all the volumes were shared and there was a bigger pool online and the thresholds were tweaked, that issue went away. Dumps are taken in that devl plex (the one without HSM) and, as I said above, when the SVC dump allocations occur, the knowledge of how much space is on the volume is updated at that time.

We are not the only shop that does this sort of thing, and there may even be a requirement out there for the COMMDS to look on its own every so often because of this problem (there may even be an info APAR or note in one of the IBMLINK data bases). A simple workaround for shops that do this on "active" volumes is to schedule a periodic job in each SMSplex that allocates / deletes a data set on each volume (obviously the STORCLAS needs to have guaranteed space for this to work).

Our LOGR pool doesn't have the problem, and all the volumes were always all enabled across the different SMSplexes. This is a requirement (as Barbara can tell you) since any system in the plex can be chosen to do an offload and try to delete old archive data sets. Again, I would not recommend this for "production", but it has worked well over the years for this limited use since we started with sysplex and things were not shared as they are today (less separation; shared SMS, catalogs, volumes, master catalogs, etc.). I hope I've explained it clearly enough.

Regards,

Mark
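The periodic allocate/delete workaround described above could be as simple as an IEFBR14 job scheduled in each SMSplex. Data set, storage class, and volser names below are invented; the STORCLAS must have guaranteed space so the allocation actually lands on the intended shared volume:

```jcl
//REFRESH  JOB (ACCT),'COMMDS REFRESH'
//* Allocate and immediately delete a tiny data set on the shared
//* volume so SMS re-records that volume's free space in this
//* SMSplex's COMMDS.
//ALLOC    EXEC PGM=IEFBR14
//DD1      DD DSN=SYSPROG.COMMDS.REFRESH,
//            DISP=(NEW,CATLG),SPACE=(TRK,(1)),
//            STORCLAS=SCGSPACE,VOL=SER=SHARE01
//DELETE   EXEC PGM=IEFBR14
//DD2      DD DSN=SYSPROG.COMMDS.REFRESH,DISP=(OLD,DELETE)
```

One such job per shared volume, per SMSplex, run on whatever interval keeps the space statistics tolerably fresh.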
Re: how-to Sysplex? - the LOGR and exploiters part
Barbara, we don't data share with DB2, so we don't see the problems with RRS offloads. Does the introduction of the data sharing in a subplex force you to share structures for the logstreams, even though they are sub-plexed?

As for Operlog, we have the same exposure with EJES (we have a lot of JES3 images), but we live with it, too.

Mark, I would really like to know how you share an SMS-managed pool outside of an SMSPlex. I have been grappling with this issue for a while now, and until now thought that not only was this not advisable, but that it was not even possible.

Regards,
Art Gutowski
Ford Motor Company
Re: how-to Sysplex? - the LOGR and exploiters part
On Mon, 17 Aug 2009 00:27:52 -0500, Barbara Nitz wrote:
>>But the logger problem (for operlog / logrec) was still
>>easily solved by creating a shared SMS pool (even though there were
>>separate SMSplexes), a shared catalog on one of the logger volumes
>>and a new logger HLQ. The priceplex was born!
>
>Given that in our case it is kinda impossible to create a 'shared SMS pool'
>between the two subplexes (this time mostly for technical reasons and a huge
>amount of manpower needed for that, so I was told), this is why I live with
>the corrupted operlog and fight NOT to use RRS on one half. (We don't use
>logrec logstreams.)
>
>Barbara

Okay. I thought you mentioned you already had a shared string of DASD, but you didn't say how much there was. Obviously you don't need too much for this. A single 3390-3 (or bigger) would suffice. Assuming you had a spare shared volume, I would have to question the "huge amount of manpower" statement. The other aspects of this (CF / LOGR policy, SMS ACS routines, possibly RACF & HCD) are trivial.

Mark
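A sketch of the "shared catalog on one of the logger volumes plus a new logger HLQ" piece, assuming a spare shared volume as above (catalog name, volser, HLQ, and sizes are all hypothetical):

```jcl
//DEFCAT   JOB (ACCT),'SHARED LOGR CATALOG'
//STEP1    EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
   /* User catalog on the shared logger volume */
  DEFINE USERCATALOG (NAME(CATALOG.LOGRSHR) -
         VOLUME(LOGR01) MEGABYTES(5 5) ICFCATALOG)
   /* New logger HLQ resolved through that catalog */
  DEFINE ALIAS (NAME(LOGRSHR) RELATE(CATALOG.LOGRSHR))
/*
```

The catalog would then need to be connected (IMPORT CONNECT) and the alias defined in the master catalog of each system in both subplexes, so that every system resolves the logger offload data sets the same way.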
Re: how-to Sysplex? - the LOGR and exploiters part
>But the logger problem (for operlog / logrec) was still
>easily solved by creating a shared SMS pool (even though there were
>separate SMSplexes), a shared catalog on one of the logger volumes
>and a new logger HLQ. The priceplex was born!

Given that in our case it is kinda impossible to create a 'shared SMS pool' between the two subplexes (this time mostly for technical reasons and a huge amount of manpower needed for that, so I was told), this is why I live with the corrupted operlog and fight NOT to use RRS on one half. (We don't use logrec logstreams.)

Barbara
Re: how-to Sysplex? - the LOGR and exploiters part
On Fri, 14 Aug 2009 01:01:43 -0500, Barbara Nitz wrote:
>We *thought* we were safe on this front. Until we found out that the operlog
>logstream gets corrupted on a regular basis because it gets offloaded on the
>wrong subsysplex where the offload datasets cannot be found from the 'other
>side'. One would have thought (and it came as a BIG surprise to me) that an
>SDSF session cannot access operlog if operlog is NOT enabled on that
>system. Boy, have I been wrong here!
>
Long ago (prior to Y2K) I was consulting at the same shop I'm at now. We had similar issues due to consolidations and disparate systems in the sysplex. But the logger problem (for operlog / logrec) was still easily solved by creating a shared SMS pool (even though there were separate SMSplexes), a shared catalog on one of the logger volumes and a new logger HLQ. The priceplex was born!

Mark
Re: how-to Sysplex? - the LOGR and exploiters part
>We had problems with OPERLOG, using a structure, so now we only enable it
>on one subplex that shares DASD. We have the odd problem with EJES users
>on other systems trying to connect to the logstream from images outside the
>duplex. Perhaps we should try moving to DASD-only to resolve it.
>
>We have had no issues with RRS or CICS logstreams - they are DASD-only
>and system-specific.

Remember - we do NOT share DASD other than the three volumes with CDSs, no common catalog *at all*. LOGR had been set up on both subplexes to use SMS. Operlog is only active in one half, as is RRS. Both operlog and RRS require structures (and cannot go to DASD only) as RRS is needed for some DB2s that do data sharing.

We *thought* we were safe on this front. Until we found out that the operlog logstream gets corrupted on a regular basis because it gets offloaded on the wrong subsysplex where the offload datasets cannot be found from the 'other side'. One would have thought (and it came as a BIG surprise to me) that an SDSF session cannot access operlog if operlog is NOT enabled on that system. Boy, have I been wrong here! It is extremely easy to access the operlog from one subplex from the other side that actually should not access that. Just type a simple SDSF LOG O on the system that does NOT have operlog enabled, and you'll get it because SDSF simply connects to the operlog structure and reads it out. There is no check in place if operlog is actually *enabled* on that system!

Again, *I* am the main user of operlog (because I have to look after shutdown problems *after* JES2 was shut down), so I just live with that corrupted log stream and delete the offload datasets on the wrong system periodically. RRS is another matter entirely. If any of the needed RRS log streams gets offloaded on the wrong system (and thus corrupts the log stream), one is in for an RRS cold start.
Not really advisable in a heavy load/usage production environment just because an AD system in the same plex was IPL'd. Again with RRS, you're safe as long as there aren't any connections from the wrong system. Unfortunately here, too, it is extremely easy (using the IBM-provided RRS application) to force a corrupted log stream, given enough activity, on even separate structures/separate RRS groups for the separate subplexes! Just take the RRS subgroup from the 'wrong' side and browse the data. That connection from the wrong side is short, but long enough on bigger streams to fall into an offload window. Corruption on the other side, cold start. Which is why I fight tooth and nail NOT to activate RRS on the second subplex. And that is especially hard, as there are DB2 functions now that *require* RRS - and the concern is not with RRS, it is with Logger offload processing. Try to get *that* across. My colleagues just roll their eyes and think I want to make their lives unnecessarily hard. So, it is a good thing we're IMS and not CICS. And why I say everything LOGR that needs sysplex sharing also MUST share the DASD necessary to offload this, no matter which LOGR exploiter it is.

I'll take a real hard look at SMF sharing once we get to 1.10 to see if we can live with it or not. Given the problems offload causes for just about all exploiters, offload processing should really get redesigned, as it was NOT intended for priceplexes!

Regards,
Barbara Nitz
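For reference, the DASD-only, system-specific log streams the quoted poster describes would be defined along these lines (stream name, HLQ, and sizes are invented). With DASDONLY(YES) there is no CF structure, so a system in the other subplex has nothing to connect to and the cross-subplex offload exposure discussed above does not arise - which is exactly why it cannot help for OPERLOG or for RRS with data sharing, both of which need a structure:

```jcl
//DEFDO    JOB (ACCT),'DASD-ONLY LOGSTREAM'
//STEP1    EXEC PGM=IXCMIAPU
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  DATA TYPE(LOGR) REPORT(NO)
   /* DASD-only, single-system stream: staging data sets on  */
   /* local DASD instead of a coupling facility structure    */
  DEFINE LOGSTREAM NAME(SYSA.CICS.DFHLOG)
         DASDONLY(YES) MAXBUFSIZE(65532)
         HLQ(LOGR) LS_SIZE(1024) STG_SIZE(2048)
         HIGHOFFLOAD(80) LOWOFFLOAD(0)
/*
```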