Re: Changing sysplex hardware
Biggest thanks goes to the parallel sysplex development team. Reconfiguration is much more robust and easier to manage than when we started out in the mid-90s. We just finished upgrading both CECs in our production complex, one push-pull at a time, about three weeks apart. I built two new CFRM policies in advance: the first (interim) with one new CEC defined, the second (final) with both new CECs defined.

1. Move all structures to the old CEC *not* being replaced in round 1:
   SETXCF START,MAINTMODE,CFNAME=old-CEC-being-replaced
   SETXCF START,REALLOCATE
2. Shut down and disconnect the old CEC being replaced.
3. Swap out and connect the first new CEC.
4. Activate the interim CFRM policy from the (up and running) old CEC:
   SETXCF START,POLICY,TYPE=CFRM,POLNAME=interim-policy
   SETXCF START,REALLOCATE

Three weeks later, perform a similar drill with the other new CEC and the final CFRM policy. Voila. 15 years ago this would have been much harder and scarier. . .

JO.Skip Robinson
SCE Infrastructure Technology Services
Electric Dragon Team Paddler
SHARE MVS Program Co-Manager
626-302-7535 Office
323-715-0595 Mobile
jo.skip.robin...@sce.com

From: Natasa Savinc
To: IBM-MAIN@bama.ua.edu
Date: 03/05/2012 02:03 AM
Subject: Re: Changing sysplex hardware
Sent by: IBM Mainframe Discussion List

Thank you all for taking interest, especially to Skip, who answered what was really the question here. We followed this procedure for our sandbox sysplex; we will do the same for test and production.

Best regards,
Natasa

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: INFO IBM-MAIN
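For anyone who hasn't built one, the interim policy Skip describes could be defined with an IXCMIAPU job roughly like the sketch below. All names and hardware identifiers here (INTERIM, CF1, CF2, the TYPE/SEQUENCE/PARTITION values, and the single ISGLOCK structure shown) are hypothetical placeholders, not Skip's actual configuration:

```
//CFRMPOL  JOB (ACCT),'CFRM INTERIM',CLASS=A,MSGCLASS=X
//DEFPOL   EXEC PGM=IXCMIAPU
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  DATA TYPE(CFRM) REPORT(YES)
  DEFINE POLICY NAME(INTERIM) REPLACE(YES)
    /* Old CEC that stays up through round 1 */
    CF NAME(CF1) TYPE(002097) MFG(IBM) PLANT(02)
       SEQUENCE(000000011111) PARTITION(0E) CPCID(00)
       DUMPSPACE(2048)
    /* First new CEC */
    CF NAME(CF2) TYPE(002817) MFG(IBM) PLANT(02)
       SEQUENCE(000000022222) PARTITION(0F) CPCID(00)
       DUMPSPACE(2048)
    /* Each structure lists both CFs in its PREFLIST */
    STRUCTURE NAME(ISGLOCK) SIZE(9216)
       PREFLIST(CF1,CF2)
/*
```

The point is that the interim policy defines the surviving old CF and the first new CF side by side, so SETXCF START,REALLOCATE can drain and repopulate structures as each CEC is swapped.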
Re: Changing sysplex hardware
Thank you all for taking interest, especially to Skip, who answered what was really the question here. We followed this procedure for our sandbox sysplex; we will do the same for test and production.

Best regards,
Natasa
Re: Changing sysplex hardware
OK, I guess I didn't realize that there was some mirroring software that didn't allow a changed-only resync after updates were done on the target volumes.

Rex

-----Original Message-----
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@bama.ua.edu] On Behalf Of Mike Schwab
Sent: Tuesday, February 14, 2012 4:10 PM
To: IBM-MAIN@bama.ua.edu
Subject: Re: Changing sysplex hardware

Resync after the secondary volume is updated? If the mirroring software supports that, it would save a lot of retransmitting. I am fairly sure the ESS F20 and 800 PPRC did not have that, and the user did not say what he is using to mirror. But you only need that after a backout after running at the new site.

On Tue, Feb 14, 2012 at 3:54 PM, Pommier, Rex R. wrote:
> Mike,
>
> Wouldn't number 10 be a massive amount of unnecessary work and replication? I was under the impression that if you had replication going between the two arrays and you suspended the replication, that you could bring up the replication targets in a read/write mode on the new servers. If you had to back out, after shutting the new servers down, you could "unsuspend" the replication and data that had changed on the source volumes would be replicated to the targets, and data on the targets that had changed would also have the source data pushed to overlay the changed targets. Is this not how replication works?
>
> Rex
>
> -----Original Message-----
> From: IBM Mainframe Discussion List [mailto:IBM-MAIN@bama.ua.edu] On Behalf Of Mike Schwab
> Sent: Tuesday, February 14, 2012 2:59 PM
> To: IBM-MAIN@bama.ua.edu
> Subject: Re: Changing sysplex hardware
>
> Since you are moving the entire datacenter and all dasd is already replicated, then:
>
> Old location:
> 1. Shut down your existing systems.
> Old location preferred:
> 2. Break dasd replications.
> New location:
> 3. IPL one system.
> 4. Start Sysplex using your new datasets.
> 5. IPL the other systems.
>
> Backout:
> New location:
> 6. Shut down your systems at the new location.
> Old location:
> 7. IPL one system.
> 8. Start Sysplex using your old datasets.
> 9. IPL the other systems.
> When up:
> 10. Restart replication from scratch for the next try. The secondaries will have been updated (access date at a minimum), so restarting a suspended replication would result in bad volumes.

--
Mike A Schwab, Springfield IL USA
Where do Forest Rangers go to get away from it all?

The information contained in this e-mail may contain confidential and/or privileged information and is intended for the sole use of the intended recipient. If you are not the intended recipient, you are hereby notified that any unauthorized use, disclosure, distribution or copying of this communication is strictly prohibited and that you will be held responsible for any such unauthorized activity, including liability for any resulting damages. As appropriate, such incident(s) may also be reported to law enforcement. If you received this e-mail in error, please reply to sender and destroy or delete the message and any attachments. Thank you.
Re: Changing sysplex hardware
There are two sides to the sysplex coin: one lives on DASD, the other lives in the CF. As long as DASD is fully replicated, the newly IPLed sysplex member(s) should look exactly like the old. CF is another matter. In the CFRM policy, each CF is identified by a unique combination of properties:

1. NAME
2. TYPE (model)
3. SEQUENCE (serial number)
4. PARTITION (number)

NAME is used throughout the policy to specify structure location. TYPE, SEQUENCE, and PARTITION are used at XCF initialization to identify the hardware to be used for CF structures associated with NAME. If all four properties are different in the new location, the easiest migration path is to create--today--a CFRM policy that includes the new CF in addition to the old CF(s). Include the new NAME in all structure PREFLISTs. XCF in the old location will not be flummoxed that the new CF is unreachable. Likewise, in the new location XCF will survive without access to the old CF(s). In this way you can IPL into the mirrored DASD complex with little disruption. Since you have the same CFRM policy throughout, fallback should be as simple as IPLing in the old location.

Caveat: be absolutely sure that you're happy with your new home before allowing production updates to occur. Consider that replicating updates back to the old location is essentially a lost cause. Check everything out as thoroughly as possible while users are locked out. Once you let them out on the range, you'll never get them back in the barn. . .

JO.Skip Robinson
SCE Infrastructure Technology Services
Electric Dragon Team Paddler
SHARE MVS Program Co-Manager
626-302-7535 Office
323-715-0595 Mobile
jo.skip.robin...@sce.com

From: Mike Schwab
To: IBM-MAIN@bama.ua.edu
Date: 02/14/2012 02:09 PM
Subject: Re: Changing sysplex hardware
Sent by: IBM Mainframe Discussion List

Resync after the secondary volume is updated? If the mirroring software supports that, it would save a lot of retransmitting. I am fairly sure the ESS F20 and 800 PPRC did not have that, and the user did not say what he is using to mirror. But you only need that after a backout after running at the new site.

On Tue, Feb 14, 2012 at 3:54 PM, Pommier, Rex R. wrote:
> Mike,
>
> Wouldn't number 10 be a massive amount of unnecessary work and replication? I was under the impression that if you had replication going between the two arrays and you suspended the replication, that you could bring up the replication targets in a read/write mode on the new servers. If you had to back out, after shutting the new servers down, you could "unsuspend" the replication and data that had changed on the source volumes would be replicated to the targets, and data on the targets that had changed would also have the source data pushed to overlay the changed targets. Is this not how replication works?
>
> Rex
>
> -----Original Message-----
> From: IBM Mainframe Discussion List [mailto:IBM-MAIN@bama.ua.edu] On Behalf Of Mike Schwab
> Sent: Tuesday, February 14, 2012 2:59 PM
> To: IBM-MAIN@bama.ua.edu
> Subject: Re: Changing sysplex hardware
>
> Since you are moving the entire datacenter and all dasd is already replicated, then:
>
> Old location:
> 1. Shut down your existing systems.
> Old location preferred:
> 2. Break dasd replications.
> New location:
> 3. IPL one system.
> 4. Start Sysplex using your new datasets.
> 5. IPL the other systems.
>
> Backout:
> New location:
> 6. Shut down your systems at the new location.
> Old location:
> 7. IPL one system.
> 8. Start Sysplex using your old datasets.
> 9. IPL the other systems.
> When up:
> 10. Restart replication from scratch for the next try. The secondaries will have been updated (access date at a minimum), so restarting a suspended replication would result in bad volumes.

--
Mike A Schwab, Springfield IL USA
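At cutover time, the combined-policy approach Skip describes reduces to a short operator sequence. A hedged sketch (MOVEPOL is a hypothetical policy name, and the REALLOCATE,TEST display requires a relatively recent z/OS level):

```
SETXCF START,POLICY,TYPE=CFRM,POLNAME=MOVEPOL
D XCF,CF
D XCF,REALLOCATE,TEST
SETXCF START,REALLOCATE
```

The first command activates the policy containing both old and new CFs; D XCF,CF confirms which CFs the current sysplex can actually see; the TEST display previews where structures would land without moving anything; and REALLOCATE then places structures according to the PREFLISTs.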
Re: Changing sysplex hardware
Resync after the secondary volume is updated? If the mirroring software supports that, it would save a lot of retransmitting. I am fairly sure the ESS F20 and 800 PPRC did not have that, and the user did not say what he is using to mirror. But you only need that after a backout after running at the new site.

On Tue, Feb 14, 2012 at 3:54 PM, Pommier, Rex R. wrote:
> Mike,
>
> Wouldn't number 10 be a massive amount of unnecessary work and replication? I was under the impression that if you had replication going between the two arrays and you suspended the replication, that you could bring up the replication targets in a read/write mode on the new servers. If you had to back out, after shutting the new servers down, you could "unsuspend" the replication and data that had changed on the source volumes would be replicated to the targets, and data on the targets that had changed would also have the source data pushed to overlay the changed targets. Is this not how replication works?
>
> Rex
>
> -----Original Message-----
> From: IBM Mainframe Discussion List [mailto:IBM-MAIN@bama.ua.edu] On Behalf Of Mike Schwab
> Sent: Tuesday, February 14, 2012 2:59 PM
> To: IBM-MAIN@bama.ua.edu
> Subject: Re: Changing sysplex hardware
>
> Since you are moving the entire datacenter and all dasd is already replicated, then:
>
> Old location:
> 1. Shut down your existing systems.
> Old location preferred:
> 2. Break dasd replications.
> New location:
> 3. IPL one system.
> 4. Start Sysplex using your new datasets.
> 5. IPL the other systems.
>
> Backout:
> New location:
> 6. Shut down your systems at the new location.
> Old location:
> 7. IPL one system.
> 8. Start Sysplex using your old datasets.
> 9. IPL the other systems.
> When up:
> 10. Restart replication from scratch for the next try. The secondaries will have been updated (access date at a minimum), so restarting a suspended replication would result in bad volumes.

--
Mike A Schwab, Springfield IL USA
Where do Forest Rangers go to get away from it all?
Re: Changing sysplex hardware
Mike,

Wouldn't number 10 be a massive amount of unnecessary work and replication? I was under the impression that if you had replication going between the two arrays and you suspended the replication, that you could bring up the replication targets in a read/write mode on the new servers. If you had to back out, after shutting the new servers down, you could "unsuspend" the replication and data that had changed on the source volumes would be replicated to the targets, and data on the targets that had changed would also have the source data pushed to overlay the changed targets. Is this not how replication works?

Rex

-----Original Message-----
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@bama.ua.edu] On Behalf Of Mike Schwab
Sent: Tuesday, February 14, 2012 2:59 PM
To: IBM-MAIN@bama.ua.edu
Subject: Re: Changing sysplex hardware

Since you are moving the entire datacenter and all dasd is already replicated, then:

Old location:
1. Shut down your existing systems.
Old location preferred:
2. Break dasd replications.
New location:
3. IPL one system.
4. Start Sysplex using your new datasets.
5. IPL the other systems.

Backout:
New location:
6. Shut down your systems at the new location.
Old location:
7. IPL one system.
8. Start Sysplex using your old datasets.
9. IPL the other systems.
When up:
10. Restart replication from scratch for the next try. The secondaries will have been updated (access date at a minimum), so restarting a suspended replication would result in bad volumes.

On Tue, Feb 14, 2012 at 8:43 AM, Staller, Allan wrote:
> It is not clear if you are moving the entire SYSPLEX, or merely one or more of the members.
>
> If you are moving the entire SYSPLEX, perhaps a SYSPLEX-wide restart is appropriate. However, even if you are moving one or more members of the SYSPLEX, why not use the "standard" SYSPLEX facilities to assist in the move? (IIRC, a parallel sysplex can communicate over about 20 km(??) without "special" accommodations, e.g. GDPS.)
>
> Your original SYSPLEX CDS's will be intact, so no action should be necessary if you need to revert to the original location; just IPL and go.
> I would create new CDS's/policies for the new location.
>
> HTH,
>
> From: IBM Mainframe Discussion List [mailto:IBM-MAIN@bama.ua.edu] On Behalf Of Natasa Savinc
> Sent: Tuesday, February 14, 2012 4:11 AM
> To: IBM-MAIN@bama.ua.edu
> Subject: Changing sysplex hardware
>
> Hello,
> we are moving our data center to another location. The data is already there on DASD, replicated synchronously. We plan to stop the sysplex and IPL from the replicated data, on the new processor. We have pretty much answered all questions so far, except for the sysplex and CF. At the new location we have one new processor, which will in the end replace one of the existing processors. The configuration (LPAR names) is the same, including the CF.
>
> I would like to verify the following scenario:
>
> 1. For fall-back purposes: we allocate new CFRM couple data sets and prepare a new set of IPL parameters. The old ones will be used if we have to IPL at the old location.
> 2. Activate the new CDS.
> 3. Change the existing policy - define different HW for the existing CF.
> 4. Start the new policy - first question: will it report an error, or will it just have pending changes for the CF?
> 5. Shut down the system (sysplex).
> 6. IPL on the new processor.
>
> Would it be a better option to define a different name for the CF on the new processor, and just add a new CF to the active policy and to all preference lists?

--
Mike A Schwab, Springfield IL USA
Where do Forest Rangers go to get away from it all?
Re: Changing sysplex hardware
Since you are moving the entire datacenter and all dasd is already replicated, then:

Old location:
1. Shut down your existing systems.
Old location preferred:
2. Break dasd replications.
New location:
3. IPL one system.
4. Start Sysplex using your new datasets.
5. IPL the other systems.

Backout:
New location:
6. Shut down your systems at the new location.
Old location:
7. IPL one system.
8. Start Sysplex using your old datasets.
9. IPL the other systems.
When up:
10. Restart replication from scratch for the next try. The secondaries will have been updated (access date at a minimum), so restarting a suspended replication would result in bad volumes.

On Tue, Feb 14, 2012 at 8:43 AM, Staller, Allan wrote:
> It is not clear if you are moving the entire SYSPLEX, or merely one or more of the members.
>
> If you are moving the entire SYSPLEX, perhaps a SYSPLEX-wide restart is appropriate. However, even if you are moving one or more members of the SYSPLEX, why not use the "standard" SYSPLEX facilities to assist in the move? (IIRC, a parallel sysplex can communicate over about 20 km(??) without "special" accommodations, e.g. GDPS.)
>
> Your original SYSPLEX CDS's will be intact, so no action should be necessary if you need to revert to the original location; just IPL and go.
> I would create new CDS's/policies for the new location.
>
> HTH,
>
> From: IBM Mainframe Discussion List [mailto:IBM-MAIN@bama.ua.edu] On Behalf Of Natasa Savinc
> Sent: Tuesday, February 14, 2012 4:11 AM
> To: IBM-MAIN@bama.ua.edu
> Subject: Changing sysplex hardware
>
> Hello,
> we are moving our data center to another location. The data is already there on DASD, replicated synchronously. We plan to stop the sysplex and IPL from the replicated data, on the new processor. We have pretty much answered all questions so far, except for the sysplex and CF. At the new location we have one new processor, which will in the end replace one of the existing processors. The configuration (LPAR names) is the same, including the CF.
>
> I would like to verify the following scenario:
>
> 1. For fall-back purposes: we allocate new CFRM couple data sets and prepare a new set of IPL parameters. The old ones will be used if we have to IPL at the old location.
> 2. Activate the new CDS.
> 3. Change the existing policy - define different HW for the existing CF.
> 4. Start the new policy - first question: will it report an error, or will it just have pending changes for the CF?
> 5. Shut down the system (sysplex).
> 6. IPL on the new processor.
>
> Would it be a better option to define a different name for the CF on the new processor, and just add a new CF to the active policy and to all preference lists?

--
Mike A Schwab, Springfield IL USA
Where do Forest Rangers go to get away from it all?
Re: Changing sysplex hardware
It is not clear if you are moving the entire SYSPLEX, or merely one or more of the members.

If you are moving the entire SYSPLEX, perhaps a SYSPLEX-wide restart is appropriate. However, even if you are moving one or more members of the SYSPLEX, why not use the "standard" SYSPLEX facilities to assist in the move? (IIRC, a parallel sysplex can communicate over about 20 km(??) without "special" accommodations, e.g. GDPS.)

Your original SYSPLEX CDS's will be intact, so no action should be necessary if you need to revert to the original location; just IPL and go. I would create new CDS's/policies for the new location.

HTH,

From: IBM Mainframe Discussion List [mailto:IBM-MAIN@bama.ua.edu] On Behalf Of Natasa Savinc
Sent: Tuesday, February 14, 2012 4:11 AM
To: IBM-MAIN@bama.ua.edu
Subject: Changing sysplex hardware

Hello,
we are moving our data center to another location. The data is already there on DASD, replicated synchronously. We plan to stop the sysplex and IPL from the replicated data, on the new processor. We have pretty much answered all questions so far, except for the sysplex and CF. At the new location we have one new processor, which will in the end replace one of the existing processors. The configuration (LPAR names) is the same, including the CF.

I would like to verify the following scenario:

1. For fall-back purposes: we allocate new CFRM couple data sets and prepare a new set of IPL parameters. The old ones will be used if we have to IPL at the old location.
2. Activate the new CDS.
3. Change the existing policy - define different HW for the existing CF.
4. Start the new policy - first question: will it report an error, or will it just have pending changes for the CF?
5. Shut down the system (sysplex).
6. IPL on the new processor.

Would it be a better option to define a different name for the CF on the new processor, and just add a new CF to the active policy and to all preference lists?
Changing sysplex hardware
Hello,
we are moving our data center to another location. The data is already there on DASD, replicated synchronously. We plan to stop the sysplex and IPL from the replicated data, on the new processor. We have pretty much answered all questions so far, except for the sysplex and CF. At the new location we have one new processor, which will in the end replace one of the existing processors. The configuration (LPAR names) is the same, including the CF.

I would like to verify the following scenario:

1. For fall-back purposes: we allocate new CFRM couple data sets and prepare a new set of IPL parameters. The old ones will be used if we have to IPL at the old location.
2. Activate the new CDS.
3. Change the existing policy - define different HW for the existing CF.
4. Start the new policy - first question: will it report an error, or will it just have pending changes for the CF?
5. Shut down the system (sysplex).
6. IPL on the new processor.

Would it be a better option to define a different name for the CF on the new processor, and just add a new CF to the active policy and to all preference lists?

I hope I was clear enough; any suggestion will be appreciated.

Regards,
Natasa
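For step 1 of the scenario, new CFRM couple data sets are allocated with the IXCL1DSU format utility. A minimal sketch only; the sysplex name, data set name, volume, and item counts below are all hypothetical and should be sized to match the current CDS:

```
//FMTCDS   JOB (ACCT),'FORMAT CFRM CDS',CLASS=A,MSGCLASS=X
//STEP1    EXEC PGM=IXCL1DSU
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  DEFINEDS SYSPLEX(PLEX1)
    DSN(SYS1.XCF.CFRM01) VOLSER(NEWCDS)
    MAXSYSTEM(8)
    CATALOG
    DATA TYPE(CFRM)
      ITEM NAME(POLICY) NUMBER(8)
      ITEM NAME(CF) NUMBER(8)
      ITEM NAME(STR) NUMBER(200)
      ITEM NAME(CONNECT) NUMBER(32)
/*
```

Once formatted, the new data sets can either be named in the COUPLExx member used for the IPL at the new site, or brought in dynamically with SETXCF COUPLE,TYPE=CFRM,ACOUPLE=(new-dsn) followed by SETXCF COUPLE,TYPE=CFRM,PSWITCH.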