BTDT, GTS. See if you can get an indemnification clause in the contract covering failure to provide temporary keys within the promised window.
--
Shmuel (Seymour J.) Metz
http://mason.gmu.edu/~smetz3

________________________________________
From: IBM Mainframe Discussion List <IBM-MAIN@LISTSERV.UA.EDU> on behalf of Laurence Chiu <lch...@gmail.com>
Sent: Friday, May 10, 2019 10:01 PM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: DR Failover

I need to check with our account manager again, but this is my understanding.

We just replaced our DR mainframe with a z14 ZR1. Part of the acceptance testing was running our full DR suite at the DR site, so we activated a CBU, which I am told lasts 14 days, and ran our tests.

The purchase of the new mainframe included 5 years of CBU licences: 2 per year for tests, plus a number we can activate if we experience a real DR event. For a test we can run 14 days (we got close to the full 14 days on our last test, since we had some issues that dragged out the testing); in a real DR event we can run for 90 days.

I have been informed that should we ever have to run in DR for real, the IBM software is licensed and our ISVs would provide temporary keys for us to run for that period, on the understanding that we are then not running at the production site. In fact, during the DR acceptance we had to apply some temporary keys for the third-party software.

As usual, YMMV.

On Thu, May 9, 2019 at 3:19 AM Jim IBMMain <jdoll.a0...@gmail.com> wrote:

> We had worked out "in theory" how we would do it.
>
> About 18 months after we opened our second data center, our main data
> center needed to shut down over a long weekend for 100% shutdown power
> maintenance.
>
> After the onlines were shut down Friday night, we waited for all the DASD
> and tape to sync to the "other" site, then shut the site down (100%
> powerdown). We then executed our "plan" (mostly the real DR plan), brought
> the powered-down center up at the sister site, and ran the normal batch
> production workload.
> Saturday morning the onlines came up (2 hours later than normal), and we
> ran until Monday afternoon, when we shut the onlines down early (it was a
> holiday and not a work day, though there is limited activity even on
> Saturdays, Sundays, and holidays). We then ran the Monday night batch back
> in the original data center.
>
> We were prepared to keep production running in the one location if there
> was an issue with the power upgrades, until the next weekend (or until
> they fixed it).
>
> Our active-active servers switched over without any issues, as they were
> supposed to (except the VoIP PBX).
>
> The only issue we noted was that the VoIP did not transfer the active
> calls to its "sister" VoIP/PBX; it hung up on the few calls that were in
> progress, but everyone was able to redial and connect through the sister
> PBX.
>
> A few months later we tested the sister-site failover (test/dev) to the
> main site and ran for the weekend, "just for the fun of it."
>
> In 2012 Hurricane Sandy took out the power lines to the main data center.
> The DC was running on the generators, but facilities management was not
> able to get more fuel, so we were in danger of running out and having to
> move the workload to the sister site. We started to make plans for "when"
> to do the switch, but we were able to get fuel and did not have to move
> the workload.
>
> ----------------------------------------------------------------------
> For IBM-MAIN subscribe / signoff / archive access instructions,
> send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
> ----------------------------------------------------------------------

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
----------------------------------------------------------------------