Re: How to do a planned DR test?
David wrote:
> I think also that VM TCPIP is smart enough to look at the system ID and
> execute a specific profile based on the system ID.

It is, and that works well (that's how we do it). But MPROUTE is not so clever in the current release, so you still need to work that one out with either an exit or by manually renaming your MPROUTE config files.

Marcy Cortes (415) 243-6343

--
For LINUX-390 subscribe / signoff / archive access instructions, send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit http://www.marist.edu/htbin/wlvindex?LINUX-390
Re: How to do a planned DR test?
> So how do you tell:
> (1) z/VM - bring up TCPIP with this alternate networking information, and

I think also that VM TCPIP is smart enough to look at the system ID and execute a specific profile based on the system ID. So, if your normal system ID is FOO1, and you have entries in SYSTEM NETID on the S-disk that assign your normal CPUID to FOO1, with a default value of DISASTER (or something like that) when there is no CPUID match, then you could have a FOO1 PROFILE for TCPIP as the "normal" setup and a DISASTER PROFILE with the new information, and TCPIP would figure it out on its own. No need to have a second machine.

Having VM be capable of being a DHCP client would be a nice improvement. Might be hard to do with the current OSA design, though.

> The contingency of a real disaster occurring during the DR test must be
> addressed. At that time the DR test must be abandoned and the DR site
> must be brought online with the real network information (so modifying
> the Linux networking info in the DR site file systems is not a good
> option).

If you have your VM and guests in subdomains of your main domain, you can have a list of authoritative delegations for the subdomains. If you can make the primary unavailable (by definition in a DR, the primary is probably fried), then DNS will recurse to the secondaries, which could be a virtual machine. Or you can modify the responses sent via DHCP to point at a local DNS instance which has different data than the normal DNS servers.

-- db
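The SYSTEM NETID trick described above can be sketched roughly as follows. This is an illustrative fragment only: the CPU serial, node IDs, and the wildcard default entry are hypothetical, and the exact record layout should be checked against the SYSTEM NETID file already on your own S-disk.

```
* SYSTEM NETID S - maps CPU serial numbers to node IDs
* CPUID    NodeID    NetID
  012345   FOO1      VMNET     <- production CPU serial
  *        DISASTER  VMNET     <- default when no CPUID matches (e.g. at the DR site)
```

Since TCPIP selects its profile by the resulting system ID, a FOO1 profile would carry the production network definitions and a DISASTER profile the DR-test ones, so the same replicated volumes come up correctly at either site with no manual intervention.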
Re: How to do a planned DR test?
> -----Original Message-----
> From: Linux on 390 Port [mailto:[EMAIL PROTECTED] On Behalf Of Michael MacIsaac
> Sent: Friday, September 02, 2005 11:05 AM
> To: LINUX-390@VM.MARIST.EDU
> Subject: How to do a planned DR test?
>
> Hello list,
>
> Has anyone addressed the issue of how to do a planned DR test? Here are
> the assumptions:
> -) There is a production z/VM+Linux LPAR at the primary data center.
> -) There is a DR site where the production LPAR volumes, etc. are
>    replicated.
> -) A planned DR test is necessary.
>
> In a real disaster, the DR site will use the same IP/DNS as the
> production site, which by definition is down. However, to do a planned
> DR test, different networking information is required so as not to
> conflict with the primary site. So how do you tell:
> (1) z/VM - bring up TCPIP with this alternate networking information, and
> (2) Each Linux to come up with alternate networking information.
> Then the DR site can be tested.
>
> The contingency of a real disaster occurring during the DR test must be
> addressed. At that time the DR test must be abandoned and the DR site
> must be brought online with the real network information (so modifying
> the Linux networking info in the DR site file systems is not a good
> option).
>
> I can see how you can address (1) - maintain a second TCPIP service
> machine - say TCPIP2. Bring up z/VM without AUTOLOG and manually bring
> up TCPIP2. But how to address (2)? Is anyone doing this?
>
> "Mike MacIsaac" <[EMAIL PROTECTED]> (845) 433-7061

Literally ALL of the IP addresses used by all our machines are "private" (10.a.b.c). The IP addresses which are visible to the outside actually reside in our Cisco PIX firewall. It has "rules" so that any specific incoming IP address is translated to the "inside" address and passed along. This is rather simple to accomplish with the PIX. During a DR test, all the servers retain their normal, private IP addresses.
The firewall which connects us to the Internet at DR is given a set of public "test" IP addresses (by the DR provider), which are translated by the PIX to the appropriate internal addresses. This is done by our LAN people and is not really very difficult. Outside users use these "test" public IP addresses (not the "live" production IP addresses) during their testing. I think this is called a "transparent proxy" or some such thing.

The same applies for outgoing traffic: the PIX translates the internal IP address to the appropriate external IP address. For non-servers, such as desktops, or servers which are not in the DMZ, access to the Internet is via the same PIX, but it uses the "NAT" capability of the PIX so that all desktops appear to have the same IP address to the outside world.

I think that in the case of a real disaster occurring during a DR test, we would have Iron Mountain pull our "ODR1" vault and ship it to the DR site. We would then restart the DR recovery using these more current tapes. Or we might just do a forward recovery of the application data, since it is likely that the OSes did not change much (our test tapes are usually only a couple of weeks old). As far as the IP addresses go, I don't know if the PIX would be updated to start using our production public IP addresses, or if we would send out DNS changes to use the vendor-supplied public IP addresses. I'm not in the IP recovery area.

--
John McKown
Senior Systems Programmer
UICI Insurance Center
Information Technology
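The two translation styles described above look roughly like this on a PIX of that era. The addresses and access-list names are invented for illustration, and the syntax follows the classic PIX 6.x command set, so treat it as a sketch rather than a working configuration:

```
! One-to-one "static" translation: outside 192.0.2.10 maps to inside server 10.1.1.10
static (inside,outside) 192.0.2.10 10.1.1.10 netmask 255.255.255.255
access-list dmz_in permit tcp any host 192.0.2.10 eq 443
access-group dmz_in in interface outside

! Many-to-one NAT for desktops: everything in 10.0.0.0/8 shares the
! outside interface address on the way out
nat (inside) 1 10.0.0.0 255.0.0.0
global (outside) 1 interface
```

At the DR site only the `static` statements (and the outside addressing) need to change to the vendor-supplied "test" public addresses; the servers keep their private 10.a.b.c addresses untouched.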
Re: How to do a planned DR test?
> I can see how you can address (1) - maintain a second TCPIP service
> machine - say TCPIP2. Bring up z/VM without AUTOLOG and manually bring
> up TCPIP2. But how to address (2)? Is anyone doing this?

1) Ensure the VM install has a unique MACPREFIX in SYSTEM CONFIG.
2) Ensure that each NICDEF or DEF NIC command specifies a unique value for the MAC interface address.
3) Use DHCP to distribute networking information for the guests. Configure entries in the DHCP server to match on the specific MAC addresses.
4) (optional) Use DDNS to register the DHCP-assigned addresses. You'd probably need to separate the guests into a separate DNS zone to control the registration behavior, but that's not a bad idea anyway.

This is transparent for production or DR, and trivially implementable if DR turns real.

-- db
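The DHCP side of steps 1-3 can be sketched with the ISC DHCP server. Assuming the LPAR's SYSTEM CONFIG carries a statement along the lines of `VMLAN MACPREFIX 020011`, each guest NIC gets a predictable MAC address, and dhcpd hands out addresses keyed on it. The host names, MACs, and addresses below are all hypothetical:

```
# /etc/dhcpd.conf fragment (ISC DHCP) - one host stanza per Linux guest,
# matched on the MAC built from the z/VM MACPREFIX
subnet 10.1.1.0 netmask 255.255.255.0 {
    option routers 10.1.1.1;
    option domain-name-servers 10.1.1.53;
}

host lnx001 {
    hardware ethernet 02:00:11:00:00:01;   # MACPREFIX 020011 + guest NIC suffix
    fixed-address 10.1.1.101;
    option host-name "lnx001";
}
```

The DR site would run its own dhcpd with identical host stanzas but DR-appropriate fixed-address, router, and DNS values, so the replicated guests come up with the right addresses at either location without touching their file systems.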
Re: How to do a planned DR test?
We do not have the luxury of using the production IP addresses in the event of a disaster. We will have to use whatever addresses our DR vendor provides, just as during a test. The DNS entries will have to be changed for a real disaster, but for testing, we have the testers use the test IP addresses directly. All TCPIP stacks at the DR site are modified to use the vendor IP addresses after we have done the restore; this includes OS/390, z/VM and multiple Linux service machines. The same applies to all of the mid-range systems (Sun, RS/6000) and Intel systems.

If your vendor can supply you with your production IP addresses in the event of a real disaster, then you could keep your production and test IP configurations in separate directories on each Linux server, then during a test copy the test directory into the /etc/sysconfig/network directory and restart the Linux server. If a disaster occurs during your test, you can restore your production IP settings by replacing the test configuration with the safe backup of the production settings. If you are not testing with your latest backups and a disaster strikes in the middle of your test, then you will need to get the latest available backups and do the restore all over again. If you are using your latest backups, just tell your users that you are now the production system.

/Tom Kern

--- Michael MacIsaac <[EMAIL PROTECTED]> wrote:
> Has anyone addressed the issue of how to do a planned DR test? Here are
> the assumptions:
> -) There is a production z/VM+Linux LPAR at the primary data center.
> -) There is a DR site where the production LPAR volumes, etc. are
>    replicated.
> -) A planned DR test is necessary.
>
> In a real disaster, the DR site will use the same IP/DNS as the
> production site which by definition is down. However, to do a planned
> DR test, different networking information is required so as to not
> conflict with the primary site.
> So how do you tell:
> (1) z/VM - bring up TCPIP with this alternate networking information, and
> (2) Each Linux to come up with alternate networking information.
> Then the DR site can be tested.
>
> The contingency of a real disaster occurring during the DR test must be
> addressed. At that time the DR test must be abandoned and the DR site
> must be brought online with the real network information (so modifying
> the Linux networking info in the DR site file systems is not a good
> option).
>
> I can see how you can address (1) - maintain a second TCPIP service
> machine - say TCPIP2. Bring up z/VM without AUTOLOG and manually bring
> up TCPIP2. But how to address (2)? Is anyone doing this?
>
> "Mike MacIsaac" <[EMAIL PROTECTED]> (845) 433-7061
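The "two config directories" scheme Tom describes can be sketched in shell. The directory names network-prod and network-test are invented for illustration, the base directory defaults to /etc/sysconfig as in the post, and the network restart is left as a comment since the exact command varies by distribution:

```shell
#!/bin/sh
# Keep production and DR-test network configs side by side and copy the
# chosen one into place. "network-prod" and "network-test" are
# hypothetical names for the two saved variants.

NETBASE="${NETBASE:-/etc/sysconfig}"    # base dir; overridable for dry runs

activate_netconfig() {
    variant="$1"                        # "prod" or "test"
    src="$NETBASE/network-$variant"
    dst="$NETBASE/network"
    if [ ! -d "$src" ]; then
        echo "no such config variant: $variant" >&2
        return 1
    fi
    rm -rf "$dst"                       # replace the live config wholesale
    cp -a "$src" "$dst"
    echo "activated $variant network configuration"
    # then restart networking (distro-specific), e.g.:
    # /etc/init.d/network restart
}
```

Running `activate_netconfig test` at the start of a DR test, and `activate_netconfig prod` if the test must be abandoned for a real disaster, gives the quick switch described above without editing files by hand.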
How to do a planned DR test?
Hello list,

Has anyone addressed the issue of how to do a planned DR test? Here are the assumptions:
-) There is a production z/VM+Linux LPAR at the primary data center.
-) There is a DR site where the production LPAR volumes, etc. are replicated.
-) A planned DR test is necessary.

In a real disaster, the DR site will use the same IP/DNS as the production site, which by definition is down. However, to do a planned DR test, different networking information is required so as not to conflict with the primary site. So how do you tell:
(1) z/VM - bring up TCPIP with this alternate networking information, and
(2) Each Linux to come up with alternate networking information.
Then the DR site can be tested.

The contingency of a real disaster occurring during the DR test must be addressed. At that time the DR test must be abandoned and the DR site must be brought online with the real network information (so modifying the Linux networking info in the DR site file systems is not a good option).

I can see how you can address (1): maintain a second TCPIP service machine, say TCPIP2. Bring up z/VM without AUTOLOG and manually bring up TCPIP2. But how to address (2)? Is anyone doing this?

"Mike MacIsaac" <[EMAIL PROTECTED]> (845) 433-7061