There is also the possibility to use only the XLINK subset of CSE and
avoid the need for a license for DIRMAINT (or a similar directory
manager) and for PVM.

PVM is only required if you need to share the spool or if you want to
use the few CP commands that work cross-system (e.g. MSG xx AT yy).
But "spool sharing" is not true sharing: even with CSE, each VM
system has its own spool areas on its own disks; the PVM connection is
used by CP to learn about the spool files that exist on the other VM
system.  This means that when VM1 is down, VM2 cannot see VM1's spool
files.
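
For instance, with PVM active a cross-system message could look like
this (a sketch only; the userid MAINT and the system name VM2 are just
placeholders):
  MSG MAINT AT VM2 Backup of VM1 has completed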

DIRMAINT (or a similar directory manager) is only required if you want
a single source directory that defines all users of all VM systems in
the CSE group.  DIRMAINT's CSE support doesn't need PVM; it needs an
RSCS link between the systems.

XLINK avoids minidisk integrity problems: it extends the classic R, W
and M link protection modes to the other systems in the CSE group,
e.g. to prevent a minidisk from being linked R/W concurrently by a
user on VM1 and a user on VM2.  That is what I implemented on my
customer's systems to protect the minidisks on the few disks that are
shared between the VM systems.  For example:
  link KRIS 1111 1111 W
  HCPLNM104E KRIS 1111 not linked; R/O by VMKBCT01

XLINK doesn't need any extra software: one only needs to tell CP which
volumes must be protected by XLINK, and to define (and format) an area
on the shared disks where CP maintains a bitmap recording which
cylinders (i.e. minidisks) are in use by which VM system.  The XLINK
definitions are coded in SYSTEM CONFIG and cannot be changed
dynamically.
XLINK has very low overhead: only at LINK and DETACH may some I/O to
the CSE area take place.  But, with or without XLINK, CP's Minidisk
Cache should not be used on shared disks.
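
As an illustration only (the volume labels and the second system name
are made up; check the exact statement syntax in the CP Planning and
Administration book), the SYSTEM CONFIG definitions look roughly like:
  XLINK_System_Include VMKBCT01 VMKBCT02
  XLINK_Volume_Include SHRV01 SHRV02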

2008/1/10, Alan Altmark <[EMAIL PROTECTED]>:
> On Thursday, 01/10/2008 at 11:36 EST, Karl Kingston
> <[EMAIL PROTECTED]> wrote:
> > We just installed z/VM 5.3.  We have 2 systems running: VM1 and VM2.
> > Right now, all of our Linux guests (about 5) are on VM1.  They also
> > have a directory entry on VM2 (but password set to NOLOG).
> >
> > 1) What's the best way to do failover if we need to get something
> > over?  Right now, my plan is basically to log into VM2 and change the
> > NOLOG to a password and then start the guest.  Basically I want to
> > avoid having our Operations staff make mistakes and start 2 instances
> > of the same linux guest (on 2 VM systems).
>
> The "best" way is to implement a VM cluster using Cross-System Extensions
> (CSE).  You will need DIRMAINT (or other cluster-enabled directory
> manager) and to special bid the PVM product.
>
> It will
> - Let you share spool files among all systems in the cluster
> - Only allow the Linux user to log on to VM1 or VM2, not both
> - Let certain users, such as TCPIP, log on to both systems at the same
> time (no shared spool)
> - Perform user add / change / delete from a single system
> - Allow the users to have different virtual machine configurations,
> depending on which system they logon to
> - Let LINUX1 on VM2 link to disks owned by LINMASTR on VM1 with link mode
> protection (you can do this without CSE via XLINK)
>
> > 2) We use FDR/ABR on our z/OS side for backing up for Disaster
> > Recovery.  We would like to keep using FDR.  Now I know I can get
> > clean backups if the systems are shut down.  Are there any gotchas if
> > I take an FDR full dump against say 530RES or 530SPL while the system
> > is up?
>
> I strongly encourage you NOT to do that.  The warmstart area and the spool
> volumes must all be consistent.  If you must use FDR, then set up a 2nd
> level guest (with dedicated volumes, not mdisks) whose sole purpose is to
> provide a resting place for spool files to be backed up by FDR.
>
> SPXTAPE DUMP the first level spool to tape.  Then attach the tape to the
> 2nd level guest and SPXTAPE LOAD it.  Shut it down.  Back it up with FDR.
> When recovering, use FDR to restore the guest volumes, IPL the guest,
> SPXTAPE DUMP the spool files, and then load them on the first-level
> system.  If
> necessary, you could IPL the guest first level, as it were.  The nice
> thing is that the guest doesn't have to have the same spool volume
> configuration; it just needs enough space to store the spool files.
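
As a sketch of the commands involved (the tape address 181 is arbitrary
and the exact operands should be verified in the CP Commands and
Utilities Reference):
  SPXTAPE DUMP 181 SPOOL ALL     <- on the first-level system
  SPXTAPE LOAD 181 SPOOL ALL     <- on the 2nd-level guest, after attaching the tape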
>
> Of course, if you only use the spool for transient data and don't care if
> you lose it, you can simply COLD/CLEAN start and rebuild the NSSes and
> DCSSes.
>
> Spool files that were open will not be restored, so be sure to send CLOSE
> CONS commands to each guest before you dump the spool.
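
For example, if the operator is defined as the secondary user of the
Linux guests, something like this should close a guest's console file
(LINUX01 is just a placeholder userid):
  SEND CP LINUX01 CLOSE CONS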
>
> > 3) Last of all, how often does VM get backed up when it's just used
> > as a Linux server system?
>
> The more often it is backed up, the less "delta" work you have to do to get
> the system back to the "current" state.  How much pain are you willing to
> endure?  Of course, it also depends on how often your VM system changes.
> If you added 20 new servers this week, are you sure you want to
> reallocate, reformat, and reinstall 20 images?  Or make 20 add'l clones?
>
> You might want to consider a commercial z/VM backup/archive/restore
> product.
>
> Alan Altmark
> z/VM Development
> IBM Endicott
>

-- 
Kris Buelens,
IBM Belgium, VM customer support
