Re: Backups and failover

2008-01-21 Thread Shedlock, George
Robert,
 
Could you forward me a copy of the code you indicated below? Thanks so
much.
 
George Shedlock Jr
AEGON Information Technology
AEGON USA
502-560-3541
 



From: The IBM z/VM Operating System [mailto:[EMAIL PROTECTED] On
Behalf Of RPN01
Sent: Friday, January 11, 2008 8:05 AM
To: IBMVM@LISTSERV.UARK.EDU
Subject: Re: Backups and failover


For the guarded failover portion, we have a rexx script and server that
keeps track of which system the guest was last booted on. If it is
logged in on the same host, the system just starts up. If it is
autologged on the other host, it immediately logs off (its 191 disk is
R/O, so no damage done). If it is logged in at a terminal on the other
host, there is a prompt telling the user that it was last brought up on
the other host, asking whether you really want to bring it up here.

Answering no causes a logout. Answering yes starts the boot process,
which includes logging the new boot into the system and the process is
ready to work in the other direction.

I have this code available, if you'd like a copy

-- 
   .~.      Robert P. Nix     Mayo Foundation
   /V\      RO-OE-5-55        200 First Street SW
  / ( ) \   507-284-0844      Rochester, MN 55905
  ^^-^^
"In theory, theory and practice are the same, but in practice,
theory and practice are different."
"Join the story... Ride Ural."




On 1/10/08 10:33 AM, Karl Kingston [EMAIL PROTECTED] wrote:




We just installed z/VM 5.3.   We have 2 systems running.   VM1
and VM2.  Right now, all of our Linux guests (about 5) are on VM1.
They also have a directory entry on VM2 (but password set to NOLOG). 

1) What's the best way to do failover if we need to get
something over?   Right now, my plan is basically to log into VM2 and
change the NOLOG to a password and then start the guest. Basically I
want to avoid having our Operations staff make mistakes and start 2
instances of the same linux guest (on 2 VM systems). 

2) We use FDR/ABR on our z/OS side for backing up for Disaster
Recovery.  We would like to keep using FDR.  Now I know I can get
clean backups if the systems are shut down.   Are there any gotcha's if
I take a FDR full dump against say 530RES or 530SPL while the system is
up?

3) last of all, how often does VM get backed up when it's just
used as a Linux server system?? 

Thanks 







Re: Backups and failover

2008-01-11 Thread Rob van der Heij
On Jan 11, 2008 11:03 AM, Kris Buelens [EMAIL PROTECTED] wrote:

 DIRMAINT (or similar) is only required if you want to have a single
 source directory that defines all  users of all VM systems in the CSE
 group.  DIRMAINT's CSE support doesn't need PVM, it needs an RSCS
 link.

I believe you underestimate the size of the gun that you point at your feet.

If you share disks between systems (even in R/O fashion) you need some
process to keep directory entries up-to-date. An approach to keep
directory entries between systems synchronized by only good will and
promise will fail. Many of us have learned that the lifetime of things
on the mainframe is often much longer than anticipated. When there's
a chance that something can break, we have enough time that it will
break eventually. Murphy refines this into when it will break.

When you maintain the directory by hand, you can probably wrap some
tooling around that to check / update the shared disks. Instead of
using a shared disk to hold those entries (which gives a chicken & egg
problem) you could use ISFC links to have two (home grown) service
machines talk to each other. With the CMS Pipelines IUCV stages, you
can issue Q MDISK U commands on the remote system and get the output
of that back into your tooling. Unlike RSCS, the ISFC link does not
require extra software (only CTC links between the LPARs).
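
To give a flavour of the local half of that, here is a minimal CMS Pipelines/REXX
sketch that just captures a CP query reply into a stem for the checking tool to
chew on. The ISFC/IUCV transport to the other system is deliberately left out,
and the userid and exact QUERY operands are placeholders to verify, not a recipe:

  /* QMDCHK EXEC (sketch): capture a CP query reply for the tooling   */
  Address COMMAND
  cmd = 'QUERY MDISK USER LINUX01'   /* placeholder; check the        */
                                     /* operands and privilege class  */
                                     /* needed on your system         */
  'PIPE cp' cmd '| stem reply.'      /* CP response lines into reply. */
  do i = 1 to reply.0
     say reply.i                     /* or ship them to the peer and  */
  end                                /* compare them there            */

The same exec on the other system would send its reply lines back over the
IUCV/ISFC connection instead of SAYing them.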

Rob


-- 
Rob van der Heij
Velocity Software, Inc
http://velocitysoftware.com/


Re: Backups and failover

2008-01-11 Thread Kris Buelens
There is also the possibility of using the XLINK subset of CSE and
avoiding the need to get a license for Dirmaint (or the like) and PVM.

PVM is only required if you need to share the spool or when you want
to use the few CP commands that work x-system (e.g. MSG xx AT yy).
But spool sharing is not true sharing: even with CSE each VM
system has its own spool areas on its own disks; the PVM connection is
used by CP to be aware of the spool files existing at the other VM
system.  This means that when VM1 is down, VM2 cannot see the spool
files of VM1.

DIRMAINT (or similar) is only required if you want to have a single
source directory that defines all users of all VM systems in the CSE
group.  DIRMAINT's CSE support doesn't need PVM, it needs an RSCS
link.

XLINK avoids minidisk integrity problems: it extends the classic R, W, and M
link protection modes to the other systems in the CSE group, e.g. to
prevent a minidisk from being linked R/W concurrently by some user in
VM1 and some user in VM2.  That is what I implemented on my customer's
systems to protect the minidisks on a few disks that are shared
between the VM systems.  For example:
  link KRIS   W
  HCPLNM104E KRIS  not linked; R/O by VMKBCT01

XLINK doesn't need any extra software; one only needs to tell CP which
volumes must be protected by XLINK and define (and format) an area on
the shared disks where CP will maintain a bitmap recording which
cylinders (i.e. minidisks) are in use by which VM system.  The XLINK
definitions are made in SYSTEM CONFIG and cannot be changed
dynamically.
XLINK has a very low overhead, only at LINK and DETACH some IO to the
CSE area may take place.  But, with or without XLINK, CP's Minidisk
Cache should not be used on shared disks.
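
For anyone who hasn't set it up, the SYSTEM CONFIG side of XLINK is small.
A sketch only -- the volsers are made up and the statement names and operands
should be checked against the CP Planning and Administration book before use:

  /* CSE XLINK sketch: which systems take part and which volumes are  */
  /* protected (hypothetical names)                                   */
  XLINK_System_Include  VM1
  XLINK_System_Include  VM2
  XLINK_Volume_Include  SHR001
  XLINK_Volume_Include  SHR002

The CSE area on each protected volume still has to be allocated and formatted
before CP can use it, as described above.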

2008/1/10, Alan Altmark [EMAIL PROTECTED]:
 On Thursday, 01/10/2008 at 11:36 EST, Karl Kingston
 [EMAIL PROTECTED] wrote:
  We just installed z/VM 5.3.   We have 2 systems running.   VM1 and VM2.
 Right now, all of our Linux guests (about 5) are on VM1.   They also
 have a
  directory entry on VM2 (but password set to NOLOG).
 
  1) What's the best way to do failover if we need to get something over?
   Right
  now, my plan is basically to log into VM2 and change the NOLOG to a
 password
  and then start the guest. Basically I want to avoid having our
 Operations
  staff make mistakes and start 2 instances of the same linux guest (on 2
 VM
  systems).

 The best way is to implement a VM cluster using Cross-System Extensions
 (CSE).  You will need DIRMAINT (or other cluster-enabled directory
 manager) and to special bid the PVM product.

 It will
 - Let you share spool files among all systems in the cluster
 - Only allow the Linux user to logon on VM1 or VM2, not both
 - Let Certain Users logon to both systems at the same time (no shared
 spool) such as TCPIP
 - Perform user add / change / delete from a single system
 - Allow the users to have different virtual machine configurations,
 depending on which system they logon to
 - Let LINUX1 on VM2 link to disks owned by LINMASTR on VM1 with link mode
 protection (you can do this without CSE via XLINK)

  2) We use FDR/ABR on our z/OS side for backing up for Disaster Recovery.
We
  would like to keep using FDR.  Now I know I can get clean backups if
 the
  systems are shut down.   Are there any gotcha's if I take a FDR full
 dump
 against say 530RES or 530SPL while the system is up?

 I strongly encourage you NOT to do that.  The warmstart area and the spool
 volumes must all be consistent.  If you must use FDR, then set up a 2nd
 level guest (with dedicated volumes, not mdisks) whose sole purpose is to
 provide a resting place for spool files to be backed up by FDR.

 SPXTAPE DUMP the first level spool to tape.  Then attach the tape to the
 2nd level guest and SPXTAPE LOAD it.  Shut it down.  Back it up with FDR.
 When recovering, use FDR to restore the guest volumes. IPL the guest,
 SPXTAPE DUMP them and then load them on the first level system.  If
 necessary, you could IPL the guest first level, as it were.  The nice
 thing is that the guest doesn't have to have the same spool volume
 configuration; it just needs enough space to store the spool files.

 Of course, if you only use the spool for transient data and don't care if
 you lose it, you can simply COLD/CLEAN start and rebuild the NSSes and
 DCSSes.

 Spool files that were open will not be restored, so be sure to send CLOSE
 CONS commands to each guest before you dump the spool.

  3) last of all, how often does VM get backed up when it's just used as a
 Linux
  server system??

 The more often it is backed up the less delta work you have to do to get
 the system back to the current state.  How much pain are you willing to
 endure?  Of course, it also depends on how often your VM system changes.
 If you added 20 new servers this week, are you sure you want to
 reallocate, reformat, and reinstall 20 images?  Or make 20 add'l clones?

 You might want to consider a commercial z/VM backup/archive/restore product.

Re: Backups and failover

2008-01-11 Thread Mark Wheeler

 ISFC would be nice, but PVM already supports all the same connection
 methods that ISFC does, plus a few more that ISFC doesn't. I'd *really*
 rather not make this dependent on CTCs.

FWIW...

Coming from a shop where we have quarterly enterprise-wide network outages
for maintenance, I find our FICON infrastructure much more reliable. Hard
to beat the reliability and simplicity of ISFC.

Mark L. Wheeler
IT Infrastructure, 3M Center B224-4N-20, St Paul MN 55144
Tel:  (651) 733-4355, Fax:  (651) 736-7689
mlwheeler at mmm.com
--
"I have this theory that if one person can go out of their way to show
compassion then it will start a chain reaction of the same. People will
never know how far a little kindness can go."  - Rachel Joy Scott


Re: Backups and failover

2008-01-11 Thread David Boyes
 I believe the requirement for PVM makes it very unattractive for
 installations to move forward with that next level of CSE. 

The problem isn't the need for PVM, it's the difficulty of obtaining PVM
in the recommended environment. Special bid prereqs for clustering
function make it darn hard to want to use the facilities that are
already there.

 The change
 to use ISFC for that is long overdue. 

ISFC would be nice, but PVM already supports all the same connection
methods that ISFC does, plus a few more that ISFC doesn't. I'd *really*
rather not make this dependent on CTCs.


Re: Backups and failover

2008-01-11 Thread Kris Buelens
I've got a few execs that back up all DCSSes found in the spool; the
DCSSes can also be rebuilt, and everything can even be restored
unattended (AUTOLOG2 may start my server to restore all saved segs).
The problems are the NSS files (GCS and CMS).  My code can deal with
them too, but there are requirements:
- Rebuilding GCS requires that the card deck to build GCS (the GCTLOAD file)
  is stored on the A-disk of my server, and the name of the GCS must match
  the mdisk label of MAINT 595
- Rebuilding the CMS NSS requires that MAINT 190 has the label of the CMS
  NSS, and a local mod to DMSINI is required so that an IPL 190 doesn't
  post a VM READ.
Saving CMS and GCS manually isn't very difficult, so one could change
the REXX code to skip them.  I'll send you the code.
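
Not Kris's code, but the DCSS half of the idea fits in a few lines of REXX.
A sketch only -- the DCSSBKUP invocation is left as a SAY because its operands
should be checked first, and QUERY NSS needs a suitably privileged user:

  /* SEGSAVE EXEC (sketch): find every DCSS in the spool              */
  Address COMMAND
  'PIPE cp QUERY NSS ALL MAP | stem seg.'   /* list saved segments    */
  do i = 2 to seg.0                         /* line 1 is the header   */
     parse var seg.i . name ftype .         /* FILE FILENAME FILETYPE */
     if ftype = 'DCSS' then                 /* skip NSS (CMS, GCS...) */
        say 'Would run:  DCSSBKUP' name     /* replace the SAY with   */
  end                                       /* the real DCSSBKUP call */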


2008/1/11, Karl Kingston [EMAIL PROTECTED]:

 
2) We use FDR/ABR on our z/OS side for backing up for Disaster Recovery.
  We
would like to keep using FDR.  Now I know I can get clean backups if
   the
systems are shut down.   Are there any gotcha's if I take a FDR full
   dump
against say 530RES or 530SPL while the system is up?
  
   I strongly encourage you NOT to do that.  The warmstart area and the spool
   volumes must all be consistent.  If you must use FDR, then set up a 2nd
   level guest (with dedicated volumes, not mdisks) whose sole purpose is to
   provide a resting place for spool files to be backed up by FDR.
  
   SPXTAPE DUMP the first level spool to tape.  Then attach the tape to the
   2nd level guest and SPXTAPE LOAD it.  Shut it down.  Back it up with FDR.
   When recovering, use FDR to restore the guest volumes. IPL the guest,
   SPXTAPE DUMP them and then load them on the first level system.  If
   necessary, you could IPL the guest first level, as it were.  The nice
   thing is that the guest doesn't have to have the same spool volume
   configuration; it just needs enough space to store the spool files.
  
   Of course, if you only use the spool for transient data and don't care if
   you lose it, you can simply COLD/CLEAN start and rebuild the NSSes and
   DCSSes.

 Don't care about the spool.   Can you provide a step by step to rebuild the 
 NSSes and DCSSes?

 Can't use SPXTAPE dump as we have no tape drives attached to either of the VM 
 systems.  Is there a way to save the DCSSes and NSSes without a tape drive?





-- 
Kris Buelens,
IBM Belgium, VM customer support


Re: Backups and failover

2008-01-11 Thread David Boyes
 Coming from a shop where we have quarterly enterprise-wide network
outages
 for maintenance, I find our FICON infrastructure much more reliable.

At least you know where to focus the solvent...8-)

 Hard
 to beat the reliability and simplicity of ISFC.

PVM likes CTCs too. 


Re: Backups and failover

2008-01-11 Thread Brian Nielsen
On Fri, 11 Jan 2008 08:51:38 -0500, Karl Kingston [EMAIL PROTECTED] wrote:

  2) We use FDR/ABR on our z/OS side for backing up for Disaster Recovery.
  We would like to keep using FDR.  Now I know I can get clean backups if
  the systems are shut down.  Are there any gotcha's if I take a FDR full
  dump against say 530RES or 530SPL while the system is up?

 I strongly encourage you NOT to do that.  The warmstart area and the spool
 volumes must all be consistent.  If you must use FDR, then set up a 2nd
 level guest (with dedicated volumes, not mdisks) whose sole purpose is to
 provide a resting place for spool files to be backed up by FDR.

 SPXTAPE DUMP the first level spool to tape.  Then attach the tape to the
 2nd level guest and SPXTAPE LOAD it.  Shut it down.  Back it up with FDR.
 When recovering, use FDR to restore the guest volumes.  IPL the guest,
 SPXTAPE DUMP them and then load them on the first level system.  If
 necessary, you could IPL the guest first level, as it were.  The nice
 thing is that the guest doesn't have to have the same spool volume
 configuration; it just needs enough space to store the spool files.

 Of course, if you only use the spool for transient data and don't care if
 you lose it, you can simply COLD/CLEAN start and rebuild the NSSes and
 DCSSes.

Don't care about the spool.  Can you provide a step by step to rebuild
the NSSes and DCSSes?

Can't use SPXTAPE dump as we have no tape drives attached to either of the
VM systems.  Is there a way to save the DCSSes and NSSes without a tape
drive?

If all you care about is the DCSS's and the NSS's you do have an option,
albeit unpopular and sure to draw ire.  The key lies in the nuances of
Alan's statement above that "The warmstart area and the spool volumes must
all be consistent."  If you care about *all* spool files then you *have*
to do as he recommends and either SPXTAPE them or back up your DASD from a
shutdown system.

If, and only if, you need spool files (such as DCSS's and NSS's) which
have existed for a long enough time that their information has been
written to the checkpoint area, then they will be correctly recovered
during a FORCE start if you use full pack backups of that running system.
This is similar to what you should expect to recover if you lose power or
otherwise lose your system without doing a SHUTDOWN.

This can easily be tested by DDR'ing your running 1st level system packs
to a 2nd level guest's MDISKs and IPLing the 2nd level guest.

Again, the best advice is to do it properly, but it's always useful to
know what is possible and what the risks are even if it's not recommended.

Brian Nielsen


Re: Backups and failover

2008-01-11 Thread Rob van der Heij
On Jan 11, 2008 3:49 PM, David Boyes [EMAIL PROTECTED] wrote:

 ISFC would be nice, but PVM already supports all the same connection
 methods that ISFC does, plus a few more that ISFC doesn't. I'd *really*
 rather not make this dependent on CTCs.

I think PVM for transport of CP-to-CP communication was an ugly hack,
probably because there were many SNA-less VM shops back then that
already had a PVM network. I've seen big badges so fond of PASSTHRU
that they could demand a PVM network in addition to the rest of
connectivity. But for most shops the days are gone where you could
afford to confuse people just to re-use hardware for something it was
not meant to do.

When you want clustering of z/VM images, there will be sufficient
other need for interaction (like shared DASD) that the ISFC
requirements do not make it harder. ISFC is pretty robust as long as
you don't run it over slow connections or a non-trivial topology.

There may also be an additional need for a supported IPGATE solution
(to let applications on non-clustered z/VM images talk). But we should
very clearly distinguish between clustered and connected z/VM images.

Rob


Re: Backups and failover

2008-01-11 Thread Huegel, Thomas
What are the limits? I seem to remember DIRMAINT has a SYSAFFIN max of 16.
Does CSE have a limit?

-Original Message-
From: The IBM z/VM Operating System [mailto:[EMAIL PROTECTED]
Behalf Of Rob van der Heij
Sent: Friday, January 11, 2008 7:39 AM
To: IBMVM@LISTSERV.UARK.EDU
Subject: Re: Backups and failover


On Jan 11, 2008 2:12 PM, Kris Buelens [EMAIL PROTECTED] wrote:

 The size of the Gun?  I was illustrating Dirmaint's place/role in the
 CSE *requirements*; I explicitly wanted to say that not all CSE functions
 need PVM and Dirmaint.  I know from experience that many people see
 CSE as 1 thing, and that isn't the case: 3 distinct functions, all
 with different requirements.

The primary requirement is that the CP directory of all involved
systems is managed in some way by a single organization.
Use of distributed IUCV requires that you control at least the
userids. And like you describe, when you share mini disks you must
control allocation of those shared mini disks. Whether with DIRMAINT
or with home grown tools.

I believe the requirement for PVM makes it very unattractive for
installations to move forward with that next level of CSE. The change
to use ISFC for that is long overdue. We can only hope this will
change when exciting new function for CP in that area is made
available.

Rob


Re: Backups and failover

2008-01-11 Thread Alan Altmark
On Friday, 01/11/2008 at 08:38 EST, Rob van der Heij [EMAIL PROTECTED] wrote:
 The primary requirement is that the CP directory of all involved
 systems is managed in some way by a single organization.
 Use of distributed IUCV requires that you control at least the
 userids. And like you describe, when you share mini disks you must
 control allocation of those shared mini disks. Whether with DIRMAINT
 or with home grown tools.

Yes.  The OP asked for the best way, and that involves using system 
management products that were designed to operate in a clustered 
environment.  A man with two watches does not know the time.  Keeping 
two (or four!) source directories in sync is a must in a *cluster*.  If 
you're just sharing DASD between two separate VM systems (i.e. not a 
cluster), then you have extra work to do to ensure that directory updates 
that affect the shared volume are properly reflected on all systems.

If you've got RACF, you can share the database among all systems in the 
cluster.  I would guess that CA's products have a clustering capability as 
well.

By the way, PVM is also the mechanism that limits a user to logon only 
once in the cluster.  Sure, you can write some scripts of your own, but 
they are advisory in the sense that the system cannot enforce the 
policy.  If someone bypasses the script you may find yourself in trouble. 
And that may not have anything to do with virtual machine corruption, but, 
for example, something as simple as bringing up two hosts with the same IP 
address.

Alan Altmark
z/VM Development
IBM Endicott


Re: Backups and failover

2008-01-11 Thread Kris Buelens
The size of the Gun?  I was illustrating Dirmaint's place/role in the
CSE *requirements*; I explicitly wanted to say that not all CSE functions
need PVM and Dirmaint.  I know from experience that many people see
CSE as 1 thing, and that isn't the case: 3 distinct functions, all
with different requirements.

We do have Dirmaint, but our VM systems are so different that
having a single source directory would be unmanageable.  We do however
have an exec that performs a nightly verification of all source
directories.  Basically, all MDISK statements found to reside on packs
that happen to be used in more than one source directory must be
identical (this way we don't even have to maintain lists of volsers
that are shared).
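
Not Kris's exec, but a rough REXX sketch of that cross-check for anyone who
wants to roll their own. The directory file names are invented, and a real
version would also handle DEVNO statements, overlapping extents, and extents
that appear in only one directory:

  /* DIRCHECK EXEC (sketch): flag MDISKs defined differently in two   */
  /* source directories.  VM1DIR/VM2DIR DIRECT A are assumed to be    */
  /* copies of each system's source directory.                        */
  Address COMMAND
  stmt. = ''                             /* stmt.key = MDISK operands */
  from. = ''                             /* from.key = where seen     */
  retcode = 0
  do d = 1 to 2
     fn = word('VM1DIR VM2DIR', d)
     'PIPE <' fn 'DIRECT A | stem card.'
     do i = 1 to card.0
        parse upper var card.i kw vdev devtype start count volser .
        if kw <> 'MDISK' then iterate
        key = volser'.'start             /* one key per extent        */
        if from.key = '' then do
           from.key = fn
           stmt.key = vdev devtype start count volser
        end
        else if from.key <> fn ,
           & stmt.key <> vdev devtype start count volser then do
           say 'Mismatch on' volser 'at cyl' start':' from.key 'vs' fn
           retcode = 8
        end
     end
  end
  exit retcode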

2008/1/11, Rob van der Heij [EMAIL PROTECTED]:
 On Jan 11, 2008 11:03 AM, Kris Buelens [EMAIL PROTECTED] wrote:

  DIRMAINT (or similar) is only required if you want to have a single
  source directory that defines all  users of all VM systems in the CSE
  group.  DIRMAINT's CSE support doesn't need PVM, it needs an RSCS
  link.

 I believe you underestimate the size of the gun that you point at your feet.

 If you share disks between systems (even in R/O fashion) you need some
 process to keep directory entries up-to-date. An approach to keep
 directory entries between systems synchronized by only good will and
 promise will fail. Many of us have learned that the lifetime of things
 on the mainframe is often much longer than anticipated. When there's
 a chance that something can break, we have enough time that it will
 break eventually. Murphy refines this into when it will break.

 When you maintain the directory by hand, you can probably wrap some
 tooling around that to check / update the shared disks. Instead of
 using a shared disk to hold those entries (which gives a chicken & egg
 problem) you could use ISFC links to have two (home grown) service
 machines talk to each other. With the CMS Pipelines IUCV stages, you
 can issue Q MDISK U commands on the remote system and get the output
 of that back into your tooling. Unlike RSCS, the ISFC link does not
 require extra software (only CTC links between the LPARs).

 Rob


 --
 Rob van der Heij
 Velocity Software, Inc
 http://velocitysoftware.com/




-- 
Kris Buelens,
IBM Belgium, VM customer support


Re: Backups and failover

2008-01-11 Thread Rob van der Heij
On Jan 11, 2008 2:12 PM, Kris Buelens [EMAIL PROTECTED] wrote:

 The size of the Gun?  I was illustrating Dirmaint's place/role in the
 CSE *requirements*; I explicitly wanted to say that not all CSE functions
 need PVM and Dirmaint.  I know from experience that many people see
 CSE as 1 thing, and that isn't the case: 3 distinct functions, all
 with different requirements.

The primary requirement is that the CP directory of all involved
systems is managed in some way by a single organization.
Use of distributed IUCV requires that you control at least the
userids. And like you describe, when you share mini disks you must
control allocation of those shared mini disks. Whether with DIRMAINT
or with home grown tools.

I believe the requirement for PVM makes it very unattractive for
installations to move forward with that next level of CSE. The change
to use ISFC for that is long overdue. We can only hope this will
change when exciting new function for CP in that area is made
available.

Rob


Re: Backups and failover

2008-01-11 Thread Dave Jones

Hi, Karl.

Karl Kingston wrote:
[snip.]
Don't care about the spool.   Can you provide a step by step to rebuild 
the NSSes and DCSSes?


Can't use SPXTAPE dump as we have no tape drives attached to either of the 
VM systems.  Is there a way to save the DCSSes and NSSes without a tape 
drive?



Yup, there sure is... we've got a freebie spool file backup/restore
utility that can back up (to CMS files) and restore all types of spool
files: DCSS, NSS, NLS, IMG, UCR, and unit record (PRT, PUN, RDR). Drop
me a note off list if you'd like a copy of this free utility.


Have a good one.

--
DJ

V/Soft
  z/VM and mainframe Linux expertise, training,
  consulting, and software development
www.vsoft-software.com


Re: Backups and failover

2008-01-11 Thread RPN01
For the guarded failover portion, we have a rexx script and server that
keeps track of which system the guest was last booted on. If it is logged in
on the same host, the system just starts up. If it is autologged on the
other host, it immediately logs off (its 191 disk is R/O, so no damage
done). If it is logged in at a terminal on the other host, there is a prompt
telling the user that it was last brought up on the other host, asking
whether you really want to bring it up here.

Answering no causes a logout. Answering yes starts the boot process, which
includes logging the new boot into the system and the process is ready to
work in the other direction.

I have this code available, if you'd like a copy.
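
(Not Robert's exec, of course, but for the flavour of it, a bare-bones sketch
of such a guard, run from the guest's PROFILE EXEC. The state file, its
location, the boot device and the exact query wording are all assumptions to
verify, not part of his design:)

  /* BOOTCHK EXEC (sketch): only IPL if this host did the last boot   */
  Address COMMAND
  'PIPE cp QUERY USERID | var me'
  parse var me guest . here .              /* e.g. LINUX01 AT VM1     */
  lasthost = ''
  'PIPE < LASTHOST STATE A | take 1 | var lasthost'  /* last-boot rec */
  if lasthost = '' | lasthost = here then call boot  /* same host: go */
  else do
     'PIPE cp QUERY CONSOLE | var cons'    /* autologged guests run   */
     if pos('DISC', cons) > 0 then         /* disconnected; verify    */
        'CP LOGOFF'                        /* the wording on your CP  */
     else do
        say 'Last booted on' lasthost'.  Really bring it up here? (yes/no)'
        pull answer
        if answer = 'YES' then call boot
        else 'CP LOGOFF'
     end
  end
  exit
  boot:
     'PIPE var here | > LASTHOST STATE A'  /* record the new home     */
     'CP IPL 200'                          /* hypothetical boot disk  */
  return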

-- 
   .~.      Robert P. Nix     Mayo Foundation
   /V\      RO-OE-5-55        200 First Street SW
  / ( ) \   507-284-0844      Rochester, MN 55905
  ^^-^^
"In theory, theory and practice are the same, but in practice,
theory and practice are different."
"Join the story... Ride Ural."




On 1/10/08 10:33 AM, Karl Kingston [EMAIL PROTECTED] wrote:

 
 We just installed z/VM 5.3.   We have 2 systems running.   VM1 and VM2.
 Right now, all of our Linux guests (about 5) are on VM1.   They also have a
 directory entry on VM2 (but password set to NOLOG).
 
 1) What's the best way to do failover if we need to get something over?
 Right now, my plan is basically to log into VM2 and change the NOLOG to a
 password and then start the guest. Basically I want to avoid having our
 Operations staff make mistakes and start 2 instances of the same linux guest
 (on 2 VM systems).
 
 2) We use FDR/ABR on our z/OS side for backing up for Disaster Recovery.  We
 would like to keep using FDR.  Now I know I can get clean backups if the
 systems are shut down.   Are there any gotcha's if I take a FDR full dump
 against say 530RES or 530SPL while the system is up?
 
 3) last of all, how often does VM get backed up when it's just used as a Linux
 server system?? 
 
 Thanks 
 




Re: Backups and failover

2008-01-11 Thread David Boyes
  problem isn't the need for PVM, it's the difficulty of obtaining PVM
  in the recommended environment. Special bid prereqs for clustering
  function makes it darn hard to want to use the facilities that are
  already there. 
 Actually, it's not all that hard any more.  The Special Bid is there, on
 the shelf.  All you have to do is ask for it.  No fuss, no muss.

I wish more of your sales force knew that. It's not at all clear to
*them*. 

 When CP can support a geographically dispersed clustering model, then we
 will need to have the ability to establish connections via a WAN
 infrastructure. [snip]
 ISFC's zero-configuration (just ACTIVATE ISLINK) characteristics make it
 an ideal technology for those who want fast deployment with minimal
 education.  (No one here, of course.) 

On the other hand, you have *working* code that DOES the job; the
problem is figuring out how to browbeat your marketing process into
making it easy to get that code into people's hands. While ISFC is
certainly easier, PVM isn't exactly rocket science, either. 

I'd rather use your development cycles on something that DOESN'T work
yet. 


Re: Backups and failover

2008-01-11 Thread Alan Altmark
On Friday, 01/11/2008 at 08:52 EST, Karl Kingston [EMAIL PROTECTED] 
wrote:

 Don't care about the spool.   Can you provide a step by step to rebuild the
 NSSes and DCSSes?

The steps for rebuilding the NSSes and DCSSes are in the Service Guide 
(Step 12, chapter 3).

 Can't use SPXTAPE dump as we have no tape drives attached to either of the VM
 systems.  Is there a way to save the DCSSes and NSSes without a tape drive?

For DCSSes you can [probably] use DCSSBKUP and DCSSRSAV commands.  For 
NSSes, there is no IBM-supported way I can think of to copy an NSS to disk 
without a tape drive as an intermediary.  Just recreate CMS and GCS (if 
needed).

Fair warning: A stand-alone CP dump requires a tape drive (not VTAPE).

Looking at the General Information Manual, I see that we should probably 
update the Server requirements to include a tape drive for spool file 
backup and stand-alone dump.

Alan Altmark
z/VM Development
IBM Endicott


Re: Backups and failover

2008-01-11 Thread Alan Altmark
On Friday, 01/11/2008 at 09:51 EST, David Boyes [EMAIL PROTECTED] 
wrote:
  I believe the requirement for PVM makes it very unattractive for
  installations to move forward with that next level of CSE.
 
 The problem isn't the need for PVM, it's the difficulty of obtaining PVM
 in the recommended environment. Special bid prereqs for clustering
 function makes it darn hard to want to use the facilities that are
 already there.

Actually, it's not all that hard any more.  The Special Bid is there, on 
the shelf.  All you have to do is ask for it.  No fuss, no muss.

 ISFC would be nice, but PVM already supports all the same connection
 methods that ISFC does, plus a few more that ISFC doesn't. I'd *really*
 rather not make this dependent on CTCs.

When CP can support a geographically dispersed clustering model, then we 
will need to have the ability to establish connections via a WAN 
infrastructure.  As it stands, you already have to have FICON connectivity 
between systems, as it were, so you can share the DASD.  Having another 
chpid for CTCs (if you don't already have one) shouldn't be a 
deal-breaker.  Granted, you will need extra ports on the FICON switch 
where a port shortage would drive up the cost.  But if that's the case, 
then I think you were about to buy another switch anyway.  This means it's 
a little sooner than you had originally thought.  [Same issue applies to 
ESCON.]

ISFC's zero-configuration (just ACTIVATE ISLINK) characteristics make it 
an ideal technology for those who want fast deployment with minimal 
education.  (No one here, of course.)  But I would advise that creating an 
ISFC Collection, as it is known, does introduce the requirement of a flat 
userid name space.  By that I mean that user ALAN on VM1 must be the same 
person as ALAN on VM2.  If, for example, you enroll ALAN in MYSFS on VM1, 
then ALAN on VM2 will have access with all the same rights, privileges, 
and responsibilities.  This begs for centralized directory management.
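
For what it's worth, that zero configuration really is about this small; a
sketch with made-up device numbers:

  /* In SYSTEM CONFIG, or dynamically with CP ACTIVATE ISLINK from an */
  /* authorized user:                                                 */
  ACTIVATE ISLINK 0500 0501      /* CTC devices that reach the other  */
                                 /* system, which needs a matching    */
                                 /* statement for its own CTCs        */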

Alan Altmark
z/VM Development
IBM Endicott


Backups and failover

2008-01-10 Thread Karl Kingston
We just installed z/VM 5.3.   We have 2 systems running.   VM1 and VM2. 
Right now, all of our Linux guests (about 5) are on VM1.   They also have 
a directory entry on VM2 (but password set to NOLOG).

1) What's the best way to do failover if we need to get something over? 
Right now, my plan is basically to log into VM2 and change the NOLOG to a 
password and then start the guest. Basically I want to avoid having 
our Operations staff make mistakes and start 2 instances of the same linux 
guest (on 2 VM systems).

2) We use FDR/ABR on our z/OS side for backing up for Disaster Recovery. 
We would like to keep using FDR.  Now I know I can get clean backups if 
the systems are shut down.   Are there any gotcha's if I take a FDR full 
dump against say 530RES or 530SPL while the system is up?

3) last of all, how often does VM get backed up when it's just used as a 
Linux server system??

Thanks


Re: Backups and failover

2008-01-10 Thread Jim Bohnsack
Karl--About the biggest problem you are likely to face if you take 
backups of 530RES and SPL is that you might, or I should say would, lose 
open VM spool files.  An IPL from the restored volumes would have to be 
done using FORCE rather than WARM because there would have been no 
WARM START DATA saved.  For better or worse, our normal backups are 
taken against up and running VM systems. 

Since you're running z/OS and doing backups from there, I would 
recommend doing regularly scheduled backups.  The wasted effort of 
redundant or duplicate backups is a small price to pay for the security 
of knowing you're covered when a change was made and you or someone 
forgot to take a backup.  Tapes sometimes go bad.  If you have a 
previous backup to go back to, that's another safety net.


Jim

Karl Kingston wrote:


We just installed z/VM 5.3.   We have 2 systems running.   VM1 and VM2. 
Right now, all of our Linux guests (about 5) are on VM1.   They also have 
a directory entry on VM2 (but password set to NOLOG).


1) What's the best way to do failover if we need to get something over? 
Right now, my plan is basically to log into VM2 and change the NOLOG to a 
password and then start the guest. Basically I want to avoid having 
our Operations staff make mistakes and start 2 instances of the same linux 
guest (on 2 VM systems).


2) We use FDR/ABR on our z/OS side for backing up for Disaster Recovery.  
We would like to keep using FDR.  Now I know I can get clean backups if 
the systems are shut down.   Are there any gotcha's if I take a FDR full 
dump against say 530RES or 530SPL while the system is up?


3) last of all, how often does VM get backed up when it's just used as a 
Linux server system??


Thanks




--
Jim Bohnsack
Cornell University
(607) 255-1760
[EMAIL PROTECTED]


Re: Backups and failover

2008-01-10 Thread Thomas Kern
In a server hosting environment, user spool files may not be necessary at
your backup site. The remaining System Data Files (SDF) are not written out
very often and you can probably do a COLD start at the backup site, which
will throw away all of the spool files but not the SDF files. For extra
safety you might back up your SDF files to a CMS minidisk (DCSSBKUP/DCSSRSAV
commands). Then at the backup site, if you do have problems with some SDF,
you can restore it and continue. If you do need spool files to be restored,
then use SPXTAPE to dump all of your spool separately from your DASD backups
(sorry, z/OS can't do this yet), do a CLEAN start at your backup site and
restore all of your spool and SDF files from the backup tape.

/Tom Kern

On Thu, 10 Jan 2008 13:32:34 -0500, Jim Bohnsack [EMAIL PROTECTED] wrote:
Karl--About the biggest problem you are likely to face if you take
backups of 530RES and SPL is that you might or I should say, would, lose
open VM spool files.  An ipl from the restored volumes would have to be
done using FORCE rather than WARM because there would have been no
WARM START DATA saved.  For better or worse, our normal backups are
taken against up and running VM systems.

Since you're running z/OS and doing backups from there, I would
recommend doing regularly scheduled backups.  The wasted effort or
redundant or duplicate backups are a small price to pay for the security
of knowing that a change was made and you or someone forgot to take a
backup.  Tapes sometimes go bad.  If you have a previous backup to go
back to, that's another safety net.

Jim



Re: Backups and failover

2008-01-10 Thread Alan Altmark
On Thursday, 01/10/2008 at 11:36 EST, Karl Kingston 
[EMAIL PROTECTED] wrote:
 We just installed z/VM 5.3.   We have 2 systems running.   VM1 and VM2. 
Right now, all of our Linux guests (about 5) are on VM1.   They also 
have a 
 directory entry on VM2 (but password set to NOLOG). 
 
 1) What's the best way to do failover if we need to get something over? 
  Right 
 now, my plan is basically to log into VM2 and change the NOLOG to a 
password 
 and then start the guest. Basically I want to avoid having our 
Operations 
 staff make mistakes and start 2 instances of the same linux guest (on 2 
VM 
 systems). 

The best way is to implement a VM cluster using Cross-System Extensions 
(CSE).  You will need DIRMAINT (or other cluster-enabled directory 
manager) and to special bid the PVM product.

It will 
- Let you share spool files among all systems in the cluster
- Only allow the Linux user to logon on VM1 or VM2, not both
- Let Certain Users logon to both systems at the same time (no shared 
spool) such as TCPIP
- Perform user add / change / delete from a single system
- Allow the users to have different virtual machine configurations, 
depending on which system they logon to
- Let LINUX1 on VM2 link to disks owned by LINMASTR on VM1 with link mode 
protection (you can do this without CSE via XLINK)

 2) We use FDR/ABR on our z/OS side for backing up for Disaster Recovery. 
   We 
 would like to keep using FDR.  Now I know I can get clean backups if 
the 
 systems are shut down.   Are there any gotcha's if I take a FDR full 
dump 
 against say 530RES or 530SPL while the system is up?

I strongly encourage you NOT to do that.  The warmstart area and the spool 
volumes must all be consistent.  If you must use FDR, then set up a 2nd 
level guest (with dedicated volumes, not mdisks) whose sole purpose is to 
provide a resting place for spool files to be backed up by FDR.

SPXTAPE DUMP the first level spool to tape.  Then attach the tape to the 
2nd level guest and SPXTAPE LOAD it.  Shut it down.  Back it up with FDR. 
When recovering, use FDR to restore the guest volumes. IPL the guest, 
SPXTAPE DUMP them and then load them on the first level system.  If 
necessary, you could IPL the guest first level, as it were.  The nice 
thing is that the guest doesn't have to have the same spool volume 
configuration; it just needs enough space to store the spool files.

Of course, if you only use the spool for transient data and don't care if 
you lose it, you can simply COLD/CLEAN start and rebuild the NSSes and 
DCSSes.

Spool files that were open will not be restored, so be sure to send CLOSE 
CONS commands to each guest before you dump the spool.
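
To make the round trip concrete, the console sequence might look roughly like
the sketch below. The tape address 181 is made up, and the SPXTAPE operands
should be checked against the CP Commands and Utilities Reference before you
depend on them:

  /* First level, with a tape drive attached at 181:                   */
  SPXTAPE DUMP 181 SPOOL ALL RUN   /* or SDF for just the NSS/DCSS set */
  /* Attach that tape to the 2nd-level guest, IPL its CP, then:        */
  SPXTAPE LOAD 181 SPOOL ALL RUN
  SHUTDOWN                         /* now FDR can back up the guest's  */
                                   /* dedicated volumes                */
  /* Recovery is the mirror image: FDR-restore the guest volumes, IPL  */
  /* the guest (or IPL them first level), SPXTAPE DUMP there, and      */
  /* SPXTAPE LOAD on the production first-level system.                */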

 3) last of all, how often does VM get backed up when it's just used as a 
Linux 
 server system?? 

The more often it is backed up the less delta work you have to do to get 
the system back to the current state.  How much pain are you willing to 
endure?  Of course, it also depends on how often your VM system changes. 
If you added 20 new servers this week, are you sure you want to 
reallocate, reformat, and reinstall 20 images?  Or make 20 add'l clones?

You might want to consider a commercial z/VM backup/archive/restore 
product.

Alan Altmark
z/VM Development
IBM Endicott