The only thing I would really use SFS for would be the product disks (CP,
CMS, GCS, etc.), and trying to move those to another pool would mean
editing many of the control files that come with install and maintenance
and that contain the VMSYS: filepool name. Too big a headache to be
worthwhile.

I can't really see requiring SFS to bring up a Linux guest, either. If SFS
breaks for some reason, all your Linux guests are broken the moment they
try to restart. If minidisks are broken, well... then you probably have
IBM on the phone, and your life is too miserable at the moment to discuss.

All our minidisks are accessible from both sides of the CSE. We can shut
down and log off an image in one LPAR, and immediately log it on and boot
Linux in the other, without any changes to the Linux or z/VM
configuration. It's simple, and it works.
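
For what it's worth, the move itself is just a couple of CP commands (a
sketch; the guest name LINUX01 is made up). On the LPAR giving up the
image:

   CP SIGNAL SHUTDOWN LINUX01 WITHIN 300

and then, once it has logged off, on the LPAR taking it over:

   CP XAUTOLOG LINUX01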

If you tie the Linux image to a local filepool (or, for that matter, to a
minidisk unique to a single system in your complex), you've hampered your
ability to quickly relocate the image from one LPAR to another, and
you've reduced your ability to quickly address problems. I really like
the 60-second hardware switch. I wish we had a way to automate the switch
during a problem, to cut that 60 seconds down to near nothing. It still
wouldn't be a complete HA solution, but it'd be as close as we could get
for non-HA-compliant applications (things that don't support
active-passive or active-active anyway).
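
The restart side of that automation wouldn't need to be fancy. Something
like this hypothetical REXX fragment, kicked off by whatever notices the
outage, would do it (nothing here is real automation-product syntax; the
guest names are invented):

   /* RESTART EXEC -- autolog the orphaned guests on this LPAR */
   guests = 'LINUX01 LINUX02 LINUX03'
   do i = 1 to words(guests)
      'CP XAUTOLOG' word(guests, i)
   end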

I'm not that resistant to change, but I haven't seen another solution
that still allows us to do what we do here. We're at a bit over 50 Linux
guests and still growing quickly, and what we do now works very well (so
far). You only buy a new mousetrap when someone builds a better one, and
we're catching mice faster than we can handle them now... We'd consider a
better solution; we just haven't seen one.

-- 
Robert P. Nix          Mayo Foundation        .~.
RO-OE-5-55             200 First Street SW    /V\
507-284-0844           Rochester, MN 55905   /( )\
-----                                        ^^-^^
"In theory, theory and practice are the same, but
 in practice, theory and practice are different."


On 10/28/08 2:44 PM, "O'Brien, Dennis L"
<Dennis.L.O'[EMAIL PROTECTED]> wrote:

> Robert, 
> You don't have to use the VMSYS filepool.  You can create a new filepool
> that doesn't start with "VMSYS" and share it between systems.  The only
> drawback is that if the system that hosts the filepool server isn't up,
> the filepool isn't accessible to the other system.
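> 
> To illustrate (just a sketch; the pool name LNXPOOL and the block count
> are made up), enrolling a guest in such a pool and giving it an SFS
> A-disk is only:
> 
>    ENROLL USER LINUX01 LNXPOOL: (BLOCKS 4000
>    ACCESS LNXPOOL:LINUX01. A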
> 
> We have filepool servers on every system.  They have unique names that
> don't start with "VMSYS".  If we had production Linux on multiple
> systems, we'd use SFS A-disks in a filepool that's on the same system as
> the Linux guests.  Because the pools are sharable, if we had to make a
> change to PROFILE EXEC, we could do that for all systems from one place.
> For our z/OS guests, we have one PROFILE EXEC on each system that has an
> alias for each guest.  If I were setting up Linux guests, I'd do them
> the same way.
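> 
> The alias setup looks roughly like this (a sketch; the pool and
> directory names are invented). Grant the guest read access to the base
> file, then create the alias in the guest's top directory:
> 
>    GRANT AUTHORITY PROFILE EXEC LNXPOOL:MAINT.COMMON TO LINUX01 (READ
>    CREATE ALIAS PROFILE EXEC LNXPOOL:MAINT.COMMON PROFILE EXEC LNXPOOL:LINUX01.
> 
> Change the base file, and every alias sees the change.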
> 
>                                                        Dennis
> 
> We are Borg of America.  You will be assimilated.  Resistance is futile.
> 
> -----Original Message-----
> From: The IBM z/VM Operating System [mailto:[EMAIL PROTECTED] On
> Behalf Of RPN01
> Sent: Tuesday, October 28, 2008 12:28
> To: IBMVM@LISTSERV.UARK.EDU
> Subject: Re: [IBMVM] Linux guest 191/200 disk question
> 
> One problem w/ SFS is that we don't run it on our second LPAR at all.
> Anything that we want to be able to run on both systems has to reside
> on a minidisk. SFS isn't a choice.
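> 
> A shared minidisk is just a directory statement on a volume that's
> online to both LPARs, something like this (the address, extent, and
> volser are made up):
> 
>    MDISK 201 3390 0001 3338 LNXVOL MR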
> 
> If IBM would allow the vmsys: pool to be shared between systems, we'd be
> more likely to use it.
