Re: Linux guest 191/200 disk question

2008-10-30 Thread David Boyes
 On a similar but disparate subject: Why do we have to use tape to move SDF
 type files from one system to another? I just want to move CMS, GCS and the
 various system files from one system within CSE to another... But to do it,
 I have to have a tape drive. It's the only use I have for a tape drive now,
 and it keeps us from getting rid of otherwise unneeded hardware in a data
 center with no space or power to install new systems.

There is an open requirement for this (submitted via WAVV). 


Re: Linux guest 191/200 disk question

2008-10-29 Thread Kris Buelens
I'd not use SFS for a Linux guest's A-disk.  The benefits SFS surely has
for CMS users are not enough for Linux guests to outweigh the risk of SFS
being down.

But, if you insist: renaming the VMSYS filepool to something else is a
task done in 30 seconds (I did that often in my previous life):
- Shut down VMSYS (VMSERVS)
- VMLINK VMSERVS 191 * * M (FILEL
  -- XEDIT VMSERVS DMSPARMS and change VMSYS on the FILEPOOLID record
  -- RENAME the VMSYS POOLDEF file to match the new name on FILEPOOLID
- Leave FILELIST and restart VMSERVS (see the sketch below).
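
In EXEC form, the same steps look roughly like this; MYSFS, the 991 virtual
address and the FORCE/XAUTOLOG/link privilege assumptions are just examples:

/* RENPOOL EXEC -- sketch of renaming the VMSYS filepool to MYSFS       */
Address Command
'CP FORCE VMSERVS'                  /* stop the VMSYS file pool server   */
'CP LINK VMSERVS 191 991 MR'        /* pick up its 191 read/write        */
'ACCESS 991 Z'
'XEDIT VMSERVS DMSPARMS Z'          /* interactive: change VMSYS on the  */
                                    /* FILEPOOLID record to MYSFS, FILE  */
'RENAME VMSYS POOLDEF Z MYSFS = ='  /* POOLDEF file must match the new id */
'RELEASE Z (DET'
'CP XAUTOLOG VMSERVS'               /* restart the renamed server        */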

2008/10/28 Adam Thornton [EMAIL PROTECTED]:
 On Oct 28, 2008, at 1:36 PM, Tom Duerbusch wrote:

 I must have missed the first part of the conversation

 Why would you want Linux to have access to your A-disk?
 There might be reasons, but inquiring minds want to know, and I deleted the
 original posts <g>.

 Handy for building systems where you can change Linux behavior without the
 user knowing much of anything about Linux, by editing files in CMS.

 Adam




-- 
Kris Buelens,
IBM Belgium, VM customer support


Re: Linux guest 191/200 disk question

2008-10-29 Thread RPN01
But because I share my res volume among the CSE'd systems, I can't install
any of the products in SFS, because I may need to build one or more of the
products on each of the various systems. So everything gets put in
minidisks, and the vmsys: filepool remains fairly empty.

If I could share vmsys: across systems within a CSE environment, then I
could install the products there, and would be able to build things on any
of the systems. It would greatly simplify maintenance within CSE, because
you'd have fewer minidisks to keep track of, and minidisk size becomes less
of an issue as more and more maintenance is applied to the products (not
that I've had to increase a minidisk since going to 5.0...)

The only product I install to vmsys: is RACF; we don't use it.

Why couldn't vmsys: be localized by default, but allow the option of sharing
it among systems, where it makes sense in the customer's environment? Don't
be so headstrong in protecting me from myself; I may have thought of
something you missed.

On a similar but disparate subject: Why do we have to use tape to move SDF
type files from one system to another? I just want to move CMS, GCS and the
various system files from one system within CSE to another... But to do it,
I have to have a tape drive. It's the only use I have for a tape drive now,
and it keeps us from getting rid of otherwise unneeded hardware in a data
center with no space or power to install new systems.

The other problem with this is that we only have a tape drive on one of the
two z/VM LPARs, so to do the transfer at all, I have to bring up the second
system second-level on the first system. Give SPXTAPE another media, or come
up with another tool for moving these files, please! This is one of the
biggest headaches I have to deal with; thank goodness it only occurs when we
want to upgrade z/VM, but should a problem ever occur that needed SDF
quickly rebuilt on the tapeless system, it'd be chaos.

A question comes to mind here... I can easily build CMS and somewhat easily
build GCS. What is, or where are, the procedures for rebuilding all the
other SDF files? There's likely documentation for the various shared
segments, but what about the IMG and NLS files? I haven't gone on a search
yet, but is there somewhere that these procedures are documented?

FREE SDF AND ITS SPOOL MINIONS FROM THE TAPE TYRANNY!! FREE VMSYS: FROM ITS
SINGLE SYSTEM CELL!!

Tongue in cheek, but these are real issues for us here. Whenever there's
absolutely only one way to do things, everyone suffers.

-- 
Robert P. Nix  Mayo Foundation.~.
RO-OE-5-55 200 First Street SW/V\
507-284-0844   Rochester, MN 55905   /( )\
-^^-^^
In theory, theory and practice are the same, but
 in practice, theory and practice are different.




On 10/28/08 2:42 PM, Alan Altmark [EMAIL PROTECTED] wrote:

 On Tuesday, 10/28/2008 at 03:28 EDT, RPN01 [EMAIL PROTECTED] wrote:
 If IBM would allow the vmsys: pool to be shared between systems, we'd be
 more likely to use it.
 
 Say more.  The VMSYS filepool was intended to contain information that is
 used ONLY for THIS system (inventory, service, etc.).  When you establish
 a collection with ISFC, each system's VMSYS filepool remains active and
 private to each system.
 
 Information that you intend to share requires you to set up your own
 filepool and then connect the systems with ISFC (or use IPGATE).
 
 I do recognize that in a clustered environment like CSE it would be good
 to have a VMSYS-like filepool that handles SESesque system data for all
 members of the cluster and is shared.
 
 Alan Altmark
 z/VM Development
 IBM Endicott


Re: Linux guest 191/200 disk question

2008-10-29 Thread RPN01
I generally use M, since if I can't get write access, I don't really need it
at all at the moment.

The whole issue isn't that great here, as we have only four actual users
that would ever attempt to get write access to the Linux guest 191 shared
disk, and two of us sit within shouting distance (much to our other
neighbor's regret). Integrity for the disk is handled by saying loudly "You
using the Linux 191 disk?" and waiting for a response.

The point was that the actual Linux guests certainly never need write access
to their own 191 minidisk, and their read-only usage is only for a few
seconds of time, and hopefully very, very seldom. This is a very safe
candidate for read-only sharing among all the guests, freeing you to think
about other things when you're creating a new Linux image. You don't have to
add allocating and populating a 191 disk to the list of tasks in building a
new image. You can take care of it in a directory profile included in each
new directory entry and have it completely covered. And, you know that all
the guests are always using exactly the same thing, where with the
individual 191 minidisks, you can't ever be really sure. Someone might have
changed something in the profile for one of them, and you'll be stuck later
trying to figure out why it doesn't work quite the same as all the others.
This alone is a good reason for sharing a single 191 image throughout your
guests.
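
Since the directory profile does the heavy lifting, a minimal sketch of what
that might look like (LNXDFLT, LNXMAINT, LINUX01 and the MDISK placeholders
are illustrative names, not our actual entries):

PROFILE LNXDFLT
  MACHINE ESA
  IPL CMS PARM AUTOCR
  CONSOLE 0009 3215 T
  SPOOL 000C 2540 READER *
  SPOOL 000D 2540 PUNCH A
  SPOOL 000E 1403 A
  LINK LNXMAINT 192 191 RR
*
USER LINUX01 XXXXXXXX 512M 1G G
  INCLUDE LNXDFLT
  MDISK 0200 3390 <start> <cyls> <volser> MR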

-- 
Robert P. Nix  Mayo Foundation.~.
RO-OE-5-55 200 First Street SW/V\
507-284-0844   Rochester, MN 55905   /( )\
-^^-^^
In theory, theory and practice are the same, but
 in practice, theory and practice are different.




On 10/28/08 2:45 PM, Scott Rohling [EMAIL PROTECTED] wrote:

 Well - technically true if MW is used on the LINK instead of MR -- that's such
 a big no no in general I guess I assume people won't do it -- but good point.
 
 Scott Rohling
 
 
   Until you have two users, access the shared disk in
  R/W mode, to update it.  No protection.  SFS will always protect you.
 
 
 




Re: Linux guest 191/200 disk question

2008-10-29 Thread RPN01
The only thing I would really use SFS for would be the product disks (CP,
CMS, GCS, etc), and trying to move those to another pool would mean having
to edit many of the control files that come with the install and maintenance
that contain the VMSYS: filepool name. Too big a headache to make it
worthwhile.

I can't really see requiring SFS to bring up a Linux guest either. If SFS
breaks for some reason, all your Linux guests are broken, should they try to
restart. If minidisks are broken, well... Then you probably have IBM on the
phone, and your life is too miserable at the moment to discuss.

All our minidisks are accessible from both sides of the CSE. We can shutdown
and log out an image in one LPAR, and immediately log it in and boot Linux
in the other, without any changes to the Linux or z/VM configuration. It's
simple, and it works.

If you tie the Linux image to a local filepool (or to a minidisk unique to a
single system in your complex, for that matter), you've hampered your
ability to quickly relocate the image from one LPAR to another; you've
reduced your ability to quickly address problems. I really like the 60
second hardware switch. I wish we had a way to automate the switch during a
problem, to cut that 60 seconds down to near nothing. Still wouldn't be a
complete HA solution, but it'd be as close as we could get for non-HA
compliant applications (things that don't support active-passive or
active-active anyway).

I'm not that resistant to change, but I haven't seen another solution that
still allows us to do what we do here. We're at a bit over 50 Linux guests,
and still growing quickly, and what we do now works very well (so far). You
only buy new mousetraps when someone builds a better one, and we're catching
mice quicker than we can handle them now... We'd consider a better solution;
we just haven't seen one.

-- 
Robert P. Nix  Mayo Foundation.~.
RO-OE-5-55 200 First Street SW/V\
507-284-0844   Rochester, MN 55905   /( )\
-^^-^^
In theory, theory and practice are the same, but
 in practice, theory and practice are different.


On 10/28/08 2:44 PM, O'Brien, Dennis L
Dennis.L.O'[EMAIL PROTECTED] wrote:

 Robert, 
 You don't have to use the VMSYS filepool.  You can create a new filepool
 that doesn't start with VMSYS and share it between systems.  The only
 drawback is that if the system that hosts the filepool server isn't up,
 the filepool isn't accessible to the other system.
 
 We have filepool servers on every system.  They have unique names that
 don't start with VMSYS.  If we had production Linux on multiple
 systems, we'd use SFS A-disks in a filepool that's on the same system as
 the Linux guests.  Because the pools are sharable, if we had to make a
 change to PROFILE EXEC, we could do that for all systems from one place.
 For our z/OS guests, we have one PROFILE EXEC on each system that has an
 alias for each guest.  If I were setting up Linux guests, I'd do them
 the same way.
 
Dennis
 
 We are Borg of America.  You will be assimilated.  Resistance is futile.
 
 -Original Message-
 From: The IBM z/VM Operating System [mailto:[EMAIL PROTECTED] On
 Behalf Of RPN01
 Sent: Tuesday, October 28, 2008 12:28
 To: IBMVM@LISTSERV.UARK.EDU
 Subject: Re: [IBMVM] Linux guest 191/200 disk question
 
 One problem w/ SFS is that we don't run it on our second LPAR at all.
 Anything that we want to be able to run on both systems has to reside on
 a
 minidisk. SFS isn't a choice.
 
 If IBM would allow the vmsys: pool to be shared between systems, we'd be
 more likely to use it.


Re: Linux guest 191/200 disk question

2008-10-29 Thread Scott Rohling
|The point was that the actual Linux guests certainly never need write
access to their own |191 minidisk


True for cloning -- not true if you use the RedHat 'kickstart' method (or
SuSE autoyast, which I haven't tried personally).   I've helped several
clients implement an 'automated kickstart' - which involves creating the
necessary config files on the 191 (or other-addressed) disk, punching the
install kernel to the reader and IPLing the reader.   The config file points
to a kickstart config on an install server -- and the automated install
takes off.  A new server is created this way rather than by cloning.
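
The punch-and-IPL step usually boils down to something like the sketch
below; the file names, and the assumption that the config files have
already been written to the writable disk, are just placeholders:

/* Sketch: guest-side punch-and-IPL for an automated kickstart install  */
Address Command
'CP SPOOL PUNCH * RDR'              /* punch output goes to my own reader */
'CP PURGE RDR ALL'                  /* start with an empty reader         */
'PUNCH KERNEL  IMG A (NOHEADER'     /* 1: installer kernel                */
'PUNCH GENERIC PRM A (NOHEADER'     /* 2: parmfile pointing at kickstart  */
'PUNCH INITRD  IMG A (NOHEADER'     /* 3: initial ramdisk                 */
'CP CHANGE RDR ALL KEEP NOHOLD'
'CP IPL 00C CLEAR'                  /* boot the installer from the reader */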

Don't want to open a debate on this, but there are some benefits to doing
things this way instead of using a clone source and flashcopy/ddr:

-  Network is already configured by the config file and the kickstart
-  It can support any size disk(s) rather than a 3390-3/9 ..  the kickstart
can also contain the partitioning info to support different needs
-  It forces the install to a repeatable, scriptable set of steps - rather
than copying something several people have been tromping on (clone source).
-  Forces product installs to repeatable, scriptable set of steps - use
different kickstart configs rather than different clone sources.
-  Time is still low:  5 minutes  (if DASD is pre-formatted - big if) ..

Anyway -- to 'kick things off' - you need some writable space for the unique
config files.   Most customers that do this I've encouraged to use TDISK -
it's the safest, least complicated way to get temp space - and you don't
have to worry about concurrent installs grabbing some disk.   I've also seen
each guest given its own 191 - but I'm sure we'd all agree that while it's
safe and uncomplicated, it's a maintenance nightmare --  you want to maintain
one common PROFILE EXEC - not hundreds.   So my vote is:

-  Use common 191 with read only link (minidisk or SFS)
-  Use TDISK when writable area is necessary (like an automatic install or
kickstart)

Scott Rohling


Re: Linux guest 191/200 disk question

2008-10-29 Thread Kris Buelens
For the VMSYS issue: you can also arrange to have that same filepool
available both as VMSYS and as some other name.
1. Change the real filepool id as explained in my previous note.  Any
name not starting with VMSYS makes it a candidate for access from
anywhere within the CSE, e.g. MYSFS.
2. Add REMOTE in the SFS server's DMSPARMS file.
3. Code an SCOMDIR NAMES entry on MAINT 190 reading
   :nick.VMSYS :tpn.MYSFS
From then on, this filepool can be reached both as VMSYS: or as
MYSFS:, but not concurrently in a single CMS session.  That is, between
IPL CMS commands you must always use the same name.  If you try the
other name, you'll get for example
 q limits * MYSFS:
 UseridStorage Group  4K Block Limit  4K Blocks Committed  Threshold
 KRIS  3  16 64313-40%   95%
 Ready;
 q limits * VMSYS
 DMSQRQ2524E Concurrent use of multiple file pool identifiers
 DMSQRQ2524E that resolve to file pool MYSFS
 Ready(40);

As for saved segments: you can save segments into a CMS file with the
DCSSBKUP command (on MAINT 193) and restore them with DCSSRSAV.  I had a
server to manage it all.  No tapes used.  For CMS and GCS, I always
saved the text deck that the VMFBLD procedure creates; these files
could then be SENDFILEd, for example, to remote systems to generate
CMS/GCS remotely.  The server managing the DCSSBKUP files understood
how to handle GCS and CMS too.

2008/10/29 RPN01 [EMAIL PROTECTED]:
 But because I share my res volume among the CSE'd systems, I can't install
 any of the products in SFS, because I may need to build one or more of the
 products on each of the various systems. So everything gets put in
 minidisks, and the vmsys: filepool remains fairly empty.

 If I could share vmsys: across systems within a CSE environment, then I
 could install the products there, and would be able to build things on any
 of the systems. It would greatly simplify maintenance within CSE, because
 you'd have fewer minidisks to keep track of, and minidisk size becomes less
 of an issue as more and more maintenance is applied to the products (not
 that I've had to increase a minidisk since going to 5.0...)

 The only product I install to vmsys: is RACF; we don't use it.

 Why couldn't vmsys: be localized by default, but allow the option of sharing
 it among systems, where it makes sense in the customer's environment? Don't
 be so headstrong in protecting me from myself; I may have thought of
 something you missed.

 On a similar but disparate subject: Why do we have to use tape to move SDF
 type files from one system to another? I just want to move CMS, GCS and the
 various system files from one system within CSE to another... But to do it,
 I have to have a tape drive. It's the only use I have for a tape drive now,
 and it keeps us from getting rid of otherwise unneeded hardware in a data
 center with no space or power to install new systems.

 The other problem with this is that we only have a tape drive on one of the
 two z/VM LPARs, so to do the transfer at all, I have to bring up the second
 system second-level on the first system. Give SPXTAPE another media, or come
 up with another tool for moving these files, please! This is one of the
 biggest headaches I have to deal with; thank goodness it only occurs when we
 want to upgrade z/VM, but should a problem ever occur that needed SDF
 quickly rebuilt on the tapeless system, it'd be chaos.

 A question comes to mind here... I can easily build CMS and somewhat easily
 build GCS. What is, or where are, the procedures for rebuilding all the
 other SDF files? There's likely documentation for the various shared
 segments, but what about the IMG and NLS files? I haven't gone on a search
 yet, but is there somewhere that these procedures are documented?

 FREE SDF AND ITS SPOOL MINIONS FROM THE TAPE TYRANNY!! FREE VMSYS: FROM ITS
 SINGLE SYSTEM CELL!!

 Tongue in cheek, but these are real issues for us here. Whenever there's
 absolutely only one way to do things, everyone suffers.

 --
 Robert P. Nix  Mayo Foundation.~.
 RO-OE-5-55 200 First Street SW/V\
 507-284-0844   Rochester, MN 55905   /( )\
 -^^-^^
 In theory, theory and practice are the same, but
  in practice, theory and practice are different.




 On 10/28/08 2:42 PM, Alan Altmark [EMAIL PROTECTED] wrote:

 On Tuesday, 10/28/2008 at 03:28 EDT, RPN01 [EMAIL PROTECTED] wrote:
 If IBM would allow the vmsys: pool to be shared between systems, we'd be
 more likely to use it.

 Say more.  The VMSYS filepool was intended to contain information that is
 used ONLY for THIS system (inventory, service, etc.).  When you establish
 a collection with ISFC, each system's VMSYS filepool remains active and
 private to each system.

 Information that you intend to share requires you to set up your own
 filepool and then connect the systems with ISFC (or use IPGATE).

 I do recognize that in a clustered environment like CSE it 

Re: Linux guest 191/200 disk question

2008-10-29 Thread Mike Harding
The IBM z/VM Operating System IBMVM@LISTSERV.UARK.EDU wrote on 
10/29/2008 05:51:10 AM:

 On a similar but disparate subject: Why do we have to use tape to move SDF
 type files from one system to another? I just want to move CMS, GCS and the
 various system files from one system within CSE to another... But to do it,
 I have to have a tape drive. It's the only use I have for a tape drive now,
 and it keeps us from getting rid of otherwise unneeded hardware in a data
 center with no space or power to install new systems.
 
 The other problem with this is that we only have a tape drive on one of the
 two z/VM LPARs, so to do the transfer at all, I have to bring up the second
 system second-level on the first system. Give SPXTAPE another media, or come
 up with another tool for moving these files, please! This is one of the
 biggest headaches I have to deal with; thank goodness it only occurs when we
 want to upgrade z/VM, but should a problem ever occur that needed SDF
 quickly rebuilt on the tapeless system, it'd be chaos.

Look at the V/Seg-V/Spool feature of CA's VM:Spool.  It might be cheaper 
than a tape drive and has a utility to back up and restore any SDF to/from 
disk.  Can also be set up automatically to restore them all, a great 
feature for DR.


Re: Linux guest 191/200 disk question

2008-10-29 Thread Mark Post
 On 10/29/2008 at  9:49 AM, Scott Rohling [EMAIL PROTECTED] wrote: 
-snip-
 True for cloning -- not true if you use the RedHat 'kickstart' method (or
 SuSE autoyast, which I haven't tried, personally).   I've helped several
 clients implement an 'automated kickstart' - which involves creating the
 necessary config files on the 191 (or other addressed) disk, punching the
 install kernel to the reader and ipling the reader.   The config file points
 to a kickstart config on an install server -- and the automated install
 takes off.  A new server is created this way rather than cloning..

Not to elongate this thread too much more, but none of that requires a 
read-write 191 disk (or any other local disk).  You can send the three files 
(kernel/parmfile/initrd) from another userid and just IPL from the reader.  If 
that other userid has some automation built into it, you can do things like 
select DASD devices and IP addresses from a predefined pool, craft a custom 
parmfile and kickstart/AutoYaST file and away you go.
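
As a minimal sketch of that driver-userid flavor (the userids and file names 
are placeholders, not from any actual setup):

/* Sketch: punch kernel/parmfile/initrd to a target guest and start it  */
Address Command
Parse Upper Arg target .
If target = '' Then target = 'LINUX01'    /* example guest name          */
'CP SPOOL PUNCH TO' target                /* route punch to its reader   */
'PUNCH KERNEL IMG A (NOHEADER'            /* 1: installer kernel         */
'PUNCH' target 'PRM A (NOHEADER'          /* 2: per-guest parmfile       */
'PUNCH INITRD IMG A (NOHEADER'            /* 3: initial ramdisk          */
'CP SPOOL PUNCH TO *'                     /* put the punch back to me    */
'CP XAUTOLOG' target                      /* its PROFILE then IPLs 00C   */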


Mark Post


Re: Linux guest 191/200 disk question

2008-10-29 Thread Alan Altmark
On Wednesday, 10/29/2008 at 08:51 EDT, RPN01 [EMAIL PROTECTED] wrote:

 Why couldn't vmsys: be localized by default, but allow the option of sharing
 it among systems, where it makes sense in the customer's environment? Don't
 be so headstrong in protecting me from myself; I may have thought of
 something you missed.

As has been said, you can rename VMSYS to anything you want.  As long as 
the name doesn't start with VMSYS it will be a global filepool (but 
subject to the setting in the DMSPARMS file).  Just be sure you do PPF 
overrides to change the filepool name.

To elaborate a bit on my previous explanation of the "why?": it is because SES
does not recognize that there could be another system trying to manipulate
or use the data in the VMSYS filepool.  For example, there's no explicit
file locking when SES is going to update more than one file.  VMSYS really
was designed to be accessed by a single system.

Again, I recognize that this is not sufficient for a clustered environment 
that has a mixture of shared and private data needed by SES.  We intend to 
fix that as we improve our clustering capabilities.

 On a similar but disparate subject: Why do we have to use tape to move SDF
 type files from one system to another? I just want to move CMS, GCS and the
 various system files from one system within CSE to another... But to do it,
 I have to have a tape drive. It's the only use I have for a tape drive now,
 and it keeps us from getting rid of otherwise unneeded hardware in a data
 center with no space or power to install new systems.

Another "why?" question.  :-)  Because it was designed during a time when
people were not so reluctant to provide their VM systems with a tape
drive?  z/OS needs a tape and no one complains.  Harrumph.  (Doesn't 
anyone share anymore?)  There are virtual tapes, whether provided by h/w 
or s/w.  The s/w version doesn't take up space on your RF!

 A question comes to mind here... I can easily build CMS and somewhat easily
 build GCS. What is, or where are, the procedures for rebuilding all the
 other SDF files? There's likely documentation for the various shared
 segments, but what about the IMG and NLS files? I haven't gone on a search
 yet, but is there somewhere that these procedures are documented?

z/VM Service Guide.  Chapter 4. 

Alan Altmark
z/VM Development
IBM Endicott


Re: Linux guest 191/200 disk question

2008-10-29 Thread Scott Rohling
Yes, that's one way to do it..  another is to use a temp disk and avoid
involvement of 'yet another' userid..   ;-)  You're right - it doesn't
require use of a r/w 191..  but it does need a r/w address somewhere along
the way...


Scott Rohling

On Wed, Oct 29, 2008 at 11:26 AM, Mark Post [EMAIL PROTECTED] wrote:

  On 10/29/2008 at  9:49 AM, Scott Rohling [EMAIL PROTECTED]
 wrote:
 -snip-
  True for cloning -- not true if you use the RedHat 'kickstart' method (or
  SuSE autoyast, which I haven't tried, personally).   I've helped several
  clients implement an 'automated kickstart' - which involves creating the
  necessary config files on the 191 (or other addressed) disk, punching the
  install kernel to the reader and ipling the reader.   The config file
 points
  to a kickstart config on an install server -- and the automated install
  takes off.  A new server is created this way rather than cloning..

 Not to elongate this thread too much more, but none of that requires a
 read-write 191 disk (or any other local disk).  You can send the three files
 (kernel/parmfile/initrd) from another userid and just IPL from the reader.
  If that other userid has some automation built into it, you can do things
 like select DASD devices and IP addresses from a predefined pool, craft a
 custom parmfile and kickstart/AutoYaST file and away you go.


 Mark Post



Linux guest 191/200 disk question

2008-10-28 Thread Mary Anne Matyaz
Hello all. We're bouncing around an idea to change the way we allocate Linux
guests. Currently, we have a mdisk that
has all of the Linux 191 disks on. We then have separate 200 disks (mod9's).
We're thinking of combining the two, such
that we have a 1 cylinder 191 mdisk, then 10015 cylinders for the 200 disks.
This would allow us to move the linuxes from
one lpar to another as needed. It would also make them more self-contained.
We're facing a dasd upgrade in the near future,
and this would make that a little easier.
Other than the fact that the 200 disk is backed up by TSM and the 191's via
MVS's FDR, can you guys shoot some holes
in this theory? Let me know if you see any other problem areas that I
haven't thought of?

Thanks!
MA


Re: Linux guest 191/200 disk question

2008-10-28 Thread Rich Smrcina

Mary Anne Matyaz wrote:
Hello all. We're bouncing around an idea to change the way we allocate 
Linux guests. Currently, we have a mdisk that
has all of the Linux 191 disks on. We then have separate 200 disks 
(mod9's). We're thinking of combining the two, such
that we have a 1 cylinder 191 mdisk, then 10015 cylinders for the 200 
disks. This would allow us to move the linuxes from
one lpar to another as needed. It would also make them more 
self-contained. We're facing a dasd upgrade in the near future,

and this would make that a little easier.
Other than the fact that the 200 disk is backed up by TSM and the 191's 
via MVS's FDR, can you guys shoot some holes
in this theory? Let me know if you see any other problem areas that I 
haven't thought of?


Thanks!
MA


If you need to make a change to all of the PROFILE EXECs then you'll need to chase down
each one to do it.  That's one reason why I like the shared 191 idea.  Other than that,
allocating a lot of small minidisks is just a pain.


--
Rich Smrcina
VM Assist, Inc.
Phone: 414-491-6001
Ans Service:  360-715-2467
rich.smrcina at vmassist.com
http://www.linkedin.com/in/richsmrcina

Catch the WAVV!  http://www.wavv.org
WAVV 2009 - Orlando, FL - May 15-19, 2009


Re: Linux guest 191/200 disk question

2008-10-28 Thread Dean, David (I/S)
Small thing: we back up all of our drives, including the 200's, through MVS
and then do the Linux minidisks through TSM.  This gives us the ability
to easily retrieve individual files, but the MVS DASD backups are the
way to go when a Linux box goes belly up.

 

David Dean

Information Systems

*bcbstauthorized*

 

 

 



From: The IBM z/VM Operating System [mailto:[EMAIL PROTECTED] On
Behalf Of Mary Anne Matyaz
Sent: Tuesday, October 28, 2008 12:13 PM
To: IBMVM@LISTSERV.UARK.EDU
Subject: Linux guest 191/200 disk question

 

Hello all. We're bouncing around an idea to change the way we allocate
Linux guests. Currently, we have a mdisk that
has all of the Linux 191 disks on. We then have separate 200 disks
(mod9's). We're thinking of combining the two, such
that we have a 1 cylinder 191 mdisk, then 10015 cylinders for the 200
disks. This would allow us to move the linuxes from
one lpar to another as needed. It would also make them more
self-contained. We're facing a dasd upgrade in the near future, 
and this would make that a little easier. 
Other than the fact that the 200 disk is backed up by TSM and the 191's
via MVS's FDR, can you guys shoot some holes
in this theory? Let me know if you see any other problem areas that I
haven't thought of? 

Thanks!
MA

Please see the following link for the BlueCross BlueShield of Tennessee E-mail 
disclaimer:  http://www.bcbst.com/email_disclaimer.shtm


Re: Linux guest 191/200 disk question

2008-10-28 Thread Mary Anne Matyaz
Well, they just have a small profile exec that executes the more detailed
one off of a shared disk. So I'm ok there.
MA

On Tue, Oct 28, 2008 at 12:25 PM, Rich Smrcina [EMAIL PROTECTED] wrote:

 Mary Anne Matyaz wrote:

 Hello all. We're bouncing around an idea to change the way we allocate
 Linux guests. Currently, we have a mdisk that
 has all of the Linux 191 disks on. We then have separate 200 disks
 (mod9's). We're thinking of combining the two, such
 that we have a 1 cylinder 191 mdisk, then 10015 cylinders for the 200
 disks. This would allow us to move the linuxes from
 one lpar to another as needed. It would also make them more
 self-contained. We're facing a dasd upgrade in the near future,
 and this would make that a little easier.
 Other than the fact that the 200 disk is backed up by TSM and the 191's
 via MVS's FDR, can you guys shoot some holes
 in this theory? Let me know if you see any other problem areas that I
 haven't thought of?

 Thanks!
 MA


 If you need to make a change to all of the PROFILE EXECs then you'll need
 to chase down each one to do it.  That's one reason why I like the shared
 191 idea.  Other than that, allocating a lot of small minidisks is just a
 pain.

 --
 Rich Smrcina
 VM Assist, Inc.
 Phone: 414-491-6001
 Ans Service:  360-715-2467
 rich.smrcina at vmassist.com
 http://www.linkedin.com/in/richsmrcina

 Catch the WAVV!  http://www.wavv.org
 WAVV 2009 - Orlando, FL - May 15-19, 2009



Re: Linux guest 191/200 disk question

2008-10-28 Thread Mary Anne Matyaz
Sorry, I see that you think I have a shared 191. I don't, I just have them
all smooshed onto
one volume, versus being on the 200 volume.
MA

On Tue, Oct 28, 2008 at 12:25 PM, Rich Smrcina [EMAIL PROTECTED] wrote:

 Mary Anne Matyaz wrote:

 Hello all. We're bouncing around an idea to change the way we allocate
 Linux guests. Currently, we have a mdisk that
 has all of the Linux 191 disks on. We then have separate 200 disks
 (mod9's). We're thinking of combining the two, such
 that we have a 1 cylinder 191 mdisk, then 10015 cylinders for the 200
 disks. This would allow us to move the linuxes from
 one lpar to another as needed. It would also make them more
 self-contained. We're facing a dasd upgrade in the near future,
 and this would make that a little easier.
 Other than the fact that the 200 disk is backed up by TSM and the 191's
 via MVS's FDR, can you guys shoot some holes
 in this theory? Let me know if you see any other problem areas that I
 haven't thought of?

 Thanks!
 MA


 If you need to make a change to all of the PROFILE EXECs then you'll need
 to chase down each one to do it.  That's one reason why I like the shared
 191 idea.  Other than that, allocating a lot of small minidisks is just a
 pain.

 --
 Rich Smrcina
 VM Assist, Inc.
 Phone: 414-491-6001
 Ans Service:  360-715-2467
 rich.smrcina at vmassist.com
 http://www.linkedin.com/in/richsmrcina

 Catch the WAVV!  http://www.wavv.org
 WAVV 2009 - Orlando, FL - May 15-19, 2009



Re: Linux guest 191/200 disk question

2008-10-28 Thread RPN01
If you're just IPLing CMS to set things up and then IPL Linux, is there
really a reason to have multiple 191 minidisks? We share a single read/only
191 minidisk among all the Linux guests, in both LPARs. They all end up
IPLing 391, and we've added a piece to the profile that looks for userid()
exec, and executes it, if found, as part of the process, allowing for the
more odd of the Linux images to still share the one 191 minidisk.

If you can do it with one, it seems a shame to have all those one cyl
minidisks hanging around everywhere. Plus, if you need to make a change to
something in the way they're brought up, you can do it in one place, instead
of having to link and fix hundreds of them.
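
The userid() exec hook is just a couple of lines in the shared profile; a
rough sketch (ESTATE simply tests whether a per-guest exec exists):

/* Fragment of the shared PROFILE EXEC: optional per-guest hook         */
'ESTATE' Userid() 'EXEC *'          /* is there a "userid EXEC" around?  */
If rc = 0 Then 'EXEC' Userid()      /* yes: run the guest-specific bits  */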

-- 
Robert P. Nix  Mayo Foundation.~.
RO-OE-5-55 200 First Street SW/V\
507-284-0844   Rochester, MN 55905   /( )\
-^^-^^
In theory, theory and practice are the same, but
 in practice, theory and practice are different.




On 10/28/08 11:13 AM, Mary Anne Matyaz [EMAIL PROTECTED] wrote:

 Hello all. We're bouncing around an idea to change the way we allocate Linux
 guests. Currently, we have a mdisk that
 has all of the Linux 191 disks on. We then have separate 200 disks (mod9's).
 We're thinking of combining the two, such
 that we have a 1 cylinder 191 mdisk, then 10015 cylinders for the 200 disks.
 This would allow us to move the linuxes from
 one lpar to another as needed. It would also make them more self-contained.
 We're facing a dasd upgrade in the near future,
 and this would make that a little easier.
 Other than the fact that the 200 disk is backed up by TSM and the 191's via
 MVS's FDR, can you guys shoot some holes
 in this theory? Let me know if you see any other problem areas that I haven't
 thought of? 
 
 Thanks!
 MA
 




Re: Linux guest 191/200 disk question

2008-10-28 Thread Mary Anne Matyaz
Well, two things. I thought you had to have a writable A disk for CMS? And
we do need
a redhat.conf file on there when we kickstart the linux, not so much
afterwards.
MA

On Tue, Oct 28, 2008 at 12:45 PM, RPN01 [EMAIL PROTECTED] wrote:

  If you're just IPLing CMS to set things up and then IPL Linux, is there
 really a reason to have multiple 191 minidisks? We share a single read/only
 191 minidisk among all the Linux guests, in both LPARs. They all end up
 IPLing 391, and we've added a piece to the profile that looks for userid()
 exec, and executes it, if found, as part of the process, allowing for the
 more odd of the Linux images to still share the one 191 minidisk.

 If you can do it with one, it seems a shame to have all those one cyl
 minidisks hanging around everywhere. Plus, if you need to make a change to
 something in the way they're brought up, you can do it in one place, instead
 of having to link and fix hundreds of them.

 --
 Robert P. Nix  Mayo Foundation.~.
 RO-OE-5-55 200 First Street SW/V\
 507-284-0844   Rochester, MN 55905  /( )\
 -^^-^^
 In theory, theory and practice are the same, but
  in practice, theory and practice are different.




 On 10/28/08 11:13 AM, Mary Anne Matyaz [EMAIL PROTECTED] wrote:

 Hello all. We're bouncing around an idea to change the way we allocate
 Linux guests. Currently, we have a mdisk that
 has all of the Linux 191 disks on. We then have separate 200 disks
 (mod9's). We're thinking of combining the two, such
 that we have a 1 cylinder 191 mdisk, then 10015 cylinders for the 200
 disks. This would allow us to move the linuxes from
 one lpar to another as needed. It would also make them more self-contained.
 We're facing a dasd upgrade in the near future,
 and this would make that a little easier.
 Other than the fact that the 200 disk is backed up by TSM and the 191's via
 MVS's FDR, can you guys shoot some holes
 in this theory? Let me know if you see any other problem areas that I
 haven't thought of?

 Thanks!
 MA





Re: Linux guest 191/200 disk question

2008-10-28 Thread Scott Rohling
No - CMS doesn't need a writable disk to IPL.  Most of the customers I've
worked with use a common disk (LNXMAINT 192, for example) that they LINK as
the guest's 191:

LINK LNXMAINT 192 191 RR  in the directory


For installs - you can either define a writable 191 manually with TDISK  --
or put something like this on LNXMAINT 192:

/* AUTOSTRT:  Auto start install */
Address Command
'CP DETACH 191'                   /* drop the shared read-only 191        */
'CP DEF T3390 191 1'              /* replace it with a 1-cylinder TDISK   */
If rc <> 0 Then Exit rc
'MAKEBUF'                         /* new stack buffer for queued replies  */
buf = rc
Queue 'YES'                       /* answer FORMAT's confirmation prompt  */
Queue 'TEMP'                      /* label for the new temporary A-disk   */
'FORMAT 191 A'
/*  Run your automatic install code now, which makes the REDHAT CONF,
    IPLs the RDR, etc */
..


Then you can XAUTOLOG newguy#AUTOSTRT to do an install.  The common
PROFILE EXEC on LNXMAINT 192 will need to recognize that AUTOSTRT has been
passed and 'not' try to IPL the 200, but just exit and allow the AUTOSTRT
EXEC to run (see the sketch below).
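
A rough sketch of such a shared PROFILE EXEC; testing Externals() for a
stacked console command is one way to spot the #AUTOSTRT case, but that
detail is an assumption on my part, so adapt it to whatever signal you
prefer:

/* PROFILE EXEC (sketch) -- shared by all Linux guests via LNXMAINT 192 */
Address Command
If Externals() > 0 Then Exit 0    /* a command (e.g. AUTOSTRT) was        */
                                  /* stacked at XAUTOLOG time; just exit  */
                                  /* and let it run instead of booting    */
'CP IPL 200'                      /* normal path: boot Linux from the 200 */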

This of course depends on having TDISK available!

But I highly recommend using a common 191 disk and common PROFILE EXEC
rather than propagating dozens or hundreds of little 191 disks all over the
place (or even on one volume).

Scott Rohling


On Tue, Oct 28, 2008 at 10:50 AM, Mary Anne Matyaz
[EMAIL PROTECTED]wrote:

 Well, two things. I thought you had to have a writable A disk for CMS? And
 we do need
 a redhat.conf file on there when we kickstart the linux, not so much
 afterwards.
 MA


 On Tue, Oct 28, 2008 at 12:45 PM, RPN01 [EMAIL PROTECTED] wrote:

  If you're just IPLing CMS to set things up and then IPL Linux, is there
 really a reason to have multiple 191 minidisks? We share a single read/only
 191 minidisk among all the Linux guests, in both LPARs. They all end up
 IPLing 391, and we've added a piece to the profile that looks for userid()
 exec, and executes it, if found, as part of the process, allowing for the
 more odd of the Linux images to still share the one 191 minidisk.

 If you can do it with one, it seems a shame to have all those one cyl
 minidisks hanging around everywhere. Plus, if you need to make a change to
 something in the way they're brought up, you can do it in one place, instead
 of having to link and fix hundreds of them.

 --
 Robert P. Nix  Mayo Foundation.~.
 RO-OE-5-55 200 First Street SW/V\
 507-284-0844   Rochester, MN 55905  /( )\
 -^^-^^
 In theory, theory and practice are the same, but
  in practice, theory and practice are different.




 On 10/28/08 11:13 AM, Mary Anne Matyaz [EMAIL PROTECTED] wrote:

 Hello all. We're bouncing around an idea to change the way we allocate
 Linux guests. Currently, we have a mdisk that
 has all of the Linux 191 disks on. We then have separate 200 disks
 (mod9's). We're thinking of combining the two, such
 that we have a 1 cylinder 191 mdisk, then 10015 cylinders for the 200
 disks. This would allow us to move the linuxes from
 one lpar to another as needed. It would also make them more
 self-contained. We're facing a dasd upgrade in the near future,
 and this would make that a little easier.
 Other than the fact that the 200 disk is backed up by TSM and the 191's
 via MVS's FDR, can you guys shoot some holes
 in this theory? Let me know if you see any other problem areas that I
 haven't thought of?

 Thanks!
 MA






Re: Linux guest 191/200 disk question

2008-10-28 Thread Tom Duerbusch
1.  As has been said, you don't need a R/W disk to IPL.  R/O is good.  An SFS
directory is even better.
2.  Once you IPL Linux, you are not in CMS anymore.  You won't be doing
anything with your A-disk anymore.  So make it easy on yourself for when you
need to make changes to the PROFILE EXEC.  Put it in an SFS directory.

Tom Duerbusch
THD Consulting

 Scott Rohling [EMAIL PROTECTED] 10/28/2008 12:16 PM 
No - CMS doesn't need a writable disk to IPL..Most of the customers I've
worked with use a common disk (LNXMAINT 192, for example) that they LINK as
the guests 191:

LINK LNXMAINT 192 191 RR  in the directory


For installs - you can either define a writable 191 manually with TDISK  --
or put something like this on LNXMAINT 192:

/* AUTOSTRT:  Auto start install */
Address Command
'CP DETACH 191'                   /* drop the shared read-only 191        */
'CP DEF T3390 191 1'              /* replace it with a 1-cylinder TDISK   */
If rc <> 0 Then Exit rc
'MAKEBUF'                         /* new stack buffer for queued replies  */
buf = rc
Queue 'YES'                       /* answer FORMAT's confirmation prompt  */
Queue 'TEMP'                      /* label for the new temporary A-disk   */
'FORMAT 191 A'
/*  Run your automatic install code now, which makes the REDHAT CONF,
    IPLs the RDR, etc */
..


Then you can XAUTOLOG newguy#AUTOSTRT to do an install.  The common
PROFILE EXEC on LNXMAINT 192 will need to recognize that AUTOSTRT has been
passed and 'not' try to IPL the 200, but just exit and allow the AUTOSTRT
EXEC to run.

This of course depends on having TDISK available!

But I highly recommend using a common 191 disk and common PROFILE EXEC
rather than propagating dozens or hundreds of little 191 disks all over the
place (or even on one volume).

Scott Rohling


On Tue, Oct 28, 2008 at 10:50 AM, Mary Anne Matyaz
[EMAIL PROTECTED]wrote:

 Well, two things. I thought you had to have a writable A disk for CMS? And
 we do need
 a redhat.conf file on there when we kickstart the linux, not so much
 afterwards.
 MA


 On Tue, Oct 28, 2008 at 12:45 PM, RPN01 [EMAIL PROTECTED] wrote:

  If you're just IPLing CMS to set things up and then IPL Linux, is there
 really a reason to have multiple 191 minidisks? We share a single read/only
 191 minidisk among all the Linux guests, in both LPARs. They all end up
 IPLing 391, and we've added a piece to the profile that looks for userid()
 exec, and executes it, if found, as part of the process, allowing for the
 more odd of the Linux images to still share the one 191 minidisk.

 If you can do it with one, it seems a shame to have all those one cyl
 minidisks hanging around everywhere. Plus, if you need to make a change to
 something in the way they're brought up, you can do it in one place, instead
 of having to link and fix hundreds of them.

 --
 Robert P. Nix  Mayo Foundation.~.
 RO-OE-5-55 200 First Street SW/V\
 507-284-0844   Rochester, MN 55905  /( )\
 -^^-^^
 In theory, theory and practice are the same, but
  in practice, theory and practice are different.




 On 10/28/08 11:13 AM, Mary Anne Matyaz [EMAIL PROTECTED] wrote:

 Hello all. We're bouncing around an idea to change the way we allocate
 Linux guests. Currently, we have a mdisk that
 has all of the Linux 191 disks on. We then have separate 200 disks
 (mod9's). We're thinking of combining the two, such
 that we have a 1 cylinder 191 mdisk, then 10015 cylinders for the 200
 disks. This would allow us to move the linuxes from
 one lpar to another as needed. It would also make them more
 self-contained. We're facing a dasd upgrade in the near future,
 and this would make that a little easier.
 Other than the fact that the 200 disk is backed up by TSM and the 191's
 via MVS's FDR, can you guys shoot some holes
 in this theory? Let me know if you see any other problem areas that I
 haven't thought of?

 Thanks!
 MA






Re: Linux guest 191/200 disk question

2008-10-28 Thread Adam Thornton

On Oct 28, 2008, at 12:32 PM, Tom Duerbusch wrote:

1.  As has been said, you don't need a R/W disk to IPL.  R/O is  
good.  SFS directory is even better.
2.  Once you IPL Linux, you are not in CMS anymore.  You won't be  
doing anything with your a-disk anymore.  So make it easy on your  
self, when you need to make changes to the profile exec.  Put it in  
a SFS directory.


And then export SFS via NFS?  Linux doesn't speak SFS either.  With  
minidisks you can use cmsfs to read what's on them.


A port of IPGATE to Linux would be sort of cool, but way more effort  
than just export SFS via NFS.


Adam


Re: Linux guest 191/200 disk question

2008-10-28 Thread Scott Rohling
I think the point is that once Linux boots - an A disk isn't relevant .. not
that Linux needs to read anything on the 191.

Scott Rohling

On Tue, Oct 28, 2008 at 11:48 AM, Adam Thornton [EMAIL PROTECTED]wrote:

 On Oct 28, 2008, at 12:32 PM, Tom Duerbusch wrote:

  1.  As has been said, you don't need a R/W disk to IPL.  R/O is good.  SFS
 directory is even better.
 2.  Once you IPL Linux, you are not in CMS anymore.  You won't be doing
 anything with your a-disk anymore.  So make it easy on your self, when you
 need to make changes to the profile exec.  Put it in a SFS directory.


 And then export SFS via NFS?  Linux doesn't speak SFS either.  With
 minidisks you can use cmsfs to read what's on them.

 A port of IPGATE to Linux would be sort of cool, but way more effort than
 just export SFS via NFS.

 Adam



Re: Linux guest 191/200 disk question

2008-10-28 Thread Tom Duerbusch
I must have missed the first part of the conversation

Why would you want Linux to have access to your A-disk?
There might be reasons, but inquiring minds want to know, and I deleted the
original posts <g>.

If it is an occasional access, then the Linux guest can just FTP to/from the 
SFS system.

Tom Duerbusch
THD Consulting

 Adam Thornton [EMAIL PROTECTED] 10/28/2008 12:48 PM 
On Oct 28, 2008, at 12:32 PM, Tom Duerbusch wrote:

 1.  As has been said, you don't need a R/W disk to IPL.  R/O is  
 good.  SFS directory is even better.
 2.  Once you IPL Linux, you are not in CMS anymore.  You won't be  
 doing anything with your a-disk anymore.  So make it easy on your  
 self, when you need to make changes to the profile exec.  Put it in  
 a SFS directory.

And then export SFS via NFS?  Linux doesn't speak SFS either.  With  
minidisks you can use cmsfs to read what's on them.

A port of IPGATE to Linux would be sort of cool, but way more effort  
than just export SFS via NFS.

Adam


Re: Linux guest 191/200 disk question

2008-10-28 Thread RPN01
CMS doesn't need a writable 191, as others have already said. Also, Linux
doesn't use the 191 at all, so the only moment that the 191 needs to be
stable is when the guest(s) login. This means that you can likely grab it
r/w to add things like kickstart files without affecting any of the guests.

-- 
Robert P. Nix  Mayo Foundation.~.
RO-OE-5-55 200 First Street SW/V\
507-284-0844   Rochester, MN 55905   /( )\
-^^-^^
In theory, theory and practice are the same, but
 in practice, theory and practice are different.




On 10/28/08 11:50 AM, Mary Anne Matyaz [EMAIL PROTECTED] wrote:

 Well, two things. I thought you had to have a writable A disk for CMS? And we
 do need
 a redhat.conf file on there when we kickstart the linux, not so much
 afterwards. 
 MA
 
 On Tue, Oct 28, 2008 at 12:45 PM, RPN01 [EMAIL PROTECTED] wrote:
 If you're just IPLing CMS to set things up and then IPL Linux, is there
 really a reason to have multiple 191 minidisks? We share a single read/only
 191 minidisk among all the Linux guests, in both LPARs. They all end up
 IPLing 391, and we've added a piece to the profile that looks for userid()
 exec, and executes it, if found, as part of the process, allowing for the
 more odd of the Linux images to still share the one 191 minidisk.
 
 If you can do it with one, it seems a shame to have all those one cyl
 minidisks hanging around everywhere. Plus, if you need to make a change to
 something in the way they're brought up, you can do it in one place, instead
 of having to link and fix hundreds of them.




Re: Linux guest 191/200 disk question

2008-10-28 Thread Scott Rohling
Just curious why you think SFS is better than a 1 cylinder shared minidisk?
To me - it's a point of failure as an SFS pool server must be running just
to get to the PROFILE EXEC...

Scott Rohling

On Tue, Oct 28, 2008 at 11:32 AM, Tom Duerbusch
[EMAIL PROTECTED]wrote:

 1.  As has been said, you don't need a R/W disk to IPL.  R/O is good.  SFS
 directory is even better.
 2.  Once you IPL Linux, you are not in CMS anymore.  You won't be doing
 anything with your a-disk anymore.  So make it easy on your self, when you
 need to make changes to the profile exec.  Put it in a SFS directory.

 Tom Duerbusch
 THD Consulting





Re: Linux guest 191/200 disk question

2008-10-28 Thread Tom Duerbusch
True about another point of failure.

However, how many times a year is your SFS server(s) down?  
I find an occasional crash (usually due to me) about once every year or two.
It's really a pain, as my CMS-type servers don't auto-reconnect.  So I have to
manually force off the servers and let them be brought up by AUDITOR (easiest
way to do this).

But, for a guest, such as Linux, when you (x)autolog them, they connect to SFS, 
access the PROFILE EXEC and disconnect (via IPL) in a matter of a second or two.

However, your point is good, especially in a near-24x7 Linux shop.  A shared
191 minidisk is better, until you have two users access the shared disk in
R/W mode to update it.  No protection.  SFS will always protect you.  Manual
procedures can minimize the R/W problem, but can't eliminate it.  Just like
SFS problems can be minimized but not eliminated.

But thinking of this...
There is one SFS combination of problems which would be a major concern:
backing up SFS via the VM-supplied utilities and having the backup (or VM)
crash.  SFS will come up, but that storage pool is locked.  It is easy to
unlock it, when you know to do that.
During this time, if a guest tries to access an SFS directory that is on an
SFS pool that is locked (which would be a much more frequent occurrence if
there was a VM crash), it could lead to a lot of heartburn.

A 191 minidisk can be much better.  And of course, not to IPL CMS, but to IPL
190, just in case the CMS saved segment is lost <g>.

Tom Duerbusch
THD Consulting

 Scott Rohling [EMAIL PROTECTED] 10/28/2008 1:56 PM 
Just curious why you think SFS is better than a 1 cylinder shared minidisk?
To me - it's a point of failure as an SFS pool server must be running just
to get to the PROFILE EXEC...

Scott Rohling

On Tue, Oct 28, 2008 at 11:32 AM, Tom Duerbusch
[EMAIL PROTECTED]wrote:

 1.  As has been said, you don't need a R/W disk to IPL.  R/O is good.  SFS
 directory is even better.
 2.  Once you IPL Linux, you are not in CMS anymore.  You won't be doing
 anything with your a-disk anymore.  So make it easy on your self, when you
 need to make changes to the profile exec.  Put it in a SFS directory.

 Tom Duerbusch
 THD Consulting





Re: Linux guest 191/200 disk question

2008-10-28 Thread RPN01
One problem w/ SFS is that we don't run it on our second LPAR at all.
Anything that we want to be able to run on both systems has to reside on a
minidisk. SFS isn't a choice.

If IBM would allow the vmsys: pool to be shared between systems, we'd be
more likely to use it.

-- 
Robert P. Nix  Mayo Foundation.~.
RO-OE-5-55 200 First Street SW/V\
507-284-0844   Rochester, MN 55905   /( )\
-^^-^^
In theory, theory and practice are the same, but
 in practice, theory and practice are different.




On 10/28/08 2:13 PM, Tom Duerbusch [EMAIL PROTECTED] wrote:

 True about another point of failure.
 
 However, how many times a year is your SFS server(s) down?
 I find an occasional crash (usually due to me) about once every year or two.
 It's really a pain, as my CMS type servers, don't auto reconnect.  So I have
 to manually force off the servers and let the be brought up by AUDITOR.
 (easiest way to do this)
 
 But, for a guest, such as Linux, when you (x)autolog them, they connect to
 SFS, access the PROFILE EXEC and disconnect (via IPL) in a matter of a second
 or two.
 
 However, your point, is good, especially in a near 24X7 Linux shop.  A shared
 191 minidisk is better.  Until you have two users, access the shared disk in
 R/W mode, to update it.  No protection.  SFS will always protect you.  Manual
 procedures can minimized the R/W problem, but can't eliminate it.  Just like
 SFS problems can be minimized but not eliminated.
 
 But thinking of this...
 There is one SFS combination of problems, which would be a major concern.
 Backing up SFS via the VM supplied utilities and the backup (or VM) crashes.
 SFS will come up, but that storage pool is locked.  It is easy to unlock it,
 when you know to do that.
 During this time, if a guest tries to access their SFS directory that is on a
 SFS pool that is locked (would be a much more frequent occurrence if there was
 a VM crash), it could lead to a lot of heart burn.
 
 A 191 minidisk can be much better.  And of course, not to IPL CMS, but to IPL
 190, just in case the CMS saved segment is lost G.
 
 Tom Duerbusch
 THD Consulting
 
 Scott Rohling [EMAIL PROTECTED] 10/28/2008 1:56 PM 
 Just curious why you think SFS is better than a 1 cylinder shared minidisk?
 To me - it's a point of failure as an SFS pool server must be running just
 to get to the PROFILE EXEC...
 
 Scott Rohling
 
 On Tue, Oct 28, 2008 at 11:32 AM, Tom Duerbusch
 [EMAIL PROTECTED]wrote:
 
 1.  As has been said, you don't need a R/W disk to IPL.  R/O is good.  SFS
 directory is even better.
 2.  Once you IPL Linux, you are not in CMS anymore.  You won't be doing
 anything with your a-disk anymore.  So make it easy on your self, when you
 need to make changes to the profile exec.  Put it in a SFS directory.
 
 Tom Duerbusch
 THD Consulting
 
 
 


Re: Linux guest 191/200 disk question

2008-10-28 Thread Alan Altmark
On Tuesday, 10/28/2008 at 03:28 EDT, RPN01 [EMAIL PROTECTED] wrote:
 If IBM would allow the vmsys: pool to be shared between systems, we'd be
 more likely to use it.

Say more.  The VMSYS filepool was intended to contain information that is 
used ONLY for THIS system (inventory, service, etc.).  When you establish 
a collection with ISFC, each system's VMSYS filepool remains active and 
private to each system.

Information that you intend to share requires you to set up your own 
filepool and then connect the systems with ISFC (or use IPGATE).

I do recognize that in a clustered environment like CSE it would be good 
to have a VMSYS-like filepool that handles SESesque system data for all 
members of the cluster and is shared.

Alan Altmark
z/VM Development
IBM Endicott


Re: Linux guest 191/200 disk question

2008-10-28 Thread Scott Rohling
Well - technically true if MW is used on the LINK instead of MR -- that's
such a big no-no in general that I guess I assume people won't do it -- but
good point.

Scott Rohling


   Until you have two users, access the shared disk in
  R/W mode, to update it.  No protection.  SFS will always protect you.





Re: Linux guest 191/200 disk question

2008-10-28 Thread O'Brien, Dennis L
Robert, 
You don't have to use the VMSYS filepool.  You can create a new filepool
that doesn't start with VMSYS and share it between systems.  The only
drawback is that if the system that hosts the filepool server isn't up,
the filepool isn't accessible to the other system.

We have filepool servers on every system.  They have unique names that
don't start with VMSYS.  If we had production Linux on multiple
systems, we'd use SFS A-disks in a filepool that's on the same system as
the Linux guests.  Because the pools are sharable, if we had to make a
change to PROFILE EXEC, we could do that for all systems from one place.
For our z/OS guests, we have one PROFILE EXEC on each system that has an
alias for each guest.  If I were setting up Linux guests, I'd do them
the same way.
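
A minimal sketch of that one-PROFILE-plus-alias layout, with a hypothetical
filepool POOL1, an LNXADMIN directory holding the base file, and LINUX01 as
the guest (check the CREATE ALIAS and GRANT AUTHORITY operands against your
CMS level):

Issued by LNXADMIN, the owner of the base file:
   GRANT AUTHORITY PROFILE EXEC POOL1:LNXADMIN. TO LINUX01 (READ
Issued by LINUX01 (or an SFS administrator on its behalf):
   CREATE ALIAS PROFILE EXEC POOL1:LNXADMIN. PROFILE EXEC POOL1:LINUX01.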

   Dennis 

We are Borg of America.  You will be assimilated.  Resistance is futile.

-Original Message-
From: The IBM z/VM Operating System [mailto:[EMAIL PROTECTED] On
Behalf Of RPN01
Sent: Tuesday, October 28, 2008 12:28
To: IBMVM@LISTSERV.UARK.EDU
Subject: Re: [IBMVM] Linux guest 191/200 disk question

One problem w/ SFS is that we don't run it on our second LPAR at all.
Anything that we want to be able to run on both systems has to reside on
a
minidisk. SFS isn't a choice.

If IBM would allow the vmsys: pool to be shared between systems, we'd be
more likely to use it.

-- 
Robert P. Nix  Mayo Foundation.~.
RO-OE-5-55 200 First Street SW/V\
507-284-0844   Rochester, MN 55905   /( )\
-^^-^^
In theory, theory and practice are the same, but
 in practice, theory and practice are different.




On 10/28/08 2:13 PM, Tom Duerbusch [EMAIL PROTECTED] wrote:

 True about another point of failure.
 
 However, how many times a year is your SFS server(s) down?
 I find an occasional crash (usually due to me) about once every year
or two.
 It's really a pain, as my CMS type servers, don't auto reconnect.  So
I have
 to manually force off the servers and let the be brought up by
AUDITOR.
 (easiest way to do this)
 
 But, for a guest, such as Linux, when you (x)autolog them, they
connect to
 SFS, access the PROFILE EXEC and disconnect (via IPL) in a matter of a
second
 or two.
 
 However, your point, is good, especially in a near 24X7 Linux shop.  A
shared
 191 minidisk is better.  Until you have two users, access the shared
disk in
 R/W mode, to update it.  No protection.  SFS will always protect you.
Manual
 procedures can minimized the R/W problem, but can't eliminate it.
Just like
 SFS problems can be minimized but not eliminated.
 
 But thinking of this...
 There is one SFS combination of problems, which would be a major
concern.
 Backing up SFS via the VM supplied utilities and the backup (or VM)
crashes.
 SFS will come up, but that storage pool is locked.  It is easy to
unlock it,
 when you know to do that.
 During this time, if a guest tries to access their SFS directory that
is on a
 SFS pool that is locked (would be a much more frequent occurrence if
there was
 a VM crash), it could lead to a lot of heart burn.
 
 A 191 minidisk can be much better.  And of course, not to IPL CMS, but
to IPL
 190, just in case the CMS saved segment is lost G.
 
 Tom Duerbusch
 THD Consulting
 
 Scott Rohling [EMAIL PROTECTED] 10/28/2008 1:56 PM 
 Just curious why you think SFS is better than a 1 cylinder shared
minidisk?
 To me - it's a point of failure as an SFS pool server must be running
just
 to get to the PROFILE EXEC...
 
 Scott Rohling
 
 On Tue, Oct 28, 2008 at 11:32 AM, Tom Duerbusch
 [EMAIL PROTECTED]wrote:
 
 1.  As has been said, you don't need a R/W disk to IPL.  R/O is good.
SFS
 directory is even better.
 2.  Once you IPL Linux, you are not in CMS anymore.  You won't be
doing
 anything with your a-disk anymore.  So make it easy on your self,
when you
 need to make changes to the profile exec.  Put it in a SFS directory.
 
 Tom Duerbusch
 THD Consulting
 
 
 


Re: Linux guest 191/200 disk question

2008-10-28 Thread Adam Thornton

On Oct 28, 2008, at 1:36 PM, Tom Duerbusch wrote:


I must have missed the first part of the conversation

Why would you want Linux to have access to your A-disk?
There might be reasons, but inquiring minds want to know, and I
deleted the original posts <g>.


Handy for building systems where you can change Linux behavior without  
the user knowing much of anything about Linux, by editing files in CMS.


Adam