> On a similar but disparate subject: Why do we have to use tape to move SDF
> type files from one system to another? I just want to move CMS, GCS and the
> various system files from one system within CSE to another... But to do it,
> I have to have a tape drive. It's the only use I have for a
Yes, that's one way to do it.. another is to use a temp disk and avoid
involvement of 'yet another' userid.. ;-) You're right - it doesn't
require use of a r/w 191.. but it does need a r/w address somewhere along the way...
Scott Rohling
On Wed, Oct 29, 2008 at 11:26 AM, Mark Post <[EMAIL PROTECTED]> wrote:
On Wednesday, 10/29/2008 at 08:51 EDT, RPN01 <[EMAIL PROTECTED]> wrote:
> Why couldn't vmsys: be localized by default, but allow the option of
sharing
> it among systems, where it makes sense in the customer's environment?
Don't
> be so headstrong in protecting me from myself; I may have thought
>>> On 10/29/2008 at 9:49 AM, Scott Rohling <[EMAIL PROTECTED]> wrote:
-snip-
> True for cloning -- not true if you use the RedHat 'kickstart' method (or
> SuSE autoyast, which I haven't tried, personally). I've helped several
> clients implement an 'automated kickstart' - which involves creati
The IBM z/VM Operating System wrote on
10/29/2008 05:51:10 AM:
> On a similar but disparate subject: Why do we have to use tape to move SDF
> type files from one system to another? I just want to move CMS, GCS and the
> various system files from one system within CSE to another... But to do i
For the VMSYS issue: you can also have that same filepool
available both as VMSYS and as some other name.
1. Change the real filepool id as explained in my previous note. Any
name not starting with VMSYS makes it a candidate for access from
anywhere within the CSE. eg: MYSFS
2. Add REMOTE i
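Assuming step 2 refers to the SFS server's REMOTE startup parameter (the admin userid and the other parameters shown are illustrative, not from the note above), the server's DMSPARMS fragment might look like:

```
ADMIN MAINT
NOBACKUP
REMOTE
```

With REMOTE in DMSPARMS and the filepool renamed (e.g. MYSFS), users on the other CSE systems can reach it as MYSFS: once the communications directory entries point at the serving system.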
| The point was that the actual Linux guests certainly never need write
| access to their own 191 minidisk
True for cloning -- not true if you use the RedHat 'kickstart' method (or
SuSE autoyast, which I haven't tried, personally). I've helped several
clients implement an 'automated kickstart' -
The only thing I would really use SFS for would be the product disks (CP,
CMS, GCS, etc), and trying to move those to another pool would mean having
to edit many of the control files that come with the install and maintenance
that contain the VMSYS: filepool name. Too big a headache to make it
wort
I generally use M, since if I can't get write access, I don't really need it
at all at the moment.
The whole issue isn't that great here, as we have only four actual users
that would ever attempt to get write access to the Linux guest 191 shared
disk, and two of us sit within shouting distance (mu
But because I share my res volume among the CSE'd systems, I can't install
any of the products in SFS, because I may need to build one or more of the
products on each of the various systems. So everything gets put in
minidisks, and the vmsys: filepool remains fairly empty.
If I could share vmsys:
I'd not use SFS for the Linux guests' A-disk. The benefits SFS surely has
for CMS users are not enough, for Linux guests, to outweigh the risk of
an SFS server that is down.
But, if you insist: renaming the VMSYS filepool to something else is a
task done in 30 seconds (I did that often in my previous life):
-
On Oct 28, 2008, at 1:36 PM, Tom Duerbusch wrote:
I must have missed the first part of the conversation.
Why would you want Linux to have access to your A-disk?
There might be reasons, but inquiring minds want to know, and I
deleted the original posts.
Handy for building systems where you c
Robert,
You don't have to use the VMSYS filepool. You can create a new filepool
that doesn't start with "VMSYS" and share it between systems. The only
drawback is that if the system that hosts the filepool server isn't up,
the filepool isn't accessible to the other system.
We have filepool serv
Well - technically true if MW is used on the LINK instead of MR -- that's
such a big no-no in general that I assumed people wouldn't do it -- but good
point.
Scott Rohling
> > Until you have two users, access the shared disk in
> > R/W mode, to update it. No protection. SFS will always protect
On Tuesday, 10/28/2008 at 03:28 EDT, RPN01 <[EMAIL PROTECTED]> wrote:
> If IBM would allow the vmsys: pool to be shared between systems, we'd be
> more likely to use it.
Say more. The VMSYS filepool was intended to contain information that is
used ONLY for THIS system (inventory, service, etc.).
One problem w/ SFS is that we don't run it on our second LPAR at all.
Anything that we want to be able to run on both systems has to reside on a
minidisk. SFS isn't a choice.
If IBM would allow the vmsys: pool to be shared between systems, we'd be
more likely to use it.
--
Robert P. Nix
True about another point of failure.
However, how many times a year are your SFS servers down?
I find an occasional crash (usually due to me) about once every year or two.
It's really a pain, as my CMS-type servers don't auto-reconnect. So I have to
manually force off the servers and let the
Just curious why you think SFS is better than a 1 cylinder shared minidisk?
To me - it's a point of failure as an SFS pool server must be running just
to get to the PROFILE EXEC...
Scott Rohling
On Tue, Oct 28, 2008 at 11:32 AM, Tom Duerbusch
<[EMAIL PROTECTED]>wrote:
> 1. As has been said, you
CMS doesn't need a writable 191, as others have already said. Also, Linux
doesn't use the 191 at all, so the only moment that the 191 needs to be
stable is when the guest(s) log in. This means that you can likely grab it
r/w to add things like kickstart files without affecting any of the guests.
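As a sketch of that "grab it r/w" step (the userid, device addresses, and file names here are assumptions, not from this note), from a maintenance userid one might do, in REXX:

```rexx
/* Hypothetical fragment: write a kickstart parm file onto the  */
/* shared guest 191 while the guests keep their R/O links up.   */
'CP LINK LNXMAINT 192 1192 MR'           /* grab a writable link */
If rc = 0 Then Do
  'ACCESS 1192 Z'
  'COPYFILE RHEL5 PARM A = = Z (REPLACE' /* drop the file in     */
  'RELEASE Z (DET'                       /* and let it go again  */
End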
--
I must have missed the first part of the conversation.
Why would you want Linux to have access to your A-disk?
There might be reasons, but inquiring minds want to know, and I deleted the
original posts.
If it is an occasional access, then the Linux guest can just FTP to/from the
SFS system.
I think the point is that once Linux boots - an A disk isn't relevant .. not
that Linux needs to read anything on the 191.
Scott Rohling
On Tue, Oct 28, 2008 at 11:48 AM, Adam Thornton <[EMAIL PROTECTED]>wrote:
> On Oct 28, 2008, at 12:32 PM, Tom Duerbusch wrote:
>
> 1. As has been said, you d
On Oct 28, 2008, at 12:32 PM, Tom Duerbusch wrote:
1. As has been said, you don't need a R/W disk to IPL. R/O is
good. SFS directory is even better.
2. Once you IPL Linux, you are not in CMS anymore. You won't be
doing anything with your a-disk anymore. So make it easy on your
self, w
1. As has been said, you don't need a R/W disk to IPL. R/O is good. SFS
directory is even better.
2. Once you IPL Linux, you are not in CMS anymore. You won't be doing
anything with your a-disk anymore. So make it easy on your self, when you need
to make changes to the profile exec. Put i
No - CMS doesn't need a writable disk to IPL.. Most of the customers I've
worked with use a common disk (LNXMAINT 192, for example) that they LINK as
the guest's 191:
LINK LNXMAINT 192 191 RR   (in the directory)
For installs - you can either define a writable 191 manually with TDISK --
or
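A sketch of how those two pieces fit together (the guest name, password placeholder, volume label, and sizes below are illustrative): the directory entry gives each guest the shared disk as its 191, and for an install you substitute a temporary writable 191.

```
* Hypothetical entry in USER DIRECT:
USER LNX01 XXXXXXXX 512M 1G G
 IPL CMS PARM AUTOCR
 LINK LNXMAINT 192 191 RR
 MDISK 201 3390 0001 3338 LXVOL1 MR
```

For an install needing a writable 191, from the guest console something like: CP DETACH 191, then CP DEFINE T3390 AS 191 CYL 5, then FORMAT 191 A (assuming TDISK space is configured on the system).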
Well, two things. I thought you had to have a writable A-disk for CMS? And
we do need a redhat.conf file on there when we kickstart the Linux guest,
not so much afterwards.
MA
On Tue, Oct 28, 2008 at 12:45 PM, RPN01 <[EMAIL PROTECTED]> wrote:
> If you're just IPLing CMS to set things up and then IPL L
If you're just IPLing CMS to set things up and then IPL Linux, is there
really a reason to have multiple 191 minidisks? We share a single read/only
191 minidisk among all the Linux guests, in both LPARs. They all end up
IPLing 391, and we've added a piece to the profile that looks for userid()
exec
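A minimal sketch of such a shared PROFILE EXEC, assuming the per-guest piece is a file named <userid> EXEC on the same shared disk and that Linux lives at virtual 391 (all names here are assumptions):

```rexx
/* PROFILE EXEC on the shared read-only 191 -- minimal sketch */
guest = Userid()
'STATE' guest 'EXEC *'            /* per-guest customization?  */
If rc = 0 Then 'EXEC' guest       /* run it if it exists       */
'CP IPL 391'                      /* then boot Linux           */
```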
Sorry, I see that you think I have a shared 191. I don't, I just have them
all smooshed onto
one volume, versus being on the 200 volume.
MA
On Tue, Oct 28, 2008 at 12:25 PM, Rich Smrcina <[EMAIL PROTECTED]> wrote:
> Mary Anne Matyaz wrote:
>
>> Hello all. We're bouncing around an idea to change t
Well, they just have a small profile exec that executes the more detailed
one off of a shared disk. So I'm ok there.
MA
On Tue, Oct 28, 2008 at 12:25 PM, Rich Smrcina <[EMAIL PROTECTED]> wrote:
> Mary Anne Matyaz wrote:
>
>> Hello all. We're bouncing around an idea to change the way we allocate
>
Small thing: we back up all of our drives, including the 200s, through MVS,
and then do the Linux minidisks through TSM. This lets us easily retrieve
individual files, but the MVS DASD backups are the way to go when a Linux
box goes belly up.
David Dean
Information Systems
*bcbs
Mary Anne Matyaz wrote:
Hello all. We're bouncing around an idea to change the way we allocate
Linux guests. Currently, we have a mdisk that
has all of the Linux 191 disks on. We then have separate 200 disks
(mod9's). We're thinking of combining the two, such
that we have a 1 cylinder 191 mdisk,