Re: multiple z/OS sharing considerations.

2007-10-23 Thread Farley, Peter x23353
> -Original Message-
> From: Eric Bielefeld [mailto:[EMAIL PROTECTED]
> Sent: Tuesday, October 23, 2007 3:59 PM
> To: IBM-MAIN@BAMA.UA.EDU
> Subject: Re: multiple z/OS sharing considerations.
> 
> Thanks.  I read part of the chapter.  That brings up one question - how
> much extra overhead is there for the CICS during normal operation?  Also,
> I gather that any transactions being processed within a CICS are lost if
> an LPAR fails, and that batch jobs, if restarted on another system, start
> over from the beginning again.
> 
> - Original Message -
> From: "Farley, Peter x23353" <[EMAIL PROTECTED]>
> > Check out ARM (Automatic Restart Management) in the manual:
> > z/OS V1R8.0 MVS Sysplex Services Guide
> > Chapter 3.  Using the Automatic Restart Management Function of XCF
> > URL to the contents page, but watch the line wrap:
> > http://publibz.boulder.ibm.com/cgi-
> bin/bookmgr_OS390/BOOKS/iea2i660/CONTENTS
> > ?SHELF=EZ2ZO10I.bks&DT=20060620144434#CONTENTS
> > Peter

No idea about the overhead.  I don't think anyone here ever measured it, or
if they did they didn't tell us about it.  I don't think there is much, but
we have so much running that I might not see it even if it were significant.

I believe our SLAs to our clients were not affected, if that's any help.

For your other questions, I believe the answers are Yes and Yes.  "Batch"
does need to be aware of the possibility of restarting (I assume we're
talking "long running production/QA batch" here, not compiles or unit
tests).  We have several varieties of those, which fortunately already had
some "restart" awareness built in previously.  Not much change was needed
to handle ARM restarts for those.  Obviously YMMV.
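For illustration only (job, program and dataset names here are invented):
one JCL-level piece of "restart awareness" is the RD (restart definition)
parameter, which permits a job to be restarted at the failing step rather
than from the top -- provided the application can tolerate re-execution:

  //NIGHTLY  JOB (ACCT),'PROD BATCH',CLASS=A,RD=R
  //* RD=R permits automatic/operator step restart after a failure.
  //* The step itself must still be idempotent or checkpoint-aware,
  //* or restarted work will be done twice.
  //STEP010  EXEC PGM=POSTTRAN
  //TRANIN   DD DSN=PROD.TRANS.DAILY,DISP=SHR
  //MASTER   DD DSN=PROD.MASTER.FILE,DISP=OLD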

Peter



Re: multiple z/OS sharing considerations.

2007-10-23 Thread Eric Bielefeld
Thanks.  I read part of the chapter.  That brings up one question - how much 
extra overhead is there for the CICS during normal operation?  Also, I 
gather that any transactions being processed within a CICS are lost if an 
LPAR fails, and that batch jobs, if restarted on another system, start over
from the beginning again.


Eric Bielefeld
Sr. z/OS Systems Programmer
Milwaukee, Wisconsin
414-475-7434

- Original Message - 
From: "Farley, Peter x23353" <[EMAIL PROTECTED]>

Check out ARM (Automatic Restart Management) in the manual:
z/OS V1R8.0 MVS Sysplex Services Guide
Chapter 3.  Using the Automatic Restart Management Function of XCF
URL to the contents page, but watch the line wrap:
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/iea2i660/CONTENTS
?SHELF=EZ2ZO10I.bks&DT=20060620144434#CONTENTS
> Peter




Re: multiple z/OS sharing considerations.

2007-10-23 Thread Farley, Peter x23353
> -Original Message-
> From: Farley, Peter x23353
> Sent: Tuesday, October 23, 2007 1:38 PM
> To: IBM-MAIN@BAMA.UA.EDU
> Subject: Re: multiple z/OS sharing considerations.
 
> Check out ARM (Automatic Restart Management) in the manual:
> 
> z/OS V1R8.0 MVS Sysplex Services Guide
> 
> Chapter 3.  Using the Automatic Restart Management Function of XCF
> 
> URL to the contents page, but watch the line wrap:
> 
> http://publibz.boulder.ibm.com/cgi-
> bin/bookmgr_OS390/BOOKS/iea2i660/CONTENTS
> ?SHELF=EZ2ZO10I.bks&DT=20060620144434#CONTENTS

I forgot to also say that we have used it quite successfully to move
multiple CICSs and other complex "online" applications from a failed LPAR
to a "designated backup" LPAR.  It was kind of fun to see them come back to
life almost magically when the sysprogs deliberately stopped the original
LPAR on the test weekend.  A recovery ballet, as it were.

One "gotcha" that we found: the JES spool output which is outstanding at the
failure point is NOT released from the spool.  It is retained as part of the
"recovered" job on the backup LPAR (using the same MAS in our case).  Only
when the recovered job ends is all of its spool output released.
FREE=CLOSE does not help here, because the recovery mechanism never
officially "closes" anything.

Another potential "gotcha": If there are dependencies between CICS's or
between CICS's and other "online" work also being recovered, make sure that
the order of recovery is carefully controlled.  In our case, there were EXCI
dependencies which required certain CICS's to be recovered first, and then
the other work that needed those CICS's for EXCI.  Obviously MQ managers
also fit into the "do me first" category, as do DB2 functions.
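A concrete (and entirely invented) sketch of what that ordering control
looks like in an ARM policy, defined with the IXCMIAPU administrative data
utility.  Element and system names are made up, and this is only a fragment
of the syntax -- see the Sysplex Services Guide chapter cited earlier for
the real keyword reference:

  //DEFARM   JOB (ACCT),'ARM POLICY',CLASS=A
  //ARMPOL   EXEC PGM=IXCMIAPU
  //SYSPRINT DD SYSOUT=*
  //SYSIN    DD *
    DATA TYPE(ARM)
    DEFINE POLICY NAME(ARMPOL01) REPLACE(YES)
      /* Lower levels restart first: owning CICS regions (and  */
      /* MQ/DB2) come up before the EXCI users that need them. */
      RESTART_ORDER
        LEVEL(1) ELEMENT_NAME(SYSCICS_CICSOWN1)
        LEVEL(2) ELEMENT_NAME(SYSCICS_CICSUSR1)
      RESTART_GROUP(PRODCICS)
        TARGET_SYSTEM(SYSB)
        ELEMENT(SYSCICS_CICSOWN1)
          RESTART_METHOD(BOTH,PERSIST)
        ELEMENT(SYSCICS_CICSUSR1)
          RESTART_METHOD(BOTH,PERSIST)
  /*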

As I said, it's a lovely ballet to watch when it happens.

HTH

Peter



Re: multiple z/OS sharing considerations.

2007-10-23 Thread Farley, Peter x23353
> -Original Message-
> From: Eric Bielefeld [mailto:[EMAIL PROTECTED]
> Sent: Tuesday, October 23, 2007 12:43 PM
> To: IBM-MAIN@BAMA.UA.EDU
> Subject: Re: multiple z/OS sharing considerations.
 
> That brings up a question I had on the part of the post I quoted below.
> If you have 2 Lpars in a sysplex, and each Lpar runs say 10 CICSs, how do
> you prevent shutting down the 10 CICSs on the system you want to IPL?  I
> know when I worked at my last job, they had 2 separate z/900s in a
> sysplex, and when either one was IPL'd, there was an outage.  The bigger
> machine ran all the CICSs, and the other ran lots of DB2 batch.  They had
> outages for each IPL when we were installing z/OS 1.7.  Is there a way to
> migrate the work from one Lpar to another?

Check out ARM (Automatic Restart Management) in the manual:

z/OS V1R8.0 MVS Sysplex Services Guide

Chapter 3.  Using the Automatic Restart Management Function of XCF

URL to the contents page, but watch the line wrap:

http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/iea2i660/CONTENTS
?SHELF=EZ2ZO10I.bks&DT=20060620144434#CONTENTS

HTH

Peter



Re: multiple z/OS sharing considerations.

2007-10-23 Thread Jon Brock
It kinda depends on how your work is separated.  We have a production
LPAR and a development/test LPAR.  We IPL production and off it goes
whether test is up or not; there's no waiting for test.

Jon




If you have one machine, and it crashes or loses power, then your whole
sysplex crashes, and your outage is probably longer because you have to
start 2 LPARs and connect them before starting your work back up.  Granted,
if one LPAR crashes, the other will keep going.




Re: multiple z/OS sharing considerations.

2007-10-23 Thread Staller, Allan

That brings up a question I had on the part of the post I quoted below.
If you have 2 Lpars in a sysplex, and each Lpar runs say 10 CICSs, how
do you prevent shutting down the 10 CICSs on the system you want to IPL?
I know when I worked at my last job, they had 2 separate z/900s in a
sysplex, and when either one was IPL'd, there was an outage.  The bigger
machine ran all the CICSs, and the other ran lots of DB2 batch.  They
had outages for each IPL when we were installing z/OS 1.7.  

Is there a way to migrate the work from one Lpar to another?



Check out VTAM APPN and MNPS (Multi-Node Persistent Sessions), and also VIPA.
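Roughly what those look like in definitions -- names and addresses invented,
and only a sketch: MNPS hangs off the PERSIST operand of the VTAM APPL
statement, and a dynamic VIPA lets the IP address follow the work to the
surviving LPAR:

  * VTAM APPL major node: PERSIST=MULTI requests multinode
  * persistent sessions (needs a parallel sysplex with a CF).
  CICSP01  APPL  AUTH=(ACQ,PASS),PARSESS=YES,PERSIST=MULTI

  ; TCP/IP profile: a dynamic VIPA that another stack can take over
  VIPADYNAMIC
     VIPADEFINE MOVEABLE IMMEDIATE 255.255.255.0 10.1.1.10
  ENDVIPADYNAMIC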



Re: multiple z/OS sharing considerations.

2007-10-23 Thread Staller, Allan
My overhead numbers have historically been closer to 10% than 20% for
"the OS and its friends" (excluding subsystems, e.g. CICS, IMS).

BTW, that's 10%-20% of the LPAR consumption.  E.g. if the LPAR consumes
20% of the CEC, that would be 10%-20% of that 20%, in other words 2%-4%
of the CEC.  My 3-5% sysplex overhead figure is in addition to that.


>IIRC, 3-5% utilization is the ROT for parallel sysplex overhead (maybe
for basic, but I am not sure).

BUT, you forget the biggest overhead when you have multiple images on
the same box.

Each MVS image is going to eat 15-25% of the processor for the OS and
its friends.




Re: multiple z/OS sharing considerations.

2007-10-23 Thread Eric Bielefeld
Having worked in a smaller environment for the last 21 years, I can 
understand John's reluctance to run a sysplex.  If you have one machine, and 
it crashes or loses power, then your whole sysplex crashes, and your outage 
is probably longer because you have to start 2 Lpars and connect them before 
starting your work back up.  Granted, if one Lpar crashes, the other will 
keep going.


A lot depends on how easy it is for a shop to do IPLs.  At P&H Mining, as a 
manufacturer, it was very easy.  We IPL'd every week.  On Saturday at 6:00 
P.M., all the CICSs were shut down.  It didn't take much more to shut down 
batch and IPL, especially since we did backups right after the IPL, and 
didn't want anything running anyway.  Now that contrasts sharply with my 
last job, where they IPL'd once a quarter or even less often.


That brings up a question I had on the part of the post I quoted below.  If 
you have 2 Lpars in a sysplex, and each Lpar runs say 10 CICSs, how do you 
prevent shutting down the 10 CICSs on the system you want to IPL?  I know 
when I worked at my last job, they had 2 separate z/900s in a sysplex, and 
when either one was IPL'd, there was an outage.  The bigger machine ran all 
the CICSs, and the other ran lots of DB2 batch.  They had outages for each 
IPL when we were installing z/OS 1.7.  Is there a way to migrate the work 
from one Lpar to another?


Another comment.  I know they could probably get one z9 box a lot cheaper 
than the 2 z/900s, but they really wanted the redundancy of 2 separate 
systems.  At least it would probably be cheaper considering software costs. 
I suspect the z9 is enough more reliable than 2 z/900s that it would be 
worthwhile to make that switch.


Eric Bielefeld
Sr. z/OS Systems Programmer
Milwaukee, Wisconsin
414-475-7434

- Original Message - 
From: "Skip Robinson" <[EMAIL PROTECTED]>



John,

I don't understand the fervor of your opposition to sysplex. Even on a
single CEC, sysplex offers a degree of increased availability that cannot
be achieved any other way. Even basic sysplex provides benefits, although I
would strenuously hit up the 'multi-system advocate' for the cost of an ICF
engine.

SNIP

The main benefit of a well configured sysplex is that you can bring down
one image for software maintenance while the other one stays up. With a
single CEC, you may still have outages because of hardware changes or
anything requiring POR, but those cases should be much rarer than scheduled
PTF refreshes.



JO.Skip Robinson
Southern California Edison Company
Electric Dragon Team Paddler
SHARE MVS Program Co-Manager
626-302-7535 Office
323-715-0595 Mobile
[EMAIL PROTECTED]




Re: multiple z/OS sharing considerations.

2007-10-23 Thread Ted MacNEIL
>IIRC, 3-5% utilization is the ROT for parallel sysplex overhead (maybe for 
>basic, but I am not sure).

BUT, you forget the biggest overhead when you have multiple images on the same 
box.

Each MVS image is going to eat 15-25% of the processor for the OS and its 
friends.

-
Too busy driving to stop for gas!



Re: multiple z/OS sharing considerations.

2007-10-23 Thread Jon Brock
Another possible -- although perhaps a bit questionable --
benefit of multiple images is that it gives you another layer of control
over system resources allocated to your production work.  WLM is a fine
thing, but it does have its gotchas; LPAR weighting can assure that the
production systems *always* outweigh the others.  

Jon



Re: multiple z/OS sharing considerations.

2007-10-23 Thread Skip Robinson
John,

I don't understand the fervor of your opposition to sysplex. Even on a
single CEC, sysplex offers a degree of increased availability that cannot
be achieved any other way. Even basic sysplex provides benefits, although I
would strenuously hit up the 'multi-system advocate' for the cost of an ICF
engine.

Some clarification of terms.

- Whether a sysplex is 'parallel' has nothing to do with whether CF LPARs
are internal or external. If you have a CF, your sysplex is parallel.

- A 'basic' sysplex uses only CTC to communicate among members, but there
is most definitely sharing of resources: GRS, console, JES MAS. The only
justification for *not* running parallel is lack of a CF.

The main benefit of a well configured sysplex is that you can bring down
one image for software maintenance while the other one stays up. With a
single CEC, you may still have outages because of hardware changes or
anything requiring POR, but those cases should be much rarer than scheduled
PTF refreshes.

What's missing from the discussion so far is what your user community
thinks of all this. Or would think if the discussion went public. They're
probably resigned to the ho-hum availability they have become accustomed
to. People's expectations typically sink to the level of their experience.
If they learn that life can be better, they begin to demand better. You
have a chance to lead them out of the dinosaur wilderness here. It's not
their old man's mainframe anymore, but the past is an unimaginative tutor.

If I were you, I'd be tickled pink at the chance to leap into current
technology.

JO.Skip Robinson
Southern California Edison Company
Electric Dragon Team Paddler
SHARE MVS Program Co-Manager
626-302-7535 Office
323-715-0595 Mobile
[EMAIL PROTECTED]


   
 "McKown, John"
 <[EMAIL PROTECTED] 
 THMARKETS.COM> To 
 Sent by: IBM  IBM-MAIN@BAMA.UA.EDU
 Mainframe  cc 
 Discussion List   
 <[EMAIL PROTECTED]             Subject 
 .EDU>     Re: multiple z/OS sharing   
   considerations. 
   
 10/23/2007 08:17  
 AM
   
   
 Please respond to 
   IBM Mainframe   
  Discussion List  
 <[EMAIL PROTECTED] 
   .EDU>   
   
   




> -Original Message-
> From: IBM Mainframe Discussion List
> [mailto:[EMAIL PROTECTED] On Behalf Of Staller, Allan
> Sent: Tuesday, October 23, 2007 10:06 AM
> To: IBM-MAIN@BAMA.UA.EDU
> Subject: Re: multiple z/OS sharing considerations.
>
>
> I can think of nothing that can be done in a single image
> that cannot be
> done in a basic (or parallel) sysplex. Some things are more
> complicated,
> some things are easier. IIRC, 3-5% utilization is the ROT for parallel
> sysplex overhead (maybe for basic, but I am not sure).
>
> Parallel sysplex = stand-alone CF = $ (is most likely not going to
> happen for you (based on previous posts)

Why not a CF LPAR and a CFL in the current box? Not as expensive. I have
the memory.

>
> Basic sysplex #1 = CF engine =  (is most likely not going
> to happen
> for you)

If I can get a CF, I see no reason to not go Parallel Sysplex.

>
> Basic sysplex #2 = CTC communication With 2 LPARS, not too
> difficult, -
> exponential increase in complexity  as lpars are added (2**N)

I already have the CTCs set up between the current production system and
our "sandbox". I once had 4 systems on this box interconnected with
CTCs. Yes, I got headaches keeping them sorted.

>
> Inter-lpar communications/control are the big issue here. GRS/MIM,
> VTAM/APPN/EE, share *EVERYTHING*, naming conventions.

Yea

Re: multiple z/OS sharing considerations.

2007-10-23 Thread McKown, John
> -Original Message-
> From: IBM Mainframe Discussion List 
> [mailto:[EMAIL PROTECTED] On Behalf Of Dana Mitchell
> Sent: Tuesday, October 23, 2007 10:31 AM
> To: IBM-MAIN@BAMA.UA.EDU
> Subject: Re: multiple z/OS sharing considerations.
> 
> 
> John,
> 
> You might have a look at this redbook:
> http://www.redbooks.ibm.com/abstracts/SG246818.html?Open
> 
> It looks at the problem from the other side, i.e. combining disparate
> systems into a sysplex, but it's still a good read, and a 
> good collection of
> info from many different products gathered into one place, 
> and should give
> you some food for thought.
> 
> Dana

Thanks!

--
John McKown
Senior Systems Programmer
HealthMarkets
Keeping the Promise of Affordable Coverage
Administrative Services Group
Information Technology



Re: multiple z/OS sharing considerations.

2007-10-23 Thread Dana Mitchell
John,

You might have a look at this redbook:
http://www.redbooks.ibm.com/abstracts/SG246818.html?Open

It looks at the problem from the other side, i.e. combining disparate
systems into a sysplex, but it's still a good read, and a good collection of
info from many different products gathered into one place, and should give
you some food for thought.

Dana



Re: multiple z/OS sharing considerations.

2007-10-23 Thread McKown, John
> -Original Message-
> From: IBM Mainframe Discussion List 
> [mailto:[EMAIL PROTECTED] On Behalf Of Staller, Allan
> Sent: Tuesday, October 23, 2007 10:06 AM
> To: IBM-MAIN@BAMA.UA.EDU
> Subject: Re: multiple z/OS sharing considerations.
> 
> 
> I can think of nothing that can be done in a single image 
> that cannot be
> done in a basic (or parallel) sysplex. Some things are more 
> complicated,
> some things are easier. IIRC, 3-5% utilization is the ROT for parallel
> sysplex overhead (maybe for basic, but I am not sure).
> 
> Parallel sysplex = stand-alone CF = $ (is most likely not going to
> happen for you (based on previous posts)

Why not a CF LPAR and a CFL in the current box? Not as expensive. I have
the memory.

> 
> Basic sysplex #1 = CF engine =  (is most likely not going 
> to happen
> for you)

If I can get a CF, I see no reason to not go Parallel Sysplex.

> 
> Basic sysplex #2 = CTC communication With 2 LPARS, not too 
> difficult, -
> exponential increase in complexity  as lpars are added (2**N)

I already have the CTCs set up between the current production system and
our "sandbox". I once had 4 systems on this box interconnected with
CTCs. Yes, I got headaches keeping them sorted.

> 
> Inter-lpar communications/control are the big issue here. GRS/MIM,
> VTAM/APPN/EE, share *EVERYTHING*, naming conventions.

Yeah! I want to put up every possible valid roadblock and have
management sign off on it.  Otherwise it will all be my fault when the
thing tanks.

> 
> If you are WLM Goal mode, you are (at least) a monoplex.
> 

Yes, single image monoplex. z/OS 1.8.

--
John McKown
Senior Systems Programmer
HealthMarkets
Keeping the Promise of Affordable Coverage
Administrative Services Group
Information Technology



Re: multiple z/OS sharing considerations.

2007-10-23 Thread McKown, John
> -Original Message-
> From: IBM Mainframe Discussion List 
> [mailto:[EMAIL PROTECTED] On Behalf Of Jon Brock
> Sent: Tuesday, October 23, 2007 9:49 AM
> To: IBM-MAIN@BAMA.UA.EDU
> Subject: Re: multiple z/OS sharing considerations.
> 
> 
> A couple of questions, to try to avoid the risk of entering a 
> "teaching
> grandma to suck eggs" scenario:
> 
> How do you upgrade z/OS or ISV software currently if you are only
> running one image?  

Usually using STEPLIBs for testing.  We only have a few things in the
LNKLST.  We do have a sandbox for testing things such as CA-OPS/MVS,
Mainview, etc.  This push is to separate Production Work from Other Work
in a separate system To Increase And Enhance Manageability.  Or some such
thing.
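(For the archives, a sketch of the STEPLIB approach -- library names
invented.  The concatenation is searched top-down, so the test level is
picked up first and anything not found there falls through to the
installed level:)

  //TESTSTEP EXEC PGM=SOMEPGM
  //* Test library first, installed library as fallback.
  //STEPLIB  DD DSN=SYS2.NEWREL.LINKLIB,DISP=SHR
  //         DD DSN=SYS1.PROD.LINKLIB,DISP=SHR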

> 
> Do you have a Coupling Facility in this CEC?

No, but I'm pushing for it if we do this (highly likely).  An LPAR and a
new CFL is not all that expensive.

> 
> FWIW, I can't see any advantages at all to going with monoplexes.  If
> you don't have a CF then I don't think I'd parallel sysplex. 

I don't see any advantage to doing this at all.

> 
> Jon


--
John McKown
Senior Systems Programmer
HealthMarkets
Keeping the Promise of Affordable Coverage
Administrative Services Group
Information Technology



Re: multiple z/OS sharing considerations.

2007-10-23 Thread Staller, Allan
I can think of nothing that can be done in a single image that cannot be
done in a basic (or parallel) sysplex. Some things are more complicated,
some things are easier. IIRC, 3-5% utilization is the ROT for parallel
sysplex overhead (maybe for basic, but I am not sure).

Parallel sysplex = stand-alone CF = $ (most likely not going to
happen for you, based on previous posts)

Basic sysplex #1 = CF engine = (most likely not going to happen
for you)

Basic sysplex #2 = CTC communication.  With 2 LPARs, not too difficult;
exponential increase in complexity as LPARs are added (2**N)

Inter-LPAR communications/control are the big issue here: GRS/MIM,
VTAM/APPN/EE, share *EVERYTHING*, naming conventions.

If you are in WLM goal mode, you are (at least) a monoplex.
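As a rough sketch of where the three options diverge in parmlib (member
suffixes and dataset names invented; the arrows are annotation, not
syntax):

  IEASYSxx:
    PLEXCFG=MULTISYSTEM      <-- MONOPLEX or XCFLOCAL for option 3
    COUPLE=00                <-- selects COUPLE00

  COUPLE00:
    COUPLE SYSPLEX(PLEX1)            <-- sysplex name
           PCOUPLE(SYS1.XCF.CDS01)   <-- primary sysplex couple data set
           ACOUPLE(SYS1.XCF.CDS02)   <-- alternate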




I know that is vague. What has happened is that somebody with political
clout is strongly pushing the idea that having two separate z/OS images
on a single CEC (separate LPARs of course) would be "better" than having
a single z/OS system. They have not defined "better". We have not
decided on how to implement this. I know of three possibilities, in my
order of desirability: (1) Parallel Sysplex; (2) basic sysplex; (3)
separate monoplexes.

Is there anything that documents, in a single place, what CANNOT be done
in the various environments? As an example, a JES2 MAS requires at least
a basic sysplex. If we go to separate monoplexes, then the JES2s could
only communicate via NJE. Is there ANYTHING that I can do in a single
image that I cannot do in a parallel sysplex (other than share memory,
VTAM LU names, and IP addresses)?




Re: multiple z/OS sharing considerations.

2007-10-23 Thread McKown, John
> -Original Message-
> From: IBM Mainframe Discussion List 
> [mailto:[EMAIL PROTECTED] On Behalf Of J Ellis
> Sent: Tuesday, October 23, 2007 9:46 AM
> To: IBM-MAIN@BAMA.UA.EDU
> Subject: Re: multiple z/OS sharing considerations.
> 
> 
> You are probably aware of this, I would check out software 
> costs of all 
> options. You may be able to direct them better if can quote 
> dollars for A, B, C
> 

My manager has supposedly looked into this. All of our software is
apparently licensed so that it doesn't matter whether the software is in
one LPAR or multiple, so long as it is on the same physical machine (as
in this case).

--
John McKown
Senior Systems Programmer
HealthMarkets
Keeping the Promise of Affordable Coverage
Administrative Services Group
Information Technology



Re: multiple z/OS sharing considerations.

2007-10-23 Thread McKown, John
> -Original Message-
> From: IBM Mainframe Discussion List 
> [mailto:[EMAIL PROTECTED] On Behalf Of Bob Shannon
> Sent: Tuesday, October 23, 2007 9:43 AM
> To: IBM-MAIN@BAMA.UA.EDU
> Subject: Re: multiple z/OS sharing considerations.
> 
> 
> > I should have mentioned that we already run MIMIT (MIM
> > Integrity) to stop programmers from destroying datasets
> > (such as by linking into a source PDS).
> 
> In 1.9 the Binder will not link into a PDS unless it is 
> RECFM=U. This was done for PDSEs in an earlier release.
> 
> Bob Shannon

That's good.  But I still have wippos who code BLKSIZE= in
their JCL, which sometimes attempts to reblock a dataset smaller.  Gotta
protect against them as well.  And if anything goes wrong, it is always my
fault for not protecting them (victim mentality).

--
John McKown
Senior Systems Programmer
HealthMarkets
Keeping the Promise of Affordable Coverage
Administrative Services Group
Information Technology



Re: multiple z/OS sharing considerations.

2007-10-23 Thread Jon Brock
A couple of questions, to try to avoid the risk of entering a "teaching
grandma to suck eggs" scenario:

How do you upgrade z/OS or ISV software currently if you are only
running one image?  

Do you have a Coupling Facility in this CEC?

FWIW, I can't see any advantages at all to going with monoplexes.  If
you don't have a CF then I don't think I'd parallel sysplex. 

Jon




I know that is vague. What has happened is that somebody with political
clout is strongly pushing the idea that having two separate z/OS images
on a single CEC (separate LPARs of course) would be "better" than having
a single z/OS system. They have not defined "better". We have not
decided on how to implement this. I know of three possibilities, in my
order of desirability: (1) Parallel Sysplex; (2) basic sysplex; (3)
separate monoplexes.




Re: multiple z/OS sharing considerations.

2007-10-23 Thread J Ellis
You are probably aware of this, but I would check out the software costs of
all options.  You may be able to direct them better if you can quote dollars
for A, B, and C.



Re: multiple z/OS sharing considerations.

2007-10-23 Thread Bob Shannon
>I should have mentioned that we already run MIMIT (MIM Integrity) to stop 
>>programmers from destroying datasets (such as by linking into a source >PDS).

In 1.9 the Binder will not link into a PDS unless it is RECFM=U. This was done 
for PDSEs in an earlier release.
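In other words (names invented, sketch only), the binder's SYSLMOD now has
to be an undefined-record-format library or a PDSE:

  //LKED     EXEC PGM=IEWL,PARM='MAP,LIST'
  //SYSPRINT DD SYSOUT=*
  //SYSLIN   DD DSN=MY.OBJ.LIB(MYPROG),DISP=SHR
  //* SYSLMOD must be RECFM=U (or a PDSE program library); pointing
  //* it at a RECFM=FB source PDS is what used to clobber source.
  //SYSLMOD  DD DSN=MY.LOAD.LIB(MYPROG),DISP=SHR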

Bob Shannon
Rocket Software



Re: multiple z/OS sharing considerations.

2007-10-23 Thread McKown, John
> -Original Message-
> From: IBM Mainframe Discussion List 
> [mailto:[EMAIL PROTECTED] On Behalf Of Mark Jacobs
> Sent: Tuesday, October 23, 2007 9:26 AM
> To: IBM-MAIN@BAMA.UA.EDU
> Subject: Re: multiple z/OS sharing considerations.
> 
> Not exactly an answer to your question but;
> 
> If you are sharing DASD you will have to implement GRS or a GRS-like
> product. In a parallel sysplex you can use GRS star functionality,
> which is vastly improved over GRS ring processing.
> 
> With a basic sysplex you can implement GRS ring with sysplex support,
> while in two separate monoplexes you will have to use the original
> GRS (BCTCs) without the assistance of XCF communications.
> 
> There are also tape sharing considerations between sysplex 
> and monoplex 
> environments.
> 
> -- 
> Mark Jacobs

I should have mentioned that we already run MIMIT (MIM Integrity) to
stop programmers from destroying datasets (such as by linking into a
source PDS).  MIM could also do many of the GRS-type functions between
the two systems.  We have also used MIM Allocation to share tape drives
in the past.

--
John McKown
Senior Systems Programmer
HealthMarkets
Keeping the Promise of Affordable Coverage
Administrative Services Group
Information Technology



Re: multiple z/OS sharing considerations.

2007-10-23 Thread Mark Jacobs

McKown, John wrote:

I know that is vague. What has happened is that somebody with political
clout is strongly pushing the idea that having two separate z/OS images
on a single CEC (separate LPARs of course) would be "better" than having
a single z/OS system. They have not defined "better". We have not
decided on how to implement this. I know of three possibilities, in my
order of desirability: (1) Parallel Sysplex; (2) basic sysplex; (3)
separate monoplexes.

Is there anything that documents, in a single place, what CANNOT be done
in the various environments? As an example, a JES2 MAS requires at least
a basic sysplex. If we go to separate monoplexes, then the JES2s could
only communicate via NJE. Is there ANYTHING that I can do in a single
image that I cannot do in a parallel sysplex (other than share memory,
VTAM LU names, and IP addresses)?

Many thanks from a rather upset sysprog.

--
John McKown
Senior Systems Programmer
HealthMarkets
Keeping the Promise of Affordable Coverage
Administrative Services Group
Information Technology


  

Not exactly an answer to your question, but:

If you are sharing DASD you will have to implement GRS or a GRS-like
product.  In a parallel sysplex you can use GRS star functionality, which
is vastly improved over GRS ring processing.

With a basic sysplex you can implement a GRS ring with sysplex support,
while in two separate monoplexes you would have to use the original GRS
(BCTCs) without the assistance of XCF communications.

There are also tape sharing considerations between sysplex and monoplex
environments.
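Concretely, the difference shows up in one IEASYSxx keyword (standard
values; the arrows are annotation, not syntax):

  GRS=STAR      <-- star mode; needs the ISGLOCK structure in a CF
  GRS=TRYJOIN   <-- ring mode; join an existing GRS ring or start one
  GRS=NONE      <-- no GRS complex; a MIM-style product fills the gap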


--
Mark Jacobs
Time Customer Service
Tampa, FL 
--


"A desire not to butt into other people's business is at least eighty
percent of all human wisdom...and the other twenty percent isn't very
important."

Jubal Harshaw (Stranger in a Strange Land)



multiple z/OS sharing considerations.

2007-10-23 Thread McKown, John
I know that is vague. What has happened is that somebody with political
clout is strongly pushing the idea that having two separate z/OS images
on a single CEC (separate LPARs of course) would be "better" than having
a single z/OS system. They have not defined "better". We have not
decided on how to implement this. I know of three possibilities, in my
order of desirability: (1) Parallel Sysplex; (2) basic sysplex; (3)
separate monoplexes.

Is there anything that documents, in a single place, what CANNOT be done
in the various environments? As an example, a JES2 MAS requires at least
a basic sysplex. If we go to separate monoplexes, then the JES2s could
only communicate via NJE. Is there ANYTHING that I can do in a single
image that I cannot do in a parallel sysplex (other than share memory,
VTAM LU names, and IP addresses)?
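On the JES2 point, the difference would show up right in the
initialization decks -- member and node names below are invented, just to
sketch the two shapes:

  /* Shared-spool MAS (needs at least a basic sysplex) */
  MASDEF  OWNMEMB=SYSA,AUTOEMEM=ON
  MEMBER(1) NAME=SYSA
  MEMBER(2) NAME=SYSB

  /* Separate monoplexes: the JES2s talk NJE only */
  NJEDEF  OWNNODE=1,NODENUM=2
  NODE(1) NAME=PRODNODE
  NODE(2) NAME=TESTNODE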

Many thanks from a rather upset sysprog.

--
John McKown
Senior Systems Programmer
HealthMarkets
Keeping the Promise of Affordable Coverage
Administrative Services Group
Information Technology
