Re: how-to Sysplex? [was "Re: DASD: to share or not to share"]

2009-08-13 Thread Scott Rowe
IIRC, when running in Ring mode, each GRS keeps a copy of all ENQs from all 
systems in memory, while in Star mode each GRS keeps only the ENQs for its own 
system, so it seems logical that there would be a point at which Star uses less 
memory overall.  
 
Try being an all-SAP shop: I will be going from 131584K to 264192K when 
I get my z10.  I have 23 DB2 subsystems on my box, with about 375K outstanding 
ENQs at any one time but very little ENQ/DEQ activity to speak of, yet I 
expect I am a much smaller shop overall than you.  Before I went to Star mode, 
the GRS address spaces in all my LPARs were constantly using a few percent of 
the CPU, even when there was no ENQ activity, but now GRS+ICFs is typically 
under 1% for all LPARs combined.  Granted, this is not scientific, just casual 
observation, but I think there is something to it.  However, it is certainly 
possible that shops like yours, which have higher ENQ/DEQ activity, will see a 
cost rather than a savings.
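The scaling argument above can be put into a toy model. All the constants below are invented placeholders for illustration (200 bytes per ENQ and a 128M lock structure are not real GRS control-block or ISGLOCK numbers); the point is only the shape of the curves:

```python
# Toy model of GRS Ring vs. Star memory footprint (all numbers invented
# for illustration -- not real GRS control block or ISGLOCK sizes).

BYTES_PER_ENQ = 200                   # placeholder per-ENQ memory cost
CF_STRUCTURE = 128 * 1024 * 1024      # placeholder CF lock structure size

def ring_total_bytes(n_systems, total_enqs):
    # Ring: every GRS address space holds a copy of ALL ENQs in the plex,
    # so the plex-wide footprint grows with the number of systems.
    return n_systems * total_enqs * BYTES_PER_ENQ

def star_total_bytes(total_enqs):
    # Star: each GRS holds only its own system's ENQs (summing to the
    # plex total once), plus the shared CF lock structure.
    return total_enqs * BYTES_PER_ENQ + CF_STRUCTURE

# With few systems, Ring's duplicated copies can be cheaper than paying
# for the CF structure; past a crossover point, Star wins.
enqs = 375_000   # outstanding ENQs, roughly the figure from the post
for n in (2, 4, 8):
    print(n, ring_total_bytes(n, enqs), star_total_bytes(enqs))
```

With these made-up constants the crossover lands between two and four systems; the real crossover depends entirely on the actual per-ENQ cost and structure size.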

>>> Arthur Gutowski  8/13/2009 1:31 PM >>>



CONFIDENTIALITY/EMAIL NOTICE: The material in this transmission contains 
confidential and privileged information intended only for the addressee.  If 
you are not the intended recipient, please be advised that you have received 
this material in error and that any forwarding, copying, printing, 
distribution, use or disclosure of the material is strictly prohibited.  If you 
have received this material in error, please (i) do not read it, (ii) reply to 
the sender that you received the message in error, and (iii) erase or destroy 
the material. Emails are not secure and can be intercepted, amended, lost or 
destroyed, or contain viruses. You are deemed to have accepted these risks if 
you communicate with us by email. Thank you.


--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: how-to Sysplex? [was "Re: DASD: to share or not to share"]

2009-08-13 Thread Arthur Gutowski
On Thu, 13 Aug 2009 10:46:44 -0400, Scott Rowe 
 wrote:

>Arthur, I snipped most of your text, which I whole-heartedly agree with, but 
>I have one comment about the "cost" of GRS-Star:  I don't know if anyone 
>has truly studied this, but I have doubts as to whether GRS Star has a CPU 
>cost, at least in an environment with relatively low ENQ activity.  I have seen 
>GRS address space CPU time decrease significantly when going to Star, since 
>it does not have to wake up frequently to do nothing but pass the token 
>around the ring.  The story may change significantly in a sysplex with high 
>ENQ activity (like a large batch environment).  I also wonder whether there is 
>a true storage cost, since I think the GRS address space working set seems to 
>be smaller in Star mode.  I wish I had the time to do an in-depth analysis of 
>this, but that's not likely when I'm a one-man shop ;-) 

On the processor storage point, I was talking about the CF structure to cache 
the global table.  Based on health-check type warnings, we are just now 
increasing our structure from 133,120K (for CF15, 325,000 outstanding ENQs) 
to 264,192K (for CF16, 415,000 outstanding ENQs) - an increase of 128M!  We 
have to configure more storage to our CFs to accommodate the growth.  Yes, 
each GRS shrinks, but I don't know if the net savings makes up for ISGLOCK.  
Yes, we are a heavy batch shop (lots of BMPs), but also heavy online (CICS, 
IMS/DC & DB, DB2, and some TSO/E).
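For what it's worth, the arithmetic on that structure growth checks out exactly (K here meaning kibibytes, the usual unit for CF structure sizes):

```python
# ISGLOCK structure sizes from the post, in K (KiB).
old_k, new_k = 133_120, 264_192
delta_k = new_k - old_k
print(delta_k, delta_k // 1024)   # 131072 K, i.e. exactly 128 M
```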

To the CPU point, we have also seen dramatic CPU spikes and sustained 
consumption in GRS and XCF, but part of that is our own doing.  One of our 
CFs is on a processor that shares a CF CP with a separate (for now) sysplex, 
so CF dynamic dispatch is on.  I know IBM recommends against this, but when 
we tried turning it off and weighting the CFs appropriately, both CFs suffered 
tremendously.  It is not in our best interest to enable a second CF CPU, as 
these separate plexes will merge sometime next year.  We're living with it 
for now.  It did help to move ISGLOCK to the other CF where we disabled 
dynamic dispatch (only one PROD and one sandbox CF there).

I don't know how this would compare to Ring performance - we made a 
deliberate decision to go to Star first, then merge.  On our other sysplex, we 
still see an increase in GRS CPU (again, due in part to dynamic dispatch), but 
there was a definite drop in ENQ response times.  Some of the CPU increase 
may also be due to lower response times enabling GRS to process more 
requests (latent demand).  The trade-off is acceptable to us.

Our European counterparts had it even worse when they attempted Star some 
time ago.  Again, I don't have any hard data, but I trust the contractor who 
had the project (brilliant guy, very thorough, ex-IBMer, I think, though so am 
I, so that last bit is not necessarily contributory) when he told me that GRS 
consumption crippled the system.  They had a hard time even getting all their 
regions and initiators opened up, let alone running full production batch 
cycle.  

Bottom line, I don't disagree with you, nor IBM's observations and statements, 
but our experience demonstrates there are trade-offs, and YMMV, but 
TAENSTAAFL (E for Ever).

Regards,
Art Gutowski
Ford Motor Company



Re: how-to Sysplex? [was "Re: DASD: to share or not to share"]

2009-08-13 Thread Scott Rowe
Arthur, I snipped most of your text, which I whole-heartedly agree with, but I 
have one comment about the "cost" of GRS-Star:  I don't know if anyone has 
truly studied this, but I have doubts as to whether GRS Star has a CPU cost, at 
least in an environment with relatively low ENQ activity.  I have seen GRS 
address space CPU time decrease significantly when going to Star, since it does 
not have to wake up frequently to do nothing but pass the token around the 
ring.  The story may change significantly in a sysplex with high ENQ activity 
(like a large batch environment).  I also wonder whether there is a true 
storage cost, since I think the GRS address space working set seems to be 
smaller in Star mode.  I wish I had the time to do an in-depth analysis of 
this, but that's not likely when I'm a one-man shop ;-) 

>>> Arthur Gutowski <aguto...@ford.com> 8/12/2009 6:45 PM >>>

As I said in a prior post, TANSTAAFL (There Ain't No Such Thing As A Free 
Lunch).  This does come at the cost of CPU and processor storage, plus a 
little bit of DASD, and a lot of planning.  We run parallel sysplex fairly 
successfully in our US datacenters, but our European counterparts were so 
CPU constrained that GRS Star hurt them more than it helped.  We ended up 
dismantling their parallel sysplex, but still run basic sysplex.






Re: how-to Sysplex? [was "Re: DASD: to share or not to share"]

2009-08-13 Thread Arthur Gutowski
With others making judicious mention of GRS RNLs, I realized I omitted another 
important reference:  "z/OS MVS Planning: Global Resource Serialization", 
SA22-7600, as well as sections in the various components' (HSM, RACF, 
etc.) "sysprog" or "reference" manuals that speak to GRS requirements.

>You're on the right path.  I also strongly recommend you read "z/OS MVS:
>Setting Up a Sysplex", SA22-7625, "Achieving the Highest Levels of Parallel
>Sysplex Availability", SG24-6061, "z/OS Systems Programmers Guide to:
>Sysplex Aggregation".  The latter is an almost indispensable RedPaper (AFAIK,
>not yet a RedBook), and thoroughly describes the various sysplex
>configurations and levels of sharing or isolation.  Multi-CEC, multi-site may 
>or may not apply to you, but the concepts are relevant, nonetheless.
>
>The "ABCs of Systems Programming" also contains chapters on sysplex, I do
>believe.  There used to be a couple of "Software Management" and "Sysplex
>Configuration" "cookbooks" out there, but I believe they have been
>superseded in large part by the aforementioned references.
>
>Also, take a look at the IBM System z Parallel Sysplex site:
>http://www-03.ibm.com/systems/z/advantages/pso/index.html
>and Resource Link, CF Sizer, etc.



Re: DASD: to share or not to share

2009-08-13 Thread Walter Marguccio
>> - Original Message 
>> From: Mark Zelden mark.zel...@zurichna.com
>>
>> Typical size is 956 bytes, at least at our shop. My RMF report shows, e.g.,
>> 1,868,451 for the Default Class (956) and only 4,539 for the Big Class
>> (16,316) as outbound traffic from one LPAR to another in the plex.

> I know I can look at RMF reports to see what sizes are going where and
> how many.  My question is:  How do you know what percentage of the
> default class messages are GRS (although I would suspect they all
> fit in the 1K class)?  What if all the big ones were GRS (relative to this
> current discussion on GRS ring)?

You are right. The 'XCF usage by member' report does not tell which Transport 
Class a given member uses. Going back to the numbers I gave yesterday, SYSGRS 
alone sent 1,825,205 messages. Since the Big Class carried only 4,539 messages, 
I assume all messages belonging to SYSGRS fit in the Default Class.
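A quick sanity check on that inference, using the counts from the RMF report quoted above: even in the worst case, where every single Big Class message belonged to SYSGRS, well over 99% of the SYSGRS traffic must still have gone through the Default Class.

```python
# XCF message counts from the RMF report (outbound, one LPAR to another).
default_class_msgs = 1_868_451   # Default Class (956-byte buffers)
big_class_msgs     = 4_539       # Big Class (16,316-byte buffers)
sysgrs_msgs        = 1_825_205   # messages sent by the SYSGRS member

# Worst case: assume ALL Big Class messages were SYSGRS's.
min_sysgrs_in_default = sysgrs_msgs - big_class_msgs
print(min_sysgrs_in_default / sysgrs_msgs)   # still more than 99.7%
```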

> The only way you could tell is if you set up transport class(es) just for 
> GRS,
> which hasn't been recommended in a long time.

I haven't seen a recommendation for a 'GRS-only' Transport Class either.

Walter Marguccio
z/OS Systems Programmer
BELENUS LOB Informatic GmbH
Munich - Germany


  



Re: how-to Sysplex? [was "Re: DASD: to share or not to share"]

2009-08-12 Thread Frank Swarbrick
Thank you!  As the original poster, I got a bit confused by Allan's 
recommendation, as I could have sworn others were recommending one Sysplex 
containing both PROD and TEST/DEV.  I thought maybe I had misinterpreted 
everything and had to start over!  :-)
-- 

Frank Swarbrick
Applications Architect - Mainframe Applications Development
FirstBank Data Corporation
Lakewood, CO  USA
P: 303-235-1403
F: 303-235-2075


On 8/12/2009 at 8:45 AM, in message <4a829d2c.8489.00d...@joann.com>, Scott
Rowe  wrote:
> Well then, you don't agree with me!
>  
> There are many smaller shops which share DASD across PROD/DEV/TEST, and a 
> Sysplex can be a very appropriate configuration for them.
> 
 "Staller, Allan"  8/12/2009 10:01 AM >>>
> I agree. 1 sysplex for all is not a good design point for the O/P. I
> believe he could still benefit from a 2 sysplex design in the same box.
> 1 for Prod, and 1 for DEV/TEST/QA. Two LPARs for each.
> 
> 
> The OP was asking about using Sysplex to join his three monoplexes into
> a single Sysplex.  Availability may have been the original purpose of
> Sysplex, but it can and has been used for other purposes since it's
> inception.  You have obviously not been following the recent discussion
> that spawned this thread ;-)

The information contained in this electronic communication and any document 
attached hereto or transmitted herewith is confidential and intended for the 
exclusive use of the individual or entity named above.  If the reader of this 
message is not the intended recipient or the employee or agent responsible for 
delivering it to the intended recipient, you are hereby notified that any 
examination, use, dissemination, distribution or copying of this communication 
or any part thereof is strictly prohibited.  If you have received this 
communication in error, please immediately notify the sender by reply e-mail 
and destroy this communication.  Thank you.



Re: how-to Sysplex? [was "Re: DASD: to share or not to share"]

2009-08-12 Thread Arthur Gutowski
On Tue, 11 Aug 2009 14:34:17 -1000, Stephen Y Odo 
 wrote:

>We have 3 LPARs configured as monoplex.  One for our Production
>environment, one for our Test/QA environment, and one for our
>Application Development environment.
>
>From this discussion, it sounds like everybody is saying even we would
>benefit from Sysplex.

Quite possibly yes, but worth the research you are already beginning...

>If we did do Sysplex, would we still be able to maintain that separation
>between the environments?  How?  Does anybody have a "cookbook" on how
>to go from where we are to where we should be?  And how does the change
>affect the way we do things? i.e. what do we need to warn our customers
>about?
>
>I've looked at a couple of Redbooks ("Merging systems into a Sysplex",
>etc.) and I know there's lots of others out there.

You're on the right path.  I also strongly recommend you read "z/OS MVS: 
Setting Up a Sysplex", SA22-7625, "Achieving the Highest Levels of Parallel 
Sysplex Availability", SG24-6061, "z/OS Systems Programmers Guide to:  
Sysplex Aggregation".  The latter is an almost indispensable RedPaper (AFAIK, 
not yet a RedBook), and thoroughly describes the various sysplex 
configurations and levels of sharing or isolation.  Multi-CEC, multi-site may 
or may not apply to you, but the concepts are relevant, nonetheless.

The "ABCs of Systems Programming" also contains chapters on sysplex, I do 
believe.  There used to be a couple of "Software Management" and "Sysplex 
Configuration" "cookbooks" out there, but I believe they have been 
superseded in large part by the aforementioned references.

Also, take a look at the IBM System z Parallel Sysplex site:
http://www-03.ibm.com/systems/z/advantages/pso/index.html
and Resource Link, CF Sizer, etc.

You will definitely need to keep your customers in the loop.  Dataset naming 
conflicts are likely the biggest hurdle to merging systems, from a customer 
perspective.  For each component you exploit sharing, there will be additional 
concerns, and each has its own chapter in the RedPaper I mentioned.

>Also, most of the stuff out there talks about parallel sysplex.  What
>are the differences for a Basic Sysplex?  Which parts do I do differently?

Parallel sysplex offers a superset of functionality over Basic sysplex.  What 
you do differently in a Parallel Sysplex is detailed in "Setting Up a Sysplex", 
but in short, you set up at least one CF with at least one link to your z/OS 
images.  If you are z9 (z990?) or better, use Peer-mode links.  At z10 or 
better, and multi-CEC, single-site, it would pay to investigate InfiniBand (IFB) 
links.  If you are single-CEC ("sysplex-in-a-box"), use an ICP (internal) link. 
 
Format CFRM couple datasets and define one or more policies.  Most, dare I 
say, all of this can be done dynamically.  What you can exploit that you 
cannot in basic includes GRS Star, Enhanced Catalog Sharing, VSAM RLS, 
caching of log streams and JES2 checkpoint, just to name a few.
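For a flavor of what "define one or more policies" looks like: CFRM policies are defined with the IXCMIAPU administrative data utility. The sketch below is illustrative only; the policy and CF names, the CF identification values (TYPE, SEQUENCE, PARTITION, CPCID), and the structure size are placeholders you would replace with values from your own configuration (CF Sizer can suggest structure sizes):

```jcl
//CFRMPOL  JOB (ACCT),'DEFINE CFRM POLICY'
//DEFPOL   EXEC PGM=IXCMIAPU
//SYSPRINT DD  SYSOUT=*
//SYSIN    DD  *
  DATA TYPE(CFRM) REPORT(YES)
  DEFINE POLICY NAME(POLICY1) REPLACE(YES)
    CF NAME(CF01) TYPE(002097) MFG(IBM) PLANT(02)
       SEQUENCE(000000012345) PARTITION(0F) CPCID(00)
    STRUCTURE NAME(ISGLOCK) SIZE(33792)
       PREFLIST(CF01)
/*
```

The CFRM couple datasets themselves are formatted beforehand with the IXCL1DSU format utility, and the new policy is then activated with a command along the lines of SETXCF START,POLICY,TYPE=CFRM,POLNAME=POLICY1.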

I do not believe it to be a waste to configure a parallel sysplex with only one 
CF LPAR.  If you have only one CEC, that's all you need (if the LPAR or the 
internal links go down - you've got bigger problems).  You can still reap 
availability benefits (rolling IPLs) and performance benefits (GRS Star does 
tend to have better response time than Ring, even in a two-system sysplex).

As I said in a prior post, TANSTAAFL (There Ain't No Such Thing As A Free 
Lunch).  This does come at the cost of CPU and processor storage, plus a 
little bit of DASD, and a lot of planning.  We run parallel sysplex fairly 
successfully in our US datacenters, but our European counterparts were so 
CPU constrained that GRS Star hurt them more than it helped.  We ended up 
dismantling their parallel sysplex, but still run basic sysplex.

>And what about the non-IBM components?  We run ADABAS/Natural from
>Software Ag.

Can't speak for Software AG, but as noted by others, most ISVs will at least 
tolerate a sysplex.  Many will truly exploit it.  Depends on your 
configuration.  
Best to talk to each of your vendors.  We do have one ISV product we're 
having some trouble with, but I cannot go into details.  Suffice it to say the 
product is sysplex-aware, and ours is not a technical issue.

Good luck and Godspeed.

Art Gutowski
Ford Motor Company



Re: DASD: to share or not to share

2009-08-12 Thread Scott Rowe
I am getting ratios of 30:1 or better of small messages to large, and IIRC
it is even higher when using GRS-Ring.  So, ESCON CTCs do very well in a
Basic Sysplex, where the majority of messages are GRS ENQ traffic.

If I had a FICON switch in my environment, I would use single FICON channels
in bounce-back mode for CTCs, which works like a charm, but I can't see
wasting two FICON channels to create a single CTC connection right now, I
would have to waste 4 FICON channels just for CTCs, and I really can't spare
the channels right now.  I have over a hundred unused ESCON channels in my
current configuration, but that will change when I get a z10.

On Wed, 12 Aug 2009 09:43:01 -0500, Mark Zelden 
wrote:
>So FICON is better with current technology regardless.  But for those
>using "old" technology, does anyone know what the typical  / average size
>of GRS XCF messages are in a ring?  Scott, are you lurking?  :-)
>
>Mark
>--
>Mark Zelden
>Sr. Software and Systems Architect - z/OS Team Lead
>Zurich North America / Farmers Insurance Group - ZFUS G-ITO
>mailto:mark.zel...@zurichna.com
>z/OS Systems Programming expert at http://expertanswercenter.techtarget.com/
>Mark's MVS Utilities: http://home.flash.net/~mzelden/mvsutil.html



Re: DASD: to share or not to share

2009-08-12 Thread Mark Zelden
On Wed, 12 Aug 2009 08:08:53 -0700, Walter Marguccio
 wrote:

>> - Original Message 
>> From: Mark Zelden mark.zel...@zurichna.com
>
>> So FICON is better with current technology regardless.  But for those
>> using "old" technology, does anyone know what the typical  / average size
>> of GRS XCF messages are in a ring?  Scott, are you lurking?  :-)
>
>Typical size is 956 bytes, at least at our shop. My RMF report shows, e.g.,
>1,868,451 for the Default Class (956) and only 4,539 for the Big Class (16,316)
>as outbound traffic from one LPAR to another in the plex.
>
>Which makes the migration from "old" technology to new FICON CTC not
>really urgent. Good news. :-)
>
>Walter Marguccio
>z/OS Systems Programmer
>BELENUS LOB Informatic GmbH
>Munich - Germany

I know I can look at RMF reports to see what sizes are going where and
how many.  My question is:  How do you know what percentage of the
default class messages are GRS (although I would suspect they all
fit in the 1K class)?  What if all the big ones were GRS (relative to this
current discussion on GRS ring)?

The only way you could tell is if you set up transport class(es) just for GRS,
which hasn't been recommended in a long time.

Mark
--
Mark Zelden
Sr. Software and Systems Architect - z/OS Team Lead
Zurich North America / Farmers Insurance Group - ZFUS G-ITO
mailto:mark.zel...@zurichna.com
z/OS Systems Programming expert at http://expertanswercenter.techtarget.com/
Mark's MVS Utilities: http://home.flash.net/~mzelden/mvsutil.html



Re: how-to Sysplex? [was "Re: DASD: to share or not to share"]

2009-08-12 Thread Staller, Allan

I am curious why you would need to take an outage to "harden any dynamic
changes to the IODF"?


It was IODF and microcode. I am limited to a single CEC (a z9).
Eventually, dynamic IODF changes will fail when the allocated HSA fills
up. Either that, or an LPAR won't activate because the HSA has
encroached on the available storage for that LPAR and will require a
unplanned update via the HMC. I prefer to avoid both situations. There are
still a few (very few) microcode changes that require a POR to fully
implement.

The issue is alleviated, but not eliminated, with a z10 because there is,
IIRC, a 16 GB dedicated HSA. Eventually this too will fill up and
require a POR.

This is a non-issue in a parallel sysplex because there are 2 or more
CECs.



Re: DASD: to share or not to share

2009-08-12 Thread Walter Marguccio
> - Original Message 
> From: Mark Zelden mark.zel...@zurichna.com

> So FICON is better with current technology regardless.  But for those
> using "old" technology, does anyone know what the typical  / average size
> of GRS XCF messages are in a ring?  Scott, are you lurking?  :-)

Typical size is 956 bytes, at least at our shop. My RMF report shows, e.g.,
1,868,451 for the Default Class (956) and only 4,539 for the Big Class (16,316)
as outbound traffic from one LPAR to another in the plex. 

Which makes the migration from "old" technology to new FICON CTC not
really urgent. Good news. :-)

Walter Marguccio
z/OS Systems Programmer
BELENUS LOB Informatic GmbH
Munich - Germany


  



Re: DASD: to share or not to share

2009-08-12 Thread Scott Rowe
Ah yes, that was it.  That is why I use ESCON for 1K messages, and use my ICF 
for larger messages, since I don't have a FICON CTC set up.  The vast majority 
of the messages in my Sysplex fit in the 1K class.

>>> Walter Marguccio  8/12/2009 10:34 AM >>>
> - Original Message 
> From: Scott Rowe scott.r...@joann.com 

> BTW, for small packets of data, I believe ESCON links are actually faster 
> than FICON. 
> I think I have seen a paper on this, but I can't remember where.

You are right:

http://www-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/WP100743 

"A comparison of these examples shows the ESCON CTCs are slightly faster
than the FICON Express CTCs for 1K messages but ESCON is slower when
compared to FICON Express 2 as well as the newer ISC3 and ICB4 links."

Walter Marguccio
z/OS Systems Programmer
BELENUS LOB Informatic GmbH
Munich - Germany


  








Re: how-to Sysplex? [was "Re: DASD: to share or not to share"]

2009-08-12 Thread Scott Rowe
Well then, you don't agree with me!
 
There are many smaller shops which share DASD across PROD/DEV/TEST, and a 
Sysplex can be a very appropriate configuration for them.

>>> "Staller, Allan"  8/12/2009 10:01 AM >>>
I agree. 1 sysplex for all is not a good design point for the O/P. I
believe he could still benefit from a 2 sysplex design in the same box.
1 for Prod, and 1 for DEV/TEST/QA. Two LPARs for each.


The OP was asking about using Sysplex to join his three monoplexes into
a single Sysplex.  Availability may have been the original purpose of
Sysplex, but it can and has been used for other purposes since its
inception.  You have obviously not been following the recent discussion
that spawned this thread ;-)










Re: how-to Sysplex? [was "Re: DASD: to share or not to share"]

2009-08-12 Thread Scott Rowe
Allan,
 
I am curious why you would need to take an outage to "harden any dynamic 
changes to the IODF"?

Scott

>>> "Staller, Allan"  8/12/2009 8:54 AM >>>
The purpose of a sysplex is z/OS availability. I commonly go 3 months
between outages (production IPLs) and I am in the middle of creating 2
base sysplexes: 1 for test/dev/QA, and 1 for production. After this
project is complete, I expect to be able to go to 1 production outage
annually (mainly to harden any dynamic changes to the IODF and/or
processor microcode) and perhaps longer. A parallel sysplex would be
wasted money unless you have 2 or more CECs.

The technical difference between a base and a parallel sysplex is the
Coupling Facility (whether it be internal or stand-alone) and a sysplex
time reference (external) and all of the functions that depend on a CF
(GDPS, VTAM generic resources, hot failover, multi-node persistent
sessions, log streams, JES Checkpoint, ...). The design difference is in
the degree of availability. In a properly designed parallel sysplex, one
could expect 99.9999% availability of *ALL* applications and z/OS itself
(about 31 sec of downtime annually). The best I expect with a base
sysplex is somewhere between 99.99% and 99.999% (52 min and 5 min of
annual downtime, respectively). Many of the things I would *like* to do are only
available in a parallel sysplex, so I have to do without those
functions. I can do everything I *have* to do in a base sysplex.

With deference to Barbara, RACF is not a problem. It runs just fine
without the CF, but can exploit one if available.

One last note, without a CF, many sysplex functions do not scale well to
more than a few images. GRS is a notorious example, as would the JES
checkpoint. With only 2 systems in each plex (PROD & DEV/TEST/QA) you
should not have any significant issues. The Merging Systems into a
Sysplex Redbook (SG24-6818) is a great reference, but does omit some
functions that have become available since it was published in 2002.

HTH, 








Re: DASD: to share or not to share

2009-08-12 Thread Mark Zelden
On Wed, 12 Aug 2009 07:34:16 -0700, Walter Marguccio
 wrote:

>> - Original Message 
>> From: Scott Rowe scott.r...@joann.com
>
>> BTW, for small packets of data, I believe ESCON links are actually faster
than FICON. 
>> I think I have seen a paper on this, but I can't remember where.
>
>You are right:
>
>http://www-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/WP100743
>
>"A comparison of these examples shows the ESCON CTCs are slightly faster
>than the FICON Express CTCs for 1K messages but ESCON is slower when
>compared to FICON Express 2 as well as the newer ISC3 and ICB4 links."
>
>Walter Marguccio
>z/OS Systems Programmer
>BELENUS LOB Informatic GmbH
>Munich - Germany
>

So FICON is better with current technology regardless.  But for those
using "old" technology, does anyone know what the typical / average size
of GRS XCF messages is in a ring?  Scott, are you lurking?  :-)

Mark
--
Mark Zelden
Sr. Software and Systems Architect - z/OS Team Lead
Zurich North America / Farmers Insurance Group - ZFUS G-ITO
mailto:mark.zel...@zurichna.com
z/OS Systems Programming expert at http://expertanswercenter.techtarget.com/
Mark's MVS Utilities: http://home.flash.net/~mzelden/mvsutil.html



Re: DASD: to share or not to share

2009-08-12 Thread Walter Marguccio
> - Original Message 
> From: Scott Rowe scott.r...@joann.com

> BTW, for small packets of data, I believe ESCON links are actually faster 
> than FICON. 
> I think I have seen a paper on this, but I can't remember where.

You are right:

http://www-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/WP100743

"A comparison of these examples shows the ESCON CTCs are slightly faster
than the FICON Express CTCs for 1K messages but ESCON is slower when
compared to FICON Express 2 as well as the newer ISC3 and ICB4 links."

Walter Marguccio
z/OS Systems Programmer
BELENUS LOB Informatic GmbH
Munich - Germany


  



Re: how-to Sysplex? [was "Re: DASD: to share or not to share"]

2009-08-12 Thread Staller, Allan
I agree. One sysplex for all is not a good design point for the OP. I
believe he could still benefit from a two-sysplex design in the same box:
one for Prod and one for DEV/TEST/QA, with two LPARs in each.


The OP was asking about using Sysplex to join his three monoplexes into
a single Sysplex.  Availability may have been the original purpose of
Sysplex, but it can and has been used for other purposes since its
inception.  You have obviously not been following the recent discussion
that spawned this thread ;-)

 



Re: how-to Sysplex? [was "Re: DASD: to share or not to share"]

2009-08-12 Thread Scott Rowe
Allan,
 
The OP was asking about using Sysplex to join his three monoplexes into a 
single Sysplex.  Availability may have been the original purpose of Sysplex, 
but it can and has been used for other purposes since its inception.  You have 
obviously not been following the recent discussion that spawned this thread ;-)
 
Scott

>>> "Staller, Allan"  8/12/2009 8:54 AM >>>

We have 3 LPARs configured as monoplex.  One for our Production
environment, one for our Test/QA environment, and one for our
Application Development environment.

From this discussion, it sounds like everybody is saying even we would
benefit from Sysplex.


The purpose of a sysplex is z/OS availability. I commonly go 3 months
between outages (production IPLs) and I am in the middle of creating two
base sysplexes: one for test/dev/QA and one for production. After this
project is complete, I expect to be able to go to one production outage
annually (mainly to harden any dynamic changes to the IODF and/or
processor microcode), and perhaps longer. A parallel sysplex would be
wasted money unless you have two or more CECs.

The technical difference between a base and a parallel sysplex is the
Coupling Facility (whether it be internal or stand-alone) and a sysplex
time reference (external) and all of the functions that depend on a CF
(GDPS, VTAM generic resources, hot failover, multi-node persistent
sessions, log streams, JES Checkpoint, ...). The design difference is in
the degree of availability. In a properly designed parallel sysplex, one
could expect 99.9999% availability of *ALL* applications and z/OS itself
(about 31 sec of downtime annually). The best I expect with a base
sysplex is somewhere between 99.99% and 99.999% (52 min and 5 min of
annual downtime, respectively). Many of the things I would *like* to do are only
available in a parallel sysplex, so I have to do without those
functions. I can do everything I *have* to do in a base sysplex.

With deference to Barbara, RACF is not a problem. It runs just fine
without the CF, but can exploit one if available.

One last note, without a CF, many sysplex functions do not scale well to
more than a few images. GRS is a notorious example, as would the JES
checkpoint. With only 2 systems in each plex (PROD & DEV/TEST/QA) you
should not have any significant issues. The Merging Systems into a
Sysplex Redbook (SG24-6818) is a great reference, but does omit some
functions that have become available since it was published in 2002.

HTH, 








Re: how-to Sysplex? [was "Re: DASD: to share or not to share"]

2009-08-12 Thread Staller, Allan
Afterword:

And what about the non-IBM components?  We run ADABAS/Natural from
Software AG.


Most major software vendors have had to coexist in or with a SYSPLEX
environment for at least 10 years. I would expect them to be able to
cope!

Check with the vendor for specific information.

HTH,



Re: how-to Sysplex? [was "Re: DASD: to share or not to share"]

2009-08-12 Thread Staller, Allan

We have 3 LPARs configured as monoplex.  One for our Production
environment, one for our Test/QA environment, and one for our
Application Development environment.

From this discussion, it sounds like everybody is saying even we would
benefit from Sysplex.


The purpose of a sysplex is z/OS availability. I commonly go 3 months
between outages (production IPLs) and I am in the middle of creating two
base sysplexes: one for test/dev/QA and one for production. After this
project is complete, I expect to be able to go to one production outage
annually (mainly to harden any dynamic changes to the IODF and/or
processor microcode), and perhaps longer. A parallel sysplex would be
wasted money unless you have two or more CECs.

The technical difference between a base and a parallel sysplex is the
Coupling Facility (whether it be internal or stand-alone) and a sysplex
time reference (external) and all of the functions that depend on a CF
(GDPS, VTAM generic resources, hot failover, multi-node persistent
sessions, log streams, JES Checkpoint, ...). The design difference is in
the degree of availability. In a properly designed parallel sysplex, one
could expect 99.9999% availability of *ALL* applications and z/OS itself
(about 31 sec of downtime annually). The best I expect with a base
sysplex is somewhere between 99.99% and 99.999% (52 min and 5 min of
annual downtime, respectively). Many of the things I would *like* to do are only
available in a parallel sysplex, so I have to do without those
functions. I can do everything I *have* to do in a base sysplex.
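[Editorial aside: the downtime figures quoted in this thread are easy to sanity-check. A quick sketch, not part of the original posts, converting an availability percentage into annual downtime (assuming a 365-day year):]

```python
# Convert an availability percentage into downtime per year.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes

def downtime_minutes_per_year(availability_pct):
    """Annual downtime implied by a given availability percentage."""
    return (1.0 - availability_pct / 100.0) * MINUTES_PER_YEAR

# 99.9999% ("six nines") -> about 31.5 seconds/year
# 99.999%  ("five nines") -> about 5.3 minutes/year
# 99.99%   ("four nines") -> about 52.6 minutes/year
```

This is why "about 31 sec of downtime annually" corresponds to six nines, while a base sysplex in the 99.99%-99.999% range implies roughly 5 to 53 minutes a year.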

With deference to Barbara, RACF is not a problem. It runs just fine
without the CF, but can exploit one if available.

One last note, without a CF, many sysplex functions do not scale well to
more than a few images. GRS is a notorious example, as would the JES
checkpoint. With only 2 systems in each plex (PROD & DEV/TEST/QA) you
should not have any significant issues. The Merging Systems into a
Sysplex Redbook (SG24-6818) is a great reference, but does omit some
functions that have become available since it was published in 2002.

HTH, 



how-to Sysplex? [was "Re: DASD: to share or not to share"]

2009-08-11 Thread Stephen Y Odo
We have 3 LPARs configured as monoplex.  One for our Production
environment, one for our Test/QA environment, and one for our
Application Development environment.

From this discussion, it sounds like everybody is saying even we would
benefit from Sysplex.

If we did do Sysplex, would we still be able to maintain that separation
between the environments?  How?  Does anybody have a "cookbook" on how
to go from where we are to where we should be?  And how does the change
affect the way we do things? i.e. what do we need to warn our customers
about?

I've looked at a couple of Redbooks ("Merging systems into a Sysplex",
etc.) and I know there's lots of others out there.

Also, most of the stuff out there talks about parallel sysplex.  What
are the differences for a Basic Sysplex?  Which parts do I do differently?

And what about the non-IBM components?  We run ADABAS/Natural from
Software AG.

Thanks for any pointers you can provide.

--Stephen





Scott Rowe wrote:
> I think the real question here is: Why don't you have a SYSPLEX? 
There is little to no cost involved,
> and many benefits.

Arthur Gutowski wrote:
> Still, Sysplex or MIM are the way to go (GRS ring oustide the Sysplex
won't
> live much longer).



Re: DASD: to share or not to share

2009-08-10 Thread Gibney, Dave
> -Original Message-
> From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On
> Behalf Of R.S.
> Sent: Monday, August 10, 2009 4:53 AM
> To: IBM-MAIN@bama.ua.edu
> Subject: Re: DASD: to share or not to share
> 
> Gibney, Dave writes:
> > VSAM (and all SMS datasets) can't exist without being cataloged.
> 
> Well... To be accurate: they cannot be *used* without being cataloged.
> But they can exist.
> 
> 
> > Shared
> > Catalogs are not "safe" without integrity protection.
> 
> Huh? AFAIK it is safe and fully supported to share a UCAT without GRS
> or MIM. That means RESERVEs, which can hurt performance, but it is
> possible to safely and effectively share a low-used UCAT. BTDT.

OK, I'll take yours and Mark's word for it. I'm sure I remember a problem with 
shared Catalogs sometime in the past. This memory could be a figment. 

I can have any volume online to any LPAR, IF I NEED TO, but in general, I 
isolate most application data volumes (SMS pools) to the correct LPAR. 


> 
> So, it is possible to share volumes, including SMS managed volumes and
> VSAM files. Application files.
> I would be very careful when talking about sharing system datasets and
> volumes because the system is the most breaking rules user of the
> datasets.
> Last but not least: sharing datasets doesn't mean that many
> applications
> can update them concurrently. It can, or (more likely) cannot, be done
> even within a single MVS image.
> 
> 
> --
> Radoslaw Skorupka
> Lodz, Poland
> 
> 
> --
> BRE Bank SA
> ul. Senatorska 18
> 00-950 Warszawa
> www.brebank.pl
> 
> Sąd Rejonowy dla m. st. Warszawy
> XII Wydział Gospodarczy Krajowego Rejestru Sądowego,
> nr rejestru przedsiębiorców KRS 025237
> NIP: 526-021-50-88
> Według stanu na dzień 01.01.2009 r. kapitał zakładowy BRE Banku SA (w
> całości wpłacony) wynosi 118.763.528 złotych. W związku z realizacją
> warunkowego podwyższenia kapitału zakładowego, na podstawie uchwały XXI
> WZ z dnia 16 marca 2008r., oraz uchwały XVI NWZ z dnia 27 października
> 2008r., może ulec podwyższeniu do kwoty 123.763.528 zł. Akcje w
> podwyższonym kapitale zakładowym BRE Banku SA będą w całości opłacone.
> 



Re: DASD: to share or not to share

2009-08-10 Thread McKown, John
> -Original Message-
> From: IBM Mainframe Discussion List 
> [mailto:ibm-m...@bama.ua.edu] On Behalf Of R.S.
> Sent: Monday, August 10, 2009 11:41 AM
> To: IBM-MAIN@bama.ua.edu
> Subject: Re: DASD: to share or not to share

> 
> In fact VSAM datasets are safe as long as an application respects
> shareoptions. For PS datasets there is no system-supported way to
> share them within a single image (except DISP=SHR); however, it is
> worth mentioning that there is no GRS ENQ. That means the ENQ is not
> visible on another system.

Enhanced Data Integrity,

http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DGT2DI30/4.4


Enhanced data integrity prevents accidental data loss when concurrent users are 
writing to a shared sequential data set. 


http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/DGT2D470/3.4.1


 You can concurrently access a shared sequential data set for output or update 
processing. In some cases, you can lose or destroy data when you update the 
data set, because one user could, at the same time, overwrite another user's 
updates.

The enhanced data integrity function prevents this type of data loss. This data 
integrity function either ends the program that is opening a sequential data 
set that is already opened for writing, or it writes only a warning message but 
allows the data set to open. Only sequential data sets can use the enhanced 
data integrity function. 
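[Editorial aside: for reference, the enhanced data integrity function described above is controlled through the IFGPSEDI parmlib member. A hedged sketch; the data set name below is a placeholder, not from the post:]

```
/* IFGPSEDI sketch.  MODE(WARN) writes a warning message but allows    */
/* the OPEN; MODE(ENFORCE) fails a second writer's OPEN.  DSN entries  */
/* name data sets exempted from EDI checking (placeholder name below). */
MODE(WARN)
DSN(HLQ.EXEMPT.SEQDS)
```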



> 
> -- 
> Radoslaw Skorupka
> Lodz, Poland



John McKown 

Systems Engineer IV

IT

 

Administrative Services Group

 

HealthMarkets(r)

 

9151 Boulevard 26 * N. Richland Hills * TX 76010

(817) 255-3225 phone * (817)-961-6183 cell

john.mck...@healthmarkets.com * www.HealthMarkets.com

 

Confidentiality Notice: This e-mail message may contain confidential or 
proprietary information. If you are not the intended recipient, please contact 
the sender by reply e-mail and destroy all copies of the original message. 
HealthMarkets(r) is the brand name for products underwritten and issued by the 
insurance subsidiaries of HealthMarkets, Inc. -The Chesapeake Life Insurance 
Company(r), Mid-West National Life Insurance Company of TennesseeSM and The 
MEGA Life and Health Insurance Company.SM

 



Re: DASD: to share or not to share

2009-08-10 Thread R.S.

Mark Zelden writes:
[...]

It is safe and fully supported to share a UCAT without GRS
or MIM. That means RESERVEs, which can hurt performance, but it is possible
to safely and effectively share a low-used UCAT. BTDT.



Somehow I missed commenting on that part in my last response.  Yes,
RESERVE will protect the catalog (but again, not the data sets cataloged in the 
catalog).  [...]


In fact VSAM datasets are safe as long as an application respects
shareoptions. For PS datasets there is no system-supported way to
share them within a single image (except DISP=SHR); however, it is worth
mentioning that there is no GRS ENQ. That means the ENQ is not visible on
another system.
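[Editorial aside: since shareoptions come up repeatedly in this thread, note that VSAM cross-region and cross-system shareoptions are fixed when the cluster is defined. A hedged IDCAMS sketch; the cluster name and sizing values are placeholders:]

```
/* SHAREOPTIONS(2 3): cross-region 2 = one writer OR many readers, with */
/* read integrity left to the application; cross-system 3 = unrestricted */
/* sharing, with integrity entirely up to the applications.              */
DEFINE CLUSTER (NAME(HLQ.SHARED.KSDS) -
        INDEXED -
        KEYS(8 0) -
        RECORDSIZE(80 80) -
        RECORDS(1000 100) -
        SHAREOPTIONS(2 3))
```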


--
Radoslaw Skorupka
Lodz, Poland





Re: DASD: to share or not to share

2009-08-10 Thread Walter Marguccio
> - Original Message 
> From: Mark Zelden mark.zel...@zurichna.com

> Out of curiosity:  What type of XCF links (ESCON or FICON)?  How many?

we have ESCON XCF links. Every LPAR has 4 PATHIN and 4 PATHOUT
which connect to/from each of the other LPARs.

> Do you have XCF transport classes set up / tuned?

I have a DEFAULT transport class (class length 956) and a BIG class
(class length 16316). Only one PATHOUT among the 4 is associated with the BIG class.
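[Editorial aside: transport classes like these are defined with CLASSDEF statements in the COUPLExx parmlib member. A hedged sketch; only the class lengths come from Walter's note, while the class/group names and device numbers are placeholders:]

```
/* COUPLExx fragment: a small default class plus a BIG class for long  */
/* messages; one outbound path is dedicated to BIG traffic.            */
CLASSDEF CLASS(DEFAULT) CLASSLEN(956)   GROUP(UNDESIG)
CLASSDEF CLASS(BIG)     CLASSLEN(16316) GROUP(UNDESIG)
PATHOUT  DEVICE(4E00) CLASS(BIG)
PATHOUT  DEVICE(4E01,4E02,4E03)
PATHIN   DEVICE(4F00,4F01,4F02,4F03)
```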


Walter Marguccio
z/OS Systems Programmer
BELENUS LOB Informatic GmbH
Munich - Germany




Re: DASD: to share or not to share

2009-08-10 Thread Scott Rowe
Yes, GRS using XCF is far superior in many ways, I would certainly not advise 
using GRS ring (without XCF) for a 4 way GRSplex, but using XCF it would 
certainly work.  Of course, GRS-Star would be even better, but that is a little 
trickier.
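[Editorial aside: GRS star mode is selected with the GRS system parameter at IPL and requires the ISGLOCK lock structure in the active CFRM policy. A hedged sketch; the structure size and CF names are illustrative assumptions, not Scott's configuration:]

```
/* IEASYSxx: select star mode at IPL */
GRS=STAR

/* CFRM policy fragment: star mode requires the ISGLOCK lock structure. */
/* SIZE is in 1K units; CF names are placeholders.                      */
STRUCTURE NAME(ISGLOCK)
          SIZE(133120)
          PREFLIST(CF01,CF02)
```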
 
BTW, for small packets of data, I believe ESCON links are actually faster than 
FICON.  I think I have seen a paper on this, but I can't remember where.

>>> Mark Zelden  8/10/2009 9:41 AM >>>
On Sun, 9 Aug 2009 21:01:42 -0700, Gibney, Dave  wrote:


>
>True, and last time we revamped the IODF I had the required CTC(s)
>added. That was over a year ago when we brought in the z9-BC and is the
>last time I had a chance to take a baby step that direction. Also, on
>this list, I'm led to understand that our four LPARs would push or
>exceed GRS ring performance.
>

Yes, for the "old" GRS ring, 4 systems would probably be a bigger performance
hit than you would ever want.   However, if you set up a basic sysplex, GRS 
performance using XCF is better than dedicated CTC links.  Especially since
you can have FICON CTCs.  Dedicated GRS links (what I referred to as "old"
above) can't get any faster than ESCON in basic mode.  So even XCF links
using ESCON in CTC mode is faster and may give you acceptable performance
with 4 systems.  As usual, YMMV.  I can't say I have been in any 4-system
GRS ring (using XCF) shops in the last 10 years, so perhaps someone with
that configuration can share.  I do have plenty of experience with GRS ring
prior to sysplex and I can say that even a 3 system ring presented some 
performance.. um... challenges.  :-)

Mark
--
Mark Zelden
Sr. Software and Systems Architect - z/OS Team Lead
Zurich North America / Farmers Insurance Group - ZFUS G-ITO
mailto:mark.zel...@zurichna.com 
z/OS Systems Programming expert at http://expertanswercenter.techtarget.com/ 
Mark's MVS Utilities: http://home.flash.net/~mzelden/mvsutil.html 








Re: DASD: to share or not to share

2009-08-10 Thread Mark Zelden
On Mon, 10 Aug 2009 07:20:35 -0700, Walter Marguccio
 wrote:

>>>On Sun, 9 Aug 2009 21:01:42 -0700, Gibney, Dave  wrote:
>
>>> I'm led to understand that our four LPARs would push or exceed GRS ring
performance.
>
>>From: Mark Zelden mark.zel...@zurichna.com
>
>> I can't say I have been in any 4 system GRS ring (using XCF) shops
>> in the last 10 years, so perhaps someone with that configuration can share.
>
>we have such configuration. We recently added a 4th LPAR to our existing
>3-LPARs basic sysplex. I had the same concerns as Dave's, but after more than
>one month I can say GRS ring (exploiting XCF) is not a problem.
>I use RMF PM to keep on eye on XCF delays, and the average delay GRS shows
>is around 4 %.  Which is acceptable to me. We have still a 2086-230, z/OS 1.7
>everywhere except z/OS 1.9 on the sandbox LPAR.
>
>
>Walter Marguccio
>z/OS Systems Programmer
>BELENUS LOB Informatic GmbH
>Munich - Germany
>

Out of curiosity:  What type of XCF links (ESCON or FICON)?   How many?
Do you have XCF transport classes set up / tuned?

Mark
--
Mark Zelden
Sr. Software and Systems Architect - z/OS Team Lead
Zurich North America / Farmers Insurance Group - ZFUS G-ITO
mailto:mark.zel...@zurichna.com
z/OS Systems Programming expert at http://expertanswercenter.techtarget.com/
Mark's MVS Utilities: http://home.flash.net/~mzelden/mvsutil.html



Re: DASD: to share or not to share

2009-08-10 Thread Walter Marguccio
>>On Sun, 9 Aug 2009 21:01:42 -0700, Gibney, Dave  wrote:

>> I'm led to understand that our four LPARs would push or exceed GRS ring
>>performance.

>From: Mark Zelden mark.zel...@zurichna.com

> I can't say I have been in any 4 system GRS ring (using XCF) shops
> in the last 10 years, so perhaps someone with that configuration can share.

we have such configuration. We recently added a 4th LPAR to our existing 
3-LPARs basic sysplex. I had the same concerns as Dave's, but after more than 
one month I can say GRS ring (exploiting XCF) is not a problem.
I use RMF PM to keep an eye on XCF delays, and the average delay GRS shows
is around 4 %.  Which is acceptable to me. We have still a 2086-230, z/OS 1.7
everywhere except z/OS 1.9 on the sandbox LPAR.


Walter Marguccio
z/OS Systems Programmer
BELENUS LOB Informatic GmbH
Munich - Germany 


  



Re: DASD: to share or not to share

2009-08-10 Thread Scott Rowe
As the only Sysprog here (real or otherwise), I understand your time concerns.  
However, in my situation I decided that the benefits outweighed the cost (in 
time).  I find that many of the features of SYSPLEX can save me quite a bit of 
time (once it is set up).  Just the simplicity of having all DASD online to all 
systems can be a benefit.  Of course, YMMV, but I think many small shops decide 
against SYSPLEX without ever considering the benefits.
 
I am curious why you still put a $ next to SYSPLEX?

>>> "Gibney, Dave"  8/9/2009 5:39 PM >>>
VSAM (and all SMS datasets) can't exist without being cataloged. Shared
Catalogs are not "safe" without integrity protection. Integrity
protection takes MIM ($) or Sysplex ($) and both require time to
configure and maintain. As the only real z/OS Sysprog here, I'm hard
pressed to keep up as it is, so it's unlikely I'll ever plex and I know
we'll never buy MIM.

I should have been more specific. I only share a limited set of non-SMS
volumes and I do not put VSAM on them. Our CIS guy uses his shared
volumes for some VSAM, but the data is not really shared as the Catalogs
are not shared.

I don't share any SMS pools. If they want Production data for testing,
they have to make a copy (generally using a couple of shared volumes
designated for that specific purpose). Most of the shared datasets are
JCL, PROC, LOAD, ISPxLIB, PARMLIB(s), etc. 

I am fully aware of the risks I'm taking (I think so anyway). 

What I need to do to remain employed here until retirement is learn then
convince the Powers that Be, that zLinux is the best platform for the
Oracle based ERP that's almost inevitable.

> 
> Thanks for you patience in educating this lowly applications
developer.
> 
> Frank
> --
> 
> Frank Swarbrick
> Applications Architect - Mainframe Applications Development
> FirstBank Data Corporation
> Lakewood, CO  USA
> P: 303-235-1403
> F: 303-235-2075
> 
> 
> On 8/9/2009 at 11:45 AM, in message
> , Mark Zelden
>  wrote:
> > If children play with fire, they will eventually get burned!
> >
> >
> > On Sat, 8 Aug 2009 22:42:04 -0600, Frank Swarbrick
> >  wrote:
> >
> >>That is exactly what I did.  Well, "as quickly as I could type", in
any
> case.
> >>We have PDSESHARING(NORMAL) in our IGDSMSxx file, for whatever that
> might
> > be worth.
> >>
> >
> > It's worth nothing in regards to sharing PDSE across sysplex
boundaries
> for
> > anything but READ ONLY functions.
> >
> >>On 8/8/2009 at 1:31 AM, in message
> >>,
> "Gibney,
> >>Dave"  wrote:
> >>> Actually I was speculating about the ability to "refresh" in
memory
> >>> knowledge of the PDSE(s) in the other LPAR(s).
> >
> > There is no command or facility to do that.  There is the sledge
hammer
> > approach of IPLing.   :-)   Well... it may not be all that bad -
more
> below.
> >
> >
> >>> What you describe is not
> >>> guaranteed.
> >>>   Try 1. Run existing copy of the program in LPAR-P. 2. Quickly
update
> >>> it from LPAR-other. 3. Quickly try in LPAR-P. I don't believe you
will
> >>> always get the new version.
> >
> > Apparently he did.  But why?  (more below)
> >
> >>>
> >>>> -Original Message-
> >>>> From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] 
On
> >>>> Behalf Of Frank Swarbrick
> >>>> Sent: Friday, August 07, 2009 1:34 PM
> >>>> To: IBM-MAIN@bama.ua.edu 
> >>>> Subject: Re: DASD: to share or not to share
> >>>>
> >>>> So for example, if our change control process for applications
runs
> in
> >>> DEV
> >>>> (which is how we have it in VSE) we should be able to update our
> >>>> production application loadlib PDSE from DEV exclusively and this
> will
> >>> not
> >>>> be a problem, even without a Sysplex?  I am curious as to where I
> find
> >>>> this PDSE address space refresh command, and if it's really
needed.
> I
> >>>> just compiled a program in to a PDSE in DEV and ran it in PROD
and it
> >>> ran
> >>>> the new version just fine.  Did it twice just to make sure.  No
> >>> problem
> >>>> either time.
> >>>>
> >
> > This probably worked for 2 reasons:
> >
> > 1) Nothing else had the target loadlib allocated and / or opened.
> >
> > 2) PDSE(1)_BUFFER_BEYOND_CLOSE was not set to YES.
> >
> > Try the sa

Re: DASD: to share or not to share

2009-08-10 Thread Mark Zelden
On Sun, 9 Aug 2009 21:01:42 -0700, Gibney, Dave  wrote:


>
>True, and last time we revamped the IODF I had the required CTC(s)
>added. That was over a year ago when we brought in the z9-BC and is the
>last time I had a chance to take a baby step that direction. Also, on
>this list, I'm led to understand that our four LPARs would push or
>exceed GRS ring performance.
>

Yes, for the "old" GRS ring, 4 systems would probably be a bigger performance
hit than you would ever want.   However, if you set up a basic sysplex, GRS 
performance using XCF is better than with dedicated CTC links, especially since
you can have FICON CTCs.  Dedicated GRS links (what I referred to as "old"
above) can't get any faster than ESCON in basic mode.  So even XCF links
using ESCON in CTC mode are faster and may give you acceptable performance
with 4 systems.  As usual, YMMV.  I can't say I have been in any 4 system
GRS ring (using XCF) shops in the last 10 years, so perhaps someone with
that configuration can share.  I do have plenty of experience with GRS ring
prior to sysplex and I can say that even a 3 system ring presented some 
performance.. um... challenges.  :-)
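To make the scaling argument concrete: in ring mode an ENQ request has to circulate among all members before it completes, while in star mode it is a single exchange with the coupling facility lock structure. A toy Python sketch of that difference (purely conceptual and mine, not GRS internals; the hop counts are notional, not measurements):

```python
# Conceptual sketch only -- not GRS internals. It models why ring-mode
# ENQ cost grows with the number of systems while star-mode cost does not.

def ring_enq_hops(n_systems: int) -> int:
    """Ring mode: a request rides the token around every member of the
    ring, so its cost is proportional to the number of systems."""
    return n_systems

def star_enq_hops(n_systems: int) -> int:
    """Star mode: one request/response exchange with the CF lock
    structure, independent of the number of systems."""
    return 2

for n in (2, 3, 4, 8):
    print(f"{n} systems: ring ~{ring_enq_hops(n)} hops, star ~{star_enq_hops(n)} hops")
```

Under this (simplified) model a 4-system ring already costs twice what a 2-system ring does per request, which matches the "push or exceed" concern above.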

Mark
--
Mark Zelden
Sr. Software and Systems Architect - z/OS Team Lead
Zurich North America / Farmers Insurance Group - ZFUS G-ITO
mailto:mark.zel...@zurichna.com
z/OS Systems Programming expert at http://expertanswercenter.techtarget.com/
Mark's MVS Utilities: http://home.flash.net/~mzelden/mvsutil.html

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: DASD: to share or not to share

2009-08-10 Thread Mark Zelden
On Mon, 10 Aug 2009 13:53:29 +0200, R.S.  wrote:

>Gibney, Dave pisze:
>> VSAM (and all SMS datasets) can't exist without being cataloged.
>
>Well... To be accurate: they cannot be *used* without being cataloged.
>But they can exist.
>
>
>> Shared
>> Catalogs are not "safe" without integrity protection.
>
>Huh? AFAIK it is safe and fully supported to share a UCAT without GRS
>or MIM. That means RESERVEs, which can hurt performance, but it is possible
>to safely and effectively share a lightly used UCAT. BTDT.
>

Somehow I missed commenting on that part in my last response.  Yes,
RESERVE will protect the catalog (but again, not the data sets cataloged in the 
catalog).  From the Managing Catalogs manual:

"CATALOG MANAGEMENT uses the SYSIGGV2 reserve while serializing
access to catalogs. The SYSIGGV2 reserve is used to serialize the entire 
catalog BCS component across all I/O as well as to serialize access to
specific catalog entries. The SYSZVVDS reserve is used to serialize 
access to associated VVDS records. The SYSZVVDS reserve along
with the SYSIGGV2 reserve provide an essential mechanism to facilitate 
cross system sharing of catalogs."
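The SYSIGGV2 behavior quoted above amounts to one coarse lock over the whole catalog. A rough software analogy in Python (the class and names are mine and purely illustrative; a real RESERVE serializes at the device level, not in application code):

```python
# Rough analogy only: a single coarse lock standing in for a RESERVE.
# Every update AND every lookup waits on it, which is why RESERVE-based
# sharing of a busy user catalog hurts performance.
import threading

class Catalog:
    def __init__(self):
        self._reserve = threading.Lock()   # plays the role of SYSIGGV2
        self.entries = {}

    def update(self, name, volser):
        with self._reserve:                # whole-catalog serialization
            self.entries[name] = volser

    def locate(self, name):
        with self._reserve:                # even reads wait for the reserve
            return self.entries.get(name)

cat = Catalog()
cat.update("PROD.PAYROLL.MASTER", "VOL001")
print(cat.locate("PROD.PAYROLL.MASTER"))   # prints VOL001
```

The design point is granularity: one lock for the entire BCS keeps the catalog consistent across systems, at the cost of serializing everything behind it.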

--
Mark Zelden
Sr. Software and Systems Architect - z/OS Team Lead
Zurich North America / Farmers Insurance Group - ZFUS G-ITO
mailto:mark.zel...@zurichna.com
z/OS Systems Programming expert at http://expertanswercenter.techtarget.com/
Mark's MVS Utilities: http://home.flash.net/~mzelden/mvsutil.html



Re: DASD: to share or not to share

2009-08-10 Thread R.S.

Gibney, Dave pisze:
VSAM (and all SMS datasets) can't exist without being cataloged. 


Well... To be accurate: they cannot be *used* without being cataloged. 
But they can exist.




Shared
Catalogs are not "safe" without integrity protection. 


Huh? AFAIK it is safe and fully supported to share a UCAT without GRS 
or MIM. That means RESERVEs, which can hurt performance, but it is possible 
to safely and effectively share a lightly used UCAT. BTDT.


So, it is possible to share volumes, including SMS-managed volumes and 
VSAM files (application files, that is).
I would be very careful about sharing system datasets and volumes, 
because the system itself is the user most likely to break the rules for 
those datasets.
Last but not least: sharing datasets doesn't mean that many applications 
can update them concurrently. That may or (more likely) may not be possible 
even within a single MVS image.



--
Radoslaw Skorupka
Lodz, Poland


--
BRE Bank SA
ul. Senatorska 18
00-950 Warszawa
www.brebank.pl




Re: DASD: to share or not to share

2009-08-09 Thread Gibney, Dave
> -Original Message-
> From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On
> Behalf Of Mark Zelden
> Sent: Sunday, August 09, 2009 8:17 PM
> To: IBM-MAIN@bama.ua.edu
> Subject: Re: DASD: to share or not to share
> 
> On Sun, 9 Aug 2009 14:39:32 -0700, Gibney, Dave 
wrote:
> 
> >
> >VSAM (and all SMS datasets) can't exist without being cataloged.
Shared
> >Catalogs are not "safe" without integrity protection. Integrity
> >protection takes MIM ($) or Sysplex ($) and both require time to
> >configure and maintain. As the only real z/OS Sysprog here, I'm hard
> >pressed to keep up as it is, so it's unlikely I'll ever plex and I
know
> >we'll never buy MIM.
> >
> 
> 
> GRS is free and it doesn't require a sysplex.   Or you can use a basic
> sysplex.  Yes, there are some configuration steps, but once set up it
> isn't likely to eat into your time.


True, and last time we revamped the IODF I had the required CTC(s)
added. That was over a year ago, when we brought in the z9-BC, and is the
last time I had a chance to take a baby step in that direction. Also, on
this list, I'm led to understand that our four LPARs would push or
exceed GRS ring performance. 

> 
> Mark
> --
> Mark Zelden
> Sr. Software and Systems Architect - z/OS Team Lead
> Zurich North America / Farmers Insurance Group - ZFUS G-ITO
> mailto:mark.zel...@zurichna.com
> z/OS Systems Programming expert at
> http://expertanswercenter.techtarget.com/
> Mark's MVS Utilities: http://home.flash.net/~mzelden/mvsutil.html
> 



Re: DASD: to share or not to share

2009-08-09 Thread Mark Zelden
On Sun, 9 Aug 2009 14:39:32 -0700, Gibney, Dave  wrote:

>
>VSAM (and all SMS datasets) can't exist without being cataloged. Shared
>Catalogs are not "safe" without integrity protection. Integrity
>protection takes MIM ($) or Sysplex ($) and both require time to
>configure and maintain. As the only real z/OS Sysprog here, I'm hard
>pressed to keep up as it is, so it's unlikely I'll ever plex and I know
>we'll never buy MIM.
>


GRS is free and it doesn't require a sysplex.   Or you can use a basic
sysplex.  Yes, there are some configuration steps, but once set up it
isn't likely to eat into your time.  

Mark
--
Mark Zelden
Sr. Software and Systems Architect - z/OS Team Lead
Zurich North America / Farmers Insurance Group - ZFUS G-ITO
mailto:mark.zel...@zurichna.com
z/OS Systems Programming expert at http://expertanswercenter.techtarget.com/
Mark's MVS Utilities: http://home.flash.net/~mzelden/mvsutil.html



Re: DASD: to share or not to share

2009-08-09 Thread Gibney, Dave
CIS <=> CICS, sorry about the typo

> -Original Message-
> From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On
> Behalf Of Gibney, Dave
> Sent: Sunday, August 09, 2009 2:40 PM
> To: IBM-MAIN@bama.ua.edu
> Subject: Re: DASD: to share or not to share
> 
> > -Original Message-
> > From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On
> > Behalf Of Frank Swarbrick
> > Sent: Sunday, August 09, 2009 12:57 PM
> > To: IBM-MAIN@bama.ua.edu
> > Subject: Re: DASD: to share or not to share
> >
> > I believe you!
> >
> > Can I restate as follows?
> > If we do not have a sysplex we should not be sharing PDSE datasets
> between
> > LPARs because an update to the PDSE in one LPAR is not guaranteed to
> be
> > seen immediately (or ever?) by another LPAR.  (Read only PDSEs are
OK
> > because they are not updated.)
> >
> > I assume we are still fine with sharing regular PDS datasets between
> LPARs
> > without a sysplex?  What about other types, such as regular
sequential
> > datasets and VSAM datasets?  Having a non-sysplex are we OK here?
> 
> VSAM (and all SMS datasets) can't exist without being cataloged.
Shared
> Catalogs are not "safe" without integrity protection. Integrity
> protection takes MIM ($) or Sysplex ($) and both require time to
> configure and maintain. As the only real z/OS Sysprog here, I'm hard
> pressed to keep up as it is, so it's unlikely I'll ever plex and I
know
> we'll never buy MIM.
> 
> I should have been more specific. I only share a limited set of
non-SMS
> volumes and I do not put VSAM on them. Our CIS guy uses his shared
> volumes for some VSAM, but the data is not really shared as the
Catalogs
> are not shared.
> 
> I don't share any SMS pools. If they want Production data for testing,
> they have to make a copy (generally using a couple of shared volumes
> designated for that specific purpose). Most of the shared datasets are
> JCL, PROC, LOAD, ISPxLIB, PARMLIB(s), etc.
> 
> I am fully aware of the risks I'm taking (I think so anyway).
> 
> What I need to do to remain employed here until retirement is learn
then
> convince the Powers that Be, that zLinux is the best platform for the
> Oracle based ERP that's almost inevitable.
> 
> >
> > Thanks for your patience in educating this lowly applications
> developer.
> >
> > Frank
> > --
> >
> > Frank Swarbrick
> > Applications Architect - Mainframe Applications Development
> > FirstBank Data Corporation
> > Lakewood, CO  USA
> > P: 303-235-1403
> > F: 303-235-2075
> >
> >
> > On 8/9/2009 at 11:45 AM, in message
> > , Mark Zelden
> >  wrote:
> > > If children play with fire, they will eventually get burned!
> > >
> > >
> > > On Sat, 8 Aug 2009 22:42:04 -0600, Frank Swarbrick
> > >  wrote:
> > >
> > >>That is exactly what I did.  Well, "as quickly as I could type",
in
> any
> > case.
> > >>We have PDSESHARING(NORMAL) in our IGDSMSxx file, for whatever
that
> > might
> > > be worth.
> > >>
> > >
> > > It's worth nothing in regards to sharing PDSE across sysplex
> boundaries
> > for
> > > anything but READ ONLY functions.
> > >
> > >>On 8/8/2009 at 1:31 AM, in message
> > >>,
> > "Gibney,
> > >>Dave"  wrote:
> > >>> Actually I was speculating about the ability to "refresh" in
> memory
> > >>> knowledge of the PDSE(s) in the other LPAR(s).
> > >
> > > There is no command or facility to do that.  There is the sledge
> hammer
> > > approach of IPLing.   :-)   Well... it may not be all that bad -
> more
> > below.
> > >
> > >
> > >>> What you describe is not
> > >>> guaranteed.
> > >>>   Try 1. Run existing copy of the program in LPAR-P. 2. Quickly
> update
> > >>> it from LPAR-other. 3. Quickly try in LPAR-P. I don't believe
you
> will
> > >>> always get the new version.
> > >
> > > Apparently he did.  But why?  (more below)
> > >
> > >>>
> > >>>> -Original Message-
> > >>>> From: IBM Mainframe Discussion List
[mailto:ibm-m...@bama.ua.edu]
> On
> > >>>> Behalf Of Frank Swarbrick
> > >>>> Sent: Friday, August 07, 2009 1:34 PM
> > >>>> To: IBM-MAIN@bama.ua.edu
> > >>>> Subject

Re: DASD: to share or not to share

2009-08-09 Thread Gibney, Dave
> -Original Message-
> From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On
> Behalf Of Frank Swarbrick
> Sent: Sunday, August 09, 2009 12:57 PM
> To: IBM-MAIN@bama.ua.edu
> Subject: Re: DASD: to share or not to share
> 
> I believe you!
> 
> Can I restate as follows?
> If we do not have a sysplex we should not be sharing PDSE datasets
between
> LPARs because an update to the PDSE in one LPAR is not guaranteed to
be
> seen immediately (or ever?) by another LPAR.  (Read only PDSEs are OK
> because they are not updated.)
> 
> I assume we are still fine with sharing regular PDS datasets between
LPARs
> without a sysplex?  What about other types, such as regular sequential
> datasets and VSAM datasets?  Having a non-sysplex are we OK here?

VSAM (and all SMS datasets) can't exist without being cataloged. Shared
Catalogs are not "safe" without integrity protection. Integrity
protection takes MIM ($) or Sysplex ($) and both require time to
configure and maintain. As the only real z/OS Sysprog here, I'm hard
pressed to keep up as it is, so it's unlikely I'll ever plex and I know
we'll never buy MIM.

I should have been more specific. I only share a limited set of non-SMS
volumes and I do not put VSAM on them. Our CIS guy uses his shared
volumes for some VSAM, but the data is not really shared as the Catalogs
are not shared.

I don't share any SMS pools. If they want Production data for testing,
they have to make a copy (generally using a couple of shared volumes
designated for that specific purpose). Most of the shared datasets are
JCL, PROC, LOAD, ISPxLIB, PARMLIB(s), etc. 

I am fully aware of the risks I'm taking (I think so anyway). 

What I need to do to remain employed here until retirement is learn then
convince the Powers that Be, that zLinux is the best platform for the
Oracle based ERP that's almost inevitable.

> 
> Thanks for your patience in educating this lowly applications
developer.
> 
> Frank
> --
> 
> Frank Swarbrick
> Applications Architect - Mainframe Applications Development
> FirstBank Data Corporation
> Lakewood, CO  USA
> P: 303-235-1403
> F: 303-235-2075
> 
> 
> On 8/9/2009 at 11:45 AM, in message
> , Mark Zelden
>  wrote:
> > If children play with fire, they will eventually get burned!
> >
> >
> > On Sat, 8 Aug 2009 22:42:04 -0600, Frank Swarbrick
> >  wrote:
> >
> >>That is exactly what I did.  Well, "as quickly as I could type", in
any
> case.
> >>We have PDSESHARING(NORMAL) in our IGDSMSxx file, for whatever that
> might
> > be worth.
> >>
> >
> > It's worth nothing in regards to sharing PDSE across sysplex
boundaries
> for
> > anything but READ ONLY functions.
> >
> >>On 8/8/2009 at 1:31 AM, in message
> >>,
> "Gibney,
> >>Dave"  wrote:
> >>> Actually I was speculating about the ability to "refresh" in
memory
> >>> knowledge of the PDSE(s) in the other LPAR(s).
> >
> > There is no command or facility to do that.  There is the sledge
hammer
> > approach of IPLing.   :-)   Well... it may not be all that bad -
more
> below.
> >
> >
> >>> What you describe is not
> >>> guaranteed.
> >>>   Try 1. Run existing copy of the program in LPAR-P. 2. Quickly
update
> >>> it from LPAR-other. 3. Quickly try in LPAR-P. I don't believe you
will
> >>> always get the new version.
> >
> > Apparently he did.  But why?  (more below)
> >
> >>>
> >>>> -Original Message-
> >>>> From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu]
On
> >>>> Behalf Of Frank Swarbrick
> >>>> Sent: Friday, August 07, 2009 1:34 PM
> >>>> To: IBM-MAIN@bama.ua.edu
> >>>> Subject: Re: DASD: to share or not to share
> >>>>
> >>>> So for example, if our change control process for applications
runs
> in
> >>> DEV
> >>>> (which is how we have it in VSE) we should be able to update our
> >>>> production application loadlib PDSE from DEV exclusively and this
> will
> >>> not
> >>>> be a problem, even without a Sysplex?  I am curious as to where I
> find
> >>>> this PDSE address space refresh command, and if it's really
needed.
> I
> >>>> just compiled a program in to a PDSE in DEV and ran it in PROD
and it
> >>> ran
> >>>> the new version just fine.  Did it twice just to make sure.  No
> >>> problem
> >>>> either time.
> >>>>
>

Re: DASD: to share or not to share

2009-08-09 Thread Mark Zelden
On Sun, 9 Aug 2009 13:56:53 -0600, Frank Swarbrick
 wrote:

>I believe you!
>
>Can I restate as follows?
>If we do not have a sysplex we should not be sharing PDSE datasets between
LPARs because an update to the PDSE in one LPAR is not guaranteed to be seen
immediately (or ever?) by another LPAR.  (Read only PDSEs are OK because
they are not updated.)

Worse!  If the library was open / in use on the other LPAR, you will likely get
an 0F4 abend at the next open or soon after and not be able to open / read 
the library at all after the update has been made.

>
>I assume we are still fine with sharing regular PDS datasets between LPARs
without a sysplex?  What about other types, such as regular sequential
datasets and VSAM datasets?  Having a non-sysplex are we OK here?
>

It can be done with many caveats.  I jumped in the middle of this thread but
I'm sure some of it has been discussed already.   Reserve will protect the
VTOC from corruption, but not individual data sets.  VSAM has some of its
own protection mechanisms depending on the share options.   The bottom
line is you need an integrity manager like GRS or MII (MIM) to propagate
ENQs to the other system(s) for safe sharing.  But PDSE and HFS are a 
special case and can't be shared outside the sysplex (except read only)
regardless of an integrity manager like GRS or MIM/MII.  Oh, BTW, you won't
get any RESERVE protection for SYSVTOC, SYSZVVDS or others if you don't
gen the DASD as shared to begin with. 
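The need for ENQ propagation can be seen in a toy model (my own illustration in Python, not GRS internals): if each image checks only its own local ENQ table, two systems can both be "granted" an exclusive ENQ on the same resource, which is exactly the integrity exposure an unmanaged shared-DASD setup has. Propagating ENQs between images (what GRS or MIM/MII provides) turns the second request into a conflict:

```python
# Toy model: why exclusive ENQs must be visible across systems.
# Resource names and the peers mechanism are illustrative only.

class System:
    def __init__(self, name):
        self.name = name
        self.local_enqs = set()          # ENQs this image knows about

    def enq_exclusive(self, resource, peers=()):
        known = set(self.local_enqs)
        for p in peers:                  # with an integrity manager,
            known |= p.local_enqs        # peers' ENQs are visible too
        if resource in known:
            return False                 # conflict: request denied/waits
        self.local_enqs.add(resource)
        return True

a, b = System("SYSA"), System("SYSB")

# No integrity manager: both systems "succeed" -> corruption risk.
assert a.enq_exclusive("PROD.DATA.SET")
assert b.enq_exclusive("PROD.DATA.SET")

# With propagation, the second exclusive request is correctly rejected.
a2, b2 = System("SYSA"), System("SYSB")
assert a2.enq_exclusive("PROD.DATA.SET", peers=[b2])
assert not b2.enq_exclusive("PROD.DATA.SET", peers=[a2])
print("without propagation both granted; with propagation second denied")
```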

Mark
--
Mark Zelden
Sr. Software and Systems Architect - z/OS Team Lead
Zurich North America / Farmers Insurance Group - ZFUS G-ITO
mailto:mark.zel...@zurichna.com
z/OS Systems Programming expert at http://expertanswercenter.techtarget.com/
Mark's MVS Utilities: http://home.flash.net/~mzelden/mvsutil.html



Re: DASD: to share or not to share

2009-08-09 Thread Frank Swarbrick
I believe you!

Can I restate as follows?
If we do not have a sysplex we should not be sharing PDSE datasets between 
LPARs because an update to the PDSE in one LPAR is not guaranteed to be seen 
immediately (or ever?) by another LPAR.  (Read only PDSEs are OK because they 
are not updated.)

I assume we are still fine with sharing regular PDS datasets between LPARs 
without a sysplex?  What about other types, such as regular sequential datasets 
and VSAM datasets?  Having a non-sysplex are we OK here?

Thanks for your patience in educating this lowly applications developer.

Frank
-- 

Frank Swarbrick
Applications Architect - Mainframe Applications Development
FirstBank Data Corporation
Lakewood, CO  USA
P: 303-235-1403
F: 303-235-2075


On 8/9/2009 at 11:45 AM, in message
, Mark Zelden
 wrote:
> If children play with fire, they will eventually get burned!
> 
> 
> On Sat, 8 Aug 2009 22:42:04 -0600, Frank Swarbrick
>  wrote:
> 
>>That is exactly what I did.  Well, "as quickly as I could type", in any case.
>>We have PDSESHARING(NORMAL) in our IGDSMSxx file, for whatever that might
> be worth.
>>
> 
> It's worth nothing in regards to sharing PDSE across sysplex boundaries for 
> anything but READ ONLY functions.
> 
>>On 8/8/2009 at 1:31 AM, in message
>>, "Gibney,
>>Dave"  wrote:
>>> Actually I was speculating about the ability to "refresh" in memory
>>> knowledge of the PDSE(s) in the other LPAR(s). 
> 
> There is no command or facility to do that.  There is the sledge hammer
> approach of IPLing.   :-)   Well... it may not be all that bad - more below.
> 
> 
>>> What you describe is not
>>> guaranteed.
>>>   Try 1. Run existing copy of the program in LPAR-P. 2. Quickly update
>>> it from LPAR-other. 3. Quickly try in LPAR-P. I don't believe you will
>>> always get the new version.
> 
> Apparently he did.  But why?  (more below)
>  
>>>
>>>> -Original Message-----
>>>> From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On
>>>> Behalf Of Frank Swarbrick
>>>> Sent: Friday, August 07, 2009 1:34 PM
>>>> To: IBM-MAIN@bama.ua.edu 
>>>> Subject: Re: DASD: to share or not to share
>>>>
>>>> So for example, if our change control process for applications runs in
>>> DEV
>>>> (which is how we have it in VSE) we should be able to update our
>>>> production application loadlib PDSE from DEV exclusively and this will
>>> not
>>>> be a problem, even without a Sysplex?  I am curious as to where I find
>>>> this PDSE address space refresh command, and if it's really needed.  I
>>>> just compiled a program in to a PDSE in DEV and ran it in PROD and it
>>> ran
>>>> the new version just fine.  Did it twice just to make sure.  No
>>> problem
>>>> either time.
>>>>
> 
> This probably worked for 2 reasons:
> 
> 1) Nothing else had the target loadlib allocated and / or opened.
> 
> 2) PDSE(1)_BUFFER_BEYOND_CLOSE was not set to YES.
> 
> Try the same test again with the target (output) PDSE loadlib that is in
> use by a long running address space, say... a CICS region.  Or how about
> a library that happens to be in LLA or the LNKLST.
> 
> Changes to PDSE data sets in a sysplex are communicated via XCF.  Which
> means if you don't have a sysplex, you are S.O.L. when it comes to sharing
> PDSEs that need to have changes made (update). 
> 
> I mentioned the sledge hammer approach of IPLing above.   The same goal 
> can probably be achieved (reading a "fresh copy" of the PDSE) if you can
> make sure no address spaces are using the PDSE and you aren't using the
> PDSE(1)_BUFFER_BEYOND_CLOSE=YES option. But that could be a
> really difficult task in a production environment depending on the library.
> 
> Mark
> --
> Mark Zelden
> Sr. Software and Systems Architect - z/OS Team Lead
> Zurich North America / Farmers Insurance Group - ZFUS G-ITO
> mailto:mark.zel...@zurichna.com 
> z/OS Systems Programming expert at http://expertanswercenter.techtarget.com/ 
> Mark's MVS Utilities: http://home.flash.net/~mzelden/mvsutil.html 
> 


Re: DASD: to share or not to share

2009-08-09 Thread Mark Zelden
If children play with fire, they will eventually get burned!


On Sat, 8 Aug 2009 22:42:04 -0600, Frank Swarbrick
 wrote:

>That is exactly what I did.  Well, "as quickly as I could type", in any case.
>We have PDSESHARING(NORMAL) in our IGDSMSxx file, for whatever that might
be worth.
>

It's worth nothing in regards to sharing PDSE across sysplex boundaries for 
anything but READ ONLY functions.

>On 8/8/2009 at 1:31 AM, in message
>, "Gibney,
>Dave"  wrote:
>> Actually I was speculating about the ability to "refresh" in memory
>> knowledge of the PDSE(s) in the other LPAR(s). 

There is no command or facility to do that.  There is the sledge hammer
approach of IPLing.   :-)   Well... it may not be all that bad - more below.


>> What you describe is not
>> guaranteed.
>>   Try 1. Run existing copy of the program in LPAR-P. 2. Quickly update
>> it from LPAR-other. 3. Quickly try in LPAR-P. I don't believe you will
>> always get the new version.

Apparently he did.  But why?  (more below)
 
>>
>>> -Original Message-
>>> From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On
>>> Behalf Of Frank Swarbrick
>>> Sent: Friday, August 07, 2009 1:34 PM
>>> To: IBM-MAIN@bama.ua.edu
>>> Subject: Re: DASD: to share or not to share
>>>
>>> So for example, if our change control process for applications runs in
>> DEV
>>> (which is how we have it in VSE) we should be able to update our
>>> production application loadlib PDSE from DEV exclusively and this will
>> not
>>> be a problem, even without a Sysplex?  I am curious as to where I find
>>> this PDSE address space refresh command, and if it's really needed.  I
>>> just compiled a program in to a PDSE in DEV and ran it in PROD and it
>> ran
>>> the new version just fine.  Did it twice just to make sure.  No
>> problem
>>> either time.
>>>

This probably worked for 2 reasons:

1) Nothing else had the target loadlib allocated and / or opened.

2) PDSE(1)_BUFFER_BEYOND_CLOSE was not set to YES.

Try the same test again with the target (output) PDSE loadlib that is in
use by a long running address space, say... a CICS region.  Or how about
a library that happens to be in LLA or the LNKLST.

Changes to PDSE data sets in a sysplex are communicated via XCF.  Which
means if you don't have a sysplex, you are S.O.L. when it comes to sharing
PDSEs that need to have changes made (update). 

I mentioned the sledge hammer approach of IPLing above.   The same goal 
can probably be achieved (reading a "fresh copy" of the PDSE) if you can
make sure no address spaces are using the PDSE and you aren't using the
PDSE(1)_BUFFER_BEYOND_CLOSE=YES option. But that could be a
really difficult task in a production environment depending on the library.
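The staleness described above can be illustrated with a toy buffered reader (Python; my analogy, not PDSE internals): once content is buffered, updates to the underlying store stay invisible until the buffers are released at close, which in a sysplex is the gap the XCF notification closes.

```python
# Toy illustration of buffered staleness -- not PDSE internals.
# The member name and "store" dict are invented for the example.

class BufferedReader:
    def __init__(self, store, name):
        self.store, self.name = store, name
        self.cache = None

    def read(self):
        if self.cache is None:           # populate buffer on first touch
            self.cache = self.store[self.name]
        return self.cache                # later updates stay invisible

    def close(self):
        self.cache = None                # buffers released at close

store = {"PROD.LOADLIB(PGM1)": "old version"}
rdr = BufferedReader(store, "PROD.LOADLIB(PGM1)")
print(rdr.read())                        # prints: old version

store["PROD.LOADLIB(PGM1)"] = "new version"   # update from the other LPAR
print(rdr.read())                        # prints: old version (no notify)

rdr.close()                              # nothing holding buffers past close
print(rdr.read())                        # prints: new version
```

This also suggests why the original quick test "worked": the reader had nothing buffered when it ran, so every read happened to start from an empty cache.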

Mark
--
Mark Zelden
Sr. Software and Systems Architect - z/OS Team Lead
Zurich North America / Farmers Insurance Group - ZFUS G-ITO
mailto:mark.zel...@zurichna.com
z/OS Systems Programming expert at http://expertanswercenter.techtarget.com/
Mark's MVS Utilities: http://home.flash.net/~mzelden/mvsutil.html



Re: DASD: to share or not to share

2009-08-09 Thread Gibney, Dave
  I'm glad it worked, but I still wouldn't trust it to keep doing so. I trust
sharing PDSEs in static read-only mode because some folks on this
list who are far more knowledgeable than I am assure me it is safe.
  I do not trust the in-memory aspects of PDSE in one LPAR to notice
(and to keep noticing in future releases) updates made in another LPAR without
the benefit of Sysplex. If the artist, or some of the others, were to
give me such assurances, I might change my mind. But with OCO and the
documented unsupported status of such behavior, I don't think any of
them would be so willing.

Your mileage may vary and other comments from Phil's disclaimer :)

> -Original Message-
> From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On
> Behalf Of Frank Swarbrick
> Sent: Saturday, August 08, 2009 9:42 PM
> To: IBM-MAIN@bama.ua.edu
> Subject: Re: DASD: to share or not to share
> 
> That is exactly what I did.  Well, "as quickly as I could type", in
any
> case.
> We have PDSESHARING(NORMAL) in our IGDSMSxx file, for whatever that
might
> be worth.
> 
> Frank
> 
> On 8/8/2009 at 1:31 AM, in message
> ,
> "Gibney,
> Dave"  wrote:
> > Actually I was speculating about the ability to "refresh" in memory
> > knowledge of the PDSE(s) in the other LPAR(s). What you describe is
not
> > guaranteed.
> >   Try 1. Run existing copy of the program in LPAR-P. 2. Quickly
update
> > it from LPAR-other. 3. Quickly try in LPAR-P. I don't believe you
will
> > always get the new version.
> >
> >> -Original Message-
> >> From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu]
On
> >> Behalf Of Frank Swarbrick
> >> Sent: Friday, August 07, 2009 1:34 PM
> >> To: IBM-MAIN@bama.ua.edu
> >> Subject: Re: DASD: to share or not to share
> >>
> >> So for example, if our change control process for applications runs
in
> > DEV
> >> (which is how we have it in VSE) we should be able to update our
> >> production application loadlib PDSE from DEV exclusively and this
will
> > not
> >> be a problem, even without a Sysplex?  I am curious as to where I
find
> >> this PDSE address space refresh command, and if it's really needed.
I
> >> just compiled a program in to a PDSE in DEV and ran it in PROD and
it
> > ran
> >> the new version just fine.  Did it twice just to make sure.  No
> > problem
> >> either time.
> >>
> >> Frank
> >>
> >> On 8/7/2009 at 3:27 AM, in message
> >> ,
> >> "Gibney,
> >> Dave"  wrote:
> >> > I agree, but I doubt this applies to someone converting from VSE
to
> >> > their first z/OS LPAR(s). And, I think with "careful" change
> > management,
> >> > you could update even shared PDSE without sysplex. Do the update
> > from
> >> > only one system ever and cause that PDSE address space (can't
> > remember
> >> > it right off) to refresh in the other LPARs.
> >> >
> >> >> -Original Message-
> >> >> From: IBM Mainframe Discussion List
[mailto:ibm-m...@bama.ua.edu]
> > On
> >> >> Behalf Of Bruce Hewson
> >> >> Sent: Thursday, August 06, 2009 10:51 PM
> >> >> To: IBM-MAIN@bama.ua.edu
> >> >> Subject: Re: DASD: to share or not to share
> >> >>
> >> >> Hi Dave,,
> >> >>
> >> >> with systems that are only IPL'd every 3-5 months, YES, there
will
> > be
> >> >> updates
> >> >> to those volumes. Changes still happen, and many fixes do not
> > require
> >> > an
> >> >> IPL
> >> >> to implement, only careful change management.
> >> >>
> >> >> On Thu, 6 Aug 2009 14:18:15 -0700, Gibney, Dave 
> >> > wrote:
> >> >>
> >> >> >It may not be "supported", but PDSE that are only read are safe
to
> >> > share
> >> >> >betwixt anything. And, since it's a "live" resvol, it is of
course
> >> > not
> >> >> >subject to updates, right?
> >> >> >
> >> >> >Dave Gibney
> >> >> >Information Technology Services
> >> >> >Washington State University
> >> >> >
> >> >>
> >> >>
> >> >> Regards
> >> >> Bruce Hewson
> >> >>
> >> >>
> >

Re: DASD: to share or not to share

2009-08-08 Thread Frank Swarbrick
That is exactly what I did.  Well, "as quickly as I could type", in any case.
We have PDSESHARING(NORMAL) in our IGDSMSxx file, for whatever that might be 
worth.

Frank  

On 8/8/2009 at 1:31 AM, in message
, "Gibney,
Dave"  wrote:
> Actually I was speculating about the ability to "refresh" in memory
> knowledge of the PDSE(s) in the other LPAR(s). What you describe is not
> guaranteed.
>   Try 1. Run existing copy of the program in LPAR-P. 2. Quickly update
> it from LPAR-other. 3. Quickly try in LPAR-P. I don't believe you will
> always get the new version.
> 
>> -Original Message-
>> From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On
>> Behalf Of Frank Swarbrick
>> Sent: Friday, August 07, 2009 1:34 PM
>> To: IBM-MAIN@bama.ua.edu 
>> Subject: Re: DASD: to share or not to share
>> 
>> So for example, if our change control process for applications runs in
> DEV
>> (which is how we have it in VSE) we should be able to update our
>> production application loadlib PDSE from DEV exclusively and this will
> not
>> be a problem, even without a Sysplex?  I am curious as to where I find
>> this PDSE address space refresh command, and if it's really needed.  I
>> just compiled a program in to a PDSE in DEV and ran it in PROD and it
> ran
>> the new version just fine.  Did it twice just to make sure.  No
> problem
>> either time.
>> 
>> Frank
>> 
>> On 8/7/2009 at 3:27 AM, in message
>> ,
>> "Gibney,
>> Dave"  wrote:
>> > I agree, but I doubt this applies to someone converting from VSE to
>> > their first z/OS LPAR(s). And, I think with "careful" change
> management,
>> > you could update even shared PDSE without sysplex. Do the update
> from
>> > only one system ever and cause that PDSE address space (can't
> remember
>> > it right off) to refresh in the other LPARs.
>> >
>> >> -Original Message-
>> >> From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] 
> On
>> >> Behalf Of Bruce Hewson
>> >> Sent: Thursday, August 06, 2009 10:51 PM
>> >> To: IBM-MAIN@bama.ua.edu 
>> >> Subject: Re: DASD: to share or not to share
>> >>
>> >> Hi Dave,,
>> >>
>> >> with systems that are only IPL'd every 3-5 months, YES, there will
> be
>> >> updates
>> >> to those volumes. Changes still happen, and many fixes do not
> require
>> > an
>> >> IPL
>> >> to implement, only careful change management.
>> >>
>> >> On Thu, 6 Aug 2009 14:18:15 -0700, Gibney, Dave 
>> > wrote:
>> >>
>> >> >It may not be "supported", but PDSE that are only read are safe to
>> > share
>> >> >betwixt anything. And, since it's a "live" resvol, it is of course
>> > not
>> >> >subject to updates, right?
>> >> >
>> >> >Dave Gibney
>> >> >Information Technology Services
>> >> >Washington State University
>> >> >
>> >>
>> >>
>> >> Regards
>> >> Bruce Hewson
>> >>
>> >>
> --
>> >> For IBM-MAIN subscribe / signoff / archive access instructions,
>> >> send email to lists...@bama.ua.edu with the message: GET IBM-MAIN
> INFO
>> >> Search the archives at http://bama.ua.edu/archives/ibm-main.html 
>> >
>> >
> --
>> > For IBM-MAIN subscribe / signoff / archive access instructions,
>> > send email to lists...@bama.ua.edu with the message: GET IBM-MAIN
> INFO
>> > Search the archives at http://bama.ua.edu/archives/ibm-main.html 
>> 
>> >>>
>> 
>> The information contained in this electronic communication and any
>> document attached hereto or transmitted herewith is confidential and
>> intended for the exclusive use of the individual or entity named
> above.
>> If the reader of this message is not the intended recipient or the
>> employee or agent responsible for delivering it to the intended
> recipient,
>> you are hereby notified that any examination, use, dissemination,
>> distribution or copying of this communication or any part thereof is
>> strictly prohibited.  If you have received this communication in
> error,
>> please immediately notify the sender by reply e-mail and destroy this
>> communication.  Thank you.

Re: DASD: to share or not to share

2009-08-08 Thread Gibney, Dave
  We have very little VSAM, and none of it is shared between the monoplexes. 
Everyone who should be messing with data on the shared volumes knows the naming 
convention for those volumes and is expected to know better than to update PDS 
or PS datasets from more than one LPAR at a time. RESERVE protects us 
sufficiently as well. I've never had a corrupted PDS. GRS would be nice, but 
its lack is not a show stopper.
 The only shared PDSE(s) are on the system resvol, which has been read-only for 
several years. SMP/E points to a separate volume, which is IPLed only once 
after an APPLY and then cloned. I run RECEIVE ORDER almost every day, and APPLY 
CHECK, but I only do a full APPLY when needed, usually only for compatibility 
PTFs prior to an upgrade.
 None of the HFS is shared.
 Small shop: one production LPAR, one dev, two sandbox. The only way I could 
justify plexing would be to provide closer-to-24x7 availability, and since 
current thinking is an ERP soon, or at least within a decade, there is no 
interest in anything but maintaining the status quo.

 -Original Message-
> From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On
> Behalf Of Scott Rowe
> Sent: Saturday, August 08, 2009 3:09 PM
> To: IBM-MAIN@bama.ua.edu
> Subject: Re: DASD: to share or not to share
> 
> I was not referring to Parallel Sysplex in this case, only to basic
> sysplex.  If they are going to be sharing DASD at all then they need
> GRS, and GRS is much slower without XCF.
> 
> >>> "R.S."  08/08/09 12:55 PM >>>
> Scott Rowe pisze:
> > I think the real question here is: Why don't you have a SYSPLEX?
> There is little to no cost involved, and many benefits.
> 
> I dare to disagree. Strongly disagree.
> At least you have to pay for memory for CF, ICF engine, more CP cycles,
> GRS delays, significantly more sysprog effort.
> 
> Looking from benefits perspective: depending on your regulations and
> laws you would have to have production and test separate, so you need
> more than one sysplex. Every serious shop I know keeps test/dev/sandbox
> separate from production. That means several sysplexes or monoplexes.
> 
> --
> Radoslaw Skorupka
> Lodz, Poland
> 
> 
> --
> BRE Bank SA
> ul. Senatorska 18
> 00-950 Warszawa
> www.brebank.pl
> 
> District Court for the Capital City of Warsaw,
> XII Commercial Division of the National Court Register,
> register of entrepreneurs no. KRS 025237
> NIP: 526-021-50-88
> As of 01.01.2009 the share capital of BRE Bank SA (fully paid up) is
> PLN 118,763,528. In connection with a conditional capital increase,
> under resolution XXI of the General Meeting of 16 March 2008 and
> resolution XVI of the Extraordinary General Meeting of 27 October 2008,
> it may be increased to PLN 123,763,528. Shares in the increased capital
> of BRE Bank SA will be fully paid up.
> 
> --
> For IBM-MAIN subscribe / signoff / archive access instructions,
> send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
> Search the archives at http://bama.ua.edu/archives/ibm-main.html
> 
> 
> 
> CONFIDENTIALITY/EMAIL NOTICE: The material in this transmission contains
> confidential and privileged information intended only for the addressee.
>  If you are not the intended recipient, please be advised that you have
> received this material in error and that any forwarding, copying,
> printing, distribution, use or disclosure of the material is strictly
> prohibited.  If you have received this material in error, please (i) do
> not read it, (ii) reply to the sender that you received the message in
> error, and (iii) erase or destroy the material. Emails are not secure
> and can be intercepted, amended, lost or destroyed, or contain viruses.
> You are deemed to have accepted these risks if you communicate with us
> by email. Thank you.
> 
> --
> For IBM-MAIN subscribe / signoff / archive access instructions,
> send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
> Search the archives at http://bama.ua.edu/archives/ibm-main.html

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: DASD: to share or not to share

2009-08-08 Thread Robert A. Rosenberg
At 12:57 -0400 on 08/06/2009, Bob Shannon wrote about Re: DASD: to 
share or not to share:



>If the PDSE limitation is onerous, go back to PDSs.


This option is not always available: while it works for text 
libraries, you lose some PDSE features with load libraries, since 
there are load module features that only exist if the module 
lives in a PDSE, and some load modules MUST be stored in a PDSE due to 
their structure.
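
As an aside, moving a load library into PDSE form is a plain IEBCOPY copy; the 
binder rebinds each load module as a program object on the way in. A sketch 
with made-up dataset names:

```text
//CONVERT  EXEC PGM=IEBCOPY
//SYSPRINT DD SYSOUT=*
//IN       DD DISP=SHR,DSN=MY.PDS.LOADLIB      MADE-UP NAME
//OUT      DD DISP=SHR,DSN=MY.PDSE.LOADLIB     MADE-UP NAME, DSNTYPE=LIBRARY
//SYSIN    DD *
  COPY OUTDD=OUT,INDD=IN
/*
```

Going the other direction (PDSE back to PDS) is what fails for the modules 
Robert describes, since some program object features have no load module 
equivalent.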




Re: DASD: to share or not to share

2009-08-08 Thread Scott Rowe
I was not referring to Parallel Sysplex in this case, only to basic
sysplex.  If they are going to be sharing DASD at all then they need
GRS, and GRS is much slower without XCF.
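
For context, which flavor of GRS a system gets is chosen at IPL by the GRS= 
system parameter in IEASYSxx. The values below are from memory, so verify them 
against the Initialization and Tuning Reference before use:

```text
GRS=STAR      /* star complex: ISGLOCK lock structure in a CF      */
GRS=TRYJOIN   /* join an existing ring if present, else start one  */
GRS=JOIN      /* join an existing ring complex                     */
GRS=NONE      /* no global serialization - only safe if nothing    */
              /* is shared                                         */
```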

>>> "R.S."  08/08/09 12:55 PM >>>
Scott Rowe pisze:
> I think the real question here is: Why don't you have a SYSPLEX? 
There is little to no cost involved, and many benefits.

I dare to disagree. Strongly disagree.
At least you have to pay for memory for CF, ICF engine, more CP cycles, 
GRS delays, significantly more sysprog effort.

Looking from benefits perspective: depending on your regulations and 
laws you would have to have production and test separate, so you need 
more than one sysplex. Every serious shop I know keeps test/dev/sandbox 
separate from production. That means several sysplexes or monoplexes.

-- 
Radoslaw Skorupka
Lodz, Poland









Re: DASD: to share or not to share

2009-08-08 Thread Ted MacNEIL
>I agree that capping a shared (Production) CF could be dangerous, and I have a 
>hard time imagining a configuration where I would consider such a thing.

We tried it quite a while back.
We never had any success.
Even the TESTPLEX suffered.
-
Too busy driving to stop for gas!



Re: DASD: to share or not to share

2009-08-08 Thread Scott Rowe
I agree that capping a shared (Production) CF could be dangerous, and I have a 
hard time imagining a configuration where I would consider such a thing.

>>> Joel Wolpert  08/08/09 12:36 PM >>>
I agree that one has to make intelligent decisions based upon their 
environment. I do not endorse capping a shared CF, but if you do not have 
the available CPU capacity to dedicate a full CP it might work. However, 
you have to be careful or else you can kill production.

- Original Message - 
From: "Scott Rowe" 
Newsgroups: bit.listserv.ibm-main
To: 
Sent: Saturday, August 08, 2009 11:44 AM
Subject: Re: DASD: to share or not to share


>I do not cap my CF LPARs, except that they are limited to only 1 CP.  I 
>define a weight such that they are guaranteed a full CP, yet they never use 
>a full CP, since they are using dynamic dispatching.  This configuration is 
>not going to hang my production LPAR, and I believe it is silly to suggest 
>such a thing.
>
> Would I get better CF response time if I had dedicated CPUs for the CFs 
> (ICF or CP)?  Yes, of course.  Would I try to do DB2 data sharing in this 
> configuration?  No, of course not.  One has to make informed intelligent 
> decisions in this business, isn't that our job?
>
>>>> "Gibney, Dave"  08/08/09 3:27 AM >>>
>
> Doesn't "cap the CF LPAR" ask for, or equal, "hang production LPAR(s)
> waiting for CF response"?
>
>






Re: DASD: to share or not to share

2009-08-08 Thread R.S.

Scott Rowe pisze:

I think the real question here is: Why don't you have a SYSPLEX?  There is 
little to no cost involved, and many benefits.


I dare to disagree. Strongly disagree.
At least you have to pay for memory for CF, ICF engine, more CP cycles, 
GRS delays, significantly more sysprog effort.


Looking from benefits perspective: depending on your regulations and 
laws you would have to have production and test separate, so you need 
more than one sysplex. Every serious shop I know keeps test/dev/sandbox 
separate from production. That means several sysplexes or monoplexes.


--
Radoslaw Skorupka
Lodz, Poland





Re: DASD: to share or not to share

2009-08-08 Thread Joel Wolpert
I agree that one has to make intelligent decisions based upon their 
environment. I do not endorse capping a shared CF, but if you do not have 
the available CPU capacity to dedicate a full CP it might work. However, 
you have to be careful or else you can kill production.


- Original Message - 
From: "Scott Rowe" 

Newsgroups: bit.listserv.ibm-main
To: 
Sent: Saturday, August 08, 2009 11:44 AM
Subject: Re: DASD: to share or not to share


I do not cap my CF LPARs, except that they are limited to only 1 CP.  I 
define a weight such that they are guaranteed a full CP, yet they never use 
a full CP, since they are using dynamic dispatching.  This configuration is 
not going to hang my production LPAR, and I believe it is silly to suggest 
such a thing.


Would I get better CF response time if I had dedicated CPUs for the CFs 
(ICF or CP)?  Yes, of course.  Would I try to do DB2 data sharing in this 
configuration?  No, of course not.  One has to make informed intelligent 
decisions in this business, isn't that our job?



>>> "Gibney, Dave"  08/08/09 3:27 AM >>>

Doesn't "cap the CF LPAR" ask for, or equal, "hang production LPAR(s)
waiting for CF response"?






Re: DASD: to share or not to share

2009-08-08 Thread Scott Rowe
I do not cap my CF LPARs, except that they are limited to only 1 CP.  I define 
a weight such that they are guaranteed a full CP, yet they never use a full CP, 
since they are using dynamic dispatching.  This configuration is not going to 
hang my production LPAR, and I believe it is silly to suggest such a thing.

Would I get better CF response time if I had dedicated CPUs for the CFs (ICF or 
CP)?  Yes, of course.  Would I try to do DB2 data sharing in this 
configuration?  No, of course not.  One has to make informed intelligent 
decisions in this business, isn't that our job?
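
For what it's worth, the dynamic dispatching being described is toggled from 
the coupling facility's own operator console. On machines of that era the 
command was, if memory serves, simply:

```text
DYNDISP ON     /* CF gives up its shared CP when idle        */
DYNDISP OFF    /* CF polls continuously - for dedicated or   */
               /* weight-guaranteed engines                  */
```

Treat the syntax as a sketch and confirm it against the PR/SM Planning Guide 
for the specific CFCC level.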

>>> "Gibney, Dave"  08/08/09 3:27 AM >>>

Doesn't "cap the CF LPAR" ask for, or equal, "hang production LPAR(s)
waiting for CF response"?





Re: DASD: to share or not to share

2009-08-08 Thread Joel Wolpert
It depends what weight you are capping it at. If you only have minimal 
structures in the CF, then you do not need a full CP for the CF. You might 
be able to cap it at 25% and still get the required performance, and that 
may still be better than turning the always-on polling off. As always, your 
mileage may vary, so you need to understand your environment and how you 
are using the CF. I also recommended not sharing a CF with the production 
GP processors, because it has the potential to impact production if you are 
not careful.



- Original Message - 
From: "Gibney, Dave" 

Newsgroups: bit.listserv.ibm-main
To: 
Sent: Saturday, August 08, 2009 3:27 AM
Subject: Re: DASD: to share or not to share



-Original Message-
From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On
Behalf Of Joel Wolpert
Sent: Friday, August 07, 2009 12:17 PM
To: IBM-MAIN@bama.ua.edu
Subject: Re: DASD: to share or not to share

> You can configure it so that it is not always polling, and therefore not
> soaking up the CPU. In addition, you can set up an lpar weight and cap the

Doesn't "cap the CF LPAR" ask for, or equal, "hang production LPAR(s)
waiting for CF response"?

> ICF lpar. This will also prevent the ICF from consuming more of the CEC
> than you want. All of this being said: I would not recommend that you
> share a CF with the production lpars. If someone changes the config of
> the CF you might end up with a performance problem.

- Original Message -
From: "Gibney, Dave" 
Newsgroups: bit.listserv.ibm-main
To: 
Sent: Friday, August 07, 2009 2:12 PM
Subject: Re: DASD: to share or not to share


>> -Original Message-
>> From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu]

On

>> Behalf Of Scott Rowe
>> Sent: Friday, August 07, 2009 8:51 AM
>> To: IBM-MAIN@bama.ua.edu
>> Subject: Re: DASD: to share or not to share
>>
>> Heck, I just use a CF partition running on my shared CP.  As long as
>> you
>
> I've heard lots of scary stories about sharing CPs for CF. I understood
> the CF code was basically a non-terminating loop.
>
> I've even heard of folks using a CF lpar as a MSU soaker to keep z/OS
> MSU's down for SCRT purposes.
>
> I certainly don't want any hang of my production system because I shared
> with a CF.
>
>
>> are only doing GRS, and maybe HSM CRQm etc, there is no problem.  My
>> two
>> CF partitions combined average about 0.2% of my CPUs.  It may not be
>> as
>> fast as dedicated CPs, but it is still faster than GRS ring.
>>
>> >>> "Gibney, Dave"  08/06/09 5:23 PM >>>
>> When ICF's are free, maybe we'll consider it :) I share resvols and
>> other volumes with software load and other read only type operational
>> data between some or all of our 4 LPARs with no appreciable impact.
>>
>> Dave Gibney
>> Information Technology Services
>> Washington State University
>>
>>
>> > -Original Message-
>> > From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On
>> > Behalf Of Chase, John
>> > Sent: Thursday, August 06, 2009 12:32 PM
>> > To: IBM-MAIN@bama.ua.edu
>> > Subject: Re: DASD: to share or not to share
>> >
>> > > -Original Message-
>> > > From: IBM Mainframe Discussion List On Behalf Of McKown, John
>> > >
>> > > [ snip ]
>> > >
>> > > A basic SYSPLEX (what we are running) does not require a Coupling
>> > > Facility (CF). With a CF, you have a
>> > > Parallel Sysplex, which has some better facilities. If you only
>> > > have an single CEC (box), then you
>> > > don't need a Sysplex Timer, either. And, if you're on a single
>> > > box, you could get an CFL speciality
>> > > engine and run a single CF in a separate LPAR. In this case, there
>> > > is no cabling required - it used
>> > > "pseudo cabling" via the PR/SM hipervisor. Much like hipersockets.
>> >
>> > You must be wishing really hard for an IFL.  :-)
>> >
>> > The "Integrated Coupling Facility" engine is an "ICF" (more overload
>> > for
>> > ICF).
>> >
>> > -jc-
>> >
>> >
>

--

>> > For IBM-MAIN subscribe / signoff / archive access instructions,
>> > send email to lists...@bama.ua.edu with the message: GET IBM-MAIN
> INFO
>> > Search the archives at http://bama.ua.edu/archives/ibm-main.html
>>
>>


Re: DASD: to share or not to share

2009-08-08 Thread Gibney, Dave
  Actually I was speculating about the ability to "refresh" in memory
knowledge of the PDSE(s) in the other LPAR(s). What you describe is not
guaranteed.
  Try 1. Run existing copy of the program in LPAR-P. 2. Quickly update
it from LPAR-other. 3. Quickly try in LPAR-P. I don't believe you will
always get the new version.
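
The address space Dave can't remember is, I believe, SMSPDSE (plus the 
restartable SMSPDSE1 on more recent z/OS levels). Operator commands along 
these lines exist for inspecting and recycling its caches; syntax is from 
memory, so check MVS System Commands before relying on it:

```text
V SMS,PDSE,ANALYSIS       /* report on suspect PDSE structures          */
V SMS,PDSE1,ANALYSIS      /* same, for the restartable SMSPDSE1         */
V SMS,PDSE1,RESTART       /* recycle SMSPDSE1, discarding cached state  */
```

Note that the scheme depends on recycling the server on the *other* LPARs; 
a restart on the updating system alone would not invalidate stale directory 
buffers elsewhere.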

> -Original Message-
> From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On
> Behalf Of Frank Swarbrick
> Sent: Friday, August 07, 2009 1:34 PM
> To: IBM-MAIN@bama.ua.edu
> Subject: Re: DASD: to share or not to share
> 
> So for example, if our change control process for applications runs in
DEV
> (which is how we have it in VSE) we should be able to update our
> production application loadlib PDSE from DEV exclusively and this will
not
> be a problem, even without a Sysplex?  I am curious as to where I find
> this PDSE address space refresh command, and if it's really needed.  I
> just compiled a program in to a PDSE in DEV and ran it in PROD and it
ran
> the new version just fine.  Did it twice just to make sure.  No
problem
> either time.
> 
> Frank
> 
> On 8/7/2009 at 3:27 AM, in message
> ,
> "Gibney,
> Dave"  wrote:
> > I agree, but I doubt this applies to someone converting from VSE to
> > their first z/OS LPAR(s). And, I think with "careful" change
management,
> > you could update even shared PDSE without sysplex. Do the update
from
> > only one system ever and cause that PDSE address space (can't
remember
> > it right off) to refresh in the other LPARs.
> >
> >> -Original Message-
> >> From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu]
On
> >> Behalf Of Bruce Hewson
> >> Sent: Thursday, August 06, 2009 10:51 PM
> >> To: IBM-MAIN@bama.ua.edu
> >> Subject: Re: DASD: to share or not to share
> >>
> >> Hi Dave,
> >>
> >> with systems that are only IPL'd every 3-5 months, YES, there will
be
> >> updates
> >> to those volumes. Changes still happen, and many fixes do not
require
> > an
> >> IPL
> >> to implement, only careful change management.
> >>
> >> On Thu, 6 Aug 2009 14:18:15 -0700, Gibney, Dave 
> > wrote:
> >>
> >> >It may not be "supported", but PDSE that are only read are safe to
> > share
> >> >betwixt anything. And, since it's a "live" resvol, it is of course
> > not
> >> >subject to updates, right?
> >> >
> >> >Dave Gibney
> >> >Information Technology Services
> >> >Washington State University
> >> >
> >>
> >>
> >> Regards
> >> Bruce Hewson
> >>
> >>
--
> >> For IBM-MAIN subscribe / signoff / archive access instructions,
> >> send email to lists...@bama.ua.edu with the message: GET IBM-MAIN
INFO
> >> Search the archives at http://bama.ua.edu/archives/ibm-main.html
> >
> >
--
> > For IBM-MAIN subscribe / signoff / archive access instructions,
> > send email to lists...@bama.ua.edu with the message: GET IBM-MAIN
INFO
> > Search the archives at http://bama.ua.edu/archives/ibm-main.html
> 
> >>>
> 
> 
> --
> For IBM-MAIN subscribe / signoff / archive access instructions,
> send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
> Search the archives at http://bama.ua.edu/archives/ibm-main.html



Re: DASD: to share or not to share

2009-08-08 Thread Gibney, Dave
> -Original Message-
> From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On
> Behalf Of Joel Wolpert
> Sent: Friday, August 07, 2009 12:17 PM
> To: IBM-MAIN@bama.ua.edu
> Subject: Re: DASD: to share or not to share
> 
> You can configure it so that it is not always polling, and therefore
not
> soaking up the CPU. In addition, you can set up an lpar weight and cap
the

Doesn't "cap the CF LPAR" ask for, or equal, "hang production LPAR(s)
waiting for CF response"?

> ICF lpar. This will also prevent the ICF from consuming more of the
CEC
> than
> you want. All of this being said: I would not recommend that you share
a
> CF
> with the production lpars. If someone changes the config of the CF you
> might
> end up with a performance problem.
> 
> - Original Message -
> From: "Gibney, Dave" 
> Newsgroups: bit.listserv.ibm-main
> To: 
> Sent: Friday, August 07, 2009 2:12 PM
> Subject: Re: DASD: to share or not to share
> 
> 
> >> -Original Message-
> >> From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu]
On
> >> Behalf Of Scott Rowe
> >> Sent: Friday, August 07, 2009 8:51 AM
> >> To: IBM-MAIN@bama.ua.edu
> >> Subject: Re: DASD: to share or not to share
> >>
> >> Heck, I just use a CF partition running on my shared CP.  As long
as
> > you
> >
> > I've heard lots of scary stories about sharing CPs for CF. I
understood
> > the CF code was basically a non-terminating loop.
> >
> > I've even heard of folks using a CF lpar as a MSU soaker to keep
z/OS
> > MSU's down for SCRT purposes.
> >
> > I certainly don't want any hang of my production system because I
shared
> > with a CF.
> >
> >
> >> are only doing GRS, and maybe HSM CRQm etc, there is no problem.
My
> > two
> >> CF partitions combined average about 0.2% of my CPUs.  It may not
be
> > as
> >> fast as dedicated CPs, but it is still faster than GRS ring.
> >>
> >> >>> "Gibney, Dave"  08/06/09 5:23 PM >>>
> >> When ICF's are free, maybe we'll consider it :) I share resvols and
> >> other volumes with software load and other read only type
operational
> >> data between some or all of our 4 LPARs with no appreciable impact.
> >>
> >> Dave Gibney
> >> Information Technology Services
> >> Washington State University
> >>
> >>
> >> > -Original Message-
> >> > From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu]
On
> >> > Behalf Of Chase, John
> >> > Sent: Thursday, August 06, 2009 12:32 PM
> >> > To: IBM-MAIN@bama.ua.edu
> >> > Subject: Re: DASD: to share or not to share
> >> >
> >> > > -Original Message-
> >> > > From: IBM Mainframe Discussion List On Behalf Of McKown, John
> >> > >
> >> > > [ snip ]
> >> > >
> >> > > A basic SYSPLEX (what we are running) does not require a
Coupling
> >> > Facility (CF). With a CF, you have a
> >> > > Parallel Sysplex, which has some better facilities. If you only
> > have
> >> > an single CEC (box), then you
> >> > > don't need a Sysplex Timer, either. And, if you're on a single
> > box,
> >> > you could get an CFL speciality
> >> > > engine and run a single CF in a separate LPAR. In this case,
there
> >> is
> >> > no cabling required - it used
> >> > > "pseudo cabling" via the PR/SM hipervisor. Much like
hipersockets.
> >> >
> >> > You must be wishing really hard for an IFL.  :-)
> >> >
> >> > The "Integrated Coupling Facility" engine is an "ICF" (more
overload
> >> > for
> >> > ICF).
> >> >
> >> > -jc-
> >> >
> >> >
> >
--
> >> > For IBM-MAIN subscribe / signoff / archive access instructions,
> >> > send email to lists...@bama.ua.edu with the message: GET IBM-MAIN
> > INFO
> >> > Search the archives at http://bama.ua.edu/archives/ibm-main.html
> >>
> >>
--
> >> For IBM-MAIN subscribe / signoff / archive access instructions,
> >> send email to lists...@bama.ua.edu with the message: GET IBM-MAIN
INFO
> >> Search the archives

Re: DASD: to share or not to share

2009-08-07 Thread Frank Swarbrick
So for example, if our change control process for applications runs in DEV 
(which is how we have it in VSE) we should be able to update our production 
application loadlib PDSE from DEV exclusively and this will not be a problem, 
even without a Sysplex?  I am curious as to where I find this PDSE address 
space refresh command, and if it's really needed.  I just compiled a program 
into a PDSE in DEV and ran it in PROD and it ran the new version just fine.  Did 
it twice just to make sure.  No problem either time.

Frank

On 8/7/2009 at 3:27 AM, in message
, "Gibney,
Dave"  wrote:
> I agree, but I doubt this applies to someone converting from VSE to
> their first z/OS LPAR(s). And, I think with "careful" change management,
> you could update even shared PDSE without sysplex. Do the update from
> only one system ever and cause that PDSE address space (can't remember
> it right off) to refresh in the other LPARs.
> 
>> -Original Message-
>> From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On
>> Behalf Of Bruce Hewson
>> Sent: Thursday, August 06, 2009 10:51 PM
>> To: IBM-MAIN@bama.ua.edu 
>> Subject: Re: DASD: to share or not to share
>> 
>> Hi Dave,,
>> 
>> with systems that are only IPL'd every 3-5 months, YES, there will be
>> updates to those volumes. Changes still happen, and many fixes do not
>> require an IPL to implement, only careful change management.
>> 
>> On Thu, 6 Aug 2009 14:18:15 -0700, Gibney, Dave 
> wrote:
>> 
>> >It may not be "supported", but PDSE that are only read are safe to
>> >share betwixt anything. And, since it's a "live" resvol, it is of
>> >course not subject to updates, right?
>> >
>> >Dave Gibney
>> >Information Technology Services
>> >Washington State University
>> >
>> 
>> 
>> Regards
>> Bruce Hewson
>> 


The information contained in this electronic communication and any document 
attached hereto or transmitted herewith is confidential and intended for the 
exclusive use of the individual or entity named above.  If the reader of this 
message is not the intended recipient or the employee or agent responsible for 
delivering it to the intended recipient, you are hereby notified that any 
examination, use, dissemination, distribution or copying of this communication 
or any part thereof is strictly prohibited.  If you have received this 
communication in error, please immediately notify the sender by reply e-mail 
and destroy this communication.  Thank you.

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: DASD: to share or not to share

2009-08-07 Thread Frank Swarbrick
Thanks for more ammunition.  I will add it to the pile!  :-)
-- 

Frank Swarbrick
Applications Architect - Mainframe Applications Development
FirstBank Data Corporation
Lakewood, CO  USA
P: 303-235-1403
F: 303-235-2075


On 8/7/2009 at 2:24 AM, in message
<000401ca1738$89b1aa90$9d14ff...@hawkins1960@sbcglobal.net>, Ron Hawkins
 wrote:
> Frank,
> 
> You have lots of good feedback as far as sharing with integrity and security
> go, but I did focus on comment below about performance impact. There is
> always the likelihood that activity from one LPAR may impact the performance
> of another, and it can happen for the SYSRES volumes just as easily as any
> other volumes.
> 
> The thing is that the chance of this impact is quite small, and can be kept
> that way by proper measurement and tuning, which is something that is also
> missing in the Open Systems arena. I've seen poor LLA parms on a Dev system
> start to cause this sort of contention on SYSRES, but a review of LLA and
> VLF fixed this and improved performance for both Dev and Prod.
> 
> Duplicating volumes just in case you have a performance problem is like
> pulling out your teeth in case you get a hole in them. If the combined
> activity from each LPAR is already low then it is a no-brainer.
> 
> If the CIO requires separation for performance reasons, then where do you
> stop? Should there be separate channels and switches? Should DEV and Prod
> have dedicated Parity Groups? Should Dev and Prod have their own cache
> partitions? Etc. Where would it stop?
> 
> Ron
> 
>> 
>> Now the discussion has "blown up" in that there are now questions by our
>> CIO as to whether we really should be sharing even the "executable"
>> libraries.  Not only at the applications level, but also at the systems
>> level.  His thought is that if two LPARs share the same OS executables,
>> their use in DEV could possibly hinder performance of PROD.
>> 
> 




Re: DASD: to share or not to share

2009-08-07 Thread Joel Wolpert
You can configure it so that it is not always polling, and therefore not 
soaking up the CPU. In addition, you can set up an LPAR weight and cap the 
ICF LPAR. This will also prevent the ICF from consuming more of the CEC than 
you want. All of that being said, I would not recommend that you share a CF 
with the production LPARs. If someone changes the config of the CF, you might 
end up with a performance problem.


- Original Message - 
From: "Gibney, Dave" 

Newsgroups: bit.listserv.ibm-main
To: 
Sent: Friday, August 07, 2009 2:12 PM
Subject: Re: DASD: to share or not to share



-Original Message-
From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On
Behalf Of Scott Rowe
Sent: Friday, August 07, 2009 8:51 AM
To: IBM-MAIN@bama.ua.edu
Subject: Re: DASD: to share or not to share

> Heck, I just use a CF partition running on my shared CP.  As long as you

I've heard lots of scary stories about sharing CPs for CF. I understood
the CF code was basically a non-terminating loop.

I've even heard of folks using a CF lpar as a MSU soaker to keep z/OS
MSU's down for SCRT purposes.

I certainly don't want any hang of my production system because I shared
with a CF.

> are only doing GRS, and maybe HSM CRQm etc, there is no problem.  My two
> CF partitions combined average about 0.2% of my CPUs.  It may not be as
> fast as dedicated CPs, but it is still faster than GRS ring.

>>> "Gibney, Dave"  08/06/09 5:23 PM >>>
When ICF's are free, maybe we'll consider it :) I share resvols and
other volumes with software load and other read only type operational
data between some or all of our 4 LPARs with no appreciable impact.

Dave Gibney
Information Technology Services
Washington State University


> -Original Message-
> From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On
> Behalf Of Chase, John
> Sent: Thursday, August 06, 2009 12:32 PM
> To: IBM-MAIN@bama.ua.edu
> Subject: Re: DASD: to share or not to share
>
> > -Original Message-
> > From: IBM Mainframe Discussion List On Behalf Of McKown, John
> >
> > [ snip ]
> >
> > A basic SYSPLEX (what we are running) does not require a Coupling
> > Facility (CF). With a CF, you have a Parallel Sysplex, which has some
> > better facilities. If you only have a single CEC (box), then you
> > don't need a Sysplex Timer, either. And, if you're on a single box,
> > you could get a CFL speciality engine and run a single CF in a
> > separate LPAR. In this case, there is no cabling required - it uses
> > "pseudo cabling" via the PR/SM hypervisor. Much like HiperSockets.
>
> You must be wishing really hard for an IFL.  :-)
>
> The "Integrated Coupling Facility" engine is an "ICF" (more overload
> for
> ICF).
>
> -jc-
>
>




CONFIDENTIALITY/EMAIL NOTICE: The material in this transmission contains
confidential and privileged information intended only for the addressee.
If you are not the intended recipient, please be advised that you have
received this material in error and that any forwarding, copying,
printing, distribution, use or disclosure of the material is strictly
prohibited.  If you have received this material in error, please (i) do
not read it, (ii) reply to the sender that you received the message in
error, and (iii) erase or destroy the material. Emails are not secure and
can be intercepted, amended, lost or destroyed, or contain viruses. You
are deemed to have accepted these risks if you communicate with us by
email. Thank you.






Re: DASD: to share or not to share

2009-08-07 Thread McKown, John
> -Original Message-
> From: IBM Mainframe Discussion List 
> [mailto:ibm-m...@bama.ua.edu] On Behalf Of Gibney, Dave
> Sent: Friday, August 07, 2009 1:12 PM
> To: IBM-MAIN@bama.ua.edu
> Subject: Re: DASD: to share or not to share
> 
> > -Original Message-
> > From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On
> > Behalf Of Scott Rowe
> > Sent: Friday, August 07, 2009 8:51 AM
> > To: IBM-MAIN@bama.ua.edu
> > Subject: Re: DASD: to share or not to share
> > 
> > Heck, I just use a CF partition running on my shared CP.  As long as
> you
> 
> I've heard lots of scary stories about sharing CPs for CF. I 
> understood
> the CF code was basically a non-terminating loop. 

The normal way that the CF code works is what is called an "active wait". 
Basically, it soaks 100% of the CPU because it is polling the interface, rather 
than waiting for an interrupt. However, there is a command that you can issue 
to the LPAR running the code which is, IIRC, "DYNDISP ON" which changes from 
polling to "interrupt mode". This reduces the CPU requirement, but slows the 
response time down.
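The polling-versus-interrupt trade-off John describes is the classic "active wait" pattern. A minimal conceptual sketch in plain Python (all names here are illustrative, not CFCC internals) of why dynamic dispatch trades CPU for response time:

```python
import threading

class Interface:
    """Stand-in for the CF link interface; purely illustrative."""
    def __init__(self):
        self.event = threading.Event()
        self.requests = []

    def post(self, req):
        self.requests.append(req)
        self.event.set()  # only the interrupt-mode server needs this signal

def serve_polling(iface, max_polls=1_000_000):
    """Active wait: spin on the interface. Every empty poll is burned CPU,
    but a request is noticed on the very iteration after it arrives."""
    polls = 0
    while not iface.requests and polls < max_polls:
        polls += 1
    return polls  # CPU consumed while idle, measured in wasted polls

def serve_interrupt(iface, timeout=1.0):
    """DYNDISP-style: sleep until signalled. Near-zero CPU while idle, at
    the cost of wake-up/dispatch latency before the request is served."""
    if iface.event.wait(timeout):
        return iface.requests.pop(0)
    return None
```

With a request already queued, `serve_polling` returns immediately; with an idle interface it burns its whole poll budget, which is the near-100%-CPU behavior an active-wait CF shows.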

> 
> I've even heard of folks using a CF lpar as a MSU soaker to keep z/OS
> MSU's down for SCRT purposes.

It's easier to use GROUP CAPACITY for this on the z9+ and z/OS 1.8+. We are 
doing it.

> 
> I certainly don't want any hang of my production system 
> because I shared
> with a CF.

Me neither.

> 

--
John McKown 
Systems Engineer IV
IT

Administrative Services Group

HealthMarkets(r)

9151 Boulevard 26 * N. Richland Hills * TX 76010
(817) 255-3225 phone * (817)-961-6183 cell
john.mck...@healthmarkets.com * www.HealthMarkets.com

Confidentiality Notice: This e-mail message may contain confidential or 
proprietary information. If you are not the intended recipient, please contact 
the sender by reply e-mail and destroy all copies of the original message. 
HealthMarkets(r) is the brand name for products underwritten and issued by the 
insurance subsidiaries of HealthMarkets, Inc. -The Chesapeake Life Insurance 
Company(r), Mid-West National Life Insurance Company of TennesseeSM and The 
MEGA Life and Health Insurance Company.SM

 



Re: DASD: to share or not to share

2009-08-07 Thread Gibney, Dave
> -Original Message-
> From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On
> Behalf Of Scott Rowe
> Sent: Friday, August 07, 2009 8:51 AM
> To: IBM-MAIN@bama.ua.edu
> Subject: Re: DASD: to share or not to share
> 
> Heck, I just use a CF partition running on my shared CP.  As long as you

I've heard lots of scary stories about sharing CPs for CF. I understood
the CF code was basically a non-terminating loop. 

I've even heard of folks using a CF lpar as a MSU soaker to keep z/OS
MSU's down for SCRT purposes.

I certainly don't want any hang of my production system because I shared
with a CF.


> are only doing GRS, and maybe HSM CRQm etc, there is no problem.  My two
> CF partitions combined average about 0.2% of my CPUs.  It may not be as
> fast as dedicated CPs, but it is still faster than GRS ring.
> 
> >>> "Gibney, Dave"  08/06/09 5:23 PM >>>
> When ICF's are free, maybe we'll consider it :) I share resvols and
> other volumes with software load and other read only type operational
> data between some or all of our 4 LPARs with no appreciable impact.
> 
> Dave Gibney
> Information Technology Services
> Washington State University
> 
> 
> > -Original Message-
> > From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On
> > Behalf Of Chase, John
> > Sent: Thursday, August 06, 2009 12:32 PM
> > To: IBM-MAIN@bama.ua.edu
> > Subject: Re: DASD: to share or not to share
> >
> > > -Original Message-
> > > From: IBM Mainframe Discussion List On Behalf Of McKown, John
> > >
> > > [ snip ]
> > >
> > > A basic SYSPLEX (what we are running) does not require a Coupling
> > > Facility (CF). With a CF, you have a Parallel Sysplex, which has
> > > some better facilities. If you only have a single CEC (box), then
> > > you don't need a Sysplex Timer, either. And, if you're on a single
> > > box, you could get a CFL speciality engine and run a single CF in a
> > > separate LPAR. In this case, there is no cabling required - it uses
> > > "pseudo cabling" via the PR/SM hypervisor. Much like HiperSockets.
> >
> > You must be wishing really hard for an IFL.  :-)
> >
> > The "Integrated Coupling Facility" engine is an "ICF" (more overload
> > for
> > ICF).
> >
> > -jc-
> >
> >


Re: DASD: to share or not to share

2009-08-07 Thread Guy Gardoit
On Thu, Aug 6, 2009 at 10:24 AM, Staller, Allan wrote:

> A basic sysplex will provide z/OS availability, an improvement over
> single image. This can be done for the cost of a few CTC connections
> between LPARS. If you have the spare channels to define as CTC's, this
> is fairly easy to do.
>
> IIRC, (I know someone will correct me if I am wrong) a basic sysplex
> will also provide PDSE sharing within the sysplex.
>
> AFAIK there is no way with IBM vanilla code to share PDSE's across a
> SYSPLEX boundary (either monoplex or multi-system). Some 3rd party
> products will support this.


Not that I am aware of.  This is an architectural limitation on PDSEs (XCF
is involved).  No 3rd party product is going to get around that.  MIM does
not, and that pretty much seals the deal.  Sharing of PDSEs across sysplex
boundaries or between (among) non-sysplexed LPARs, R/O or otherwise, is
not supported.  Break the rules and you'll get broken results.



>
>
> 
>
> > Unless your two systems are in the same Sysplex (Base or Parallel) I
> > would highly recommend that you do not share your sysres disk...they
> > contain PDSE datasets that you shouldn't share outside a sysplex.
> >
> > Similarly, any disk you do share should not contain PDSE datasets.
>
> We're slowly coming to this realization.  We don't have a Sysplex, and
> sharing PDSE's, while it works somewhat, doesn't work that great.  My
> personal libraries are PDSEs and if I am logged on to both PROD and DEV
> at the same time I have to make sure I am not trying to edit members in
> the same PDSE in both environments.  Blech.  I thought PDSEs were the
> future?
> 
>
>



-- 
Guy Gardoit
z/OS Systems Programming



Re: DASD: to share or not to share

2009-08-07 Thread Scott Rowe
Heck, I just use a CF partition running on my shared CP.  As long as you are 
only doing GRS, and maybe HSM CRQm etc, there is no problem.  My two CF 
partitions combined average about 0.2% of my CPUs.  It may not be as fast as 
dedicated CPs, but it is still faster than GRS ring.
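Why a CF on slow shared CPs can still beat a GRS ring comes down to message topology. A back-of-the-envelope sketch (illustrative only, not real GRS internals):

```python
def ring_messages(n_systems: int) -> int:
    """Ring mode: the circulating authority message must visit every member
    before an ENQ is known plex-wide, so latency grows with the plex."""
    return n_systems

def star_messages(n_systems: int) -> int:
    """Star mode: one request/response round trip to the CF lock structure,
    independent of how many systems are in the sysplex."""
    return 2
```

Even if each star hop runs several times slower on a shared engine, the constant two-message cost wins as the number of systems grows.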

>>> "Gibney, Dave"  08/06/09 5:23 PM >>>
When ICF's are free, maybe we'll consider it :) I share resvols and
other volumes with software load and other read only type operational
data between some or all of our 4 LPARs with no appreciable impact.

Dave Gibney
Information Technology Services
Washington State University


> -Original Message-
> From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On
> Behalf Of Chase, John
> Sent: Thursday, August 06, 2009 12:32 PM
> To: IBM-MAIN@bama.ua.edu
> Subject: Re: DASD: to share or not to share
> 
> > -Original Message-
> > From: IBM Mainframe Discussion List On Behalf Of McKown, John
> >
> > [ snip ]
> >
> > A basic SYSPLEX (what we are running) does not require a Coupling
> > Facility (CF). With a CF, you have a Parallel Sysplex, which has some
> > better facilities. If you only have a single CEC (box), then you
> > don't need a Sysplex Timer, either. And, if you're on a single box,
> > you could get a CFL speciality engine and run a single CF in a
> > separate LPAR. In this case, there is no cabling required - it uses
> > "pseudo cabling" via the PR/SM hypervisor. Much like HiperSockets.
> 
> You must be wishing really hard for an IFL.  :-)
> 
> The "Integrated Coupling Facility" engine is an "ICF" (more overload
> for
> ICF).
> 
> -jc-
> 






Re: DASD: to share or not to share

2009-08-07 Thread McKown, John
> -Original Message-
> From: IBM Mainframe Discussion List 
> [mailto:ibm-m...@bama.ua.edu] On Behalf Of Thompson, Steve
> Sent: Friday, August 07, 2009 8:42 AM
> To: IBM-MAIN@bama.ua.edu
> Subject: Re: DASD: to share or not to share
> 
> -Original Message-
> From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On
> Behalf Of McKown, John
> Sent: Thursday, August 06, 2009 1:40 PM
> To: IBM-MAIN@bama.ua.edu
> Subject: Re: DASD: to share or not to share
> 
> 
> > 
> 
> It is VSAM Record Level Sharing (not Locking).
> 
> 
> 
> Within CICS it is Record Level Locking (or at least it was).  
> It appears
> that they changed that with CICS/TS v2 (?). 
> 
> Meanwhile, I've not kept up with all the VSAM changes and 
> options as it
> applies to batch with CICS (I know that you have to do specific things
> to get the commit and all to work the same).
> 
> Regards,
> Steve Thompson

Ah. I see. Sorry about that. Two different but similar things with similar 
names. My bad.

--
John McKown 
Systems Engineer IV
IT

Administrative Services Group

HealthMarkets(r)

9151 Boulevard 26 * N. Richland Hills * TX 76010
(817) 255-3225 phone * (817)-961-6183 cell
john.mck...@healthmarkets.com * www.HealthMarkets.com


 



Re: DASD: to share or not to share

2009-08-07 Thread Thompson, Steve
-Original Message-
From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On
Behalf Of McKown, John
Sent: Thursday, August 06, 2009 1:40 PM
To: IBM-MAIN@bama.ua.edu
Subject: Re: DASD: to share or not to share


> 

It is VSAM Record Level Sharing (not Locking).



Within CICS it is Record Level Locking (or at least it was).  It appears
that they changed that with CICS/TS v2 (?). 

Meanwhile, I've not kept up with all the VSAM changes and options as it
applies to batch with CICS (I know that you have to do specific things
to get the commit and all to work the same).

Regards,
Steve Thompson

-- Opinions expressed by this poster may not reflect those of poster's
employer --

 



Re: DASD: to share or not to share

2009-08-07 Thread Gibney, Dave
I agree, but I doubt this applies to someone converting from VSE to
their first z/OS LPAR(s). And, I think with "careful" change management,
you could update even shared PDSE without sysplex. Do the update from
only one system ever and cause that PDSE address space (can't remember
it right off) to refresh in the other LPARs.

> -Original Message-
> From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On
> Behalf Of Bruce Hewson
> Sent: Thursday, August 06, 2009 10:51 PM
> To: IBM-MAIN@bama.ua.edu
> Subject: Re: DASD: to share or not to share
> 
> Hi Dave,,
> 
> with systems that are only IPL'd every 3-5 months, YES, there will be
> updates to those volumes. Changes still happen, and many fixes do not
> require an IPL to implement, only careful change management.
> 
> On Thu, 6 Aug 2009 14:18:15 -0700, Gibney, Dave 
wrote:
> 
> >It may not be "supported", but PDSE that are only read are safe to
> >share betwixt anything. And, since it's a "live" resvol, it is of
> >course not subject to updates, right?
> >
> >Dave Gibney
> >Information Technology Services
> >Washington State University
> >
> 
> 
> Regards
> Bruce Hewson
> 


Re: DASD: to share or not to share

2009-08-07 Thread Ron Hawkins
Frank,

You have lots of good feedback as far as sharing with integrity and security
go, but I did focus on comment below about performance impact. There is
always the likelihood that activity from one LPAR may impact the performance
of another, and it can happen for the SYSRES volumes just as easily as any
other volumes.

The thing is that the chance of this impact is quite small, and can be kept
that way by proper measurement and tuning, which is something that is also
missing in the Open Systems arena. I've seen poor LLA parms on a Dev system
start to cause this sort of contention on SYSRES, but a review of LLA and
VLF fixed this and improved performance for both Dev and Prod.

Duplicating volumes just in case you have a performance problem is like
pulling out your teeth in case you get a hole in them. If the combined
activity from each LPAR is already low then it is a no-brainer.

If the CIO requires separation for performance reasons, then where do you
stop? Should there be separate channels and switches? Should DEV and Prod
have dedicated Parity Groups? Should Dev and Prod have their own cache
partitions? Etc. Where would it stop?

Ron

> 
> Now the discussion has "blown up" in that there are now questions by our
> CIO as to whether we really should be sharing even the "executable"
> libraries.  Not only at the applications level, but also at the systems
> level.  His thought is that if two LPARs share the same OS executables,
> their use in DEV could possibly hinder performance of PROD.
> 



Re: DASD: to share or not to share

2009-08-06 Thread Bruce Hewson
Hi Dave,,

with systems that are only IPL'd every 3-5 months, YES, there will be updates 
to those volumes. Changes still happen, and many fixes do not require an IPL 
to implement, only careful change management.

On Thu, 6 Aug 2009 14:18:15 -0700, Gibney, Dave  wrote:

>It may not be "supported", but PDSE that are only read are safe to share
>betwixt anything. And, since it's a "live" resvol, it is of course not
>subject to updates, right?
>
>Dave Gibney
>Information Technology Services
>Washington State University
>


Regards
Bruce Hewson



Re: DASD: to share or not to share

2009-08-06 Thread Gibney, Dave
When ICF's are free, maybe we'll consider it :) I share resvols and
other volumes with software load and other read only type operational
data between some or all of our 4 LPARs with no appreciable impact.

Dave Gibney
Information Technology Services
Washington State University


> -Original Message-
> From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On
> Behalf Of Chase, John
> Sent: Thursday, August 06, 2009 12:32 PM
> To: IBM-MAIN@bama.ua.edu
> Subject: Re: DASD: to share or not to share
> 
> > -Original Message-
> > From: IBM Mainframe Discussion List On Behalf Of McKown, John
> >
> > [ snip ]
> >
> > A basic SYSPLEX (what we are running) does not require a Coupling
> > Facility (CF). With a CF, you have a Parallel Sysplex, which has some
> > better facilities. If you only have a single CEC (box), then you
> > don't need a Sysplex Timer, either. And, if you're on a single box,
> > you could get a CFL speciality engine and run a single CF in a
> > separate LPAR. In this case, there is no cabling required - it uses
> > "pseudo cabling" via the PR/SM hypervisor. Much like HiperSockets.
> 
> You must be wishing really hard for an IFL.  :-)
> 
> The "Integrated Coupling Facility" engine is an "ICF" (more overload
> for
> ICF).
> 
> -jc-
> 


Re: DASD: to share or not to share

2009-08-06 Thread Gibney, Dave
It may not be "supported", but PDSE that are only read are safe to share
betwixt anything. And, since it's a "live" resvol, it is of course not
subject to updates, right?

Dave Gibney
Information Technology Services
Washington State University


> -Original Message-
> From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On
> Behalf Of Bruce Hewson
> Sent: Thursday, August 06, 2009 4:28 AM
> To: IBM-MAIN@bama.ua.edu
> Subject: Re: DASD: to share or not to share
> 
> Hi Frank,
> 
> Unless your two systems are in the same Sysplex (Base or Parallel) I
> would highly recommend that you do not share your sysres disk...they
> contain PDSE datasets that you shouldn't share outside a sysplex.
> 
> Similarly, any disk you do share should not contain PDSE datasets.
> 
> Regards
> Bruce Hewson
> 


Re: DASD: to share or not to share

2009-08-06 Thread Chase, John
> -Original Message-
> From: IBM Mainframe Discussion List On Behalf Of McKown, John
> 
> [ snip ]
> 
> A basic SYSPLEX (what we are running) does not require a Coupling
> Facility (CF). With a CF, you have a Parallel Sysplex, which has some
> better facilities. If you only have a single CEC (box), then you don't
> need a Sysplex Timer, either. And, if you're on a single box, you could
> get a CFL speciality engine and run a single CF in a separate LPAR. In
> this case, there is no cabling required - it uses "pseudo cabling" via
> the PR/SM hypervisor. Much like HiperSockets.

You must be wishing really hard for an IFL.  :-)

The "Integrated Coupling Facility" engine is an "ICF" (more overload for
ICF).

-jc-



Re: DASD: to share or not to share

2009-08-06 Thread Thompson, Steve
-Original Message-
From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On
Behalf Of Frank Swarbrick
Sent: Thursday, August 06, 2009 1:01 PM
To: IBM-MAIN@bama.ua.edu
Subject: Re: DASD: to share or not to share

>>> On 8/6/2009 at 11:47 AM, in message
<45d79eacefba9b428e3d400e924d36b902557...@iwdubcormsg007.sci.local>,
"Thompson,
Steve"  wrote:
> -Original Message-
> From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On
> Behalf Of Scott Rowe
> Sent: Thursday, August 06, 2009 12:44 PM
> To: IBM-MAIN@bama.ua.edu 
> Subject: Re: DASD: to share or not to share
> 
> I think the real question here is: Why don't you have a SYSPLEX?  There
> is little to no cost involved, and many benefits.
> 

> If you set this up correctly, you will get Record Level Locking for
VSAM
> which will give you some benefits that you lost in moving from VSE. It
> is something to think about, if you are heavily using VSAM with
onlines
> and batch.

Can you explain this a bit more?  I will admit to not even knowing how
this works on VSE.



Within DOS/VS-VSE the buffers are handled by DOS for an image. So you
get to better share VSAM between Partitions (as in a VSE partition) than
you can between z/OS Address Spaces. This allows, as I recall (it has
been several years since I've done any VSE-VSAM work on a VSE system),
more than one partition to have an ACB open to output to a file (share
options that are NOT available on MVS if you compare your VSE IDCAMS
against z/OS IDCAMS doc).

With RLL (Record Level Locking), you can now have batch and onlines
updating VSAM files at the same time. This goes a step beyond what VSE
allowed, giving you certain performance boosts that were lost in going
to z/OS where buffers are controlled within an address space as opposed
to within an instance of VSE (image). 
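As a hedged illustration of the share-options difference described above, here is a minimal IDCAMS sketch (cluster name, volume, and sizes are made up, not from any real system):

```jcl
//DEFVSAM  EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  /* Hypothetical KSDS. SHAREOPTIONS(2 3): cross-region value 2    */
  /* allows one writer plus readers within a system; cross-system  */
  /* value 3 permits access from other systems, but on z/OS it     */
  /* leaves data integrity entirely to the application - unlike    */
  /* VSE, where the image-level buffer handling did more for you.  */
  DEFINE CLUSTER (NAME(TEST.CUST.KSDS) -
         INDEXED KEYS(16 0) RECORDSIZE(200 400) -
         CYLINDERS(10 5) VOLUMES(VSAM01) -
         SHAREOPTIONS(2 3))
/*
```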

Regards,
Steve Thompson

-- Opinions expressed by this poster may not reflect those of poster's
employer --



Re: DASD: to share or not to share

2009-08-06 Thread McKown, John
> -Original Message-
> From: IBM Mainframe Discussion List 
> [mailto:ibm-m...@bama.ua.edu] On Behalf Of Frank Swarbrick
> Sent: Thursday, August 06, 2009 1:01 PM
> To: IBM-MAIN@bama.ua.edu
> Subject: Re: DASD: to share or not to share

> > If you set this up correctly, you will get Record Level 
> Locking for VSAM
> > which will give you some benefits that you lost in moving 
> from VSE. It
> > is something to think about, if you are heavily using VSAM 
> with onlines
> > and batch.
> 
> Can you explain this a bit more?  I will admit to not even 
> knowing how this works on VSE.
> 
> Thanks,
> Frank
> 

It is VSAM Record Level Sharing (not Locking).

VSAM/RLS requires a Parallel Sysplex (Coupling Facility) to work. But it 
basically allows you to share a single VSAM file in read/write mode between 
CICS regions with integrity. In addition, it allows batch to have READ-ONLY(!) 
access to the same file, again with read integrity. If you need to share a VSAM 
file in READ-WRITE mode between one or more CICS and/or batch jobs (i.e. full 
sharing in R/W mode between any number of CICS and batch jobs concurrently), 
then you need an extra charge feature called "Transactional VSAM" aka DFSMStvs. 
This builds on top of VSAM/RLS, so you need VSAM/RLS working before using it.
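A hedged sketch of what RLS access looks like from batch JCL (the data set and program names are hypothetical; RLS=CR and RLS=NRI are the DD-statement options for consistent read vs. no read integrity):

```jcl
//* Batch read of a VSAM sphere through the RLS server (SMSVSAM).
//* RLS=CR requests consistent read; the data set must be
//* RLS-eligible (defined or altered with a LOG attribute) and the
//* SMSVSAM address space must be active on the system.
//RPTSTEP  EXEC PGM=CUSTRPT
//CUSTIN   DD DSN=PROD.CUST.KSDS,DISP=SHR,RLS=CR
```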

--
John McKown 
Systems Engineer IV
IT

Administrative Services Group

HealthMarkets(r)

9151 Boulevard 26 * N. Richland Hills * TX 76010
(817) 255-3225 phone * (817)-961-6183 cell
john.mck...@healthmarkets.com * www.HealthMarkets.com

Confidentiality Notice: This e-mail message may contain confidential or 
proprietary information. If you are not the intended recipient, please contact 
the sender by reply e-mail and destroy all copies of the original message. 
HealthMarkets(r) is the brand name for products underwritten and issued by the 
insurance subsidiaries of HealthMarkets, Inc. -The Chesapeake Life Insurance 
Company(r), Mid-West National Life Insurance Company of TennesseeSM and The 
MEGA Life and Health Insurance Company.SM

 



Re: DASD: to share or not to share

2009-08-06 Thread Schwarz, Barry A
For the same reason we have only 1 LPAR defined, we have no user Unix
files, and we don't use PDSEs.  Some bells and whistles are not worth
the overhead.

-Original Message-
From: Thompson, Steve 
Sent: Thursday, August 06, 2009 10:48 AM
To: IBM-MAIN@bama.ua.edu
Subject: Re: DASD: to share or not to share

I think the real question here is: Why don't you have a SYSPLEX?  There
is little to no cost involved, and many benefits.



Re: DASD: to share or not to share

2009-08-06 Thread McKown, John
> -Original Message-
> From: IBM Mainframe Discussion List 
> [mailto:ibm-m...@bama.ua.edu] On Behalf Of Ted MacNEIL
> Sent: Thursday, August 06, 2009 1:12 PM
> To: IBM-MAIN@bama.ua.edu
> Subject: Re: DASD: to share or not to share
> 

> 
> There is the cost of timers and coupling facilities -- times two.

A basic SYSPLEX (what we are running) does not require a Coupling Facility 
(CF). With a CF, you have a Parallel Sysplex, which has some better facilities. 
If you only have a single CEC (box), then you don't need a Sysplex Timer, 
either. And, if you're on a single box, you could get a CFL speciality engine 
and run a single CF in a separate LPAR. In this case, there is no cabling 
required - it uses "pseudo cabling" via the PR/SM hypervisor. Much like 
HiperSockets.



> Again, it's optional what you implement after SYSPLEX.
> 
> But, it's also a management/back-up decision, not truly technical.
> -
> Too busy driving to stop for gas!

--
John McKown 
Systems Engineer IV
IT

Administrative Services Group

HealthMarkets(r)

9151 Boulevard 26 * N. Richland Hills * TX 76010
(817) 255-3225 phone * (817)-961-6183 cell
john.mck...@healthmarkets.com * www.HealthMarkets.com


 



Re: DASD: to share or not to share

2009-08-06 Thread Ted MacNEIL
>You would have to ask someone higher in the food chain than me.  :-)  I'm 
>wondering if they think it's more difficult to set up or more expensive than 
>it really is?  No idea, really.  I'm not a systems guy.

>I will pose the question.  

There is the cost of timers and coupling facilities -- times two.
And, a few DASD volumes.

And, of course, SYSPROG time.
So, that depends.
But, it's an insurance policy.

Once implemented, then you can decide on Generic Resource, CICSPLEX/SM, 
IMSPLEX, DB2PLEX, etc.

But, you are not committed to them, just because you have a SYSPLEX.
Of course, if you are big with VSAM, there are advantages, as well.

I worked for one of the big Canadian Banks, in 1994, when we went to Parallel 
SYSPLEX, and as the first thing, got rid of IMS XRF, by using an IMSPLEX (and a 
DB2PLEX).

When I left the environment, after out-sourcing, they were just implementing 
GDPS.

Again, it's optional what you implement after SYSPLEX.

But, it's also a management/back-up decision, not truly technical.
-
Too busy driving to stop for gas!



Re: DASD: to share or not to share

2009-08-06 Thread Frank Swarbrick
>>> On 8/6/2009 at 11:47 AM, in message
<45d79eacefba9b428e3d400e924d36b902557...@iwdubcormsg007.sci.local>, "Thompson,
Steve"  wrote:
> -Original Message-
> From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On
> Behalf Of Scott Rowe
> Sent: Thursday, August 06, 2009 12:44 PM
> To: IBM-MAIN@bama.ua.edu 
> Subject: Re: DASD: to share or not to share
> 
> I think the real question here is: Why don't you have a SYSPLEX?  There
> is little to no cost involved, and many benefits.
> 
>>>> Frank Swarbrick  8/6/2009 12:15 PM
>>>>
> 
> 
> We're slowly coming to this realization.  We don't have a Sysplex, and
> sharing PDSE's, while it works somewhat, doesn't work that great.  My
> personal libraries are PDSEs and if I am logged on to both PROD and DEV
> at the same time I have to make sure I am not trying to edit members in
> the same PDSE in both environments.  Blech.  I thought PDSEs were the
> future?
> 
> 
> 
> If you set this up correctly, you will get Record Level Locking for VSAM
> which will give you some benefits that you lost in moving from VSE. It
> is something to think about, if you are heavily using VSAM with onlines
> and batch.

Can you explain this a bit more?  I will admit to not even knowing how this 
works on VSE.

Thanks,
Frank

-- 

Frank Swarbrick
Applications Architect - Mainframe Applications Development
FirstBank Data Corporation
Lakewood, CO  USA
P: 303-235-1403
F: 303-235-2075




The information contained in this electronic communication and any document 
attached hereto or transmitted herewith is confidential and intended for the 
exclusive use of the individual or entity named above.  If the reader of this 
message is not the intended recipient or the employee or agent responsible for 
delivering it to the intended recipient, you are hereby notified that any 
examination, use, dissemination, distribution or copying of this communication 
or any part thereof is strictly prohibited.  If you have received this 
communication in error, please immediately notify the sender by reply e-mail 
and destroy this communication.  Thank you.



Re: DASD: to share or not to share

2009-08-06 Thread Frank Swarbrick
You would have to ask someone higher in the food chain than me.  :-)  I'm 
wondering if they think it's more difficult to set up or more expensive than it 
really is?  No idea, really.  I'm not a systems guy.

I will pose the question.  

Frank

On 8/6/2009 at 11:43 AM, in message <4a7ade09.8489.00d...@joann.com>, Scott
Rowe  wrote:
> I think the real question here is: Why don't you have a SYSPLEX?  There is 
> little to no cost involved, and many benefits.
> 
>>> Frank Swarbrick  8/6/2009 12:15 PM >>>
>>> On 8/6/2009 at 5:28 AM, in message
> , Bruce Hewson
>  wrote:
>> Hi Frank,
>> 
>> Unless your two systems are in the same Sysplex (Base or Parallel) I would 
>> highly recommend that you do not share your sysres disk...they contain PDSE 
>> datasets that you shouldn't share outside a sysplex.
>> 
>> Similarly, any disk you do share should not contain PDSE datasets.
> 
> We're slowly coming to this realization.  We don't have a Sysplex, and 
> sharing PDSE's, while it works somewhat, doesn't work that great.  My 
> personal libraries are PDSEs and if I am logged on to both PROD and DEV at 
> the same time I have to make sure I am not trying to edit members in the same 
> PDSE in both environments.  Blech.  I thought PDSEs were the future?
> 
> Frank
> 
> 
> 
> 
> 
> 
> 
> 
> CONFIDENTIALITY/EMAIL NOTICE: The material in this transmission contains 
> confidential and privileged information intended only for the addressee.  If 
> you are not the intended recipient, please be advised that you have received 
> this material in error and that any forwarding, copying, printing, 
> distribution, use or disclosure of the material is strictly prohibited.  If 
> you have received this material in error, please (i) do not read it, (ii) 
> reply to the sender that you received the message in error, and (iii) erase 
> or destroy the material. Emails are not secure and can be intercepted, 
> amended, lost or destroyed, or contain viruses. You are deemed to have 
> accepted these risks if you communicate with us by email. Thank you.
> 
> 





Re: DASD: to share or not to share

2009-08-06 Thread Thompson, Steve
-Original Message-
From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On
Behalf Of Scott Rowe
Sent: Thursday, August 06, 2009 12:44 PM
To: IBM-MAIN@bama.ua.edu
Subject: Re: DASD: to share or not to share

I think the real question here is: Why don't you have a SYSPLEX?  There
is little to no cost involved, and many benefits.

>>> Frank Swarbrick  8/6/2009 12:15 PM
>>>


We're slowly coming to this realization.  We don't have a Sysplex, and
sharing PDSE's, while it works somewhat, doesn't work that great.  My
personal libraries are PDSEs and if I am logged on to both PROD and DEV
at the same time I have to make sure I am not trying to edit members in
the same PDSE in both environments.  Blech.  I thought PDSEs were the
future?



If you set this up correctly, you will get Record Level Locking for VSAM
which will give you some benefits that you lost in moving from VSE. It
is something to think about, if you are heavily using VSAM with onlines
and batch.

Regards,
Steve Thompson

-- Opinions expressed by this poster may not reflect those of poster's
employer --



Re: DASD: to share or not to share

2009-08-06 Thread Scott Rowe
I think the real question here is: Why don't you have a SYSPLEX?  There is 
little to no cost involved, and many benefits.

>>> Frank Swarbrick  8/6/2009 12:15 PM >>>
>>> On 8/6/2009 at 5:28 AM, in message
, Bruce Hewson
 wrote:
> Hi Frank,
> 
> Unless your two systems are in the same Sysplex (Base or Parallel) I would 
> highly recommend that you do not share your sysres disk...they contain PDSE 
> datasets that you shouldn't share outside a sysplex.
> 
> Similarly, any disk you do share should not contain PDSE datasets.

We're slowly coming to this realization.  We don't have a Sysplex, and sharing 
PDSE's, while it works somewhat, doesn't work that great.  My personal 
libraries are PDSEs and if I am logged on to both PROD and DEV at the same time 
I have to make sure I am not trying to edit members in the same PDSE in both 
environments.  Blech.  I thought PDSEs were the future?

Frank












Re: DASD: to share or not to share

2009-08-06 Thread Staller, Allan
A basic sysplex will provide z/OS availability, an improvement over
single image. This can be done for the cost of a few CTC connections
between LPARS. If you have the spare channels to define as CTC's, this
is fairly easy to do.

IIRC, (I know someone will correct me if I am wrong) a basic sysplex
will also provide PDSE sharing within the sysplex. 

AFAIK there is no way with IBM vanilla code to share PDSE's across a
SYSPLEX boundary (either monoplex or multi-system). Some 3rd party
products will support this.
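A basic sysplex of the kind described above needs surprisingly little parmlib setup; a hedged COUPLExx sketch (sysplex name, CDS names, and CTC device numbers are all hypothetical placeholders):

```jcl
/* COUPLE00 - basic (non-parallel) sysplex, XCF over CTCs only.   */
/* No coupling facility is referenced at all: the CTC paths and   */
/* the couple data sets are enough for a base sysplex.            */
COUPLE SYSPLEX(PLEX1)
       PCOUPLE(SYS1.XCF.CDS01)
       ACOUPLE(SYS1.XCF.CDS02)
PATHOUT DEVICE(0E40)
PATHIN  DEVICE(0E50)
```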



> Unless your two systems are in the same Sysplex (Base or Parallel) I
would 
> highly recommend that you do not share your sysres disk...they contain
PDSE 
> datasets that you shouldn't share outside a sysplex.
> 
> Similarly, any disk you do share should not contain PDSE datasets.

We're slowly coming to this realization.  We don't have a Sysplex, and
sharing PDSE's, while it works somewhat, doesn't work that great.  My
personal libraries are PDSEs and if I am logged on to both PROD and DEV
at the same time I have to make sure I am not trying to edit members in
the same PDSE in both environments.  Blech.  I thought PDSEs were the
future?




Re: DASD: to share or not to share

2009-08-06 Thread Bob Shannon
>I thought PDSEs were the future?

I prefer not to use PDSEs, although they finally work fairly well. The obvious 
problem is that PDSE sharing requires a Sysplex. I suspect, but have no factual 
data, that shops without a Sysplex are in the minority. There are pros and cons 
to having test and production in the same Sysplex. If the PDSE limitation is 
onerous, go back to PDSs.

Bob Shannon
Rocket Software



Re: DASD: to share or not to share

2009-08-06 Thread Frank Swarbrick
>>> On 8/6/2009 at 5:28 AM, in message
, Bruce Hewson
 wrote:
> Hi Frank,
> 
> Unless your two systems are in the same Sysplex (Base or Parallel) I would 
> highly recommend that you do not share your sysres disk...they contain PDSE 
> datasets that you shouldn't share outside a sysplex.
> 
> Similarly, any disk you do share should not contain PDSE datasets.

We're slowly coming to this realization.  We don't have a Sysplex, and sharing 
PDSE's, while it works somewhat, doesn't work that great.  My personal 
libraries are PDSEs and if I am logged on to both PROD and DEV at the same time 
I have to make sure I am not trying to edit members in the same PDSE in both 
environments.  Blech.  I thought PDSEs were the future?

Frank




The information contained in this electronic communication and any document 
attached hereto or transmitted herewith is confidential and intended for the 
exclusive use of the individual or entity named above.  If the reader of this 
message is not the intended recipient or the employee or agent responsible for 
delivering it to the intended recipient, you are hereby notified that any 
examination, use, dissemination, distribution or copying of this communication 
or any part thereof is strictly prohibited.  If you have received this 
communication in error, please immediately notify the sender by reply e-mail 
and destroy this communication.  Thank you.



Re: DASD: to share or not to share

2009-08-06 Thread Bruce Hewson
Hi Frank,

Unless your two systems are in the same Sysplex (Base or Parallel) I would 
highly recommend that you do not share your sysres disk...they contain PDSE 
datasets that you shouldn't share outside a sysplex.

Similarly, any disk you do share should not contain PDSE datasets.

Regards
Bruce Hewson



Re: DASD: to share or not to share

2009-08-05 Thread Frank Swarbrick
>>> On 8/5/2009 at 12:39 PM, in message
<45d79eacefba9b428e3d400e924d36b902557...@iwdubcormsg007.sci.local>, "Thompson,
Steve"  wrote:
> Again, stop thinking VSE and tell your CIO to stop thinking 10,000
> chickens trying to deliver the mail.

All of the replies so far have been very helpful.  Thank you!
I especially liked yours, Steve.  10,000 chickens indeed!

I think the main point everyone is trying to make is that *z/OS is designed to 
work this way*, so we should take advantage of it.  I agree 100%.  We'll see 
what happens.

Thanks again!!
Frank

-- 

Frank Swarbrick
Applications Architect - Mainframe Applications Development
FirstBank Data Corporation
Lakewood, CO  USA
P: 303-235-1403
F: 303-235-2075




The information contained in this electronic communication and any document 
attached hereto or transmitted herewith is confidential and intended for the 
exclusive use of the individual or entity named above.  If the reader of this 
message is not the intended recipient or the employee or agent responsible for 
delivering it to the intended recipient, you are hereby notified that any 
examination, use, dissemination, distribution or copying of this communication 
or any part thereof is strictly prohibited.  If you have received this 
communication in error, please immediately notify the sender by reply e-mail 
and destroy this communication.  Thank you.



Re: DASD: to share or not to share

2009-08-05 Thread Schwarz, Barry A
Then the person doing the update better specify OLD or MOD.  No one but
the BINDER (or a program with the equivalent ability to update the
enqueue) should ever be updating a shared dataset.

-Original Message-
From: Ted MacNEIL 
Sent: Wednesday, August 05, 2009 12:45 PM
To: IBM-MAIN@bama.ua.edu
Subject: Re: DASD: to share or not to share

>You just need to be sure that GRS is setup to convert hardware reserves
to enqueues

Yes!

>and have everyone use DISP=SHR unless they really need to update the
dataset.

Disagree!
Too generic.

This one has nothing to do with shared DASD!
It is also an issue within a single system.

What if you want a consistent copy with no updates while you are
reading?



Re: DASD: to share or not to share

2009-08-05 Thread John Kington
>>and have everyone use DISP=SHR unless they really need to update the dataset.

>Disagree!
>Too generic.
>
>This one has nothing to do with shared DASD!
>It is also an issue within a single system.
>
>What if you want a consistent copy with no updates while you are reading?

I was thinking more in context of loadlib concatenations. If someone in our 
shop wants to update a file without getting exclusive control, I will be happy 
to supply the rails and build the fire to heat up the tar.

Regards,
John



Re: DASD: to share or not to share

2009-08-05 Thread Ted MacNEIL
>You just need to be sure that GRS is setup to convert hardware reserves to 
>enqueues

Yes!

>and have everyone use DISP=SHR unless they really need to update the dataset.

Disagree!
Too generic.

This one has nothing to do with shared DASD!
It is also an issue within a single system.

What if you want a consistent copy with no updates while you are reading?

-
Too busy driving to stop for gas!



Re: DASD: to share or not to share

2009-08-05 Thread John Kington
Frank,

The only resources that we separate is the sandbox where system programmers do 
dangerous things like bringing up new levels of operating systems, etc. z/OS 
has always had robust sharing.

I/O from the development LPAR would have negligible impact on the production LPAR 
even reading the same exact data since data is staged through cache on all 
modern dasd subsystems. You just need to be sure that GRS is setup to convert 
hardware reserves to enqueues and have everyone use DISP=SHR unless they really 
need to update the dataset.
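The reserve-to-enqueue conversion mentioned above is driven by the GRSRNLxx reserve conversion RNL. A hedged fragment (the qnames shown are common examples; the right list is installation-specific and needs careful review):

```jcl
/* GRSRNL00 - convert hardware reserves to global ENQs            */
RNLDEF RNL(CON)  TYPE(GENERIC) QNAME(SYSVTOC)    /* VTOC reserves */
RNLDEF RNL(CON)  TYPE(GENERIC) QNAME(SYSIGGV2)   /* catalog       */
RNLDEF RNL(INCL) TYPE(GENERIC) QNAME(SYSDSN)     /* data set ENQs */
```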

Regards,
John



Re: DASD: to share or not to share

2009-08-05 Thread Thompson, Steve
-Original Message-
From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On
Behalf Of Frank Swarbrick
Sent: Wednesday, August 05, 2009 12:40 PM
To: IBM-MAIN@bama.ua.edu
Subject: DASD: to share or not to share

As part of our migration to z/OS from z/VSE we have started a discussion
on what DASD, if any, should be shared between our production z/OS LPAR
and our development z/OS LPAR.  For what it's worth, on VSE here is what
we have right now...



Theoretically I can see this possibility, but is it really a concern?
What do others do for:
1) System executables
2) System parameter libraries
3) Applications executables
4) Application data

I should note that this is not the case of an uneducated manager trying
to make technical decisions.  But he is from the "distributed systems"
world, and even though he's worked here for 15+ years, much of that as
CIO over both distributed and mainframe systems, he still has a bit of a
"distributed systems" mindset.  In that world they don't share "system"
or "application" data at all (or very little).  But to me that doesn't
mean that it's wrong in the mainframe world.

Thoughts?



First:

Stop thinking VSE and its catalog system. 

Second:

You have a security system (well, I would think you are running one, and
probably RACF). Allow the system to put data sets where the system
wants/needs based on esoterics and SMS. Let the security system control
who can access which data set and if that is read, write, destroy,
update, control, twist, fold, spindle or mutilate.

Put into LNKLST what needs to be in LNKLST. Define to VLF what needs to
be defined there.

Now, system executables will be determined by IPL and the PARMS used for
that IPL (so if you change o/s levels, and the like...).

Application executables and application data should be more or less
under the control of the applications people. Production should be under
control of production control (or whatever you call that function), and
systems stuff should be under the control of the systems programmers.

BECAUSE mainframes are not BUS CENTRIC (those open platforms seem to
have this characteristic), there should not be problems with executables
being shared between systems. After all, once LPA is loaded, you aren't
going to read the base libraries until you IPL again. It is in Central
Memory or it is on a non-shared PAGE data set.

Think of LNKLST elements in roughly the same way thanks to LLA and VLF.
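The LNKLST point can be sketched with a PROGxx fragment (the library name is a hypothetical example); because LLA and VLF cache directories and modules, both images can safely point at the same shared libraries:

```jcl
/* PROG00 - build the link list, adding a shared library */
LNKLST DEFINE   NAME(LNKLST01) COPYFROM(CURRENT)
LNKLST ADD      NAME(LNKLST01) DSNAME(SYS2.PROD.LINKLIB)
LNKLST ACTIVATE NAME(LNKLST01)
```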

Again, stop thinking VSE and tell your CIO to stop thinking 10,000
chickens trying to deliver the mail.

Regards,
Steve Thompson

-- Opinions expressed by this poster do not necessarily reflect those
held by the poster's employer. --

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


Re: DASD: to share or not to share

2009-08-05 Thread Staller, Allan
There is no reason not to share *EVERYTHING* possible at the physical
level. 

All of the integrity/performance issues perceived by the OP's management
occur when there is concurrent ACCESS, not concurrent CONNECTION (i.e.,
the mere fact that the devices are cabled to two or more LPARs or CECs).

I agree w/John's post that the concurrent ACCESS issues are minimal to
non-existent, but what if a SYSPROG fat fingers something and the prod
system won't come up? If concurrent CONNECTION is available, just vary
the DASD online, fix the problem and retry. If not, now there is a *BIG*
mess!

If the OP's management is truly worried about performance issues
w/concurrent ACCESS, duplicate what is needed and run w/test & prod
offline to each other. 



We fully share all of our DASD between all three of our z/OS images.
Shared DASD has never been a problem. The stuff is so fast that even if
both systems are accessing the same load library, we don't see any
measurable degradation. We convert most of the hardware reserves to
global ENQs which help prevent old-style "deadly embraces". We share the
JES2 SPOOL. Data set access integrity is assured by RACF (e.g. test jobs
cannot update anything other than test data sets due to RACF rules -
same would work with Top Secret or ACF2). We do have separate res and
master catalog volumes, but many shops share the system residence
volumes and master catalog, separating things using static system
symbols.
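The static-symbol approach mentioned above might look roughly like this in an IEASYMxx parmlib member (LPAR, system, and symbol names invented for illustration):

```
  SYSDEF  LPARNAME(PRODLPAR)
          SYSNAME(PRD1)
          SYMDEF(&SYSENV='PROD')
  SYSDEF  LPARNAME(DEVLPAR)
          SYSNAME(DEV1)
          SYMDEF(&SYSENV='DEV')
```

Data set references such as SYS1.&SYSENV..PARMLIB then resolve differently on each system, even though both IPL from the same shared sysres.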

I really feel that not sharing is a recipe for disaster. Suppose some
system support library gets out of sync (such as DB2). You do all your
testing with a different PTF level of DB2 than you have in production.
Everything runs well in test, but "messes up" in production. Yuck! OK,
not likely, but a non-zero probability!




Re: DASD: to share or not to share

2009-08-05 Thread Hal Merritt
It makes sense to share DASD. It makes sense to keep it separate. 

About the only real issue I can see is one of synchronization. That is, the 
instant you create a duplicate, the data starts to drift apart and requires 
vast amounts of The Force to force it back into sync. The farther apart you 
separate the storage media, the more Force you need.  

May The Force be with you, and in plentiful supply :-)


-Original Message-
From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On Behalf Of 
Frank Swarbrick
Sent: Wednesday, August 05, 2009 12:40 PM
To: IBM-MAIN@bama.ua.edu
Subject: DASD: to share or not to share

As part of our migration to z/OS from z/VSE we have started a discussion on 
what DASD, if any, should be shared between our production z/OS LPAR and our 
development z/OS LPAR.  For what it's worth, on VSE here is what we have right 
now...

When both PROD and DEV are at the same OS level they both share the same OS 
"system residence" library.  Also shared are things like the DB2 "load 
library", the DL/I "load library", the CICS "load library", etc.  In addition 
we also share the "load library" that contains all of our application "load 
modules".  (I use these quotes because the VSE term is not "load modules / load 
libraries" etc, but I can't think of a good generic name.  Executables, maybe.)

What we do not share is the disk volumes that contain production datasets (VSAM 
files, DL/I databases, sequential files, etc.).  [Actually that's not quite 
true, because we actually have two PROD VSE guests which have some DASD shared 
between the two, though not between them and DEV.]  If a developer needs 
production data then he can either restore from a nightly backup (backups are 
on shared DASD) or can run a process that 1) copies from the production 
unshared volume to a shared volume (job run in PROD), and 2) copies from the 
shared volume to a DEV volume (job run in DEV).

As an applications developer I've never been too happy with the unshared data, 
but I can certainly understand why it is done.  Security of production data 
(perceived or actual risk I am not sure...) and performance (shared DASD has 
performance implications) being the main issues.

Now the discussion has "blown up" in that there are now questions by our CIO as 
to if we really should be sharing even the "executable" libraries.  Not only at 
the applications level, but also at the systems level.  His thought is that if 
two LPARs share the same OS executables that their use in DEV could possibly 
hinder performance of PROD.

Theoretically I can see this possibility, but is it really a concern?  What do 
others do for:
1) System executables
2) System parameter libraries
3) Applications executables
4) Application data

I should note that this is not the case of an uneducated manager trying to make 
technical decisions.  But he is from the "distributed systems" world, and even 
though he's worked here for 15+ years, much of that as CIO over both 
distributed and mainframe systems, he still has a bit of a "distributed 
systems" mindset.  In that world they don't share "system" or "application" 
data at all (or very little).  But to me that doesn't mean that it's wrong in 
the mainframe world.

Thoughts?

Thanks!
Frank

-- 

Frank Swarbrick
Applications Architect - Mainframe Applications Development
FirstBank Data Corporation
Lakewood, CO  USA
P: 303-235-1403
F: 303-235-2075


>>> 

The information contained in this electronic communication and any document 
attached hereto or transmitted herewith is confidential and intended for the 
exclusive use of the individual or entity named above.  If the reader of this 
message is not the intended recipient or the employee or agent responsible for 
delivering it to the intended recipient, you are hereby notified that any 
examination, use, dissemination, distribution or copying of this communication 
or any part thereof is strictly prohibited.  If you have received this 
communication in error, please immediately notify the sender by reply e-mail 
and destroy this communication.  Thank you.


Re: DASD: to share or not to share

2009-08-05 Thread McKown, John
We fully share all of our DASD between all three of our z/OS images. Shared 
DASD has never been a problem. The stuff is so fast that even if both systems 
are accessing the same load library, we don't see any measurable degradation. 
We convert most of the hardware reserves to global ENQs which help prevent 
old-style "deadly embraces". We share the JES2 SPOOL. Data set access integrity 
is assured by RACF (e.g. test jobs cannot update anything other than test data 
sets due to RACF rules - same would work with Top Secret or ACF2). We do have 
separate res and master catalog volumes, but many shops share the system 
residence volumes and master catalog, separating things using static system 
symbols.
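The RESERVE-to-global-ENQ conversion is typically done with conversion RNLs in a GRSRNLxx parmlib member; a minimal sketch (the qnames shown are common examples, not a recommendation for any particular shop):

```
  /* GRSRNLxx - convert selected hardware RESERVEs to global ENQs */
  RNLDEF RNL(CON) TYPE(GENERIC) QNAME(SYSVTOC)
  RNLDEF RNL(CON) TYPE(GENERIC) QNAME(SYSIGGV2)
  RNLDEF RNL(CON) TYPE(GENERIC) QNAME(SPFEDIT)
```

Which qnames are safe to convert depends on your configuration; the usual caution applies about resources that must remain hardware reserves.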

I really feel that not sharing is a recipe for disaster. Suppose some system 
support library gets out of sync (such as DB2). You do all your testing with a 
different PTF level of DB2 than you have in production. Everything runs well in 
test, but "messes up" in production. Yuck! OK, not likely, but a non-zero 
probability!

"Distributed" systems generally don't share disk drives simply because they 
CANNOT do this with integrity. z/OS is designed to do this with integrity 
(assuming you don't do something foolish).

--
John McKown 
Systems Engineer IV
IT

Administrative Services Group

HealthMarkets(r)

9151 Boulevard 26 * N. Richland Hills * TX 76010
(817) 255-3225 phone * (817)-961-6183 cell
john.mck...@healthmarkets.com * www.HealthMarkets.com

Confidentiality Notice: This e-mail message may contain confidential or 
proprietary information. If you are not the intended recipient, please contact 
the sender by reply e-mail and destroy all copies of the original message. 
HealthMarkets(r) is the brand name for products underwritten and issued by the 
insurance subsidiaries of HealthMarkets, Inc. -The Chesapeake Life Insurance 
Company(r), Mid-West National Life Insurance Company of TennesseeSM and The 
MEGA Life and Health Insurance Company.SM

--
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html


DASD: to share or not to share

2009-08-05 Thread Frank Swarbrick
As part of our migration to z/OS from z/VSE we have started a discussion on 
what DASD, if any, should be shared between our production z/OS LPAR and our 
development z/OS LPAR.  For what it's worth, on VSE here is what we have right 
now...

When both PROD and DEV are at the same OS level they both share the same OS 
"system residence" library.  Also shared are things like the DB2 "load 
library", the DL/I "load library", the CICS "load library", etc.  In addition 
we also share the "load library" that contains all of our application "load 
modules".  (I use these quotes because the VSE term is not "load modules / load 
libraries" etc, but I can't think of a good generic name.  Executables, maybe.)

What we do not share is the disk volumes that contain production datasets (VSAM 
files, DL/I databases, sequential files, etc.).  [Actually that's not quite 
true, because we actually have two PROD VSE guests which have some DASD shared 
between the two, though not between them and DEV.]  If a developer needs 
production data then he can either restore from a nightly backup (backups are 
on shared DASD) or can run a process that 1) copies from the production 
unshared volume to a shared volume (job run in PROD), and 2) copies from the 
shared volume to a DEV volume (job run in DEV).
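On z/OS, that two-hop copy could be a pair of DFSMSdss steps like the following (the data set names and volser are purely illustrative); the same idea applies under VSE with its own utilities:

```
  //COPYOUT  EXEC PGM=ADRDSSU
  //SYSPRINT DD SYSOUT=*
  //SYSIN    DD *
    COPY DATASET(INCLUDE(PROD.SALES.MASTER)) -
         OUTDYNAM(SHR001) -
         RENAMEU((PROD.SALES.MASTER,XFER.SALES.MASTER)) -
         CATALOG
  /*
```

This job runs in PROD to land the copy on the shared volume; a mirror-image job run in DEV then copies from the shared volume to a DEV volume.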

As an applications developer I've never been too happy with the unshared data, 
but I can certainly understand why it is done.  Security of production data 
(perceived or actual risk I am not sure...) and performance (shared DASD has 
performance implications) being the main issues.

Now the discussion has "blown up" in that there are now questions by our CIO as 
to if we really should be sharing even the "executable" libraries.  Not only at 
the applications level, but also at the systems level.  His thought is that if 
two LPARs share the same OS executables that their use in DEV could possibly 
hinder performance of PROD.

Theoretically I can see this possibility, but is it really a concern?  What do 
others do for:
1) System executables
2) System parameter libraries
3) Applications executables
4) Application data

I should note that this is not the case of an uneducated manager trying to make 
technical decisions.  But he is from the "distributed systems" world, and even 
though he's worked here for 15+ years, much of that as CIO over both 
distributed and mainframe systems, he still has a bit of a "distributed 
systems" mindset.  In that world they don't share "system" or "application" 
data at all (or very little).  But to me that doesn't mean that it's wrong in 
the mainframe world.

Thoughts?

Thanks!
Frank

-- 

Frank Swarbrick
Applications Architect - Mainframe Applications Development
FirstBank Data Corporation
Lakewood, CO  USA
P: 303-235-1403
F: 303-235-2075


