IIRC, when running in Ring mode, each GRS keeps a copy of all ENQs from all
systems in memory, while in Star mode each GRS keeps only the ENQs for its own
system, so it seems logical that there would be a point at which Star uses less
memory overall.
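Back-of-the-envelope, that crossover can be sketched like this (all numbers are illustrative assumptions, not measured values; the per-ENQ footprint and CF structure size in particular are guesses):

```python
# Rough model of the Ring-vs-Star memory trade-off described above.
# Assumptions (not measured): each ENQ costs ENQ_BYTES of storage, and
# Star adds one shared CF lock structure of CF_STRUCTURE bytes.
ENQ_BYTES = 956            # hypothetical per-ENQ footprint
CF_STRUCTURE = 64 * 2**20  # hypothetical 64 MB lock structure in the CF

def ring_total(n_systems, enqs_per_system):
    # Ring: every system keeps a copy of every system's ENQs
    return n_systems * (n_systems * enqs_per_system) * ENQ_BYTES

def star_total(n_systems, enqs_per_system):
    # Star: each system keeps only its own ENQs; the CF holds the rest
    return n_systems * enqs_per_system * ENQ_BYTES + CF_STRUCTURE

# With a few systems and a plausible ENQ count, Star comes out ahead:
print(ring_total(4, 50_000) > star_total(4, 50_000))  # True
```

The point of the sketch: Ring grows with the *square* of the number of systems (every system mirrors every other), Star only linearly, so past some size Star must win on total memory.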
Try being an all SAP shop, I am going to be g
On Thu, 13 Aug 2009 10:46:44 -0400, Scott Rowe
wrote:
>Arthur, I snipped most of your text, which I whole-heartedly agree with, but
I have one comment about the "cost" of GRS-Star: I don't know if anyone
has truly studied this, but I have doubts as to whether GRS Star has a CPU
cost, at leas
Arthur, I snipped most of your text, which I whole-heartedly agree with, but I
have one comment about the "cost" of GRS-Star: I don't know if anyone has
truly studied this, but I have doubts as to whether GRS Star has a CPU cost, at
least in an environment with relatively low ENQ activity. I h
With others making judicious mention of GRS RNLs, I realized I omitted another
important reference: "z/OS MVS Planning: Global Resource Serialization",
SA22-7600, as well as sections in the various components' (HSM, RACF,
etc.) "sysprog" or "reference" manuals that speak to GRS requirements.
>
>> - Original Message
>> From: Mark Zelden mark.zel...@zurichna.com
>>
>> Typical size is 956 bytes, at least at our shop. My RMF reports show, e.g.,
>> 1,868,451 for the Default Class (956) and only 4,539 for the Big Class
>> (16,316)
>> as outbound traffic from one LPAR to another in the p
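The two figures quoted (956 and 16,316) read like XCF transport-class lengths; a toy sketch of how a message would be steered to the smallest class that fits (the class names and the selection logic here are simplified assumptions, not actual XCF internals):

```python
# Hypothetical transport classes mirroring the sizes quoted above:
# (name, CLASSLEN). Real XCF class selection is more involved.
CLASSES = [("DEFAULT", 956), ("BIG", 16_316)]

def pick_class(msg_len):
    """Return the smallest class whose length accommodates the message."""
    for name, classlen in sorted(CLASSES, key=lambda c: c[1]):
        if msg_len <= classlen:
            return name
    return "OVERSIZE"  # in reality XCF would expand a class or segment

print(pick_class(900))     # DEFAULT
print(pick_class(4_000))   # BIG
```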
Thank you! As the original poster, I got a bit confused with Allan's
recommendation, as I could have sworn others were recommending one Sysplex
owning both PROD and TEST/DEV. Thought maybe I had misinterpreted everything
and had to start over! :-)
--
Frank Swarbrick
Applications Architect -
On Tue, 11 Aug 2009 14:34:17 -1000, Stephen Y Odo
wrote:
>We have 3 LPARs configured as monoplex. One for our Production
>environment, one for our Test/QA environment, and one for our
>Application Development environment.
>
>From this discussion, it sounds like everybody is saying even we would
I am getting ratios of 30:1 or better of small messages to large, and IIRC
it is even higher when using GRS-Ring. So, ESCON CTCs do very well in a
Basic Sysplex, where the majority of messages are GRS ENQ traffic.
If I had a FICON switch in my environment, I would use single FICON channels
in bou
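Using the RMF counts quoted elsewhere in this thread (1,868,451 messages in the 956-byte class versus 4,539 in the 16,316-byte class), that small-to-large ratio is easy to check:

```python
# Small-to-large XCF message ratio from the RMF counts quoted in the thread
small_msgs, large_msgs = 1_868_451, 4_539
ratio = small_msgs / large_msgs
print(f"{ratio:.0f}:1")  # about 412:1, comfortably better than 30:1
```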
On Wed, 12 Aug 2009 08:08:53 -0700, Walter Marguccio
wrote:
>> - Original Message
>> From: Mark Zelden mark.zel...@zurichna.com
>
>> So FICON is better with current technology regardless. But for those
>> using "old" technology, does anyone know what the typical / average size
>> of GR
I am curious why you would need to take an outage to "harden any dynamic
changes to the IODF"?
It was IODF and microcode. I am limited to a single CEC (a z9).
Eventually, dynamic IODF changes will fail when the allocated HSA fills
up. Either that, or an LPAR won't activate because the HSA has
e
> - Original Message
> From: Mark Zelden mark.zel...@zurichna.com
> So FICON is better with current technology regardless. But for those
> using "old" technology, does anyone know what the typical / average size
> of GRS XCF messages are in a ring? Scott, are you lurking? :-)
Typical
Ah yes, that was it. That is why I use ESCON for 1k messages, and use my ICF
for larger messages, since I don't have a FICON CTC set up. The vast majority
of the messages in my Sysplex fit in the 1k class.
>>> Walter Marguccio 8/12/2009 10:34 AM >>>
> - Original Message
> From: Scott
Well then, you don't agree with me!
There are many smaller shops which share DASD across PROD/DEV/TEST, and a
Sysplex can be a very appropriate configuration for them.
>>> "Staller, Allan" 8/12/2009 10:01 AM >>>
I agree. 1 sysplex for all is not a good design point for the O/P. I
believe he co
Allan,
I am curious why you would need to take an outage to "harden any dynamic
changes to the IODF"?
Scott
>>> "Staller, Allan" 8/12/2009 8:54 AM >>>
The purpose of a sysplex is z/OS availability. I commonly go 3 months
between outages (production IPL's) and I am in the middle of creating 2
On Wed, 12 Aug 2009 07:34:16 -0700, Walter Marguccio
wrote:
>> - Original Message
>> From: Scott Rowe scott.r...@joann.com
>
>> BTW, for small packets of data, I believe ESCON links are actually faster
than FICON.
>> I think I have seen a paper on this, but I can't remember where.
>
>Yo
> - Original Message
> From: Scott Rowe scott.r...@joann.com
> BTW, for small packets of data, I believe ESCON links are actually faster
> than FICON.
> I think I have seen a paper on this, but I can't remember where.
You are right:
http://www-03.ibm.com/support/techdocs/atsmastr.nsf/
I agree. 1 sysplex for all is not a good design point for the O/P. I
believe he could still benefit from a 2 sysplex design in the same box.
1 for Prod, and 1 for DEV/TEST/QA. Two LPARs for each.
The OP was asking about using Sysplex to join his three monoplexes into
a single Sysplex. Availabili
Allan,
The OP was asking about using Sysplex to join his three monoplexes into a
single Sysplex. Availability may have been the original purpose of Sysplex,
but it can and has been used for other purposes since its inception. You have
obviously not been following the recent discussion that
Afterword:
And what about the non-IBM components? We run ADABAS/Natural from
Software Ag.
Most major software vendors have had to coexist in or with a SYSPLEX
environment for at least 10 years. I would expect them to be able to
cope!
Check with the vendor for specific information.
HTH,
-
We have 3 LPARs configured as monoplex. One for our Production
environment, one for our Test/QA environment, and one for our
Application Development environment.
>From this discussion, it sounds like everybody is saying even we would
benefit from Sysplex.
The purpose of a sysplex is z/OS avail
We have 3 LPARs configured as monoplex. One for our Production
environment, one for our Test/QA environment, and one for our
Application Development environment.
>From this discussion, it sounds like everybody is saying even we would
benefit from Sysplex.
If we did do Sysplex, would we still be
> -Original Message-
> From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On
> Behalf Of R.S.
> Sent: Monday, August 10, 2009 4:53 AM
> To: IBM-MAIN@bama.ua.edu
> Subject: Re: DASD: to share or not to share
>
> Gibney, Dave pisze:
> > VSAM (and
> -Original Message-
> From: IBM Mainframe Discussion List
> [mailto:ibm-m...@bama.ua.edu] On Behalf Of R.S.
> Sent: Monday, August 10, 2009 11:41 AM
> To: IBM-MAIN@bama.ua.edu
> Subject: Re: DASD: to share or not to share
>
> In fact VSAM datasets are safe a
Mark Zelden pisze:
It is safe and fully supported to share a UCAT without GRS
or MIM. That means RESERVEs, which can hurt performance, but it is possible
to safely and effectively share a low-use UCAT. BTDT.
Somehow I missed commenting on that part in my last response. Yes,
RESERVE will pr
> - Original Message
> From: Mark Zelden mark.zel...@zurichna.com
> Out of curiosity: What type of XCF links (ESCON or FICON)? How many?
we have ESCON XCF links. Every LPAR has 4 PATHIN and 4 PATHOUT
which connect to/from the other LPARs.
> Do you have XCF transport classes set up / tuned
Yes, GRS using XCF is far superior in many ways, I would certainly not advise
using GRS ring (without XCF) for a 4 way GRSplex, but using XCF it would
certainly work. Of course, GRS-Star would be even better, but that is a little
trickier.
BTW, for small packets of data, I believe ESCON links
On Mon, 10 Aug 2009 07:20:35 -0700, Walter Marguccio
wrote:
>>>On Sun, 9 Aug 2009 21:01:42 -0700, Gibney, Dave wrote:
>
>>> I'm led to understand that our four LPARs would push or exceed GRS ring
performance.
>
>>From: Mark Zelden mark.zel...@zurichna.com
>
>> I can't say I have been in any 4 sy
>>On Sun, 9 Aug 2009 21:01:42 -0700, Gibney, Dave wrote:
>> I'm led to understand that our four LPARs would push or exceed GRS ring
>>performance.
>From: Mark Zelden mark.zel...@zurichna.com
> I can't say I have been in any 4 system GRS ring (using XCF) shops
> in the last 10 years, so perhaps
in LPAR-P. 2. Quickly
update
> >>> it from LPAR-other. 3. Quickly try in LPAR-P. I don't believe you
will
> >>> always get the new version.
> >
> > Apparently he did. But why? (more below)
> >
> >>>
> >>>> -Original Messa
On Sun, 9 Aug 2009 21:01:42 -0700, Gibney, Dave wrote:
>
>True, and last time we revamped the IODF I had the required CTC(s)
>added. That was over a year ago when we brought in the z9-BC and is the
>last time I had a chance to take a baby step that direction. Also, on
>this list, I'm led to unde
On Mon, 10 Aug 2009 13:53:29 +0200, R.S. wrote:
>Gibney, Dave pisze:
>> VSAM (and all SMS datasets) can't exist without being cataloged.
>
>Well... To be accurate: they cannot be *used* without being cataloged.
>But they can exist.
>
>
>> Shared
>> Catalogs are not "safe" without integrity protec
Gibney, Dave pisze:
VSAM (and all SMS datasets) can't exist without being cataloged.
Well... To be accurate: they cannot be *used* without being cataloged.
But they can exist.
Shared
Catalogs are not "safe" without integrity protection.
Huh? AFAIK it is safe and fully supported to share
> -Original Message-
> From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On
> Behalf Of Mark Zelden
> Sent: Sunday, August 09, 2009 8:17 PM
> To: IBM-MAIN@bama.ua.edu
> Subject: Re: DASD: to share or not to share
>
> On Sun, 9 Aug 2009 14:39:32 -0
On Sun, 9 Aug 2009 14:39:32 -0700, Gibney, Dave wrote:
>
>VSAM (and all SMS datasets) can't exist without being cataloged. Shared
>Catalogs are not "safe" without integrity protection. Integrity
>protection takes MIM ($) or Sysplex ($) and both require time to
>configure and maintain. As the only
CIS <=> CICS, sorry about the typo
> -Original Message-
> From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On
> Behalf Of Gibney, Dave
> Sent: Sunday, August 09, 2009 2:40 PM
> To: IBM-MAIN@bama.ua.edu
> Subject: Re: DASD: to share or not to sh
> -Original Message-
> From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On
> Behalf Of Frank Swarbrick
> Sent: Sunday, August 09, 2009 12:57 PM
> To: IBM-MAIN@bama.ua.edu
> Subject: Re: DASD: to share or not to share
>
> I believe you!
>
>
On Sun, 9 Aug 2009 13:56:53 -0600, Frank Swarbrick
wrote:
>I believe you!
>
>Can I restate as follows?
>If we do not have a sysplex we should not be sharing PDSE datasets between
LPARs because an update to the PDSE in one LPAR is not guaranteed to be seen
immediately (or ever?) by another LPAR.
Original Message-----
>>>> From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On
>>>> Behalf Of Frank Swarbrick
>>>> Sent: Friday, August 07, 2009 1:34 PM
>>>> To: IBM-MAIN@bama.ua.edu
>>>> Subject: Re: DASD: to share or no
on List [mailto:ibm-m...@bama.ua.edu] On
>>> Behalf Of Frank Swarbrick
>>> Sent: Friday, August 07, 2009 1:34 PM
>>> To: IBM-MAIN@bama.ua.edu
>>> Subject: Re: DASD: to share or not to share
>>>
>>> So for example, if our change control proce
ge may vary and other comments from Phil's disclaimer :)
> -Original Message-
> From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On
> Behalf Of Frank Swarbrick
> Sent: Saturday, August 08, 2009 9:42 PM
> To: IBM-MAIN@bama.ua.edu
> Subject: Re: DASD:
ways get the new version.
>
>> -Original Message-
>> From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On
>> Behalf Of Frank Swarbrick
>> Sent: Friday, August 07, 2009 1:34 PM
>> To: IBM-MAIN@bama.ua.edu
>> Subject: Re: DASD: to share o
ust 08, 2009 3:09 PM
> To: IBM-MAIN@bama.ua.edu
> Subject: Re: DASD: to share or not to share
>
> I was not referring to Parallel Sysplex in this case, only to basic
> sysplex. If they are going to be sharing DASD at all then they need
> GRS, and GRS is much slower without XCF.
&g
At 12:57 -0400 on 08/06/2009, Bob Shannon wrote about Re: DASD: to
share or not to share:
If the PDSE limitation is onerous, go back to PDSs.
This option is not always available since, while it works for Text
Libraries, you lose some PDSE features with Load Libraries (since
there are Load
I was not referring to Parallel Sysplex in this case, only to basic
sysplex. If they are going to be sharing DASD at all then they need
GRS, and GRS is much slower without XCF.
>>> "R.S." 08/08/09 12:55 PM >>>
Scott Rowe pisze:
> I think the real question here is: Why don't you have a SYSPLEX?
>I agree that capping a shared (Production) CF could be dangerous, and I have a
>hard time imagining a configuration where I would consider such a thing.
We tried it quite a while back.
We never had any success.
Even the TESTPLEX suffered
-
Too busy driving to stop for gas!
-
urday, August 08, 2009 11:44 AM
Subject: Re: DASD: to share or not to share
>I do not cap my CF LPARs, except that they are limited to only 1 CP. I
>define a weight such that they are guaranteed a full CP, yet they never use
>a full CP, since they are using dynamic dispatching. This co
Scott Rowe pisze:
I think the real question here is: Why don't you have a SYSPLEX? There is
little to no cost involved, and many benefits.
I dare to disagree. Strongly disagree.
At least you have to pay for memory for CF, ICF engine, more CP cycles,
GRS delays, significantly more sysprog eff
Message -
From: "Scott Rowe"
Newsgroups: bit.listserv.ibm-main
To:
Sent: Saturday, August 08, 2009 11:44 AM
Subject: Re: DASD: to share or not to share
I do not cap my CF LPARs, except that they are limited to only 1 CP. I
define a weight such that they are guaranteed a full CP
I do not cap my CF LPARs, except that they are limited to only 1 CP. I define
a weight such that they are guaranteed a full CP, yet they never use a full CP,
since they are using dynamic dispatching. This configuration is not going to
hang my production LPAR, and I believe it is silly to sugge
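The weight arithmetic behind "guaranteed a full CP" can be sketched as follows (PR/SM's actual dispatching is more involved; the weights and CP counts here are made-up illustrations):

```python
# Simplified PR/SM share calculation: an LPAR's guaranteed share of the
# shared physical CPs is weight / sum(all weights) * physical CPs.
def guaranteed_cps(weight, all_weights, physical_cps):
    return weight / sum(all_weights) * physical_cps

# A CF LPAR limited to 1 logical CP, weighted so its share >= 1 whole CP:
cf_share = guaranteed_cps(200, [200, 500, 300], 5)
print(cf_share)  # 1.0 -> the CF's single logical CP is never starved
```

Since the CF's dynamic dispatching keeps actual use below a full CP, the guarantee is slack that protects production without capping anything.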
ibney, Dave"
Newsgroups: bit.listserv.ibm-main
To:
Sent: Saturday, August 08, 2009 3:27 AM
Subject: Re: DASD: to share or not to share
-Original Message-
From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On
Behalf Of Joel Wolpert
Sent: Friday, August 07, 2009 12:17
elieve you will
always get the new version.
> -Original Message-
> From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On
> Behalf Of Frank Swarbrick
> Sent: Friday, August 07, 2009 1:34 PM
> To: IBM-MAIN@bama.ua.edu
> Subject: Re: DASD: to share or not to
> -Original Message-
> From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On
> Behalf Of Joel Wolpert
> Sent: Friday, August 07, 2009 12:17 PM
> To: IBM-MAIN@bama.ua.edu
> Subject: Re: DASD: to share or not to share
>
> You can configure it so that
ce (can't remember
> it right off) to refresh in the other LPARs.
>
>> -Original Message-
>> From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On
>> Behalf Of Bruce Hewson
>> Sent: Thursday, August 06, 2009 10:51 PM
>> To: IBM-M
Thanks for more ammunition. I will add it to the pile! :-)
--
Frank Swarbrick
Applications Architect - Mainframe Applications Development
FirstBank Data Corporation
Lakewood, CO USA
P: 303-235-1403
F: 303-235-2075
On 8/7/2009 at 2:24 AM, in message
<000401ca1738$89b1aa90$9d14ff...@hawkins196
a CF
with the production lpars. If someone changes the config of the CF you might
end up with a performance problem.
- Original Message -
From: "Gibney, Dave"
Newsgroups: bit.listserv.ibm-main
To:
Sent: Friday, August 07, 2009 2:12 PM
Subject: Re: DASD: to share or no
> -Original Message-
> From: IBM Mainframe Discussion List
> [mailto:ibm-m...@bama.ua.edu] On Behalf Of Gibney, Dave
> Sent: Friday, August 07, 2009 1:12 PM
> To: IBM-MAIN@bama.ua.edu
> Subject: Re: DASD: to share or not to share
>
> > -Original Message--
> -Original Message-
> From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On
> Behalf Of Scott Rowe
> Sent: Friday, August 07, 2009 8:51 AM
> To: IBM-MAIN@bama.ua.edu
> Subject: Re: DASD: to share or not to share
>
> Heck, I just use a CF partition
On Thu, Aug 6, 2009 at 10:24 AM, Staller, Allan wrote:
> A basic sysplex will provide z/OS availability, an improvement over
> single image. This can be done for the cost of a few CTC connections
> between LPARS. If you have the spare channels to define as CTC's, this
> is fairly easy to do.
>
> I
n Technology Services
Washington State University
> -Original Message-
> From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On
> Behalf Of Chase, John
> Sent: Thursday, August 06, 2009 12:32 PM
> To: IBM-MAIN@bama.ua.edu
> Subject: Re: DASD: to share or not
> -Original Message-
> From: IBM Mainframe Discussion List
> [mailto:ibm-m...@bama.ua.edu] On Behalf Of Thompson, Steve
> Sent: Friday, August 07, 2009 8:42 AM
> To: IBM-MAIN@bama.ua.edu
> Subject: Re: DASD: to share or not to share
>
> -Original Message---
-Original Message-
From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On
Behalf Of McKown, John
Sent: Thursday, August 06, 2009 1:40 PM
To: IBM-MAIN@bama.ua.edu
Subject: Re: DASD: to share or not to share
>
It is VSAM Record Level Sharing (not Locking).
Within CICS
t right off) to refresh in the other LPARs.
> -Original Message-
> From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On
> Behalf Of Bruce Hewson
> Sent: Thursday, August 06, 2009 10:51 PM
> To: IBM-MAIN@bama.ua.edu
> Subject: Re: DASD: to share or not
Frank,
You have lots of good feedback as far as sharing with integrity and security
go, but I did want to focus on the comment below about performance impact. There is
always the likelihood that activity from one LPAR may impact the performance
of another, and it can happen for the SYSRES volumes just as easi
Hi Dave,
with systems that are only IPL'd every 3-5 months, YES, there will be updates
to those volumes. Changes still happen, and many fixes do not require an IPL
to implement, only careful change management.
On Thu, 6 Aug 2009 14:18:15 -0700, Gibney, Dave wrote:
>It may not be "supported",
-Original Message-
> From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On
> Behalf Of Chase, John
> Sent: Thursday, August 06, 2009 12:32 PM
> To: IBM-MAIN@bama.ua.edu
> Subject: Re: DASD: to share or not to share
>
> > -Original Message-
>
e-
> From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On
> Behalf Of Bruce Hewson
> Sent: Thursday, August 06, 2009 4:28 AM
> To: IBM-MAIN@bama.ua.edu
> Subject: Re: DASD: to share or not to share
>
> Hi Frank,
>
> Unless your two systems are in
> -Original Message-
> From: IBM Mainframe Discussion List On Behalf Of McKown, John
>
> [ snip ]
>
> A basic SYSPLEX (what we are running) does not require a Coupling
Facility (CF). With a CF, you have a
> Parallel Sysplex, which has some better facilities. If you only have
an single CEC
-Original Message-
From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On
Behalf Of Frank Swarbrick
Sent: Thursday, August 06, 2009 1:01 PM
To: IBM-MAIN@bama.ua.edu
Subject: Re: DASD: to share or not to share
>>> On 8/6/2009 at 11:47 AM, i
> -Original Message-
> From: IBM Mainframe Discussion List
> [mailto:ibm-m...@bama.ua.edu] On Behalf Of Frank Swarbrick
> Sent: Thursday, August 06, 2009 1:01 PM
> To: IBM-MAIN@bama.ua.edu
> Subject: Re: DASD: to share or not to share
> > If you set this up correc
share or not to share
I think the real question here is: Why don't you have a SYSPLEX? There is
little to no cost involved, and many benefits.
--
For IBM-MAIN subscribe / signoff / archive access instructions,
send emai
> -Original Message-
> From: IBM Mainframe Discussion List
> [mailto:ibm-m...@bama.ua.edu] On Behalf Of Ted MacNEIL
> Sent: Thursday, August 06, 2009 1:12 PM
> To: IBM-MAIN@bama.ua.edu
> Subject: Re: DASD: to share or not to share
>
>
> There is the
>You would have to ask someone higher in the food chain than me. :-) I'm
>wondering if they think it's more difficult to set up or more expensive than
>it really is? No idea, really. I'm not a systems guy.
>I will pose the question.
There is the cost of timers and coupling facilities -- t
Sent: Thursday, August 06, 2009 12:44 PM
> To: IBM-MAIN@bama.ua.edu
> Subject: Re: DASD: to share or not to share
>
> I think the real question here is: Why don't you have a SYSPLEX? There is
> little to no cost involved, and many benefits.
>
>>>>
You would have to ask someone higher in the food chain than me. :-) I'm
wondering if they think it's more difficult to set up or more expensive than it
really is? No idea, really. I'm not a systems guy.
I will pose the question.
Frank
On 8/6/2009 at 11:43 AM, in message <4a7ade09.8489.00
-Original Message-
From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On
Behalf Of Scott Rowe
Sent: Thursday, August 06, 2009 12:44 PM
To: IBM-MAIN@bama.ua.edu
Subject: Re: DASD: to share or not to share
I think the real question here is: Why don't you have a SY
I think the real question here is: Why don't you have a SYSPLEX? There is
little to no cost involved, and many benefits.
>>> Frank Swarbrick 8/6/2009 12:15 PM >>>
>>> On 8/6/2009 at 5:28 AM, in message
, Bruce Hewson
wrote:
> Hi Frank,
>
> Unless your two systems are in the same Sysplex (Base
A basic sysplex will provide z/OS availability, an improvement over
single image. This can be done for the cost of a few CTC connections
between LPARS. If you have the spare channels to define as CTC's, this
is fairly easy to do.
IIRC, (I know someone will correct me if I am wrong) a basic sysplex
>I thought PDSEs were the future?
I prefer not to use PDSEs, although they finally work fairly well. The obvious
problem is that PDSE sharing requires a Sysplex. I suspect, but have no factual
data, that shops without a Sysplex are in the minority. There are pros and cons
to having test and pro
>>> On 8/6/2009 at 5:28 AM, in message
, Bruce Hewson
wrote:
> Hi Frank,
>
> Unless your two systems are in the same Sysplex (Base or Parallel) I would
> highly recommend that you do not share your sysres disk...they contain PDSE
> datasets that you shouldn't share outside a sysplex.
>
> Simil
Hi Frank,
Unless your two systems are in the same Sysplex (Base or Parallel) I would
highly recommend that you do not share your sysres disk...they contain PDSE
datasets that you shouldn't share outside a sysplex.
Similarly, any disk you do share should not contain PDSE datasets.
Regards
Bruce
>>> On 8/5/2009 at 12:39 PM, in message
<45d79eacefba9b428e3d400e924d36b902557...@iwdubcormsg007.sci.local>, "Thompson,
Steve" wrote:
> Again, stop thinking VSE and tell your CIO to stop thinking 10,000
> chickens trying to deliver the mail.
All of the replies so far have been very helpful. Than
@bama.ua.edu
Subject: Re: DASD: to share or not to share
>You just need to be sure that GRS is setup to convert hardware reserves
to enqueues
Yes!
>and have everyone use DISP=SHR unless they really need to update the
dataset.
Disagree!
Too generic.
This one has nothing to do with shared DASD!
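A hedged sketch of what that RESERVE-to-ENQ conversion looks like in a GRSRNLxx parmlib member (the qnames shown are common examples only; a real RNL must be built from the serialization requirements of the products actually installed):

```
/* Hypothetical GRSRNLxx fragment: convert hardware RESERVEs   */
/* into global ENQs for resources serialized across systems.   */
RNLDEF RNL(CON) TYPE(GENERIC) QNAME(SYSIGGV2)  /* catalog access */
RNLDEF RNL(CON) TYPE(GENERIC) QNAME(SYSVTOC)   /* VTOC RESERVEs  */
```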
>>and have everyone use DISP=SHR unless they really need to update the dataset.
>Disagree!
>Too generic.
>
>This one has nothing to do with shared DASD!
>It is also an issue within a single system.
>
>What if you want a consistent copy with no updates while you are reading?
I was thinking more in
>You just need to be sure that GRS is setup to convert hardware reserves to
>enqueues
Yes!
>and have everyone use DISP=SHR unless they really need to update the dataset.
Disagree!
Too generic.
This one has nothing to do with shared DASD!
It is also an issue within a single system.
What if you
Frank,
The only resource we separate is the sandbox where system programmers do
dangerous things like bringing up new levels of operating systems, etc. z/OS
has always had robust sharing.
I/O from the development LPAR would have negligible impact on the production LPAR
even reading the same e
-Original Message-
From: IBM Mainframe Discussion List [mailto:ibm-m...@bama.ua.edu] On
Behalf Of Frank Swarbrick
Sent: Wednesday, August 05, 2009 12:40 PM
To: IBM-MAIN@bama.ua.edu
Subject: DASD: to share or not to share
As part of our migration to z/OS from z/VSE we have started a
There is no reason not to share *EVERYTHING* possible at the physical
level.
All of the integrity/performance issues perceived by the OP's management
occur when concurrent ACCESS occurs, not concurrent CONNECTION, the fact
that the devices are cabled to two (or more) LPARS, CECS,
I agree w/J
@bama.ua.edu
Subject: DASD: to share or not to share
As part of our migration to z/OS from z/VSE we have started a discussion on
what DASD, if any, should be shared between our production z/OS LPAR and our
development z/OS LPAR. For what it's worth, on VSE here is what we have right
now...
When
We fully share all of our DASD between all three of our z/OS images. Shared
DASD has never been a problem. The stuff is so fast that even if both systems
are accessing the same load library, we don't see any measurable degradation.
We convert most of the hardware reserves to global ENQs which he
As part of our migration to z/OS from z/VSE we have started a discussion on
what DASD, if any, should be shared between our production z/OS LPAR and our
development z/OS LPAR. For what it's worth, on VSE here is what we have right
now...
When both PROD and DEV are at the same OS level they bot