One might ask why a single system was configured as a parallel sysplex. The
answer is that for historical business reasons, no one wanted this system
merged into another sysplex. We run DB2 data sharing all around the
enterprise, and no one wanted to configure DB2 differently just for this one
system. So we left it as a 'standalone parallel sysplex'.

I mentioned this experience at the next SHARE conference. Given that the
advertised threshold for GRS star is around three or four systems, everyone
was surprised that a single system benefited so much. The only changes made
were to define a GRS lock structure in the existing CFs and to add GRS=STAR
to IEASYSxx.
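
For anyone who wants to try the same thing, here is a rough sketch of the
CFRM policy update via IXCMIAPU. The policy name, structure size, and CF
names below are placeholders, not our actual values; the structure name
ISGLOCK itself is fixed by GRS. Note that DEFINE POLICY ... REPLACE(YES)
rewrites the whole policy, so a real job would repeat all existing structure
definitions, not just ISGLOCK:

   //DEFPOL   EXEC PGM=IXCMIAPU
   //STEPLIB  DD DSN=SYS1.MIGLIB,DISP=SHR
   //SYSPRINT DD SYSOUT=*
   //SYSIN    DD *
     DATA TYPE(CFRM) REPORT(YES)
     DEFINE POLICY NAME(POLICY1) REPLACE(YES)
       STRUCTURE NAME(ISGLOCK)       /* lock structure GRS star needs */
                 SIZE(8192)          /* illustrative size only        */
                 PREFLIST(CF01,CF02)
   /*

Activate the policy with SETXCF START,POLICY,TYPE=CFRM,POLNAME=POLICY1, then
either IPL with GRS=STAR in IEASYSxx or switch on the fly with SETGRS
MODE=STAR. Keep in mind that the switch is one way; going back to ring
requires a sysplex-wide IPL.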

Before the advent of sysplex here in the mid-90s, every system was a separate
LPAR, and DASD was genned as nonshared to simplify volume management. It was
quite a cultural change to interconnect systems and deal with cross-LPAR
contention for the first time. This system was subsequently merged into a
bronze-plex for software pricing reasons. It still owns a separate nonshared
RACF database.

J.O.Skip Robinson
Southern California Edison Company
Electric Dragon Team Paddler 
SHARE MVS Program Co-Manager
323-715-0595 Mobile
626-543-6132 Office <=== NEW
robin...@sce.com


-----Original Message-----
From: IBM Mainframe Discussion List [mailto:IBM-MAIN@LISTSERV.UA.EDU] On Behalf 
Of Mark Zelden
Sent: Thursday, April 06, 2017 9:36 AM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: (External):Re: RLS for catalogs

On Thu, 6 Apr 2017 15:56:29 +0000, Jesse 1 Robinson <jesse1.robin...@sce.com> 
wrote:

>This goes back several years, when CF and memory resources were more
>expensive and less flexible than today. Think standalone CF, where a memory
>upgrade was a huge PITA. We had one single-system parallel sysplex that I
>had refrained from turning over to GRS star. It was only one system, after
>all, so what could be the harm in running GRS ring? Star would be a waste of
>resources, I thought.
>
>One particular housekeeping job ran daily in every sysplex. It did a massive
>LISTCAT. We noticed that this job ran two or three times longer (!) on this
>one system as compared with other sysplexes that made use of GRS star. I
>could not find any plausible difference other than GRS configuration. So on
>a hunch I bit the bullet and implemented GRS star. Sure enough, the elapsed
>time for the big LISTCAT job immediately dropped to a value in line with the
>other sysplexes. That's on a system that did not actually share resources
>with any other. Not a scientific observation, but I'm as convinced as any
>UFO witness ever was.

Wow!  I believe you, but I'd love to hear the technical explanation for that
from an IBM GRS and/or sysplex guru, considering it was a single system. I
can only guess that it was just a different code path taken that was more
efficient, and it didn't have anything to do with delays in propagating
ENQs. Or maybe there were reserves involved that went away with GRS STAR,
but I wouldn't think so with just a LISTCAT.

Going back to the "old days", if you really had a single system you didn't
gen the DASD as shared, and that kept RESERVEs from being issued.

I'm feeling old now...  :-)

Regards,

Mark
--
Mark Zelden - Zelden Consulting Services - z/OS, OS/390 and MVS
ITIL v3 Foundation Certified
mailto:m...@mzelden.com
Mark's MVS Utilities: http://www.mzelden.com/mvsutil.html
Systems Programming expert at http://search390.techtarget.com/ateExperts/


----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
