Sharing Data between SMSPLEXES.
We have 2 SMSPLEXES within a bronze SYSPLEX. GRS is common to all LPARs, but almost everything else is not.

Now we have a new DB2 application coming whose application datasets will need to be written to from all LPARs in the plex.

I've shared volumes before by having a storage group that is written to from one SMSPLEX while online and in read mode from the other plex, with no problems. But for both sides to be able to allocate, write and extend datasets, I'm wondering if there are any gotchas coming at me.

We can create the same storage group on both SMSes and define all the same volumes within. Of course we need procedures in place to ensure that someone doesn't add or remove volumes on one side and not the other. We also need to make sure that HSM can't migrate datasets from these volumes, because the other HSM won't be able to recall them.

My only real concern is whether it will all hang together if/when one of these datasets goes into extents on another candidate volume. Will this cause any problems for any LPAR that's not in the SMSPLEX that performed the extend?

Brian

--
For IBM-MAIN subscribe / signoff / archive access instructions, send email to lists...@bama.ua.edu with the message: GET IBM-MAIN INFO Search the archives at http://bama.ua.edu/archives/ibm-main.html
Re: Sharing Data between SMSPLEXES.
Brian,

Sharing resources outside of the sysplex boundary using GRS-Star is pretty much a no-no. There are some clever VM tricks that can be used to mount DASD volumes to other non-plex systems as read-only; however, even this is going to fall foul when PDSEs come into the mix. You might be able to overcome some restrictions using CA-MIM, although I doubt even that can address the PDSE issues (I'm not sure, as I have not used MIM for 10 years or more).

Rob Scott
Developer, Rocket Software
275 Grove Street * Newton, MA 02466-2272 * USA
Tel: +1.617.614.2305 Email: rsc...@rs.com Web: www.rocketsoftware.com
Re: Sharing Data between SMSPLEXES.
Given the constraints you have imposed, I would suggest a third ("platinum") SMSplex for this data: i.e., shared user catalog, volumes, SMS CDSs, HSMplex, etc. Comments interspersed:

>We can create the same storage group on both SMSes and define all the same
>volumes within. Of course we need to have procedures in place to ensure that
>someone doesn't go adding or removing volumes to one side and not the other.

This is handled by SMS. Since there are shared SMS CDSs, any change from one image to the other will be automatically communicated via the CDSs. See my comment above regarding a "platinum" SMSplex.

>We also need to make sure that HSM can't migrate datasets from these volumes
>because the other HSM won't be able to recall them.

There are three possibilities here. One is to route all HSM functions to a particular image via the storage group attribute MIGRATE SYSTEM/SYSTEM GROUP. I have never used this, so check the manuals for details.

Another possibility is to set up an SMSplex/HSMplex for *ONLY THIS* data. I *STRONGLY* suggest that if this is done, the boundaries of the SMSplex/HSMplex and the data are congruent, i.e., set up an SMSplex/HSMplex to manage only this set of volumes. All other SMSplexes/HSMplexes should ignore this pool.

The third possibility is to create an appropriate set of MC/SC/SG attributes that prevent migration. Of course this means you will have to do storage management for this pool the "old-fashioned" way, by hand.
>My only real concern is whether it will all hang together if/when one of
>these datasets goes into extents on another candidate volume. Will this
>cause any problems for any LPAR that's not in the SMSPLEX that performed
>the extend?

That comes with XCF/GRS/SMS; this should cause no issues within the SMSplex.

HTH,
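As a rough sketch of that third possibility: a "no migration" management class assigned to this pool could disable HSM's automatic and command migration. The class name below is hypothetical, and the field layout is only a paraphrase of the ISMF Management Class Define panels, so verify the attributes against your own release:

```
MANAGEMENT CLASS DEFINE - Migration Attributes    (sketch; names hypothetical)

  Management Class Name  . . . . : MCNOMIG

  PRIMARY DAYS NON-USAGE . . . . :             (blank - no auto migration)
  LEVEL 1 DAYS NON-USAGE . . . . :
  COMMAND OR AUTO MIGRATE  . . . : NONE        (blocks both HMIGRATE and
                                                automatic migration)
```

With something like this in place on both SMSplexes, neither HSM should ever move a dataset off these volumes, at the cost of managing the pool's space by hand as noted above.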
Re: Sharing Data between SMSPLEXES.
On Wed, 23 Sep 2009 23:21:45 +0800, Brian Fraser wrote:

>We have 2 SMSPLEXES within a bronze SYSPLEX.
>[...]
>Will this cause any problems to any LPAR that's not in the SMS that
>performed the extend?

Although I don't recommend it, it can be done. The main thing is to keep all the ACS routines and storage groups consistent. Since GRS is common, you don't have to worry about data set integrity or issues with sharing PDSE / HFS across SMSplexes. (Someone already mentioned this as a problem; perhaps they missed the part about this being a bronze sysplex.)

Please read this post and thread, where I talk about the caveats - at least the ones I know of:

"Re: how-to Sysplex? - the LOGR and exploiters part"
http://bama.ua.edu/cgi-bin/wa?A2=ind0908&L=ibm-main&D=1&O=D&P=178760

Regards,

Mark
--
Mark Zelden, Sr.
Software and Systems Architect - z/OS Team Lead
Zurich North America / Farmers Insurance Group - ZFUS G-ITO
mailto:mark.zel...@zurichna.com
z/OS Systems Programming expert at http://expertanswercenter.techtarget.com/
Mark's MVS Utilities: http://home.flash.net/~mzelden/mvsutil.html
Re: Sharing Data between SMSPLEXES.
>Since GRS is common, you don't have to worry about data set integrity or
>issues with sharing PDSE / HFS across SMS.

I thought that PDSE didn't "work" inter-sysplex no matter what you did with GRS, but only intra-sysplex. Wrong again?

Jack Kelly
202-502-2390 (Office)
Re: Sharing Data between SMSPLEXES.
On Wed, 23 Sep 2009 15:10:55 -0400, John Kelly wrote:

>I thought that PDSE didn't "work" inter-SYSPLEX no matter what you did
>with GRS but only intra-SYSPLEX. Wrong again?

Read the OP's post again. The very first thing he wrote was:

>"We have 2 SMSPLEXES within a bronze SYSPLEX."

Mark
Re: Sharing Data between SMSPLEXES.
Brian,

>I've shared volumes before by having a storage group which is written to
>from one SMSPLEX being online and in read mode from another plex with no
>problems, but for both sides to be able to allocate, write & extend
>datasets then I'm wondering if there's any gotchas coming at me.

SMS keeps space utilization information for each managed volume in the communications data set (COMMDS). By allowing multiple SMSplexes to allocate, extend and delete datasets on the same volume(s), you will likely get these statistics out of whack. The result would be something like never allocating a dataset to a given volume because SMS thinks the volume is full (exceeds the high threshold), or trying to allocate a dataset on a volume that does not have enough space.

If I remember correctly, the only way to clear the statistics was to remove the volume from the SMS configuration, activate the configuration, add the volume back to the configuration, and then activate the configuration again.

Regards,
John
Re: Sharing Data between SMSPLEXES.
I would think that from a dataset perspective you would be okay, as the catalog would track extents/allocation - well, assuming the new volume is online to the other LPAR trying to read it and the catalogs are all shared. If they are extended-format datasets, that may be an issue. Also, assuming you don't migrate with HSM like you stated, it should work as clear as mud. I'm hoping you don't have different catalogs sharing the same volume, as that would be ugly/interesting.

Thanks,

Ms. Terri E. Shaffer
terri.e.shaf...@jpmchase.com
Engineer, J.P.Morgan Chase & Co., GTI DCT ECS Core Services
zSoftware Group / Emerging Technologies
Office: # 614-213-3467 Cell: # 412-519-2592
Re: Sharing Data between SMSPLEXES.
Well, I'm not sure of the value of multiple SMSplexes in the same sysplex, but that is not your question. Since it is a DB2 application you mention, where the physical datasets are controlled by DBAs, you don't want HSM migrating anything, and with DB2 you typically use specialized utilities to do backups, maybe the best option here is to set up a group of volumes that are non-SMS managed. Then you can avoid all the other pitfalls that others have mentioned.

Dave Jousma
Assistant Vice President, Mainframe Services
david.jou...@53.com
1830 East Paris, Grand Rapids, MI 49546 MD RSCB1G
p 616.653.8429 f 616.653.8497
Re: Sharing Data between SMSPLEXES.
On Wed, 23 Sep 2009 15:25:45 -0400, John Kington wrote:

>SMS keeps space utilization information for each managed volume in the
>communications dataset. By allowing multiple SMSplexes to allocate, extend
>and delete datasets on the same volume(s), you will likely get these stats
>out of whack. [...] If I remember correctly, the only way to clear the
>statistics was to remove the volume from the SMS configuration, activate
>the configuration, add the volume back to the configuration and then
>activate the configuration.

I'm pretty sure I tested that also... and it still didn't help. The only way I found was to allocate or delete a dataset on the volume from the SMSplex.

Mark
Re: Sharing Data between SMSPLEXES.
Mark,

>I'm pretty sure I tested that also... and it still didn't help. The only
>way I found was to allocate or delete a dataset on the volume from the
>SMSplex.

Was the problem in the SMS statistics or in the VTOC on the volume? You do have to turn on the DIRF bit and force an allocation to the volume to get CVAF(?) to tidy up the VTOC space information on a volume with an indexed VTOC.

Regards,
John
Re: Sharing Data between SMSPLEXES.
On Wed, 23 Sep 2009 16:51:12 -0400, John Kington wrote:

>Was the problem in the SMS statistics or in the VTOC on the volume? You do
>have to turn on the DIRF bit and force an allocation to the volume to get
>CVAF(?) to tidy up the VTOC space information on a volume with an indexed
>VTOC.

I've never had to do that for a volume only shared between MVS systems. But to answer your question: the space according to the VTOC is correct. It's SMS's knowledge of what is free on the volume that is not correct, since SMS only updates the COMMDS when a data set is allocated or deleted on the volume from that SMSplex. See the post I referenced before.

Mark
--
Mark Zelden, Sr.
Software and Systems Architect - z/OS Team Lead
Zurich North America / Farmers Insurance Group - ZFUS G-ITO
mailto:mark.zel...@zurichna.com
z/OS Systems Programming expert at http://expertanswercenter.techtarget.com/
Mark's MVS Utilities: http://home.flash.net/~mzelden/mvsutil.html
Re: Sharing Data between SMSPLEXES.
Mark,

I must admit I read the following in the OP:

"I've shared volumes before by having a storage group which is written to from one SMSPLEX being online and in read mode from another plex with no problems"

...and thought he meant SYSPLEX for "another plex" rather than SMSPLEX - hence my rather off-target reply.

Rob Scott
Re: Sharing Data between SMSPLEXES.
Has anyone asked why the original poster has 2 SMSplexes? His cleanest solution might be to merge the 2.

Sounds like he has at least some shared DASD. In a previous incarnation, we had a similar situation and I did NOT want 2 SMSplexes controlling volumes. So I 'merged' the code: basically I took the 2 pieces of code and surrounded them with checks for the SysID. If (SYSID1 or SYSID3), then fall into this code segment; if not, fall into the 2nd piece.

Then I moved all the construct definitions into the same SCDS. The only issue I had was that there were a couple of constructs with the same name on both sides. I resolved that by merging the constructs and using whatever criteria gave the most service, i.e., if the MgmtClas on one side said migrate after 7 days and on the other side said 15 days, I gave the 'merged' MgmtClas 15 days. StorGrps with the same name, I just renamed one. The StorGrps were still owned and managed by the original LPAR.

I put the shared COMMDS and CDSs on shared DASD. We use GRS.

There were a few steps ahead of SMS, like merging the catalog environments - my choice; I think I could have gotten SMS to work anyway. Since then I've re-written the code so that we truly have 1 SMSplex.
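For readers who haven't done such a merge, the SysID check described above might be sketched roughly like this in a storage group ACS routine. The system names, storage group names and FILTLIST are hypothetical, and a real routine would carry each side's full selection logic rather than a single SET:

```
PROC STORGRP                              /* storage group ACS routine     */
  FILTLIST PLEXA INCLUDE('SYS1','SYS3')   /* systems from the old plex A   */

  SELECT
    WHEN (&SYSNAME = &PLEXA)              /* plex A systems: old A logic   */
      SET &STORGRP = 'SGPOOLA'
    OTHERWISE                             /* everyone else: old B logic    */
      SET &STORGRP = 'SGPOOLB'
  END
END
```

The same pattern applies to the data class, management class and storage class routines, so one SCDS can serve both sides until the constructs themselves are merged.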
Re: Sharing Data between SMSPLEXES.
On Thu, 24 Sep 2009 07:30:38 -0500, Darth Keller wrote:

>Has anyone asked why the original poster has 2 SMSPlexes? His cleanest
>solution might be to merge the 2.

Good point.

>Sounds like he has at least some shared DASD [...] So I 'merged' the code -
>basically I took the 2 pieces of code & surrounded them with checks for
>the SysID.

Exactly what I have seen done in several shops I've been at, including here in the late 90s (I was consulting at the time) when systems were being price-plexed together.

>Then I moved all the construct definitions into the same SCDS [...]
>Since then I've re-written the code so that we truly have 1 SMSPlex.

Again, very similar to what was done here over the years (a lot of it done after my consulting gig ended and before my return years later). Unfortunately we still have this kludge with our SAP SMSplex, which is one system from a production sysplex and one from a development sysplex, leaving 3 SMSplexes within 2 sysplexes. With one system in each sysplex, there isn't a good answer for merging that SMSplex.
The SAP development LPAR is very stable, since pretty much all it runs is DB2 (back end on z/OS, app on AIX). So if I had to merge it, I would put the SAP development LPAR in the production sysplex; then that SMSplex could be merged and SMSplex would equal sysplex. Sometimes political barriers are greater than technical ones.

Mark
Re: Sharing Data between SMSPLEXES.
Thanks everyone for your input.

>>Has anyone asked why the original poster has 2 SMSPlexes? His cleanest
>>solution might be to merge the 2.
>
>Good point.

The bronze sysplex was fairly recently merged from separate sysplexes to take advantage of licensing savings. Merging the 2 completely into a "platinum" plex is our eventual goal, but currently there are duplicate volsers, duplicate user catalog names and aliases, user datasets, and SMS constructs that are similar but different. There are also 2 RACF databases and 2 HSMs that will need to be merged. All stuff that needs resolving, but it will not be completed any time soon. Our new DB2 application, on the other hand, is just a few weeks away. So we can only do what's possible in a short time frame, and that's create a new shared storage group and a new shared UCAT.

Thanks again.

Brian