See below and thank you very much for all the information and suggestions.
Jerry

________________________________
From: IBM Mainframe Discussion List <IBM-MAIN@LISTSERV.UA.EDU> on behalf of Peter Bishop <pbisho...@dxc.com>
Sent: Tuesday, June 2, 2020 8:06 PM
To: IBM-MAIN@LISTSERV.UA.EDU
Subject: Re: XCF/GRS question

Hi Jerry,

questions, and a suggestion. These are more at the hardware layer than the GRS one, which I saw Paul Feller addressing quite well. It may be that you cannot change the LPAR setup, but if you can, here are some ideas.

1. Must the CFs share the GPs with the z/OS systems, or are there ICF engines they can use?

- Yes, we don't have CF engines, only 2 GPs, maxed out at 16 MSUs.

For small workloads it may be acceptable to have z/OS and CF workloads in the same processor pool, but CF workloads are different from z/OS ones, and where possible I have seen much benefit from having an ICF pool for the CFs and a CP pool for z/OS (and, if you have VM or Linux, an IFL pool, which may be out of scope here).

- I am trying to get a CF engine, but I can't get it approved in the budget yet. z/VM, Linux and the IFLs are running on a different CEC.

2. Must the non-production and production workloads share the same Sysplex?

- I started with only one LPAR, running production and test workloads. Our maintenance window is only once a month for 4 hours, so we needed a way to "fit" the upgrades into the maintenance windows.

I'd be inclined to separate them were I in charge. Two monoplexes may be less hassle than a "forced sharing" Sysplex. But you may have reasons for joining non-production into the production Sysplex.

- This is a small system, and it would be a major change to split into separate Sysplexes or even two monoplexes.
Due to the way the batch and development are set up, it would be a very big change.

3. Do you have DYNDISP=THIN set on the CF LPARs?

- Yes, DYNDISP=THIN on the ICF LPARs.

For non-production CFs this is best, but in your case with a single plex it may be inapplicable. Consider how you might benefit from it. It is a much-improved algorithm over its predecessors, in my experience. Considering you are sharing the pool, it may be a "quick fix" if you can live with it. Try a test.

- We have two ICF LPARs, so a single Sysplex was really the only way to accomplish all the goals with minimal impact to the business and developers.

4. If you split the plexes and have separate CFs, it will be better if you weight the CF LPARs as you do the z/OS ones, e.g. if z/OS has an 80:20 CP pool weight, then the CF LPARs should have the same weights for the ICF pool.

- It would be a big deal, with both political and business impact, to split the environments into more Sysplexes with more ICFs. I don't think it would be a good idea with this small setup.

- Thanks, I will take a look at the CPU weights, about increasing the ICF LPAR weights.

kind regards,
Peter

On Tue, 2 Jun 2020 17:39:19 +0000, Edgington, Jerry <jerry.edging...@westernsouthernlife.com> wrote:
>
>We are running a single Sysplex with two LPARs (Prod and Test) and 2 ICFs,
>all running on the GPs. We are experiencing slowdowns due to PROC-GRS on
>Test and PROC-XCFAS on Prod. Weights are 20/20/20/80 for ICF1/ICF2/Test/Prod.
>We have set up XCF structures and FCTCs for GRS Star.
>
>Higher weight:
>PROC-GRS      3.4 users
>PROC-GRS      2.4 users
>ENQ -ACF2ACB  100.0 % delay LOGONIDS
>PROC-GRS      99.0 % delay
>PROC-GRS      13.0 % delay
>
>Lower weight:
>PROC-XCFAS    14.1 users
>PROC-XCFAS    13.1 users
>PROC-XCFAS    99.0 % delay
>PROC-XCFAS    45.0 % delay
>PROC-XCFAS    16.0 % delay
>PROC-XCFAS    11.0 % delay
>PROC-XCFAS    33.0 % delay
>PROC-XCFAS    77.0 % delay
>PROC-XCFAS    45.0 % delay
>
>GRSCNFxx:
>GRSDEF MATCHSYS(*)
>       SYNCHRES(YES)
>       GRSQ(CONTENTION)
>       ENQMAXA(250000)
>       ENQMAXU(16384)
>       AUTHQLVL(2)
>       RESMIL(5)
>       TOLINT(180)
>
>IEASYSxx:
>GRS=STAR,     JOIN GRS STAR
>GRSCNF=00,    GRS INITIALIZATION MEMBER
>GRSRNL=00,    GRS RESOURCE LIST
>
>D GRS:
>RESPONSE=TEST
> ISG343I 13.38.49 GRS STATUS 604
>  SYSTEM    STATE          SYSTEM    STATE
>  MVSZ      CONNECTED      TEST      CONNECTED
> GRS STAR MODE INFORMATION
>  LOCK STRUCTURE (ISGLOCK) CONTAINS 1048576 LOCKS.
>  THE CONTENTION NOTIFYING SYSTEM IS TEST
>  SYNCHRES:       YES
>  ENQMAXU:      16384
>  ENQMAXA:     250000
>  GRSQ:    CONTENTION
>  AUTHQLVL:         1
>  MONITOR:         NO
>
>Any advice or help on what I can do about these delays would be great.
>
>Thanks,
>Jerry
>
>----------------------------------------------------------------------
>For IBM-MAIN subscribe / signoff / archive access instructions,
>send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
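A footnote for anyone following along on point 3: DYNDISP is set from the coupling facility's own CFCC console, not from z/OS. The sketch below is from memory and should be verified against the PR/SM Planning Guide for your machine; exact operands and defaults vary by CFCC level (on newer machines thin-interrupt dispatching is the default for shared engines):

```
DYNDISP THIN    entered at the CFCC console of the CF image,
                selects thin-interrupt dispatching for that CF
```

The z/OS `D CF` command output should also echo the CF's current dispatching mode, which is an easy way to confirm what is actually in effect.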
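And a footnote on point 4: an LPAR's weight only means something relative to the sum of the weights in its shared pool. A quick back-of-the-envelope sketch of what the 20/20/20/80 split in the thread works out to when every LPAR is busy (`pool_shares` is an illustrative helper, not an IBM interface, and this ignores capping and logical-CP effects):

```python
def pool_shares(weights):
    """Return each LPAR's entitled fraction of the shared processor pool.

    Under contention, PR/SM entitles each LPAR to weight / sum(weights)
    of the pool; idle LPARs' unused share is redistributed.
    """
    total = sum(weights.values())
    return {lpar: w / total for lpar, w in weights.items()}

# Weights from the thread: ICF1/ICF2/Test/Prod = 20/20/20/80, one GP pool.
shares = pool_shares({"ICF1": 20, "ICF2": 20, "TEST": 20, "PROD": 80})
for lpar, share in sorted(shares.items()):
    print(f"{lpar}: {share:.1%}")
```

Prod comes out to roughly 57% of the two GPs and each ICF to roughly 14%, i.e. well under a third of one engine per CF under contention, which is worth keeping in mind when CF service times start driving PROC-XCFAS and PROC-GRS delays.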