Re: OS390 Guests using GRS and Sharing DASD with virtual RESERVE/RELEASE under z/VM
On Saturday, 05/22/2010 at 02:36 EDT, TaeMin Baek wrote:

> When one guest OS/390 accesses a certain dataset,
> 1) z/VM CP controls the DASD and sets it reserved
> 2) if another OS/390 guest tries to access a different dataset on the same DASD,
> it must wait, and CP continues to check until the DASD is released.
>
> Is the above logic of sharing DASD right?
> If it is right, I think GRS is useless now and it can give more workload to CP.

Not quite, no. Confirming what Kris said, OS/390 issues a RESERVE command to the virtual DASD and CP remembers it. Since you have a fullpack minidisk, if you have also defined the DASD volume as SHARED on an RDEVICE statement in SYSTEM CONFIG, then CP will also issue a real RESERVE so that the volume can be shared with other LPARs. If you aren't sharing the DASD volume with other LPARs, you can leave it as unshared.

Further, it is *OS/390* that keeps checking, not CP. That constant checking is what drives up the CPU consumption, as CP has to keep rejecting the I/O.

> Today we tried to change our DASD sharing method from using virtual
> RESERVE/RELEASE to not using it.
> What I mean is that I don't use MDISK with the 'V' option; I only define MDISK
> with the 'MW' option.

The question is really "Why is OS/390 issuing hardware reserves?" Because something (e.g. JES) is issuing RESERVEs or ISGENQ RESERVEVOLUME=YES (I don't know if ISGENQ exists in OS/390).

> I want to let GRS in OS/390 control dataset-level sharing as in native
> LPAR mode.
> I guess it might reduce the CP workload related to reserving/releasing DASD.

GRS gives you the ability to control whether or not a RESERVE will cause a real reserve to be issued to the hardware. You can convert a RESERVE to a global ENQ by placing the dataset and queue name in the RESERVE conversion RNL. You may have to turn on the ENQ/DEQ/RESERVE monitor to see what's happening and get all the correct names.
When you do this, it will no longer be necessary for OS/390 to poll the device to see if the hardware reserve is still there. When one system issues DEQ for the volume, the other systems will be notified via GRS and one of them will win the race for it. Note that you should turn off the hardware reserve ONLY if the dataset or volume is NOT being shared outside of the GRS complex.

> But the problem is JES failed to start because it cannot access the volume
> containing the checkpoint dataset.
> The message said the volume containing the checkpoint dataset is not shared.

Without the "V", the RESERVE macro issued by JES fails.

Caveat: I am not a z/OS or GRS expert.

Alan Altmark
z/VM Development
IBM Endicott
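[Editorially, the RESERVE conversion RNL that Alan describes is defined in the GRSRNLxx parmlib member. The fragment below is a minimal sketch only: the QNAME and RNAME values are placeholders, not recommendations, and the real names should be taken from the ENQ/DEQ/RESERVE monitor output as Alan suggests.]

```text
/* GRSRNLxx (sketch): convert hardware RESERVEs to global ENQs.   */
/* QNAME/RNAME values are placeholders for illustration only.     */
RNLDEF RNL(CON) TYPE(GENERIC)  QNAME(SYSVTOC)
RNLDEF RNL(CON) TYPE(SPECIFIC) QNAME(SYSDSN) RNAME(MY.SHARED.DSN)
```

Per Alan's caveat above, convert a RESERVE this way only when the volume is not shared with systems outside the GRS complex.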
Re: OS390 Guests using GRS and Sharing DASD with virtual RESERVE/RELEASE under z/VM
When you write "CP continue to check until DASD is released", it sounds as if you think CP is executing code the whole time a disk is reserved. That would indeed be high overhead. But no, CP doesn't need to be "active" all the time; the only thing it must do is inspect a flag when I/Os happen on disks with virtual R/R. I wouldn't expect the CP overhead for virtual R/R to be noticeable. And when a disk is virtually reserved, the whole guest doesn't need to wait, just that one I/O; the guest can do other things.

2010/5/22 TaeMin Baek:
> [TaeMin Baek's original question, quoted in full; see the original post at the end of this thread]

--
Kris Buelens, IBM Belgium, VM customer support
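[Kris's point, that a virtual reserve costs CP only a flag test on each I/O rather than any polling loop, can be shown with a toy model. This is a conceptual sketch in Python, not actual CP code; the class, method names, and return values are invented for illustration.]

```python
# Toy model of virtual RESERVE/RELEASE: CP keeps one ownership field
# per minidisk and tests it only when an I/O actually arrives.
# There is no background activity while the reserve is held.

class VirtualDasd:
    def __init__(self):
        self.reserved_by = None   # guest currently holding the virtual reserve

    def reserve(self, guest):
        if self.reserved_by in (None, guest):
            self.reserved_by = guest
            return "OK"
        return "BUSY"             # holder active; requester must re-drive

    def release(self, guest):
        if self.reserved_by == guest:
            self.reserved_by = None

    def start_io(self, guest):
        # The only cost while a reserve is held: this one comparison.
        if self.reserved_by not in (None, guest):
            return "BUSY"         # reflected to the guest, which retries
        return "OK"               # I/O proceeds normally
```

Note that the retrying in the "BUSY" case is done by the guest (OS/390 re-driving its I/O), which matches Alan's observation that it is the guest's polling, not CP, that burns the CPU.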
Re: OS390 Guests using GRS and Sharing DASD with virtual RESERVE/RELEASE under z/VM
I'm not going to claim to understand any of the nuances, but maybe this redpiece will help:

Multiple z/OS Virtual Machines on z/VM
http://www.redbooks.ibm.com/abstracts/redp4507.html

On 05/22/2010 01:36 PM, TaeMin Baek wrote:
> [TaeMin Baek's original question, quoted in full; see the original post at the end of this thread]

--
Rich Smrcina
Phone: 414-491-6001
http://www.linkedin.com/in/richsmrcina

Catch the WAVV!  http://www.wavv.org  WAVV 2011
OS390 Guests using GRS and Sharing DASD with virtual RESERVE/RELEASE under z/VM
We are using GRS ring mode with three OS/390 V2.10 guests in a Baseplex environment under z/VM V6.1 on a z10, to share DASD and datasets with serialization between the OS/390 guests; z/VM provides shared full-pack minidisks using virtual RESERVE/RELEASE.

While the system is running, Supervisor CPU usage is high, up to 30%~40% of total CPU usage in Performance Toolkit, e.g.:
Total CPU %  : 67.4%
Superv. CPU  : 29.8%
Emulat. CPU  : 37.6%

When one guest OS/390 accesses a certain dataset,
1) z/VM CP controls the DASD and sets it reserved
2) if another OS/390 guest tries to access a different dataset on the same DASD, it must wait, and CP continues to check until the DASD is released.

Is the above logic of sharing DASD right? If it is right, I think GRS is useless now and it can give more workload to CP.

Today we tried to change our DASD sharing method from using virtual RESERVE/RELEASE to not using it. What I mean is that I don't use MDISK with the 'V' option; I only define MDISK with the 'MW' option. I want to let GRS in OS/390 control dataset-level sharing as in native LPAR mode. I guess it might reduce the CP workload related to reserving/releasing DASD.

But the problem is that JES failed to start because it cannot access the volume containing the checkpoint dataset. The message said the volume containing the checkpoint dataset is not shared.

Is there anybody who uses GRS in a Sysplex or Baseplex? Do you use GRS and share DASD with virtual RESERVE/RELEASE? How is your CPU utilization? Is Supervisor CPU% high like ours? Is there any tuning point to reduce the Supervisor CPU usage?

Regards

Tae Min Baek
Advisory IT Architect, z/Linux Team
IBM Sales & Distribution, STG Sales
Mmaa Bldg, 467-12 Dogok-Dong, Seoul, 135700, Korea
Phone: +822-3781-8224  Mobile: +82-010-4995-8224
e-mail: tmb...@kr.ibm.com
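[Editorially, for readers unfamiliar with the two definitions being discussed in this thread, here is a hedged sketch of what the 'V'-mode minidisk and the SHARED real device look like. The user ID, device numbers, and volser below are made up for illustration.]

```text
* CP user directory entry (fragment): a full-pack minidisk whose
* access mode carries the trailing V (MWV rather than MW), telling
* CP to emulate RESERVE/RELEASE on the virtual device.
USER MVS1 ...
  MDISK 0200 3390 0000 END MVSRES MWV

/* SYSTEM CONFIG (fragment): mark the real volume SHARED only if  */
/* it is also accessed from other LPARs, so CP issues real        */
/* reserves in addition to the virtual ones:                      */
RDEVICE 0200 TYPE DASD SHARED YES
```

Dropping the V (plain MW, as tried above) removes the virtual reserve emulation, which is why JES's RESERVE against the checkpoint volume then fails, as Alan explains earlier in the thread.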