Excuse the non-plain-text email - for some reason I can't change that on this 
one.

Whenever someone says "batch" and "too long", I/O delay is usually the first 
thing that comes to mind for me.
50-70% channel busy is too high.  What kind of I/O service times are you getting?
Can you convert to FICON?
Or at least go back to 16 ESCON?


Marcy

________________________________
From: The IBM z/VM Operating System [mailto:ib...@listserv.uark.edu] On Behalf 
Of TaeMin Baek
Sent: Monday, May 17, 2010 7:49 PM
To: IBMVM@LISTSERV.UARK.EDU
Subject: Re: [IBMVM] Batch job takes too long in OS/390 Guest under z/VM v6.1

The only change is that we added 1 GB of XSTOR; everything else is the same.
We also reduced the number of ESCON channels from 16 to 8, and now channel busy 
is around 50%~70%.

We are sharing DASD among 5 OS/390 virtual machines by using virtual 
reserve/release (the MWV option on MDISK) with SET SHARED ON in SYSTEM 
CONFIG.
I read the CP Planning and Administration guide, and it says the following:

*When to Use Concurrent Virtual and Real Reserve/Release
In general, you should use concurrent virtual and real reserve/release when you 
need to share DASD among many virtual machines and other systems. Do not use 
this method when you need to share DASD only among virtual machines, because 
the CP overhead is much greater than if you use virtual reserve/release.

If I change the option to SET SHARED OFF, will it reduce CP overhead between 
the VM guest operating systems?
And is it safe to change this option with the SET command while the system is 
running?
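For what it's worth, here is a rough sketch of the two configurations; the 
device addresses and volser below are invented for illustration, not taken 
from any real system:

```
* Full-pack minidisk with the V mode suffix enabling reserve/release
* (virtual address, extents, and volser are hypothetical):
MDISK 0200 3390 000 END VOL001 MWV

* Concurrent virtual AND real reserve/release additionally requires the
* real device to be marked shared, e.g. in SYSTEM CONFIG:
RDEVICE 1234 TYPE DASD SHARED YES

* With SHARED OFF (the default) the same MWV minidisk falls back to
* virtual-only reserve/release, which CP handles with less overhead
* when the disk is shared only among guests of one z/VM image.
```

If I have this right, the MDISK statement itself stays the same either way; 
only the shared setting on the real device determines which method CP uses.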

Regards
________________________________
Tae Min Baek     Mmaa Bldg, 467-12 Dogok-Dong
Advisory IT Architect    Seoul, 135700
z/Linux Team     Korea
IBM Sales & Distribution, STG Sales
Phone:  +822-3781-8224
Mobile:         +82-010-4995-8224
e-mail:         tmb...@kr.ibm.com





From:        Marcy Cortes <marcy.d.cor...@wellsfargo.com>
To:        IBMVM@LISTSERV.UARK.EDU
Date:        2010-05-18 09:13 AM
Subject:        Re: Batch job takes too long in OS/390 Guest under z/VM v6.1
Sent by:        The IBM z/VM Operating System <IBMVM@LISTSERV.UARK.EDU>
________________________________



What changes were made to your memory config?  xstor, cstor, mdc size ?
Any changes to the I/O configuration?


Marcy

"This message may contain confidential and/or privileged information. If you 
are not the addressee or authorized to receive this for the addressee, you must 
not use, copy, disclose, or take any action based on this message or any 
information herein. If you have received this message in error, please advise 
the sender immediately by reply e-mail and delete this message. Thank you for 
your cooperation."



________________________________

From: The IBM z/VM Operating System [mailto:ib...@listserv.uark.edu] On Behalf 
Of TaeMin Baek
Sent: Monday, May 17, 2010 3:22 PM
To: IBMVM@LISTSERV.UARK.EDU
Subject: [IBMVM] Batch job takes too long in OS/390 Guest under z/VM v6.1


We migrated three OS/390 systems from a 2064-2C1 running in native LPAR mode to 
a 2098-T01 running under z/VM, as described below.

HW: z10 BC 2098-T01 (1 GCP)
OS: z/VM V6.1
Guest OS: Four OS/390 V2.10 guests (SE02, SE05, SE06, CF) running under z/VM

1) The three OS/390 guests share many DASD volumes, which are defined as 
full-pack minidisks in the CP directory.
2) The three OS/390 guests share the DASD using GRS; MIM was used before the 
migration.
3) The three OS/390 guests are coupled in a sysplex (VCFLINK under z/VM) 
through the CF guest.
4) The sysplex (XCF) is used only for GRS and tape sharing; MIA was used 
before the migration.
5) GRS was not used before the migration.

* Problem Symptoms
 => During the night, the batch job workload on SE02 takes too long, and the 
SE02 guest shows high CPU usage (supervisor CPU% is around 32% and emulated 
CPU% is around 35%) plus 12% I/O wait, according to Performance Toolkit.
 => Monitoring inside OS/390 (SE02) shows high CPU usage by the MASTER, GRS, 
and CATALOG tasks; therefore batch jobs have a hard time getting CPU.
 => The CF guest shows only 1% CPU utilization.
 => The batch jobs use DASD and tape devices.
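Given the high GRS task CPU inside SE02, one thing that may be worth checking 
(my suggestion, not something from the original post) is ENQ/reserve 
contention and the GRS complex state from the OS/390 console:

```
D GRS,C        display current ENQ and reserve contention
D GRS,ALL      display GRS complex configuration and status
```

Heavy contention on shared-DASD resources there would point at the GRS/sysplex 
side rather than at z/VM itself.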

Could this be caused by the VM side or by OS/390?
What else do we need to check to find the root cause and fix this problem?
