It looks like we were a bit fooled by the customer. After a lot of
discussion it turns out the system is not only CPU constrained. And, at
least in part, they already had problems on the old machine (a little
detail they forgot to mention). The performance used to be just
acceptable, but with the current overhead it has dropped too far.

As for your questions, the LPAR is the only one in use on the hardware.
The VM guest is the only machine running any actual load; the others are
DIRMAINT, a CMS service machine, things like that. The VSE in the guest
VM is the top user. We found that out when the guest was given two CPUs:
most of the time the LPAR ran at just over 100%, sometimes spiking to
120%. Once the VSE got its second CPU and TD (the Turbo Dispatcher) the
load went up to close to 200%. So we can conclude the VSE is the top
user here.

Tomorrow we move the guest into an LPAR on a z9. Perhaps that would give
us a faster CP.

We have to look into the caching, as you and Kris suggested. Perhaps
that could give us some additional speed. We will also look into other
configuration changes that would normally not gain that much, but
together they could be just enough to reach acceptable performance.

Thanks, Berry.

Dieltiens Geert wrote:
> Well, if the problem is caused by a CPU-intensive CICS-program, then I
> would expect that you would have seen that problem on your old system as
> well (when we put a really CPU-intensive CICS-program into production,
> we get calls from frustrated users immediately). But you'll need a CICS
> monitor to look into the resource usage of your CICS-transactions...  
>
> A couple of other things to consider regarding CPU-resources:
> - is the VM/ESA guest the only (heavy) guest in the z/VM system or is it
> competing with others?
> - was CP SET SHARE set appropriately for this guest in the z/VM system?
> - did you provide QUICKDSP for the VM/ESA guest in the z/VM system? 
> - does the LPAR get all the resources you think it's getting (check the
> Change Logical Partition Controls task or the activity display on the
> HMC)?
>
> Bye,
> Geert.
>
>
> -----Original Message-----
> From: The IBM z/VM Operating System [mailto:ib...@listserv.uark.edu] On
> Behalf Of Berry van Sleeuwen
> Sent: Wednesday 29 April 2009 13:51
> To: IBMVM@LISTSERV.UARK.EDU
> Subject: Re: Third level VSE
>
> Geert,
>
>   
>> Do you mean: attached to the VSE-guest or to the VM/ESA-guest?
>> If attached to the VSE-guest: is there still a real performance benefit
>> in attaching dasd to a 3rd level VSE-guest?
>>     
>
> Attached to the guest VM. I don't know if there would be any advantage in
> attaching to third level, other than an as-is move to a new location.
>
>   
>> Anyway, MDC has the potential of giving your VSE-throughput a real boost
>> (it did in our case), so in order for the VSE-guest to benefit from MDC
>> in the VM/ESA system, I would: 
>> - in the first level z/VM: attach the dasd to the 2nd level VM/ESA
>> guest.
>> - in the 2nd level VM/ESA: attach the dasd to SYSTEM, and define
>> fullpack MDISKs for the 3rd level VSE guest. 
>>     
>
> But would that also boost non-IO load? I expect the problem is CPU load
> in some stupid program. In that case any MDC wouldn't help with that. The
> only advantage would be an improvement of the batch processing.
>
>   
>> Also, if enough storage is available in VSE, add more buffers to your
>> CICS LSR-pools and/or database system.
>>     
>
> Storage enough. We have 512M spare in the host VM that isn't used. And
> the VSE runs NOPDS so we can increase it just by adding virtual storage
> in the VSE guest directory. If VM runs out of storage (or starts paging
> at any serious level) we can add virtual storage to the guest VM.
>
> Regards, Berry.
