On Fri, 14 Jul 2006 08:45:55 -0400, Bill Bitner <[EMAIL PROTECTED]>
wrote:
Brian, when you say 'relative performance metrics' are you asking about
the CPU cost associated with the various paths? If so, I don't think
I have any current data on that, partly because it depends on a lot
of different things. My suggestion there is to find a virtual machine
that is fairly
On 7/13/06, Bill Bitner <[EMAIL PROTECTED]> wrote:
While MDC is a write-through cache, it does not automatically
insert writes into the cache if the disk location being
I have done some experiments with Linux that seem to work differently.
I flushed the cache and then had Linux sequentially wr
For dasd cache, all writes are hits unless there is a non-volatile
storage full (error) condition.
If you have dedicated dasd addresses for Linux guests, the dasd
cache report (ESADSD5) shows the percentage of I/O that is read vs.
write, and the percentage of each that are hits.
The seek reports show by use
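Blending the report's read/write split with per-class hit rates gives one overall hit figure. A minimal sketch of that arithmetic (the percentages below are made-up numbers standing in for what a report like ESADSD5 would show):

```python
def overall_hit_pct(read_pct, read_hit_pct, write_hit_pct):
    """Blend per-class hit rates from a read/write split (all in %).

    read_pct is the share of I/Os that are reads; the remainder are
    writes. The inputs here are illustrative, not measured values.
    """
    write_pct = 100.0 - read_pct
    return (read_pct * read_hit_pct + write_pct * write_hit_pct) / 100.0

# e.g. 70% reads hitting 85% of the time, writes hitting 99%:
print(overall_hit_pct(70.0, 85.0, 99.0))  # 89.2
```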
At the moment I only need to deal with a couple dozen Linux images and
each has different access patterns based on the application that runs in
it. I doubt Linux (SUSE) will have any of the complicated channel
programs that would confuse MDC.
If I use the assumption of no complicated channel
Brian, you raise an interesting question about tuning MDC on a
per-user basis vs. a system basis vs. a combination. A single
solve-all algorithm would be cool, but it's beyond my imagination
to make one perfect. I believe there was a lot of research
originally about
On Wed, 12 Jul 2006 14:16:37 -0400, Bill Bitner <[EMAIL PROTECTED]>
wrote:
>I tend not to just look at the hit ratio, but at the
>I/Os avoided. 50% hit ratio on 1000 I/Os per second is
>better than 90% hit ratio on 100 I/Os per second.
I do the same, but I'd like your thoughts on factoring in the
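Bill's point about I/Os avoided rather than raw hit ratio can be put into a few lines of arithmetic (my own illustration; the rates are the ones quoted above):

```python
# I/Os avoided per second = total I/O rate * hit ratio.
# A lower hit ratio on a busy device can eliminate more real I/O
# than a higher hit ratio on a quiet one.

def ios_avoided(io_rate, hit_ratio):
    """Real device I/Os per second eliminated by the cache."""
    return io_rate * hit_ratio

busy = ios_avoided(1000, 0.50)   # 50% hits on 1000 I/O/s -> 500 avoided
quiet = ios_avoided(100, 0.90)   # 90% hits on 100 I/O/s -> 90 avoided
print(busy, quiet)               # the busy device wins by a wide margin
```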
I think others have covered most of the factors on which it
depends. One area that seemed to beg for a little more discussion
was MDC in central storage vs. expanded storage. Two things to bear in
mind are:
1. CP uses access register mode to address guest memory for MDC in
many cases
2. the archi
I've never done any tests comparing STORAGE vs. XSTORAGE for MDC. I suspect
that "it depends". I would guess that a deciding factor is whether or not
you can sometimes use main storage for other things than MDC. If your
application requirements are such that in the day time, you're doing a lot
Now that I've had time to think about it a little more:
an 80% MDC cache hit ratio isn't that good. How much memory would it take
to get the hit ratio up to 95% (and can you afford the memory)?
My concern now is that your application mix may not be a good caching
candidate, i.e. if you have large am
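The arithmetic behind "80% isn't that good" is about misses, not hits: going from 80% to 95% cuts the real-device I/O rate by roughly a factor of four. A quick sketch (the 1000 I/O/s workload is an assumed figure for illustration):

```python
def real_ios(io_rate, hit_ratio):
    """I/Os per second that still reach the device (MDC misses)."""
    return io_rate * (1.0 - hit_ratio)

rate = 1000                       # assumed total I/O per second
at_80 = real_ios(rate, 0.80)      # ~200 real I/Os/s
at_95 = real_ios(rate, 0.95)      # ~50 real I/Os/s
print(round(at_80 / at_95, 2))    # 4.0 -- ~4x cut in device I/O
```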
> -Original Message-
> From: The IBM z/VM Operating System [mailto:[EMAIL PROTECTED]
On
> Behalf Of Jim Bohnsack
> Sent: Wednesday, July 12, 2006 10:48 AM
> To: IBMVM@LISTSERV.UARK.EDU
> Subject: Re: MDC, Storage, xstore, and cache on External dasd.
>
>
On Wed, 12 Jul 2006 10:16:09 -0400, Edward M. Martin wrote:
>Hello, and thanks to everyone,
>
> I do appreciate everyone's input and opinions. We have the
>memory.
>
>8 gig total, 5 gig defined for storage, 2 gig to xstore, and the rest
>used
>by the HMC.
>
> I do think that the
Aultman Health Foundation
330-588-4723
[EMAIL PROTECTED]
ext. 40441
> -Original Message-
> From: The IBM z/VM Operating System [mailto:[EMAIL PROTECTED]
On
> Behalf Of Tom Duerbusch
> Sent: Tuesday, July 11, 2006 3:06 PM
> To: IBMVM@LISTSERV.UARK.EDU
> Subject: Re: MDC, S
I think that the answer is probably a little of "it depends" (thank you
Bill Bitner). The "it depends" I think is related to your I/O rate and I/O
requirements. An example that, even though it is very old, I think would
still apply: back in the late '80s I had some 3090s as well as some
On the, "Get close to the application" thread, remember that any I/O
request satisfied from MDC is likely to be satisfied (on modern boxes) in
less than a microsecond. Even cached, anything that involves a
conversation with a channel is going to experience a service time that is
several orders of magnitude longer.
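Jim's point can be made concrete as a weighted-average service time over hits and misses (the latency figures here are assumptions for illustration, not measurements from the thread):

```python
def avg_service_time_us(hit_ratio, hit_us=1.0, miss_us=500.0):
    """Mean I/O service time given an MDC hit ratio.

    hit_us and miss_us are assumed latencies in microseconds:
    ~1 us for an MDC hit, ~500 us for a channel I/O even when the
    control unit's own cache satisfies it.
    """
    return hit_ratio * hit_us + (1.0 - hit_ratio) * miss_us

print(avg_service_time_us(0.80))  # ~100.8 us average
print(avg_service_time_us(0.95))  # ~25.95 us average
```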
Your concern is justified.
The question is real memory vs. CPU.
You shouldn't have much of an I/O bottleneck with your caching
controller, assuming you have FICON or better channel speeds.
But if your read I/O is satisfied from MDC, you won't go through the I/O
boundary, which is a saving in CPU