On Fri, Oct 31, 2008 at 2:30 PM, Mary Anne Matyaz
<[EMAIL PROTECTED]> wrote:

> As for MDC, I've been curious about that lately. About a week ago, I turned
> off mdc for a highly active volume, and it seemed to me that resp increased
> rapidly and markedly.

It's good that you try to measure rather than rely on hearsay or gut
feeling. The nasty part is that I/O measurement is complicated (which
is why I have been postponing such research until I have more time to
spend on it).

When you set MDC OFF for a virtual machine or virtual device, that
only prevents further inserts; anything already in MDC remains there
and is still used on a read (unless you also purge it). I recall from
some experiments in the past that system-wide MDC OFF was the only
thing that really made a difference when the workload did not benefit
from MDC.
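
As an example (from memory, so check the HELP for CP SET MDCACHE on
your level of z/VM before trusting the exact operands), the difference
would be something like:

   CP SET MDCACHE MDISK 0191 OFF     stop new inserts for the minidisk
   CP SET MDCACHE MDISK 0191 FLUSH   also discard what is cached now
   CP SET MDCACHE SYSTEM OFF         the big switch for the system
   CP QUERY MDCACHE                  see what you ended up with

The first command alone leaves the already-cached blocks in play,
which is why a quick before/after comparison can mislead you.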

Further, the actual I/O performed by CP may be different with MDC
enabled. When Linux wants a series of blocks and one in the middle is
found in MDC, CP reads the rest with two I/O operations, one on each
side of the cached block. The consequence is that the average I/O size
goes down, so the I/O response time (per I/O operation) gets lower.
Whether that improves the throughput of the application by the same
factor is not obvious.
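
To make that concrete with made-up numbers: Linux asks for 8
consecutive blocks and block 4 happens to be in MDC. CP does one I/O
for blocks 1-3 and another for blocks 5-8 instead of a single 8-block
I/O. The monitor now shows two I/Os averaging 3.5 blocks instead of
one of 8 blocks, so the per-I/O response time drops. But the device
still transferred 7 blocks over two I/Os, so the elapsed time for
Linux to get all 8 blocks does not shrink in proportion.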

Another gotcha is that MDC takes the low-hanging fruit. When you
enable MDC, the I/O that remains for the DASD subsystem is less likely
to be cached there than the full I/O stream would have been.
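
Again with made-up numbers: out of 100 reads, suppose 40 would hit
any reasonably sized cache. Without MDC, the subsystem sees all 100
and reports a 40% hit ratio. When MDC absorbs, say, 35 of those 40,
the subsystem sees 65 reads with only 5 hits and reports under 8%,
even though total service to the application improved. The subsystem
counters look worse precisely because MDC took the easy hits first.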
:anecdote type=sad.
Long ago, we replaced some 3390s with a RAID-based subsystem. The
vendor had promised a certain cache hit ratio internally in the
subsystem. When that was not met for VM devices, we were told to
disable MDC because it "interfered" with the DASD subsystem. Clearly,
when VM MDC had already taken the references that were easy to cache,
the remaining work for the DASD subsystem was harder. But a hit in the
subsystem cache is still slower than a hit in MDC, so the application
throughput got worse...
:eanecdote.

Rob
-- 
Rob van der Heij
Velocity Software
http://velocitysoftware.com/
