On Thu, Feb 21, 2008 at 2:27 AM, Alan Altmark <[EMAIL PROTECTED]> wrote:

>  Hard evidence of the performance of emulated FBA can be found at:
>   http://www.vm.ibm.com/perf/reports/zvm/html/530scsi.html
>
>  Don't just look at the tables; read the text that goes with them.

Since I have not yet been able to measure it myself, I hesitated
before responding to this thread. So you're excused if you want to
skip my long post with no real numbers... ;-)

As with some of the measurements others have posted, the significance
for Linux on z/VM installations isn't as great as we might hope. The
most significant finding in these measurements is a potential
reduction in CPU usage. That's good news. The "native SCSI" support
was meant as a realistic option for holding the z/VM data while Linux
keeps its data on FCP directly. That is also the motivation for the
measurements in the performance report. The context that Sir Santa
introduced is to have Linux run on emulated devices. A simple iozone
test does not address the complexity of the performance issues
involved.
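
To make that concrete: a plain iozone run measures a single stream at
full speed, while throughput mode (-t) runs several concurrent
streams, which is at least a step closer to what a shared z/VM system
sees. A rough sketch, assuming iozone is installed and /mnt/test is
scratch space (paths and sizes are just placeholders):

  # single-stream "top speed" run (write and read tests)
  iozone -i 0 -i 1 -s 512m -r 64k -f /mnt/test/tmpfile

  # throughput mode: 8 concurrent streams against separate files
  iozone -i 0 -i 1 -s 512m -r 64k -t 8 -F /mnt/test/f{1..8}

Adding -I (O_DIRECT) and -e (include fsync in the timing) helps keep
the Linux page cache from dominating the numbers. Even then it is
still a micro-benchmark, not your workload.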

Most of the measurements folks seem to do are "top speed" measurements
(like maximum single-user throughput). With Linux on z/VM we want to
tune the system to deliver efficient *multi-user* throughput. We
rarely care to tune the system so that a single user can grab all
resources for itself, because in most real-life cases there will be
enough other users who also need resources and need to make progress.
Normally that means lower single-user throughput, because you
introduce latency. If you care for analogies: it's like measuring
mileage versus top speed.

Many of the algorithms involved do not behave in a linear fashion.
Driving a Linux virtual machine at saturation provides little
information about what it will do at the lower utilization levels you
may see in real life. We've seen both extremes: code that gets more
efficient under high load as well as code that performs worse under
pressure.
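
One way to see that non-linearity, rather than just the saturation
point, is to sweep the load level and watch per-stream throughput.
Again just a sketch with placeholder names, using iozone's throughput
mode:

  for n in 1 2 4 8 16; do
      echo "== $n concurrent writers =="
      iozone -i 0 -s 256m -r 64k -t $n \
          -F $(seq -f "/mnt/test/f%g" 1 $n) | grep "throughput for"
  done

If per-stream throughput collapses at some point rather than degrading
gracefully, you've found one of those non-linear spots.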

If you're careful, you can measure the effect of configuration changes
on your own workload. But that does not mean you can predict the
effect for other workloads. I once counted 17 layers in Linux on z/VM
disk I/O where we block, cache, queue, skip or reorder the I/O. Some
of these layers have instrumentation, but many don't. Trying to
understand all of that with benchmarks is like doing a weather
forecast by looking out of the window.
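
For the layers that do have instrumentation you can at least watch one
of them in action. For example, blktrace (where available) shows the
queueing, merging and dispatching done at the Linux block layer; the
device name below is just an example:

  # trace block layer events on one device for 30 seconds, then format
  blktrace -d /dev/dasda -o mytrace -w 30
  blkparse -i mytrace

That covers one layer out of the 17. The z/VM side has its own monitor
data, and you'd have to correlate the two to get the whole picture.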

Rob
-- 
Rob van der Heij
Velocity Software, Inc
http://velocitysoftware.com/
