David Boyes wrote:

> Historically, I guess I'm somewhat suspicious of compatibility modes
> like the mixed mode 31/64-bit stuff. None of the implementations of
> such code I've ever worked with was sufficient (DEC Alpha, Cray, HP)
> for production level reliability.  Perhaps you folks are better
> programmers...8-).

Compat mode should work well enough; some people are running a full
32-bit userland environment under a 64-bit kernel, and I think some
of the IBM middleware is actually certified for the 32-bit compat
layer on SLES-7 (64-bit), for example.
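
For what it's worth, the easiest way to watch the compat layer at work
is to build the same trivial program twice (assuming a gcc with -m31
support) and run both binaries under the 64-bit kernel:

  /* wordsize.c -- trivial check of the execution environment.
   * Built with -m31 it runs through the 64-bit kernel's compat
   * layer; built with -m64 it runs as a native 64-bit binary.  */
  #include <stdio.h>

  int main(void)
  {
      printf("sizeof(long) = %lu, sizeof(void *) = %lu\n",
             (unsigned long) sizeof(long),
             (unsigned long) sizeof(void *));
      return 0;
  }

  $ gcc -m31 -o wordsize31 wordsize.c
  $ gcc -m64 -o wordsize64 wordsize.c

Both binaries run side by side on the same kernel; the 31-bit one
reports 4-byte longs and pointers, the 64-bit one 8-byte ones.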

If you know of specific problems, we are always happy to receive
bug reports ;-)

> I also still find that the algorithms used in Linux for buffer
> management are quite a bit less efficient than the ones used in VM --
> that's no slight to the Linux folks, it's what they have to work with

Could you be more specific here?

> If your application issues a read (either buffered or non-buffered) in
> the VM case and MDC has pre-cached the response by doing a fulltrack
> read or has previously cached the record, the response time for I/O
> completion is significantly better than going direct to disk.

Well, if the block is already in the page cache, response time will
be better still, as we save the round trip through CP, the SIE
intercepts, and all ...
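
To make that concrete, here is a rough sketch (not a benchmark) of the
two paths -- a buffered read that is served from the page cache on the
second access, and an O_DIRECT read that always goes down to the
device.  The device name is only an example, it assumes a kernel with
O_DIRECT support, and error handling is trimmed for brevity:

  /* pagecache.c -- read the same block twice, buffered and direct */
  #define _GNU_SOURCE
  #include <fcntl.h>
  #include <malloc.h>
  #include <stdio.h>
  #include <sys/time.h>
  #include <unistd.h>

  static double ms(struct timeval a, struct timeval b)
  {
      return (b.tv_sec - a.tv_sec) * 1000.0 +
             (b.tv_usec - a.tv_usec) / 1000.0;
  }

  int main(void)
  {
      const char *dev = "/dev/dasdb1";     /* example device name */
      char *buf = memalign(4096, 4096);    /* alignment for O_DIRECT */
      struct timeval t0, t1;
      int fd;

      /* buffered: the first read fills the page cache, the second
       * is served from memory -- no I/O leaves the guest at all   */
      fd = open(dev, O_RDONLY);
      read(fd, buf, 4096);
      lseek(fd, 0, SEEK_SET);
      gettimeofday(&t0, NULL);
      read(fd, buf, 4096);
      gettimeofday(&t1, NULL);
      printf("cached read:   %.3f ms\n", ms(t0, t1));
      close(fd);

      /* direct: bypasses the page cache, always hits the device */
      fd = open(dev, O_RDONLY | O_DIRECT);
      gettimeofday(&t0, NULL);
      read(fd, buf, 4096);
      gettimeofday(&t1, NULL);
      printf("O_DIRECT read: %.3f ms\n", ms(t0, t1));
      close(fd);

      free(buf);
      return 0;
  }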

> Simplistic, until you consider that if the same database table
> is active for multiple database machines, you can do quite a bit of
> I/O avoidance that isn't possible in the Linux-only scenario. Net win.

As soon as you have multiple guests accessing the same disk, shared
caching will provide benefits, no arguments about that.  However, I'd
still consider shared read/write access to be a rare scenario, only
exploitable from a few specialized applications at the moment.

> You also gain the early I/O completion notification from
> virtualizing DASDFW, although that's more a hardware feature than a VM
> feature. It does have an impact on write performance in that the write
> I/O completes much more quickly, and is guaranteed via the NVRAM.

And how is that different from Linux using the DASDFW feature directly
(actually, we don't even need to do anything; the hardware uses DASDFW
by default unless you specifically switch it off)?
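
Just to illustrate: from the Linux side this is nothing more than an
ordinary synchronous write -- something like the sketch below (the
file name is only an example).  With fast write active on the control
unit, the write completes as soon as the data is in NVRAM, whether or
not VM sits in between:

  /* syncwrite.c -- time a single synchronous 4K write */
  #include <fcntl.h>
  #include <stdio.h>
  #include <string.h>
  #include <sys/time.h>
  #include <unistd.h>

  int main(void)
  {
      char buf[4096];
      struct timeval t0, t1;
      int fd;

      memset(buf, 0xaa, sizeof(buf));

      /* O_SYNC: write() returns only once the device has
       * acknowledged the data -- with fast write that
       * acknowledgement comes from the control unit's NVRAM,
       * not from the physical disk                           */
      fd = open("testfile", O_WRONLY | O_CREAT | O_SYNC, 0644);
      gettimeofday(&t0, NULL);
      write(fd, buf, sizeof(buf));
      gettimeofday(&t1, NULL);
      close(fd);

      printf("synchronous write: %.3f ms\n",
             (t1.tv_sec - t0.tv_sec) * 1000.0 +
             (t1.tv_usec - t0.tv_usec) / 1000.0);
      return 0;
  }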

> Is it better? Maybe not. It does however give you a lot more knobs to
> manipulate the performance of the process. I'm of the opinion that the
> I/O optimization code in VM has had more time to get optimized, and I
> find that to be more tunable than the Linux code.

It is true that the Linux philosophy generally views lots of tuning
knobs with suspicion -- the system is supposed to tune itself, with
the existing knobs serving more of a debugging function.

Is there anything specific you'd like to be able to tune but cannot?

> It also has some
> very inspired hardware feature exploitation code in it that Linux
> hasn't inherited yet.  Time will tell -- you've got plenty of work to
> keep you busy nights...8-).

Is there anything specific you have in mind?  (Things that are applicable
to modern storage subsystems -- we are not particularly interested in
supporting all the quirks of real 3390 devices at this point ...)


Bye,
Ulrich

--
  Dr. Ulrich Weigand
  [EMAIL PROTECTED]
