And more questions before I get an answer to the last one :)
If we're going to go changing it all to deadline, is that the best choice? I see
that many folks recommend noop for KVM virtual machines; wouldn't that also
apply under z? And Oracle mentions noop here
http://www.zseriesoraclesig.org/2012presentations/2012_226_simpson_11gR2CustExpSys_z_pptV2.pdf
as a way to reduce CPU consumption, although databases aren't your average I/O
workload either, of course.
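For anyone who wants to experiment before committing, the elevator can be read
and switched per device at runtime through sysfs. A sketch; "dasda" is just an
example device name, and the commented-out echo needs root:

```shell
# Sketch: query and switch the I/O elevator for one device via sysfs.
# "dasda" is an example name; substitute your own DASD/SCSI device.
dev=dasda

# The active elevator is the bracketed entry in a line such as
# "noop [deadline] cfq". This helper pulls it out:
active() { sed -n 's/.*\[\([a-z_-]*\)\].*/\1/p'; }

if [ -r "/sys/block/$dev/queue/scheduler" ]; then
    cat "/sys/block/$dev/queue/scheduler" | active
    # To switch at runtime (root required, not persistent across reboot):
    # echo noop > "/sys/block/$dev/queue/scheduler"
fi
```

The runtime switch only affects that one device, which makes it handy for
an A/B test before changing the boot-time default.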
I found this in an old xSeries redbook, of all places:
http://www.redbooks.ibm.com/redpapers/pdfs/redp3861.pdf
Select the right I/O elevator in kernel 2.6

For most server workloads, the Completely Fair Queuing (CFQ) elevator is an
adequate choice, as it is optimized for the multiuser, multiprocess environment
a typical server operates in. However, certain environments can benefit from a
different I/O elevator.

Intelligent disk subsystems

Benchmarks have shown that the NOOP elevator is an interesting alternative in
high-end server environments. When using IBM ServeRAID or TotalStorage DS class
disk subsystems, the lack of ordering capability of the NOOP elevator becomes
its strength. Intelligent disk subsystems such as IBM ServeRAID and
TotalStorage DS class disk subsystems feature their own I/O ordering
capabilities. Enterprise-class disk subsystems may contain multiple SCSI or
FibreChannel disks that each have individual disk heads, with data striped
across the disks. It would be very difficult for an operating system to
correctly anticipate the I/O characteristics of such complex subsystems, so you
might often observe at least equal performance with less overhead when using
the NOOP I/O elevator.

Virtual machines

Virtual machines, whether under VMware or VM for zSeries(R), can communicate
with the underlying hardware only through the virtualization layer. Hence a
virtual machine is not aware of whether the assigned disk device is a single
SCSI device or an array of FibreChannel disks on a TotalStorage DS8000. The
virtualization layer takes care of any necessary I/O reordering and of the
communication with the physical block devices. Therefore, we recommend using
the NOOP elevator for virtual machines to ensure minimal processor overhead.
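If deadline or noop does turn out to be the right choice, the usual way to make
it the system-wide default on SLES is the elevator= kernel parameter in
/etc/zipl.conf. A sketch only; the section label, image/ramdisk paths, and root
device below are placeholders, not taken from any real config:

```
# /etc/zipl.conf fragment (sketch; label, paths, and root device are examples)
[SLES11SP2]
    target = /boot/zipl
    image = /boot/image
    ramdisk = /boot/initrd,0x2000000
    parameters = "root=/dev/dasda1 elevator=noop"
# After editing, rerun zipl to rewrite the boot record, then reboot.
```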
Marcy Cortes
Operating Systems Engineer, z/VM and Linux on System z
Compute Platform Services, Mainframe/Midrange Services
Wells Fargo Bank | MAC A0194-110 | San Francisco
Cell 415-517-0895
marcy.d.cor...@wellsfargo.com
This message may contain confidential and/or privileged information. If you are
not the addressee or authorized to receive this for the addressee, you must not
use, copy, disclose, or take any action based on this message or any
information herein. If you have received this message in error, please advise
the sender immediately by reply e-mail and delete this message. Thank you for
your cooperation.
-Original Message-
From: Cortes, Marcy D.
Sent: Thursday, October 25, 2012 9:19 AM
To: LINUX-390@VM.MARIST.EDU
Subject: RE: [LINUX-390] I/O scheduler on sles 11 sp2 on z
Yes! We are using LVM extensively.
So cfq is being used on LVM by default then?
Marcy
-Original Message-
From: Linux on 390 Port [mailto:LINUX-390@VM.MARIST.EDU] On Behalf Of Peter
Oberparleiter
Sent: Thursday, October 25, 2012 12:55 AM
To: LINUX-390@VM.MARIST.EDU
Subject: Re: [LINUX-390] I/O scheduler on sles 11 sp2 on z
On 24.10.2012 18:54, Marcy Cortes wrote:
> "On IBM System z the default I/O scheduler for a storage device is set by the
> device driver", but trying to query what it is seems to indicate that
> everything is deadline already (but it must not be or the parm wouldn't have
> helped).
Could it be that you are using LVM which adds its own block devices which are
not affected by DASD's elevator defaults?
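One quick way to check that theory is to list the elevator for every block
device, including the dm-* devices that LVM creates. A sketch; device names
will vary, and on many kernels the dm-* queues show no elevator at all because
device-mapper passes bios straight down to the underlying DASDs:

```shell
# Sketch: show the elevator of every block device, LVM dm-* included.
for q in /sys/block/*/queue/scheduler; do
    [ -r "$q" ] || continue
    # strip the sysfs path down to the bare device name, e.g. dm-0 or dasda
    dev=${q#/sys/block/}
    dev=${dev%/queue/scheduler}
    printf '%s: %s\n' "$dev" "$(cat "$q")"
done
```

If the dasd* entries show deadline but the dm-* entries show cfq (or nothing),
that would explain why querying "everything" looked inconsistent.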
Regards,
Peter Oberparleiter
--
Peter Oberparleiter
Linux on System z Development - IBM Germany
--
For LINUX-390 subscribe / signoff / archive access instructions, send email to
lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit http://wiki.linuxvm.org/