Hi,

> On Mon, 2009-06-29 at 16:32 +0100, John Haxby wrote:
> > That's a fairly busy system but the iostat output doesn't look to me
> > like something that's I/O bound: the average wait times and queue size
> > just don't look like something that's in trouble or even working all
> > that hard.
> >
> > Am I missing something here?
>
> I think what you're missing is the IO being spread across multiple
> paths. It's difficult to read in the format with dm-12, dm-13, and
> dm-14 mixed together, however, if you separate them you'll see a
> pattern like this:

<snip>
> Then it wraps around to dm-12 and the pattern continues for about 90
> seconds. I'm assuming that's the 90 seconds of the full table scan.
> He's pretty IO bound during that part. He could set the multipath
> rr_min_io parameter lower to more evenly balance the IO across the
> paths, but I think he's pretty much maxing out the IOPS his array has
> already based on the graph his SAN admin provided. In the end, the
> RHEL5 box appears to be doing what it can with the IOPS available.
Sorry for the confusion, but dm-12, 13 and 14 are 2 TB mpath devices on which
Oracle stores its database files.
I've attached a more detailed iostat report regarding one of the mpath
devices, data002 (dm-12).
# multipath -ll data002
data002 (3600508b40010889b000090000c5f0000) dm-12 HP,HSV210
[size=2.0T][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=20][enabled]
 \_ 0:0:3:11 sdj 8:144  [active][ready]
 \_ 1:0:3:11 sdv 65:80  [active][ready]
\_ round-robin 0 [prio=100][active]
 \_ 0:0:2:11 sdd 8:48   [active][ready]
 \_ 1:0:2:11 sdp 8:240  [active][ready]
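For anyone who'd rather watch this live than dig through the attachment, something
along these lines should show extended stats for data002's underlying paths next to
the dm device itself (device names are taken from the multipath -ll output above;
the 5-second interval is just an arbitrary choice):

    # extended per-device stats in kB, every 5 seconds, for the four
    # paths of data002 plus the multipath device itself
    iostat -d -x -k 5 sdd sdp sdj sdv dm-12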
Our /etc/multipath.conf may also be of interest:
defaults {
        udev_dir                /dev
        polling_interval        5
        selector                "round-robin 0"
        path_grouping_policy    failover
        getuid_callout          "/sbin/scsi_id -g -u -s"
        prio_callout            none
        path_checker            readsector0
        rr_min_io               1000
        rr_weight               uniform
        failback                manual
        no_path_retry           fail
        user_friendly_names     no
        bindings_file           "/etc/multipath_bindings"
}

blacklist {
        devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
        devnode "^hd[a-z]"
        devnode "^cciss!c[0-9]d[0-9]*"
}

multipaths {
        multipath {
                wwid    "3600508b40010889b00009000015a0000"
                alias   system
        }
        multipath {
                wwid    "3600508b40010889b000090000c5f0000"
                alias   data002
        }
        multipath {
                wwid    "3600508b40010889b000090000c620000"
                alias   data003
        }
        multipath {
                wwid    "3600508b40010889b000090000c650000"
                alias   data004
        }
}
devices {
        device {
                vendor                  "(COMPAQ|HP)"
                product                 "HSV(1|2).*"
                getuid_callout          "/sbin/scsi_id -g -u -s"
                prio_callout            "/sbin/mpath_prio_alua %d"
                features                "0"
                hardware_handler        "0"
                path_grouping_policy    group_by_prio
                failback                immediate
                rr_weight               uniform
                no_path_retry           60
                rr_min_io               1000
                path_checker            tur
        }
}
Not sure if any IO performance can be gained by tweaking the multipath settings.
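If it were worth a try, I suppose the obvious knob is rr_min_io in the HSV device
stanza, as Tom suggested: 1000 means each path takes 1000 IOs before round-robin
moves to the next one, so a lower value spreads the load across the paths sooner.
Something like this, untested here, and 100 is only a guess at a starting point:

    # /etc/multipath.conf, in the HSV device { } stanza:
    #     rr_min_io    100      # was 1000; switch paths after fewer IOs
    #
    # then, with the LUN quiesced, rebuild its map so the new value takes effect:
    multipath -f data002        # flush the data002 map (refuses if it is in use)
    multipath -v2               # re-create the maps with the updated settings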
Thanks again for your input.
Cheers,
Andre
> Later,
> Tom
data002io.txt.gz
Description: GNU Zip compressed data
