On Tue, Oct 02, 2007 at 03:56:17PM -0400, Ross S. W. Walker wrote:
> Pasi Kärkkäinen wrote:
> > 
> > On Tue, Oct 02, 2007 at 08:57:28PM +0300, Pasi Kärkkäinen wrote:
> > > On Tue, Oct 02, 2007 at 09:39:09AM -0400, Ross S. W. Walker wrote:
> > > > Simon Banton wrote:
> > > > > 
> > > > > At 12:30 +0200 2/10/07, matthias platzer wrote:
> > > > > >
> > > > > >What I did to work around them was basically switching to XFS for
> > > > > >everything except / (3ware say their cards are fast, but only on
> > > > > >XFS) AND using very low nr_requests for every blockdev on the 3ware
> > > > > >card.
> > > > > 
> > > > > Hi Matthias,
> > > > > 
> > > > > Thanks for this. In my CentOS 5 tests the nr_requests turned out by
> > > > > default to be 128, rather than the 8192 of CentOS 4.5. I'll have a go
> > > > > at reducing it still further.
> > > > 
> > > > Yes, the nr_requests should be a realistic reflection of what the
> > > > card itself can handle. If it is set too high you will see io_waits
> > > > stack up.
> > > > 
> > > > 64 or 128 are good numbers; rarely have I seen a card that can handle
> > > > a depth larger than 128 (some older SCSI cards did 256, I think).
> > > > 
> > > 
> > > Hmm.. let's say you have a Linux software md-raid array made of SATA
> > > drives.. what kind of nr_requests values should you use for that for
> > > optimal performance?
> > > 
> > 
> > Or let's put it this way:
> > 
> > You have an md-raid array on dom0. What kind of nr_requests values should
> > you use for normal 7200 rpm SATA NCQ disks on an Intel ICH8 (NCQ)
> > controller?
> > 
> > And then this md-array is seen as xvdb by domU.. what kind of nr_requests
> > values should you use in domU?
> > 
> > The io-scheduler/elevator should be deadline in domU, I assume.. how about
> > in dom0? deadline there too?
> 
> Arrr, where thou go thar be monsters...
> 
> You got me, Pasi: with Xen as the workload it adds a whole new
> dimension.
> 
> Unless you have hardware RAID, stick to the default settings, and when
> you see a bottleneck, double-check your hardware, drivers and RAID
> config first; only twiddle the queue settings once everything else has
> been twiddled.
> 

OK.

I'm seeing quite high io-wait times in domU, but hardly any io-wait in
dom0.. so I was wondering if nr_requests is set too high in domU. I think it
is 256 at the moment.
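
For reference, this is roughly how I'd check and lower it from inside the
domU (just a sketch; xvdb is the device name from the example above, and the
new value is only a guess at what the backend can realistically handle):

    # check the current value
    cat /sys/block/xvdb/queue/nr_requests
    # lower it (as root; not persistent across reboots)
    echo 128 > /sys/block/xvdb/queue/nr_requests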

Maybe I'll have to do some benchmarking. 
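
And as for the elevator question above, for reference: the active scheduler
is the one shown in brackets, and it can be switched on the fly -- again
assuming xvdb as the domU device:

    cat /sys/block/xvdb/queue/scheduler
    # e.g.:  noop anticipatory deadline [cfq]
    echo deadline > /sys/block/xvdb/queue/scheduler

In dom0 the md device itself doesn't go through an elevator, so the scheduler
would be set on the underlying disks instead (e.g.
/sys/block/sda/queue/scheduler), if I understand it right.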

What's the best multi-threaded / multi-process I/O benchmark utility that
works on filesystems instead of raw devices and can read/write multiple
files at once?
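
fio might be one candidate, if it's available for CentOS -- something along
these lines, with the directory, file size and job count made up:

    # 4 processes doing mixed random reads/writes against files on a filesystem
    fio --name=mixedrw --directory=/mnt/test --size=1g --numjobs=4 \
        --rw=randrw --bs=8k --ioengine=sync --runtime=60 --time_based \
        --group_reporting

iozone in throughput mode (-t <number of processes>) also works on
filesystems. But suggestions welcome.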

-- Pasi       
