Re: Why is NCQ enabled by default by libata? (2.6.20)

2007-04-01 Thread Robert Hancock

Phillip Susi wrote:

Jeff Garzik wrote:
NCQ provides for a more asynchronous flow.  It helps greatly with 
reads (of which most are, by nature, synchronous at the app level) 
from multiple threads or apps.  It helps with writes, even with write 
cache on, by allowing multiple commands to be submitted and/or retired 
at the same time.


But when writing, what is the difference between queuing multiple tagged
writes, and sending down multiple untagged cached writes that complete
immediately and actually hit the disk later?  Either way the host keeps
sending writes to the disk until its buffers are full, and the disk is
constantly trying to commit those buffers to the media in the optimal
order.


As well as what others have pointed out, without NCQ the disk is forced
to accept the data in the order that the host provides it. If the host
writes a burst of data that doesn't fill the write cache, it's not as
much of an issue, but if the write cache fills up then the disk may have
to flush out data in a suboptimal order, since it can't see what other
requests are coming and can't change the order in which that data shows up.
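
For anyone who wants to check what a given drive is actually doing, a quick
sketch (the device name /dev/sda is an assumption; the sysfs path is the one
libata exposes on 2.6.20-era kernels):

hdparm -I /dev/sda | grep -i queue        # look for "Native Command Queueing"
cat /sys/block/sda/device/queue_depth     # 1 = effectively no NCQ, 31 = NCQ on
dmesg | grep 'NCQ (depth'                 # libata logs the negotiated depth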


--
Robert Hancock  Saskatoon, SK, Canada
To email, remove "nospam" from [EMAIL PROTECTED]
Home Page: http://www.roberthancock.com/


Re: Why is NCQ enabled by default by libata? (2.6.20)

2007-03-31 Thread Ric Wheeler



Mark Rustad wrote:

On Mar 27, 2007, at 1:38 PM, Jeff Garzik wrote:


Mark Rustad wrote:
reorder any queued operations. Of course if you really care about 
your data, you don't really want to turn write cache on.


That's a gross exaggeration.  FLUSH CACHE and FUA both ensure data 
integrity as well.


Turning write cache off has always been a performance-killing action 
on ATA.


Perhaps. Folks I work with would disagree with that, but I am not 
enough of a storage expert to judge. My statement mirrors the 
judgement of folks I work with who know more than I do.


You can easily demonstrate that disabling the write cache on a SATA or ATA 
drive will drop your large-file write performance by 50%: just try 
writing 10MB files to disk. 
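
A quick way to see this for yourself, as a sketch (the device /dev/sda and the
scratch path are assumptions, and hdparm needs root; conv=fsync makes dd flush
to the device before exiting, so buffered completion doesn't mask the
difference):

hdparm -W0 /dev/sda                                   # drive write cache off
time dd if=/dev/zero of=/mnt/scratch/f bs=1M count=10 conv=fsync
hdparm -W1 /dev/sda                                   # write cache back on
time dd if=/dev/zero of=/mnt/scratch/f bs=1M count=10 conv=fsync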


ric


Re: Why is NCQ enabled by default by libata? (2.6.20)

2007-03-30 Thread Lennart Sorensen
On Thu, Mar 29, 2007 at 02:47:20PM -0700, David Schwartz wrote:
> Which sounds faster to you:
> 
> 1) "Do A, B, C, and D."
>"Okay, I've finished A, B, C, and B."
> or
> 
> 2) "Do A."
>"Okay." 
>"Do B."
>"Okay."
>"Do C."
>"Okay."
>"Do D."
>"Okay."
> 
> The first looks a bit more efficient to me.

It also looks like the first one got confused by having to remember all
those letters it had done. :)

--
Len Sorensen


RE: Why is NCQ enabled by default by libata? (2.6.20)

2007-03-29 Thread David Schwartz

> But when writing, what is the difference between queuing multiple tagged 
> writes, and sending down multiple untagged cached writes that complete 
> immediately and actually hit the disk later?  Either way the host keeps 
> sending writes to the disk until its buffers are full, and the disk is 
> constantly trying to commit those buffers to the media in the optimal 
> order.

Which sounds faster to you:

1) "Do A, B, C, and D."
   "Okay, I've finished A, B, C, and B."
or

2) "Do A."
   "Okay." 
   "Do B."
   "Okay."
   "Do C."
   "Okay."
   "Do D."
   "Okay."

The first looks a bit more efficient to me.

DS




Re: Why is NCQ enabled by default by libata? (2.6.20)

2007-03-29 Thread Alan Cox
> But when writing, what is the difference between queuing multiple tagged 
> writes, and sending down multiple untagged cached writes that complete 
> immediately and actually hit the disk later?  Either way the host keeps 
> sending writes to the disk until its buffers are full, and the disk is 
> constantly trying to commit those buffers to the media in the optimal 
> order.

On the controller side, primarily, you get to queue commands, which means
you don't have a dead period between the completion interrupt and
the next command being issued. Those times add up even when there is a
disk cache buffering the output.


Re: Why is NCQ enabled by default by libata? (2.6.20)

2007-03-29 Thread Jeff Garzik

Phillip Susi wrote:

Jeff Garzik wrote:
NCQ provides for a more asynchronous flow.  It helps greatly with 
reads (of which most are, by nature, synchronous at the app level) 
from multiple threads or apps.  It helps with writes, even with write 
cache on, by allowing multiple commands to be submitted and/or retired 
at the same time.


But when writing, what is the difference between queuing multiple tagged 
writes, and sending down multiple untagged cached writes that complete 
immediately and actually hit the disk later?  Either way the host keeps 
sending writes to the disk until its buffers are full, and the disk is 
constantly trying to commit those buffers to the media in the optimal 
order.


Less overhead in starting commands, and all the other benefits of making 
operations fully async.


Jeff





Re: Why is NCQ enabled by default by libata? (2.6.20)

2007-03-29 Thread linux
> But when writing, what is the difference between queuing multiple tagged 
> writes, and sending down multiple untagged cached writes that complete 
> immediately and actually hit the disk later?  Either way the host keeps 
> sending writes to the disk until its buffers are full, and the disk is 
> constantly trying to commit those buffers to the media in the optimal 
> order.

Well, theoretically it allows more buffering, without hurting read
caching.

With NCQ, the drive gets the command, and then tells the host when it
wants the corresponding data.  It can ask for the data in any order
it likes, when it's decided which write will be serviced next.  So it
doesn't have to fill up its RAM with the write data.  This leaves more
RAM free for things like read-ahead.

Another trick, that I know SCSI can do and I expect NCQ can do, is that
the drive can ask for the data for a single command in different orders.
This is particularly useful for reads, where a drive asked for blocks
100-199 can deliver blocks 150-199 first, then 100-149 when the drive
spins around.

This is, unfortunately, kind of theoretical.  I don't actually know
how hard drive caching algorithms work, but I assume it's mostly a
readahead cache.  The host has much more RAM than the drive, so any
block that it's read won't be requested again for a long time.  So the
drive doesn't want to keep that in cache.  But any sectors that the
drive happens to read near requested sectors are worth keeping.


I'm not sure it's a big deal, as 32 (tags) x 128K (largest LBA28 write
size) is 4M, only half of a typical drive's cache RAM.  But it's
possible that there's some difference.
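
(Worked out: LBA28 caps a single command at 256 sectors x 512 bytes = 128K,
so a full queue pins at most 32 x 128K of write data:)

echo $((32 * 128))    # prints 4096 (KiB), i.e. 4M queued at once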


Re: Why is NCQ enabled by default by libata? (2.6.20)

2007-03-29 Thread Phillip Susi

Jeff Garzik wrote:
NCQ provides for a more asynchronous flow.  It helps greatly with reads 
(of which most are, by nature, synchronous at the app level) from 
multiple threads or apps.  It helps with writes, even with write cache 
on, by allowing multiple commands to be submitted and/or retired at the 
same time.


But when writing, what is the difference between queuing multiple tagged 
writes, and sending down multiple untagged cached writes that complete 
immediately and actually hit the disk later?  Either way the host keeps 
sending writes to the disk until its buffers are full, and the disk is 
constantly trying to commit those buffers to the media in the optimal 
order.




Re: Why is NCQ enabled by default by libata? (2.6.20)

2007-03-28 Thread Jeff Garzik

Phillip Susi wrote:

Justin Piszcz wrote:

I would try with write-caching enabled.
Also, the RAID5/RAID10 you mention seems like each volume is on part of
the platter, a strange setup you got there :)


Shouldn't NCQ only help write performance if write caching is 
_disabled_?  Since write cache essentially is just non-tagged command 
queuing?


NCQ provides for a more asynchronous flow.  It helps greatly with reads 
(of which most are, by nature, synchronous at the app level) from 
multiple threads or apps.  It helps with writes, even with write cache 
on, by allowing multiple commands to be submitted and/or retired at the 
same time.


Jeff





Re: Why is NCQ enabled by default by libata? (2.6.20)

2007-03-28 Thread Phillip Susi

Justin Piszcz wrote:

I would try with write-caching enabled.
Also, the RAID5/RAID10 you mention seems like each volume is on part of
the platter, a strange setup you got there :)


Shouldn't NCQ only help write performance if write caching is 
_disabled_?  Since write cache essentially is just non-tagged command 
queuing?




Re: Why is NCQ enabled by default by libata? (2.6.20)

2007-03-27 Thread Mark Rustad

On Mar 27, 2007, at 1:38 PM, Jeff Garzik wrote:


Mark Rustad wrote:
reorder any queued operations. Of course if you really care about  
your data, you don't really want to turn write cache on.


That's a gross exaggeration.  FLUSH CACHE and FUA both ensure data  
integrity as well.


Turning write cache off has always been a performance-killing  
action on ATA.


Perhaps. Folks I work with would disagree with that, but I am not  
enough of a storage expert to judge. My statement mirrors the  
judgement of folks I work with that know more than I do.


Also the controller used can have unfortunate interactions. For  
example the Adaptec SAS controller firmware will never issue more  
than two queued commands to a SATA drive (even though the firmware  
will happily accept more from the driver), so even if an attached  
drive is capable of reordering queued commands, its performance is  
seriously crippled by not getting more commands queued up. In  
addition, some drive firmware seems to try to bunch up queued  
command completions which interacts very badly with a controller  
that queues up so few commands. In this case turning NCQ off  
performs better because the drive knows it can't hold off  
completions to reduce interrupt load on the host – a good idea  
gone totally wrong when used with the Adaptec controller.


All of that can be fixed with an Adaptec firmware upgrade, so not  
our problem here, and not a reason to disable NCQ in libata core.


It theoretically could be, but we are using the latest Adaptec  
firmware. Until there exists firmware that fixes it, it remains an  
issue. We worked with Adaptec to isolate this issue, but no  
resolution has been forthcoming from them. I agree that this does not  
mean that NCQ should be disabled in libata core, but some combination  
of controller/drive/firmware blacklist may need to be managed, as  
distasteful as that is.


Today SATA NCQ seems to be an area where few combinations work  
well. It seems so bad to me that a whitelist might be better than  
a blacklist. That is probably overstating it, but NCQ performance  
is certainly a big problem.


Real world testing disagrees with you.  NCQ has been enabled for a  
while now.  We would have screaming hordes of users if the majority  
of configurations were problematic.


I didn't say that it is a majority or that it doesn't work, it just  
often doesn't perform. If it didn't work there would be lots of  
howling for sure. I'm also not saying that it is a libata problem. It  
seems mostly to be controller and drive firmware issues - and the odd  
fan issue (if you saw the thread: [BUG 2.6.21-rc3-git9] SATA NCQ  
failure with Samsung HD401LJ).


I guess I am mainly lamenting the current state of SATA/NCQ devices  
and sharing what little I have picked up about it - which is that I  
want SAS disks in my next system!


--
Mark Rustad, [EMAIL PROTECTED]




Re: Why is NCQ enabled by default by libata? (2.6.20)

2007-03-27 Thread Jeff Garzik

Mark Rustad wrote:
reorder any queued operations. Of course if you really care about your 
data, you don't really want to turn write cache on.


That's a gross exaggeration.  FLUSH CACHE and FUA both ensure data 
integrity as well.


Turning write cache off has always been a performance-killing action on ATA.


Also the controller used can have unfortunate interactions. For example 
the Adaptec SAS controller firmware will never issue more than two 
queued commands to a SATA drive (even though the firmware will happily 
accept more from the driver), so even if an attached drive is capable of 
reordering queued commands, its performance is seriously crippled by not 
getting more commands queued up. In addition, some drive firmware seems 
to try to bunch up queued command completions which interacts very badly 
with a controller that queues up so few commands. In this case turning 
NCQ off performs better because the drive knows it can't hold off 
completions to reduce interrupt load on the host – a good idea gone 
totally wrong when used with the Adaptec controller.


All of that can be fixed with an Adaptec firmware upgrade, so not our 
problem here, and not a reason to disable NCQ in libata core.



Today SATA NCQ seems to be an area where few combinations work well. It 
seems so bad to me that a whitelist might be better than a blacklist. 
That is probably overstating it, but NCQ performance is certainly a big 
problem.


Real world testing disagrees with you.  NCQ has been enabled for a while 
now.  We would have screaming hordes of users if the majority of 
configurations were problematic.


Jeff




Re: Why is NCQ enabled by default by libata? (2.6.20)

2007-03-27 Thread Mark Rustad

On Mar 27, 2007, at 12:59 AM, Jeff Garzik wrote:


Justin Piszcz wrote:
Without NCQ, performance is MUCH better on almost every operation,  
with the exception of 2-3 items.


Variables to take into account:

* the drive (NCQ performance wildly varies)
* the IO scheduler
* the filesystem (if not measuring direct to blkdev)
* application workload (or in your case, benchmark tool)
* in particular, the threaded-ness of the apps

For the overwhelming majority of combinations, NCQ should not /hurt/ 
performance.


For the majority of combinations, NCQ helps (though it may not be  
often that you use more than 4-8 tags).


In some cases, NCQ firmware may be broken.  There is a Maxtor  
firmware id, and some Hitachi ids that people are leaning towards  
recommending be added to the libata 'horkage' list.


Some other variables that we have noticed: Some drive firmware goes  
into "stupid" mode when write cache is turned off, meaning that it  
does not reorder any queued operations. Of course if you really care  
about your data, you don't really want to turn write cache on.
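
To check which state a drive is in, a quick sketch (assumes /dev/sda;
hdparm -W with no value just queries the setting):

hdparm -W /dev/sda    # prints e.g. "write-caching = 1 (on)"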


Also the controller used can have unfortunate interactions. For  
example the Adaptec SAS controller firmware will never issue more  
than two queued commands to a SATA drive (even though the firmware  
will happily accept more from the driver), so even if an attached  
drive is capable of reordering queued commands, its performance is  
seriously crippled by not getting more commands queued up. In  
addition, some drive firmware seems to try to bunch up queued command  
completions which interacts very badly with a controller that queues  
up so few commands. In this case turning NCQ off performs better  
because the drive knows it can't hold off completions to reduce  
interrupt load on the host – a good idea gone totally wrong when used  
with the Adaptec controller.


Today SATA NCQ seems to be an area where few combinations work well.  
It seems so bad to me that a whitelist might be better than a  
blacklist. That is probably overstating it, but NCQ performance is  
certainly a big problem.


--
Mark Rustad, [EMAIL PROTECTED]




Re: Why is NCQ enabled by default by libata? (2.6.20)

2007-03-27 Thread Justin Piszcz



On Tue, 27 Mar 2007, [EMAIL PROTECTED] wrote:


I meant you do not allocate the entire disk per raidset, which may alter
performance numbers.


No, that would be silly.  It does lower the average performance of the
large RAID-5 area, but I don't know how ext3fs is allocating the blocks
anyway, so


04:00.0 RAID bus controller: Silicon Image, Inc. SiI 3132 Serial ATA Raid II Controller (rev 01)
I assume you mean 3132 right?


Yes; did I mistype?

02:00.0 Mass storage controller: Silicon Image, Inc. SiI 3132 Serial ATA Raid II Controller (rev 01)
03:00.0 Mass storage controller: Silicon Image, Inc. SiI 3132 Serial ATA Raid II Controller (rev 01)
04:00.0 Mass storage controller: Silicon Image, Inc. SiI 3132 Serial ATA Raid II Controller (rev 01)


I also have 6 seagates, I'd need to run one
of these tests on them as well, also you took the micro jumper off the
Seagate 400s in the back as well right?


Um... no, I don't remember doing anything like that.  What micro jumper?
It's been a while, but I just double-checked the drive manual and
it doesn't mention any jumpers.



The 7200.8s don't use a jumper except for "factory use" - the 7200.9s and 
7200.10s I believe have a jumper in the back to enable/disable 3.0Gbps 
operation.  Your model # corresponds with a 7200.8, so never mind about the 
jumper.


Justin.


Re: Why is NCQ enabled by default by libata? (2.6.20)

2007-03-27 Thread linux
> I meant you do not allocate the entire disk per raidset, which may alter 
> performance numbers.

No, that would be silly.  It does lower the average performance of the
large RAID-5 area, but I don't know how ext3fs is allocating the blocks
anyway, so

> 04:00.0 RAID bus controller: Silicon Image, Inc. SiI 3132 Serial ATA Raid II Controller (rev 01)
> I assume you mean 3132 right?

Yes; did I mistype?

02:00.0 Mass storage controller: Silicon Image, Inc. SiI 3132 Serial ATA Raid II Controller (rev 01)
03:00.0 Mass storage controller: Silicon Image, Inc. SiI 3132 Serial ATA Raid II Controller (rev 01)
04:00.0 Mass storage controller: Silicon Image, Inc. SiI 3132 Serial ATA Raid II Controller (rev 01)

> I also have 6 seagates, I'd need to run one 
> of these tests on them as well, also you took the micro jumper off the 
> Seagate 400s in the back as well right?

Um... no, I don't remember doing anything like that.  What micro jumper?
It's been a while, but I just double-checked the drive manual and
it doesn't mention any jumpers.


Re: Why is NCQ enabled by default by libata? (2.6.20)

2007-03-27 Thread linux

On Tue, 27 Mar 2007, [EMAIL PROTECTED] wrote:

> Here's some more data.
>
> 6x ST3400832AS (Seagate 7200.8) 400 GB drives.
> 3x SiI3232 PCIe SATA controllers
> 2.2 GHz Athlon 64, 1024k cache (3700+), 2 GB RAM
> Linux 2.6.20.4, 64-bit kernel
>
> Tested able to sustain reads at 60 MB/sec/drive simultaneously.
>
> RAID-10 is across 6 drives, first part of drive.
> RAID-5 most of the drive, so depending on allocation policies,
> may be a bit slower.
>
> The test sequence actually was:
> 1) raid5ncq
> 2) raid5noncq
> 3) raid10noncq
> 4) raid10ncq
> 5) raid5ncq
> 6) raid5noncq
> but I rearranged things to make it easier to compare.
>
> Note that NCQ makes writes faster (oh... I have write cacheing turned off;
> perhaps I should turn it on and do another round), but no-NCQ seems to have
> a read advantage.  [EMAIL PROTECTED]@#ing bonnie++ overflows and won't print 
> file
> read times; I haven't bothered to fix that yet.
>
> NCQ seems to have a pretty significant effect on the file operations,
> especially deletes.
>
> Update: added
> 7) wcache5noncq - RAID 5 with no NCQ but write cache enabled
> 8) wcache5ncq - RAID 5 with NCQ and write cache enabled
>
>
> RAID=5, NCQ
> Version  1.03   --Sequential Output-- --Sequential Input- --Random-
>    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
> MachineSize K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
> raid5ncq  7952M 31688  53  34760 10 25327   4 57908  86 167680 13 292.2   0
> raid5ncq  7952M 30357  50  34154 10 24876   4 59692  89 165663 13 285.6   0
> raid5noncq7952M 29015  48  31627  9 24263   4 61154  91 185389 14 286.6   0
> raid5noncq7952M 28447  47  31163  9 23306   4 60456  89 198624 15 293.4   0
> wcache5ncq7952M 32433  54  35413 10 26139   4 59898  89 168032 13 303.6   0
> wcache5noncq  7952M 31768  53  34597 10 25849   4 61049  90 193351 14 304.8   0
> raid10ncq 7952M 54043  89 110804 32 48859   9 58809  87 142140 12 363.8   0
> raid10noncq   7952M 48912  81  68428 21 38906   7 57824  87 146030 12 358.2   0
>
>    --Sequential Create-- Random Create
>    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
> files:max:min/sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
>    16:10:16/64  1351  25 + +++   941   3  2887  42 31526  96   382   1
>    16:10:16/64  1400  18 + +++   386   1  4959  69 32118  95   570   2
>    16:10:16/64   636   8 + +++   176   0  1649  23 + +++   245   1
>    16:10:16/64   715  12 + +++   164   0   156   2 11023  32  2161   8
>    16:10:16/64  1291  26 + +++  2778  10  2424  33 31127  93   483   2
>    16:10:16/64  1236  26 + +++   840   3  2519  37 30366  91   445   2
>    16:10:16/64  1714  37 + +++  1652   6   789  11  4700  14 12264  48
>    16:10:16/64   634  11 + +++  1035   3   338   4 + +++  1349   5
>
> raid5ncq,7952M,31688,53,34760,10,25327,4,57908,86,167680,13,292.2,0,16:10:16/64,1351,25,+,+++,941,3,2887,42,31526,96,382,1
> raid5ncq,7952M,30357,50,34154,10,24876,4,59692,89,165663,13,285.6,0,16:10:16/64,1400,18,+,+++,386,1,4959,69,32118,95,570,2
> raid5noncq,7952M,29015,48,31627,9,24263,4,61154,91,185389,14,286.6,0,16:10:16/64,636,8,+,+++,176,0,1649,23,+,+++,245,1
> raid5noncq,7952M,28447,47,31163,9,23306,4,60456,89,198624,15,293.4,0,16:10:16/64,715,12,+,+++,164,0,156,2,11023,32,2161,8
> wcache5ncq,7952M,32433,54,35413,10,26139,4,59898,89,168032,13,303.6,0,16:10:16/64,1291,26,+,+++,2778,10,2424,33,31127,93,483,2
> wcache5noncq,7952M,31768,53,34597,10,25849,4,61049,90,193351,14,304.8,0,16:10:16/64,1236,26,+,+++,840,3,2519,37,30366,91,445,2
> raid10ncq,7952M,54043,89,110804,32,48859,9,58809,87,142140,12,363.8,0,16:10:16/64,1714,37,+,+++,1652,6,789,11,4700,14,12264,48
> raid10noncq,7952M,48912,81,68428,21,38906,7,57824,87,146030,12,358.2,0,16:10:16/64,634,11,+,+++,1035,3,338,4,+,+++,1349,5
>

> I would try with write-caching enabled.

I did.  See the "wcache5" lines?

> Also, the RAID5/RAID10 you mention seems like each volume is on part of
> the platter, a strange setup you got there :)

I don't quite understand.  "Each volume is on part of the platter" -
yes, it's called partitioning, and it's pretty common.

Basically, the first 50G of each drive is assembled with RAID-10 to make
a 150G "system" file system, where I appreciate the speed and greater
redundancy of RAID-10, and the last 250G are combined 

Re: Why is NCQ enabled by default by libata? (2.6.20)

2007-03-27 Thread Justin Piszcz

On Tue, 27 Mar 2007, [EMAIL PROTECTED] wrote:


Here's some more data.

6x ST3400832AS (Seagate 7200.8) 400 GB drives.
3x SiI3232 PCIe SATA controllers
2.2 GHz Athlon 64, 1024k cache (3700+), 2 GB RAM
Linux 2.6.20.4, 64-bit kernel

Tested able to sustain reads at 60 MB/sec/drive simultaneously.

RAID-10 is across 6 drives, first part of drive.
RAID-5 most of the drive, so depending on allocation policies,
may be a bit slower.

The test sequence actually was:
1) raid5ncq
2) raid5noncq
3) raid10noncq
4) raid10ncq
5) raid5ncq
6) raid5noncq
but I rearranged things to make it easier to compare.

Note that NCQ makes writes faster (oh... I have write cacheing turned off;
perhaps I should turn it on and do another round), but no-NCQ seems to have
a read advantage.  [EMAIL PROTECTED]@#ing bonnie++ overflows and won't print
file read times; I haven't bothered to fix that yet.

NCQ seems to have a pretty significant effect on the file operations,
especially deletes.

Update: added
7) wcache5noncq - RAID 5 with no NCQ but write cache enabled
8) wcache5ncq - RAID 5 with NCQ and write cache enabled


RAID=5, NCQ
Version  1.03   --Sequential Output-- --Sequential Input- --Random-
   -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
MachineSize K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
raid5ncq  7952M 31688  53  34760 10 25327   4 57908  86 167680 13 292.2   0
raid5ncq  7952M 30357  50  34154 10 24876   4 59692  89 165663 13 285.6   0
raid5noncq7952M 29015  48  31627  9 24263   4 61154  91 185389 14 286.6   0
raid5noncq7952M 28447  47  31163  9 23306   4 60456  89 198624 15 293.4   0
wcache5ncq7952M 32433  54  35413 10 26139   4 59898  89 168032 13 303.6   0
wcache5noncq  7952M 31768  53  34597 10 25849   4 61049  90 193351 14 304.8   0
raid10ncq 7952M 54043  89 110804 32 48859   9 58809  87 142140 12 363.8   0
raid10noncq   7952M 48912  81  68428 21 38906   7 57824  87 146030 12 358.2   0

   --Sequential Create-- Random Create
   -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
files:max:min/sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
   16:10:16/64  1351  25 + +++   941   3  2887  42 31526  96   382   1
   16:10:16/64  1400  18 + +++   386   1  4959  69 32118  95   570   2
   16:10:16/64   636   8 + +++   176   0  1649  23 + +++   245   1
   16:10:16/64   715  12 + +++   164   0   156   2 11023  32  2161   8
   16:10:16/64  1291  26 + +++  2778  10  2424  33 31127  93   483   2
   16:10:16/64  1236  26 + +++   840   3  2519  37 30366  91   445   2
   16:10:16/64  1714  37 + +++  1652   6   789  11  4700  14 12264  48
   16:10:16/64   634  11 + +++  1035   3   338   4 + +++  1349   5

raid5ncq,7952M,31688,53,34760,10,25327,4,57908,86,167680,13,292.2,0,16:10:16/64,1351,25,+,+++,941,3,2887,42,31526,96,382,1
raid5ncq,7952M,30357,50,34154,10,24876,4,59692,89,165663,13,285.6,0,16:10:16/64,1400,18,+,+++,386,1,4959,69,32118,95,570,2
raid5noncq,7952M,29015,48,31627,9,24263,4,61154,91,185389,14,286.6,0,16:10:16/64,636,8,+,+++,176,0,1649,23,+,+++,245,1
raid5noncq,7952M,28447,47,31163,9,23306,4,60456,89,198624,15,293.4,0,16:10:16/64,715,12,+,+++,164,0,156,2,11023,32,2161,8
wcache5ncq,7952M,32433,54,35413,10,26139,4,59898,89,168032,13,303.6,0,16:10:16/64,1291,26,+,+++,2778,10,2424,33,31127,93,483,2
wcache5noncq,7952M,31768,53,34597,10,25849,4,61049,90,193351,14,304.8,0,16:10:16/64,1236,26,+,+++,840,3,2519,37,30366,91,445,2
raid10ncq,7952M,54043,89,110804,32,48859,9,58809,87,142140,12,363.8,0,16:10:16/64,1714,37,+,+++,1652,6,789,11,4700,14,12264,48
raid10noncq,7952M,48912,81,68428,21,38906,7,57824,87,146030,12,358.2,0,16:10:16/64,634,11,+,+++,1035,3,338,4,+,+++,1349,5



I would try with write-caching enabled.
Also, the RAID5/RAID10 you mention seems like each volume is on part of
the platter, a strange setup you got there :)

Also, you are toggling NCQ on/off via the /sys/block device, e.g., setting 
queue_depth to 1 (off) and 31 (on) during testing, yes?
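
(For reference, that toggle is a sketch like the following, assuming /dev/sda
as one member drive; repeat for each drive in the array:)

echo 1  > /sys/block/sda/device/queue_depth    # depth 1: NCQ effectively off
echo 31 > /sys/block/sda/device/queue_depth    # depth 31: NCQ on
cat /sys/block/sda/device/queue_depth          # verify the current setting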


Justin.


Re: Why is NCQ enabled by default by libata? (2.6.20)

2007-03-27 Thread linux
Here's some more data.

6x ST3400832AS (Seagate 7200.8) 400 GB drives.
3x SiI3232 PCIe SATA controllers
2.2 GHz Athlon 64, 1024k cache (3700+), 2 GB RAM
Linux 2.6.20.4, 64-bit kernel

Tested able to sustain reads at 60 MB/sec/drive simultaneously.

RAID-10 is across 6 drives, first part of drive.
RAID-5 most of the drive, so depending on allocation policies,
may be a bit slower.

The test sequence actually was:
1) raid5ncq
2) raid5noncq
3) raid10noncq
4) raid10ncq
5) raid5ncq
6) raid5noncq
but I rearranged things to make it easier to compare.

Note that NCQ makes writes faster (oh... I have write cacheing turned off;
perhaps I should turn it on and do another round), but no-NCQ seems to have
a read advantage.  [EMAIL PROTECTED]@#ing bonnie++ overflows and won't print
file read times; I haven't bothered to fix that yet.

NCQ seems to have a pretty significant effect on the file operations,
especially deletes.

Update: added
7) wcache5noncq - RAID 5 with no NCQ but write cache enabled
8) wcache5ncq - RAID 5 with NCQ and write cache enabled


RAID=5, NCQ
Version  1.03   --Sequential Output-- --Sequential Input- --Random-
-Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
MachineSize K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
raid5ncq  7952M 31688  53  34760 10 25327   4 57908  86 167680 13 292.2   0
raid5ncq  7952M 30357  50  34154 10 24876   4 59692  89 165663 13 285.6   0
raid5noncq7952M 29015  48  31627  9 24263   4 61154  91 185389 14 286.6   0
raid5noncq7952M 28447  47  31163  9 23306   4 60456  89 198624 15 293.4   0
wcache5ncq7952M 32433  54  35413 10 26139   4 59898  89 168032 13 303.6   0
wcache5noncq  7952M 31768  53  34597 10 25849   4 61049  90 193351 14 304.8   0
raid10ncq 7952M 54043  89 110804 32 48859   9 58809  87 142140 12 363.8   0
raid10noncq   7952M 48912  81  68428 21 38906   7 57824  87 146030 12 358.2   0

--Sequential Create-- Random Create
-Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
files:max:min/sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
16:10:16/64  1351  25 + +++   941   3  2887  42 31526  96   382   1
16:10:16/64  1400  18 + +++   386   1  4959  69 32118  95   570   2
16:10:16/64   636   8 + +++   176   0  1649  23 + +++   245   1
16:10:16/64   715  12 + +++   164   0   156   2 11023  32  2161   8
16:10:16/64  1291  26 + +++  2778  10  2424  33 31127  93   483   2
16:10:16/64  1236  26 + +++   840   3  2519  37 30366  91   445   2
16:10:16/64  1714  37 + +++  1652   6   789  11  4700  14 12264  48
16:10:16/64   634  11 + +++  1035   3   338   4 + +++  1349   5

raid5ncq,7952M,31688,53,34760,10,25327,4,57908,86,167680,13,292.2,0,16:10:16/64,1351,25,+,+++,941,3,2887,42,31526,96,382,1
raid5ncq,7952M,30357,50,34154,10,24876,4,59692,89,165663,13,285.6,0,16:10:16/64,1400,18,+,+++,386,1,4959,69,32118,95,570,2
raid5noncq,7952M,29015,48,31627,9,24263,4,61154,91,185389,14,286.6,0,16:10:16/64,636,8,+,+++,176,0,1649,23,+,+++,245,1
raid5noncq,7952M,28447,47,31163,9,23306,4,60456,89,198624,15,293.4,0,16:10:16/64,715,12,+,+++,164,0,156,2,11023,32,2161,8
wcache5ncq,7952M,32433,54,35413,10,26139,4,59898,89,168032,13,303.6,0,16:10:16/64,1291,26,+,+++,2778,10,2424,33,31127,93,483,2
wcache5noncq,7952M,31768,53,34597,10,25849,4,61049,90,193351,14,304.8,0,16:10:16/64,1236,26,+,+++,840,3,2519,37,30366,91,445,2
raid10ncq,7952M,54043,89,110804,32,48859,9,58809,87,142140,12,363.8,0,16:10:16/64,1714,37,+,+++,1652,6,789,11,4700,14,12264,48
raid10noncq,7952M,48912,81,68428,21,38906,7,57824,87,146030,12,358.2,0,16:10:16/64,634,11,+,+++,1035,3,338,4,+,+++,1349,5
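
(A sketch of how this NCQ/write-cache matrix can be driven, for anyone
reproducing it; the device names, mount point, and the exact bonnie++
arguments here are assumptions inferred from the output above:)

for d in sda sdb sdc sdd sde sdf; do
    echo 31 > /sys/block/$d/device/queue_depth   # 31 = NCQ on, 1 = off
    hdparm -W1 /dev/$d                           # -W1 = write cache on, -W0 = off
done
bonnie++ -d /mnt/raid -s 7952 -n 16:10:16:64 -u root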


Re: Why is NCQ enabled by default by libata? (2.6.20)

2007-03-27 Thread Mark Lord

Jeff Garzik wrote:


In some cases, NCQ firmware may be broken.  There is a Maxtor firmware 
id, and some Hitachi ids that people are leaning towards recommending be 
added to the libata 'horkage' list.


Western Digital "Raptor" drives (the 10K rpm things) are also somewhat
borked in NCQ mode, depending on the application.

Their firmware turns off all drive readahead during NCQ.
This makes them very good for an email/news server application,
but also causes them to suck for regular desktop applications.

Because of this, they use special software drivers under MSwin
which detect large sequential accesses, and avoid NCQ during such times.
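
(To see the model and firmware strings that a libata blacklist entry would
match for such drives, a quick sketch, assuming hdparm and /dev/sda:)

hdparm -I /dev/sda | grep -E 'Model Number|Firmware Revision'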

Cheers

-ml


Re: Why is NCQ enabled by default by libata? (2.6.20)

2007-03-27 Thread Justin Piszcz


On Tue, 27 Mar 2007, Tejun Heo wrote:


Justin Piszcz wrote:
Checking the benchmarks on various hardware websites, anandtech, 
hothardware and others, they generally all come to the same conclusion: if 
there is only 1 thread using I/O (single user system) then NCQ off is the 
best.


Are they testing using Linux?  I/O performance is highly dependent on 
workload and scheduling, so results on Windows wouldn't be very useful. 
Posting some links here would be nice.


I see 30-50MB/s faster speeds with NCQ turned off on two different SW 
RAID5s.


You're testing raptors, right?  If the performance drop is that drastic and 
consistent over different workloads, we'll have to disable NCQ for raptors. 
I'm not sure about other drives.  Care to perform tests over more popular 
ones (e.g. recent seagates or 7200rpm wds)?


--
tejun



You are correct, it definitely depends upon the workload, and a lot of the 
benchmarks do use Windows; however, I will have to check later; I recall 
finding a few that did test under Linux.


For a plain untar with lots of small files, the benefit of turning NCQ off 
is not as big as for sequential reads/writes of big files; however, there 
is still an improvement:


Raid5 Quad 150 Raptor (NCQ)
# time sh -c 'tar xf linux-2.6.20.tar; sync'

real    0m21.721s
user    0m0.174s
sys     0m1.541s

Raid5 Quad 150 Raptor (NO NCQ)
# time sh -c 'tar xf linux-2.6.20.tar; sync'

real    0m16.761s
user    0m0.195s
sys     0m1.361s

Raid5 Six 400GB Sata Drives (NO NCQ)
# time sh -c 'tar xf linux-2.6.20.tar; sync'
real    0m54.844s
user    0m0.189s
sys     0m1.432s

Raid5 Six 400GB Sata Drives (NCQ)
# time sh -c 'tar xf linux-2.6.20.tar; sync'
real    1m7.322s
user    0m0.194s
sys     0m1.492s

Justin.


Re: Why is NCQ enabled by default by libata? (2.6.20)

2007-03-27 Thread Tejun Heo

Justin Piszcz wrote:
Checking the benchmarks on various hardware websites, anandtech, 
hothardware and others, they generally all come to the same conclusion: 
if there is only 1 thread using I/O (single user system) then NCQ off is 
the best.


Are they testing using Linux?  I/O performance is highly dependent on 
workload and scheduling, so results on Windows wouldn't be very useful. 
Posting some links here would be nice.


I see 30-50MB/s faster speeds with NCQ turned off on two 
different SW RAID5s.


You're testing raptors, right?  If the performance drop is that drastic 
and consistent over different workloads, we'll have to disable NCQ for 
raptors.  I'm not sure about other drives.  Care to perform tests over 
more popular ones (e.g. recent seagates or 7200rpm wds)?


--
tejun


Re: Why is NCQ enabled by default by libata? (2.6.20)

2007-03-27 Thread Justin Piszcz



On Sat, 24 Mar 2007, Alan Cox wrote:


On Sat, 24 Mar 2007 12:38:02 -0400 (EDT)
Justin Piszcz <[EMAIL PROTECTED]> wrote:


Without NCQ, performance is MUCH better on almost every operation, with
the exception of 2-3 items.


It depends on the drive. Generally NCQ is better but some drive firmware
isn't too bright and there are probably still cases where we get bad
interactions in the kernel code that want tuning too



Checking the benchmarks on various hardware websites, anandtech, 
hothardware and others, they generally all come to the same conclusion: if 
there is only 1 thread using I/O (single user system) then NCQ off is the 
best.  I see 30-50MB/s faster speeds with NCQ turned off on two different 
SW RAID5s.


Justin.
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: Why is NCQ enabled by default by libata? (2.6.20)

2007-03-27 Thread Justin Piszcz


On Tue, 27 Mar 2007, Tejun Heo wrote:


Justin Piszcz wrote:
Checking the benchmarks on various hardware websites (Anandtech, 
HotHardware and others), they generally all come to the same conclusion: 
if there is only 1 thread using I/O (a single-user system), then NCQ off 
is best.


Are they testing using Linux?  I/O performance is highly dependent on 
workload and scheduling, so results on Windows wouldn't be very useful. 
Posting some links here would be nice.


I see 30-50MB/s faster speeds with NCQ turned off on two different SW 
RAID5s.


You're testing raptors, right?  If the performance drop is that drastic and 
consistent over different workloads, we'll have to disable NCQ for raptors. 
I'm not sure about other drives.  Care to perform tests over more popular 
ones (e.g. recent seagates or 7200rpm wds)?


--
tejun



You are correct, it definitely depends upon the workload, and a lot of the 
benchmarks do use Windows. However, I will have to check later; I recall 
finding a few that did test under Linux.


For a plain untar with lots of small files, the benefit of disabling NCQ is 
not as big as with sequential reads/writes of big files; however, there is 
still an improvement:


Raid5 Quad 150 Raptor (NCQ)
# time sh -c 'tar xf linux-2.6.20.tar; sync'

real    0m21.721s
user    0m0.174s
sys     0m1.541s

Raid5 Quad 150 Raptor (NO NCQ)
# time sh -c 'tar xf linux-2.6.20.tar; sync'

real    0m16.761s
user    0m0.195s
sys     0m1.361s

Raid5 Six 400GB Sata Drives (NO NCQ)
# time sh -c 'tar xf linux-2.6.20.tar; sync'

real    0m54.844s
user    0m0.189s
sys     0m1.432s

Raid5 Six 400GB Sata Drives (NCQ)
# time sh -c 'tar xf linux-2.6.20.tar; sync'

real    1m7.322s
user    0m0.194s
sys     0m1.492s

Justin.


Re: Why is NCQ enabled by default by libata? (2.6.20)

2007-03-27 Thread Mark Lord

Jeff Garzik wrote:


In some cases, NCQ firmware may be broken.  There is a Maxtor firmware 
id, and some Hitachi ids that people are leaning towards recommending be 
added to the libata 'horkage' list.


Western Digital Raptor drives (the 10K rpm things) are also somewhat
borked in NCQ mode, depending on the application.

Their firmware turns off all drive readahead during NCQ.
This makes them very good for an email/news server application,
but also causes them to suck for regular desktop applications.

Because of this, they use special software drivers under MSwin
which detect large sequential accesses, and avoid NCQ during such times.
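
(A quick way to see the effect, as a rough sketch - sdb stands in for the
Raptor, and hdparm's buffered sequential-read timing is only a crude probe:

# echo 31 > /sys/block/sdb/device/queue_depth
# hdparm -t /dev/sdb
# echo 1 > /sys/block/sdb/device/queue_depth
# hdparm -t /dev/sdb

On a drive that drops readahead under NCQ, the first timing may come out
visibly lower than the second.)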

Cheers

-ml


Re: Why is NCQ enabled by default by libata? (2.6.20)

2007-03-27 Thread linux
Here's some more data.

6x ST3400832AS (Seagate 7200.8) 400 GB drives.
3x SiI3232 PCIe SATA controllers
2.2 GHz Athlon 64, 1024k cache (3700+), 2 GB RAM
Linux 2.6.20.4, 64-bit kernel

Tested able to sustain reads at 60 MB/sec/drive simultaneously.
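
(For reference, a rough sketch of how a per-drive figure like that can be
measured - assuming the six drives show up as sdb through sdg:

# for d in sdb sdc sdd sde sdf sdg; do dd if=/dev/$d of=/dev/null bs=1M count=1024 iflag=direct & done; wait

Each dd prints its own MB/s rate on completion.)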

RAID-10 is across 6 drives, first part of drive.
RAID-5 most of the drive, so depending on allocation policies,
may be a bit slower.

The test sequence actually was:
1) raid5ncq
2) raid5noncq
3) raid10noncq
4) raid10ncq
5) raid5ncq
6) raid5noncq
but I rearranged things to make it easier to compare.

Note that NCQ makes writes faster (oh... I have write caching turned off;
perhaps I should turn it on and do another round), but no-NCQ seems to have
a read advantage.  The @#$%ing bonnie++ overflows and won't print file
read times; I haven't bothered to fix that yet.

NCQ seems to have a pretty significant effect on the file operations,
especially deletes.

Update: added
7) wcache5noncq - RAID 5 with no NCQ but write cache enabled
8) wcache5ncq - RAID 5 with NCQ and write cache enabled


RAID=5, NCQ
Version  1.03   --Sequential Output-- --Sequential Input- --Random-
-Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
raid5ncq      7952M 31688  53  34760 10 25327   4 57908  86 167680 13 292.2   0
raid5ncq      7952M 30357  50  34154 10 24876   4 59692  89 165663 13 285.6   0
raid5noncq    7952M 29015  48  31627  9 24263   4 61154  91 185389 14 286.6   0
raid5noncq    7952M 28447  47  31163  9 23306   4 60456  89 198624 15 293.4   0
wcache5ncq    7952M 32433  54  35413 10 26139   4 59898  89 168032 13 303.6   0
wcache5noncq  7952M 31768  53  34597 10 25849   4 61049  90 193351 14 304.8   0
raid10ncq     7952M 54043  89 110804 32 48859   9 58809  87 142140 12 363.8   0
raid10noncq   7952M 48912  81  68428 21 38906   7 57824  87 146030 12 358.2   0

--Sequential Create-- Random Create
-Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
files:max:min/sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
16:10:16/64  1351  25 + +++   941   3  2887  42 31526  96   382   1
16:10:16/64  1400  18 + +++   386   1  4959  69 32118  95   570   2
16:10:16/64   636   8 + +++   176   0  1649  23 + +++   245   1
16:10:16/64   715  12 + +++   164   0   156   2 11023  32  2161   8
16:10:16/64  1291  26 + +++  2778  10  2424  33 31127  93   483   2
16:10:16/64  1236  26 + +++   840   3  2519  37 30366  91   445   2
16:10:16/64  1714  37 + +++  1652   6   789  11  4700  14 12264  48
16:10:16/64   634  11 + +++  1035   3   338   4 + +++  1349   5

raid5ncq,7952M,31688,53,34760,10,25327,4,57908,86,167680,13,292.2,0,16:10:16/64,1351,25,+,+++,941,3,2887,42,31526,96,382,1
raid5ncq,7952M,30357,50,34154,10,24876,4,59692,89,165663,13,285.6,0,16:10:16/64,1400,18,+,+++,386,1,4959,69,32118,95,570,2
raid5noncq,7952M,29015,48,31627,9,24263,4,61154,91,185389,14,286.6,0,16:10:16/64,636,8,+,+++,176,0,1649,23,+,+++,245,1
raid5noncq,7952M,28447,47,31163,9,23306,4,60456,89,198624,15,293.4,0,16:10:16/64,715,12,+,+++,164,0,156,2,11023,32,2161,8
wcache5ncq,7952M,32433,54,35413,10,26139,4,59898,89,168032,13,303.6,0,16:10:16/64,1291,26,+,+++,2778,10,2424,33,31127,93,483,2
wcache5noncq,7952M,31768,53,34597,10,25849,4,61049,90,193351,14,304.8,0,16:10:16/64,1236,26,+,+++,840,3,2519,37,30366,91,445,2
raid10ncq,7952M,54043,89,110804,32,48859,9,58809,87,142140,12,363.8,0,16:10:16/64,1714,37,+,+++,1652,6,789,11,4700,14,12264,48
raid10noncq,7952M,48912,81,68428,21,38906,7,57824,87,146030,12,358.2,0,16:10:16/64,634,11,+,+++,1035,3,338,4,+,+++,1349,5


Re: Why is NCQ enabled by default by libata? (2.6.20)

2007-03-27 Thread Justin Piszcz

On Tue, 27 Mar 2007, [EMAIL PROTECTED] wrote:


Here's some more data.

6x ST3400832AS (Seagate 7200.8) 400 GB drives.
3x SiI3232 PCIe SATA controllers
2.2 GHz Athlon 64, 1024k cache (3700+), 2 GB RAM
Linux 2.6.20.4, 64-bit kernel

Tested able to sustain reads at 60 MB/sec/drive simultaneously.

RAID-10 is across 6 drives, first part of drive.
RAID-5 most of the drive, so depending on allocation policies,
may be a bit slower.

The test sequence actually was:
1) raid5ncq
2) raid5noncq
3) raid10noncq
4) raid10ncq
5) raid5ncq
6) raid5noncq
but I rearranged things to make it easier to compare.

Note that NCQ makes writes faster (oh... I have write caching turned off;
perhaps I should turn it on and do another round), but no-NCQ seems to have
a read advantage.  The @#$%ing bonnie++ overflows and won't print file
read times; I haven't bothered to fix that yet.

NCQ seems to have a pretty significant effect on the file operations,
especially deletes.

Update: added
7) wcache5noncq - RAID 5 with no NCQ but write cache enabled
8) wcache5ncq - RAID 5 with NCQ and write cache enabled


RAID=5, NCQ
Version  1.03   --Sequential Output-- --Sequential Input- --Random-
   -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
raid5ncq      7952M 31688  53  34760 10 25327   4 57908  86 167680 13 292.2   0
raid5ncq      7952M 30357  50  34154 10 24876   4 59692  89 165663 13 285.6   0
raid5noncq    7952M 29015  48  31627  9 24263   4 61154  91 185389 14 286.6   0
raid5noncq    7952M 28447  47  31163  9 23306   4 60456  89 198624 15 293.4   0
wcache5ncq    7952M 32433  54  35413 10 26139   4 59898  89 168032 13 303.6   0
wcache5noncq  7952M 31768  53  34597 10 25849   4 61049  90 193351 14 304.8   0
raid10ncq     7952M 54043  89 110804 32 48859   9 58809  87 142140 12 363.8   0
raid10noncq   7952M 48912  81  68428 21 38906   7 57824  87 146030 12 358.2   0

   --Sequential Create-- Random Create
   -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
files:max:min/sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
   16:10:16/64  1351  25 + +++   941   3  2887  42 31526  96   382   1
   16:10:16/64  1400  18 + +++   386   1  4959  69 32118  95   570   2
   16:10:16/64   636   8 + +++   176   0  1649  23 + +++   245   1
   16:10:16/64   715  12 + +++   164   0   156   2 11023  32  2161   8
   16:10:16/64  1291  26 + +++  2778  10  2424  33 31127  93   483   2
   16:10:16/64  1236  26 + +++   840   3  2519  37 30366  91   445   2
   16:10:16/64  1714  37 + +++  1652   6   789  11  4700  14 12264  48
   16:10:16/64   634  11 + +++  1035   3   338   4 + +++  1349   5

raid5ncq,7952M,31688,53,34760,10,25327,4,57908,86,167680,13,292.2,0,16:10:16/64,1351,25,+,+++,941,3,2887,42,31526,96,382,1
raid5ncq,7952M,30357,50,34154,10,24876,4,59692,89,165663,13,285.6,0,16:10:16/64,1400,18,+,+++,386,1,4959,69,32118,95,570,2
raid5noncq,7952M,29015,48,31627,9,24263,4,61154,91,185389,14,286.6,0,16:10:16/64,636,8,+,+++,176,0,1649,23,+,+++,245,1
raid5noncq,7952M,28447,47,31163,9,23306,4,60456,89,198624,15,293.4,0,16:10:16/64,715,12,+,+++,164,0,156,2,11023,32,2161,8
wcache5ncq,7952M,32433,54,35413,10,26139,4,59898,89,168032,13,303.6,0,16:10:16/64,1291,26,+,+++,2778,10,2424,33,31127,93,483,2
wcache5noncq,7952M,31768,53,34597,10,25849,4,61049,90,193351,14,304.8,0,16:10:16/64,1236,26,+,+++,840,3,2519,37,30366,91,445,2
raid10ncq,7952M,54043,89,110804,32,48859,9,58809,87,142140,12,363.8,0,16:10:16/64,1714,37,+,+++,1652,6,789,11,4700,14,12264,48
raid10noncq,7952M,48912,81,68428,21,38906,7,57824,87,146030,12,358.2,0,16:10:16/64,634,11,+,+++,1035,3,338,4,+,+++,1349,5



I would try with write-caching enabled.
Also, the RAID5/RAID10 you mention seems like each volume is on part of
the platter, a strange setup you got there :)

Also, you are toggling NCQ on/off via the /sys/block queue_depth attribute, 
e.g. setting it to 1 (off) and 31 (on) during testing, yes?
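
(For concreteness, a minimal sketch of that toggle - sdb stands in for the
actual device name:

# echo 1 > /sys/block/sdb/device/queue_depth
# echo 31 > /sys/block/sdb/device/queue_depth
# cat /sys/block/sdb/device/queue_depth

A queue_depth of 1 effectively disables NCQ; 31 allows the full queue.)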


Justin.


Re: Why is NCQ enabled by default by libata? (2.6.20)

2007-03-27 Thread linux
From [EMAIL PROTECTED] Tue Mar 27 16:25:58 2007
Date: Tue, 27 Mar 2007 12:25:52 -0400 (EDT)
From: Justin Piszcz [EMAIL PROTECTED]
X-X-Sender: [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
cc: [EMAIL PROTECTED], [EMAIL PROTECTED], linux-ide@vger.kernel.org, 
linux-kernel@vger.kernel.org
Subject: Re: Why is NCQ enabled by default by libata? (2.6.20)
In-Reply-To: [EMAIL PROTECTED]
References: [EMAIL PROTECTED]
MIME-Version: 1.0
Content-Type: TEXT/PLAIN; charset=US-ASCII; format=flowed

On Tue, 27 Mar 2007, [EMAIL PROTECTED] wrote:

 Here's some more data.

 6x ST3400832AS (Seagate 7200.8) 400 GB drives.
 3x SiI3232 PCIe SATA controllers
 2.2 GHz Athlon 64, 1024k cache (3700+), 2 GB RAM
 Linux 2.6.20.4, 64-bit kernel

 Tested able to sustain reads at 60 MB/sec/drive simultaneously.

 RAID-10 is across 6 drives, first part of drive.
 RAID-5 most of the drive, so depending on allocation policies,
 may be a bit slower.

 The test sequence actually was:
 1) raid5ncq
 2) raid5noncq
 3) raid10noncq
 4) raid10ncq
 5) raid5ncq
 6) raid5noncq
 but I rearranged things to make it easier to compare.

 Note that NCQ makes writes faster (oh... I have write caching turned off;
 perhaps I should turn it on and do another round), but no-NCQ seems to have
 a read advantage.  The @#$%ing bonnie++ overflows and won't print file
 read times; I haven't bothered to fix that yet.

 NCQ seems to have a pretty significant effect on the file operations,
 especially deletes.

 Update: added
 7) wcache5noncq - RAID 5 with no NCQ but write cache enabled
 8) wcache5ncq - RAID 5 with NCQ and write cache enabled


 RAID=5, NCQ
 Version  1.03   --Sequential Output-- --Sequential Input- --Random-
    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
 Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
 raid5ncq      7952M 31688  53  34760 10 25327   4 57908  86 167680 13 292.2   0
 raid5ncq      7952M 30357  50  34154 10 24876   4 59692  89 165663 13 285.6   0
 raid5noncq    7952M 29015  48  31627  9 24263   4 61154  91 185389 14 286.6   0
 raid5noncq    7952M 28447  47  31163  9 23306   4 60456  89 198624 15 293.4   0
 wcache5ncq    7952M 32433  54  35413 10 26139   4 59898  89 168032 13 303.6   0
 wcache5noncq  7952M 31768  53  34597 10 25849   4 61049  90 193351 14 304.8   0
 raid10ncq     7952M 54043  89 110804 32 48859   9 58809  87 142140 12 363.8   0
 raid10noncq   7952M 48912  81  68428 21 38906   7 57824  87 146030 12 358.2   0

--Sequential Create-- Random Create
-Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
 files:max:min/sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
16:10:16/64  1351  25 + +++   941   3  2887  42 31526  96   382   1
16:10:16/64  1400  18 + +++   386   1  4959  69 32118  95   570   2
16:10:16/64   636   8 + +++   176   0  1649  23 + +++   245   1
16:10:16/64   715  12 + +++   164   0   156   2 11023  32  2161   8
16:10:16/64  1291  26 + +++  2778  10  2424  33 31127  93   483   2
16:10:16/64  1236  26 + +++   840   3  2519  37 30366  91   445   2
16:10:16/64  1714  37 + +++  1652   6   789  11  4700  14 12264  48
16:10:16/64   634  11 + +++  1035   3   338   4 + +++  1349   5

 raid5ncq,7952M,31688,53,34760,10,25327,4,57908,86,167680,13,292.2,0,16:10:16/64,1351,25,+,+++,941,3,2887,42,31526,96,382,1
 raid5ncq,7952M,30357,50,34154,10,24876,4,59692,89,165663,13,285.6,0,16:10:16/64,1400,18,+,+++,386,1,4959,69,32118,95,570,2
 raid5noncq,7952M,29015,48,31627,9,24263,4,61154,91,185389,14,286.6,0,16:10:16/64,636,8,+,+++,176,0,1649,23,+,+++,245,1
 raid5noncq,7952M,28447,47,31163,9,23306,4,60456,89,198624,15,293.4,0,16:10:16/64,715,12,+,+++,164,0,156,2,11023,32,2161,8
 wcache5ncq,7952M,32433,54,35413,10,26139,4,59898,89,168032,13,303.6,0,16:10:16/64,1291,26,+,+++,2778,10,2424,33,31127,93,483,2
 wcache5noncq,7952M,31768,53,34597,10,25849,4,61049,90,193351,14,304.8,0,16:10:16/64,1236,26,+,+++,840,3,2519,37,30366,91,445,2
 raid10ncq,7952M,54043,89,110804,32,48859,9,58809,87,142140,12,363.8,0,16:10:16/64,1714,37,+,+++,1652,6,789,11,4700,14,12264,48
 raid10noncq,7952M,48912,81,68428,21,38906,7,57824,87,146030,12,358.2,0,16:10:16/64,634,11,+,+++,1035,3,338,4,+,+++,1349,5


 I would try with write-caching enabled.

I did.  See the wcache5 lines?

 Also, the RAID5/RAID10 you mention seems like each volume is on part of
 the platter, a strange setup you got there :)

I don't quite understand.  Each volume is on part of the platter -
yes, it's called partitioning, and it's pretty common.

Basically, the first 50G of each drive is assembled with RAID-10 to make
a 150G system file system, where I appreciate the speed and greater
redundancy of RAID-10, and the last 250G are combined with RAID-5 to 


Re: Why is NCQ enabled by default by libata? (2.6.20)

2007-03-27 Thread linux
 I meant you do not allocate the entire disk per raidset, which may alter 
 performance numbers.

No, that would be silly.  It does lower the average performance of the
large RAID-5 area, but I don't know how ext3fs is allocating the blocks
anyway, so

 04:00.0 RAID bus controller: Silicon Image, Inc. SiI 3132 Serial ATA Raid II 
 Controller (rev 01)
 I assume you mean 3132 right?

Yes; did I mistype?

02:00.0 Mass storage controller: Silicon Image, Inc. SiI 3132 Serial ATA Raid 
II Controller (rev 01)
03:00.0 Mass storage controller: Silicon Image, Inc. SiI 3132 Serial ATA Raid 
II Controller (rev 01)
04:00.0 Mass storage controller: Silicon Image, Inc. SiI 3132 Serial ATA Raid 
II Controller (rev 01)

 I also have 6 seagates; I'd need to run one 
 of these tests on them as well. Also, you took the micro jumper off the 
 Seagate 400s in the back as well, right?

Um... no, I don't remember doing anything like that.  What micro jumper?
It's been a while, but I just double-checked the drive manual and
it doesn't mention any jumpers.


Re: Why is NCQ enabled by default by libata? (2.6.20)

2007-03-27 Thread Justin Piszcz



On Tue, 27 Mar 2007, [EMAIL PROTECTED] wrote:


I meant you do not allocate the entire disk per raidset, which may alter
performance numbers.


No, that would be silly.  It does lower the average performance of the
large RAID-5 area, but I don't know how ext3fs is allocating the blocks
anyway, so


04:00.0 RAID bus controller: Silicon Image, Inc. SiI 3132 Serial ATA Raid II 
Controller (rev 01)
I assume you mean 3132 right?


Yes; did I mistype?

02:00.0 Mass storage controller: Silicon Image, Inc. SiI 3132 Serial ATA Raid 
II Controller (rev 01)
03:00.0 Mass storage controller: Silicon Image, Inc. SiI 3132 Serial ATA Raid 
II Controller (rev 01)
04:00.0 Mass storage controller: Silicon Image, Inc. SiI 3132 Serial ATA Raid 
II Controller (rev 01)


I also have 6 seagates; I'd need to run one
of these tests on them as well. Also, you took the micro jumper off the
Seagate 400s in the back as well, right?


Um... no, I don't remember doing anything like that.  What micro jumper?
It's been a while, but I just double-checked the drive manual and
it doesn't mention any jumpers.



The 7200.8's don't use a jumper except for factory use - the 7200.9s and 
10s I believe have a jumper in the back to enable/disable 3.0Gbps 
operation.  Your model # corresponds with a 7200.8, so never mind about 
the jumper.
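
(A way to confirm the negotiated link rate without touching jumpers at all,
as a sketch - libata logs the speed when it probes the port:

# dmesg | grep -i 'sata link'

This should show lines like 'ata1: SATA link up 1.5 Gbps (SStatus ...
SControl ...)'.)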


Justin.


Re: Why is NCQ enabled by default by libata? (2.6.20)

2007-03-27 Thread Mark Rustad

On Mar 27, 2007, at 12:59 AM, Jeff Garzik wrote:


Justin Piszcz wrote:
Without NCQ, performance is MUCH better on almost every operation,  
with the exception of 2-3 items.


Variables to take into account:

* the drive (NCQ performance wildly varies)
* the IO scheduler
* the filesystem (if not measuring direct to blkdev)
* application workload (or in your case, benchmark tool)
* in particular, the threaded-ness of the apps

For the overwhelming majority of combinations, NCQ should not /hurt/ 
performance.


For the majority of combinations, NCQ helps (though it may not be  
often that you use more than 4-8 tags).


In some cases, NCQ firmware may be broken.  There is a Maxtor  
firmware id, and some Hitachi ids that people are leaning towards  
recommending be added to the libata 'horkage' list.


Some other variables that we have noticed: some drive firmware goes  
into stupid mode when write cache is turned off, meaning that it does  
not reorder any queued operations. Of course, if you really care about  
your data, you don't really want to turn write cache on.


Also the controller used can have unfortunate interactions. For  
example the Adaptec SAS controller firmware will never issue more  
than two queued commands to a SATA drive (even though the firmware  
will happily accept more from the driver), so even if an attached  
drive is capable of reordering queued commands, its performance is  
seriously crippled by not getting more commands queued up. In  
addition, some drive firmware seems to try to bunch up queued command  
completions which interacts very badly with a controller that queues  
up so few commands. In this case turning NCQ off performs better  
because the drive knows it can't hold off completions to reduce  
interrupt load on the host – a good idea gone totally wrong when used  
with the Adaptec controller.


Today SATA NCQ seems to be an area where few combinations work well.  
It seems so bad to me that a whitelist might be better than a  
blacklist. That is probably overstating it, but NCQ performance is  
certainly a big problem.


--
Mark Rustad, [EMAIL PROTECTED]




Re: Why is NCQ enabled by default by libata? (2.6.20)

2007-03-27 Thread Jeff Garzik

Mark Rustad wrote:
reorder any queued operations. Of course if you really care about your 
data, you don't really want to turn write cache on.


That's a gross exaggeration.  FLUSH CACHE and FUA both ensure data 
integrity as well.


Turning write cache off has always been a performance-killing action on ATA.
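
(As a concrete aside, a minimal sketch of querying and toggling the write
cache, with /dev/sda standing in for the drive:

# hdparm -W /dev/sda
# hdparm -W1 /dev/sda
# hdparm -W0 /dev/sda

-W alone reports the current write-cache flag; -W1 enables the cache,
relying on FLUSH CACHE/FUA for integrity as above; -W0 disables it.)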


Also the controller used can have unfortunate interactions. For example 
the Adaptec SAS controller firmware will never issue more than two 
queued commands to a SATA drive (even though the firmware will happily 
accept more from the driver), so even if an attached drive is capable of 
reordering queued commands, its performance is seriously crippled by not 
getting more commands queued up. In addition, some drive firmware seems 
to try to bunch up queued command completions which interacts very badly 
with a controller that queues up so few commands. In this case turning 
NCQ off performs better because the drive knows it can't hold off 
completions to reduce interrupt load on the host – a good idea gone 
totally wrong when used with the Adaptec controller.


All of that can be fixed with an Adaptec firmware upgrade, so not our 
problem here, and not a reason to disable NCQ in libata core.



Today SATA NCQ seems to be an area where few combinations work well. It 
seems so bad to me that a whitelist might be better than a blacklist. 
That is probably overstating it, but NCQ performance is certainly a big 
problem.


Real world testing disagrees with you.  NCQ has been enabled for a while 
now.  We would have screaming hordes of users if the majority of 
configurations were problematic.


Jeff




Re: Why is NCQ enabled by default by libata? (2.6.20)

2007-03-27 Thread Mark Rustad

On Mar 27, 2007, at 1:38 PM, Jeff Garzik wrote:


Mark Rustad wrote:
reorder any queued operations. Of course if you really care about  
your data, you don't really want to turn write cache on.


That's a gross exaggeration.  FLUSH CACHE and FUA both ensure data  
integrity as well.


Turning write cache off has always been a performance-killing  
action on ATA.


Perhaps. Folks I work with would disagree with that, but I am not  
enough of a storage expert to judge; my statement mirrors the  
judgement of people who know more about this than I do.


Also the controller used can have unfortunate interactions. For  
example the Adaptec SAS controller firmware will never issue more  
than two queued commands to a SATA drive (even though the firmware  
will happily accept more from the driver), so even if an attached  
drive is capable of reordering queued commands, its performance is  
seriously crippled by not getting more commands queued up. In  
addition, some drive firmware seems to try to bunch up queued  
command completions which interacts very badly with a controller  
that queues up so few commands. In this case turning NCQ off  
performs better because the drive knows it can't hold off  
completions to reduce interrupt load on the host – a good idea  
gone totally wrong when used with the Adaptec controller.


All of that can be fixed with an Adaptec firmware upgrade, so not  
our problem here, and not a reason to disable NCQ in libata core.


It theoretically could be, but we are using the latest Adaptec  
firmware. Until there exists firmware that fixes it, it remains an  
issue. We worked with Adaptec to isolate this issue, but no  
resolution has been forthcoming from them. I agree that this does not  
mean that NCQ should be disabled in libata core, but some combination  
of controller/drive/firmware blacklist may need to be managed, as  
distasteful as that is.


Today SATA NCQ seems to be an area where few combinations work  
well. It seems so bad to me that a whitelist might be better than  
a blacklist. That is probably overstating it, but NCQ performance  
is certainly a big problem.


Real world testing disagrees with you.  NCQ has been enabled for a  
while now.  We would have screaming hordes of users if the majority  
of configurations were problematic.


I didn't say that it is a majority or that it doesn't work; it just  
often doesn't perform. If it didn't work there would be lots of  
howling for sure. I'm also not saying that it is a libata problem. It  
seems mostly to be controller and drive firmware issues - and the odd  
fan issue (if you saw the thread: [BUG 2.6.21-rc3-git9] SATA NCQ  
failure with Samsung HD401LJ).


I guess I am mainly lamenting the current state of SATA/NCQ devices  
and sharing what little I have picked up about it - which is that I  
want SAS disks in my next system!


--
Mark Rustad, [EMAIL PROTECTED]




Re: Why is NCQ enabled by default by libata? (2.6.20)

2007-03-26 Thread Jeff Garzik

Justin Piszcz wrote:
Without NCQ, performance is MUCH better on almost every operation, with 
the exception of 2-3 items.


Variables to take into account:

* the drive (NCQ performance wildly varies)
* the IO scheduler
* the filesystem (if not measuring direct to blkdev)
* application workload (or in your case, benchmark tool)
* in particular, the threaded-ness of the apps
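
(Two of those variables can be inspected per-drive from sysfs - a sketch,
with sda as a stand-in:

# cat /sys/block/sda/queue/scheduler
# cat /sys/block/sda/device/queue_depth

The first shows the available I/O schedulers with the active one in
brackets; the second shows the queue depth, where 1 means NCQ is
effectively off.)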

For the overwhelming majority of combinations, NCQ should not /hurt/ 
performance.


For the majority of combinations, NCQ helps (though it may not be often 
that you use more than 4-8 tags).


In some cases, NCQ firmware may be broken.  There is a Maxtor firmware 
id, and some Hitachi ids that people are leaning towards recommending be 
added to the libata 'horkage' list.


Jeff


Re: Why is NCQ enabled by default by libata? (2.6.20)

2007-03-24 Thread Robert Hancock

Justin Piszcz wrote:
Without NCQ, performance is MUCH better on almost every operation, with 
the exception of 2-3 items.


/usr/sbin/bonnie++ -d /x/bonnie -s 7952 -m p34 -n 16:10:16:64 > run.txt;

# Average of 3 runs with NCQ on for Quad Raptor ADFD 150 RAID 5 Software RAID:
p34-ncq-on,7952M,43916.3,96.6667,151943,28.6667,75794.3,18.6667,48991.3,99,181687,24,558.033,0.33,16:10:16/64,867.667,9,29972.7,98.,2801.67,16,890.667,9.3,27743,94.,2115.33,15.6667

# Average of 3 runs with NCQ off for Quad Raptor ADFD 150 RAID 5 Software RAID:
p34-ncq-off,7952M,42470,97.,200409,36.,90240.3,22.6667,48656,99,198853,27,546.467,0,16:10:16/64,972.333,10,21833,72.,3697,21,995,10.6667,27901.7,95.6667,2681,20.6667



http://home.comcast.net/~jpiszcz/ncq_vs_noncq/results.html

In general, for networking, etc., the kernel chooses 'optimized' 
defaults; therefore, I was curious why NCQ is enabled by default.


Normally NCQ is faster, though it depends on the drive firmware. It's 
also possible that software RAID is a case where there are negative 
interactions.


--
Robert Hancock  Saskatoon, SK, Canada
To email, remove "nospam" from [EMAIL PROTECTED]
Home Page: http://www.roberthancock.com/



Re: Why is NCQ enabled by default by libata? (2.6.20)

2007-03-24 Thread Alan Cox
On Sat, 24 Mar 2007 12:38:02 -0400 (EDT)
Justin Piszcz <[EMAIL PROTECTED]> wrote:

> Without NCQ, performance is MUCH better on almost every operation, with 
> the exception of 2-3 items.

It depends on the drive. Generally NCQ is better, but some drive firmware
isn't too bright, and there are probably still cases where we get bad
interactions in the kernel code that want tuning too.


Why is NCQ enabled by default by libata? (2.6.20)

2007-03-24 Thread Justin Piszcz
Without NCQ, performance is MUCH better on almost every operation, with 
the exception of 2-3 items.


/usr/sbin/bonnie++ -d /x/bonnie -s 7952 -m p34 -n 16:10:16:64 > run.txt;

# Average of 3 runs with NCQ on for Quad Raptor ADFD 150 RAID 5 Software RAID:
p34-ncq-on,7952M,43916.3,96.6667,151943,28.6667,75794.3,18.6667,48991.3,99,181687,24,558.033,0.33,16:10:16/64,867.667,9,29972.7,98.,2801.67,16,890.667,9.3,27743,94.,2115.33,15.6667
# Average of 3 runs with NCQ off for Quad Raptor ADFD 150 RAID 5 Software RAID:
p34-ncq-off,7952M,42470,97.,200409,36.,90240.3,22.6667,48656,99,198853,27,546.467,0,16:10:16/64,972.333,10,21833,72.,3697,21,995,10.6667,27901.7,95.6667,2681,20.6667

http://home.comcast.net/~jpiszcz/ncq_vs_noncq/results.html

In general, for networking, etc., the kernel chooses 'optimized' defaults; 
therefore, I was curious why NCQ is enabled by default.


Justin.