Re: NCQ general question
On Wed, Mar 08, 2006 at 12:17:51PM -0500, Jeff Garzik wrote:
> Louis-David Mitterrand wrote:
>> Do you plan on updating your AHCI NCQ patch found in
>> http://www.kernel.org/pub/linux/kernel/people/jgarzik/libata/archive/
>> It no longer applies cleanly to the latest 2.6.15.x kernel.
>
> No, but Jens Axboe and Tejun Heo will have a better version.

Is it available somewhere?
-
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at http://vger.kernel.org/majordomo-info.html
Re: NCQ general question
On Sun, Mar 05, 2006 at 02:29:15AM -0500, Jeff Garzik wrote:
> Raz Ben-Jehuda(caro) wrote:
>> Is NCQ supported when setting the controller to JBOD instead of
>> using HW raid?
>
> 1) The two have nothing to do with each other
> 2) It sounds like you haven't yet read
>    http://linux-ata.org/faq-sata-raid.html

Hello,

Do you plan on updating your AHCI NCQ patch found in
http://www.kernel.org/pub/linux/kernel/people/jgarzik/libata/archive/
It no longer applies cleanly to the latest 2.6.15.x kernel.

Thanks,
Re: NCQ general question
Louis-David Mitterrand wrote:
> On Sun, Mar 05, 2006 at 02:29:15AM -0500, Jeff Garzik wrote:
>> Raz Ben-Jehuda(caro) wrote:
>>> Is NCQ supported when setting the controller to JBOD instead of
>>> using HW raid?
>>
>> 1) The two have nothing to do with each other
>> 2) It sounds like you haven't yet read
>>    http://linux-ata.org/faq-sata-raid.html
>
> Hello,
>
> Do you plan on updating your AHCI NCQ patch found in
> http://www.kernel.org/pub/linux/kernel/people/jgarzik/libata/archive/
> It no longer applies cleanly to the latest 2.6.15.x kernel.

No, but Jens Axboe and Tejun Heo will have a better version.

	Jeff
Re: NCQ general question
Steve Byan wrote:
> On Mar 3, 2006, at 5:19 PM, Jeff Garzik wrote:
>> Steve Byan wrote:
>>> it. It works OK for reads. TCQ was really invented as a way to
>>> allow CD-ROM drives to play nice on the same ATA bus as disks.
>>
>> Disagree, you are probably thinking about bus disconnect associated
>> with the overlapped command set?
>
> Yep, I had the two concepts confused. Thanks for the clarification.
> Isn't the same bus disconnect used for TCQ, though?

Yes. TCQ still has nothing to do with ATAPI though, which was the main
point of disagreement :) Much to my chagrin, too, since ATAPI could use
some queueing...

>> Data integrity -and- performance. Performance increases for all the
>> standard reasons that an asynchronous pipeline increases performance
>> over a synchronous one. The write cache means that requests on the
>> device can be processed asynchronously, but without NCQ there is
>> still a synchronous bottleneck: the device-controller pipe.
>
> True, but I think that is fairly small compared to the
> no-write-cache-and-no-queuing case. Write-caching is the major win;
> optimizing the data transfer is only a second-order effect.

Measurements on NCQ in the field show a distinct performance
improvement... 30% has been measured on Linux. Nothing to sneeze at.

>>> correctly. ATA disk write caching breaks this guarantee. To restore
>>> filesystem integrity on a careful-write filesystem like most unix
>>> filesystems, you have to disable write-caching in the drive. This
>>
>> False, as Linux has proven: barriers can be implemented with
>> flush-cache commands. Disabling write cache is not your only choice,
>> and using flush-cache gives you better performance than flat-out
>> disabling the write cache.
>
> Yes, you're correct; I neglected to include that option. It's not as
> good as real FUA because it flushes the entire cache, not just the
> metadata which needs to be written through to the media.

Agreed.

	Jeff
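Jeff's point that write barriers can be built from flush-cache commands
can be made concrete with a toy model. The sketch below is illustrative
Python, not libata code; `DriveSim`, the LBAs, and the journal/metadata
labels are all invented for the example. It enumerates every order in
which a drive with a volatile write cache could destage writes when
power fails, and shows that a cache flush between a journal commit and
the dependent metadata write rules out every inconsistent outcome:

```python
from itertools import permutations

class DriveSim:
    """Toy ATA disk: writes are acknowledged into a volatile cache;
    flush() models the FLUSH CACHE command, forcing everything dirty
    onto the media."""
    def __init__(self):
        self.media = {}   # lba -> data durably on the platter
        self.cache = []   # acknowledged but volatile (lba, data) pairs

    def write(self, lba, data):
        self.cache.append((lba, data))

    def flush(self):
        self.media.update(dict(self.cache))
        self.cache = []

def crash_states(drive):
    """Every media state reachable if power fails while the drive
    destages its cache in some arbitrary order."""
    for perm in permutations(drive.cache):
        for cut in range(len(perm) + 1):
            state = dict(drive.media)
            state.update(dict(perm[:cut]))
            yield state

def count_violations(use_barrier):
    d = DriveSim()
    d.write(100, "journal-commit")   # must become durable first
    if use_barrier:
        d.flush()                    # barrier = FLUSH CACHE
    d.write(200, "metadata")         # only valid if the commit is durable
    # Inconsistent: metadata on media without its commit record.
    return sum(1 for s in crash_states(d)
               if 200 in s and 100 not in s)

print(count_violations(False), count_violations(True))  # -> 1 0
```

Without the barrier, one reachable crash state contains the metadata
but not its commit record; with the flush in between, no such state
exists, which is exactly the ordering guarantee a careful-write
filesystem needs.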
Re: NCQ general question
On Mar 3, 2006, at 5:19 PM, Jeff Garzik wrote:
> Steve Byan wrote:
>> it. It works OK for reads. TCQ was really invented as a way to allow
>> CD-ROM drives to play nice on the same ATA bus as disks.
>
> Disagree, you are probably thinking about bus disconnect associated
> with the overlapped command set?

Yep, I had the two concepts confused. Thanks for the clarification.
Isn't the same bus disconnect used for TCQ, though?

> Data integrity -and- performance. Performance increases for all the
> standard reasons that an asynchronous pipeline increases performance
> over a synchronous one. The write cache means that requests on the
> device can be processed asynchronously, but without NCQ there is
> still a synchronous bottleneck: the device-controller pipe.

True, but I think that is fairly small compared to the
no-write-cache-and-no-queuing case. Write-caching is the major win;
optimizing the data transfer is only a second-order effect.

>> correctly. ATA disk write caching breaks this guarantee. To restore
>> filesystem integrity on a careful-write filesystem like most unix
>> filesystems, you have to disable write-caching in the drive. This
>
> False, as Linux has proven: barriers can be implemented with
> flush-cache commands. Disabling write cache is not your only choice,
> and using flush-cache gives you better performance than flat-out
> disabling the write cache.

Yes, you're correct; I neglected to include that option. It's not as
good as real FUA because it flushes the entire cache, not just the
metadata which needs to be written through to the media.

Regards,
-Steve
--
Steve Byan [EMAIL PROTECTED]
Software Architect
Egenera, Inc.
165 Forest Street
Marlboro, MA 01752
(508) 858-3125
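Steve's closing point - that a full cache flush pushes out every dirty
buffer while FUA forces only the one command through - can be sketched
with a small counting model. Everything below is hypothetical
bookkeeping invented for the illustration, not a real drive interface:

```python
class CacheModel:
    """Counts media writes forced at a durability point, contrasting a
    full-cache flush with per-command FUA (forced unit access)."""
    def __init__(self):
        self.cached = 0   # dirty buffered writes still in the cache
        self.forced = 0   # writes pushed to the media at the barrier

    def buffered_write(self):
        self.cached += 1

    def flush_barrier(self):
        # FLUSH CACHE: everything dirty must hit the media now.
        self.forced += self.cached
        self.cached = 0

    def fua_write(self):
        # FUA: only this one command bypasses the cache.
        self.forced += 1

# 1000 unrelated cached data writes, then one metadata update that
# must be durable:
flush_path, fua_path = CacheModel(), CacheModel()
for m in (flush_path, fua_path):
    for _ in range(1000):
        m.buffered_write()

flush_path.buffered_write()   # metadata, cached...
flush_path.flush_barrier()    # ...then the whole cache is forced out
fua_path.fua_write()          # metadata written through directly

print(flush_path.forced, fua_path.forced)  # -> 1001 1
```

The flush path forces all 1001 pending writes to the platter just to
make one of them durable, which is the overhead Steve is pointing at
relative to real FUA.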
Re: NCQ general question
On Mar 4, 2006, at 2:10 PM, Jeff Garzik wrote:
> Steve Byan wrote:
>>> Data integrity -and- performance. Performance increases for all the
>>> standard reasons that an asynchronous pipeline increases
>>> performance over a synchronous one. The write cache means that
>>> requests on the device can be processed asynchronously, but without
>>> NCQ there is still a synchronous bottleneck: the device-controller
>>> pipe.
>>
>> True, but I think that is fairly small compared to the
>> no-write-cache-and-no-queuing case. Write-caching is the major win;
>> optimizing the data transfer is only a second-order effect.
>
> Measurements on NCQ in the field show a distinct performance
> improvement... 30% has been measured on Linux. Nothing to sneeze at.

Wow! 30% is amazing. I'd be interested in knowing how the costs break
down; are these measurements published anywhere?

Regards,
-Steve
--
Steve Byan [EMAIL PROTECTED]
Software Architect
Egenera, Inc.
165 Forest Street
Marlboro, MA 01752
(508) 858-3125
Re: NCQ general question
Is NCQ supported when setting the controller to JBOD instead of using
HW raid?

On 3/5/06, Eric D. Mudama [EMAIL PROTECTED] wrote:
> On 3/4/06, Steve Byan [EMAIL PROTECTED] wrote:
>> On Mar 4, 2006, at 2:10 PM, Jeff Garzik wrote:
>>> Measurements on NCQ in the field show a distinct performance
>>> improvement... 30% has been measured on Linux. Nothing to sneeze
>>> at.
>>
>> Wow! 30% is amazing. I'd be interested in knowing how the costs
>> break down; are these measurements published anywhere?
>
> Full-stroke random reads with small operations (4k or less) typically
> show a 75-85% performance improvement, from the ability of a 7200rpm
> drive to carve 4ms out of its response time, as well as a huge chunk
> of seek distance.
>
> Random writes, since as you said they're already reordered with the
> cache enabled, don't typically show any sort of increase in desktop
> applications. NCQ FUA writes or NCQ writes with the cache disabled
> should show the same ballpark performance improvement as random reads
> in saturated workloads.
>
> Again, however, this is for the full-stroke random case. Local-area
> workloads need to be analyzed more thoroughly, and may differ in
> performance gain by manufacturer.
>
> --eric

--
Raz
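Eric's "4ms" figure lines up with the average rotational latency of a
7200rpm drive - the time for half a revolution - which a queued drive
can largely hide by picking whichever pending request is closest to the
head's current position. A quick sanity check of the arithmetic:

```python
# Average rotational latency of a 7200 rpm drive: on average the head
# waits half a revolution for the target sector to come around.
rpm = 7200
ms_per_rev = 60 * 1000 / rpm         # full revolution: ~8.33 ms
avg_rotational_ms = ms_per_rev / 2   # average wait: ~4.17 ms

print(round(ms_per_rev, 2), round(avg_rotational_ms, 2))  # -> 8.33 4.17
```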
Re: NCQ general question
Raz Ben-Jehuda(caro) wrote:
> Is NCQ supported when setting the controller to JBOD instead of using
> HW raid?

1) The two have nothing to do with each other
2) It sounds like you haven't yet read
   http://linux-ata.org/faq-sata-raid.html

	Jeff
Re: NCQ general question
On Mar 1, 2006, at 8:55 AM, Jens Axboe wrote:
> On Wed, Mar 01 2006, Mark Lord wrote:
>> NCQ vs. TCQ: NCQ has a much more efficient low-level protocol,
>> making the host side (controller, operating system) quite a bit
>> simpler than with TCQ.
>
> Or in layman's terms - TCQ sucks and NCQ doesn't :-)
>
> NCQ has many more advantages than TCQ, apart from both a more
> efficient low-level protocol and ease of implementation. TCQ
> basically just allows the drive to do some reordering; it still
> serializes everything and requires too many interrupts.

The problem with TCQ is that the host can't disconnect on writes after
sending the data to the drive but before receiving the status. The
host can only disconnect between sending the command and moving the
data. Consequently TCQ is useless for writes, which is where you
really need it. It works OK for reads. TCQ was really invented as a
way to allow CD-ROM drives to play nice on the same ATA bus as disks.

The reason you need write queuing is for data integrity reasons, not
for performance. ATA disks effectively get command queuing on writes
even without TCQ and NCQ - they simply park the data in a volatile RAM
cache, tell the host that the data is saved on persistent storage, and
then asynchronously write the queued data to the physical media. The
drive reorders those writes and will gather sequential writes.

However, note that all filesystems that make even a pretense of trying
to maintain filesystem integrity after a power failure (note that the
Windows NT implementation of FAT32 does not attempt to maintain
filesystem integrity after a power failure) depend on knowing when
data makes it to persistent storage, so they can order their writes
correctly. ATA disk write caching breaks this guarantee. To restore
filesystem integrity on a careful-write filesystem like most unix
filesystems, you have to disable write-caching in the drive. This
causes such a drastic loss of performance (you basically get only one
sequential write per disk revolution) that you must then implement
command queuing to allow the drive to gather sequential writes to make
the system usable.

As an alternative, if you have a journalling filesystem, you can leave
the disk cache enabled but selectively write through your metadata
using force-unit-access (FUA).

Regards,
-Steve
--
Steve Byan [EMAIL PROTECTED]
Software Architect
Egenera, Inc.
165 Forest Street
Marlboro, MA 01752
(508) 858-3125
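The payoff of letting the drive reorder a queue of requests can be
illustrated with a toy head-movement model. The greedy nearest-first
policy and the LBA list below are invented for the sketch; real drive
firmware also accounts for rotational position, not just seek distance:

```python
def total_travel(order, start=0):
    """Sum of absolute head movements when servicing LBAs in 'order'."""
    pos, travel = start, 0
    for lba in order:
        travel += abs(lba - pos)
        pos = lba
    return travel

def greedy_nearest(pending, start=0):
    """Serve whichever queued request is closest to the head next -
    a crude stand-in for the reordering a queued drive can do."""
    pending = list(pending)
    pos, order = start, []
    while pending:
        nxt = min(pending, key=lambda lba: abs(lba - pos))
        pending.remove(nxt)
        order.append(nxt)
        pos = nxt
    return order

requests = [900, 50, 870, 100, 820, 140]   # interleaved far/near LBAs
fifo = total_travel(requests)              # serviced in arrival order
queued = total_travel(greedy_nearest(requests))
print(fifo, queued)  # -> 4740 900
```

In-order service travels 4740 "tracks" in this contrived example while
the reordered schedule travels 900 - the kind of win that queuing buys
once the drive has several requests to choose from.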
Re: NCQ general question
(don't top post)

On Thu, Mar 02 2006, Raz Ben-Jehuda(caro) wrote:
> I can see that NCQ really bothers people. I am using a Promise SATA
> TX4 150 card. Do any of you have a patch for the driver so it would
> support NCQ?

I don't know of any documentation for the Promise cards (or whether
they support NCQ). Does the binary Promise driver support NCQ? Jeff
likely knows a lot more.

--
Jens Axboe
Re: NCQ general question
Jens Axboe wrote:
> (don't top post)
>
> On Thu, Mar 02 2006, Raz Ben-Jehuda(caro) wrote:
>> I can see that NCQ really bothers people. I am using a Promise SATA
>> TX4 150 card. Do any of you have a patch for the driver so it would
>> support NCQ?
>
> I don't know of any documentation for the Promise cards (or whether
> they support NCQ). Does the binary Promise driver support NCQ? Jeff
> likely knows a lot more.

The SATA2 TX4 150 supports NCQ, and I have docs. The SATA TX4 150 does
not support NCQ.

	Jeff
Re: NCQ general question
Thank you, Mr. Garzik. Is there a list of all drivers and the features
they provide?

Raz.

On 3/2/06, Jeff Garzik [EMAIL PROTECTED] wrote:
> Jens Axboe wrote:
>> (don't top post)
>>
>> On Thu, Mar 02 2006, Raz Ben-Jehuda(caro) wrote:
>>> I can see that NCQ really bothers people. I am using a Promise SATA
>>> TX4 150 card. Do any of you have a patch for the driver so it would
>>> support NCQ?
>>
>> I don't know of any documentation for the Promise cards (or whether
>> they support NCQ). Does the binary Promise driver support NCQ? Jeff
>> likely knows a lot more.
>
> The SATA2 TX4 150 supports NCQ, and I have docs. The SATA TX4 150
> does not support NCQ.
>
> 	Jeff

--
Raz
Re: NCQ general question
Mark Lord wrote:
> Gentoopower wrote:
>> Raz Ben-Jehuda(caro) wrote:
>>> I am thinking of buying a Promise SATAII PCI-X card. They have two
>>> types, a card which supports NCQ and another that does not. What is
>>> the benefit of buying a card with NCQ tagging?
>>
>> How about:
>> http://en.wikipedia.org/wiki/Native_command_queueing
>
> Yuck.. what a lousy wiki entry.
>
> NCQ vs. TCQ: NCQ has a much more efficient low-level protocol, making
> the host side (controller, operating system) quite a bit simpler than
> with TCQ. Both use 32-deep queue depths, and neither of them is worth
> a damn on Linux yet. Except possibly in the libata ahci driver, or
> vendor-provided drivers (open source, even) for some chipsets.
>
> In theory, NCQ/TCQ can speed up a very busy fileserver that is
> handling mostly tiny I/O requests. Practically no measurable benefit
> for single-user systems.

That's a lousy comment :-) Single-user systems can have lots of I/O
requests too: compiling something in the background and listening to
music while copying files from one drive to the other. I also have
lots of I/O while booting.

I have two Seagates in my box, a 160GB 7200.7 and a 160GB 7200.9
(SATAII NCQ), using NFORCE4. I can definitely feel the speed
difference between the two drives.

P.S. Just waiting to see NCQ support for my nforce system in libata :-)

Cheers
Re: NCQ general question
On Wed, Mar 01 2006, Gentoopower wrote:
> I have two Seagates in my box, a 160GB 7200.7 and a 160GB 7200.9
> (SATAII NCQ), using NFORCE4. I can definitely feel the speed
> difference between the two drives.

Well that can't be because of NCQ, since it isn't active :-)

> P.S. Just waiting to see NCQ support for my nforce system in
> libata :-)

Don't hold your breath, it's unlikely to get supported as nvidia won't
open the specs. ahci is a really, really nice controller; if you want
NCQ I suggest going with that. sil is probably the next in line for
NCQ support.

--
Jens Axboe
Re: NCQ general question
Jens Axboe wrote:
> On Wed, Mar 01 2006, Gentoopower wrote:
>> P.S. Just waiting to see NCQ support for my nforce system in
>> libata :-)
>
> Don't hold your breath, it's unlikely to get supported as nvidia
> won't open the specs. ahci is a really, really nice controller; if
> you want NCQ I suggest going with that. sil is probably the next in
> line for NCQ support.

Actually:

* Old nvidia is ADMA, and I have docs under NDA
* nvidia themselves say they are uninterested in NCQ support for their
  older ADMA controllers, though they don't mind if I implement it
* New nvidia is AHCI, and thus will support NCQ when AHCI does
* Slight correction to the above: sil24 will do NCQ; I don't think sil
  does

	Jeff
Re: NCQ general question
On Wed, Mar 01 2006, Jeff Garzik wrote:
> Jens Axboe wrote:
>> On Wed, Mar 01 2006, Gentoopower wrote:
>>> P.S. Just waiting to see NCQ support for my nforce system in
>>> libata :-)
>>
>> Don't hold your breath, it's unlikely to get supported as nvidia
>> won't open the specs. ahci is a really, really nice controller; if
>> you want NCQ I suggest going with that. sil is probably the next in
>> line for NCQ support.
>
> Actually:
>
> * Old nvidia is ADMA, and I have docs under NDA
> * nvidia themselves say they are uninterested in NCQ support for
>   their older ADMA controllers, though they don't mind if I
>   implement it

So it's up to you if it'll happen or not. I'm sure people would
appreciate nforce NCQ support :-)

> * New nvidia is AHCI, and thus will support NCQ when AHCI does

Great! The sane choice, for both producer and consumer.

> * Slight correction to the above: sil24 will do NCQ; I don't think
>   sil does

Ok, it was more of an umbrella "sil" label, I haven't looked into
specific models.

--
Jens Axboe