Phillip Susi wrote:
Jeff Garzik wrote:
NCQ provides for a more asynchronous flow. It helps greatly with reads
(of which most are, by nature, synchronous at the app level) from
multiple threads or apps. It helps with writes, even with write cache
on, by allowing multiple commands to be submitted and/or retired
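The reordering benefit described above is easy to see with a toy model: give the drive several outstanding commands and let it serve the nearest LBA first instead of arrival order. This is an illustrative sketch only (made-up LBAs, one-dimensional "head travel"), not real drive firmware:

```python
# Toy model: compare total head travel when requests are served in
# arrival (FIFO) order vs. when a queue lets the drive pick the
# closest pending LBA each time (greedy shortest-seek-first).

def fifo_travel(start, lbas):
    """Total head movement servicing requests in arrival order."""
    pos, total = start, 0
    for lba in lbas:
        total += abs(lba - pos)
        pos = lba
    return total

def nearest_first_travel(start, lbas):
    """Total head movement when the drive may reorder queued requests."""
    pending, pos, total = list(lbas), start, 0
    while pending:
        nxt = min(pending, key=lambda lba: abs(lba - pos))
        pending.remove(nxt)
        total += abs(nxt - pos)
        pos = nxt
    return total

queue = [900, 100, 850, 150, 800]    # hypothetical queued request LBAs
print(fifo_travel(0, queue))          # arrival order: back-and-forth seeks
print(nearest_first_travel(0, queue)) # reordered: far less travel
```

With only one command outstanding at a time there is nothing to reorder, which is why the win shows up with multiple threads or deep write streams.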
Mark Rustad wrote:
On Mar 27, 2007, at 1:38 PM, Jeff Garzik wrote:
Mark Rustad wrote:
reorder any queued operations. Of course if you really care about
your data, you don't really want to turn write cache on.
That's a gross exaggeration. FLUSH CACHE and FUA both ensure data
integrity as well.
Turning write cache off has always been a performance-killing
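For what it's worth, from userspace the usual way to get the guarantee FLUSH CACHE / FUA provide is fsync(), which on a barrier-aware stack makes the kernel flush the drive's write cache for that file. A minimal sketch (the filename is just an example):

```python
# Write data so it survives a power cut even with the drive's write
# cache on: fsync() forces dirty pages out and, with barriers enabled,
# issues a cache flush to the drive before returning.
import os
import tempfile

def durable_write(path, data: bytes):
    """Write data and don't return until it should be on the platter."""
    fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o644)
    try:
        os.write(fd, data)
        os.fsync(fd)   # triggers the cache flush on barrier-aware setups
    finally:
        os.close(fd)

with tempfile.TemporaryDirectory() as d:
    p = os.path.join(d, "journal.log")
    durable_write(p, b"committed\n")
    print(open(p, "rb").read())  # b'committed\n'
```

This is why cache-on plus explicit flushes can beat cache-off: only the writes that need durability pay for it.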
On Thu, Mar 29, 2007 at 02:47:20PM -0700, David Schwartz wrote:
> Which sounds faster to you:
>
> 1) "Do A, B, C, and D."
>    "Okay, I've finished A, B, C, and B."
>
> or
>
> 2) "Do A."
>    "Okay."
>    "Do B."
>    "Okay."
>    "Do C."
>    "Okay."
>    "Do D."
>    "Okay."
>
> The first looks a bit more efficient to me.
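A back-of-the-envelope model of those two exchanges makes the point quantitative. The numbers below are made up for illustration: one link round trip per acknowledgement, plus a fixed per-command service time.

```python
# Model the two protocols above: serialized "Do A." / "Okay." pairs
# pay one round trip per command; a queued batch pays it roughly once.

def serialized_time(n_cmds, round_trip, service):
    """Each command waits for its own round trip plus service."""
    return n_cmds * (round_trip + service)

def queued_time(n_cmds, round_trip, service):
    """One round trip, then commands serviced back to back."""
    return round_trip + n_cmds * service

rt, svc = 0.5, 8.0   # ms: hypothetical link round trip and service time
print(serialized_time(4, rt, svc))
print(queued_time(4, rt, svc))
```

With a mechanical service time that dwarfs the link round trip the absolute gap is small, which is part of why NCQ gains depend so much on workload; the gap grows when service time shrinks or queue depth rises.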
> But when writing, what is the difference between queuing multiple tagged
> writes, and sending down multiple untagged cached writes that complete
> immediately and actually hit the disk later? Either way the host keeps
> sending writes to the disk until its buffers are full, and the disk is
> constantly trying to commit those buffers to the media in the most
Phillip Susi wrote:
Justin Piszcz wrote:
I would try with write-caching enabled.
Also, the RAID5/RAID10 you mention seems like each volume is on part of
the platter, a strange setup you got there :)
Shouldn't NCQ only help write performance if write caching is
_disabled_? Since write cache essentially is just
On Mar 27, 2007, at 12:59 AM, Jeff Garzik wrote:
Justin Piszcz wrote:
Without NCQ, performance is MUCH better on almost every operation,
with the exception of 2-3 items.
Variables to take into account:
* the drive (NCQ performance wildly varies)
* the IO scheduler
* the filesystem (if not measuring direct to blkdev)
* application workload
On Tue, 27 Mar 2007, [EMAIL PROTECTED] wrote:
> I meant you do not allocate the entire disk per raidset, which may alter
> performance numbers.
No, that would be silly. It does lower the average performance of the
large RAID-5 area, but I don't know how ext3fs is allocating the blocks
anyway, so
04:00.0 RAID bus controller: Silicon Image,
On Tue, 27 Mar 2007, [EMAIL PROTECTED] wrote:
From [EMAIL PROTECTED] Tue Mar 27 16:25:58 2007
Date: Tue, 27 Mar 2007 12:25:52 -0400 (EDT)
From: Justin Piszcz <[EMAIL PROTECTED]>
X-X-Sender: [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
cc: [EMAIL PROTECTED], [EMAIL PROTECTED], linux-ide@vger.kernel.org,
linux-kernel@vger.kernel.org
Subject: Re:
On Tue, 27 Mar 2007, [EMAIL PROTECTED] wrote:
Here's some more data.
6x ST3400832AS (Seagate 7200.8) 400 GB drives.
3x SiI3232 PCIe SATA controllers
2.2 GHz Athlon 64, 1024k cache (3700+), 2 GB RAM
Linux 2.6.20.4, 64-bit kernel
Tested able to sustain reads at 60 MB/sec/drive simultaneously.
RAID-10 is across 6 drives, first part of drive.
Jeff Garzik wrote:
In some cases, NCQ firmware may be broken. There is a Maxtor firmware
id, and some Hitachi ids that people are leaning towards recommending be
added to the libata 'horkage' list.
Western Digital "Raptor" drives (the 10K rpm things) are also somewhat
borked in NCQ mode,
On Tue, 27 Mar 2007, Tejun Heo wrote:
Justin Piszcz wrote:
Checking the benchmarks on various hardware websites, anandtech,
hothardware and others, they generally all come to the same conclusion:
if there is only 1 thread using I/O (single user system) then NCQ off is
the best.
Are they testing using Linux? I/O performance is
On Sat, 24 Mar 2007, Alan Cox wrote:
On Sat, 24 Mar 2007 12:38:02 -0400 (EDT)
Justin Piszcz <[EMAIL PROTECTED]> wrote:
> Without NCQ, performance is MUCH better on almost every operation, with
> the exception of 2-3 items.
It depends on the drive. Generally NCQ is better but some drive firmware
isn't too bright and there are probably
Justin Piszcz wrote:
Without NCQ, performance is MUCH better on almost every operation, with
the exception of 2-3 items.
/usr/sbin/bonnie++ -d /x/bonnie -s 7952 -m p34 -n 16:10:16:64 > run.txt;
# Average of 3 runs with NCQ on for Quad Raptor ADFD 150 RAID 5 Software RAID:
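For anyone repeating this comparison: on libata, NCQ can be switched per-device by writing the queue depth into sysfs (1 disables queuing, values up to 31 enable it). A small sketch, assuming the standard /sys/block/<dev>/device/queue_depth attribute; the device name is host-specific and the write needs root:

```python
# Toggle NCQ on a libata device by setting its queue depth via sysfs.
# Depth 1 effectively disables NCQ; 2..31 enable it.

def set_queue_depth(sysfs_path, depth):
    """Write an NCQ queue depth (1..31) to a sysfs attribute file."""
    if not 1 <= depth <= 31:
        raise ValueError("NCQ queue depth must be between 1 and 31")
    with open(sysfs_path, "w") as f:
        f.write(str(depth))

# Example (run as root, adjust the device name for your system):
# set_queue_depth("/sys/block/sda/device/queue_depth", 31)  # NCQ on
# set_queue_depth("/sys/block/sda/device/queue_depth", 1)   # NCQ off
```

Re-running the bonnie++ line above at depth 1 and depth 31 on the same filesystem gives an apples-to-apples NCQ on/off comparison without rebooting.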
54 matches