Re: SAS v SATA interface performance

2007-12-10 Thread Mark Lord

Tejun Heo wrote:
..

NCQ is not more advanced than SCSI TCQ.  NCQ is "native" and "advanced"
compared to the old IDE-style bus-releasing queueing support, which was an
ugly beast that no one really supported well.  The only example I can
remember that actually worked was first-gen raptors paired with a
specific controller and a custom driver on Windows.

..

I wrote PATA drivers for some chipsets that had hardware support for TCQ,
and it did make a very impressive throughput difference when enabled.
The IBM/Hitachi Deathst.. err.. Deskstar.. drives always had the best
support in firmware.  I believe we also used some WD drives, though their
firmware didn't perform as well.

ISTR that NCQ wins over TCQ (ATA) because multiple drives can interleave
their data transfers on the bus -- with TCQ, a drive took over the bus
at the start of data transfer and never released it until the command completed.

Cheers


Re: SAS v SATA interface performance

2007-12-10 Thread Mark Lord

Jens Axboe wrote:

On Mon, Dec 10 2007, Tejun Heo wrote:

There's one thing we can do to improve the situation, though.  Several
drives, including raptors and 7200.11s, suffer a serious performance hit if
a sequential transfer is performed by multiple NCQ commands.  My 7200.11
can do > 100MB/s if non-NCQ commands are used or only up to two NCQ
commands are issued; however, if all 31 (the maximum currently supported by
libata) are used, the transfer rate drops to a miserable 70MB/s.

It seems that what we need to do is to avoid issuing too many commands to one
sequential stream.  In fact, there isn't much to gain by issuing more
than two commands to one sequential stream.


Well... CFQ won't go to deep queue depths across processes if they are
doing streaming IO, but it won't stop a single process from doing so.  I'd
like to know what real-life process would issue streaming IO in an
async manner so as to get 31 pending commands sequentially?  Not very likely

..

In the case of the WD Raptors, their firmware has changed slightly over
the years.  The ones I had here would *disable* internal read-ahead
for TCQ/NCQ commands, effectively killing any hope of sequential throughput
even for a queue size of "1".   This was acknowledged by people with inside
knowledge of the firmware at the time.

Cheers


Re: SAS v SATA interface performance

2007-12-10 Thread Mark Lord

Tejun Heo wrote:

..
Mark, how is marvell PMP support going?

..

It will be good once it happens -- the newer 6042/7042 chips support
full FIS-based switching, as well as command-based switching,
with large queues and all of the trimmings.

Currently stuck in legalese, though.

Cheers




Re: SAS v SATA interface performance

2007-12-10 Thread James Bottomley
On Mon, 2007-12-10 at 16:33 +0900, Tejun Heo wrote:
> There's one thing we can do to improve the situation, though.  Several
> drives, including raptors and 7200.11s, suffer a serious performance hit if
> a sequential transfer is performed by multiple NCQ commands.  My 7200.11
> can do > 100MB/s if non-NCQ commands are used or only up to two NCQ
> commands are issued; however, if all 31 (the maximum currently supported by
> libata) are used, the transfer rate drops to a miserable 70MB/s.
> 
> It seems that what we need to do is to avoid issuing too many commands to one
> sequential stream.  In fact, there isn't much to gain by issuing more
> than two commands to one sequential stream.

You're entering an area of perennial debate, even for SCSI drives.  What
we know is that for drives whose firmware elevator doesn't perform very
well, a lower TCQ depth (2-4) is better than a high one, the only use
of the tags being to saturate the transport.  For high end arrays and
better performing firmware drives, the situation is much more murky.  It
boils down to whose elevator you trust, the drive/array's or the
kernel's.  If the latter, then you want a depth of around 4, and if the
former, you want a depth as high as possible (arrays like 64-128).

Given the way IDE drives are made, I'd bet they fall into the category
whose firmware elevator doesn't perform very well, so you probably
want a low NCQ depth with them (just sufficient to saturate the
transport, but not high enough to allow the drive to make too many head
scheduling decisions).
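
A minimal sketch of one way to cap the per-device queue depth from
userspace, via the SCSI sysfs queue_depth attribute; the device name
"sda" and the depth of 4 are just example values:

/* Illustrative sketch: cap one disk's queue depth by writing the
 * SCSI sysfs attribute.  "sda" and the value 4 are example choices. */
#include <stdio.h>

int main(void)
{
        FILE *f = fopen("/sys/block/sda/device/queue_depth", "w");
        if (!f) {
                perror("queue_depth");
                return 1;
        }
        fprintf(f, "4\n");      /* low depth: enough to keep the link busy,
                                   little left for the drive's own elevator */
        fclose(f);
        return 0;
}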

James




Re: SAS v SATA interface performance

2007-12-10 Thread Jens Axboe
On Mon, Dec 10 2007, Tejun Heo wrote:
> There's one thing we can do to improve the situation, though.  Several
> drives, including raptors and 7200.11s, suffer a serious performance hit if
> a sequential transfer is performed by multiple NCQ commands.  My 7200.11
> can do > 100MB/s if non-NCQ commands are used or only up to two NCQ
> commands are issued; however, if all 31 (the maximum currently supported by
> libata) are used, the transfer rate drops to a miserable 70MB/s.
> 
> It seems that what we need to do is to avoid issuing too many commands to one
> sequential stream.  In fact, there isn't much to gain by issuing more
> than two commands to one sequential stream.

Well... CFQ won't go to deep queue depths across processes if they are
doing streaming IO, but it won't stop a single process from doing so.  I'd
like to know what real-life process would issue streaming IO in an
async manner so as to get 31 pending commands sequentially?  Not very likely
:-)

So I'd consider your case above a microbenchmark result. I'd also claim
that the firmware is very crappy if it performs as described.
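
For what it's worth, that kind of deep sequential queue is trivial to
generate artificially; a minimal sketch using Linux AIO (libaio), where
the device path, block size and queue depth are only example values:

/* Illustrative microbenchmark sketch: push QDEPTH overlapping
 * sequential reads at the device so it sees a full NCQ queue.
 * Assumes libaio (link with -laio); the device path is an example. */
#define _GNU_SOURCE
#include <libaio.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define QDEPTH  31              /* libata's current NCQ maximum */
#define BLKSIZE (128 * 1024)

int main(void)
{
        int fd = open("/dev/sdb", O_RDONLY | O_DIRECT);  /* example device */
        io_context_t ctx = 0;
        struct iocb iocbs[QDEPTH], *ptrs[QDEPTH];
        struct io_event events[QDEPTH];
        long long off = 0;
        int i;

        if (fd < 0 || io_setup(QDEPTH, &ctx) != 0) {
                fprintf(stderr, "setup failed\n");
                return 1;
        }

        for (i = 0; i < QDEPTH; i++) {
                void *buf;
                if (posix_memalign(&buf, 4096, BLKSIZE))
                        return 1;
                io_prep_pread(&iocbs[i], fd, buf, BLKSIZE, off);
                ptrs[i] = &iocbs[i];
                off += BLKSIZE;
        }

        /* All 31 reads are in flight at once even though the access
         * pattern is purely sequential. */
        if (io_submit(ctx, QDEPTH, ptrs) != QDEPTH ||
            io_getevents(ctx, QDEPTH, QDEPTH, events, NULL) != QDEPTH) {
                fprintf(stderr, "io failed\n");
                return 1;
        }

        io_destroy(ctx);
        close(fd);
        return 0;
}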

There's another possibility as well - that the queueing by the drive
generates a worse IO issue pattern, and that is why the performance
drops. Did you check with blktrace what the generated IO looks like?

> Both raptors and the 7200.11 perform noticeably better on random workloads
> with NCQ enabled.  So it's about time to update the IO schedulers
> accordingly, it seems.

Definitely. Again, microbenchmarks were able to show 30-40% improvements
when I last tested. That's a pure random workload though, again not
something you would see in real life.

I tend to always run with a depth around 4 here. It seems to be a good
value: you get some benefit from NCQ, but you don't allow the drive
firmware to screw you over.

-- 
Jens Axboe



Re: SAS v SATA interface performance

2007-12-09 Thread Tejun Heo
(cc'ing Jens as it contains some discussion about IO scheduling)

Michael Tokarev wrote:
> Richard Scobie wrote:
>> If one disregards the rotational speed and access time advantage that
>> SAS drives have over SATA, does the SAS interface offer any performance
>> advantage?
> 
> It's a very good question, to which I wish I had an answer myself ;)
> Since I have never tried actual SAS controllers with SAS drives, I'll
> reply from the good old SCSI vs SATA perspective.

Purely from a transport-layer protocol perspective, SATA has slightly
lower latency thanks to its simplicity, but compared to actual IO
latency this is negligible, and once you throw NCQ and TCQ into play the
theoretical advantage disappears completely.

> They say that modern SATA drives have NCQ, which is "more
> advanced" than the good old TCQ used in SCSI (and SAS) drives.
> I've no idea what's "advanced" in it, except that it
> just does not work.  There's almost no difference with
> NCQ turned on or off, and in many cases turning NCQ ON
> actually REDUCES performance.

NCQ is not more advanced than SCSI TCQ.  NCQ is "native" and "advanced"
compared to the old IDE-style bus-releasing queueing support, which was an
ugly beast that no one really supported well.  The only example I can
remember that actually worked was first-gen raptors paired with a
specific controller and a custom driver on Windows.

If you compare protocol to protocol, NCQ should be able to perform as
well as TCQ, unless you're talking about a monster storage enclosure
which can have a lot of spindles behind it.  Again, NCQ has lower
overhead, but bus latency / overhead don't really matter.

However, that is not to say SATA drives with NCQ support perform as well
as SCSI drives with TCQ support.  SCSI drives are simply faster and
tend to have better firmware.  There is not much the operating system
can do about it.

There's one thing we can do to improve the situation, though.  Several
drives, including raptors and 7200.11s, suffer a serious performance hit if
a sequential transfer is performed by multiple NCQ commands.  My 7200.11
can do > 100MB/s if non-NCQ commands are used or only up to two NCQ
commands are issued; however, if all 31 (the maximum currently supported by
libata) are used, the transfer rate drops to a miserable 70MB/s.

It seems that what we need to do is to avoid issuing too many commands to one
sequential stream.  In fact, there isn't much to gain by issuing more
than two commands to one sequential stream.

Both raptors and the 7200.11 perform noticeably better on random workloads
with NCQ enabled.  So it's about time to update the IO schedulers
accordingly, it seems.

Thanks.

-- 
tejun


Re: SAS v SATA interface performance

2007-12-09 Thread Tejun Heo
Mark Lord wrote:
> Alan Cox wrote:
>>> The comment I saw, which I'm trying to verify, mentioned the SATA
>>> drives "held the bus" or similar longer than SAS ones.
>>
>> SATA normally uses one link per device so the device side isn't contended
>> unless you descend into the murky world of port multipliers. 
> ..
> 
> And that's where NCQ comes into its own, allowing full bus release
> so that other drives on the same port multiplier can burst as needed.

Non-NCQ R/W does full bus release too.  It's basically packet based so
nothing really holds the bus.  Even the ATAPI commands don't hold the bus.

> I've only had a port multiplier here for a few days, and used only a pair
> of *notebook* SATA drives on it thus far.  Both drives can stream at full
> rate without any slowdown -- that's 55MByte/sec from each drive, at the
> same time, for sequential reading, 100MByte/sec total.  Notebook drives.

With FIS-based switching, a PMP won't impose performance limits on most
configurations until it fills up the connection bandwidth.  The catches are...

1. All currently supported controllers have only up to 32 command slots
per port, which means all devices sharing a PMP will compete for command
slots.  This doesn't really matter unless the PMP is very large (say >
16 ports).

2. The only FIS-based switching controller we currently support is
sata_sil24.  It generally works impressively well.  Unfortunately, it
has some limitations - for some reason it can't fully fill the
bandwidth.  It seems the silicon itself is limited.  This problem is
reported on other operating systems too.  I don't remember the exact
number, but it's somewhere between 100 and 150MB/s.

3. sil24/32 family controllers have another quirk which basically forces
command-based switching for ATAPI devices, so if you connect an ATAPI
device to a PMP, the ATAPI device virtually holds the bus while a command
is in progress.

I have untested code for AHCI 1.2 FIS-based switching PMP support, and
the next-gen JMB and ICHs are slated to support FIS-based switching.  I
just need more hardware samples and time, which seem so scarce these
days.  :-P

Mark, how is marvell PMP support going?

Thanks.

-- 
tejun


Re: SAS v SATA interface performance

2007-12-01 Thread Mark Lord

Oh, more fiction:


Because SATA uses point-to-point connectivity, the scaling
available with SAS controllers is not possible with SATA
controllers. SATA drives must be connected on a one-to-one
basis with the SATA connectors on the controller – i.e., a four-port
SATA controller can connect up to four drives, an eight-port
SATA controller can connect up to eight drives, etc. This means
that the number of drives needed must be known prior to
purchasing a SATA controller, or additional hardware costs will
be incurred.


Ignoring port multipliers (hubs) again there.

A SATA hub (port multiplier) can connect up to 15 drives to a single SATA port,
or up to 120 drives per 8-port PCI(X/e) controller card.

I don't know if anyone currently sells a 15-port multiplier, though.

Cheers



Re: SAS v SATA interface performance

2007-12-01 Thread Mark Lord

Richard Scobie wrote:



Jeff Garzik wrote:

Mark Lord wrote:

SATA port multipliers (think, "hub") permit multiple drives
to be active simultaneously.


Quite true, although the host controller could artificially limit 
this, giving the user a mistaken impression of their port multiplier 
being limited to one-command-per-N-drives.


Interesting. I was basing my comments on what may well be a
vested-interest-slanted paper - see the sidebar on page 2.


http://www.xtore.com/Downloads/WhitePapers/SAS_SATAValue%20Whitepaper_final.pdf 



For the modest extra cost of a non-RAID SAS HBA and JBOD enclosure with 
SATA drives, over a port multiplied setup, there would seem to be some 
advantages.


Or have I been taken in by the hype... :)

..

Here's the "hype" part from that biased paper:


Performance: Port Multipliers only support one active host
connection at a time, significantly degrading effective
throughput. Each time communication is initiated with a drive,
a time-consuming drive reset must occur.

Data Integrity: PMs must close the connection to one drive
to open a new one to another. When a connection is closed
drive history (e.g., data source, destination drive, data &
command context) is lost; with each opened connection the
chance of misidentification and sending data to the wrong
drive is increased.


Fiction.  Or rather, heavily biased.
Modern SATA hosts and PMs have no such issues.
The key SATA term to ask for is "FIS-based switching".

The biggest difference between SATA and SAS
is the same one we previously had between ATA and SCSI:

  Vendors like to position SAS/SCSI as a "premium" brand,
  and therefore cripple SATA/ATA with lower spin-rates
  (7200rpm max, or 10krpm for WD Raptors, vs. 15krpm
  for high end SAS/SCSI).

There may be other firmware algorithm differences as well,
but "RAID edition" SATA/ATA drives have low-readahead
and fast-seek programming similar to their SAS/SCSI counterparts.

Simple spin-rate (RPM) is the most significant distinguishing
factor in nearly all scenarios.  SAS/SCSI may also still win when
connecting a ridiculously large number of drives to a single port.

Cheers


Re: SAS v SATA interface performance

2007-12-01 Thread Richard Scobie



Jeff Garzik wrote:

Mark Lord wrote:

SATA port multipliers (think, "hub") permit multiple drives
to be active simultaneously.


Quite true, although the host controller could artificially limit this, 
giving the user a mistaken impression of their port multiplier being 
limited to one-command-per-N-drives.


Interesting. I was basing my comments on what may well be a
vested-interest-slanted paper - see the sidebar on page 2.


http://www.xtore.com/Downloads/WhitePapers/SAS_SATAValue%20Whitepaper_final.pdf

For the modest extra cost of a non-RAID SAS HBA and JBOD enclosure with 
SATA drives, over a port multiplied setup, there would seem to be some 
advantages.


Or have I been taken in by the hype... :)

Regards,

Richard


Re: SAS v SATA interface performance

2007-12-01 Thread Jeff Garzik

Mark Lord wrote:

SATA port multipliers (think, "hub") permit multiple drives
to be active simultaneously.


Quite true, although the host controller could artificially limit this, 
giving the user a mistaken impression of their port multiplier being 
limited to one-command-per-N-drives.




:)

Jeff



Re: SAS v SATA interface performance

2007-12-01 Thread Mark Lord

Richard Scobie wrote:



Greg Freemyer wrote:


Also, if you have Port Multipliers (PMPs) in use, that would be
interesting to know.  I don't even know if PMPs are supported via SAS
controllers in 2.6.24 or not.  i.e. PMP support is new in 2.6.24 and
only a few SATA controllers will have PMP support in 2.6.24.


No, port multipliers are not in use here; the technology that SAS uses 
is called port expansion. While I do not know much about the low level 
operational differences, from a performance perspective this is my 
understanding.


Port multiplication only permits one drive to be accessed at once, 
whereas port expansion allows multiple drives to be accessed 
simultaneously.

..

SATA port multipliers (think, "hub") permit multiple drives
to be active simultaneously.

-ml


Re: SAS v SATA interface performance

2007-12-01 Thread Richard Scobie



Greg Freemyer wrote:


Also, if you have Port Multipliers (PMPs) in use, that would be
interesting to know.  I don't even know if PMPs are supported via SAS
controllers in 2.6.24 or not.  i.e. PMP support is new in 2.6.24 and
only a few SATA controllers will have PMP support in 2.6.24.


No, port multipliers are not in use here; the technology that SAS uses 
is called port expansion. While I do not know much about the low level 
operational differences, from a performance perspective this is my 
understanding.


Port multiplication only permits one drive to be accessed at once, 
whereas port expansion allows multiple drives to be accessed simultaneously.


Port expanders are self-contained and do not require kernel support - the 
only kernel requirement is support for the SAS HBA or RAID controller.


See Xtore, Adaptec or Dell (MD1000) for suitable JBOD enclosures.

Regards,

Richard


Re: SAS v SATA interface performance

2007-12-01 Thread Greg Freemyer
On Dec 1, 2007 2:43 AM, Richard Scobie <[EMAIL PROTECTED]> wrote:
>
>
> Alan Cox wrote:
>
> > If you want really high performance use multiple drives, on multiple PCIE
> > controllers. Just make sure your backup planning or raid 1+0 setup is
> > done right, as many drives mean a lot more drive failures.
>
> Thanks again. For what it's worth, I shall be attempting this with SATA
> drives in a RAID 50 configuration - 2 x 8 drives, using md RAID and an 8
> lane PCIe SAS HBA.
>
I suspect many of us will be curious about the performance results.

Also, if you have Port Multipliers (PMPs) in use, that would be
interesting to know.  I don't even know if PMPs are supported via SAS
controllers in 2.6.24 or not.  i.e. PMP support is new in 2.6.24 and
only a few SATA controllers will have PMP support in 2.6.24.

Greg
-- 
Greg Freemyer
Litigation Triage Solutions Specialist
http://www.linkedin.com/in/gregfreemyer
First 99 Days Litigation White Paper -
http://www.norcrossgroup.com/forms/whitepapers/99%20Days%20whitepaper.pdf

The Norcross Group
The Intersection of Evidence & Technology
http://www.norcrossgroup.com


Re: SAS v SATA interface performance

2007-11-30 Thread Richard Scobie



Alan Cox wrote:


If you want really high performance use multiple drives, on multiple PCIE
controllers. Just make sure your backup planning or raid 1+0 setup is
done right, as many drives mean a lot more drive failures.


Thanks again. For what it's worth, I shall be attempting this with SATA 
drives in a RAID 50 configuration - 2 x 8 drives, using md RAID and an 8 
lane PCIe SAS HBA.


Regards,

Richard


Re: SAS v SATA interface performance

2007-11-30 Thread Mark Lord

Alan Cox wrote:
The comment I saw, which I'm trying to verify, mentioned the SATA drives 
"held the bus" or similar longer than SAS ones.


SATA normally uses one link per device so the device side isn't contended
unless you descend into the murky world of port multipliers. 

..

And that's where NCQ comes into its own, allowing full bus release
so that other drives on the same port multiplier can burst as needed.

I've only had a port multiplier here for a few days, and used only a pair
of *notebook* SATA drives on it thus far.  Both drives can stream at full
rate without any slowdown -- that's 55MByte/sec from each drive, at the
same time, for sequential reading, 100MByte/sec total.  Notebook drives.

Cheers


Re: SAS v SATA interface performance

2007-11-30 Thread Alan Cox
> The comment I saw, which I'm trying to verify, mentioned the SATA drives 
> "held the bus" or similar longer than SAS ones.

SATA normally uses one link per device so the device side isn't contended
unless you descend into the murky world of port multipliers. 

On the host side an AHCI controller offloads all the tedious operations
to the controller so you don't get long slow PCI side accesses slowing up
the CPU either.


Re: SAS v SATA interface performance

2007-11-30 Thread Richard Scobie

Thanks for the comments.

It is really the protocol/bus behaviour differences (if any) between SATA 
drives in a SAS environment and SAS drives that I am looking at.


I do know that SATA drives only support a subset of the SCSI commands 
and wondered if the SAS drives were more "clever" in a multi-drive scenario.


The comment I saw, which I'm trying to verify, mentioned the SATA drives 
"held the bus" or similar longer than SAS ones.


Regards,

Richard


Re: SAS v SATA interface performance

2007-11-30 Thread Alan Cox
> First of all, I've yet to see a controller that is really able
> to handle multiple requests in parallel.  Usually, multiple
> I/O threads get exactly the same aggregate performance as a
> single thread - UNLIKE linux software raid, which clearly

That's usually true of hardware raid cards, as they lack enough CPU power
and they are on a single PCI slot. With PCIE and things like AHCI, where
the host is doing the hard work and the controller engine is doing the
NCQ low level work and channel work, you've got a much better chance.

> They say that modern SATA drives have NCQ, which is "more
> advanced" than the good old TCQ used in SCSI (and SAS) drives.
> I've no idea what's "advanced" in it, except that it
> just does not work.  There's almost no difference with
> NCQ turned on or off, and in many cases turning NCQ ON
> actually REDUCES performance.

NCQ is straight 32-way tagged queueing. On some workloads it makes a big
difference, but it's dependent on controller and drive firmware.

The big, big constraint on a modern drive is still ops/second. It's
mechanical, and the mechanical behaviour is limited by the laws of physics
and hasn't changed much in years.
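
A rough back-of-the-envelope illustration of that constraint, using
assumed figures (roughly 100 random ops/second at 4KB each for a
7200rpm drive), not anything measured in this thread:

/* Illustrative only: assumed IOPS and request size, versus the 70MB/s
 * streaming figure quoted elsewhere in this thread. */
#include <stdio.h>

int main(void)
{
        int iops = 100;          /* assumed random ops/second           */
        int req_kb = 4;          /* assumed small random request size   */

        printf("random: ~%d KB/s\n", iops * req_kb);       /* ~400 KB/s */
        printf("sequential: ~70000 KB/s (70MB/s streaming)\n");
        return 0;
}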

If you want really high performance use multiple drives, on multiple PCIE
controllers. Just make sure your backup planning or raid 1+0 setup is
done right, as many drives mean a lot more drive failures.


Re: SAS v SATA interface performance

2007-11-30 Thread Michael Tokarev
Richard Scobie wrote:
> If one disregards the rotational speed and access time advantage that
> SAS drives have over SATA, does the SAS interface offer any performance
> advantage?

It's a very good question, to which I wish I had an answer myself ;)
Since I have never tried actual SAS controllers with SAS drives, I'll
reply from the good old SCSI vs SATA perspective.

> For example, assume a SAS drive and a SATA drive can both sustain
> streaming at 70MB/s. A 16 drive JBOD SAS enclosure with an internal SAS expander
> is connected via a 4-port SAS RAID controller, configured for RAID 5
> across all 16 drives.
> 
> If tests are then run reading and writing a multi gigabyte file to empty
> arrays made up of 16 SAS drives and 16 SATA drives, would the results be
> identical?

In this scenario, I don't think there will be any difference between
the two.  With SCSI, one is limited by the SCSI bus speed, which is 320MB/sec
(640MB/sec for two channels, etc).   Hopefully there isn't such a
limit in the SATA/SAS world, at least not in a theoretical configuration.
In your case, a 4-port SAS controller has a theoretical maximum of about
1200MB/sec (300MB/sec per port), which is further limited by the system
bus speed (PCIe or PCI-X).
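
A quick back-of-the-envelope check of those figures against the 16
drives at 70MB/s from the original question (illustrative only; 8b/10b
coding, protocol and parity overhead are ignored):

/* Back-of-the-envelope arithmetic for the figures above. */
#include <stdio.h>

int main(void)
{
        int ports = 4, per_port = 300;   /* 3Gbps SAS/SATA ~ 300MB/s payload */
        int drives = 16, per_drive = 70; /* sustained streaming per drive    */

        printf("controller ceiling : %d MB/s\n", ports * per_port);   /* 1200 */
        printf("16 drives streaming: %d MB/s\n", drives * per_drive); /* 1120 */
        return 0;
}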

But note that while you're doing an operation in one stream, you'll
most likely be limited by the speed of a SINGLE drive, since at any
given time you're writing/reading only to/from one drive (I'm
simplifying things here -- omitting calculating/writing parity
info in raid5, which may be further limited by the controller's
XOR operation speed -- let's assume it's a raid0 array for now,
not raid5).

The only case where more than one drive is involved is when you
cross strips - go from one drive to another in the raid layout.
When a controller receives a request which spans strips, it
can split the request into two and issue parallel requests to
two drives.  Here we will see a speed up to 2x that of a single
drive...

Where things start to become different is when there are
MULTIPLE readers and writers, so that the I/O subsystem has
many more chances for parallelism.

And here, it all depends on the controller, and probably on
the driver too, AND on the drives.

Here are several observations I have collected over several years of
experience with various raid controllers and drives.

First of all, I've yet to see a controller that is really able
to handle multiple requests in parallel.  Usually, multiple
I/O threads get exactly the same aggregate performance as a
single thread - UNLIKE linux software raid, which clearly
scales exactly with the number of drives (having in mind raid0
again and omitting raid5 complexity).  All raid cards from
Adaptec (at least up to and including their first SATA cards),
from HP (SmartArrays), and from LSI show this behavior - no matter
how many drives you throw at them, no matter how many threads
you start, the result will be the same.  Many external "smart"
disk arrays (not JBOD boxes but ones with RAID controllers inside,
connected using FC or iSCSI, with built-in stackability and
external redundancy) also show this behavior.

Maybe, just maybe, I've only dealt with their "cheap" products
so far, but I lost any hope long ago, and decided that HW
raid solutions are something to avoid as much as possible.

Now for the actual difference between SATA and SCSI drives.
And here, again, is something interesting.

They say that modern SATA drives have NCQ, which is "more
advanced" than the good old TCQ used in SCSI (and SAS) drives.
I've no idea what's "advanced" in it, except that it
just does not work.  There's almost no difference with
NCQ turned on or off, and in many cases turning NCQ ON
actually REDUCES performance.

It's in very good contrast with SCSI drives, where -
depending on the workload (I'm again talking about multiple
I/O processes - for single I/O there's no difference
from NCQ/TCQ or raid arrays/controllers) - you may get a 20..
400% increase in speed - for a SINGLE drive.  Even 9Gb
SCSI disks from Seagate - circa 1998 - show this behavior
(they're a lot slower than today's drives, but the speed increase
due to TCQ is of much the same extent).

With a linux software RAID configuration and TCQ-enabled
SCSI drives, there's a very significant difference between
SATA and SCSI.  The linux raid code can do things in parallel,
and individual SCSI drives play nicely here too, so the
end result is very good.  With SATA, linux raid still plays
its role as with SCSI, but the drives don't scale at all.
And with a hardware raid controller, even the raid code doesn't
scale - so the end result is like a single drive in a single-stream
test, which is the worst of all.

I'd LOVE to be proven wrong, but so far the only evidence I
have seen confirms my observations.

Recently I removed an IBM ServeRAID module from a rackmount
IBM server (had to find out how to disable the damn thing and
expose the AIC79xx controller - it turned out to be quite
difficult) and replaced the raid with a linux software raid
solution - server speed (it's a database server) inc

SAS v SATA interface performance

2007-11-30 Thread Richard Scobie
If one disregards the rotational speed and access time advantage that 
SAS drives have over SATA, does the SAS interface offer any performance 
advantage?


For example, assume a SAS drive and a SATA drive can both sustain 
streaming at 70MB/s. A 16 drive JBOD SAS enclosure with an internal SAS expander 
is connected via a 4-port SAS RAID controller, configured for RAID 5 
across all 16 drives.


If tests are then run reading and writing a multi gigabyte file to empty 
arrays made up of 16 SAS drives and 16 SATA drives, would the results be 
identical?


I ask, as I have seen a comment to the effect that SATA drives are less 
efficient interacting at the bus/interface level in this situation, but 
I have had no luck confirming this after extensive searching.


Regards,

Richard