Lionel Bouton <lionel-subscript...@bouton.name> writes:

> 1/ AFAIK the kernel md RAID1 code behaves the same (last time I checked
> you need 2 processes to read from 2 devices at once) and I've never seen
> anyone arguing that the current md code is unstable.

This indeed seems to be the case on my MD RAID1 HDD.

But on MD SSD RAID10 it does use all four devices (verified by running
dd on the md raid device and inspecting iostat at the same time).

So MD RAID1's not doing this on HDDs seems to be limited to devices
that don't perform well in random-access scenarios, as you mentioned.

In practical terms I seem to be getting about 1.1 GB/s from the 4 SSDs
with 'dd', whereas I get ~650 MB/s when I dd from the two fastest
components of the MD device at the same time. As I get 330 MB/s from
two of the SSDs and 150 MB/s from the other two, concurrent RAID10 IO
appears to scale roughly linearly.

(In fact maybe I should look into why those two devices are getting
lower speeds overall - they used to be fast.)

I didn't calculate how large the linearly transferred chunks would need
to be to overcome the seek latency. Probably quite large.
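A rough sketch of that calculation, assuming typical HDD figures (8 ms
average seek plus rotational latency, 150 MB/s sequential rate) rather
than numbers measured on this array: the chunk has to be large enough
that the per-chunk seek time is a small fraction of the transfer time.

```python
# Back-of-the-envelope estimate: how large a linearly transferred chunk
# must be for seek latency to become a small fraction of total time.
# SEEK_S and RATE_BPS are assumed typical HDD values, not measurements.

SEEK_S = 0.008     # assumed average seek + rotational latency, seconds
RATE_BPS = 150e6   # assumed sequential read rate, bytes/second

def chunk_for_overhead(max_overhead):
    """Smallest chunk size (bytes) such that the seek accounts for at
    most `max_overhead` of the time per chunk:
    seek / (seek + chunk/rate) <= max_overhead."""
    return SEEK_S * RATE_BPS * (1 - max_overhead) / max_overhead

for frac in (0.5, 0.1, 0.01):
    mb = chunk_for_overhead(frac) / 1e6
    print(f"{frac:4.0%} overhead -> chunk >= {mb:.1f} MB")
# With these assumed numbers: ~1.2 MB to merely halve throughput,
# ~10.8 MB for 10% overhead, ~118.8 MB for 1% overhead.
```

So under those assumptions "quite large" is right: getting seek overhead
down to a few percent needs chunks in the tens-of-megabytes range.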

-- 
  _____________________________________________________________________
     / __// /__ ____  __               http://www.modeemi.fi/~flux/\   \
    / /_ / // // /\ \/ /                                            \  /
   /_/  /_/ \___/ /_/\_\@modeemi.fi                                  \/
