On 24/02/2014 06:27, Facundo Curti wrote:
Hi. It's me again, with a question similar to my previous one.

I want to set up RAID on SSDs.

Comparing THEORETICALLY, RAID0 (stripe) vs. RAID1 (mirror), the
performance would be something like this:

n = number of disks

reads:
   raid1: n*2
   raid0: n*2

writes:
   raid1: n
   raid0: n*2

But in real life, RAID0 reads don't work like that at all: if you use a
chunk size of 4k and you only need to read 2kB (most binaries, txt
files, etc.), the read speed should be just n.

While the workload does matter, that's not really how it works. Be aware that Linux implements read-ahead (defaulting to 128K):-

# blockdev --getra /dev/sda
256

That's enough to populate 32 pages in the pagecache, given that PAGESIZE is 4K on i386/amd64.
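
You can also change it at runtime if you want to experiment; a quick sketch, reusing /dev/sda from above (the 512-sector figure is just an arbitrary example, not a recommendation):

# blockdev --setra 512 /dev/sda
# blockdev --getra /dev/sda
512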


On the other hand, I read on the net that the kernel doesn't support
multithreaded reads on RAID1, so the read speed will always be just n.
Is that true?

No, it is not true. Read balancing is implemented in RAID-1.
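
You can observe this for yourself by starting a couple of concurrent sequential readers and watching the per-member statistics. A rough sketch, assuming the array is /dev/md0 with members sda and sdb:

# dd if=/dev/md0 of=/dev/null bs=1M count=2048 &
# dd if=/dev/md0 of=/dev/null bs=1M count=2048 skip=2048 &
# iostat -dx sda sdb 1

With read balancing in effect, both members should show read activity.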


Anyway, my question is: which has the best read speed for day-to-day
use? I'm not asking about reads of large files, just about "normal"
use: opening Firefox, X, regular files, etc.

For casual usage, it shouldn't make any difference.


I can't find a definitive guide. Everything I read talks about
theoretical performance, or about "real life" but without benchmarks
or reliable data.

Having a RAID0 with SSDs, and following [2] on "SSD Stripe Optimization",
should I get the same speed as with a RAID1?

I would highly recommend conducting your own benchmarks. I find sysbench to be particularly useful.
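
For instance, a random read pass with sysbench's fileio mode might look something like this (the 8G figure is only an example; make it comfortably larger than your RAM so that the pagecache doesn't mask the result):

# sysbench --test=fileio --file-total-size=8G prepare
# sysbench --test=fileio --file-total-size=8G --file-test-mode=rndrd --max-time=60 run
# sysbench --test=fileio --file-total-size=8G cleanup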



My question comes down to a choice between a 4-disk RAID1 and a RAID10
(I want redundancy either way). And as RAID10 = 1+0, I need to know
RAID0 performance in order to choose... I don't need write speed, just read.

In Linux, RAID-10 is not really nested because the mirroring and striping are fully integrated. If you want the best read performance with RAID-10 then the "far" layout is supposed to be the best [1].

Here is an example of how to choose this layout:

# mdadm -C /dev/md0 -n 4 -l 10 -p f2 /dev/sda /dev/sdb /dev/sdc /dev/sdd

Note, however, that the far layout will exhibit worse performance than the "near" layout if the array is in a degraded state. Also, it increases seek time in random/mixed workloads but this should not matter if you are using SSDs.
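
Once the array is assembled, you can confirm which layout was chosen:

# mdadm --detail /dev/md0
# cat /proc/mdstat

The Layout field in the mdadm output should read far=2 for the command above.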

--Kerin

[1] http://neil.brown.name/blog/20040827225440
