On 08/13/2013 02:56 AM, Dmitry Postrigan wrote:
I am currently installing some backup servers with 6x3TB drives in them. I played with RAID-10, but I was not impressed at all with how it performs during a recovery.

Anyway, I thought: what if I use Ceph instead of RAID-10? All 6 disks will be local, so I could simply create 6 local OSDs + a monitor, right? Is there anything I need to watch out for in such a configuration?
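
For reference, a single-host layout like the one above can be described in ceph.conf roughly as follows. This is only a rough sketch, not a tested configuration; the hostname, monitor address, and mount points are placeholders. The "osd crush chooseleaf type = 0" line is the main single-host gotcha: by default CRUSH tries to place replicas on different hosts, and this setting tells it to spread them across OSDs instead, so both copies can live on the same machine.

[global]
    osd pool default size = 2       ; two replicas, as discussed above
    osd crush chooseleaf type = 0   ; replicate across OSDs, not hosts

[mon.a]
    host = backup01                 ; placeholder hostname
    mon addr = 192.168.1.10:6789    ; placeholder address

[osd.0]
    host = backup01
    osd data = /var/lib/ceph/osd/ceph-0   ; one of the six 3TB disks mounted here

; ...and likewise [osd.1] through [osd.5], one per remaining disk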

You can do that. Although it's nice to play with and everything, I
wouldn't recommend doing it. It will give you more pain than pleasure.

Any specific reason? I just got it up and running, and after simulating some failures, I like it much better than mdraid. Again, this only applies to large arrays (6x3TB in my case). I would not use Ceph to replace a RAID-1 array, of course, but it looks like a good idea to replace a large RAID-10 array with a local Ceph installation.
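
For anyone who wants to reproduce the failure testing: one way to simulate a single-disk failure on a setup like this is simply to stop one OSD daemon and mark it out, then watch the cluster re-replicate. A rough sketch, assuming sysvinit-style init scripts and picking osd.2 arbitrarily:

ceph osd tree              # find the id of the OSD to "fail"
service ceph stop osd.2    # stop that OSD daemon
ceph osd out 2             # tell CRUSH to rebalance without it
ceph -w                    # watch the PGs recover onto the remaining OSDs

# bring it back when done
service ceph start osd.2
ceph osd in 2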

The only thing I do not enjoy about Ceph is performance. I probably need to do more tweaking, but so far the numbers are not very impressive. I have two identical servers running the same OS, kernel, etc. Each server has 6x 3TB drives (same model and firmware #).

Server 1 runs Ceph (2 replicas)
Server 2 runs mdraid (RAID-10)

I ran some very basic benchmarks on both servers:

dd if=/dev/zero of=/storage/test.bin bs=1M count=100000
Ceph: 113 MB/s
mdraid: 467 MB/s


dd if=/storage/test.bin of=/dev/null bs=1M
Ceph: 114 MB/s
mdraid: 550 MB/s


As you can see, mdraid is far faster than Ceph. It could be "by design", or perhaps I am not doing it right. Even with such a difference in speed, I would still go with Ceph because *I think* it is more reliable.

A couple of things:

1) Ceph is doing full data journal writes, so that is going to eat (at least) half of your write performance right there (see the journal sketch after this list).

2) Ceph tends to like lots of concurrency. You'll probably see higher numbers with multiple dd reads/writes going at once (see the parallel dd sketch after this list).

3) Ceph is a lot more complex than something like mdraid. It gives you a lot more power and flexibility, but the cost is greater complexity. There are probably things you can tune to get your numbers up, but it could take some work.
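
On point 1, one common way to claw back some of that write performance is to put the OSD journals on a separate device (ideally an SSD) instead of on the data disks themselves. A minimal sketch of what that could look like in ceph.conf, assuming a hypothetical SSD at /dev/sdh split into six small partitions:

[osd.0]
    host = backup01                 ; placeholder hostname
    osd journal = /dev/sdh1         ; journal partition on the separate SSD

; ...and similarly osd.1 through osd.5 using /dev/sdh2 .. /dev/sdh6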
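
On point 2, a quick way to see the effect of concurrency is to run several dd streams at once and add up the per-stream rates, e.g.:

# four parallel 25 GB write streams instead of one 100 GB stream
for i in 1 2 3 4; do
    dd if=/dev/zero of=/storage/test$i.bin bs=1M count=25000 &
done
wait

rados bench, run directly against a pool, is another way to drive a configurable number of concurrent operations without going through the filesystem layer.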

Having said all of this, my primary test box is a single server and I can get 90 MB/s+ per drive out of Ceph (with 24 drives!), but if I were building a production box and never planned to expand to multiple servers, I'd certainly be looking into ZFS or Btrfs RAID.

Mark



_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
