>> I am currently installing some backup servers with 6x3TB drives in them.
>> I played with RAID-10 but I was not impressed at all with how it performs
>> during a recovery.
>> 
>> Anyway, I thought what if instead of RAID-10 I use ceph? All 6 disks
>> will be local, so I could simply create 6 local OSDs + a monitor, right?
>> Is there anything I need to watch out for in such a configuration?

> You can do that. Although it's nice to play with and everything, I
> wouldn't recommend doing it. It will give you more pain than pleasure.

Any specific reason? I just got it up and running, and after simulating
some failures, I like it much better than mdraid. Again, this only applies
to large arrays (6x3TB in my case). I would not use ceph to replace a
RAID-1 array, of course, but it looks like a good idea to replace a large
RAID-10 array with a local ceph installation.
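
For anyone else trying this, one thing to watch out for on a single host:
the default CRUSH rule wants to put replicas on different hosts, so with
all 6 OSDs on one box the PGs will never go active+clean until you drop
the failure domain to the OSD level. A minimal sketch of the relevant
ceph.conf bits (values are just what I would start from):

[global]
# all OSDs live on one host, so replicate across OSDs, not hosts
osd crush chooseleaf type = 0
# two copies of every object, roughly what I get from RAID-10
osd pool default size = 2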

The only thing I do not enjoy about ceph is performance. I probably need
to do more tweaking, but so far the numbers are not very impressive. I
have two identical servers running the same OS, kernel, etc. Each server
has 6x3TB drives (same model and firmware revision).
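
The first tweak on my list is moving the OSD journals off the spinning
disks onto an SSD; with the journal and data on the same drive, every
write hits the disk twice. A sketch of what that might look like in
ceph.conf (device paths are just examples):

[osd]
# example only -- one small SSD partition per OSD journal
osd journal = /dev/disk/by-partlabel/journal-$id
osd journal size = 5120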

Server 1 runs ceph (2 replicas)
Server 2 runs mdraid (raid-10)

I ran some very basic benchmarks on both servers:

dd if=/dev/zero of=/storage/test.bin bs=1M count=100000
Ceph: 113 MB/s
mdraid: 467 MB/s


dd if=/storage/test.bin of=/dev/null bs=1M 
Ceph: 114 MB/s
mdraid: 550 MB/s
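
One caveat about these numbers: dd from /dev/zero without a sync flag
partly measures the page cache rather than the disks. If anyone wants to
reproduce this, something along these lines should give more comparable
results (the pool name is just an example):

# write test, flushing data to disk before dd reports a speed
dd if=/dev/zero of=/storage/test.bin bs=1M count=100000 conv=fdatasync

# drop caches so the read test actually hits the disks
echo 3 > /proc/sys/vm/drop_caches
dd if=/storage/test.bin of=/dev/null bs=1M

# raw RADOS throughput, bypassing the filesystem layer
rados bench -p rbd 60 write --no-cleanup
rados bench -p rbd 60 seq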


As you can see, mdraid is far faster than ceph. That could be by design,
or perhaps I am not doing it right. Even with such a difference in speed,
I would still go with ceph because *I think* it is more reliable.
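
For reference, the kind of failure simulation I am talking about is
roughly this on each box (the OSD number and device names are examples):

# ceph: mark one OSD out and watch it re-replicate
ceph osd out 3
ceph -w

# mdraid: fail and remove one member, then watch the rebuild
mdadm /dev/md0 --fail /dev/sdc1 --remove /dev/sdc1
cat /proc/mdstat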

Dmitry
