On 08/13/2013 09:47 AM, Dmitry Postrigan wrote:

>> Why would you want to make this switch?
> 
> I do not think RAID-10 on six 3TB disks is going to be reliable at all.
> I have simulated several failures, and it looks like a rebuild will take
> a lot of time. Funnily enough, during one of these experiments, another
> drive failed, and I lost the entire array. Good luck recovering from
> that...

good point.

> I feel that Ceph is better than mdraid because:
> 1) When the ceph cluster is far from full, 'rebuilding' will be much
> faster than with mdraid

true
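
back-of-the-envelope: mdraid always rebuilds the whole member disk, so a
3TB drive at ~100 MB/s sustained takes about 3,000,000 MB / 100 MB/s ≈
8.3 hours, no matter how empty the array is. ceph only re-replicates the
objects that actually exist, spread across all remaining OSDs in
parallel, so a half-empty cluster recovers in a fraction of that. exact
numbers depend on your disks and network, of course.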

> 2) You can easily change the number of replicas

true
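
for example, assuming a pool named 'data' (the pool name is just a
placeholder here):

  ceph osd pool set data size 3

this takes effect immediately, and ceph creates or drops replicas in the
background -- no reshaping of the whole array as with mdraid.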

> 3) When multiple disks have bad sectors, I suspect ceph will be much
> easier to recover data from than mdraid, which will simply never
> finish rebuilding.

maybe not true. also, if you have one disk that is starting to get slow
(because of an upcoming failure), ceph will slow down drastically, and
you will need to find the failing disk yourself.
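
a starting point for hunting down such a disk (depending on your ceph
version; the device name is just an example):

  ceph osd perf                # per-OSD commit/apply latencies
  smartctl -a /dev/sdX         # SMART status of a suspect drive

an OSD whose latency is consistently higher than its peers' is usually
the one to look at.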

> 4) If we need to migrate data over to a different server with no downtime, we 
> just add more OSDs, wait, and
> then remove the old ones :-)

true. but maybe not as easy and painless as you would expect it to be.
also bear in mind that ceph needs a monitor up and running at all times.
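
roughly, the drain-and-remove dance per OSD looks like this (osd.3 is
just an example id):

  ceph osd out 3               # start draining data off the OSD
  # wait until 'ceph -s' reports a healthy cluster again
  ceph osd crush remove osd.3
  ceph auth del osd.3
  ceph osd rm 3

repeat for every old OSD, and keep your monitors (ideally three, for
quorum) up the whole time.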

> This is my initial observation though, so please correct me if I am wrong.

ceph is easier to maintain than most distributed systems I know, but
still harder than a local RAID. Keep that in mind.

> Dmitry

Wolfgang

-- 
DI (FH) Wolfgang Hennerbichler
Software Development
Unit Advanced Computing Technologies
RISC Software GmbH
A company of the Johannes Kepler University Linz

IT-Center
Softwarepark 35
4232 Hagenberg
Austria

Phone: +43 7236 3343 245
Fax: +43 7236 3343 250
wolfgang.hennerbich...@risc-software.at
http://www.risc-software.at