Currently Ceph uses replication. Each pool is set with a replication factor. A replication factor of 1 obviously offers no redundancy; factors of 2 or 3 are common. So Ceph currently halves or thirds your usable storage, respectively. Note also that you can co-mingle pools with different replication factors, so the actual math can get more complicated.
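As a rough sketch of the replication math (the function name and numbers here are mine, purely for illustration):

```python
# Usable capacity under simple replication: raw capacity divided by
# the pool's replication factor.
def usable_tb(raw_tb, replication_factor):
    """Usable storage for a pool with the given replication factor."""
    return raw_tb / replication_factor

print(usable_tb(100, 2))  # replication 2 halves raw storage -> 50.0
print(usable_tb(100, 3))  # replication 3 thirds it -> ~33.3
```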

There is a team of developers building an Erasure Coding backend for Ceph that will allow for more options.

http://wiki.ceph.com/01Planning/02Blueprints/Dumpling/Erasure_encoding_as_a_storage_backend

http://wiki.ceph.com/01Planning/02Blueprints/Emperor/Erasure_coded_storage_backend_%28step_2%29

Initial release is scheduled for Ceph's Firefly release in February 2014.
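Erasure coding splits each object into k data chunks plus m coding chunks, so the raw-to-usable ratio is (k+m)/k rather than a whole-number replication factor. A quick sketch (the example profile k=10, m=2 is my own, not a Ceph default):

```python
# Raw bytes stored per usable byte under a k-data / m-coding erasure scheme.
def ec_overhead(k, m):
    """Overhead factor: (data + coding chunks) / data chunks."""
    return (k + m) / k

print(ec_overhead(10, 2))  # k=10, m=2 -> 1.2x, i.e. ~17% raw lost to coding
```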


Thanks,

Mike Dawson
Co-Founder & Director of Cloud Architecture
Cloudapt LLC

On 10/3/2013 2:44 PM, Aronesty, Erik wrote:
Does Ceph really halve your storage like that?

If you specify N+1, does it really store two copies, or just compute
checksums across MxN stripes?  I guess RAID5+Ceph with a large array (12 disks
say) would be not too bad (2.2TB for each 1).
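Checking the back-of-envelope figure above (the factors are my assumptions: RAID5 over 12 disks stores 12 raw disks per 11 usable, and a Ceph pool at replication 2 doubles that again):

```python
# RAID5 parity overhead on a 12-disk array, then Ceph replication on top.
raid5_overhead = 12 / 11          # ~1.09x raw per usable byte
ceph_replicas = 2                 # pool with replication factor 2
total = ceph_replicas * raid5_overhead
print(round(total, 1))            # -> 2.2, matching "2.2TB for each 1"
```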

But it would be nicer, if I had 12 storage units in a single rack on a single 
network, for me to tell Ceph to stripe across them in a RAIDZ fashion, so that 
I'm only losing 10% of my storage to redundancy... not 50%.

-----Original Message-----
From: ceph-users-boun...@lists.ceph.com 
[mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of John-Paul Robinson
Sent: Thursday, October 03, 2013 12:08 PM
To: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Ceph and RAID

What is your take on such a configuration?

Is it worth the effort of tracking "rebalancing" at two layers, the RAID
mirror and possibly Ceph if the pool has a redundancy policy?  Or is it
better to just let Ceph rebalance itself when you lose a non-mirrored disk?

If following the "raid mirror" approach, would you then skip redundancy
at the Ceph layer to keep your total overhead the same?  That seems
risky in the event you lose the storage server with the raid-1'd
drives: no Ceph-level redundancy would then be fatal.  But if you do
raid-1 plus Ceph redundancy, doesn't that mean it takes 4TB for each 1
real TB?
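The 4-to-1 claim above works out as a simple product of the two layers (a sketch under the stated assumptions: 2-way RAID-1 mirror, replication-2 pool):

```python
# Combined overhead of mirroring at the disk layer and replicating at Ceph.
raid1_copies = 2          # mdadm RAID-1 keeps two copies of every block
ceph_replicas = 2         # Ceph pool with replication factor 2
raw_per_usable_tb = raid1_copies * ceph_replicas
print(raw_per_usable_tb)  # -> 4, i.e. 4TB raw per 1TB of real data
```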

~jpr

On 10/02/2013 10:03 AM, Dimitri Maziuk wrote:
I would consider (mdadm) raid-1, depending on the hardware & budget,
because this way a single disk failure will not trigger a cluster-wide
rebalance.

_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com