Hi,

>> My question is how much total CEPH storage does this allow me? Only 2.3TB? 
>> or does the way CEPH duplicates data enable more than 1/3 of the storage?
> 3 means 3, so 2.3TB. Note that Ceph is sparse, so that can help quite a bit.

To expand on this, you probably want to keep some margin and not run your 
cluster at 100% :) (especially if you are running RBD with thin provisioning). By 
default, “ceph status” will issue a warning once an OSD is 85% full (the osd 
nearfull ratio). You should also keep some free space available so that self-healing 
can work: if you run more than 3 OSDs on a size=3 pool, the cluster needs spare 
room to re-replicate the data of a failed OSD onto the survivors.
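A quick back-of-the-envelope sketch of how those margins stack up. The raw capacity
(7 TB) and OSD count (4) below are assumptions for illustration, not numbers from
this thread; only the replica count of 3 and the default 85% nearfull ratio come
from the discussion above.

    # capacity_sketch.py - rough usable-capacity estimate for a size=3 pool
    RAW_TB = 7.0            # total raw capacity across all OSDs (assumed)
    OSD_COUNT = 4           # number of OSDs (assumed)
    REPLICA_SIZE = 3        # pool size, i.e. copies kept of each object
    NEARFULL_RATIO = 0.85   # default nearfull ratio; "ceph status" warns past this

    # Every object is stored REPLICA_SIZE times, so logical capacity is raw / size.
    logical_tb = RAW_TB / REPLICA_SIZE

    # Stay under the nearfull warning threshold.
    below_nearfull_tb = logical_tb * NEARFULL_RATIO

    # Leave headroom for self-healing: if one OSD dies, its share of the data
    # must be re-replicated onto the surviving OSDs, which need the free space.
    healing_headroom_tb = (RAW_TB / OSD_COUNT) / REPLICA_SIZE
    comfortable_tb = below_nearfull_tb - healing_headroom_tb

    print(f"logical capacity:        {logical_tb:.2f} TB")
    print(f"below nearfull warning:  {below_nearfull_tb:.2f} TB")
    print(f"with healing headroom:   {comfortable_tb:.2f} TB")

In other words, the ~2.3TB figure is the ceiling; what you can comfortably fill is
noticeably less once you account for the nearfull margin and recovery space.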

Cheers,
Maxime 
