Thank you! That helps a lot.
On Mar 12, 2015 10:40 AM, "Steve Anthony" <sma...@lehigh.edu> wrote:

>  Actually, it's more like 41TB. It's a bad idea to run at near full
> capacity (by default past 85%) because you need some space where Ceph can
> replicate data as part of its healing process in the event of disk or node
> failure. You'll get a health warning when you exceed this ratio.
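>
> As a rough sketch of where that number comes from (assuming 145TB raw,
> 3x replication, and the default 0.85 near-full ratio; the exact accounting
> will vary with your actual OSD layout), in Python:
>
> raw_tb = 145.0          # total raw capacity across all OSDs
> replicas = 3            # pool size, i.e. copies kept of each object
> nearfull_ratio = 0.85   # default near-full warning threshold
>
> usable_tb = raw_tb / replicas              # ~48TB if every disk were filled
> practical_tb = usable_tb * nearfull_ratio  # ~41TB before warnings start
> print(f"usable: {usable_tb:.1f} TB, practical: {practical_tb:.1f} TB")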
>
> You can use erasure coding to increase the amount of data you can store
> beyond 41TB, but you'll still need some replicated disk as a caching layer
> in front of the erasure coded pool if you're using RBD. See:
> http://lists.ceph.com/pipermail/ceph-users-ceph.com/2013-December/036430.html
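>
> Very roughly, that setup looks something like the sketch below (the pool
> names, profile name, and PG counts are just placeholders, and a real cache
> tier also needs hit_set and target size tuning):
>
> import subprocess
>
> def ceph(*args):
>     """Run a ceph CLI command, raising if it exits nonzero."""
>     subprocess.run(["ceph", *args], check=True)
>
> # Erasure code profile with 4 data chunks and 2 coding chunks (k=4, m=2).
> ceph("osd", "erasure-code-profile", "set", "ec42", "k=4", "m=2")
>
> # Bulk erasure coded pool, plus a replicated pool to act as the cache tier.
> ceph("osd", "pool", "create", "ecpool", "128", "128", "erasure", "ec42")
> ceph("osd", "pool", "create", "cachepool", "128", "128")
>
> # Put the replicated pool in front of the EC pool as a writeback cache, so
> # RBD clients talk to the cache and Ceph flushes objects to the EC pool.
> ceph("osd", "tier", "add", "ecpool", "cachepool")
> ceph("osd", "tier", "cache-mode", "cachepool", "writeback")
> ceph("osd", "tier", "set-overlay", "ecpool", "cachepool")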
>
> As to how much space you can save with erasure coding, that will depend on
> whether you're using RBD (and therefore need a cache layer) and on the values
> you set for k and m (the number of data chunks and coding chunks). There's
> been some discussion on the list about choosing those values.
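>
> For example, here is a quick comparison of usable space under 3x replication
> versus a few possible k/m choices (ignoring the replicated cache pool's own
> footprint, and again assuming the default 0.85 near-full ratio):
>
> raw_tb = 145.0
>
> def usable(raw, fraction, nearfull=0.85):
>     """Usable capacity in TB before the near-full warning kicks in."""
>     return raw * fraction * nearfull
>
> # Replication stores `size` full copies, so the usable fraction is 1/size;
> # erasure coding stores k data + m coding chunks, so it's k/(k+m).
> print("3x replication:", round(usable(raw_tb, 1 / 3), 1), "TB")
> for k, m in [(3, 2), (4, 2), (8, 3)]:
>     print(f"EC k={k} m={m}:", round(usable(raw_tb, k / (k + m)), 1), "TB")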
>
> -Steve
>
> On 03/12/2015 10:07 AM, Thomas Foster wrote:
>
> I am looking into how I can maximize my usable space given replication, and I
> am trying to understand how best to do that.
>
>  I have 145TB of raw space and a replication factor of 3 for the pool, so I
> was thinking the maximum data I can have in the cluster at one time is ~47TB.
> Is that correct? Or is there a way to get more data into the cluster with less
> raw space overhead by using erasure coding?
>
>  Any help would be greatly appreciated.
>
>
>
>
> --
> Steve Anthony
> LTS HPC Support Specialist
> Lehigh University
> sma...@lehigh.edu
>
>
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
