On 05/12/17 09:20, Ronny Aasen wrote:
> On 05. des. 2017 00:14, Karun Josy wrote:
>> Thank you for detailed explanation!
>>
>> Got another doubt:
>>
>> This is the total space available in the cluster :
>>
>> TOTAL : 23490G
>> Use  : 10170G
>> Avail : 13320G
>>
>>
>> But ecpool shows max avail as just 3 TB. What am I missing ?
>>
>> Karun Josy
> 
> without knowing details of your cluster this is just guesswork, 
> but...
> 
> perhaps one of your hosts has less free space than the others. A replicated 
> pool can pick 3 of the hosts that have plenty of space, but erasure coding 
> requires more hosts, so the host with the least space is the limiting factor.
> 
> check
> ceph osd df tree
> 
> to see how it looks.
> 
> 
> kind regards
> Ronny Aasen

From previous emails the erasure code profile is k=5,m=3, with a host failure 
domain, so the EC pool does use all eight hosts for every object. I agree it's 
very likely that your hosts currently have heterogeneous free space and that 
the maximum data in the EC pool is limited by the host with the least space 
available.
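To make that limit concrete, here is a rough sketch (not Ceph's actual
accounting code) of how the emptiest host caps an EC pool's MAX AVAIL when the
failure domain is host and every object needs a chunk on all k+m hosts. The
per-host free-space figures are hypothetical, purely for illustration:

```python
# Rough sketch of the host-limited MAX AVAIL calculation for an EC pool
# with a host failure domain. Not Ceph's real code, just the arithmetic.
def ec_max_avail(free_per_host_gb, k, m):
    # Every object places one chunk on each of k+m distinct hosts, so the
    # host with the least free space caps how much the pool can grow.
    assert len(free_per_host_gb) >= k + m
    min_free = min(free_per_host_gb)
    raw_capacity = min_free * (k + m)    # usable raw space across the stripe
    return raw_capacity * k / (k + m)    # real (logical) data = raw * k/(k+m)

# Hypothetical per-host free space in GB: seven roomy hosts, one nearly full.
hosts = [1800, 1750, 1900, 1700, 1850, 1800, 1750, 700]
print(ec_max_avail(hosts, k=5, m=3))  # -> 3500.0
```

With one host down to 700GB free, the pool tops out at 3.5TB of real data even
though the other hosts still have plenty of room, which is the kind of gap the
original question describes.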

Also remember that with this profile you write 8 chunks for every 5 chunks of 
data, so 1GB of real data stored in the pool translates to 1.6GB of raw data 
on disk. The pool usage and MAX AVAIL stats are given in terms of real data, 
but the cluster TOTAL usage/availability is expressed in terms of raw space 
(since real usable capacity varies depending on pool settings). If you check, 
you will probably find that the pool can draw on a little under 6TB of raw 
space before your lowest-capacity host fills up, which at the 5/8 ratio would 
let you store a little over 3.5TB of real data in your EC pool.

Rich

_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com