[ceph-users] ceph df shows 100% used

2018-01-18 Thread Webert de Souza Lima
Hello, I'm running a nearly out-of-service radosgw (very slow to write new objects) and I suspect it's because ceph df is showing 100% usage in some pools, though I don't know where that information comes from. Pools: #~ ceph osd pool ls detail -> http://termbin.com/lsd0 Crush Rules (important is
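(For context, a minimal sketch of the commands typically used to gather this kind of per-pool usage information; the pool name below is a placeholder, not one of the actual pools in this cluster:)
~# ceph df detail                          # per-pool USED, %USED and MAX AVAIL
~# ceph osd pool ls detail                 # size, min_size, crush ruleset and quotas for every pool
~# ceph osd pool get rgw.buckets.data all  # all settings of a single (hypothetical) pool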

Re: [ceph-users] ceph df shows 100% used

2018-01-18 Thread Webert de Souza Lima
Also, there is no quota set for the pools. Here is "ceph osd pool get xxx all": http://termbin.com/ix0n Regards, Webert Lima DevOps Engineer at MAV Tecnologia *Belo Horizonte - Brasil* *IRC NICK - WebertRLZ*
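(A quick way to double-check that no quota is the limiting factor; the pool name is a placeholder:)
~# ceph osd pool get-quota rgw.buckets.data   # prints the pool's max objects / max bytes; "N/A" means no quota is set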

Re: [ceph-users] ceph df shows 100% used

2018-01-18 Thread Webert de Souza Lima
Sorry, I forgot: this is Ceph Jewel 10.2.10. Regards, Webert Lima DevOps Engineer at MAV Tecnologia *Belo Horizonte - Brasil* *IRC NICK - WebertRLZ*

Re: [ceph-users] ceph df shows 100% used

2018-01-18 Thread David Turner
You can have overall space available in your cluster while some pools are full, because not all of your disks are in the same crush root. You have multiple roots corresponding to multiple crush rulesets. All pools using crush ruleset 0 are full because all of the OSDs in that crush rule are full. On Thu, Jan 18, 2018 at 3
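(A rough sketch of how to see which root each ruleset draws from and which ruleset each pool uses; the pool name is a placeholder, and on Jewel the pool property is crush_ruleset:)
~# ceph osd tree                                      # full crush hierarchy, including every root
~# ceph osd crush rule dump                           # the "take <root>" step shows each ruleset's root
~# ceph osd pool get rgw.buckets.data crush_ruleset   # which ruleset this (hypothetical) pool uses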

Re: [ceph-users] ceph df shows 100% used

2018-01-18 Thread David Turner
`ceph osd df` is a good command for you to see what's going on. Compare the OSD numbers with `ceph osd tree`. On Thu, Jan 18, 2018 at 5:03 PM David Turner wrote: > You can have overall space available in your cluster while some pools are full, because not all of your disks are in the same crush root. You have multiple
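(As used later in this thread, `ceph osd df tree` merges the two views, grouping per-OSD utilization under the crush hierarchy:)
~# ceph osd df tree   # per-OSD %USE laid out under root/host, which makes the full ruleset-0 OSDs easy to spot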

Re: [ceph-users] ceph df shows 100% used

2018-01-18 Thread David Turner
Your hosts are also not balanced in your default root. Your failure domain is host, but one of your hosts has 8.5TB of storage in it compared to 26.6TB and 29.6TB. You only have size=2 (along with min_size=1, which is bad for a lot of reasons), so it should still be able to place data mostly between
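(For reference, a hedged sketch of how size/min_size can be inspected and raised; the pool name is a placeholder, and raising size triggers backfill, so it needs matching raw capacity and planning:)
~# ceph osd pool get rgw.buckets.data size       # 2 in this cluster
~# ceph osd pool get rgw.buckets.data min_size   # 1 here, which lets writes proceed with a single surviving copy
~# ceph osd pool set rgw.buckets.data size 3     # example only
~# ceph osd pool set rgw.buckets.data min_size 2 # example only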

Re: [ceph-users] ceph df shows 100% used

2018-01-18 Thread Webert de Souza Lima
Hi David, thanks for replying. On Thu, Jan 18, 2018 at 5:03 PM David Turner wrote: > You can have overall space available in your cluster while some pools are full, because not all of your disks are in the same crush root. You have multiple roots corresponding to multiple crush rulesets. All pools using crush rules

Re: [ceph-users] ceph df shows 100% used

2018-01-18 Thread Webert de Souza Lima
With the help of robbat2 and llua on the IRC channel I was able to solve this situation by taking down the 2-OSD-only hosts. After crush reweighting OSDs 8 and 23 from host mia1-master-fe02 to 0, ceph df showed the expected storage capacity usage (about 70%). With this in mind, those guys have told me
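(A sketch of the reweighting described above; OSD ids 8 and 23 are the ones named, and a crush weight of 0 stops new data from mapping to them while backfill drains what is already there:)
~# ceph osd crush reweight osd.8 0
~# ceph osd crush reweight osd.23 0
~# ceph df   # re-check %USED / MAX AVAIL once backfill settles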

Re: [ceph-users] ceph df shows 100% used

2018-01-19 Thread Webert de Souza Lima
While it seemed to be solved yesterday, today the %USED has grown a lot again. See: ~# ceph osd df tree http://termbin.com/0zhk ~# ceph df detail http://termbin.com/thox 94% USED while there is about 21TB worth of data; size = 2 means ~42TB RAW usage, but the OSDs in that root sum ~70TB available
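(Spelling out the arithmetic above:)
21 TB of data x 2 replicas     ≈ 42 TB raw used
42 TB raw / ~70 TB raw in root ≈ 60% expected raw utilization
reported by ceph df            ≈ 94% USED, which is the discrepancy in question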

Re: [ceph-users] ceph df shows 100% used

2018-01-19 Thread QR
'MAX AVAIL' in the 'ceph df' output represents the amount of data that can be used before the first OSD becomes full, and not the sum of all free space across a set of OSDs. Original mail From: Webert de Souza Lima To: ceph-users Sent: January 19, 2018 (
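(A hedged illustration of that point, with made-up numbers: with size=2, if the fullest OSD in the pool's crush root has only ~6% of its capacity left, MAX AVAIL works out to roughly that 6% projected across the root and divided by the 2 replicas, no matter how empty the other OSDs are. The bounding OSD can be spotted with:)
~# ceph osd df tree   # the OSD with the highest %USE in the pool's root is the one that caps MAX AVAIL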

Re: [ceph-users] ceph df shows 100% used

2018-01-22 Thread Webert de Souza Lima
Hi, On Fri, Jan 19, 2018 at 8:31 PM, zhangbingyin wrote: > 'MAX AVAIL' in the 'ceph df' output represents the amount of data that can be used before the first OSD becomes full, and not the sum of all free space across a set of OSDs. Thank you very much. I figured this out by the end of t