Thanks. To summarize:
Your data (images + volumes) = 27.15% space used
Raw used = 81.71%

That is a big difference that I can't account for. Can anyone? So is your
cluster actually full?
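
A couple of commands that might help narrow it down (ceph osd df needs Hammer
or later, I believe, and the mount path below is just the default, so adjust
for your hosts):

---
# Per-OSD utilization, to see whether raw usage is spread evenly
ceph osd df
# More detailed per-pool breakdown than plain ceph df
ceph df detail
# What the filesystem itself reports for one OSD's data directory
df -h /var/lib/ceph/osd/ceph-0
---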

I had the same problem with my small cluster. Raw used was about 85%, while the
actual data, with replication, was about 30%. My OSDs were also on BTRFS, and
BTRFS was causing problems of its own. I fixed it by removing each OSD one at a
time and re-adding it with the default XFS filesystem. Doing so brought the two
percentages roughly in line, and it has been good since. My observation is that
Ceph wasn't reclaiming used space.
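
Roughly the per-OSD procedure, in case it helps (from memory; Hammer-era
ceph-disk/sysvinit tooling, with the OSD id and device below only as
placeholders, so adjust for your own setup):

---
ceph osd out 2                            # let data migrate off, wait for HEALTH_OK
service ceph stop osd.2                   # run on the OSD's host
ceph osd crush remove osd.2
ceph auth del osd.2
ceph osd rm 2
ceph-disk prepare --fs-type xfs /dev/sdb  # re-create the OSD on XFS
ceph-disk activate /dev/sdb1
---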

My version was Hammer

/don



From: Dimitar Boichev [mailto:dimitar.boic...@axsmarine.com]
Sent: Friday, February 19, 2016 1:19 AM
To: Dimitar Boichev <dimitar.boic...@axsmarine.com>; Vlad Blando 
<vbla...@morphlabs.com>; Don Laursen <don.laur...@itopia.ca>
Cc: ceph-users <ceph-users@lists.ceph.com>
Subject: RE: [ceph-users] How to properly deal with NEAR FULL OSD

Sorry, that was a reply to the wrong message.

Regards.

From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of 
Dimitar Boichev
Sent: Friday, February 19, 2016 10:19 AM
To: Vlad Blando; Don Laursen
Cc: ceph-users
Subject: Re: [ceph-users] How to properly deal with NEAR FULL OSD

I have seen this when recovery was going on for some PGs while we were deleting
large amounts of data. The discrepancy disappeared when the recovery process
finished.
This was on Firefly 0.80.7.


Regards.

From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Vlad 
Blando
Sent: Friday, February 19, 2016 3:31 AM
To: Don Laursen
Cc: ceph-users
Subject: Re: [ceph-users] How to properly deal with NEAR FULL OSD

I changed the volumes pool from 300 to 512 PGs to even out the distribution;
right now it is backfilling and remapping, and I can see that it's working.
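
For reference, the change was something along these lines (pg_num first, then
pgp_num so the data actually remaps):

---
ceph osd pool set volumes pg_num 512
ceph osd pool set volumes pgp_num 512
---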

---
osd.2 is near full at 85%
osd.4 is near full at 85%
osd.5 is near full at 85%
osd.6 is near full at 85%
osd.7 is near full at 86%
osd.8 is near full at 88%
osd.9 is near full at 85%
osd.11 is near full at 85%
osd.12 is near full at 86%
osd.16 is near full at 86%
osd.17 is near full at 85%
osd.20 is near full at 85%
osd.23 is near full at 86%
---

We will be adding a new node to the cluster after this.

Another question: I'd like to temporarily raise the near full OSD warning
threshold from 85% to 90%, but I can't remember the command.
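
I think it was something like the command below, but I'm not sure it's the
right one for my version, so please correct me:

---
ceph pg set_nearfull_ratio 0.90
---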


@don
ceph df
---
[root@controller-node ~]# ceph df
GLOBAL:
    SIZE        AVAIL      RAW USED     %RAW USED
    100553G     18391G     82161G       81.71
POOLS:
    NAME        ID     USED       %USED     OBJECTS
    images      4      8927G      8.88      1143014
    volumes     5      18374G     18.27     4721934
[root@controller-node ~]#
---


_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
