If you do have a large enough drive on all of your mons (and intend to keep it 
that way), you can increase the mon store warning threshold in the config file 
so that it no longer warns at 15360 MB.
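
For example, something along these lines in the [mon] section of ceph.conf 
should raise the threshold to roughly 30 GB. This is just a sketch assuming the 
mon_data_size_warn option, which takes a value in bytes and defaults to the 
15360 MB you are seeing in the warning:

    [mon]
        # warn at ~30 GB instead of the default 15360 MB (value is in bytes)
        mon data size warn = 32212254720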

________________________________

David Turner | Cloud Operations Engineer | StorageCraft Technology 
Corporation <https://storagecraft.com>
380 Data Drive Suite 300 | Draper | Utah | 84020
Office: 801.871.2760 | Mobile: 385.224.2943

________________________________________
From: ceph-users [ceph-users-boun...@lists.ceph.com] on behalf of Wido den 
Hollander [w...@42on.com]
Sent: Tuesday, January 31, 2017 2:35 AM
To: Martin Palma; CEPH list
Subject: Re: [ceph-users] mon.mon01 store is getting too big! 18119 MB >= 15360 
MB -- 94% avail

> On 31 January 2017 at 10:22, Martin Palma <mar...@palma.bz> wrote:
>
>
> Hi all,
>
> our cluster is currently performing a big expansion and is in recovery
> mode (we doubled in size and OSD count, growing from 600 TB to 1.2 PB).
>

Yes, that is to be expected. As long as not all PGs are active+clean, the MONs 
will not trim their data store.
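
You can keep an eye on both the PG states and the store size from the command 
line while the expansion runs (the store path below is just the common default 
layout, adjust it for your deployment):

    # the mons only start trimming once every PG reports active+clean
    ceph pg stat
    # watch the store size on each mon host, e.g.:
    du -sh /var/lib/ceph/mon/ceph-mon01/store.db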

> Now we get the following message from our monitor nodes:
>
> mon.mon01 store is getting too big! 18119 MB >= 15360 MB -- 94% avail
>
> Reading [0], it says that this is normal while data is actively being
> rebalanced and that the store will be compacted once that is finished.
>
> Should we wait until the recovery is finished or should we perform
> "ceph tell mon.{id} compact" now during recovery?
>

Mainly, wait. You can try a compact, but that can take the mon offline 
temporarily.
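
If you do try it, do it one mon at a time and check quorum in between; 
mon.mon01 below is just the id from the warning above:

    # compact a single mon's store (this can block that mon for a while)
    ceph tell mon.mon01 compact
    # check that the mon is back in quorum before moving on to the next one
    ceph quorum_status --format json-pretty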

Just make sure you have enough disk space :)

Wido

> Best,
> Martin
>
> [0] https://access.redhat.com/solutions/1982273
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
