OK, so it’s telling you that the nearfull OSD holds PGs for all three of those pools.
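To make that relationship concrete, here is a minimal sketch (with made-up utilization numbers and a made-up pool-to-OSD mapping — the real mapping comes from CRUSH, e.g. `ceph pg dump`) of why a single nearfull OSD flags every pool whose PGs it stores:

```python
# Sketch (hypothetical data): how POOL_NEARFULL follows from OSD_NEARFULL.
# A pool is reported nearfull when any OSD holding PGs for it has crossed
# the nearfull ratio (0.85 by default in Luminous).

NEARFULL_RATIO = 0.85  # mon default

# Hypothetical per-OSD utilization (used bytes / total bytes)
osd_utilization = {20: 0.64, 21: 0.70, 22: 0.86, 23: 0.61}

# Hypothetical pool -> OSD mapping; in reality derived from CRUSH
pool_osds = {
    "templates": {20, 22, 23},
    "cvm": {21, 22},
    "ecpool": {20, 21, 22, 23},
}

nearfull_osds = {o for o, u in osd_utilization.items() if u >= NEARFULL_RATIO}
nearfull_pools = sorted(p for p, osds in pool_osds.items() if osds & nearfull_osds)

print(nearfull_osds, nearfull_pools)
```

Because osd.22 backs PGs in all three pools, all three get the POOL_NEARFULL flag even though only one OSD is actually nearfull.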
JC

> On Dec 19, 2017, at 08:05, Karun Josy <karunjo...@gmail.com> wrote:
>
> No, I haven't.
>
> Interestingly, the POOL_NEARFULL flag is shown only when there is an
> OSD_NEARFULL flag.
> I recently upgraded to Luminous 12.2.2; I hadn't seen this flag in 12.2.1.
>
> Karun Josy
>
> On Tue, Dec 19, 2017 at 9:27 PM, Jean-Charles Lopez <jelo...@redhat.com> wrote:
>
> Hi
>
> Did you set quotas on these pools?
>
> See this page for an explanation of most error messages:
> http://docs.ceph.com/docs/master/rados/operations/health-checks/#pool-near-full
>
> JC
>
>> On Dec 19, 2017, at 01:48, Karun Josy <karunjo...@gmail.com> wrote:
>>
>> Hello,
>>
>> In one of our clusters, health is showing these warnings:
>> ---------
>> OSD_NEARFULL 1 nearfull osd(s)
>>     osd.22 is near full
>> POOL_NEARFULL 3 pool(s) nearfull
>>     pool 'templates' is nearfull
>>     pool 'cvm' is nearfull
>>     pool 'ecpool' is nearfull
>> ------------
>>
>> One OSD is above 85% used, which I know caused the OSD_NEARFULL flag.
>> But what does pool(s) nearfull mean?
>> And how can I correct it?
>>
>> ]$ ceph df
>> GLOBAL:
>>     SIZE       AVAIL      RAW USED     %RAW USED
>>     31742G     11147G     20594G       64.88
>> POOLS:
>>     NAME          ID     USED       %USED     MAX AVAIL     OBJECTS
>>     templates     5      196G       23.28     645G          50202
>>     cvm           6      6528       0         1076G         770
>>     ecpool        7      10260G     83.56     2018G         3004031
>>
>> Karun
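As a sanity check on the `ceph df` output quoted above, the percentages can be reproduced from the reported figures. Assuming the usual relationships (GLOBAL %RAW USED = RAW USED / SIZE, and per-pool %USED ≈ USED / (USED + MAX AVAIL); the G values are already rounded by ceph, so small discrepancies are expected):

```python
# Sketch: reproducing the percentages in the quoted `ceph df` output.

# GLOBAL line: 31742G total, 20594G raw used
size_g, raw_used_g = 31742, 20594
raw_pct = raw_used_g / size_g * 100  # reported as 64.88

# ecpool line: 10260G used, 2018G max avail
used_g, max_avail_g = 10260, 2018
pool_pct = used_g / (used_g + max_avail_g) * 100  # reported as 83.56

print(f"{raw_pct:.2f} {pool_pct:.2f}")
```

That puts ecpool at roughly 83.6% of what it can actually store, uncomfortably close to the 85% nearfull ratio, which is consistent with osd.22 tipping over the threshold first.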
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com