Hi Paul,

Thank you for the straightforward explanation. It is very helpful while waiting
for the fix.

Best regards,

On Mon, Sep 30, 2019, 16:38 Paul Emmerich <paul.emmer...@croit.io> wrote:

> It's just a display bug in ceph -s:
>
> https://tracker.ceph.com/issues/40011
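>
> As a quick cross-check of what actually exists (standard CLI only,
> nothing specific to this bug), the pool list and per-pool usage can be
> queried directly, e.g.:
>
>     ceph osd pool ls          # names of the pools that really exist
>     ceph osd pool ls detail   # per-pool pg_num and other settings
>     ceph df                   # per-pool stats; empty once all pools are gone
>
> That shows the actual pool count independently of the "ceph -s" summary line.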
>
> --
> Paul Emmerich
>
> Looking for help with your Ceph cluster? Contact us at https://croit.io
>
> croit GmbH
> Freseniusstr. 31h
> 81247 München
> www.croit.io
> Tel: +49 89 1896585 90
>
> On Sun, Sep 29, 2019 at 4:41 PM Lazuardi Nasution
> <mrxlazuar...@gmail.com> wrote:
> >
> > Hi,
> >
> > I'm starting with Nautilus and have created and deleted some pools. When I
> > check with "ceph status", the "pools" number looks wrong after all pools
> > have been deleted. Is the meaning of the "pools" number different than in
> > Luminous? Since there are no pools and no PGs, why is there still usage
> > shown in "ceph status"?
> >
> > Best regards,
> >
> >   cluster:
> >     id:     e53af8e4-8ef7-48ad-ae4e-3d0486ba0d72
> >     health: HEALTH_OK
> >
> >   services:
> >     mon: 3 daemons, quorum c08-ctrl,c09-ctrl,c10-ctrl (age 3m)
> >     mgr: c08-ctrl(active, since 7d), standbys: c09-ctrl, c10-ctrl
> >     osd: 88 osds: 88 up (since 7d), 88 in (since 7d)
> >
> >   data:
> >     pools:   7 pools, 0 pgs
> >     objects: 0 objects, 0 B
> >     usage:   1.6 TiB used, 262 TiB / 264 TiB avail
> >     pgs:
>
