Yes, it includes all the available pools on the cluster:

*# ceph df*
GLOBAL:
    SIZE       AVAIL      RAW USED     %RAW USED
    53650G     42928G       10722G         19.99
POOLS:
    NAME                ID     USED      %USED     MAX AVAIL     OBJECTS
    volumes             13     2979G     33.73         5854G      767183
    db                  18      856G      4.65        17563G     1657174
    cephfs_data         22       880         0         5854G           6
    cephfs_metadata     23      977k         0         5854G          65

*# rados lspools*
volumes
db
cephfs_data
cephfs_metadata

The good news is that after restarting ceph-mgr, it started working again :)
But, like you said, it would be nice to know how the system got into this state.
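
For anyone else who hits this before a proper fix lands: the kind of
defensive handling John describes below would be a small change around the
failing lookup. This is just an illustrative sketch, not the actual mgr
module code; the helper name and the zeroed fallback fields are assumptions
(loosely based on the columns 'ceph df' prints above):

    # Sketch only: tolerate pools the mgr has no stats for yet,
    # instead of raising KeyError as in the traceback below.
    EMPTY_STATS = {'bytes_used': 0, 'objects': 0}

    def stats_for_pool(pool_stats, pool_id):
        # A freshly created pool may not have reported any stats yet;
        # render it as empty rather than crashing 'ceph fs status'.
        return pool_stats.get(pool_id, EMPTY_STATS)

    # i.e. instead of: stats = pool_stats[pool_id]
    # use:             stats = stats_for_pool(pool_stats, pool_id)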

Thanks a lot John :)

Best,


*German*

2017-12-11 12:17 GMT-03:00 John Spray <jsp...@redhat.com>:

> On Mon, Dec 11, 2017 at 3:13 PM, German Anders <gand...@despegar.com>
> wrote:
> > Hi John,
> >
> > How are you? No problem :). Unfortunately, the error on the 'ceph fs
> > status' command is still happening:
>
> OK, can you check:
>  - does the "ceph df" output include all the pools?
>  - does restarting ceph-mgr clear the issue?
>
> We probably need to modify this code to handle stats-less pools
> anyway, but I'm curious about how the system got into this state.
>
> John
>
>
> > # ceph fs status
> > Error EINVAL: Traceback (most recent call last):
> >   File "/usr/lib/ceph/mgr/status/module.py", line 301, in handle_command
> >     return self.handle_fs_status(cmd)
> >   File "/usr/lib/ceph/mgr/status/module.py", line 219, in handle_fs_status
> >     stats = pool_stats[pool_id]
> > KeyError: (15L,)
> >
> >
> >
> > German
> > 2017-12-11 12:08 GMT-03:00 John Spray <jsp...@redhat.com>:
> >>
> >> On Mon, Dec 4, 2017 at 6:37 PM, German Anders <gand...@despegar.com>
> >> wrote:
> >> > Hi,
> >> >
> >> > I just upgraded a Ceph cluster from version 12.2.0 (rc) to 12.2.2
> >> > (stable), and I'm getting a traceback while trying to run:
> >> >
> >> > # ceph fs status
> >> >
> >> > Error EINVAL: Traceback (most recent call last):
> >> >   File "/usr/lib/ceph/mgr/status/module.py", line 301, in handle_command
> >> >     return self.handle_fs_status(cmd)
> >> >   File "/usr/lib/ceph/mgr/status/module.py", line 219, in handle_fs_status
> >> >     stats = pool_stats[pool_id]
> >> > KeyError: (15L,)
> >> >
> >> >
> >> > # ceph fs ls
> >> > name: cephfs, metadata pool: cephfs_metadata, data pools: [cephfs_data ]
> >> >
> >> >
> >> > Any ideas?
> >>
> >> (I'm a bit late but...)
> >>
> >> Is this still happening, or did it self-correct? It could have been
> >> happening because the pool had just been created and the mgr hadn't
> >> yet heard any stats from the OSDs for that pool (which we should fix,
> >> anyway).
> >>
> >> John
> >>
> >>
> >> >
> >> > Thanks in advance,
> >> >
> >> > German
> >> >
> >
> >
>
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
