Hi,

I have a healthy (test) cluster running 17.2.5:

root@cephtest20:~# ceph status
  cluster:
    id:     ba37db20-2b13-11eb-b8a9-871ba11409f6
    health: HEALTH_OK
  services:
    mon:         3 daemons, quorum cephtest31,cephtest41,cephtest21 (age 2d)
    mgr:         cephtest22.lqzdnk (active, since 4d), standbys: cephtest32.ybltym, cephtest42.hnnfaf
    mds:         1/1 daemons up, 1 standby, 1 hot standby
    osd:         48 osds: 48 up (since 4d), 48 in (since 4M)
    rgw:         2 daemons active (2 hosts, 1 zones)
    tcmu-runner: 6 portals active (3 hosts)
  data:
    volumes: 1/1 healthy
    pools:   17 pools, 513 pgs
    objects: 28.25k objects, 4.7 GiB
    usage:   26 GiB used, 4.7 TiB / 4.7 TiB avail
    pgs:     513 active+clean
  io:
    client:   4.3 KiB/s rd, 170 B/s wr, 5 op/s rd, 0 op/s wr

CephFS is mounted and can be used without any issue.

But I get an error when querying its status:

root@cephtest20:~# ceph fs status
Error EINVAL: Traceback (most recent call last):
  File "/usr/share/ceph/mgr/mgr_module.py", line 1757, in _handle_command
    return CLICommand.COMMANDS[cmd['prefix']].call(self, cmd, inbuf)
  File "/usr/share/ceph/mgr/mgr_module.py", line 462, in call
    return self.func(mgr, **kwargs)
  File "/usr/share/ceph/mgr/status/module.py", line 159, in handle_fs_status
    assert metadata
AssertionError


The dashboard's filesystem page shows no error and displays all information about CephFS.

Where does this AssertionError come from?
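For context, the traceback points at a bare `assert metadata` in the mgr status module's handle_fs_status. The following is only a minimal paraphrase of that kind of check, not the actual Ceph code (the handle_fs_status signature and the get_metadata lookup are assumptions): the mgr looks up per-daemon metadata for each MDS, and the assertion fires when that lookup comes back empty for one of the daemons, e.g. a standby-replay MDS whose metadata is not registered.

```python
# Hypothetical sketch of the failing check in
# /usr/share/ceph/mgr/status/module.py:handle_fs_status.
# 'get_metadata' stands in for the mgr's metadata lookup; names are assumed.

def handle_fs_status(get_metadata, daemon_name):
    # Ask the mgr for the MDS daemon's metadata. If the daemon has not
    # registered any metadata, the lookup may return None or {}.
    metadata = get_metadata('mds', daemon_name)
    # A bare assert on the result: empty/None metadata raises
    # AssertionError, which the CLI wraps as "Error EINVAL: Traceback ...".
    assert metadata
    return metadata
```

So the `ceph fs status` command fails as soon as any MDS in the map has no metadata, even though the filesystem itself is healthy.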

Regards
--
Robert Sander
Heinlein Support GmbH
Linux: Akademie - Support - Hosting
http://www.heinlein-support.de

Tel: 030-405051-43
Fax: 030-405051-19

Mandatory disclosures per §35a GmbHG:
HRB 93818 B / Amtsgericht Berlin-Charlottenburg,
Managing Director: Peer Heinlein  -- Registered office: Berlin
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
