Hi everyone,

I just rebuilt a (test) cluster using:

OS : Ubuntu 20.04.2 LTS
CEPH : ceph version 15.2.9 (357616cbf726abb779ca75a551e8d02568e15b17) octopus (stable)
3 nodes : monitor/storage


1.      The cluster looks good:

# ceph -s
cluster:
    id:     9a89aa5a-1702-4f87-a99c-f94c9f2cdabd
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum dao-wkr-04,dao-wkr-05,dao-wkr-06 (age 7m)
    mgr: dao-wkr-05(active, since 8m), standbys: dao-wkr-04, dao-wkr-06
    mds: cephfs:1 {0=dao-wkr-04=up:active} 2 up:standby
    osd: 9 osds: 9 up (since 7m), 9 in (since 4h)
    rgw: 3 daemons active (dao-wkr-04.rgw0, dao-wkr-05.rgw0, dao-wkr-06.rgw0)

  task status:

  data:
    pools:   7 pools, 121 pgs
    objects: 234 objects, 16 KiB
    usage:   9.0 GiB used, 2.0 TiB / 2.0 TiB avail
    pgs:     121 active+clean


2.      Except that the main pools for radosgw are not there:

# ceph osd lspools

1 device_health_metrics
2 cephfs_data
3 cephfs_metadata
4 .rgw.root
5 default.rgw.log
6 default.rgw.control
7 default.rgw.meta


Missing: default.rgw.buckets.index & default.rgw.buckets.data
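In case it helps, here is how I double-checked which of the expected RGW bucket pools are absent. This is only a sketch: the pool list is pasted into a shell variable from the `ceph osd lspools` output above, rather than queried live.

```shell
#!/bin/sh
# Pool names taken from the `ceph osd lspools` output above (illustrative only).
pools="device_health_metrics cephfs_data cephfs_metadata .rgw.root default.rgw.log default.rgw.control default.rgw.meta"

# Check each pool that radosgw needs for bucket storage against that list.
for p in default.rgw.buckets.index default.rgw.buckets.data; do
  case " $pools " in
    *" $p "*) echo "$p: present" ;;
    *)        echo "$p: MISSING" ;;
  esac
done
```

Both come back MISSING on my cluster, while the control/meta/log pools are all there.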

What do you think?
Thanks!

Sylvain


_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
