Hi everyone,

We are running Nautilus 14.2.2 with 6 nodes and 44 OSDs in total, all of them
2 TB spinning disks.
# ceph osd count-metadata osd_objectstore
    "bluestore": 44
# ceph osd pool get one size
size: 3
# ceph df
RAW STORAGE:
    CLASS     SIZE       AVAIL      USED       RAW USED     %RAW USED
    hdd       80 TiB     33 TiB     47 TiB       47 TiB         58.26
    TOTAL     80 TiB     33 TiB     47 TiB       47 TiB         58.26

POOLS:
    POOL      ID     STORED      OBJECTS     USED        %USED     MAX AVAIL
    one        2      15 TiB       4.06M      47 TiB     68.48       7.1 TiB
    bench      5     250 MiB          67     250 MiB         0        21 TiB

Why are the pool stats showing incorrect values for %USED and MAX AVAIL?
MAX AVAIL in particular should be much bigger: with size 3 and 33 TiB of raw
space available, naively I would expect about 33 TiB / 3 ≈ 11 TiB, not 7.1 TiB.
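In case the per-OSD view matters here: as far as I understand, MAX AVAIL is
derived from the fullest OSD under the pool's CRUSH rule (scaled by the
replication factor), not from the raw AVAIL total, so an imbalanced OSD could
shrink it. I can post the output of these if useful:
# ceph osd df tree
# ceph df detail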
The first 24 OSDs were created on the Jewel release with osd_objectstore
'filestore'.
While on the Mimic release, we added 20 more 'bluestore' OSDs, and the first
24 were destroyed and recreated as 'bluestore'.
After the upgrade from Mimic, all of the OSDs were updated with
ceph-bluestore-tool repair.
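For reference, the repair was run per OSD roughly like this (each OSD stopped
first; <id> is a placeholder for the OSD id, and the default mount path is
assumed):
# systemctl stop ceph-osd@<id>
# ceph-bluestore-tool repair --path /var/lib/ceph/osd/ceph-<id>
# systemctl start ceph-osd@<id>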
The incorrect values appeared after the upgrade from 14.2.1 to 14.2.2.
Any help would be appreciated :)

BR,
NAlexandrov