Hi Friends,

We are seeing inconsistent storage space reporting. The filesystem shows only
46TB used, and the pool keeps a single copy, yet the space used on the pool is
close to 128TB.

Any idea where the extra space is being used and how to reclaim it?

Ceph version: 12.2.11, with Filestore OSDs on XFS. We are planning to upgrade soon.

# ceph df detail
GLOBAL:
    SIZE       AVAIL      RAW USED     %RAW USED     OBJECTS
    363TiB     131TiB       231TiB         63.83      43.80M
POOLS:
    NAME     ID     QUOTA OBJECTS     QUOTA BYTES     USED        %USED     MAX AVAIL     OBJECTS      DIRTY     READ        WRITE       RAW USED
    fcp      15     N/A               N/A             23.6TiB     42.69     31.7TiB       3053801      3.05M     6.10GiB     12.6GiB     47.3TiB
    nfs      16     N/A               N/A             128TiB      66.91     63.4TiB       33916181     33.92M    3.93GiB     4.73GiB     128TiB
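To spell out the arithmetic behind the question, using only the numbers reported above: since the nfs pool has size 1, its RAW USED equals USED, so replication overhead cannot explain the gap; the difference versus the filesystem view sits entirely below the filesystem layer.

```python
# Sanity check on the numbers from `ceph df detail` and `df -h` in this post.
pool_used_tib = 128.0   # nfs pool USED (TiB) from ceph df detail
pool_size = 1           # replication factor (size: 1 on the nfs pool)
fs_used_tib = 46.0      # used space reported by df -h on /vol/dir_research

# With size 1 there is no replication multiplier, so RAW USED == USED:
raw_used_tib = pool_used_tib * pool_size
print(raw_used_tib)     # 128.0, matching RAW USED in ceph df detail

# The space the pool holds beyond what the filesystem reports:
unaccounted_tib = pool_used_tib - fs_used_tib
print(unaccounted_tib)  # 82.0 TiB consumed below the filesystem layer
```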

# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/nbd0       200T   46T  155T  23% /vol/dir_research

# ceph osd pool get nfs all
size: 1
min_size: 1
crash_replay_interval: 0
pg_num: 128
pgp_num: 128
crush_rule: replicated_ruleset
hashpspool: true
nodelete: false
nopgchange: false
nosizechange: false
write_fadvise_dontneed: false
noscrub: false
nodeep-scrub: false
use_gmt_hitset: 1
auid: 0
fast_read: 0

Appreciate your help.

Thanks,

-Vikas

_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
