Here's a different cluster we upgraded luminous -> nautilus in October:

2020-10-14 13:22:51.860 7f78e3d20a80  0 ceph version 14.2.11 (f7fdb2f52131f54b891a2ec99d8205561242cdaf) nautilus (stable), process ceph-osd, pid 119714
...
2020-10-14 13:27:50.368 7f78e3d20a80  1 bluestore(/var/lib/ceph/osd/ceph-136) _fsck_on_open checking shared_blobs
2020-10-14 13:27:50.368 7f78e3d20a80  1 bluestore(/var/lib/ceph/osd/ceph-136) _fsck_on_open checking pool_statfs
2020-10-14 13:27:50.368 7f78e3d20a80 -1 bluestore(/var/lib/ceph/osd/ceph-136) fsck error: legacy statfs record found, removing
2020-10-14 13:27:50.368 7f78e3d20a80 -1 bluestore(/var/lib/ceph/osd/ceph-136) fsck error: missing Pool StatFS record for pool 1
2020-10-14 13:27:50.368 7f78e3d20a80 -1 bluestore(/var/lib/ceph/osd/ceph-136) fsck error: missing Pool StatFS record for pool 2
2020-10-14 13:27:50.368 7f78e3d20a80 -1 bluestore(/var/lib/ceph/osd/ceph-136) fsck error: missing Pool StatFS record for pool 5
2020-10-14 13:27:50.368 7f78e3d20a80 -1 bluestore(/var/lib/ceph/osd/ceph-136) fsck error: missing Pool StatFS record for pool ffffffffffffffff
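
(For what it's worth, I believe the same per-pool statfs conversion can also
be done offline before the upgraded OSD is started, along the lines of the
following; just a sketch, using the osd.136 path from the log above:

# ceph-bluestore-tool repair --path /var/lib/ceph/osd/ceph-136

In our case the fsck-on-open shown above did the conversion for us.)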

and ceph df right now shows stored == used in the pool stats:

# ceph df
RAW STORAGE:
    CLASS     SIZE        AVAIL       USED        RAW USED     %RAW USED
    hdd       3.2 PiB     2.3 PiB     842 TiB      843 TiB         26.01
    TOTAL     3.2 PiB     2.3 PiB     842 TiB      843 TiB         26.00

POOLS:
    POOL                ID     STORED      OBJECTS     USED        %USED     MAX AVAIL
    cephfs_data          1     252 TiB     270.90M     252 TiB     20.48       326 TiB
    cephfs_metadata      2      72 GiB      22.03M      72 GiB         0       326 TiB
    test                 5     9.3 GiB      10.96k     9.3 GiB         0       326 TiB
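
A quick way to compare the two fields straight from the JSON output (just a
sketch, assuming jq is available):

# ceph df -f json | jq '.pools[] | {name: .name, stored: .stats.stored, used: .stats.bytes_used}'

On this cluster stored and bytes_used come back identical for every pool.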

Cheers, Dan

On Thu, Nov 26, 2020 at 6:00 PM Dan van der Ster <d...@vanderster.com> wrote:
>
> Hi Igor,
>
> No BLUESTORE_LEGACY_STATFS warning, and bluestore_warn_on_legacy_statfs is
> at its default of true on this (and all) clusters.
> I'm quite sure we did the statfs conversion during one of the recent
> upgrades (I forget which one exactly).
>
> # ceph tell osd.* config get bluestore_warn_on_legacy_statfs | grep -v true
> #
>
> Is there a command to see the statfs reported by an individual OSD?
> We have a mix of ~year-old and recently recreated OSDs, so I could try
> to see if they differ.
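>
> If nothing simpler exists, maybe the bluestore perf counters would work as a
> rough check; just a sketch, assuming the allocated/stored counters sit under
> a "bluestore" section of perf dump (osd.<id> is a placeholder, it has to run
> on that OSD's host, and it's aggregate per OSD rather than per-pool statfs):
>
> # ceph daemon osd.<id> perf dump | jq .bluestore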
>
> Thanks!
>
> Dan
>
>
> On Thu, Nov 26, 2020 at 5:50 PM Igor Fedotov <ifedo...@suse.de> wrote:
> >
> > Hi Dan
> >
> > Don't you have the BLUESTORE_LEGACY_STATFS alert raised (it might be
> > silenced by the bluestore_warn_on_legacy_statfs param) on the older cluster?
> >
> >
> > Thanks,
> >
> > Igor
> >
> >
> > On 11/26/2020 7:29 PM, Dan van der Ster wrote:
> > > Hi,
> > >
> > > Depending on which cluster I look at (all running v14.2.11), the pool
> > > bytes_used is reported either as raw space or as stored bytes.
> > >
> > > Here's a 7-year-old cluster:
> > >
> > > # ceph df -f json | jq .pools[0]
> > > {
> > >    "name": "volumes",
> > >    "id": 4,
> > >    "stats": {
> > >      "stored": 1229308190855881,
> > >      "objects": 294401604,
> > >      "kb_used": 1200496280133,
> > >      "bytes_used": 1229308190855881,
> > >      "percent_used": 0.4401889145374298,
> > >      "max_avail": 521125025021952
> > >    }
> > > }
> > >
> > > Note that stored == bytes_used for that pool. (This is a 3x replica pool.)
> > >
> > > But here's a newer cluster (installed recently with nautilus):
> > >
> > > # ceph df -f json  | jq .pools[0]
> > > {
> > >    "name": "volumes",
> > >    "id": 1,
> > >    "stats": {
> > >      "stored": 680977600893041,
> > >      "objects": 163155803,
> > >      "kb_used": 1995736271829,
> > >      "bytes_used": 2043633942351985,
> > >      "percent_used": 0.23379847407341003,
> > >      "max_avail": 2232457428467712
> > >    }
> > > }
> > >
> > > In the second cluster, bytes_used is 3x stored.
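> > > (Roughly, anyway: 3 x 680977600893041 = 2042932802679123, which is close
> > > to the reported bytes_used of 2043633942351985; I assume the small
> > > remainder is allocation overhead.)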
> > >
> > > Does anyone know why these are not reported consistently?
> > > Having only just noticed this, I'll update our monitoring to plot stored
> > > rather than bytes_used from now on.
> > >
> > > Thanks!
> > >
> > > Dan