On 9/24/21 08:33, Rainer Krienke wrote:
Hello Dan,

I am also running a production 14.2.22 cluster with 144 HDD OSDs, and I am wondering whether I should stay on this release or upgrade to Octopus, so your info is very valuable...

One more question: you described that the OSDs do an expected fsck and that this took roughly 10 minutes. I guess the fsck is done in parallel for all OSDs of one host? So the total downtime for one host regarding the fsck should not be much more than, say, 15 minutes, shouldn't it?

You can also turn the on-mount fsck off and do it separately at a more convenient time:

ceph config set osd bluestore_fsck_quick_fix_on_mount false
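
To verify the setting took effect you can query it back (a ceph config query; the exact output format may vary between releases):

ceph config get osd bluestore_fsck_quick_fix_on_mount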

After the upgrade is done, you can do the OSD BlueStore fsck:

systemctl stop ceph-osd.target
and wait until all ceph-osd processes have exited.
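
If you want to script that wait, a simple poll could look like this (a sketch, assuming pgrep from procps is available on the host):

# block until no ceph-osd process is left running
while pgrep -x ceph-osd >/dev/null; do
    sleep 5
done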


for osd in /var/lib/ceph/osd/*; do ceph-bluestore-tool repair --path "$osd"; done

...
bunch of output like Dan posted earlier
...
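
Note that the loop above repairs one OSD at a time. To answer Rainer's question about parallelism: a rough sketch to fsck all OSDs of a host concurrently would be to background each repair and wait for all of them, assuming the host has enough RAM and I/O headroom for that:

for osd in /var/lib/ceph/osd/*; do
    ceph-bluestore-tool repair --path "$osd" &
done
wait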

systemctl start ceph-osd.target

That works at least for OSDs without a separate WAL+DB.

Gr. Stefan
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
