[ceph-users] MDS metadata pool recovery procedure - multiple data pools

2021-10-30 Thread mgrzybowski
name: cephfs, metadata pool: cephfs_metadata, data pools: [cephfs_data ecpoolk3m1osd ecpoolk5m1osd ecpoolk4m2osd ] (the first/default pool is replicated, the rest are EC). Should I scan every data pool one by one, or only the first/default one? -- Kind Regards mgrzybowski
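
For illustration, this is the shape of the per-pool loop being asked about, assuming the documented cephfs-data-scan recovery steps; the pool names are the ones above, and whether scan_extents really has to cover every data pool (or only the first/default one) is exactly the open question, so treat this as a sketch, not a confirmed procedure:

  # hedged sketch only -- the per-pool loop is the question, not a confirmed answer
  for pool in cephfs_data ecpoolk3m1osd ecpoolk5m1osd ecpoolk4m2osd; do
      cephfs-data-scan scan_extents "$pool"
  done
  # assumption: scan_inodes against the first/default (replicated) data pool
  cephfs-data-scan scan_inodes cephfs_data
  cephfs-data-scan scan_links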

[ceph-users] Re: Upgrade to 16.2.6 and osd+mds crash after bluestore_fsck_quick_fix_on_mount true

2021-10-25 Thread mgrzybowski
Hi Igor, In ceph.conf: [osd] debug bluestore = 10/30, then: systemctl start ceph-osd@2 ~# ls -alh /var/log/ceph/ceph-osd.2.log -rw-r--r-- 1 ceph ceph 416M Oct 25 21:08 /var/log/ceph/ceph-osd.2.log cat /var/log/ceph/ceph-osd.2.log | gzip > ceph-osd.2.log.gz Full compressed log on gdrive:
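
A side note in case it helps with iterating: the same debug level can also be set through the mon config database instead of ceph.conf; a minimal sketch, assuming a Nautilus-or-later cluster and osd.2 as above:

  # raise bluestore logging for osd.2 centrally; picked up when the daemon starts
  ceph config set osd.2 debug_bluestore 10/30
  # reproduce the startup/crash, then compress the log for sharing
  gzip -c /var/log/ceph/ceph-osd.2.log > ceph-osd.2.log.gz
  # drop the override afterwards
  ceph config rm osd.2 debug_bluestore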

[ceph-users] Re: Upgrade to 16.2.6 and osd+mds crash after bluestore_fsck_quick_fix_on_mount true

2021-10-22 Thread mgrzybowski
Hi Igor, In ceph.conf I added: [osd] debug bluestore = 20 then: systemctl start ceph-osd@2 The log is large: # ls -alh /var/log/ceph/ceph-osd.2.log -rw-r--r-- 1 ceph ceph 1.5G Oct 22 21:14 /var/log/ceph/ceph-osd.2.log Entire file:

[ceph-users] Upgrade to 16.2.6 and osd+mds crash after bluestore_fsck_quick_fix_on_mount true

2021-10-20 Thread mgrzybowski
m. Deep fsck did not find anything: ~# ceph-bluestore-tool --command fsck --deep yes --path /var/lib/ceph/osd/ceph-2 fsck success Any ideas what could cause these crashes, and is it possible to bring the crashed OSD back online this way? -- mgrzybowski
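
For reference, a rough sketch of the offline deep fsck above plus a possible repair pass; the repair step is only an assumed follow-up, not something reported in this thread, and it assumes osd.2 with the default data path:

  # stop the OSD before any offline bluestore operation
  systemctl stop ceph-osd@2
  # deep consistency check (the run above reported "fsck success")
  ceph-bluestore-tool fsck --deep yes --path /var/lib/ceph/osd/ceph-2
  # assumed follow-up: offline repair pass, use with care
  ceph-bluestore-tool repair --path /var/lib/ceph/osd/ceph-2
  systemctl start ceph-osd@2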