name: cephfs, metadata pool: cephfs_metadata, data pools: [cephfs_data ecpoolk3m1osd ecpoolk5m1osd ecpoolk4m2osd]
(the first/default pool is replicated, the rest are EC)
Should I scan every data pool one by one, or only the first/default one?
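For context, the per-pool invocation I have in mind would look roughly like the lines below. This is only a sketch assuming cephfs-data-scan is used with the data pool as a positional argument; whether scan_extents has to be repeated per pool or can take all pools at once is exactly what I am unsure about, and the accepted arguments may differ between releases:

# Sketch only: pool argument handling varies by Ceph release, so treat the
# exact arguments as assumptions rather than a tested procedure.
cephfs-data-scan scan_extents cephfs_data      # replicated/default data pool
cephfs-data-scan scan_extents ecpoolk3m1osd    # repeat for each EC data pool?
cephfs-data-scan scan_extents ecpoolk5m1osd
cephfs-data-scan scan_extents ecpoolk4m2osd
cephfs-data-scan scan_inodes cephfs_data
cephfs-data-scan scan_links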
--
Kind Regards
mgrzybowski
Hi Igor
In ceph.conf:
[osd]
debug bluestore = 10/30
systemctl start ceph-osd@2
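As an alternative, assuming a Mimic or later cluster with the centralized config database, the same override could be set without editing ceph.conf on the node; just a sketch of what I would run:

# Store the override in the mon config database; it is applied when the OSD starts.
ceph config set osd.2 debug_bluestore 10/30
systemctl start ceph-osd@2
# Remove the override once the log has been captured.
ceph config rm osd.2 debug_bluestore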
~# ls -alh /var/log/ceph/ceph-osd.2.log
-rw-r--r-- 1 ceph ceph 416M Oct 25 21:08 /var/log/ceph/ceph-osd.2.log
~# cat /var/log/ceph/ceph-osd.2.log | gzip > ceph-osd.2.log.gz
Full compressed log on gdrive:
Hi Igor
In ceph.conf I added:
[osd]
debug bluestore = 20
next: systemctl start ceph-osd@2
The log is large:
# ls -alh /var/log/ceph/ceph-osd.2.log
-rw-r--r-- 1 ceph ceph 1.5G Oct 22 21:14 /var/log/ceph/ceph-osd.2.log
Entire file:
m.
Deep fsck did not find anything:
~# ceph-bluestore-tool --command fsck --deep yes --path /var/lib/ceph/osd/ceph-2
fsck success
Any ideas what could cause these crashes, and is it possible to bring the crashed OSD back online this way?
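If it is of any use, the next thing I was considering is an offline repair with the same tool. This is only a sketch of what I have in mind, not something I have run yet:

systemctl stop ceph-osd@2
# repair instead of fsck; --deep yes also reads object data, like the fsck above
ceph-bluestore-tool --command repair --deep yes --path /var/lib/ceph/osd/ceph-2
systemctl start ceph-osd@2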
--
mgrzybowski