When I try
ceph-kvstore-tool bluestore-kv /var/lib/ceph/osd/ceph-$OSD fsck
I get:
2020-06-08 16:05:39.393 7fc589500d80 1
bluestore(/var/lib/ceph/osd/ceph-244) _mount path /var/lib/ceph/osd/ceph-244
2020-06-08 16:05:39.393 7fc589500d80 1 bdev create path
/var/lib/ceph/osd/ceph-244/block type k
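(For reference, the dedicated BlueStore checker can be pointed at the same
OSD directory. A minimal sketch, assuming the OSD daemon is stopped; the
OSD id 244 is taken from the log above:

# metadata-only consistency check of this OSD's BlueStore
ceph-bluestore-tool fsck --path /var/lib/ceph/osd/ceph-244
# add --deep to also read object data and verify checksums (much slower)
)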
Hi Harald,
was this exact OSD suffering from "ceph_assert(h->file->fnode.ino != 1)"?
Could you please collect an extended log with debug-bluefs set to 20?
Thanks,
Igor
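(A minimal way to capture such a log, assuming the OSD stays down and the
same ceph-kvstore-tool run is repeated; the log file path here is only
illustrative:

# CEPH_ARGS is honored by Ceph tools; raise bluefs debugging, log to a file
CEPH_ARGS="--debug-bluefs 20 --log-file /tmp/ceph-osd.244.fsck.log" \
  ceph-kvstore-tool bluestore-kv /var/lib/ceph/osd/ceph-244 fsck
)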
On 6/8/2020 4:48 PM, Harald Staub wrote:
This is again about our bad cluster, with far too many objects. Now
another OSD crashed:
Hi Igor
Thank you for looking into this! I have attached the complete log of today,
with the preceding "ceph_assert(h->file->fnode.ino != 1)" at
13:13:22.609, the first "FAILED ceph_assert(is_valid_io(off, len))" at
13:44:52.059, and the debug log starting at 16:42:20.883.
Cheers
Harry
https://drive.switch.ch/index.php/s/Jwk0Kgy7Q1EIxuE
On 08.06.20 17:30, Igor Fedotov wrote:
I think it's better to put the log to some public cloud and paste the
link here..
On 6/8/2020 6:27 PM, Harald Staub wrote:
(really sorry for spamming, but it is still waiting for moderator, so
trying with xz ...)
On 08.06.20 17:21, Harald Staub wrote:
(and now with trimmed attachment because of size restriction: only the
debug log)
On 08.06.20 16:53, Harald Staub wrote:
(and now with attachment ...)