[ceph-users] Re: LVM osds loose connection to disk

2022-11-18 Thread Frank Schilder
t to impossible to reproduce a realistic ceph-osd IO pattern for > testing. Is there any tool available for this? > > Best regards, > = > Frank Schilder > AIT Risø Campus > Bygning 109, rum S14 > > > From: Frank Schilder > Sent: 14 November 2022 1
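The question above asks for a tool that can approximate a realistic ceph-osd IO pattern. No specific tool is named in the thread; one common option is fio, which can mimic the small-block, sync-heavy writes typical of a metadata workload. A minimal sketch (fio is an assumption, and the device path, block size, and sync cadence are placeholders, not values from the thread — and note the target device is overwritten):

```shell
# Hypothetical fio job approximating small-block, fsync-heavy OSD-like IO.
# /dev/vg0/test-lv is a placeholder logical volume; DATA ON IT IS DESTROYED.
cat > osd-like.fio <<'EOF'
[global]
ioengine=libaio
direct=1
filename=/dev/vg0/test-lv
time_based=1
runtime=600

[small-sync-writes]
rw=randwrite
bs=4k
iodepth=16
fsync=32
EOF
fio osd-like.fio
```

Real OSD traffic interleaves reads, deferred writes, and compaction bursts, so a single fio job is at best a rough proxy; capturing an actual trace from a busy OSD would be closer but is harder to set up.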

[ceph-users] Re: LVM osds loose connection to disk

2022-11-18 Thread Dan van der Ster
to reproduce a realistic ceph-osd IO pattern for > testing. Is there any tool available for this? > > Best regards, > = > Frank Schilder > AIT Risø Campus > Bygning 109, rum S14 > > > From: Frank Schilder > Sent: 14 November 2022 13:03:58 > To: Igor Fedot

[ceph-users] Re: LVM osds loose connection to disk

2022-11-17 Thread Igor Fedotov
Schilder Sent: 14 November 2022 13:03:58 To: Igor Fedotov;ceph-users@ceph.io Subject: [ceph-users] Re: LVM osds loose connection to disk I can't reproduce the problem with artificial workloads, I need to get one of these OSDs running in the meta-data pool until it crashes. My plan is to redu

[ceph-users] Re: LVM osds loose connection to disk

2022-11-17 Thread Frank Schilder
o: Igor Fedotov; ceph-users@ceph.io Subject: [ceph-users] Re: LVM osds loose connection to disk I can't reproduce the problem with artificial workloads, I need to get one of these OSDs running in the meta-data pool until it crashes. My plan is to reduce time-outs and increase log level
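The plan above is to reduce time-outs and increase the log level so the next crash produces usable evidence. A hedged sketch of what that could look like with the `ceph config` interface (the OSD id and the chosen values are placeholders; the option names `debug_osd`, `debug_bluestore`, `osd_op_thread_timeout`, and `osd_op_thread_suicide_timeout` are standard Ceph options, but the thread does not say which ones were actually tuned):

```shell
# Raise log verbosity on the suspect OSD (osd.12 is a placeholder id).
ceph config set osd.12 debug_osd 10/10
ceph config set osd.12 debug_bluestore 10/10

# Lower the op-thread timeouts so a stuck thread warns and then aborts
# sooner, dumping its state to the log (defaults are 15s and 150s).
ceph config set osd.12 osd_op_thread_timeout 10
ceph config set osd.12 osd_op_thread_suicide_timeout 60
```

Lowering the suicide timeout makes the OSD self-terminate faster on a genuine hang, which is the point here: a quicker, well-logged crash instead of a long silent stall.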

[ceph-users] Re: LVM osds loose connection to disk

2022-11-14 Thread Frank Schilder
dotov; ceph-users@ceph.io Subject: [ceph-users] Re: LVM osds loose connection to disk Hi Igor, thanks for your reply. We only exchanged the mimic containers with the octopus ones. We didn't even reboot the servers during upgrade, only later for troubleshooting. The only change since the upgr

[ceph-users] Re: LVM osds loose connection to disk

2022-11-11 Thread Frank Schilder
idea what the other 2 are doing? Best regards, = Frank Schilder AIT Risø Campus Bygning 109, rum S14 From: Igor Fedotov Sent: 10 November 2022 15:48:23 To: Frank Schilder; ceph-users@ceph.io Subject: Re: [ceph-users] Re: LVM osds loose co

[ceph-users] Re: LVM osds loose connection to disk

2022-11-10 Thread Igor Fedotov
o avoid hunting ghosts. Many thanks and best regards! = Frank Schilder AIT Risø Campus Bygning 109, rum S14 From: Frank Schilder Sent: 10 October 2022 23:33:32 To: Igor Fedotov; ceph-users@ceph.io Subject: [ceph-users] Re: LVM osds loose c

[ceph-users] Re: LVM osds loose connection to disk

2022-11-10 Thread Frank Schilder
I should look at, I would like to avoid hunting ghosts. Many thanks and best regards! = Frank Schilder AIT Risø Campus Bygning 109, rum S14 From: Frank Schilder Sent: 10 October 2022 23:33:32 To: Igor Fedotov; ceph-users@ceph.io Subject: [ceph-

[ceph-users] Re: LVM osds loose connection to disk

2022-10-10 Thread Frank Schilder
Hi Igor. The problem of OSD crashes was resolved after migrating just a little bit of the meta-data pool to other disks (we decided to evacuate the small OSDs onto larger disks to make space). Therefore, I don't think it's an LVM or disk issue. The cluster is working perfectly now after migratin
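The message above describes evacuating the small OSDs onto larger disks. The thread does not show the exact commands used; one standard way to drain an OSD via CRUSH is sketched below (the OSD id is a placeholder):

```shell
# Drain a small OSD by removing its CRUSH weight (osd.7 is a placeholder).
ceph osd crush reweight osd.7 0

# Watch backfill/recovery until the PGs have moved off the OSD.
ceph -s
ceph osd df osd.7    # PGS column should reach 0 when fully drained

# Once empty, mark it out before any removal/redeploy steps.
ceph osd out osd.7
```

Setting the CRUSH weight to zero triggers controlled backfill to the remaining OSDs, which matches the "evacuate onto larger disks" approach described, assuming there is enough free capacity elsewhere in the pool.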

[ceph-users] Re: LVM osds loose connection to disk

2022-10-09 Thread Igor Fedotov
Hi Frank, can't advise much on the disk issue - just an obvious thought about upgrading the firmware and/or contacting the vendor. IIUC the disk is totally inaccessible at this point, e.g. you're unable to read from it bypassing LVM as well, right? If so this definitely looks like a low-level pr
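Igor's suggestion is to confirm the disk is unreadable even when LVM is bypassed. A hedged set of read-only checks against the raw device (the device name `/dev/sdX` is a placeholder; these commands do not write to the disk):

```shell
# Query SMART health and the drive's error log directly.
smartctl -a /dev/sdX

# Try a direct (page-cache-bypassing) read of the raw device.
dd if=/dev/sdX of=/dev/null bs=1M count=256 iflag=direct status=progress

# Check the kernel log for SCSI/ATA resets or link errors around the hang.
dmesg | tail -n 50
```

If the direct `dd` read also hangs or errors while other disks on the same controller behave normally, that supports the low-level (firmware/drive/HBA) theory rather than an LVM- or Ceph-level one.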