From: Frank Schilder
Sent: 14 November 2022 13:03:58
To: Igor Fedotov; ceph-users@ceph.io
Subject: [ceph-users] Re: LVM osds loose connection to disk

I can't reproduce the problem with artificial workloads, I need to get one of
these OSDs running in the meta-data pool until it crashes. My plan is to reduce
time-outs and increase log levels … next to impossible to reproduce a realistic
ceph-osd IO pattern for testing. Is there any tool available for this?

Best regards,
=
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
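The timeout/log-level tuning mentioned above can be applied at runtime with the standard `ceph config` CLI; a minimal sketch, assuming the OSD under observation is osd.12 (the id and the concrete values are placeholders, not taken from the thread):

```shell
# Raise debug logging for the suspect OSD (20/20 is maximum verbosity).
ceph config set osd.12 debug_osd 20
ceph config set osd.12 debug_bluestore 20
ceph config set osd.12 debug_bdev 20

# Shorten the op-thread timeouts so a hanging thread is reported
# (and the suicide timeout fires) sooner than with the defaults,
# producing a log trail closer to the moment of the hang.
ceph config set osd.12 osd_op_thread_timeout 5
ceph config set osd.12 osd_op_thread_suicide_timeout 60
```

Settings applied this way take effect without restarting the daemon and can be reverted with `ceph config rm`.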
To: Igor Fedotov; ceph-users@ceph.io
Subject: [ceph-users] Re: LVM osds loose connection to disk

Hi Igor,

thanks for your reply. We only exchanged the mimic containers with the octopus
ones. We didn't even reboot the servers during upgrade, only later for
troubleshooting. The only change since the upgr…
… idea what the other 2 are doing?

Best regards,
=
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14

From: Igor Fedotov
Sent: 10 November 2022 15:48:23
To: Frank Schilder; ceph-users@ceph.io
Subject: Re: [ceph-users] Re: LVM osds loose connection to disk
… I should look at, I would like to avoid hunting ghosts.

Many thanks and best regards!
=
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14

From: Frank Schilder
Sent: 10 October 2022 23:33:32
To: Igor Fedotov; ceph-users@ceph.io
Subject: [ceph-users] Re: LVM osds loose connection to disk
Hi Igor.

The problem of OSD crashes was resolved after migrating just a little bit of
the meta-data pool to other disks (we decided to evacuate the small OSDs onto
larger disks to make space). Therefore, I don't think it's an LVM or disk issue.
The cluster is working perfectly now after migrating …
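For context, the evacuation described here is typically done by marking the small OSDs out so their PGs backfill onto the larger disks; a minimal sketch, assuming osd.7 is one of the small OSDs (the id is a placeholder):

```shell
# Mark the OSD out: its PGs are remapped and backfilled onto other
# OSDs while the daemon stays up and keeps serving reads.
ceph osd out osd.7

# Watch the backfill progress and confirm when the OSD holds no data.
ceph -s
ceph osd safe-to-destroy osd.7
```

Once `safe-to-destroy` reports the OSD is empty, it can be stopped and the disk repurposed.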
Hi Frank,

I can't advise much on the disk issue - just an obvious thought about
upgrading the firmware and/or contacting the vendor. IIUC the disk is
totally inaccessible at this point, e.g. you're unable to read from it
bypassing LVM as well, right? If so this definitely looks like a
low-level problem …
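The check Igor suggests can be done by reading the physical device directly, below the LVM layer; a minimal sketch, with /dev/sdX as a placeholder for the device backing the PV:

```shell
# Raw direct-IO read from the physical device, bypassing LVM and the
# page cache; if this hangs or errors, the problem is below LVM.
dd if=/dev/sdX of=/dev/null bs=4M count=25 iflag=direct status=progress

# SMART health and the drive's own error log as a second data point.
smartctl -H -l error /dev/sdX
```

A clean `dd` run combined with SMART errors (or vice versa) helps separate a dying drive from a software-stack hang.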