Sorry Frank, I typed the wrong name.
On Tue, Apr 30, 2024, 8:51 AM Mary Zhang wrote:
> Sounds good. Thank you Kevin and have a nice day!
>
> Best Regards,
> Mary
>
> On Tue, Apr 30, 2024, 8:21 AM Frank Schilder wrote:
>
>> I think you are panicking way too much. [...] administrate your cluster
>> with common storage admin sense.
>
> Best regards,
> =================
> Frank Schilder
> AIT Risø Campus
> Bygning 109, rum S14
>
> ________________________________________
> From: Mary Zhang
> Sent: Tuesday, April 30, 2024 5:00 PM
>
> [...] chance to recover data.
> Look at the manual of ddrescue why it is important to stop IO from a
> failing disk as soon as possible.
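> For illustration, a typical ddrescue run looks roughly like this (a sketch
> only; /dev/sdX, /dev/sdY and rescue.map are placeholders for the failing
> disk, a healthy target and the mapfile, so adjust to your system):
>
>   # placeholders: /dev/sdX = failing disk, /dev/sdY = healthy target
>   # pass 1: copy everything that still reads cleanly, skip the scraping phase
>   ddrescue -f -n /dev/sdX /dev/sdY rescue.map
>   # pass 2: retry the remaining bad areas a few times
>   ddrescue -f -r3 /dev/sdX /dev/sdY rescue.map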
>
> Best regards,
> =================
> Frank Schilder
> AIT Risø Campus
> Bygning 109, rum S14
>
> ________________________________________
>
> [...] of one or more
> hosts at a time, you don’t need to worry about a single disk. Just
> take it out and remove it (forcefully) so it doesn’t have any clients
> anymore. Ceph will immediately assign different primary OSDs and your
> clients will be happy again. ;-)
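> As a rough illustration (osd.12 is just an example id), you can check which
> PGs still have the failing OSD as primary and watch the cluster settle once
> it is out:
>
>   ceph pg ls-by-primary osd.12   # osd.12 is a placeholder; lists PGs still using it as primary
>   ceph -s                        # overall health while peering/recovery runs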
>
> Zitat von Mary Zhang:
> [...] tried to find a way to have your cake and eat it too in relation to this
> "predicament" in this tracker issue: https://tracker.ceph.com/issues/44400
> but it was deemed "won't fix".
>
> Respectfully,
>
> *Wes Dillingham*
> LinkedIn <http://www.linkedin.com/in/wesleyd
> [...] this OSD, and in case of hardware
> failure it might lead to slow requests. It might make sense to
> forcefully remove the OSD without draining:
>
> - stop the osd daemon
> - mark it as out
> - osd purge [--force] [--yes-i-really-mean-it]
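>
> A concrete version of the above might look like this (sketch only; 12 is a
> placeholder OSD id, and the first command assumes a systemd-managed,
> non-cephadm OSD, with cephadm it would be something like
> "ceph orch daemon stop osd.12"):
>
>   systemctl stop ceph-osd@12                  # 12 = placeholder OSD id
>   ceph osd out 12
>   ceph osd purge 12 --yes-i-really-mean-it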
>
> Regards,
> Eugen
>
>
[...] Is our expectation reasonable? What's the best way to handle an OSD with
hardware failures?
Thank you in advance for any comments or suggestions.
Best Regards,
Mary Zhang
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-leave@ceph.io