On Sat, Mar 2, 2019 at 5:49 PM Alexandre Marangone wrote:
If you have no way to recover the drives, you can try to restart the OSDs
with `osd_find_best_info_ignore_history_les = true` (revert it afterwards);
you'll lose data. If the PGs are still down after this, you can mark the OSDs
blocking the PGs from becoming active as lost.
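Roughly, that sequence could look like this (a sketch only; the OSD id 12 and
PG 2.7 below are placeholders, not from this thread):

# in ceph.conf ([osd] section) on the affected hosts; remove again afterwards
osd_find_best_info_ignore_history_les = true

systemctl restart ceph-osd@12
ceph pg 2.7 query                        # check recovery_state / blocked_by
ceph osd lost 12 --yes-i-really-mean-it  # mark a blocking OSD as lost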
On Sat, Mar 2, 2019 at 6:08 AM Massimo Sgaravatto wrote:
Hi
This is a Luminous (v12.2.11) cluster.
Thanks, Massimo
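For reference, the running versions can be confirmed with the commands below
(`ceph versions` is available since Luminous):

ceph -v         # version of the local binary
ceph versions   # versions reported by all running mons/mgrs/osds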
On Sat, Mar 2, 2019 at 2:49 PM Matthew H wrote:
> Hi Massimo!
>
> What version of Ceph is in use?
>
> Thanks,
>
They all just started having read errors, bus resets, and slow reads, which is
one of the reasons the cluster didn't recover fast enough to compensate.
I tried to be mindful of the drive type and specifically avoided the
larger-capacity Seagates that are SMR. Used 1 SM863 for every 6 drives for the
Did they break, or did something go wrong trying to replace them?
Jesper
Saturday, 2 March 2019, 14.34 +0100 from Daniel K:
>I bought the wrong drives trying to be cheap. They were 2TB WD Blue 5400rpm
>2.5 inch laptop drives.
>
>They've been replaced now with
Hi Massimo!
What version of Ceph is in use?
Thanks,
From: ceph-users on behalf of Massimo Sgaravatto
Sent: Friday, March 1, 2019 1:24 PM
To: Ceph Users
Subject: [ceph-users] Problems creating a balancer plan
Hi
I already used the balancer in my ceph
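For reference, the usual Luminous sequence for creating and executing a
balancer plan looks like this ("myplan" is just an example name):

ceph mgr module enable balancer
ceph balancer mode crush-compat   # or "upmap" if all clients are luminous
ceph balancer eval                # score the current PG distribution
ceph balancer optimize myplan     # create a plan
ceph balancer eval myplan         # score the distribution the plan would give
ceph balancer execute myplan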
I bought the wrong drives trying to be cheap. They were 2TB WD Blue 5400rpm
2.5 inch laptop drives.
They've been replaced now with HGST 10K 1.8TB SAS drives.
You can force an rbd unmap with the command below:
rbd unmap -o force $DEV
If it still doesn't unmap, then you have pending IO blocking you.
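For kernel RBD, one way to see whether requests are still in flight is the
osdc file under debugfs (assuming debugfs is mounted; the exact directory name
varies per client instance):

cat /sys/kernel/debug/ceph/*/osdc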
As Ilya mentioned, for good measure you should also check whether LVM is in
use on this RBD volume. If it is, then that could be blocking you from
unmapping it.
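A quick way to check for an LVM/device-mapper holder, sketched with example
device and VG names (rbd0 and examplevg are placeholders):

ls /sys/block/rbd0/holders/   # non-empty: device-mapper still holds the device
lsblk /dev/rbd0               # shows any LVs stacked on the image
vgchange -an examplevg        # deactivate the VG before unmapping
rbd unmap -o force /dev/rbd0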