-----Original Message-----
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Stillwell, Bryan
Sent: Tuesday, February 23, 2016 7:31 PM
To: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] osd not removed from crush map after ceph osd crush remove
Dimitar,

I would agree with you that getting the cluster into a healthy state first makes sense.
1.1b8 (11.1b8) -> up [13,58,37] acting [13,58,37]
Bryan
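A mapping line in that format is what 'ceph pg map' prints for a single placement group. As a minimal sketch, using the PG id from the line above:

    # Print the up and acting OSD sets for one placement group
    ceph pg map 1.1b8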
From: Dimitar Boichev
Date: Tuesday, February 23, 2016 at 1:08 AM
To: CTG User , "ceph-users@lists.ceph.com"
Subject: RE: [ceph-users] osd not removed from crush map after ceph osd crush remove

Hello,
Thank you Bryan.
Subject: Re: [ceph-users] osd not removed from crush map after ceph osd crush remove

Dimitar,

I'm not sure why those PGs would be stuck in the stale+active+clean state.
Maybe try upgrading to the 0.80.11 release to see if it's a bug that was fixed
already? You can use the 'ceph tell osd.* version' command to check which
version each OSD is running.
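For reference, a short sketch of both checks (the PG id 1.1b8 is just the example from earlier in the thread):

    # Ask every OSD daemon to report the version it is running
    ceph tell osd.* version

    # List PGs stuck in the stale state
    ceph pg dump_stuck stale

    # Get the full state detail of one problem PG
    ceph pg 1.1b8 query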
To: "ceph-users@lists.ceph.com"
Subject: Re: [ceph-users] osd not removed from crush map after ceph osd crush remove
Anyone?

Regards.
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Dimitar Boichev
Sent: Thursday, February 18, 2016 5:06 PM
To: ceph-users@lists.ceph.com
Subject: [ceph-users] osd not removed from crush map after ceph osd crush remove
Hello,
I am running a tiny cluster of 2 nodes.
ceph -v
ceph version 0.80.7 (6c0127fcb58008793d3c8b62d925bc91963672a3)
One OSD died and I added a new OSD (not replacing the old one).
After that I wanted to remove the failed OSD completely from the cluster.
Here is what I did:
ceph osd reweight osd.
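The command list is cut off above; for context, the usual sequence for removing a dead OSD from a Firefly-era cluster is sketched below. The OSD id 2 is purely illustrative (the real id is not preserved in the message):

    # Hypothetical id: replace 2 / osd.2 with the failed OSD's actual id
    ceph osd out 2                 # mark the OSD out so its PGs remap elsewhere
    ceph osd crush remove osd.2    # remove it from the CRUSH map
    ceph auth del osd.2            # delete its cephx key
    ceph osd rm 2                  # remove the OSD entry from the cluster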