> replicas which need to be
> brought back or marked lost.
> -Sam
>
>
> On 03/11/2015 07:29 AM, joel.merr...@gmail.com wrote:
>>
>> I'd like to not have to null them if possible; there's nothing
>> outlandishly valuable, it's more the time to reprovision (use
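For reference, the "marked lost" path mentioned above is a single command per dead OSD; osd.12 below is just a placeholder for one of the replicas that cannot come back and is already down:

    ceph osd lost 12 --yes-i-really-mean-it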
> Yeah, get a ceph pg query on one of the stuck ones.
> -Sam
>
> On Tue, 2015-03-10 at 14:41 +0000, joel.merr...@gmail.com wrote:
>> Stuck unclean and stuck inactive. I can fire up a full query and
>> health dump somewhere useful if you want (full pg query info on ones
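The commands in question would be along these lines, with 2.1f standing in for one of the stuck PG ids:

    # list the PGs the health warnings are about
    ceph pg dump_stuck inactive
    ceph pg dump_stuck unclean
    # full peering/recovery state for one of them
    ceph pg 2.1f query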
Could you expand?
Thanks again!
Joel
On Wed, Mar 11, 2015 at 1:21 PM, Samuel Just wrote:
> Ok, you lost all copies from an interval where the pgs went active. The
> recovery from this is going to be complicated and fragile. Are the pools
> valuable?
> -Sam
>
>
> On 03/11/2015
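If the pools had turned out not to be valuable, one blunt option in this era was to give up on the data and recreate the affected PGs empty (which discards whatever was in them); 2.1f below is a placeholder id:

    ceph pg force_create_pg 2.1f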
For clarity too, I've tried dropping min_size before as suggested;
unfortunately it doesn't make a difference.
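That change is a one-liner per pool; "rbd" below is just a stand-in for the affected pool name, and 1 is simply the lowest min_size can go:

    ceph osd pool set rbd min_size 1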
On Wed, Mar 11, 2015 at 9:50 AM, joel.merr...@gmail.com
wrote:
> Sure thing, n.b. I increased pg count to see if it would help. Alas not. :)
>
> Thanks again!
>
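For the pg count increase mentioned in the quote above, the knobs are pg_num and pgp_num; the pool name and target count here are placeholders:

    ceph osd pool set rbd pg_num 512
    ceph osd pool set rbd pgp_num 512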
I spent yesterday writing some bash to do this
programmatically (might be useful to others, will throw it on github)
On Tue, Mar 10, 2015 at 1:41 PM, Samuel Just wrote:
> What do you mean by "unblocked" but still "stuck"?
> -Sam
>
> On Mon, 2015-03-09 at 22:54 +0000, joel.merr...@gmail.com wrote:
On Mon, Mar 9, 2015 at 2:28 PM, Samuel Just wrote:
> You'll probably have to recreate osds with the same ids (empty ones),
> let them boot, stop them, and mark them lost. There is a feature in the
> tracker to improve this behavior: http://tracker.ceph.com/issues/10976
> -Sam
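A rough sketch of automating those quoted steps might look like the following; the OSD ids are placeholders and the mkfs/start/stop details depend on how the OSDs were deployed:

    #!/bin/bash
    # Recreate empty OSDs under the ids that were lost, then mark them lost,
    # so peering can move past the missing replicas. Ids below are placeholders.
    for id in 12 27 41; do
        uuid=$(uuidgen)
        # "ceph osd create" reuses the lowest free id; check we got the one we expect
        newid=$(ceph osd create "$uuid")
        [ "$newid" = "$id" ] || { echo "got osd.$newid, expected osd.$id" >&2; exit 1; }
        # Prepare an empty data dir and boot the OSD briefly so the monitors register it
        # (deployment specific; e.g. ceph-osd -i "$id" --mkfs --mkkey --osd-uuid "$uuid",
        #  then start and stop the daemon).
        # Once it has been seen and is down again, mark it lost.
        ceph osd lost "$id" --yes-i-really-mean-it
    done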
Thanks Sam, I've re
Hi,
I'm trying to fix an issue with 0.93 on our internal cloud, related
to incomplete PGs (yes, I realise the folly of running a dev release;
it's a not-so-test env now, so I really need to recover this). I'll
detail the current outage info:
72 initial (now 65) OSDs
6 nodes
* Update to 0.92