Hi Greg,
I’ve tested the patch below on top of the 0.94.5 hammer sources, and it
works beautifully. No more active+clean+replay stuck PGs.
Thanks!
Andras
On 10/27/15, 4:46 PM, "Andras Pataki" wrote:
Yes, this definitely sounds plausible (the peering/activating process does
take a long time). At the moment I’m trying to get our cluster back to a
more functional state. Once everything works, I could try building a
patched set of ceph processes from source (currently I’m using the
pre-built centos
On Tue, Oct 27, 2015 at 11:22 AM, Andras Pataki wrote:
Hi Greg,
No, unfortunately I haven’t found any resolution to it. We are using
cephfs, the whole installation is on 0.94.4. What I did notice is that
performance is extremely poor when backfilling is happening. I wonder if
timeouts of some kind could cause PGs to get stuck in replay. I lowered
On Tue, Oct 27, 2015 at 11:03 AM, Gregory Farnum wrote:
On Thu, Oct 22, 2015 at 3:58 PM, Andras Pataki wrote:
Hi ceph users,
We’ve upgraded to 0.94.4 (all ceph daemons got restarted) – and are in the
middle of doing some rebalancing due to crush changes (removing some disks).
During the rebalance, I see that some placement groups get stuck in
‘active+clean+replay’ for a long time (essentially until I
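For readers following along: the message above describes draining disks via CRUSH changes. As a point of reference (not from this thread), the usual hammer-era sequence for retiring a disk looks roughly like the sketch below; `osd.42` is a placeholder ID, and each step is a standard ceph CLI call.

```shell
# Sketch of retiring one OSD on a hammer-era cluster (osd.42 is hypothetical).
OSD=42
ceph osd out "$OSD"               # start draining: PGs are remapped off the OSD
# ... wait for the rebalance to finish, i.e. until `ceph -s` no longer
#     reports degraded/misplaced objects ...
ceph osd crush remove "osd.$OSD"  # remove it from the CRUSH map
ceph auth del "osd.$OSD"          # drop its cephx key
ceph osd rm "$OSD"                # remove it from the OSD map
```

Doing the `ceph osd out` first and waiting for recovery keeps the full replica count during the move; removing the CRUSH entry immediately would trigger the same data movement without that safety margin.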
Hi!
I imagine you aren't actually using the data/metadata pool that these
PGs are in, but it's a previously-reported bug we haven't identified:
http://tracker.ceph.com/issues/8758
They should go away if you restart the OSDs that host them (or just
remove those pools), but it's not going to hurt anything.
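To act on the advice above, one would first need to know which OSDs host a given stuck PG. A minimal sketch, assuming hammer-era tooling: `ceph pg map` prints the up/acting sets for a PG, and on a sysvinit-based CentOS install the restart would be `service ceph restart osd.N` on the owning host. The PG id `1.2a` and the sample output line are hypothetical.

```shell
# Sketch: find the OSDs hosting one stuck PG (PG_ID is a placeholder).
PG_ID=1.2a

# `ceph pg map` reports the up/acting sets, e.g.:
#   osdmap e1234 pg 1.2a (1.2a) -> up [4,11,23] acting [4,11,23]
# Extract the acting OSD ids from that line:
ceph pg map "$PG_ID" |
  sed -n 's/.*acting \[\([0-9,]*\)\].*/\1/p' | tr ',' ' '

# Then, on each host owning one of those OSDs (hammer-era sysvinit), e.g.:
#   service ceph restart osd.4
```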
Hi!
16 PGs in our ceph cluster have been in the active+clean+replay state for more than one day.
All clients are working fine.
Is this ok?
root@bastet-mon1:/# ceph -w
cluster fffeafa2-a664-48a7-979a-517e3ffa0da1
health HEALTH_OK
monmap e3: 3 mons at
{1=10.92.8.80:6789/0,2=10.92.8.81:6789/0,3=10.
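Not part of the original message, but for anyone wanting to list exactly which PGs are in this state: a sketch that filters the plain `ceph pg dump` listing, assuming the hammer-era format where column 1 is the PG id and the state column contains strings like `active+clean+replay`.

```shell
# Sketch: print the ids of PGs whose state includes "replay"
# (assumes `ceph pg dump` output with the PG id in column 1).
ceph pg dump 2>/dev/null | awk '$0 ~ /replay/ {print $1}'
```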