On 22.08.2013 05:34, Samuel Just wrote:
> It's not really possible at this time to control that limit because
> changing the primary is actually fairly expensive and doing it
> unnecessarily would probably make the situation much worse
I'm sorry, but remapping or backfilling is far less expensive
It's not really possible at this time to control that limit because
changing the primary is actually fairly expensive and doing it
unnecessarily would probably make the situation much worse (it's
mostly necessary for backfilling, which is expensive anyway). It
seems like forwarding IO on an object
Hi Sam,
On 21.08.2013 21:13, Samuel Just wrote:
As long as the request is for an object which is up to date on the
primary, the request will be served without waiting for recovery.
Sure, but remember: with a random 4K VM workload, a lot of objects go
out of date pretty soon.
> A request
As long as the request is for an object which is up to date on the
primary, the request will be served without waiting for recovery. A
request only waits on recovery if the particular object being read or
written must be recovered. Your issue was that recovering the
particular object being requested
On 21.08.2013 17:32, Samuel Just wrote:
Have you tried setting osd_recovery_clone_overlap to false? That
seemed to help with Stefan's issue.
This might sound a bit harsh, but maybe that's due to my limited English skills ;-)
I still think that Ceph's recovery system is broken by design. If an OSD
c
From: ceph-devel-ow...@vger.kernel.org
[mailto:ceph-devel-ow...@vger.kernel.org] On Behalf Of Samuel Just
Sent: Wednesday, 21 August 2013 17:33
To: Mike Dawson
Cc: Stefan Priebe - Profihost AG; josh.dur...@inktank.com;
ceph-devel@vger.kernel.org
Subject: Re: still recovery issues with cuttlefish
Have you tried setting osd_recovery_clone_overlap to false? That
seemed to help with Stefan's issue.
-Sam
On Wed, Aug 21, 2013 at 8:28 AM, Mike Dawson wrote:
> Sam/Josh,
>
> We upgraded from 0.61.7 to 0.67.1 during a maintenance window this morning,
> hoping it would improve this situation, but
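A minimal sketch of how Sam's suggestion could be applied, assuming the
option name matches what "config show" reports on your build (the exact
spelling may vary between releases) and using osd.0 purely as an example
target:

  # ceph.conf, picked up by the OSDs on restart
  [osd]
      osd recovery clone overlap = false

  # or at runtime, assuming injectargs is available on this release
  ceph tell osd.0 injectargs '--osd-recovery-clone-overlap=false'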
Sam/Josh,
We upgraded from 0.61.7 to 0.67.1 during a maintenance window this
morning, hoping it would improve this situation, but there was no
appreciable change.
One node in our cluster fsck'ed after a reboot and got a bit behind. Our
instances backed by RBD volumes were OK at that point, but
the same problem still occurs. I will need to check when I have time to
gather logs again.
On 14.08.2013 01:11, Samuel Just wrote:
> I'm not sure, but your logs did show that you had >16 recovery ops in
> flight, so it's worth a try. If it doesn't help, you should collect
> the same set of logs I'
I'm not sure, but your logs did show that you had >16 recovery ops in
flight, so it's worth a try. If it doesn't help, you should collect
the same set of logs and I'll look again. Also, there are a few other
patches between 61.7 and current cuttlefish which may help.
-Sam
On Tue, Aug 13, 2013 at 2:0
On 13.08.2013 at 22:43, Samuel Just wrote:
> I just backported a couple of patches from next to fix a bug where we
> weren't respecting the osd_recovery_max_active config in some cases
> (1ea6b56170fc9e223e7c30635db02fa2ad8f4b4e). You can either try the
> current cuttlefish branch or wait for
I just backported a couple of patches from next to fix a bug where we
weren't respecting the osd_recovery_max_active config in some cases
(1ea6b56170fc9e223e7c30635db02fa2ad8f4b4e). You can either try the
current cuttlefish branch or wait for a 61.8 release.
-Sam
On Mon, Aug 12, 2013 at 10:34 PM,
I got swamped today. I should be able to look tomorrow. Sorry!
-Sam
On Mon, Aug 12, 2013 at 9:39 PM, Stefan Priebe - Profihost AG
wrote:
> Did you take a look?
>
> Stefan
>
> On 11.08.2013 at 05:50, Samuel Just wrote:
>
>> Great! I'll take a look on Monday.
>> -Sam
>>
>> On Sat, Aug 10, 2013
Did you take a look?
Stefan
On 11.08.2013 at 05:50, Samuel Just wrote:
> Great! I'll take a look on Monday.
> -Sam
>
> On Sat, Aug 10, 2013 at 12:08 PM, Stefan Priebe wrote:
>> Hi Samuel,
>>
>> On 09.08.2013 23:44, Samuel Just wrote:
>>
>>> I think Stefan's problem is probably distinct
Great! I'll take a look on Monday.
-Sam
On Sat, Aug 10, 2013 at 12:08 PM, Stefan Priebe wrote:
> Hi Samuel,
>
> On 09.08.2013 23:44, Samuel Just wrote:
>
>> I think Stefan's problem is probably distinct from Mike's.
>>
>> Stefan: Can you reproduce the problem with
>>
>> debug osd = 20
>> debug
Hi Samuel,
On 09.08.2013 23:44, Samuel Just wrote:
I think Stefan's problem is probably distinct from Mike's.
Stefan: Can you reproduce the problem with
debug osd = 20
debug filestore = 20
debug ms = 1
debug optracker = 20
on a few osds (including the restarted osd), and upload those osd logs
I think Stefan's problem is probably distinct from Mike's.
Stefan: Can you reproduce the problem with
debug osd = 20
debug filestore = 20
debug ms = 1
debug optracker = 20
on a few osds (including the restarted osd), and upload those osd logs
along with the ceph.log from before killing the osd until
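A sketch of how those levels might be raised without restarting the
daemons, assuming injectargs is available on this release; osd.3 is only
an example id, repeat for each OSD of interest (including the restarted
one):

  ceph tell osd.3 injectargs \
      '--debug-osd 20 --debug-filestore 20 --debug-ms 1 --debug-optracker 20'

  # the same options can go into ceph.conf under [osd] instead:
  #   debug osd = 20
  #   debug filestore = 20
  #   debug ms = 1
  #   debug optracker = 20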
Stefan,
I see the same behavior and I theorize it is linked to an issue detailed
in another thread [0]. Do your VM guests ever hang while your cluster is
HEALTH_OK like described in that other thread?
[0] http://comments.gmane.org/gmane.comp.file-systems.ceph.user/2982
A few observations:
-
Hi,
osd recovery max active = 1
osd max backfills = 1
osd recovery op priority = 5
still no difference...
Stefan
On 02.08.2013 20:21, Samuel Just wrote:
Also, you have osd_recovery_op_priority at 50. That is close to the
priority of client IO. You want it below 10
Also, you have osd_recovery_op_priority at 50. That is close to the
priority of client IO. You want it below 10 (defaults to 10), perhaps
at 1. You can also adjust down osd_recovery_max_active.
-Sam
On Fri, Aug 2, 2013 at 11:16 AM, Stefan Priebe wrote:
> I already tried both values this makes
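For reference, a minimal ceph.conf sketch of the throttling Sam describes;
the values are only illustrative, the point being a recovery priority well
below 10 and small concurrency limits:

  [osd]
      osd recovery op priority = 1   # default 10, keep it well below client IO
      osd recovery max active = 1    # concurrent recovery ops per OSD
      osd max backfills = 1          # concurrent backfills per OSD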
I already tried both values; it makes no difference. The drives are not
the bottleneck.
On 02.08.2013 19:35, Samuel Just wrote:
You might try turning osd_max_backfills to 2 or 1.
-Sam
On Fri, Aug 2, 2013 at 12:44 AM, Stefan Priebe wrote:
On 01.08.2013 23:23, Samuel Just wrote: > Can you
Created #5844.
On Thu, Aug 1, 2013 at 10:38 PM, Samuel Just wrote:
> Is there a bug open for this? I suspect we don't sufficiently
> throttle the snapshot removal work.
> -Sam
>
> On Thu, Aug 1, 2013 at 7:50 AM, Andrey Korolyov wrote:
>> Second this. Also for long-lasting snapshot problem and r
You might try turning osd_max_backfills to 2 or 1.
-Sam
On Fri, Aug 2, 2013 at 12:44 AM, Stefan Priebe wrote:
> On 01.08.2013 23:23, Samuel Just wrote: > Can you dump your osd settings?
>
>> sudo ceph --admin-daemon ceph-osd..asok config show
>
> Sure.
>
>
>
> { "name": "osd.0",
> "cluster": "
On 01.08.2013 23:23, Samuel Just wrote: > Can you dump your osd settings?
> sudo ceph --admin-daemon ceph-osd..asok config show
Sure.
{ "name": "osd.0",
"cluster": "ceph",
"none": "0\/5",
"lockdep": "0\/0",
"context": "0\/0",
"crush": "0\/0",
"mds": "0\/0",
"mds_balancer": "0\/0
Can you dump your osd settings?
sudo ceph --admin-daemon ceph-osd..asok config show
-Sam
On Thu, Aug 1, 2013 at 12:07 PM, Stefan Priebe wrote:
> Mike, we already have the async patch running. Yes, it helps, but it only
> helps, it does not solve the problem. It just hides the issue ...
> On 01.08.2013 20:54, Mike Dawson wrote:
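For completeness, a usage sketch of the admin-socket dump Sam is asking
for; the quoted command lost its OSD id, so /var/run/ceph/ceph-osd.0.asok
below is only an example of the default socket path, run on the node
hosting that OSD:

  sudo ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok config show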
Mike, we already have the async patch running. Yes, it helps, but it only
helps, it does not solve the problem. It just hides the issue ...
On 01.08.2013 20:54, Mike Dawson wrote:
I am also seeing recovery issues with 0.61.7. Here's the process:
- ceph osd set noout
- Reboot one of the nodes hosting OSDs
I am also seeing recovery issues with 0.61.7. Here's the process:
- ceph osd set noout
- Reboot one of the nodes hosting OSDs
- VMs mounted from RBD volumes work properly
- I see the OSD's boot messages as they re-join the cluster
- Start seeing active+recovery_wait, peering, and active+re
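A sketch of that maintenance procedure with the flag commands spelled out;
hostnames, timing and the exact PG states observed are of course
site-specific:

  ceph osd set noout        # keep the cluster from marking the node's OSDs out
  # ... reboot the node, wait for its OSDs to rejoin ...
  ceph -w                   # watch PG states (peering, recovery_wait, recovering)
  ceph osd unset noout      # re-enable automatic out-marking afterwards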
Here it is.
On 01.08.2013 20:36, Samuel Just wrote:
For now, just the main ceph.log.
-Sam
On Thu, Aug 1, 2013 at 11:34 AM, Stefan Priebe wrote:
On 01.08.2013 20:34, Samuel Just wrote:
Can you reproduce and attach the ceph.log from before you stop the osd
until after you have started the o
Is there a bug open for this? I suspect we don't sufficiently
throttle the snapshot removal work.
-Sam
On Thu, Aug 1, 2013 at 7:50 AM, Andrey Korolyov wrote:
> Second this. Also for long-lasting snapshot problem and related
> performance issues I may say that cuttlefish improved things greatly,
It doesn't have log levels, should be in /var/log/ceph/ceph.log.
-Sam
On Thu, Aug 1, 2013 at 11:36 AM, Samuel Just wrote:
> For now, just the main ceph.log.
> -Sam
>
> On Thu, Aug 1, 2013 at 11:34 AM, Stefan Priebe wrote:
>> On 01.08.2013 20:34, Samuel Just wrote:
>>
>>> Can you reproduce and a
For now, just the main ceph.log.
-Sam
On Thu, Aug 1, 2013 at 11:34 AM, Stefan Priebe wrote:
> On 01.08.2013 20:34, Samuel Just wrote:
>
>> Can you reproduce and attach the ceph.log from before you stop the osd
>> until after you have started the osd and it has recovered?
>> -Sam
>
>
> Sure which
On 01.08.2013 20:34, Samuel Just wrote:
Can you reproduce and attach the ceph.log from before you stop the osd
until after you have started the osd and it has recovered?
-Sam
Sure, which log levels?
On Thu, Aug 1, 2013 at 1:22 AM, Stefan Priebe - Profihost AG
wrote:
Hi,
I still have recovery
Can you reproduce and attach the ceph.log from before you stop the osd
until after you have started the osd and it has recovered?
-Sam
On Thu, Aug 1, 2013 at 1:22 AM, Stefan Priebe - Profihost AG
wrote:
> Hi,
>
> I still have recovery issues with cuttlefish. After the OSD comes back
> it seems to
Second this. Also, regarding the long-lasting snapshot problem and related
performance issues, I can say that cuttlefish improved things greatly,
but creation/deletion of a large snapshot (hundreds of gigabytes of
committed data) can still bring down the cluster for minutes, despite
usage of every possible optimization