Re: still recovery issues with cuttlefish

2013-08-22 Thread Stefan Priebe - Profihost AG
On 22.08.2013 05:34, Samuel Just wrote:
> It's not really possible at this time to control that limit because
> changing the primary is actually fairly expensive and doing it
> unnecessarily would probably make the situation much worse

I'm sorry, but remapping or backfilling is far less expensive on all of
my machines than recovery.

While backfilling I see around 8-10% I/O wait, while under recovery I
see 40-50%.


> (it's
> mostly necessary for backfilling, which is expensive anyway).  It
> seems like forwarding IO on an object which needs to be recovered to a
> replica with the object would be the next step.  Certainly something
> to consider for the future.

Yes this would be the solution.

Stefan

> -Sam
> 
> On Wed, Aug 21, 2013 at 12:37 PM, Stefan Priebe  wrote:
>> Hi Sam,
>> On 21.08.2013 21:13, Samuel Just wrote:
>>
>>> As long as the request is for an object which is up to date on the
>>> primary, the request will be served without waiting for recovery.
>>
>>
>> Sure, but remember that with a random 4K VM workload a lot of objects go
>> out of date pretty soon.
>>
>>
>>> A request only waits on recovery if the particular object being read or
>>>
>>> written must be recovered.
>>
>>
>> Yes, but under a 4K load this can be a lot.
>>
>>
>>> Your issue was that recovering the
>>> particular object being requested was unreasonably slow due to
>>> silliness in the recovery code which you disabled by disabling
>>> osd_recover_clone_overlap.
>>
>>
>> Yes and no. It's better now, but far from good or perfect. My VMs
>> no longer crash, but I still get a bunch of slow requests (around
>> 10 messages) and still a VERY high I/O load on the disks during recovery.
>>
>>
>>> In cases where the primary osd is significantly behind, we do make one
>>> of the other osds primary during recovery in order to expedite
>>> requests (pgs in this state are shown as remapped).
>>
>>
>> Oh, I've never seen that, but at least in my case even 60s is a very long
>> timeframe and the OSD is very stressed during recovery. Is it possible for
>> me to set this value?
>>
>>
>> Stefan
>>
>>> -Sam
>>>
>>> On Wed, Aug 21, 2013 at 11:21 AM, Stefan Priebe 
>>> wrote:

 On 21.08.2013 17:32, Samuel Just wrote:

> Have you tried setting osd_recovery_clone_overlap to false?  That
> seemed to help with Stefan's issue.



 This might sound a bit harsh, but maybe that's due to my limited English
 skills ;-)

 I still think that Ceph's recovery system is broken by design. If an OSD
 comes back (was offline), all write requests for PGs where this one is
 primary are targeted immediately to this OSD. If it is not up to date for
 a PG, it tries to recover that object immediately, which costs 4MB per
 block. If you have a lot of small writes all over your OSDs and PGs,
 you're stuck, as your OSD has to recover ALL of its PGs immediately, or
 at least lots of them, WHICH can't work. This is totally crazy.

 I think the right way would be:
 1.) if an OSD goes down, the replicas become primaries

 or

 2.) an OSD which does not have an up-to-date copy of a PG should redirect
 to the OSD holding the second or third replica.

 Both would result in a really smooth and slow recovery without any
 stress, even under heavy 4K workloads like rbd-backed VMs.

 Thanks for reading!

 Greets Stefan



> -Sam
>
> On Wed, Aug 21, 2013 at 8:28 AM, Mike Dawson 
> wrote:
>>
>>
>> Sam/Josh,
>>
>> We upgraded from 0.61.7 to 0.67.1 during a maintenance window this
>> morning,
>> hoping it would improve this situation, but there was no appreciable
>> change.
>>
>> One node in our cluster fsck'ed after a reboot and got a bit behind.
>> Our
>> instances backed by RBD volumes were OK at that point, but once the
>> node
>> booted fully and the OSDs started, all Windows instances with rbd
>> volumes
>> experienced very choppy performance and were unable to ingest video
>> surveillance traffic and commit it to disk. Once the cluster got back
>> to
>> HEALTH_OK, they resumed normal operation.
>>
>> I tried for a time with conservative recovery settings (osd max
>> backfills
>> =
>> 1, osd recovery op priority = 1, and osd recovery max active = 1). No
>> improvement for the guests. So I went to more aggressive settings to
>> get
>> things moving faster. That decreased the duration of the outage.
>>
>> During the entire period of recovery/backfill, the network looked
>> fine...no
>> where close to saturation. iowait on all drives look fine as well.
>>
>> Any ideas?
>>
>> Thanks,
>> Mike Dawson
>>
>>
>>
>> On 8/14/2013 3:04 AM, Stefan Priebe - Profihost AG wrote:
>>>
>>>
>>>
>>> the same problem still occurs. Will need to check when I've time to
>>> gather logs again.

Re: still recovery issues with cuttlefish

2013-08-21 Thread Samuel Just
It's not really possible at this time to control that limit because
changing the primary is actually fairly expensive and doing it
unnecessarily would probably make the situation much worse (it's
mostly necessary for backfilling, which is expensive anyway).  It
seems like forwarding IO on an object which needs to be recovered to a
replica with the object would be the next step.  Certainly something
to consider for the future.
-Sam

On Wed, Aug 21, 2013 at 12:37 PM, Stefan Priebe  wrote:
> Hi Sam,
> Am 21.08.2013 21:13, schrieb Samuel Just:
>
>> As long as the request is for an object which is up to date on the
>> primary, the request will be served without waiting for recovery.
>
>
> Sure but remember if you have VM random 4K workload a lot of objects go out
> of date pretty soon.
>
>
>> A request only waits on recovery if the particular object being read or
>>
>> written must be recovered.
>
>
> Yes but on 4k load this can be a lot.
>
>
>> Your issue was that recovering the
>> particular object being requested was unreasonably slow due to
>> silliness in the recovery code which you disabled by disabling
>> osd_recover_clone_overlap.
>
>
> Yes and no. It's better now but far away from being good or perfect. My VMs
> do not crash anymore but i still have a bunch of slow requests (just around
> 10 messages) and still a VERY high I/O load on the disks during recovery.
>
>
>> In cases where the primary osd is significantly behind, we do make one
>> of the other osds primary during recovery in order to expedite
>> requests (pgs in this state are shown as remapped).
>
>
> oh never seen that but at least in my case even 60s are a very long
> timeframe and the OSD is very stressed during recovery. Is it possible for
> me to set this value?
>
>
> Stefan
>
>> -Sam
>>
>> On Wed, Aug 21, 2013 at 11:21 AM, Stefan Priebe 
>> wrote:
>>>
>>> Am 21.08.2013 17:32, schrieb Samuel Just:
>>>
 Have you tried setting osd_recovery_clone_overlap to false?  That
 seemed to help with Stefan's issue.
>>>
>>>
>>>
>>> This might sound a bug harsh but maybe due to my limited english skills
>>> ;-)
>>>
>>> I still think that Cephs recovery system is broken by design. If an OSD
>>> comes back (was offline) all write requests regarding PGs where this one
>>> is
>>> primary are targeted immediatly to this OSD. If this one is not up2date
>>> for
>>> an PG it tries to recover that one immediatly which costs 4MB / block. If
>>> you have a lot of small write all over your OSDs and PGs you're sucked as
>>> your OSD has to recover ALL it's PGs immediatly or at least lots of them
>>> WHICH can't work. This is totally crazy.
>>>
>>> I think the right way would be:
>>> 1.) if an OSD goes down the replicas got primaries
>>>
>>> or
>>>
>>> 2.) an OSD which does not have an up2date PG should redirect to the OSD
>>> holding the secondary or third replica.
>>>
>>> Both results in being able to have a really smooth and slow recovery
>>> without
>>> any stress even under heavy 4K workloads like rbd backed VMs.
>>>
>>> Thanks for reading!
>>>
>>> Greets Stefan
>>>
>>>
>>>
 -Sam

 On Wed, Aug 21, 2013 at 8:28 AM, Mike Dawson 
 wrote:
>
>
> Sam/Josh,
>
> We upgraded from 0.61.7 to 0.67.1 during a maintenance window this
> morning,
> hoping it would improve this situation, but there was no appreciable
> change.
>
> One node in our cluster fsck'ed after a reboot and got a bit behind.
> Our
> instances backed by RBD volumes were OK at that point, but once the
> node
> booted fully and the OSDs started, all Windows instances with rbd
> volumes
> experienced very choppy performance and were unable to ingest video
> surveillance traffic and commit it to disk. Once the cluster got back
> to
> HEALTH_OK, they resumed normal operation.
>
> I tried for a time with conservative recovery settings (osd max
> backfills
> =
> 1, osd recovery op priority = 1, and osd recovery max active = 1). No
> improvement for the guests. So I went to more aggressive settings to
> get
> things moving faster. That decreased the duration of the outage.
>
> During the entire period of recovery/backfill, the network looked
> fine...no
> where close to saturation. iowait on all drives look fine as well.
>
> Any ideas?
>
> Thanks,
> Mike Dawson
>
>
>
> On 8/14/2013 3:04 AM, Stefan Priebe - Profihost AG wrote:
>>
>>
>>
>> the same problem still occours. Will need to check when i've time to
>> gather logs again.
>>
>> Am 14.08.2013 01:11, schrieb Samuel Just:
>>>
>>>
>>>
>>> I'm not sure, but your logs did show that you had >16 recovery ops in
>>> flight, so it's worth a try.  If it doesn't help, you should collect
>>> the same set of logs I'll look again.  Also, there are a few other
>>> patches between 61.7 and current cuttlefish which may help.
>>> -Sam

Re: still recovery issues with cuttlefish

2013-08-21 Thread Stefan Priebe

Hi Sam,
On 21.08.2013 21:13, Samuel Just wrote:

As long as the request is for an object which is up to date on the
primary, the request will be served without waiting for recovery.


Sure, but remember that with a random 4K VM workload a lot of objects go
out of date pretty soon.


A request only waits on recovery if the particular object being read or

written must be recovered.


Yes, but under a 4K load this can be a lot.


Your issue was that recovering the
particular object being requested was unreasonably slow due to
silliness in the recovery code which you disabled by disabling
osd_recover_clone_overlap.


Yes and no. It's better now, but far from good or perfect. My VMs no
longer crash, but I still get a bunch of slow requests (around 10
messages) and still see a VERY high I/O load on the disks during
recovery.



In cases where the primary osd is significantly behind, we do make one
of the other osds primary during recovery in order to expedite
requests (pgs in this state are shown as remapped).


Oh, I've never seen that, but at least in my case even 60s is a very long
timeframe and the OSD is very stressed during recovery. Is it possible
for me to set this value?


Stefan


-Sam

On Wed, Aug 21, 2013 at 11:21 AM, Stefan Priebe  wrote:

Am 21.08.2013 17:32, schrieb Samuel Just:


Have you tried setting osd_recovery_clone_overlap to false?  That
seemed to help with Stefan's issue.



This might sound a bug harsh but maybe due to my limited english skills ;-)

I still think that Cephs recovery system is broken by design. If an OSD
comes back (was offline) all write requests regarding PGs where this one is
primary are targeted immediatly to this OSD. If this one is not up2date for
an PG it tries to recover that one immediatly which costs 4MB / block. If
you have a lot of small write all over your OSDs and PGs you're sucked as
your OSD has to recover ALL it's PGs immediatly or at least lots of them
WHICH can't work. This is totally crazy.

I think the right way would be:
1.) if an OSD goes down the replicas got primaries

or

2.) an OSD which does not have an up2date PG should redirect to the OSD
holding the secondary or third replica.

Both results in being able to have a really smooth and slow recovery without
any stress even under heavy 4K workloads like rbd backed VMs.

Thanks for reading!

Greets Stefan




-Sam

On Wed, Aug 21, 2013 at 8:28 AM, Mike Dawson 
wrote:


Sam/Josh,

We upgraded from 0.61.7 to 0.67.1 during a maintenance window this
morning,
hoping it would improve this situation, but there was no appreciable
change.

One node in our cluster fsck'ed after a reboot and got a bit behind. Our
instances backed by RBD volumes were OK at that point, but once the node
booted fully and the OSDs started, all Windows instances with rbd volumes
experienced very choppy performance and were unable to ingest video
surveillance traffic and commit it to disk. Once the cluster got back to
HEALTH_OK, they resumed normal operation.

I tried for a time with conservative recovery settings (osd max backfills
=
1, osd recovery op priority = 1, and osd recovery max active = 1). No
improvement for the guests. So I went to more aggressive settings to get
things moving faster. That decreased the duration of the outage.

During the entire period of recovery/backfill, the network looked
fine...no
where close to saturation. iowait on all drives look fine as well.

Any ideas?

Thanks,
Mike Dawson



On 8/14/2013 3:04 AM, Stefan Priebe - Profihost AG wrote:



the same problem still occours. Will need to check when i've time to
gather logs again.

Am 14.08.2013 01:11, schrieb Samuel Just:



I'm not sure, but your logs did show that you had >16 recovery ops in
flight, so it's worth a try.  If it doesn't help, you should collect
the same set of logs I'll look again.  Also, there are a few other
patches between 61.7 and current cuttlefish which may help.
-Sam

On Tue, Aug 13, 2013 at 2:03 PM, Stefan Priebe - Profihost AG
 wrote:




Am 13.08.2013 um 22:43 schrieb Samuel Just :


I just backported a couple of patches from next to fix a bug where we
weren't respecting the osd_recovery_max_active config in some cases
(1ea6b56170fc9e223e7c30635db02fa2ad8f4b4e).  You can either try the
current cuttlefish branch or wait for a 61.8 release.




Thanks! Are you sure that this is the issue? I don't believe that but
i'll give it a try. I already tested a branch from sage where he fixed
a
race regarding max active some weeks ago. So active recovering was max
1 but
the issue didn't went away.

Stefan


-Sam

On Mon, Aug 12, 2013 at 10:34 PM, Samuel Just 
wrote:



I got swamped today.  I should be able to look tomorrow.  Sorry!
-Sam

On Mon, Aug 12, 2013 at 9:39 PM, Stefan Priebe - Profihost AG
 wrote:



Did you take a look?

Stefan

Am 11.08.2013 um 05:50 schrieb Samuel Just :


Great!  I'll take a look on Monday.
-Sam

On Sat, Aug 10, 2013 at 12:08 PM, Stefan Priebe
 wrote:



>>> Hi Samuel,
>>>
>>> On 09.08.2013 23:44, Samuel Just wrote:

Re: still recovery issues with cuttlefish

2013-08-21 Thread Samuel Just
As long as the request is for an object which is up to date on the
primary, the request will be served without waiting for recovery.  A
request only waits on recovery if the particular object being read or
written must be recovered.  Your issue was that recovering the
particular object being requested was unreasonably slow due to
silliness in the recovery code which you disabled by disabling
osd_recover_clone_overlap.

In cases where the primary osd is significantly behind, we do make one
of the other osds primary during recovery in order to expedite
requests (pgs in this state are shown as remapped).
-Sam

On Wed, Aug 21, 2013 at 11:21 AM, Stefan Priebe  wrote:
> Am 21.08.2013 17:32, schrieb Samuel Just:
>
>> Have you tried setting osd_recovery_clone_overlap to false?  That
>> seemed to help with Stefan's issue.
>
>
> This might sound a bug harsh but maybe due to my limited english skills ;-)
>
> I still think that Cephs recovery system is broken by design. If an OSD
> comes back (was offline) all write requests regarding PGs where this one is
> primary are targeted immediatly to this OSD. If this one is not up2date for
> an PG it tries to recover that one immediatly which costs 4MB / block. If
> you have a lot of small write all over your OSDs and PGs you're sucked as
> your OSD has to recover ALL it's PGs immediatly or at least lots of them
> WHICH can't work. This is totally crazy.
>
> I think the right way would be:
> 1.) if an OSD goes down the replicas got primaries
>
> or
>
> 2.) an OSD which does not have an up2date PG should redirect to the OSD
> holding the secondary or third replica.
>
> Both results in being able to have a really smooth and slow recovery without
> any stress even under heavy 4K workloads like rbd backed VMs.
>
> Thanks for reading!
>
> Greets Stefan
>
>
>
>> -Sam
>>
>> On Wed, Aug 21, 2013 at 8:28 AM, Mike Dawson 
>> wrote:
>>>
>>> Sam/Josh,
>>>
>>> We upgraded from 0.61.7 to 0.67.1 during a maintenance window this
>>> morning,
>>> hoping it would improve this situation, but there was no appreciable
>>> change.
>>>
>>> One node in our cluster fsck'ed after a reboot and got a bit behind. Our
>>> instances backed by RBD volumes were OK at that point, but once the node
>>> booted fully and the OSDs started, all Windows instances with rbd volumes
>>> experienced very choppy performance and were unable to ingest video
>>> surveillance traffic and commit it to disk. Once the cluster got back to
>>> HEALTH_OK, they resumed normal operation.
>>>
>>> I tried for a time with conservative recovery settings (osd max backfills
>>> =
>>> 1, osd recovery op priority = 1, and osd recovery max active = 1). No
>>> improvement for the guests. So I went to more aggressive settings to get
>>> things moving faster. That decreased the duration of the outage.
>>>
>>> During the entire period of recovery/backfill, the network looked
>>> fine...no
>>> where close to saturation. iowait on all drives look fine as well.
>>>
>>> Any ideas?
>>>
>>> Thanks,
>>> Mike Dawson
>>>
>>>
>>>
>>> On 8/14/2013 3:04 AM, Stefan Priebe - Profihost AG wrote:


 the same problem still occours. Will need to check when i've time to
 gather logs again.

 Am 14.08.2013 01:11, schrieb Samuel Just:
>
>
> I'm not sure, but your logs did show that you had >16 recovery ops in
> flight, so it's worth a try.  If it doesn't help, you should collect
> the same set of logs I'll look again.  Also, there are a few other
> patches between 61.7 and current cuttlefish which may help.
> -Sam
>
> On Tue, Aug 13, 2013 at 2:03 PM, Stefan Priebe - Profihost AG
>  wrote:
>>
>>
>>
>> Am 13.08.2013 um 22:43 schrieb Samuel Just :
>>
>>> I just backported a couple of patches from next to fix a bug where we
>>> weren't respecting the osd_recovery_max_active config in some cases
>>> (1ea6b56170fc9e223e7c30635db02fa2ad8f4b4e).  You can either try the
>>> current cuttlefish branch or wait for a 61.8 release.
>>
>>
>>
>> Thanks! Are you sure that this is the issue? I don't believe that but
>> i'll give it a try. I already tested a branch from sage where he fixed
>> a
>> race regarding max active some weeks ago. So active recovering was max
>> 1 but
>> the issue didn't went away.
>>
>> Stefan
>>
>>> -Sam
>>>
>>> On Mon, Aug 12, 2013 at 10:34 PM, Samuel Just 
>>> wrote:


 I got swamped today.  I should be able to look tomorrow.  Sorry!
 -Sam

 On Mon, Aug 12, 2013 at 9:39 PM, Stefan Priebe - Profihost AG
  wrote:
>
>
> Did you take a look?
>
> Stefan
>
> Am 11.08.2013 um 05:50 schrieb Samuel Just :
>
>> Great!  I'll take a look on Monday.
>> -Sam
>>
>> On Sat, Aug 10, 2013 at 12:08 PM, Stefan Priebe
>>  wrote:
>>>
>>>

Re: still recovery issues with cuttlefish

2013-08-21 Thread Stefan Priebe

On 21.08.2013 17:32, Samuel Just wrote:

Have you tried setting osd_recovery_clone_overlap to false?  That
seemed to help with Stefan's issue.


This might sound a bit harsh, but maybe that's due to my limited English
skills ;-)

I still think that Ceph's recovery system is broken by design. If an OSD
comes back (was offline), all write requests for PGs where this one is
primary are targeted immediately to this OSD. If it is not up to date for
a PG, it tries to recover that object immediately, which costs 4MB per
block. If you have a lot of small writes all over your OSDs and PGs,
you're stuck, as your OSD has to recover ALL of its PGs immediately, or at
least lots of them, WHICH can't work. This is totally crazy.


I think the right way would be:
1.) if an OSD goes down, the replicas become primaries

or

2.) an OSD which does not have an up-to-date copy of a PG should redirect
to the OSD holding the second or third replica.


Both would result in a really smooth and slow recovery without any stress,
even under heavy 4K workloads like rbd-backed VMs.


Thanks for reading!

Greets Stefan



-Sam

On Wed, Aug 21, 2013 at 8:28 AM, Mike Dawson  wrote:

Sam/Josh,

We upgraded from 0.61.7 to 0.67.1 during a maintenance window this morning,
hoping it would improve this situation, but there was no appreciable change.

One node in our cluster fsck'ed after a reboot and got a bit behind. Our
instances backed by RBD volumes were OK at that point, but once the node
booted fully and the OSDs started, all Windows instances with rbd volumes
experienced very choppy performance and were unable to ingest video
surveillance traffic and commit it to disk. Once the cluster got back to
HEALTH_OK, they resumed normal operation.

I tried for a time with conservative recovery settings (osd max backfills =
1, osd recovery op priority = 1, and osd recovery max active = 1). No
improvement for the guests. So I went to more aggressive settings to get
things moving faster. That decreased the duration of the outage.

During the entire period of recovery/backfill, the network looked fine...no
where close to saturation. iowait on all drives look fine as well.

Any ideas?

Thanks,
Mike Dawson



On 8/14/2013 3:04 AM, Stefan Priebe - Profihost AG wrote:


the same problem still occours. Will need to check when i've time to
gather logs again.

Am 14.08.2013 01:11, schrieb Samuel Just:


I'm not sure, but your logs did show that you had >16 recovery ops in
flight, so it's worth a try.  If it doesn't help, you should collect
the same set of logs I'll look again.  Also, there are a few other
patches between 61.7 and current cuttlefish which may help.
-Sam

On Tue, Aug 13, 2013 at 2:03 PM, Stefan Priebe - Profihost AG
 wrote:



Am 13.08.2013 um 22:43 schrieb Samuel Just :


I just backported a couple of patches from next to fix a bug where we
weren't respecting the osd_recovery_max_active config in some cases
(1ea6b56170fc9e223e7c30635db02fa2ad8f4b4e).  You can either try the
current cuttlefish branch or wait for a 61.8 release.



Thanks! Are you sure that this is the issue? I don't believe that but
i'll give it a try. I already tested a branch from sage where he fixed a
race regarding max active some weeks ago. So active recovering was max 1 but
the issue didn't went away.

Stefan


-Sam

On Mon, Aug 12, 2013 at 10:34 PM, Samuel Just 
wrote:


I got swamped today.  I should be able to look tomorrow.  Sorry!
-Sam

On Mon, Aug 12, 2013 at 9:39 PM, Stefan Priebe - Profihost AG
 wrote:


Did you take a look?

Stefan

Am 11.08.2013 um 05:50 schrieb Samuel Just :


Great!  I'll take a look on Monday.
-Sam

On Sat, Aug 10, 2013 at 12:08 PM, Stefan Priebe
 wrote:


Hi Samual,

Am 09.08.2013 23:44, schrieb Samuel Just:


I think Stefan's problem is probably distinct from Mike's.

Stefan: Can you reproduce the problem with

debug osd = 20
debug filestore = 20
debug ms = 1
debug optracker = 20

on a few osds (including the restarted osd), and upload those osd
logs
along with the ceph.log from before killing the osd until after
the
cluster becomes clean again?




done - you'll find the logs at cephdrop folder:
slow_requests_recovering_cuttlefish

osd.52 was the one recovering

Thanks!

Greets,
Stefan



Re: still recovery issues with cuttlefish

2013-08-21 Thread Samuel Just
If the Raring guest was fine, I suspect that the issue is not on the OSDs.
-Sam

On Wed, Aug 21, 2013 at 10:55 AM, Mike Dawson  wrote:
> Sam,
>
> Tried it. Injected with 'ceph tell osd.* injectargs --
> --no_osd_recover_clone_overlap', then stopped one OSD for ~1 minute. Upon
> restart, all my Windows VMs have issues until HEALTH_OK.
>
> The recovery was taking an abnormally long time, so I reverted away from
> --no_osd_recover_clone_overlap after about 10mins, to get back to HEALTH_OK.
>
> Interestingly, a Raring guest running a different video surveillance package
> proceeded without any issue whatsoever.
>
> Here is an image of the traffic to some of these Windows guests:
>
> http://www.gammacode.com/upload/rbd-hang-with-clone-overlap.jpg
>
> Ceph is outside of HEALTH_OK between ~12:55 and 13:10. Most of these
> instances rebooted due to an app error caused by the i/o hang shortly after
> 13:10.
>
> These Windows instances are booted as COW clones from a Glance image using
> Cinder. They also have a second RBD volume for bulk storage. I'm using qemu
> 1.5.2.
>
> Thanks,
> Mike
>
>
>
> On 8/21/2013 1:12 PM, Samuel Just wrote:
>>
>> Ah, thanks for the correction.
>> -Sam
>>
>> On Wed, Aug 21, 2013 at 9:25 AM, Yann ROBIN 
>> wrote:
>>>
>>> It's osd recover clone overlap (see http://tracker.ceph.com/issues/5401)
>>>
>>> -Original Message-
>>> From: ceph-devel-ow...@vger.kernel.org
>>> [mailto:ceph-devel-ow...@vger.kernel.org] On Behalf Of Samuel Just
>>> Sent: Wednesday, 21 August 2013 17:33
>>> To: Mike Dawson
>>> Cc: Stefan Priebe - Profihost AG; josh.dur...@inktank.com;
>>> ceph-devel@vger.kernel.org
>>> Subject: Re: still recovery issues with cuttlefish
>>>
>>> Have you tried setting osd_recovery_clone_overlap to false?  That seemed
>>> to help with Stefan's issue.
>>> -Sam
>>>
>>> On Wed, Aug 21, 2013 at 8:28 AM, Mike Dawson 
>>> wrote:
>>>>
>>>> Sam/Josh,
>>>>
>>>> We upgraded from 0.61.7 to 0.67.1 during a maintenance window this
>>>> morning, hoping it would improve this situation, but there was no
>>>> appreciable change.
>>>>
>>>> One node in our cluster fsck'ed after a reboot and got a bit behind.
>>>> Our instances backed by RBD volumes were OK at that point, but once
>>>> the node booted fully and the OSDs started, all Windows instances with
>>>> rbd volumes experienced very choppy performance and were unable to
>>>> ingest video surveillance traffic and commit it to disk. Once the
>>>> cluster got back to HEALTH_OK, they resumed normal operation.
>>>>
>>>> I tried for a time with conservative recovery settings (osd max
>>>> backfills = 1, osd recovery op priority = 1, and osd recovery max
>>>> active = 1). No improvement for the guests. So I went to more
>>>> aggressive settings to get things moving faster. That decreased the
>>>> duration of the outage.
>>>>
>>>> During the entire period of recovery/backfill, the network looked
>>>> fine...no where close to saturation. iowait on all drives look fine as
>>>> well.
>>>>
>>>> Any ideas?
>>>>
>>>> Thanks,
>>>> Mike Dawson
>>>>
>>>>
>>>>
>>>> On 8/14/2013 3:04 AM, Stefan Priebe - Profihost AG wrote:
>>>>>
>>>>>
>>>>> the same problem still occours. Will need to check when i've time to
>>>>> gather logs again.
>>>>>
>>>>> Am 14.08.2013 01:11, schrieb Samuel Just:
>>>>>>
>>>>>>
>>>>>> I'm not sure, but your logs did show that you had >16 recovery ops
>>>>>> in flight, so it's worth a try.  If it doesn't help, you should
>>>>>> collect the same set of logs I'll look again.  Also, there are a few
>>>>>> other patches between 61.7 and current cuttlefish which may help.
>>>>>> -Sam
>>>>>>
>>>>>> On Tue, Aug 13, 2013 at 2:03 PM, Stefan Priebe - Profihost AG
>>>>>>  wrote:
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> Am 13.08.2013 um 22:43 schrieb Samuel Just :
>>>>>>>
>>>>>>>> I just backported a couple of patches from next to fix a bug where
>>>>>>>> we weren't respecting the osd_recovery_max_active config in some cases
>>>>>>>> (1ea6b56170fc9e223e7c30635db02fa2ad8f4b4e).  You can either try the
>>>>>>>> current cuttlefish branch or wait for a 61.8 release.

Re: still recovery issues with cuttlefish

2013-08-21 Thread Mike Dawson

Sam,

Tried it. Injected with 'ceph tell osd.* injectargs -- 
--no_osd_recover_clone_overlap', then stopped one OSD for ~1 minute. 
Upon restart, all my Windows VMs have issues until HEALTH_OK.


The recovery was taking an abnormally long time, so I reverted away from 
--no_osd_recover_clone_overlap after about 10mins, to get back to HEALTH_OK.
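
For reference, a minimal sketch of toggling that option at runtime; the
disable form is the one quoted above, while the re-enable spelling is an
assumption:

  # disable clone-overlap handling during recovery on all OSDs
  ceph tell osd.* injectargs -- --no_osd_recover_clone_overlap

  # assumed revert: set the option back to true
  ceph tell osd.* injectargs -- --osd_recover_clone_overlap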


Interestingly, a Raring guest running a different video surveillance 
package proceeded without any issue whatsoever.


Here is an image of the traffic to some of these Windows guests:

http://www.gammacode.com/upload/rbd-hang-with-clone-overlap.jpg

Ceph is outside of HEALTH_OK between ~12:55 and 13:10. Most of these 
instances rebooted due to an app error caused by the i/o hang shortly 
after 13:10.


These Windows instances are booted as COW clones from a Glance image 
using Cinder. They also have a second RBD volume for bulk storage. I'm 
using qemu 1.5.2.


Thanks,
Mike


On 8/21/2013 1:12 PM, Samuel Just wrote:

Ah, thanks for the correction.
-Sam

On Wed, Aug 21, 2013 at 9:25 AM, Yann ROBIN  wrote:

It's osd recover clone overlap (see http://tracker.ceph.com/issues/5401)

-Original Message-
From: ceph-devel-ow...@vger.kernel.org 
[mailto:ceph-devel-ow...@vger.kernel.org] On Behalf Of Samuel Just
Sent: Wednesday, 21 August 2013 17:33
To: Mike Dawson
Cc: Stefan Priebe - Profihost AG; josh.dur...@inktank.com; 
ceph-devel@vger.kernel.org
Subject: Re: still recovery issues with cuttlefish

Have you tried setting osd_recovery_clone_overlap to false?  That seemed to 
help with Stefan's issue.
-Sam

On Wed, Aug 21, 2013 at 8:28 AM, Mike Dawson  wrote:

Sam/Josh,

We upgraded from 0.61.7 to 0.67.1 during a maintenance window this
morning, hoping it would improve this situation, but there was no appreciable 
change.

One node in our cluster fsck'ed after a reboot and got a bit behind.
Our instances backed by RBD volumes were OK at that point, but once
the node booted fully and the OSDs started, all Windows instances with
rbd volumes experienced very choppy performance and were unable to
ingest video surveillance traffic and commit it to disk. Once the
cluster got back to HEALTH_OK, they resumed normal operation.

I tried for a time with conservative recovery settings (osd max
backfills = 1, osd recovery op priority = 1, and osd recovery max
active = 1). No improvement for the guests. So I went to more
aggressive settings to get things moving faster. That decreased the duration of 
the outage.

During the entire period of recovery/backfill, the network looked
fine...no where close to saturation. iowait on all drives look fine as well.

Any ideas?

Thanks,
Mike Dawson



On 8/14/2013 3:04 AM, Stefan Priebe - Profihost AG wrote:


the same problem still occours. Will need to check when i've time to
gather logs again.

Am 14.08.2013 01:11, schrieb Samuel Just:


I'm not sure, but your logs did show that you had >16 recovery ops
in flight, so it's worth a try.  If it doesn't help, you should
collect the same set of logs I'll look again.  Also, there are a few
other patches between 61.7 and current cuttlefish which may help.
-Sam

On Tue, Aug 13, 2013 at 2:03 PM, Stefan Priebe - Profihost AG
 wrote:



Am 13.08.2013 um 22:43 schrieb Samuel Just :


I just backported a couple of patches from next to fix a bug where
we weren't respecting the osd_recovery_max_active config in some
cases (1ea6b56170fc9e223e7c30635db02fa2ad8f4b4e).  You can either
try the current cuttlefish branch or wait for a 61.8 release.



Thanks! Are you sure that this is the issue? I don't believe that
but i'll give it a try. I already tested a branch from sage where
he fixed a race regarding max active some weeks ago. So active
recovering was max 1 but the issue didn't went away.

Stefan


-Sam

On Mon, Aug 12, 2013 at 10:34 PM, Samuel Just

wrote:


I got swamped today.  I should be able to look tomorrow.  Sorry!
-Sam

On Mon, Aug 12, 2013 at 9:39 PM, Stefan Priebe - Profihost AG
 wrote:


Did you take a look?

Stefan

Am 11.08.2013 um 05:50 schrieb Samuel Just :


Great!  I'll take a look on Monday.
-Sam

On Sat, Aug 10, 2013 at 12:08 PM, Stefan Priebe
 wrote:


Hi Samual,

Am 09.08.2013 23:44, schrieb Samuel Just:


I think Stefan's problem is probably distinct from Mike's.

Stefan: Can you reproduce the problem with

debug osd = 20
debug filestore = 20
debug ms = 1
debug optracker = 20

on a few osds (including the restarted osd), and upload those
osd logs along with the ceph.log from before killing the osd
until after the cluster becomes clean again?




done - you'll find the logs at cephdrop folder:
slow_requests_recovering_cuttlefish

osd.52 was the one recovering

Thanks!

Greets,
Stefan




Re: still recovery issues with cuttlefish

2013-08-21 Thread Samuel Just
Ah, thanks for the correction.
-Sam

On Wed, Aug 21, 2013 at 9:25 AM, Yann ROBIN  wrote:
> It's osd recover clone overlap (see http://tracker.ceph.com/issues/5401)
>
> -Original Message-
> From: ceph-devel-ow...@vger.kernel.org 
> [mailto:ceph-devel-ow...@vger.kernel.org] On Behalf Of Samuel Just
> Sent: Wednesday, 21 August 2013 17:33
> To: Mike Dawson
> Cc: Stefan Priebe - Profihost AG; josh.dur...@inktank.com; 
> ceph-devel@vger.kernel.org
> Subject: Re: still recovery issues with cuttlefish
>
> Have you tried setting osd_recovery_clone_overlap to false?  That seemed to 
> help with Stefan's issue.
> -Sam
>
> On Wed, Aug 21, 2013 at 8:28 AM, Mike Dawson  wrote:
>> Sam/Josh,
>>
>> We upgraded from 0.61.7 to 0.67.1 during a maintenance window this
>> morning, hoping it would improve this situation, but there was no 
>> appreciable change.
>>
>> One node in our cluster fsck'ed after a reboot and got a bit behind.
>> Our instances backed by RBD volumes were OK at that point, but once
>> the node booted fully and the OSDs started, all Windows instances with
>> rbd volumes experienced very choppy performance and were unable to
>> ingest video surveillance traffic and commit it to disk. Once the
>> cluster got back to HEALTH_OK, they resumed normal operation.
>>
>> I tried for a time with conservative recovery settings (osd max
>> backfills = 1, osd recovery op priority = 1, and osd recovery max
>> active = 1). No improvement for the guests. So I went to more
>> aggressive settings to get things moving faster. That decreased the duration 
>> of the outage.
>>
>> During the entire period of recovery/backfill, the network looked
>> fine...no where close to saturation. iowait on all drives look fine as well.
>>
>> Any ideas?
>>
>> Thanks,
>> Mike Dawson
>>
>>
>>
>> On 8/14/2013 3:04 AM, Stefan Priebe - Profihost AG wrote:
>>>
>>> the same problem still occours. Will need to check when i've time to
>>> gather logs again.
>>>
>>> Am 14.08.2013 01:11, schrieb Samuel Just:
>>>>
>>>> I'm not sure, but your logs did show that you had >16 recovery ops
>>>> in flight, so it's worth a try.  If it doesn't help, you should
>>>> collect the same set of logs I'll look again.  Also, there are a few
>>>> other patches between 61.7 and current cuttlefish which may help.
>>>> -Sam
>>>>
>>>> On Tue, Aug 13, 2013 at 2:03 PM, Stefan Priebe - Profihost AG
>>>>  wrote:
>>>>>
>>>>>
>>>>> Am 13.08.2013 um 22:43 schrieb Samuel Just :
>>>>>
>>>>>> I just backported a couple of patches from next to fix a bug where
>>>>>> we weren't respecting the osd_recovery_max_active config in some
>>>>>> cases (1ea6b56170fc9e223e7c30635db02fa2ad8f4b4e).  You can either
>>>>>> try the current cuttlefish branch or wait for a 61.8 release.
>>>>>
>>>>>
>>>>> Thanks! Are you sure that this is the issue? I don't believe that
>>>>> but i'll give it a try. I already tested a branch from sage where
>>>>> he fixed a race regarding max active some weeks ago. So active
>>>>> recovering was max 1 but the issue didn't went away.
>>>>>
>>>>> Stefan
>>>>>
>>>>>> -Sam
>>>>>>
>>>>>> On Mon, Aug 12, 2013 at 10:34 PM, Samuel Just
>>>>>> 
>>>>>> wrote:
>>>>>>>
>>>>>>> I got swamped today.  I should be able to look tomorrow.  Sorry!
>>>>>>> -Sam
>>>>>>>
>>>>>>> On Mon, Aug 12, 2013 at 9:39 PM, Stefan Priebe - Profihost AG
>>>>>>>  wrote:
>>>>>>>>
>>>>>>>> Did you take a look?
>>>>>>>>
>>>>>>>> Stefan
>>>>>>>>
>>>>>>>> Am 11.08.2013 um 05:50 schrieb Samuel Just :
>>>>>>>>
>>>>>>>>> Great!  I'll take a look on Monday.
>>>>>>>>> -Sam
>>>>>>>>>
>>>>>>>>> On Sat, Aug 10, 2013 at 12:08 PM, Stefan Priebe
>>>>>>>>>  wrote:
>>>>>>>>>>
>>>>>>>>>> Hi Samual,
>>>>>>>>>>

RE: still recovery issues with cuttlefish

2013-08-21 Thread Yann ROBIN
It's osd recover clone overlap (see http://tracker.ceph.com/issues/5401)
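
In ceph.conf terms that would presumably be (a sketch; the exact spelling
of the key is assumed from the option name above):

  [osd]
      osd recover clone overlap = false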

-Original Message-
From: ceph-devel-ow...@vger.kernel.org 
[mailto:ceph-devel-ow...@vger.kernel.org] On Behalf Of Samuel Just
Sent: Wednesday, 21 August 2013 17:33
To: Mike Dawson
Cc: Stefan Priebe - Profihost AG; josh.dur...@inktank.com; 
ceph-devel@vger.kernel.org
Subject: Re: still recovery issues with cuttlefish

Have you tried setting osd_recovery_clone_overlap to false?  That seemed to 
help with Stefan's issue.
-Sam

On Wed, Aug 21, 2013 at 8:28 AM, Mike Dawson  wrote:
> Sam/Josh,
>
> We upgraded from 0.61.7 to 0.67.1 during a maintenance window this 
> morning, hoping it would improve this situation, but there was no appreciable 
> change.
>
> One node in our cluster fsck'ed after a reboot and got a bit behind. 
> Our instances backed by RBD volumes were OK at that point, but once 
> the node booted fully and the OSDs started, all Windows instances with 
> rbd volumes experienced very choppy performance and were unable to 
> ingest video surveillance traffic and commit it to disk. Once the 
> cluster got back to HEALTH_OK, they resumed normal operation.
>
> I tried for a time with conservative recovery settings (osd max 
> backfills = 1, osd recovery op priority = 1, and osd recovery max 
> active = 1). No improvement for the guests. So I went to more 
> aggressive settings to get things moving faster. That decreased the duration 
> of the outage.
>
> During the entire period of recovery/backfill, the network looked 
> fine...no where close to saturation. iowait on all drives look fine as well.
>
> Any ideas?
>
> Thanks,
> Mike Dawson
>
>
>
> On 8/14/2013 3:04 AM, Stefan Priebe - Profihost AG wrote:
>>
>> the same problem still occours. Will need to check when i've time to 
>> gather logs again.
>>
>> Am 14.08.2013 01:11, schrieb Samuel Just:
>>>
>>> I'm not sure, but your logs did show that you had >16 recovery ops 
>>> in flight, so it's worth a try.  If it doesn't help, you should 
>>> collect the same set of logs I'll look again.  Also, there are a few 
>>> other patches between 61.7 and current cuttlefish which may help.
>>> -Sam
>>>
>>> On Tue, Aug 13, 2013 at 2:03 PM, Stefan Priebe - Profihost AG 
>>>  wrote:
>>>>
>>>>
>>>> Am 13.08.2013 um 22:43 schrieb Samuel Just :
>>>>
>>>>> I just backported a couple of patches from next to fix a bug where 
>>>>> we weren't respecting the osd_recovery_max_active config in some 
>>>>> cases (1ea6b56170fc9e223e7c30635db02fa2ad8f4b4e).  You can either 
>>>>> try the current cuttlefish branch or wait for a 61.8 release.
>>>>
>>>>
>>>> Thanks! Are you sure that this is the issue? I don't believe that 
>>>> but i'll give it a try. I already tested a branch from sage where 
>>>> he fixed a race regarding max active some weeks ago. So active 
>>>> recovering was max 1 but the issue didn't went away.
>>>>
>>>> Stefan
>>>>
>>>>> -Sam
>>>>>
>>>>> On Mon, Aug 12, 2013 at 10:34 PM, Samuel Just 
>>>>> 
>>>>> wrote:
>>>>>>
>>>>>> I got swamped today.  I should be able to look tomorrow.  Sorry!
>>>>>> -Sam
>>>>>>
>>>>>> On Mon, Aug 12, 2013 at 9:39 PM, Stefan Priebe - Profihost AG 
>>>>>>  wrote:
>>>>>>>
>>>>>>> Did you take a look?
>>>>>>>
>>>>>>> Stefan
>>>>>>>
>>>>>>> Am 11.08.2013 um 05:50 schrieb Samuel Just :
>>>>>>>
>>>>>>>> Great!  I'll take a look on Monday.
>>>>>>>> -Sam
>>>>>>>>
>>>>>>>> On Sat, Aug 10, 2013 at 12:08 PM, Stefan Priebe 
>>>>>>>>  wrote:
>>>>>>>>>
>>>>>>>>> Hi Samual,
>>>>>>>>>
>>>>>>>>> Am 09.08.2013 23:44, schrieb Samuel Just:
>>>>>>>>>
>>>>>>>>>> I think Stefan's problem is probably distinct from Mike's.
>>>>>>>>>>
>>>>>>>>>> Stefan: Can you reproduce the problem with
>>>>>>>>>>
>>>>>>>>>> debug osd = 20
>>>>>>>>>> debug filestore = 20

Re: still recovery issues with cuttlefish

2013-08-21 Thread Samuel Just
Have you tried setting osd_recovery_clone_overlap to false?  That
seemed to help with Stefan's issue.
-Sam

On Wed, Aug 21, 2013 at 8:28 AM, Mike Dawson  wrote:
> Sam/Josh,
>
> We upgraded from 0.61.7 to 0.67.1 during a maintenance window this morning,
> hoping it would improve this situation, but there was no appreciable change.
>
> One node in our cluster fsck'ed after a reboot and got a bit behind. Our
> instances backed by RBD volumes were OK at that point, but once the node
> booted fully and the OSDs started, all Windows instances with rbd volumes
> experienced very choppy performance and were unable to ingest video
> surveillance traffic and commit it to disk. Once the cluster got back to
> HEALTH_OK, they resumed normal operation.
>
> I tried for a time with conservative recovery settings (osd max backfills =
> 1, osd recovery op priority = 1, and osd recovery max active = 1). No
> improvement for the guests. So I went to more aggressive settings to get
> things moving faster. That decreased the duration of the outage.
>
> During the entire period of recovery/backfill, the network looked fine...no
> where close to saturation. iowait on all drives look fine as well.
>
> Any ideas?
>
> Thanks,
> Mike Dawson
>
>
>
> On 8/14/2013 3:04 AM, Stefan Priebe - Profihost AG wrote:
>>
>> the same problem still occours. Will need to check when i've time to
>> gather logs again.
>>
>> Am 14.08.2013 01:11, schrieb Samuel Just:
>>>
>>> I'm not sure, but your logs did show that you had >16 recovery ops in
>>> flight, so it's worth a try.  If it doesn't help, you should collect
>>> the same set of logs I'll look again.  Also, there are a few other
>>> patches between 61.7 and current cuttlefish which may help.
>>> -Sam
>>>
>>> On Tue, Aug 13, 2013 at 2:03 PM, Stefan Priebe - Profihost AG
>>>  wrote:


 Am 13.08.2013 um 22:43 schrieb Samuel Just :

> I just backported a couple of patches from next to fix a bug where we
> weren't respecting the osd_recovery_max_active config in some cases
> (1ea6b56170fc9e223e7c30635db02fa2ad8f4b4e).  You can either try the
> current cuttlefish branch or wait for a 61.8 release.


 Thanks! Are you sure that this is the issue? I don't believe that but
 i'll give it a try. I already tested a branch from sage where he fixed a
 race regarding max active some weeks ago. So active recovering was max 1 
 but
 the issue didn't went away.

 Stefan

> -Sam
>
> On Mon, Aug 12, 2013 at 10:34 PM, Samuel Just 
> wrote:
>>
>> I got swamped today.  I should be able to look tomorrow.  Sorry!
>> -Sam
>>
>> On Mon, Aug 12, 2013 at 9:39 PM, Stefan Priebe - Profihost AG
>>  wrote:
>>>
>>> Did you take a look?
>>>
>>> Stefan
>>>
>>> Am 11.08.2013 um 05:50 schrieb Samuel Just :
>>>
 Great!  I'll take a look on Monday.
 -Sam

 On Sat, Aug 10, 2013 at 12:08 PM, Stefan Priebe
  wrote:
>
> Hi Samual,
>
> Am 09.08.2013 23:44, schrieb Samuel Just:
>
>> I think Stefan's problem is probably distinct from Mike's.
>>
>> Stefan: Can you reproduce the problem with
>>
>> debug osd = 20
>> debug filestore = 20
>> debug ms = 1
>> debug optracker = 20
>>
>> on a few osds (including the restarted osd), and upload those osd
>> logs
>> along with the ceph.log from before killing the osd until after
>> the
>> cluster becomes clean again?
>
>
>
> done - you'll find the logs at cephdrop folder:
> slow_requests_recovering_cuttlefish
>
> osd.52 was the one recovering
>
> Thanks!
>
> Greets,
> Stefan



Re: still recovery issues with cuttlefish

2013-08-21 Thread Mike Dawson

Sam/Josh,

We upgraded from 0.61.7 to 0.67.1 during a maintenance window this 
morning, hoping it would improve this situation, but there was no 
appreciable change.


One node in our cluster fsck'ed after a reboot and got a bit behind. Our 
instances backed by RBD volumes were OK at that point, but once the node 
booted fully and the OSDs started, all Windows instances with rbd 
volumes experienced very choppy performance and were unable to ingest 
video surveillance traffic and commit it to disk. Once the cluster got 
back to HEALTH_OK, they resumed normal operation.


I tried for a time with conservative recovery settings (osd max 
backfills = 1, osd recovery op priority = 1, and osd recovery max active 
= 1). No improvement for the guests. So I went to more aggressive 
settings to get things moving faster. That decreased the duration of the 
outage.
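
For reference, those conservative throttles as a ceph.conf sketch (same
values as described above; placing them under [osd] is an assumption):

  [osd]
      osd max backfills = 1
      osd recovery op priority = 1
      osd recovery max active = 1

They can also be injected at runtime, e.g.:

  ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-op-priority 1 --osd-recovery-max-active 1'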


During the entire period of recovery/backfill, the network looked
fine... nowhere close to saturation. iowait on all drives looked fine as well.


Any ideas?

Thanks,
Mike Dawson


On 8/14/2013 3:04 AM, Stefan Priebe - Profihost AG wrote:

the same problem still occours. Will need to check when i've time to
gather logs again.

Am 14.08.2013 01:11, schrieb Samuel Just:

I'm not sure, but your logs did show that you had >16 recovery ops in
flight, so it's worth a try.  If it doesn't help, you should collect
the same set of logs I'll look again.  Also, there are a few other
patches between 61.7 and current cuttlefish which may help.
-Sam

On Tue, Aug 13, 2013 at 2:03 PM, Stefan Priebe - Profihost AG
 wrote:


Am 13.08.2013 um 22:43 schrieb Samuel Just :


I just backported a couple of patches from next to fix a bug where we
weren't respecting the osd_recovery_max_active config in some cases
(1ea6b56170fc9e223e7c30635db02fa2ad8f4b4e).  You can either try the
current cuttlefish branch or wait for a 61.8 release.


Thanks! Are you sure that this is the issue? I don't believe that but i'll give 
it a try. I already tested a branch from sage where he fixed a race regarding 
max active some weeks ago. So active recovering was max 1 but the issue didn't 
went away.

Stefan


-Sam

On Mon, Aug 12, 2013 at 10:34 PM, Samuel Just  wrote:

I got swamped today.  I should be able to look tomorrow.  Sorry!
-Sam

On Mon, Aug 12, 2013 at 9:39 PM, Stefan Priebe - Profihost AG
 wrote:

Did you take a look?

Stefan

Am 11.08.2013 um 05:50 schrieb Samuel Just :


Great!  I'll take a look on Monday.
-Sam

On Sat, Aug 10, 2013 at 12:08 PM, Stefan Priebe  wrote:

Hi Samual,

Am 09.08.2013 23:44, schrieb Samuel Just:


I think Stefan's problem is probably distinct from Mike's.

Stefan: Can you reproduce the problem with

debug osd = 20
debug filestore = 20
debug ms = 1
debug optracker = 20

on a few osds (including the restarted osd), and upload those osd logs
along with the ceph.log from before killing the osd until after the
cluster becomes clean again?



done - you'll find the logs at cephdrop folder:
slow_requests_recovering_cuttlefish

osd.52 was the one recovering

Thanks!

Greets,
Stefan
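
As a side note, the debug levels Sam asked for in the quoted exchange
above could presumably be raised at runtime without a restart; a sketch,
using osd.52 from the thread as the example OSD:

  ceph tell osd.52 injectargs '--debug-osd 20 --debug-filestore 20 --debug-ms 1 --debug-optracker 20'

The same keys can also go into the [osd] section of ceph.conf for a
persistent setting.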



Re: still recovery issues with cuttlefish

2013-08-14 Thread Stefan Priebe - Profihost AG
The same problem still occurs. I will need to check when I've time to
gather logs again.

On 14.08.2013 01:11, Samuel Just wrote:
> I'm not sure, but your logs did show that you had >16 recovery ops in
> flight, so it's worth a try.  If it doesn't help, you should collect
> the same set of logs I'll look again.  Also, there are a few other
> patches between 61.7 and current cuttlefish which may help.
> -Sam
> 
> On Tue, Aug 13, 2013 at 2:03 PM, Stefan Priebe - Profihost AG
>  wrote:
>>
>> Am 13.08.2013 um 22:43 schrieb Samuel Just :
>>
>>> I just backported a couple of patches from next to fix a bug where we
>>> weren't respecting the osd_recovery_max_active config in some cases
>>> (1ea6b56170fc9e223e7c30635db02fa2ad8f4b4e).  You can either try the
>>> current cuttlefish branch or wait for a 61.8 release.
>>
>> Thanks! Are you sure that this is the issue? I don't believe that but i'll 
>> give it a try. I already tested a branch from sage where he fixed a race 
>> regarding max active some weeks ago. So active recovering was max 1 but the 
>> issue didn't went away.
>>
>> Stefan
>>
>>> -Sam
>>>
>>> On Mon, Aug 12, 2013 at 10:34 PM, Samuel Just  wrote:
 I got swamped today.  I should be able to look tomorrow.  Sorry!
 -Sam

 On Mon, Aug 12, 2013 at 9:39 PM, Stefan Priebe - Profihost AG
  wrote:
> Did you take a look?
>
> Stefan
>
> Am 11.08.2013 um 05:50 schrieb Samuel Just :
>
>> Great!  I'll take a look on Monday.
>> -Sam
>>
>> On Sat, Aug 10, 2013 at 12:08 PM, Stefan Priebe  
>> wrote:
>>> Hi Samual,
>>>
>>> Am 09.08.2013 23:44, schrieb Samuel Just:
>>>
 I think Stefan's problem is probably distinct from Mike's.

 Stefan: Can you reproduce the problem with

 debug osd = 20
 debug filestore = 20
 debug ms = 1
 debug optracker = 20

 on a few osds (including the restarted osd), and upload those osd logs
 along with the ceph.log from before killing the osd until after the
 cluster becomes clean again?
>>>
>>>
>>> done - you'll find the logs at cephdrop folder:
>>> slow_requests_recovering_cuttlefish
>>>
>>> osd.52 was the one recovering
>>>
>>> Thanks!
>>>
>>> Greets,
>>> Stefan


Re: still recovery issues with cuttlefish

2013-08-13 Thread Samuel Just
I'm not sure, but your logs did show that you had >16 recovery ops in
flight, so it's worth a try.  If it doesn't help, you should collect
the same set of logs and I'll look again.  Also, there are a few other
patches between 61.7 and current cuttlefish which may help.
-Sam

On Tue, Aug 13, 2013 at 2:03 PM, Stefan Priebe - Profihost AG
 wrote:
>
> Am 13.08.2013 um 22:43 schrieb Samuel Just :
>
>> I just backported a couple of patches from next to fix a bug where we
>> weren't respecting the osd_recovery_max_active config in some cases
>> (1ea6b56170fc9e223e7c30635db02fa2ad8f4b4e).  You can either try the
>> current cuttlefish branch or wait for a 61.8 release.
>
> Thanks! Are you sure that this is the issue? I don't believe that but i'll 
> give it a try. I already tested a branch from sage where he fixed a race 
> regarding max active some weeks ago. So active recovering was max 1 but the 
> issue didn't went away.
>
> Stefan
>
>> -Sam
>>
>> On Mon, Aug 12, 2013 at 10:34 PM, Samuel Just  wrote:
>>> I got swamped today.  I should be able to look tomorrow.  Sorry!
>>> -Sam
>>>
>>> On Mon, Aug 12, 2013 at 9:39 PM, Stefan Priebe - Profihost AG
>>>  wrote:
 Did you take a look?

 Stefan

 Am 11.08.2013 um 05:50 schrieb Samuel Just :

> Great!  I'll take a look on Monday.
> -Sam
>
> On Sat, Aug 10, 2013 at 12:08 PM, Stefan Priebe  
> wrote:
>> Hi Samual,
>>
>> Am 09.08.2013 23:44, schrieb Samuel Just:
>>
>>> I think Stefan's problem is probably distinct from Mike's.
>>>
>>> Stefan: Can you reproduce the problem with
>>>
>>> debug osd = 20
>>> debug filestore = 20
>>> debug ms = 1
>>> debug optracker = 20
>>>
>>> on a few osds (including the restarted osd), and upload those osd logs
>>> along with the ceph.log from before killing the osd until after the
>>> cluster becomes clean again?
>>
>>
>> done - you'll find the logs at cephdrop folder:
>> slow_requests_recovering_cuttlefish
>>
>> osd.52 was the one recovering
>>
>> Thanks!
>>
>> Greets,
>> Stefan


Re: still recovery issues with cuttlefish

2013-08-13 Thread Stefan Priebe - Profihost AG

On 13.08.2013 at 22:43, Samuel Just wrote:

> I just backported a couple of patches from next to fix a bug where we
> weren't respecting the osd_recovery_max_active config in some cases
> (1ea6b56170fc9e223e7c30635db02fa2ad8f4b4e).  You can either try the
> current cuttlefish branch or wait for a 61.8 release.

Thanks! Are you sure that this is the issue? I don't believe it, but I'll
give it a try. I already tested a branch from Sage where he fixed a race
regarding max active some weeks ago. So active recovery was capped at 1,
but the issue didn't go away.

Stefan

> -Sam
> 
> On Mon, Aug 12, 2013 at 10:34 PM, Samuel Just  wrote:
>> I got swamped today.  I should be able to look tomorrow.  Sorry!
>> -Sam
>> 
>> On Mon, Aug 12, 2013 at 9:39 PM, Stefan Priebe - Profihost AG
>>  wrote:
>>> Did you take a look?
>>> 
>>> Stefan
>>> 
>>> Am 11.08.2013 um 05:50 schrieb Samuel Just :
>>> 
 Great!  I'll take a look on Monday.
 -Sam
 
 On Sat, Aug 10, 2013 at 12:08 PM, Stefan Priebe  
 wrote:
> Hi Samual,
> 
> Am 09.08.2013 23:44, schrieb Samuel Just:
> 
>> I think Stefan's problem is probably distinct from Mike's.
>> 
>> Stefan: Can you reproduce the problem with
>> 
>> debug osd = 20
>> debug filestore = 20
>> debug ms = 1
>> debug optracker = 20
>> 
>> on a few osds (including the restarted osd), and upload those osd logs
>> along with the ceph.log from before killing the osd until after the
>> cluster becomes clean again?
> 
> 
> done - you'll find the logs at cephdrop folder:
> slow_requests_recovering_cuttlefish
> 
> osd.52 was the one recovering
> 
> Thanks!
> 
> Greets,
> Stefan


Re: still recovery issues with cuttlefish

2013-08-13 Thread Samuel Just
I just backported a couple of patches from next to fix a bug where we
weren't respecting the osd_recovery_max_active config in some cases
(1ea6b56170fc9e223e7c30635db02fa2ad8f4b4e).  You can either try the
current cuttlefish branch or wait for a 0.61.8 release.
-Sam

On Mon, Aug 12, 2013 at 10:34 PM, Samuel Just  wrote:
> I got swamped today.  I should be able to look tomorrow.  Sorry!
> -Sam
>
> On Mon, Aug 12, 2013 at 9:39 PM, Stefan Priebe - Profihost AG
>  wrote:
>> Did you take a look?
>>
>> Stefan
>>
>> Am 11.08.2013 um 05:50 schrieb Samuel Just :
>>
>>> Great!  I'll take a look on Monday.
>>> -Sam
>>>
>>> On Sat, Aug 10, 2013 at 12:08 PM, Stefan Priebe  
>>> wrote:
 Hi Samual,

 Am 09.08.2013 23:44, schrieb Samuel Just:

> I think Stefan's problem is probably distinct from Mike's.
>
> Stefan: Can you reproduce the problem with
>
> debug osd = 20
> debug filestore = 20
> debug ms = 1
> debug optracker = 20
>
> on a few osds (including the restarted osd), and upload those osd logs
> along with the ceph.log from before killing the osd until after the
> cluster becomes clean again?


 done - you'll find the logs at cephdrop folder:
 slow_requests_recovering_cuttlefish

 osd.52 was the one recovering

 Thanks!

 Greets,
 Stefan


Re: still recovery issues with cuttlefish

2013-08-12 Thread Samuel Just
I got swamped today.  I should be able to look tomorrow.  Sorry!
-Sam

On Mon, Aug 12, 2013 at 9:39 PM, Stefan Priebe - Profihost AG
 wrote:
> Did you take a look?
>
> Stefan
>
> Am 11.08.2013 um 05:50 schrieb Samuel Just :
>
>> Great!  I'll take a look on Monday.
>> -Sam
>>
>> On Sat, Aug 10, 2013 at 12:08 PM, Stefan Priebe  
>> wrote:
>>> Hi Samual,
>>>
>>> Am 09.08.2013 23:44, schrieb Samuel Just:
>>>
 I think Stefan's problem is probably distinct from Mike's.

 Stefan: Can you reproduce the problem with

 debug osd = 20
 debug filestore = 20
 debug ms = 1
 debug optracker = 20

 on a few osds (including the restarted osd), and upload those osd logs
 along with the ceph.log from before killing the osd until after the
 cluster becomes clean again?
>>>
>>>
>>> done - you'll find the logs at cephdrop folder:
>>> slow_requests_recovering_cuttlefish
>>>
>>> osd.52 was the one recovering
>>>
>>> Thanks!
>>>
>>> Greets,
>>> Stefan


Re: still recovery issues with cuttlefish

2013-08-12 Thread Stefan Priebe - Profihost AG
Did you take a look?

Stefan

Am 11.08.2013 um 05:50 schrieb Samuel Just :

> Great!  I'll take a look on Monday.
> -Sam
> 
> On Sat, Aug 10, 2013 at 12:08 PM, Stefan Priebe  wrote:
>> Hi Samual,
>> 
>> Am 09.08.2013 23:44, schrieb Samuel Just:
>> 
>>> I think Stefan's problem is probably distinct from Mike's.
>>> 
>>> Stefan: Can you reproduce the problem with
>>> 
>>> debug osd = 20
>>> debug filestore = 20
>>> debug ms = 1
>>> debug optracker = 20
>>> 
>>> on a few osds (including the restarted osd), and upload those osd logs
>>> along with the ceph.log from before killing the osd until after the
>>> cluster becomes clean again?
>> 
>> 
>> done - you'll find the logs at cephdrop folder:
>> slow_requests_recovering_cuttlefish
>> 
>> osd.52 was the one recovering
>> 
>> Thanks!
>> 
>> Greets,
>> Stefan


Re: still recovery issues with cuttlefish

2013-08-10 Thread Samuel Just
Great!  I'll take a look on Monday.
-Sam

On Sat, Aug 10, 2013 at 12:08 PM, Stefan Priebe  wrote:
> Hi Samual,
>
> Am 09.08.2013 23:44, schrieb Samuel Just:
>
>> I think Stefan's problem is probably distinct from Mike's.
>>
>> Stefan: Can you reproduce the problem with
>>
>> debug osd = 20
>> debug filestore = 20
>> debug ms = 1
>> debug optracker = 20
>>
>> on a few osds (including the restarted osd), and upload those osd logs
>> along with the ceph.log from before killing the osd until after the
>> cluster becomes clean again?
>
>
> done - you'll find the logs at cephdrop folder:
> slow_requests_recovering_cuttlefish
>
> osd.52 was the one recovering
>
> Thanks!
>
> Greets,
> Stefan


Re: still recovery issues with cuttlefish

2013-08-10 Thread Stefan Priebe

Hi Samuel,

Am 09.08.2013 23:44, schrieb Samuel Just:

I think Stefan's problem is probably distinct from Mike's.

Stefan: Can you reproduce the problem with

debug osd = 20
debug filestore = 20
debug ms = 1
debug optracker = 20

on a few osds (including the restarted osd), and upload those osd logs
along with the ceph.log from before killing the osd until after the
cluster becomes clean again?


Done - you'll find the logs in the cephdrop folder: 
slow_requests_recovering_cuttlefish


osd.52 was the one recovering

Thanks!

Greets,
Stefan


Re: still recovery issues with cuttlefish

2013-08-09 Thread Samuel Just
I think Stefan's problem is probably distinct from Mike's.

Stefan: Can you reproduce the problem with

debug osd = 20
debug filestore = 20
debug ms = 1
debug optracker = 20

on a few osds (including the restarted osd), and upload those osd logs
along with the ceph.log from before killing the osd until after the
cluster becomes clean again?
-Sam
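
If it helps, one way to apply these on the affected OSDs (a sketch; osd.52 is only
an example id from earlier in the thread, and the injectargs form assumes the
daemons are reachable from the admin host):

  # persistent, in ceph.conf, followed by an OSD restart
  [osd]
    debug osd = 20
    debug filestore = 20
    debug ms = 1
    debug optracker = 20

  # or at runtime, without a restart
  ceph tell osd.52 injectargs '--debug-osd 20 --debug-filestore 20 --debug-ms 1 --debug-optracker 20'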

On Thu, Aug 8, 2013 at 11:13 AM, Stefan Priebe  wrote:
> Hi Mike,
>
> Am 08.08.2013 16:05, schrieb Mike Dawson:
>
>> Stefan,
>>
>> I see the same behavior and I theorize it is linked to an issue detailed
>> in another thread [0]. Do your VM guests ever hang while your cluster is
>> HEALTH_OK like described in that other thread?
>>
>> [0] http://comments.gmane.org/gmane.comp.file-systems.ceph.user/2982
>
>
> mhm no can't see that. All our VMs are working fine even under high load
> while ceph is OK.
>
>
>> A few observations:
>>
>> - The VMs that hang do lots of writes (video surveillance).
>> - I use rbd and qemu. The problem exists in both qemu 1.4.x and 1.5.2.
>> - The problem exists with or without joshd's qemu async flush patch.
>> - Windows VMs seem to be more vulnerable than Linux VMs.
>> - If I restart the qemu-system-x86_64 process, the guest will come back
>> to life.
>> - A partial workaround seems to be console input (NoVNC or 'virsh
>> screenshot'), but restarting qemu-system-x86_64 works better.
>> - The issue of VMs hanging seems worse with RBD writeback cache enabled
>> - I typically run virtio, but I believe I've seen it with e1000, too.
>> - VM guests hang at different times, not all at once on a host (or
>> across all hosts).
>> - I co-mingle VM guests on servers that host ceph OSDs.
>>
>>
>>
>> Oliver,
>>
>> If your cluster has to recover/backfill, do your guest VMs hang with
>> more frequency than under normal HEALTH_OK conditions, even if you
>> prioritize client IO as Sam wrote below?
>>
>>
>> Sam,
>>
>> Turning down all the settings you mentioned certainly does slow the
>> recover/backfill process, but it doesn't prevent the VM guests backed by
>> RBD volumes from hanging. In fact, I often try to prioritize
>> recovery/backfill because my guests tend to hang until I get back to
>> HEALTH_OK. Given this apparent bug, completing recovery/backfill quicker
>> leads to less total outage, it seems.
>>
>>
>> Josh,
>>
>> How can I help you investigate if RBD is the common source of both of
>> these issues?
>>
>>
>> Thanks,
>> Mike Dawson
>>
>>
>> On 8/2/2013 2:46 PM, Stefan Priebe wrote:
>>>
>>> Hi,
>>>
>>>  osd recovery max active = 1
>>>  osd max backfills = 1
>>>  osd recovery op priority = 5
>>>
>>> still no difference...
>>>
>>> Stefan
>>>
>>> Am 02.08.2013 20:21, schrieb Samuel Just:

 Also, you have osd_recovery_op_priority at 50.  That is close to the
 priority of client IO.  You want it below 10 (defaults to 10), perhaps
 at 1.  You can also adjust down osd_recovery_max_active.
 -Sam

 On Fri, Aug 2, 2013 at 11:16 AM, Stefan Priebe 
 wrote:
>
> I already tried both values this makes no difference. The drives are
> not the
> bottleneck.
>
> Am 02.08.2013 19:35, schrieb Samuel Just:
>
>> You might try turning osd_max_backfills to 2 or 1.
>> -Sam
>>
>> On Fri, Aug 2, 2013 at 12:44 AM, Stefan Priebe 
>> wrote:
>>>
>>>
>>> Am 01.08.2013 23:23, schrieb Samuel Just:> Can you dump your osd
>>> settings?
>>>
 sudo ceph --admin-daemon ceph-osd..asok config show
>>>
>>>
>>>
>>> Sure.
>>>
>>>
>>>
>>> { "name": "osd.0",
>>> "cluster": "ceph",
>>> "none": "0\/5",
>>> "lockdep": "0\/0",
>>> "context": "0\/0",
>>> "crush": "0\/0",
>>> "mds": "0\/0",
>>> "mds_balancer": "0\/0",
>>> "mds_locker": "0\/0",
>>> "mds_log": "0\/0",
>>> "mds_log_expire": "0\/0",
>>> "mds_migrator": "0\/0",
>>> "buffer": "0\/0",
>>> "timer": "0\/0",
>>> "filer": "0\/0",
>>> "striper": "0\/1",
>>> "objecter": "0\/0",
>>> "rados": "0\/0",
>>> "rbd": "0\/0",
>>> "journaler": "0\/0",
>>> "objectcacher": "0\/0",
>>> "client": "0\/0",
>>> "osd": "0\/0",
>>> "optracker": "0\/0",
>>> "objclass": "0\/0",
>>> "filestore": "0\/0",
>>> "journal": "0\/0",
>>> "ms": "0\/0",
>>> "mon": "0\/0",
>>> "monc": "0\/0",
>>> "paxos": "0\/0",
>>> "tp": "0\/0",
>>> "auth": "0\/0",
>>> "crypto": "1\/5",
>>> "finisher": "0\/0",
>>> "heartbeatmap": "0\/0",
>>> "perfcounter": "0\/0",
>>> "rgw": "0\/0",
>>> "hadoop": "0\/0",
>>> "javaclient": "1\/5",
>>> "asok": "0\/0",
>>> "throttle": "0\/0",
>>> "host": "cloud1-1268",
>>> "fsid": "----",
>>> "public_addr": "10.255.0.90:0\

Re: still recovery issues with cuttlefish

2013-08-08 Thread Stefan Priebe

Hi Mike,

Am 08.08.2013 16:05, schrieb Mike Dawson:

Stefan,

I see the same behavior and I theorize it is linked to an issue detailed
in another thread [0]. Do your VM guests ever hang while your cluster is
HEALTH_OK like described in that other thread?

[0] http://comments.gmane.org/gmane.comp.file-systems.ceph.user/2982


Mhm, no, I can't see that. All our VMs are working fine even under high load 
while ceph is OK.



A few observations:

- The VMs that hang do lots of writes (video surveillance).
- I use rbd and qemu. The problem exists in both qemu 1.4.x and 1.5.2.
- The problem exists with or without joshd's qemu async flush patch.
- Windows VMs seem to be more vulnerable than Linux VMs.
- If I restart the qemu-system-x86_64 process, the guest will come back
to life.
- A partial workaround seems to be console input (NoVNC or 'virsh
screenshot'), but restarting qemu-system-x86_64 works better.
- The issue of VMs hanging seems worse with RBD writeback cache enabled
- I typically run virtio, but I believe I've seen it with e1000, too.
- VM guests hang at different times, not all at once on a host (or
across all hosts).
- I co-mingle VM guests on servers that host ceph OSDs.



Oliver,

If your cluster has to recover/backfill, do your guest VMs hang with
more frequency than under normal HEALTH_OK conditions, even if you
prioritize client IO as Sam wrote below?


Sam,

Turning down all the settings you mentioned certainly does slow the
recover/backfill process, but it doesn't prevent the VM guests backed by
RBD volumes from hanging. In fact, I often try to prioritize
recovery/backfill because my guests tend to hang until I get back to
HEALTH_OK. Given this apparent bug, completing recovery/backfill quicker
leads to less total outage, it seems.


Josh,

How can I help you investigate if RBD is the common source of both of
these issues?


Thanks,
Mike Dawson


On 8/2/2013 2:46 PM, Stefan Priebe wrote:

Hi,

 osd recovery max active = 1
 osd max backfills = 1
 osd recovery op priority = 5

still no difference...

Stefan

Am 02.08.2013 20:21, schrieb Samuel Just:

Also, you have osd_recovery_op_priority at 50.  That is close to the
priority of client IO.  You want it below 10 (defaults to 10), perhaps
at 1.  You can also adjust down osd_recovery_max_active.
-Sam

On Fri, Aug 2, 2013 at 11:16 AM, Stefan Priebe 
wrote:

I already tried both values this makes no difference. The drives are
not the
bottleneck.

Am 02.08.2013 19:35, schrieb Samuel Just:


You might try turning osd_max_backfills to 2 or 1.
-Sam

On Fri, Aug 2, 2013 at 12:44 AM, Stefan Priebe 
wrote:


Am 01.08.2013 23:23, schrieb Samuel Just:> Can you dump your osd
settings?


sudo ceph --admin-daemon ceph-osd..asok config show



Sure.



{ "name": "osd.0",
"cluster": "ceph",
"none": "0\/5",
"lockdep": "0\/0",
"context": "0\/0",
"crush": "0\/0",
"mds": "0\/0",
"mds_balancer": "0\/0",
"mds_locker": "0\/0",
"mds_log": "0\/0",
"mds_log_expire": "0\/0",
"mds_migrator": "0\/0",
"buffer": "0\/0",
"timer": "0\/0",
"filer": "0\/0",
"striper": "0\/1",
"objecter": "0\/0",
"rados": "0\/0",
"rbd": "0\/0",
"journaler": "0\/0",
"objectcacher": "0\/0",
"client": "0\/0",
"osd": "0\/0",
"optracker": "0\/0",
"objclass": "0\/0",
"filestore": "0\/0",
"journal": "0\/0",
"ms": "0\/0",
"mon": "0\/0",
"monc": "0\/0",
"paxos": "0\/0",
"tp": "0\/0",
"auth": "0\/0",
"crypto": "1\/5",
"finisher": "0\/0",
"heartbeatmap": "0\/0",
"perfcounter": "0\/0",
"rgw": "0\/0",
"hadoop": "0\/0",
"javaclient": "1\/5",
"asok": "0\/0",
"throttle": "0\/0",
"host": "cloud1-1268",
"fsid": "----",
"public_addr": "10.255.0.90:0\/0",
"cluster_addr": "10.255.0.90:0\/0",
"public_network": "10.255.0.1\/24",
"cluster_network": "10.255.0.1\/24",
"num_client": "1",
"monmap": "",
"mon_host": "",
"lockdep": "false",
"run_dir": "\/var\/run\/ceph",
"admin_socket": "\/var\/run\/ceph\/ceph-osd.0.asok",
"daemonize": "true",
"pid_file": "\/var\/run\/ceph\/osd.0.pid",
"chdir": "\/",
"max_open_files": "0",
"fatal_signal_handlers": "true",
"log_file": "\/var\/log\/ceph\/ceph-osd.0.log",
"log_max_new": "1000",
"log_max_recent": "1",
"log_to_stderr": "false",
"err_to_stderr": "true",
"log_to_syslog": "false",
"err_to_syslog": "false",
"log_flush_on_exit": "true",
"log_stop_at_utilization": "0.97",
"clog_to_monitors": "true",
"clog_to_syslog": "false",
"clog_to_syslog_level": "info",
"clog_to_syslog_facility": "daemon",
"mon_cluster_log_to_syslog": "false",
"mon_cluster_log_to_syslog_level": "info",
"mon_cluster_log_to_syslog_facility": "daemon",
"mon_cluster_log_file": "\/var\/log\/ceph\/ceph.log",
"key": "",
"keyfile": "",
"keyring": "\/etc\/ceph\/osd.

Re: still recovery issues with cuttlefish

2013-08-08 Thread Oliver Francke

Hi Mike,

On 08/08/2013 04:05 PM, Mike Dawson wrote:

Stefan,

I see the same behavior and I theorize it is linked to an issue 
detailed in another thread [0]. Do your VM guests ever hang while your 
cluster is HEALTH_OK like described in that other thread?


[0] http://comments.gmane.org/gmane.comp.file-systems.ceph.user/2982

A few observations:

- The VMs that hang do lots of writes (video surveillance).
- I use rbd and qemu. The problem exists in both qemu 1.4.x and 1.5.2.
- The problem exists with or without joshd's qemu async flush patch.
- Windows VMs seem to be more vulnerable than Linux VMs.
- If I restart the qemu-system-x86_64 process, the guest will come 
back to life.
- A partial workaround seems to be console input (NoVNC or 'virsh 
screenshot'), but restarting qemu-system-x86_64 works better.

- The issue of VMs hanging seems worse with RBD writeback cache enabled
- I typically run virtio, but I believe I've seen it with e1000, too.
- VM guests hang at different times, not all at once on a host (or 
across all hosts).

- I co-mingle VM guests on servers that host ceph OSDs.



Oliver,

If your cluster has to recover/backfill, do your guest VMs hang with 
more frequency than under normal HEALTH_OK conditions, even if you 
prioritize client IO as Sam wrote below?


Well, at least I can confirm that with the stupid "while install/remove-loop" 
alone, an "rbd cp some/stuff some/other", and ongoing remapping/backfilling, the 
120-secs problem occurs, too. With a spew-test inside the VM the occurrence just 
happens sooner.

And that is in a LAB environment, let alone production ;)

Oliver.
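
Roughly the kind of load meant above, as a sketch (the package and image names
here are placeholders, not the ones actually used):

  # inside the guest: a simple loop that generates lots of small writes
  while true; do
      apt-get install -y joe && apt-get remove -y joe
  done

  # on a client, in parallel: extra RBD work while the cluster is remapping/backfilling
  rbd cp rbd/test-image rbd/test-image-copy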




Sam,

Turning down all the settings you mentioned certainly does slow the 
recover/backfill process, but it doesn't prevent the VM guests backed 
by RBD volumes from hanging. In fact, I often try to prioritize 
recovery/backfill because my guests tend to hang until I get back to 
HEALTH_OK. Given this apparent bug, completing recovery/backfill 
quicker leads to less total outage, it seems.



Josh,

How can I help you investigate if RBD is the common source of both of 
these issues?



Thanks,
Mike Dawson


On 8/2/2013 2:46 PM, Stefan Priebe wrote:

Hi,

 osd recovery max active = 1
 osd max backfills = 1
 osd recovery op priority = 5

still no difference...

Stefan

Am 02.08.2013 20:21, schrieb Samuel Just:

Also, you have osd_recovery_op_priority at 50.  That is close to the
priority of client IO.  You want it below 10 (defaults to 10), perhaps
at 1.  You can also adjust down osd_recovery_max_active.
-Sam

On Fri, Aug 2, 2013 at 11:16 AM, Stefan Priebe 
wrote:

I already tried both values this makes no difference. The drives are
not the
bottleneck.

Am 02.08.2013 19:35, schrieb Samuel Just:


You might try turning osd_max_backfills to 2 or 1.
-Sam

On Fri, Aug 2, 2013 at 12:44 AM, Stefan Priebe 


wrote:


Am 01.08.2013 23:23, schrieb Samuel Just:> Can you dump your osd
settings?


sudo ceph --admin-daemon ceph-osd..asok config show



Sure.



{ "name": "osd.0",
"cluster": "ceph",
"none": "0\/5",
"lockdep": "0\/0",
"context": "0\/0",
"crush": "0\/0",
"mds": "0\/0",
"mds_balancer": "0\/0",
"mds_locker": "0\/0",
"mds_log": "0\/0",
"mds_log_expire": "0\/0",
"mds_migrator": "0\/0",
"buffer": "0\/0",
"timer": "0\/0",
"filer": "0\/0",
"striper": "0\/1",
"objecter": "0\/0",
"rados": "0\/0",
"rbd": "0\/0",
"journaler": "0\/0",
"objectcacher": "0\/0",
"client": "0\/0",
"osd": "0\/0",
"optracker": "0\/0",
"objclass": "0\/0",
"filestore": "0\/0",
"journal": "0\/0",
"ms": "0\/0",
"mon": "0\/0",
"monc": "0\/0",
"paxos": "0\/0",
"tp": "0\/0",
"auth": "0\/0",
"crypto": "1\/5",
"finisher": "0\/0",
"heartbeatmap": "0\/0",
"perfcounter": "0\/0",
"rgw": "0\/0",
"hadoop": "0\/0",
"javaclient": "1\/5",
"asok": "0\/0",
"throttle": "0\/0",
"host": "cloud1-1268",
"fsid": "----",
"public_addr": "10.255.0.90:0\/0",
"cluster_addr": "10.255.0.90:0\/0",
"public_network": "10.255.0.1\/24",
"cluster_network": "10.255.0.1\/24",
"num_client": "1",
"monmap": "",
"mon_host": "",
"lockdep": "false",
"run_dir": "\/var\/run\/ceph",
"admin_socket": "\/var\/run\/ceph\/ceph-osd.0.asok",
"daemonize": "true",
"pid_file": "\/var\/run\/ceph\/osd.0.pid",
"chdir": "\/",
"max_open_files": "0",
"fatal_signal_handlers": "true",
"log_file": "\/var\/log\/ceph\/ceph-osd.0.log",
"log_max_new": "1000",
"log_max_recent": "1",
"log_to_stderr": "false",
"err_to_stderr": "true",
"log_to_syslog": "false",
"err_to_syslog": "false",
"log_flush_on_exit": "true",
"log_stop_at_utilization": "0.97",
"clog_to_monitors": "true",
"clog_to_syslog": "false",
"clog_to_syslog_level": "info",
"clog_to_syslog_facility": "da

Re: still recovery issues with cuttlefish

2013-08-08 Thread Mike Dawson

Stefan,

I see the same behavior and I theorize it is linked to an issue detailed 
in another thread [0]. Do your VM guests ever hang while your cluster is 
HEALTH_OK like described in that other thread?


[0] http://comments.gmane.org/gmane.comp.file-systems.ceph.user/2982

A few observations:

- The VMs that hang do lots of writes (video surveillance).
- I use rbd and qemu. The problem exists in both qemu 1.4.x and 1.5.2.
- The problem exists with or without joshd's qemu async flush patch.
- Windows VMs seem to be more vulnerable than Linux VMs.
- If I restart the qemu-system-x86_64 process, the guest will come back 
to life.
- A partial workaround seems to be console input (NoVNC or 'virsh 
screenshot'), but restarting qemu-system-x86_64 works better.

- The issue of VMs hanging seems worse with RBD writeback cache enabled
- I typically run virtio, but I believe I've seen it with e1000, too.
- VM guests hang at different times, not all at once on a host (or 
across all hosts).

- I co-mingle VM guests on servers that host ceph OSDs.



Oliver,

If your cluster has to recover/backfill, do your guest VMs hang with 
more frequency than under normal HEALTH_OK conditions, even if you 
prioritize client IO as Sam wrote below?



Sam,

Turning down all the settings you mentioned certainly does slow the 
recover/backfill process, but it doesn't prevent the VM guests backed by 
RBD volumes from hanging. In fact, I often try to prioritize 
recovery/backfill because my guests tend to hang until I get back to 
HEALTH_OK. Given this apparent bug, completing recovery/backfill quicker 
leads to less total outage, it seems.



Josh,

How can I help you investigate if RBD is the common source of both of 
these issues?



Thanks,
Mike Dawson


On 8/2/2013 2:46 PM, Stefan Priebe wrote:

Hi,

 osd recovery max active = 1
 osd max backfills = 1
 osd recovery op priority = 5

still no difference...

Stefan

Am 02.08.2013 20:21, schrieb Samuel Just:

Also, you have osd_recovery_op_priority at 50.  That is close to the
priority of client IO.  You want it below 10 (defaults to 10), perhaps
at 1.  You can also adjust down osd_recovery_max_active.
-Sam

On Fri, Aug 2, 2013 at 11:16 AM, Stefan Priebe 
wrote:

I already tried both values this makes no difference. The drives are
not the
bottleneck.

Am 02.08.2013 19:35, schrieb Samuel Just:


You might try turning osd_max_backfills to 2 or 1.
-Sam

On Fri, Aug 2, 2013 at 12:44 AM, Stefan Priebe 
wrote:


Am 01.08.2013 23:23, schrieb Samuel Just:> Can you dump your osd
settings?


sudo ceph --admin-daemon ceph-osd..asok config show



Sure.



{ "name": "osd.0",
"cluster": "ceph",
"none": "0\/5",
"lockdep": "0\/0",
"context": "0\/0",
"crush": "0\/0",
"mds": "0\/0",
"mds_balancer": "0\/0",
"mds_locker": "0\/0",
"mds_log": "0\/0",
"mds_log_expire": "0\/0",
"mds_migrator": "0\/0",
"buffer": "0\/0",
"timer": "0\/0",
"filer": "0\/0",
"striper": "0\/1",
"objecter": "0\/0",
"rados": "0\/0",
"rbd": "0\/0",
"journaler": "0\/0",
"objectcacher": "0\/0",
"client": "0\/0",
"osd": "0\/0",
"optracker": "0\/0",
"objclass": "0\/0",
"filestore": "0\/0",
"journal": "0\/0",
"ms": "0\/0",
"mon": "0\/0",
"monc": "0\/0",
"paxos": "0\/0",
"tp": "0\/0",
"auth": "0\/0",
"crypto": "1\/5",
"finisher": "0\/0",
"heartbeatmap": "0\/0",
"perfcounter": "0\/0",
"rgw": "0\/0",
"hadoop": "0\/0",
"javaclient": "1\/5",
"asok": "0\/0",
"throttle": "0\/0",
"host": "cloud1-1268",
"fsid": "----",
"public_addr": "10.255.0.90:0\/0",
"cluster_addr": "10.255.0.90:0\/0",
"public_network": "10.255.0.1\/24",
"cluster_network": "10.255.0.1\/24",
"num_client": "1",
"monmap": "",
"mon_host": "",
"lockdep": "false",
"run_dir": "\/var\/run\/ceph",
"admin_socket": "\/var\/run\/ceph\/ceph-osd.0.asok",
"daemonize": "true",
"pid_file": "\/var\/run\/ceph\/osd.0.pid",
"chdir": "\/",
"max_open_files": "0",
"fatal_signal_handlers": "true",
"log_file": "\/var\/log\/ceph\/ceph-osd.0.log",
"log_max_new": "1000",
"log_max_recent": "1",
"log_to_stderr": "false",
"err_to_stderr": "true",
"log_to_syslog": "false",
"err_to_syslog": "false",
"log_flush_on_exit": "true",
"log_stop_at_utilization": "0.97",
"clog_to_monitors": "true",
"clog_to_syslog": "false",
"clog_to_syslog_level": "info",
"clog_to_syslog_facility": "daemon",
"mon_cluster_log_to_syslog": "false",
"mon_cluster_log_to_syslog_level": "info",
"mon_cluster_log_to_syslog_facility": "daemon",
"mon_cluster_log_file": "\/var\/log\/ceph\/ceph.log",
"key": "",
"keyfile": "",
"keyring": "\/etc\/ceph\/osd.0.keyring",
"heartbeat_interval": "5",
"heartbeat_file": "",
"heartbeat_inject_failure": "0",
"perf": "true",

Re: still recovery issues with cuttlefish

2013-08-02 Thread Stefan Priebe

Hi,

osd recovery max active = 1
osd max backfills = 1
osd recovery op priority = 5

still no difference...

Stefan
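
A quick sanity check that those values are live on the running daemons and not
only in ceph.conf (a sketch, using the admin socket path from the config dumps;
osd.0 stands in for each OSD):

  sudo ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok config show \
      | egrep 'osd_recovery_max_active|osd_max_backfills|osd_recovery_op_priority'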

Am 02.08.2013 20:21, schrieb Samuel Just:

Also, you have osd_recovery_op_priority at 50.  That is close to the
priority of client IO.  You want it below 10 (defaults to 10), perhaps
at 1.  You can also adjust down osd_recovery_max_active.
-Sam

On Fri, Aug 2, 2013 at 11:16 AM, Stefan Priebe  wrote:

I already tried both values this makes no difference. The drives are not the
bottleneck.

Am 02.08.2013 19:35, schrieb Samuel Just:


You might try turning osd_max_backfills to 2 or 1.
-Sam

On Fri, Aug 2, 2013 at 12:44 AM, Stefan Priebe 
wrote:


Am 01.08.2013 23:23, schrieb Samuel Just:> Can you dump your osd
settings?


sudo ceph --admin-daemon ceph-osd..asok config show



Sure.



{ "name": "osd.0",
"cluster": "ceph",
"none": "0\/5",
"lockdep": "0\/0",
"context": "0\/0",
"crush": "0\/0",
"mds": "0\/0",
"mds_balancer": "0\/0",
"mds_locker": "0\/0",
"mds_log": "0\/0",
"mds_log_expire": "0\/0",
"mds_migrator": "0\/0",
"buffer": "0\/0",
"timer": "0\/0",
"filer": "0\/0",
"striper": "0\/1",
"objecter": "0\/0",
"rados": "0\/0",
"rbd": "0\/0",
"journaler": "0\/0",
"objectcacher": "0\/0",
"client": "0\/0",
"osd": "0\/0",
"optracker": "0\/0",
"objclass": "0\/0",
"filestore": "0\/0",
"journal": "0\/0",
"ms": "0\/0",
"mon": "0\/0",
"monc": "0\/0",
"paxos": "0\/0",
"tp": "0\/0",
"auth": "0\/0",
"crypto": "1\/5",
"finisher": "0\/0",
"heartbeatmap": "0\/0",
"perfcounter": "0\/0",
"rgw": "0\/0",
"hadoop": "0\/0",
"javaclient": "1\/5",
"asok": "0\/0",
"throttle": "0\/0",
"host": "cloud1-1268",
"fsid": "----",
"public_addr": "10.255.0.90:0\/0",
"cluster_addr": "10.255.0.90:0\/0",
"public_network": "10.255.0.1\/24",
"cluster_network": "10.255.0.1\/24",
"num_client": "1",
"monmap": "",
"mon_host": "",
"lockdep": "false",
"run_dir": "\/var\/run\/ceph",
"admin_socket": "\/var\/run\/ceph\/ceph-osd.0.asok",
"daemonize": "true",
"pid_file": "\/var\/run\/ceph\/osd.0.pid",
"chdir": "\/",
"max_open_files": "0",
"fatal_signal_handlers": "true",
"log_file": "\/var\/log\/ceph\/ceph-osd.0.log",
"log_max_new": "1000",
"log_max_recent": "1",
"log_to_stderr": "false",
"err_to_stderr": "true",
"log_to_syslog": "false",
"err_to_syslog": "false",
"log_flush_on_exit": "true",
"log_stop_at_utilization": "0.97",
"clog_to_monitors": "true",
"clog_to_syslog": "false",
"clog_to_syslog_level": "info",
"clog_to_syslog_facility": "daemon",
"mon_cluster_log_to_syslog": "false",
"mon_cluster_log_to_syslog_level": "info",
"mon_cluster_log_to_syslog_facility": "daemon",
"mon_cluster_log_file": "\/var\/log\/ceph\/ceph.log",
"key": "",
"keyfile": "",
"keyring": "\/etc\/ceph\/osd.0.keyring",
"heartbeat_interval": "5",
"heartbeat_file": "",
"heartbeat_inject_failure": "0",
"perf": "true",
"ms_tcp_nodelay": "true",
"ms_tcp_rcvbuf": "0",
"ms_initial_backoff": "0.2",
"ms_max_backoff": "15",
"ms_nocrc": "false",
"ms_die_on_bad_msg": "false",
"ms_die_on_unhandled_msg": "false",
"ms_dispatch_throttle_bytes": "104857600",
"ms_bind_ipv6": "false",
"ms_bind_port_min": "6800",
"ms_bind_port_max": "7100",
"ms_rwthread_stack_bytes": "1048576",
"ms_tcp_read_timeout": "900",
"ms_pq_max_tokens_per_priority": "4194304",
"ms_pq_min_cost": "65536",
"ms_inject_socket_failures": "0",
"ms_inject_delay_type": "",
"ms_inject_delay_max": "1",
"ms_inject_delay_probability": "0",
"ms_inject_internal_delays": "0",
"mon_data": "\/var\/lib\/ceph\/mon\/ceph-0",
"mon_initial_members": "",
"mon_sync_fs_threshold": "5",
"mon_compact_on_start": "false",
"mon_compact_on_bootstrap": "false",
"mon_compact_on_trim": "true",
"mon_tick_interval": "5",
"mon_subscribe_interval": "300",
"mon_osd_laggy_halflife": "3600",
"mon_osd_laggy_weight": "0.3",
"mon_osd_adjust_heartbeat_grace": "true",
"mon_osd_adjust_down_out_interval": "true",
"mon_osd_auto_mark_in": "false",
"mon_osd_auto_mark_auto_out_in": "true",
"mon_osd_auto_mark_new_in": "true",
"mon_osd_down_out_interval": "300",
"mon_osd_down_out_subtree_limit": "rack",
"mon_osd_min_up_ratio": "0.3",
"mon_osd_min_in_ratio": "0.3",
"mon_stat_smooth_intervals": "2",
"mon_lease": "5",
"mon_lease_renew_interval": "3",
"mon_lease_ack_timeout": "10",
"mon_clock_drift_allowed": "0.05",
"mon_clock_drift_warn_backoff": "5",
"mon_timecheck_interval": "300",
"mon_accept_timeout": "10",
"mon_pg_create_interval": "30",
"mon_pg_stuck_threshold": "300",
"mon_osd_full_ratio":

Re: still recovery issues with cuttlefish

2013-08-02 Thread Samuel Just
Also, you have osd_recovery_op_priority at 50.  That is close to the
priority of client IO.  You want it below 10 (defaults to 10), perhaps
at 1.  You can also adjust down osd_recovery_max_active.
-Sam
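
As a concrete sketch of the above (the values are only the suggested starting
points; the injectargs form changes the running daemons without a restart):

  # at runtime, across all OSDs
  ceph tell osd.\* injectargs '--osd-recovery-op-priority 1 --osd-recovery-max-active 1 --osd-max-backfills 1'

  # persistent, in ceph.conf
  [osd]
    osd recovery op priority = 1
    osd recovery max active = 1
    osd max backfills = 1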

On Fri, Aug 2, 2013 at 11:16 AM, Stefan Priebe  wrote:
> I already tried both values this makes no difference. The drives are not the
> bottleneck.
>
> Am 02.08.2013 19:35, schrieb Samuel Just:
>
>> You might try turning osd_max_backfills to 2 or 1.
>> -Sam
>>
>> On Fri, Aug 2, 2013 at 12:44 AM, Stefan Priebe 
>> wrote:
>>>
>>> Am 01.08.2013 23:23, schrieb Samuel Just:> Can you dump your osd
>>> settings?
>>>
 sudo ceph --admin-daemon ceph-osd..asok config show
>>>
>>>
>>> Sure.
>>>
>>>
>>>
>>> { "name": "osd.0",
>>>"cluster": "ceph",
>>>"none": "0\/5",
>>>"lockdep": "0\/0",
>>>"context": "0\/0",
>>>"crush": "0\/0",
>>>"mds": "0\/0",
>>>"mds_balancer": "0\/0",
>>>"mds_locker": "0\/0",
>>>"mds_log": "0\/0",
>>>"mds_log_expire": "0\/0",
>>>"mds_migrator": "0\/0",
>>>"buffer": "0\/0",
>>>"timer": "0\/0",
>>>"filer": "0\/0",
>>>"striper": "0\/1",
>>>"objecter": "0\/0",
>>>"rados": "0\/0",
>>>"rbd": "0\/0",
>>>"journaler": "0\/0",
>>>"objectcacher": "0\/0",
>>>"client": "0\/0",
>>>"osd": "0\/0",
>>>"optracker": "0\/0",
>>>"objclass": "0\/0",
>>>"filestore": "0\/0",
>>>"journal": "0\/0",
>>>"ms": "0\/0",
>>>"mon": "0\/0",
>>>"monc": "0\/0",
>>>"paxos": "0\/0",
>>>"tp": "0\/0",
>>>"auth": "0\/0",
>>>"crypto": "1\/5",
>>>"finisher": "0\/0",
>>>"heartbeatmap": "0\/0",
>>>"perfcounter": "0\/0",
>>>"rgw": "0\/0",
>>>"hadoop": "0\/0",
>>>"javaclient": "1\/5",
>>>"asok": "0\/0",
>>>"throttle": "0\/0",
>>>"host": "cloud1-1268",
>>>"fsid": "----",
>>>"public_addr": "10.255.0.90:0\/0",
>>>"cluster_addr": "10.255.0.90:0\/0",
>>>"public_network": "10.255.0.1\/24",
>>>"cluster_network": "10.255.0.1\/24",
>>>"num_client": "1",
>>>"monmap": "",
>>>"mon_host": "",
>>>"lockdep": "false",
>>>"run_dir": "\/var\/run\/ceph",
>>>"admin_socket": "\/var\/run\/ceph\/ceph-osd.0.asok",
>>>"daemonize": "true",
>>>"pid_file": "\/var\/run\/ceph\/osd.0.pid",
>>>"chdir": "\/",
>>>"max_open_files": "0",
>>>"fatal_signal_handlers": "true",
>>>"log_file": "\/var\/log\/ceph\/ceph-osd.0.log",
>>>"log_max_new": "1000",
>>>"log_max_recent": "1",
>>>"log_to_stderr": "false",
>>>"err_to_stderr": "true",
>>>"log_to_syslog": "false",
>>>"err_to_syslog": "false",
>>>"log_flush_on_exit": "true",
>>>"log_stop_at_utilization": "0.97",
>>>"clog_to_monitors": "true",
>>>"clog_to_syslog": "false",
>>>"clog_to_syslog_level": "info",
>>>"clog_to_syslog_facility": "daemon",
>>>"mon_cluster_log_to_syslog": "false",
>>>"mon_cluster_log_to_syslog_level": "info",
>>>"mon_cluster_log_to_syslog_facility": "daemon",
>>>"mon_cluster_log_file": "\/var\/log\/ceph\/ceph.log",
>>>"key": "",
>>>"keyfile": "",
>>>"keyring": "\/etc\/ceph\/osd.0.keyring",
>>>"heartbeat_interval": "5",
>>>"heartbeat_file": "",
>>>"heartbeat_inject_failure": "0",
>>>"perf": "true",
>>>"ms_tcp_nodelay": "true",
>>>"ms_tcp_rcvbuf": "0",
>>>"ms_initial_backoff": "0.2",
>>>"ms_max_backoff": "15",
>>>"ms_nocrc": "false",
>>>"ms_die_on_bad_msg": "false",
>>>"ms_die_on_unhandled_msg": "false",
>>>"ms_dispatch_throttle_bytes": "104857600",
>>>"ms_bind_ipv6": "false",
>>>"ms_bind_port_min": "6800",
>>>"ms_bind_port_max": "7100",
>>>"ms_rwthread_stack_bytes": "1048576",
>>>"ms_tcp_read_timeout": "900",
>>>"ms_pq_max_tokens_per_priority": "4194304",
>>>"ms_pq_min_cost": "65536",
>>>"ms_inject_socket_failures": "0",
>>>"ms_inject_delay_type": "",
>>>"ms_inject_delay_max": "1",
>>>"ms_inject_delay_probability": "0",
>>>"ms_inject_internal_delays": "0",
>>>"mon_data": "\/var\/lib\/ceph\/mon\/ceph-0",
>>>"mon_initial_members": "",
>>>"mon_sync_fs_threshold": "5",
>>>"mon_compact_on_start": "false",
>>>"mon_compact_on_bootstrap": "false",
>>>"mon_compact_on_trim": "true",
>>>"mon_tick_interval": "5",
>>>"mon_subscribe_interval": "300",
>>>"mon_osd_laggy_halflife": "3600",
>>>"mon_osd_laggy_weight": "0.3",
>>>"mon_osd_adjust_heartbeat_grace": "true",
>>>"mon_osd_adjust_down_out_interval": "true",
>>>"mon_osd_auto_mark_in": "false",
>>>"mon_osd_auto_mark_auto_out_in": "true",
>>>"mon_osd_auto_mark_new_in": "true",
>>>"mon_osd_down_out_interval": "300",
>>>"mon_osd_down_out_subtree_limit": "rack",
>>>"mon_osd_min_up_ratio": "0.3",
>>>"mon_osd_min_in_ratio": "0.3",
>>>"mon_stat_smooth_intervals": "2",
>>>"mon_lease": "5",
>>>"mon_lease_renew_interval": "3",
>>>"mon_lease_ack_timeout": "10"

Re: still recovery issues with cuttlefish

2013-08-02 Thread Stefan Priebe
I already tried both values; this makes no difference. The drives are not 
the bottleneck.


Am 02.08.2013 19:35, schrieb Samuel Just:

You might try turning osd_max_backfills to 2 or 1.
-Sam

On Fri, Aug 2, 2013 at 12:44 AM, Stefan Priebe  wrote:

Am 01.08.2013 23:23, schrieb Samuel Just:
Can you dump your osd settings?


sudo ceph --admin-daemon ceph-osd..asok config show


Sure.



{ "name": "osd.0",
   "cluster": "ceph",
   "none": "0\/5",
   "lockdep": "0\/0",
   "context": "0\/0",
   "crush": "0\/0",
   "mds": "0\/0",
   "mds_balancer": "0\/0",
   "mds_locker": "0\/0",
   "mds_log": "0\/0",
   "mds_log_expire": "0\/0",
   "mds_migrator": "0\/0",
   "buffer": "0\/0",
   "timer": "0\/0",
   "filer": "0\/0",
   "striper": "0\/1",
   "objecter": "0\/0",
   "rados": "0\/0",
   "rbd": "0\/0",
   "journaler": "0\/0",
   "objectcacher": "0\/0",
   "client": "0\/0",
   "osd": "0\/0",
   "optracker": "0\/0",
   "objclass": "0\/0",
   "filestore": "0\/0",
   "journal": "0\/0",
   "ms": "0\/0",
   "mon": "0\/0",
   "monc": "0\/0",
   "paxos": "0\/0",
   "tp": "0\/0",
   "auth": "0\/0",
   "crypto": "1\/5",
   "finisher": "0\/0",
   "heartbeatmap": "0\/0",
   "perfcounter": "0\/0",
   "rgw": "0\/0",
   "hadoop": "0\/0",
   "javaclient": "1\/5",
   "asok": "0\/0",
   "throttle": "0\/0",
   "host": "cloud1-1268",
   "fsid": "----",
   "public_addr": "10.255.0.90:0\/0",
   "cluster_addr": "10.255.0.90:0\/0",
   "public_network": "10.255.0.1\/24",
   "cluster_network": "10.255.0.1\/24",
   "num_client": "1",
   "monmap": "",
   "mon_host": "",
   "lockdep": "false",
   "run_dir": "\/var\/run\/ceph",
   "admin_socket": "\/var\/run\/ceph\/ceph-osd.0.asok",
   "daemonize": "true",
   "pid_file": "\/var\/run\/ceph\/osd.0.pid",
   "chdir": "\/",
   "max_open_files": "0",
   "fatal_signal_handlers": "true",
   "log_file": "\/var\/log\/ceph\/ceph-osd.0.log",
   "log_max_new": "1000",
   "log_max_recent": "1",
   "log_to_stderr": "false",
   "err_to_stderr": "true",
   "log_to_syslog": "false",
   "err_to_syslog": "false",
   "log_flush_on_exit": "true",
   "log_stop_at_utilization": "0.97",
   "clog_to_monitors": "true",
   "clog_to_syslog": "false",
   "clog_to_syslog_level": "info",
   "clog_to_syslog_facility": "daemon",
   "mon_cluster_log_to_syslog": "false",
   "mon_cluster_log_to_syslog_level": "info",
   "mon_cluster_log_to_syslog_facility": "daemon",
   "mon_cluster_log_file": "\/var\/log\/ceph\/ceph.log",
   "key": "",
   "keyfile": "",
   "keyring": "\/etc\/ceph\/osd.0.keyring",
   "heartbeat_interval": "5",
   "heartbeat_file": "",
   "heartbeat_inject_failure": "0",
   "perf": "true",
   "ms_tcp_nodelay": "true",
   "ms_tcp_rcvbuf": "0",
   "ms_initial_backoff": "0.2",
   "ms_max_backoff": "15",
   "ms_nocrc": "false",
   "ms_die_on_bad_msg": "false",
   "ms_die_on_unhandled_msg": "false",
   "ms_dispatch_throttle_bytes": "104857600",
   "ms_bind_ipv6": "false",
   "ms_bind_port_min": "6800",
   "ms_bind_port_max": "7100",
   "ms_rwthread_stack_bytes": "1048576",
   "ms_tcp_read_timeout": "900",
   "ms_pq_max_tokens_per_priority": "4194304",
   "ms_pq_min_cost": "65536",
   "ms_inject_socket_failures": "0",
   "ms_inject_delay_type": "",
   "ms_inject_delay_max": "1",
   "ms_inject_delay_probability": "0",
   "ms_inject_internal_delays": "0",
   "mon_data": "\/var\/lib\/ceph\/mon\/ceph-0",
   "mon_initial_members": "",
   "mon_sync_fs_threshold": "5",
   "mon_compact_on_start": "false",
   "mon_compact_on_bootstrap": "false",
   "mon_compact_on_trim": "true",
   "mon_tick_interval": "5",
   "mon_subscribe_interval": "300",
   "mon_osd_laggy_halflife": "3600",
   "mon_osd_laggy_weight": "0.3",
   "mon_osd_adjust_heartbeat_grace": "true",
   "mon_osd_adjust_down_out_interval": "true",
   "mon_osd_auto_mark_in": "false",
   "mon_osd_auto_mark_auto_out_in": "true",
   "mon_osd_auto_mark_new_in": "true",
   "mon_osd_down_out_interval": "300",
   "mon_osd_down_out_subtree_limit": "rack",
   "mon_osd_min_up_ratio": "0.3",
   "mon_osd_min_in_ratio": "0.3",
   "mon_stat_smooth_intervals": "2",
   "mon_lease": "5",
   "mon_lease_renew_interval": "3",
   "mon_lease_ack_timeout": "10",
   "mon_clock_drift_allowed": "0.05",
   "mon_clock_drift_warn_backoff": "5",
   "mon_timecheck_interval": "300",
   "mon_accept_timeout": "10",
   "mon_pg_create_interval": "30",
   "mon_pg_stuck_threshold": "300",
   "mon_osd_full_ratio": "0.95",
   "mon_osd_nearfull_ratio": "0.85",
   "mon_globalid_prealloc": "100",
   "mon_osd_report_timeout": "900",
   "mon_force_standby_active": "true",
   "mon_min_osdmap_epochs": "500",
   "mon_max_pgmap_epochs": "500",
   "mon_max_log_epochs": "500",
   "mon_max_osd": "1",
   "mon_probe_timeout": "2",
   "mon_slurp_timeout": "10",
   "mon_slurp_bytes": "262144",
   "mon_client_bytes": "104857600",
   "mon_daemon_bytes": "419430400",
   "mon_max_log_entries_per_event": "4096",
   "mon_health_data_update_interval": "60",
   "mon_data_avail_crit": "5",
   "mon_dat

Re: still recovery issues with cuttlefish

2013-08-02 Thread Andrey Korolyov
Created #5844.

On Thu, Aug 1, 2013 at 10:38 PM, Samuel Just  wrote:
> Is there a bug open for this?  I suspect we don't sufficiently
> throttle the snapshot removal work.
> -Sam
>
> On Thu, Aug 1, 2013 at 7:50 AM, Andrey Korolyov  wrote:
>> Second this. Also, for the long-lasting snapshot problem and related
>> performance issues, I can say that cuttlefish improved things greatly, but
>> creation/deletion of a large snapshot (hundreds of gigabytes of committed
>> data) can still bring the cluster down for minutes, despite using every
>> possible optimization.
>>
>> On Thu, Aug 1, 2013 at 12:22 PM, Stefan Priebe - Profihost AG
>>  wrote:
>>> Hi,
>>>
>>> I still have recovery issues with cuttlefish. After the OSD comes back
>>> it seems to hang for around 2-4 minutes and then recovery seems to start
>>> (pgs in recovery_wait start to decrement). This is with ceph 0.61.7. I
>>> get a lot of slow request messages and hanging VMs.
>>>
>>> What I noticed today is that if I leave the OSD off long enough that ceph
>>> starts to backfill, the recovery and "re"-backfilling go absolutely
>>> smoothly without any issues and no slow request messages at all.
>>>
>>> Does anybody have an idea why?
>>>
>>> Greets,
>>> Stefan


Re: still recovery issues with cuttlefish

2013-08-02 Thread Samuel Just
You might try turning osd_max_backfills to 2 or 1.
-Sam

On Fri, Aug 2, 2013 at 12:44 AM, Stefan Priebe  wrote:
> Am 01.08.2013 23:23, schrieb Samuel Just:> Can you dump your osd settings?
>
>> sudo ceph --admin-daemon ceph-osd..asok config show
>
> Sure.
>
>
>
> { "name": "osd.0",
>   "cluster": "ceph",
>   "none": "0\/5",
>   "lockdep": "0\/0",
>   "context": "0\/0",
>   "crush": "0\/0",
>   "mds": "0\/0",
>   "mds_balancer": "0\/0",
>   "mds_locker": "0\/0",
>   "mds_log": "0\/0",
>   "mds_log_expire": "0\/0",
>   "mds_migrator": "0\/0",
>   "buffer": "0\/0",
>   "timer": "0\/0",
>   "filer": "0\/0",
>   "striper": "0\/1",
>   "objecter": "0\/0",
>   "rados": "0\/0",
>   "rbd": "0\/0",
>   "journaler": "0\/0",
>   "objectcacher": "0\/0",
>   "client": "0\/0",
>   "osd": "0\/0",
>   "optracker": "0\/0",
>   "objclass": "0\/0",
>   "filestore": "0\/0",
>   "journal": "0\/0",
>   "ms": "0\/0",
>   "mon": "0\/0",
>   "monc": "0\/0",
>   "paxos": "0\/0",
>   "tp": "0\/0",
>   "auth": "0\/0",
>   "crypto": "1\/5",
>   "finisher": "0\/0",
>   "heartbeatmap": "0\/0",
>   "perfcounter": "0\/0",
>   "rgw": "0\/0",
>   "hadoop": "0\/0",
>   "javaclient": "1\/5",
>   "asok": "0\/0",
>   "throttle": "0\/0",
>   "host": "cloud1-1268",
>   "fsid": "----",
>   "public_addr": "10.255.0.90:0\/0",
>   "cluster_addr": "10.255.0.90:0\/0",
>   "public_network": "10.255.0.1\/24",
>   "cluster_network": "10.255.0.1\/24",
>   "num_client": "1",
>   "monmap": "",
>   "mon_host": "",
>   "lockdep": "false",
>   "run_dir": "\/var\/run\/ceph",
>   "admin_socket": "\/var\/run\/ceph\/ceph-osd.0.asok",
>   "daemonize": "true",
>   "pid_file": "\/var\/run\/ceph\/osd.0.pid",
>   "chdir": "\/",
>   "max_open_files": "0",
>   "fatal_signal_handlers": "true",
>   "log_file": "\/var\/log\/ceph\/ceph-osd.0.log",
>   "log_max_new": "1000",
>   "log_max_recent": "1",
>   "log_to_stderr": "false",
>   "err_to_stderr": "true",
>   "log_to_syslog": "false",
>   "err_to_syslog": "false",
>   "log_flush_on_exit": "true",
>   "log_stop_at_utilization": "0.97",
>   "clog_to_monitors": "true",
>   "clog_to_syslog": "false",
>   "clog_to_syslog_level": "info",
>   "clog_to_syslog_facility": "daemon",
>   "mon_cluster_log_to_syslog": "false",
>   "mon_cluster_log_to_syslog_level": "info",
>   "mon_cluster_log_to_syslog_facility": "daemon",
>   "mon_cluster_log_file": "\/var\/log\/ceph\/ceph.log",
>   "key": "",
>   "keyfile": "",
>   "keyring": "\/etc\/ceph\/osd.0.keyring",
>   "heartbeat_interval": "5",
>   "heartbeat_file": "",
>   "heartbeat_inject_failure": "0",
>   "perf": "true",
>   "ms_tcp_nodelay": "true",
>   "ms_tcp_rcvbuf": "0",
>   "ms_initial_backoff": "0.2",
>   "ms_max_backoff": "15",
>   "ms_nocrc": "false",
>   "ms_die_on_bad_msg": "false",
>   "ms_die_on_unhandled_msg": "false",
>   "ms_dispatch_throttle_bytes": "104857600",
>   "ms_bind_ipv6": "false",
>   "ms_bind_port_min": "6800",
>   "ms_bind_port_max": "7100",
>   "ms_rwthread_stack_bytes": "1048576",
>   "ms_tcp_read_timeout": "900",
>   "ms_pq_max_tokens_per_priority": "4194304",
>   "ms_pq_min_cost": "65536",
>   "ms_inject_socket_failures": "0",
>   "ms_inject_delay_type": "",
>   "ms_inject_delay_max": "1",
>   "ms_inject_delay_probability": "0",
>   "ms_inject_internal_delays": "0",
>   "mon_data": "\/var\/lib\/ceph\/mon\/ceph-0",
>   "mon_initial_members": "",
>   "mon_sync_fs_threshold": "5",
>   "mon_compact_on_start": "false",
>   "mon_compact_on_bootstrap": "false",
>   "mon_compact_on_trim": "true",
>   "mon_tick_interval": "5",
>   "mon_subscribe_interval": "300",
>   "mon_osd_laggy_halflife": "3600",
>   "mon_osd_laggy_weight": "0.3",
>   "mon_osd_adjust_heartbeat_grace": "true",
>   "mon_osd_adjust_down_out_interval": "true",
>   "mon_osd_auto_mark_in": "false",
>   "mon_osd_auto_mark_auto_out_in": "true",
>   "mon_osd_auto_mark_new_in": "true",
>   "mon_osd_down_out_interval": "300",
>   "mon_osd_down_out_subtree_limit": "rack",
>   "mon_osd_min_up_ratio": "0.3",
>   "mon_osd_min_in_ratio": "0.3",
>   "mon_stat_smooth_intervals": "2",
>   "mon_lease": "5",
>   "mon_lease_renew_interval": "3",
>   "mon_lease_ack_timeout": "10",
>   "mon_clock_drift_allowed": "0.05",
>   "mon_clock_drift_warn_backoff": "5",
>   "mon_timecheck_interval": "300",
>   "mon_accept_timeout": "10",
>   "mon_pg_create_interval": "30",
>   "mon_pg_stuck_threshold": "300",
>   "mon_osd_full_ratio": "0.95",
>   "mon_osd_nearfull_ratio": "0.85",
>   "mon_globalid_prealloc": "100",
>   "mon_osd_report_timeout": "900",
>   "mon_force_standby_active": "true",
>   "mon_min_osdmap_epochs": "500",
>   "mon_max_pgmap_epochs": "500",
>   "mon_max_log_epochs": "500",
>   "mon_max_osd": "1",
>   "mon_probe_timeout": "2",
>   "mon_slurp_timeout": "10",
>   "mon_slurp_bytes": "262144",
>   "mon_client_bytes": "104857600",
>   "mon_daemon_bytes": "419430400",
>   "mon_max_log_entries_per_event": "4096",
>   "mon_health_data_update_interval": "60",
>   "mon_data_avail

Re: still recovery issues with cuttlefish

2013-08-02 Thread Stefan Priebe

Am 01.08.2013 23:23, schrieb Samuel Just:
> Can you dump your osd settings?
> sudo ceph --admin-daemon ceph-osd..asok config show

Sure.



{ "name": "osd.0",
  "cluster": "ceph",
  "none": "0\/5",
  "lockdep": "0\/0",
  "context": "0\/0",
  "crush": "0\/0",
  "mds": "0\/0",
  "mds_balancer": "0\/0",
  "mds_locker": "0\/0",
  "mds_log": "0\/0",
  "mds_log_expire": "0\/0",
  "mds_migrator": "0\/0",
  "buffer": "0\/0",
  "timer": "0\/0",
  "filer": "0\/0",
  "striper": "0\/1",
  "objecter": "0\/0",
  "rados": "0\/0",
  "rbd": "0\/0",
  "journaler": "0\/0",
  "objectcacher": "0\/0",
  "client": "0\/0",
  "osd": "0\/0",
  "optracker": "0\/0",
  "objclass": "0\/0",
  "filestore": "0\/0",
  "journal": "0\/0",
  "ms": "0\/0",
  "mon": "0\/0",
  "monc": "0\/0",
  "paxos": "0\/0",
  "tp": "0\/0",
  "auth": "0\/0",
  "crypto": "1\/5",
  "finisher": "0\/0",
  "heartbeatmap": "0\/0",
  "perfcounter": "0\/0",
  "rgw": "0\/0",
  "hadoop": "0\/0",
  "javaclient": "1\/5",
  "asok": "0\/0",
  "throttle": "0\/0",
  "host": "cloud1-1268",
  "fsid": "----",
  "public_addr": "10.255.0.90:0\/0",
  "cluster_addr": "10.255.0.90:0\/0",
  "public_network": "10.255.0.1\/24",
  "cluster_network": "10.255.0.1\/24",
  "num_client": "1",
  "monmap": "",
  "mon_host": "",
  "lockdep": "false",
  "run_dir": "\/var\/run\/ceph",
  "admin_socket": "\/var\/run\/ceph\/ceph-osd.0.asok",
  "daemonize": "true",
  "pid_file": "\/var\/run\/ceph\/osd.0.pid",
  "chdir": "\/",
  "max_open_files": "0",
  "fatal_signal_handlers": "true",
  "log_file": "\/var\/log\/ceph\/ceph-osd.0.log",
  "log_max_new": "1000",
  "log_max_recent": "1",
  "log_to_stderr": "false",
  "err_to_stderr": "true",
  "log_to_syslog": "false",
  "err_to_syslog": "false",
  "log_flush_on_exit": "true",
  "log_stop_at_utilization": "0.97",
  "clog_to_monitors": "true",
  "clog_to_syslog": "false",
  "clog_to_syslog_level": "info",
  "clog_to_syslog_facility": "daemon",
  "mon_cluster_log_to_syslog": "false",
  "mon_cluster_log_to_syslog_level": "info",
  "mon_cluster_log_to_syslog_facility": "daemon",
  "mon_cluster_log_file": "\/var\/log\/ceph\/ceph.log",
  "key": "",
  "keyfile": "",
  "keyring": "\/etc\/ceph\/osd.0.keyring",
  "heartbeat_interval": "5",
  "heartbeat_file": "",
  "heartbeat_inject_failure": "0",
  "perf": "true",
  "ms_tcp_nodelay": "true",
  "ms_tcp_rcvbuf": "0",
  "ms_initial_backoff": "0.2",
  "ms_max_backoff": "15",
  "ms_nocrc": "false",
  "ms_die_on_bad_msg": "false",
  "ms_die_on_unhandled_msg": "false",
  "ms_dispatch_throttle_bytes": "104857600",
  "ms_bind_ipv6": "false",
  "ms_bind_port_min": "6800",
  "ms_bind_port_max": "7100",
  "ms_rwthread_stack_bytes": "1048576",
  "ms_tcp_read_timeout": "900",
  "ms_pq_max_tokens_per_priority": "4194304",
  "ms_pq_min_cost": "65536",
  "ms_inject_socket_failures": "0",
  "ms_inject_delay_type": "",
  "ms_inject_delay_max": "1",
  "ms_inject_delay_probability": "0",
  "ms_inject_internal_delays": "0",
  "mon_data": "\/var\/lib\/ceph\/mon\/ceph-0",
  "mon_initial_members": "",
  "mon_sync_fs_threshold": "5",
  "mon_compact_on_start": "false",
  "mon_compact_on_bootstrap": "false",
  "mon_compact_on_trim": "true",
  "mon_tick_interval": "5",
  "mon_subscribe_interval": "300",
  "mon_osd_laggy_halflife": "3600",
  "mon_osd_laggy_weight": "0.3",
  "mon_osd_adjust_heartbeat_grace": "true",
  "mon_osd_adjust_down_out_interval": "true",
  "mon_osd_auto_mark_in": "false",
  "mon_osd_auto_mark_auto_out_in": "true",
  "mon_osd_auto_mark_new_in": "true",
  "mon_osd_down_out_interval": "300",
  "mon_osd_down_out_subtree_limit": "rack",
  "mon_osd_min_up_ratio": "0.3",
  "mon_osd_min_in_ratio": "0.3",
  "mon_stat_smooth_intervals": "2",
  "mon_lease": "5",
  "mon_lease_renew_interval": "3",
  "mon_lease_ack_timeout": "10",
  "mon_clock_drift_allowed": "0.05",
  "mon_clock_drift_warn_backoff": "5",
  "mon_timecheck_interval": "300",
  "mon_accept_timeout": "10",
  "mon_pg_create_interval": "30",
  "mon_pg_stuck_threshold": "300",
  "mon_osd_full_ratio": "0.95",
  "mon_osd_nearfull_ratio": "0.85",
  "mon_globalid_prealloc": "100",
  "mon_osd_report_timeout": "900",
  "mon_force_standby_active": "true",
  "mon_min_osdmap_epochs": "500",
  "mon_max_pgmap_epochs": "500",
  "mon_max_log_epochs": "500",
  "mon_max_osd": "1",
  "mon_probe_timeout": "2",
  "mon_slurp_timeout": "10",
  "mon_slurp_bytes": "262144",
  "mon_client_bytes": "104857600",
  "mon_daemon_bytes": "419430400",
  "mon_max_log_entries_per_event": "4096",
  "mon_health_data_update_interval": "60",
  "mon_data_avail_crit": "5",
  "mon_data_avail_warn": "30",
  "mon_config_key_max_entry_size": "4096",
  "mon_sync_trim_timeout": "30",
  "mon_sync_heartbeat_timeout": "30",
  "mon_sync_heartbeat_interval": "5",
  "mon_sync_backoff_timeout": "30",
  "mon_sync_timeout": "30",
  "mon_sync_max_retries": "5",
  "mon_sync_max_payload_size": "1048576",
  "mon_sync_debug": "false",
  "mon_sync_debug_leader": "-1",
  "mon_sync_debug_provide

Re: still recovery issues with cuttlefish

2013-08-01 Thread Samuel Just
Can you dump your osd settings?
sudo ceph --admin-daemon ceph-osd..asok config show
-Sam
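
Spelled out with the default socket location this becomes, for example for osd.0
(a sketch; substitute the id of whichever OSD you dump - the path matches the
admin_socket value in the output itself):

  sudo ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok config show > osd.0-config.txt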

On Thu, Aug 1, 2013 at 12:07 PM, Stefan Priebe  wrote:
> Mike, we already have the async patch running. Yes, it helps, but it only helps;
> it does not solve the problem. It just hides the issue ...
> Am 01.08.2013 20:54, schrieb Mike Dawson:
>
>> I am also seeing recovery issues with 0.61.7. Here's the process:
>>
>> - ceph osd set noout
>>
>> - Reboot one of the nodes hosting OSDs
>>  - VMs mounted from RBD volumes work properly
>>
>> - I see the OSD's boot messages as they re-join the cluster
>>
>> - Start seeing active+recovery_wait, peering, and active+recovering
>>  - VMs mounted from RBD volumes become unresponsive.
>>
>> - Recovery completes
>>  - VMs mounted from RBD volumes regain responsiveness
>>
>> - ceph osd unset noout
>>
>> Would joshd's async patch for qemu help here, or is there something else
>> going on?
>>
>> Output of ceph -w at: http://pastebin.com/raw.php?i=JLcZYFzY
>>
>> Thanks,
>>
>> Mike Dawson
>> Co-Founder & Director of Cloud Architecture
>> Cloudapt LLC
>> 6330 East 75th Street, Suite 170
>> Indianapolis, IN 46250
>>
>> On 8/1/2013 2:34 PM, Samuel Just wrote:
>>>
>>> Can you reproduce and attach the ceph.log from before you stop the osd
>>> until after you have started the osd and it has recovered?
>>> -Sam
>>>
>>> On Thu, Aug 1, 2013 at 1:22 AM, Stefan Priebe - Profihost AG
>>>  wrote:

 Hi,

 I still have recovery issues with cuttlefish. After the OSD comes back
 it seems to hang for around 2-4 minutes and then recovery seems to start
 (pgs in recovery_wait start to decrement). This is with ceph 0.61.7. I
 get a lot of slow request messages and hanging VMs.

 What I noticed today is that if I leave the OSD off long enough that ceph
 starts to backfill, the recovery and "re"-backfilling go absolutely
 smoothly without any issues and no slow request messages at all.

 Does anybody have an idea why?

 Greets,
 Stefan


Re: still recovery issues with cuttlefish

2013-08-01 Thread Stefan Priebe
Mike, we already have the async patch running. Yes, it helps, but it only helps; 
it does not solve the problem. It just hides the issue ...

Am 01.08.2013 20:54, schrieb Mike Dawson:

I am also seeing recovery issues with 0.61.7. Here's the process:

- ceph osd set noout

- Reboot one of the nodes hosting OSDs
 - VMs mounted from RBD volumes work properly

- I see the OSD's boot messages as they re-join the cluster

- Start seeing active+recovery_wait, peering, and active+recovering
 - VMs mounted from RBD volumes become unresponsive.

- Recovery completes
 - VMs mounted from RBD volumes regain responsiveness

- ceph osd unset noout

Would joshd's async patch for qemu help here, or is there something else
going on?

Output of ceph -w at: http://pastebin.com/raw.php?i=JLcZYFzY

Thanks,

Mike Dawson
Co-Founder & Director of Cloud Architecture
Cloudapt LLC
6330 East 75th Street, Suite 170
Indianapolis, IN 46250

On 8/1/2013 2:34 PM, Samuel Just wrote:

Can you reproduce and attach the ceph.log from before you stop the osd
until after you have started the osd and it has recovered?
-Sam

On Thu, Aug 1, 2013 at 1:22 AM, Stefan Priebe - Profihost AG
 wrote:

Hi,

I still have recovery issues with cuttlefish. After the OSD comes back
it seems to hang for around 2-4 minutes and then recovery seems to start
(pgs in recovery_wait start to decrement). This is with ceph 0.61.7. I
get a lot of slow request messages and hanging VMs.

What I noticed today is that if I leave the OSD off long enough that ceph
starts to backfill, the recovery and "re"-backfilling go absolutely
smoothly without any issues and no slow request messages at all.

Does anybody have an idea why?

Greets,
Stefan


Re: still recovery issues with cuttlefish

2013-08-01 Thread Mike Dawson

I am also seeing recovery issues with 0.61.7. Here's the process:

- ceph osd set noout

- Reboot one of the nodes hosting OSDs
- VMs mounted from RBD volumes work properly

- I see the OSD's boot messages as they re-join the cluster

- Start seeing active+recovery_wait, peering, and active+recovering
- VMs mounted from RBD volumes become unresponsive.

- Recovery completes
- VMs mounted from RBD volumes regain responsiveness

- ceph osd unset noout
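
For reference, the same flow as commands (a sketch; the node name is a placeholder):

  ceph osd set noout        # keep the rebooted node's OSDs from being marked out
  ssh osd-node-01 reboot    # reboot the node hosting the OSDs
  ceph -w                   # watch peering / recovery_wait / recovering until clean
  ceph osd unset noout      # back to normal once recovery completes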

Would joshd's async patch for qemu help here, or is there something else 
going on?


Output of ceph -w at: http://pastebin.com/raw.php?i=JLcZYFzY

Thanks,

Mike Dawson
Co-Founder & Director of Cloud Architecture
Cloudapt LLC
6330 East 75th Street, Suite 170
Indianapolis, IN 46250

On 8/1/2013 2:34 PM, Samuel Just wrote:

Can you reproduce and attach the ceph.log from before you stop the osd
until after you have started the osd and it has recovered?
-Sam

On Thu, Aug 1, 2013 at 1:22 AM, Stefan Priebe - Profihost AG
 wrote:

Hi,

I still have recovery issues with cuttlefish. After the OSD comes back,
it seems to hang for around 2-4 minutes and then recovery seems to start
(pgs in recovery_wait start to decrement). This is with ceph 0.61.7. I
get a lot of slow request messages and hanging VMs.

What I noticed today is that if I leave the OSD off long enough that ceph
starts to backfill, the recovery and "re"-backfilling go absolutely
smoothly, with no issues and no slow request messages at all.

Does anybody have an idea why?

Greets,
Stefan


Re: still recovery issues with cuttlefish

2013-08-01 Thread Stefan Priebe

Here it is.

On 01.08.2013 20:36, Samuel Just wrote:

For now, just the main ceph.log.
-Sam

On Thu, Aug 1, 2013 at 11:34 AM, Stefan Priebe  wrote:

On 01.08.2013 20:34, Samuel Just wrote:


Can you reproduce and attach the ceph.log from before you stop the osd
until after you have started the osd and it has recovered?
-Sam



Sure, which log levels?



On Thu, Aug 1, 2013 at 1:22 AM, Stefan Priebe - Profihost AG
 wrote:


Hi,

I still have recovery issues with cuttlefish. After the OSD comes back,
it seems to hang for around 2-4 minutes and then recovery seems to start
(pgs in recovery_wait start to decrement). This is with ceph 0.61.7. I
get a lot of slow request messages and hanging VMs.

What I noticed today is that if I leave the OSD off long enough that ceph
starts to backfill, the recovery and "re"-backfilling go absolutely
smoothly, with no issues and no slow request messages at all.

Does anybody have an idea why?

Greets,
Stefan





[Attachment: ceph.log.gz (application/gzip)]


Re: still recovery issues with cuttlefish

2013-08-01 Thread Samuel Just
Is there a bug open for this?  I suspect we don't sufficiently
throttle the snapshot removal work.
-Sam

On Thu, Aug 1, 2013 at 7:50 AM, Andrey Korolyov  wrote:
> Seconding this. Also, regarding the long-standing snapshot problem and
> related performance issues, I can say that cuttlefish improved things
> greatly, but creating or deleting a large snapshot (hundreds of gigabytes
> of committed data) can still bring the cluster down for minutes, despite
> using every possible optimization.
>
> On Thu, Aug 1, 2013 at 12:22 PM, Stefan Priebe - Profihost AG
>  wrote:
>> Hi,
>>
>> I still have recovery issues with cuttlefish. After the OSD comes back,
>> it seems to hang for around 2-4 minutes and then recovery seems to start
>> (pgs in recovery_wait start to decrement). This is with ceph 0.61.7. I
>> get a lot of slow request messages and hanging VMs.
>>
>> What I noticed today is that if I leave the OSD off long enough that ceph
>> starts to backfill, the recovery and "re"-backfilling go absolutely
>> smoothly, with no issues and no slow request messages at all.
>>
>> Does anybody have an idea why?
>>
>> Greets,
>> Stefan


Re: still recovery issues with cuttlefish

2013-08-01 Thread Samuel Just
It doesn't have log levels; it should be in /var/log/ceph/ceph.log.
-Sam
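
A minimal sketch for grabbing just that window out of the cluster log; the
timestamps are placeholders for when the OSD was stopped and when recovery
finished:

  # extract the lines between stopping the OSD and the end of recovery,
  # then compress the slice for the list
  sed -n '/^2013-08-01 20:00/,/^2013-08-01 20:30/p' \
      /var/log/ceph/ceph.log | gzip > ceph.log.gz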

On Thu, Aug 1, 2013 at 11:36 AM, Samuel Just  wrote:
> For now, just the main ceph.log.
> -Sam
>
> On Thu, Aug 1, 2013 at 11:34 AM, Stefan Priebe  wrote:
>> On 01.08.2013 20:34, Samuel Just wrote:
>>
>>> Can you reproduce and attach the ceph.log from before you stop the osd
>>> until after you have started the osd and it has recovered?
>>> -Sam
>>
>>
>> Sure, which log levels?
>>
>>
>>> On Thu, Aug 1, 2013 at 1:22 AM, Stefan Priebe - Profihost AG
>>>  wrote:

 Hi,

 I still have recovery issues with cuttlefish. After the OSD comes back,
 it seems to hang for around 2-4 minutes and then recovery seems to start
 (pgs in recovery_wait start to decrement). This is with ceph 0.61.7. I
 get a lot of slow request messages and hanging VMs.

 What I noticed today is that if I leave the OSD off long enough that ceph
 starts to backfill, the recovery and "re"-backfilling go absolutely
 smoothly, with no issues and no slow request messages at all.

 Does anybody have an idea why?

 Greets,
 Stefan


Re: still recovery issues with cuttlefish

2013-08-01 Thread Samuel Just
For now, just the main ceph.log.
-Sam

On Thu, Aug 1, 2013 at 11:34 AM, Stefan Priebe  wrote:
> On 01.08.2013 20:34, Samuel Just wrote:
>
>> Can you reproduce and attach the ceph.log from before you stop the osd
>> until after you have started the osd and it has recovered?
>> -Sam
>
>
> Sure, which log levels?
>
>
>> On Thu, Aug 1, 2013 at 1:22 AM, Stefan Priebe - Profihost AG
>>  wrote:
>>>
>>> Hi,
>>>
>>> I still have recovery issues with cuttlefish. After the OSD comes back,
>>> it seems to hang for around 2-4 minutes and then recovery seems to start
>>> (pgs in recovery_wait start to decrement). This is with ceph 0.61.7. I
>>> get a lot of slow request messages and hanging VMs.
>>>
>>> What I noticed today is that if I leave the OSD off long enough that ceph
>>> starts to backfill, the recovery and "re"-backfilling go absolutely
>>> smoothly, with no issues and no slow request messages at all.
>>>
>>> Does anybody have an idea why?
>>>
>>> Greets,
>>> Stefan


Re: still recovery issues with cuttlefish

2013-08-01 Thread Stefan Priebe

On 01.08.2013 20:34, Samuel Just wrote:

Can you reproduce and attach the ceph.log from before you stop the osd
until after you have started the osd and it has recovered?
-Sam


Sure, which log levels?


On Thu, Aug 1, 2013 at 1:22 AM, Stefan Priebe - Profihost AG
 wrote:

Hi,

I still have recovery issues with cuttlefish. After the OSD comes back,
it seems to hang for around 2-4 minutes and then recovery seems to start
(pgs in recovery_wait start to decrement). This is with ceph 0.61.7. I
get a lot of slow request messages and hanging VMs.

What I noticed today is that if I leave the OSD off long enough that ceph
starts to backfill, the recovery and "re"-backfilling go absolutely
smoothly, with no issues and no slow request messages at all.

Does anybody have an idea why?

Greets,
Stefan


Re: still recovery issues with cuttlefish

2013-08-01 Thread Samuel Just
Can you reproduce and attach the ceph.log from before you stop the osd
until after you have started the osd and it has recovered?
-Sam

On Thu, Aug 1, 2013 at 1:22 AM, Stefan Priebe - Profihost AG
 wrote:
> Hi,
>
> I still have recovery issues with cuttlefish. After the OSD comes back,
> it seems to hang for around 2-4 minutes and then recovery seems to start
> (pgs in recovery_wait start to decrement). This is with ceph 0.61.7. I
> get a lot of slow request messages and hanging VMs.
>
> What I noticed today is that if I leave the OSD off long enough that ceph
> starts to backfill, the recovery and "re"-backfilling go absolutely
> smoothly, with no issues and no slow request messages at all.
>
> Does anybody have an idea why?
>
> Greets,
> Stefan


Re: still recovery issues with cuttlefish

2013-08-01 Thread Andrey Korolyov
Seconding this. Also, regarding the long-standing snapshot problem and
related performance issues, I can say that cuttlefish improved things
greatly, but creating or deleting a large snapshot (hundreds of gigabytes
of committed data) can still bring the cluster down for minutes, despite
using every possible optimization.
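
Until the OSD-side snap-trim work is throttled better, about the only
client-side mitigation is pacing the deletions yourself. A rough sketch,
with the rbd/vm-disk-1 image name and the 300-second pause as placeholders:

  # remove RBD snapshots one at a time, with a pause between deletions,
  # so the resulting snap-trim load on the OSDs is spread out
  for snap in $(rbd snap ls rbd/vm-disk-1 | awk 'NR > 1 {print $2}'); do
      rbd snap rm rbd/vm-disk-1@"$snap"
      sleep 300
  done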

On Thu, Aug 1, 2013 at 12:22 PM, Stefan Priebe - Profihost AG
 wrote:
> Hi,
>
> I still have recovery issues with cuttlefish. After the OSD comes back,
> it seems to hang for around 2-4 minutes and then recovery seems to start
> (pgs in recovery_wait start to decrement). This is with ceph 0.61.7. I
> get a lot of slow request messages and hanging VMs.
>
> What I noticed today is that if I leave the OSD off long enough that ceph
> starts to backfill, the recovery and "re"-backfilling go absolutely
> smoothly, with no issues and no slow request messages at all.
>
> Does anybody have an idea why?
>
> Greets,
> Stefan


still recovery issues with cuttlefish

2013-08-01 Thread Stefan Priebe - Profihost AG
Hi,

I still have recovery issues with cuttlefish. After the OSD comes back,
it seems to hang for around 2-4 minutes and then recovery seems to start
(pgs in recovery_wait start to decrement). This is with ceph 0.61.7. I
get a lot of slow request messages and hanging VMs.

What I noticed today is that if I leave the OSD off long enough that ceph
starts to backfill, the recovery and "re"-backfilling go absolutely
smoothly, with no issues and no slow request messages at all.

Does anybody have an idea why?

Greets,
Stefan
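
If backfill really behaves that much better than log-based recovery here,
one way to reproduce the well-behaved case deliberately is to make sure the
OSD is marked out before it returns, so the cluster takes the remap/backfill
path that was observed to be smooth. A hedged sketch, with osd id 12 as a
placeholder:

  # mark the OSD out before the maintenance so its PGs remap and the
  # data movement happens via backfill
  ceph osd out 12
  # ... take the OSD down, do the maintenance, start it again ...
  ceph osd in 12

  # leaving the OSD down longer than "mon osd down out interval"
  # (300 s by default) should have a similar effect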