Can you reproduce this with logging enabled on the primary OSD for that PG?

debug osd = 20
debug filestore = 20
debug ms = 1

Since restarting the OSD may itself act as a workaround (and make the
problem disappear before we can observe it), can you inject the debug
values without restarting the daemon?
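
For example (using osd.12 from your log excerpt; substitute whichever OSD
is currently primary for 25.3f), something along these lines should bump
the levels at runtime:

ceph tell osd.12 injectargs '--debug-osd 20 --debug-filestore 20 --debug-ms 1'

or, via the admin socket on the host carrying that OSD:

ceph daemon osd.12 config set debug_osd 20

You can inject the old values the same way once you've captured a couple
of the repeated scrub cycles.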
-Sam

On Wed, Sep 21, 2016 at 2:44 AM, Tobias Böhm <t...@robhost.de> wrote:
> Hi,
>
> there is an open bug in the tracker: http://tracker.ceph.com/issues/16474
>
> It also suggests restarting OSDs as a workaround. We ran into the same issue
> after increasing the number of PGs in our cluster, and restarting the OSDs
> resolved it for us as well.
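>
> (If it helps: on systemd-managed hosts a single OSD can usually be
> restarted with something like
>
> systemctl restart ceph-osd@12
>
> with the id adjusted; other init systems use their own service scripts.)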
>
> Tobias
>
>> Am 21.09.2016 um 11:26 schrieb Dan van der Ster <d...@vanderster.com>:
>>
>> There was a thread about this a few days ago:
>> http://lists.ceph.com/pipermail/ceph-users-ceph.com/2016-September/012857.html
>> And the OP found a workaround.
>> It looks like a bug, though... (by default a PG should be scrubbed at most
>> once per day).
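>>
>> If you want to double-check which scrub intervals are actually in effect
>> on that OSD, something like this on the host carrying it should show them
>> (osd.12 taken from your log; values are in seconds):
>>
>> ceph daemon osd.12 config show | grep scrub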
>>
>> -- dan
>>
>>
>>
>> On Tue, Sep 20, 2016 at 10:43 PM, Martin Bureau <mbur...@stingray.com> wrote:
>>> Hello,
>>>
>>>
>>> I noticed that the same PG gets scrubbed repeatedly on our new Jewel cluster:
>>>
>>>
>>> Here's an excerpt from the log:
>>>
>>>
>>> 2016-09-20 20:36:31.236123 osd.12 10.1.82.82:6820/14316 150514 : cluster [INF] 25.3f scrub ok
>>> 2016-09-20 20:36:32.232918 osd.12 10.1.82.82:6820/14316 150515 : cluster [INF] 25.3f scrub starts
>>> 2016-09-20 20:36:32.236876 osd.12 10.1.82.82:6820/14316 150516 : cluster [INF] 25.3f scrub ok
>>> 2016-09-20 20:36:33.233268 osd.12 10.1.82.82:6820/14316 150517 : cluster [INF] 25.3f deep-scrub starts
>>> 2016-09-20 20:36:33.242258 osd.12 10.1.82.82:6820/14316 150518 : cluster [INF] 25.3f deep-scrub ok
>>> 2016-09-20 20:36:36.233604 osd.12 10.1.82.82:6820/14316 150519 : cluster [INF] 25.3f scrub starts
>>> 2016-09-20 20:36:36.237221 osd.12 10.1.82.82:6820/14316 150520 : cluster [INF] 25.3f scrub ok
>>> 2016-09-20 20:36:41.234490 osd.12 10.1.82.82:6820/14316 150521 : cluster [INF] 25.3f deep-scrub starts
>>> 2016-09-20 20:36:41.243720 osd.12 10.1.82.82:6820/14316 150522 : cluster [INF] 25.3f deep-scrub ok
>>> 2016-09-20 20:36:45.235128 osd.12 10.1.82.82:6820/14316 150523 : cluster [INF] 25.3f deep-scrub starts
>>> 2016-09-20 20:36:45.352589 osd.12 10.1.82.82:6820/14316 150524 : cluster [INF] 25.3f deep-scrub ok
>>> 2016-09-20 20:36:47.235310 osd.12 10.1.82.82:6820/14316 150525 : cluster [INF] 25.3f scrub starts
>>> 2016-09-20 20:36:47.239348 osd.12 10.1.82.82:6820/14316 150526 : cluster [INF] 25.3f scrub ok
>>> 2016-09-20 20:36:49.235538 osd.12 10.1.82.82:6820/14316 150527 : cluster [INF] 25.3f deep-scrub starts
>>> 2016-09-20 20:36:49.243121 osd.12 10.1.82.82:6820/14316 150528 : cluster [INF] 25.3f deep-scrub ok
>>> 2016-09-20 20:36:51.235956 osd.12 10.1.82.82:6820/14316 150529 : cluster [INF] 25.3f deep-scrub starts
>>> 2016-09-20 20:36:51.244201 osd.12 10.1.82.82:6820/14316 150530 : cluster [INF] 25.3f deep-scrub ok
>>> 2016-09-20 20:36:52.236076 osd.12 10.1.82.82:6820/14316 150531 : cluster [INF] 25.3f scrub starts
>>> 2016-09-20 20:36:52.239376 osd.12 10.1.82.82:6820/14316 150532 : cluster [INF] 25.3f scrub ok
>>> 2016-09-20 20:36:56.236740 osd.12 10.1.82.82:6820/14316 150533 : cluster [INF] 25.3f scrub starts
>>>
>>>
>>> How can I troubleshoot / resolve this?
>>>
>>>
>>> Regards,
>>>
>>> Martin
>>>
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
