On 03/07/2017 09:52 AM, Mike Snitzer wrote:
> On Tue, Mar 07 2017 at  3:49am -0500,
> Jack Wang <jinpu.w...@profitbricks.com> wrote:
> 
>>
>>
>> On 06.03.2017 21:18, Jens Axboe wrote:
>>> On 03/05/2017 09:40 PM, NeilBrown wrote:
>>>> On Fri, Mar 03 2017, Jack Wang wrote:
>>>>>
>>>>> Thanks Neil for pushing the fix.
>>>>>
>>>>> We can optimize generic_make_request a little bit:
>>>>> - assign the bio_list struct "hold" directly instead of init and merge
>>>>> - remove duplicate code
>>>>>
>>>>> I think it would be better to squash this into your fix.
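
For reference, the simplification being described is roughly the
following (a sketch based on the description above and the bio_list
helpers in include/linux/bio.h; this is not the actual patch text):

	/* before: initialize "hold", then merge the in-flight list into it */
	struct bio_list hold;

	bio_list_init(&hold);
	bio_list_merge(&hold, &bio_list_on_stack);
	bio_list_init(&bio_list_on_stack);

	/* after: struct bio_list is just a {head, tail} pair, so a plain
	 * struct assignment achieves the same thing in one step
	 */
	struct bio_list hold = bio_list_on_stack;

	bio_list_init(&bio_list_on_stack);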
>>>>
>>>> Hi Jack,
>>>>  I don't object to your changes, but I'd like to see a response from
>>>>  Jens first.
>>>>  My preference would be to get the original patch in first; then other
>>>>  changes that build on it, such as this one, can be added.  Until the
>>>>  core change lands, any other work is pointless.
>>>>
>>>>  Of course, if Jens wants this merged in before he'll apply it, I'll
>>>>  happily do that.
>>>
>>> I like the change, and thanks for tackling this. It's been a pending
>>> issue for way too long. I do think we should squash Jack's patch
>>> into the original, as it does clean up the code nicely.
>>>
>>> Do we have a proper test case for this, so we can verify that it
>>> does indeed also work in practice?
>>>
>> Hi Jens,
>>
>> I can trigger a deadlock in RAID1 with the test below:
>>
>> I create one md RAID1 array with one local loop device and one remote
>> SCSI device exported via SRP. I run fio with mixed read/write on top of
>> the md device, then force_close the session on the storage side. The
>> mdX_raid1 thread ends up waiting on freeze_array in D state, and a lot
>> of fio processes are also in D state in wait_barrier.
>>
>> With Neil's patch above, I can no longer trigger the deadlock.
>>
>> The discussion was in link below:
>> http://www.spinics.net/lists/raid/msg54680.html
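
For reference, my understanding of the deadlock mechanism (a simplified
sketch of the pre-fix generic_make_request() loop; error handling and
blk_queue_enter() are omitted, so this is not the exact kernel code):

	bio_list_init(&bio_list_on_stack);
	current->bio_list = &bio_list_on_stack;
	do {
		struct request_queue *q = bdev_get_queue(bio->bi_bdev);

		/* A stacked driver's make_request_fn() may add bios for
		 * lower devices to current->bio_list instead of issuing
		 * them directly.
		 */
		q->make_request_fn(q, bio);
		bio = bio_list_pop(current->bio_list);
	} while (bio);
	current->bio_list = NULL;

Bios queued this way are only dispatched after make_request_fn()
returns, so if the driver blocks waiting for them to complete (as with
raid1's freeze_array/wait_barrier in the test above), they are never
issued and everything stalls in D state. Neil's fix reorders the
on-stack list so bios for lower-level devices are dispatched first.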
> 
> In addition to Jack's MD raid test there is a DM snapshot deadlock test,
> albeit an unpolished one that needs some work to get running, see:
> https://www.redhat.com/archives/dm-devel/2017-January/msg00064.html

Can you run this patch with that test, reverting your DM workaround?

-- 
Jens Axboe
