On 7/13/18 12:00 PM, Jens Axboe wrote:
> On 7/13/18 10:56 AM, Martin Wilck wrote:
>> On Thu, 2018-07-12 at 10:42 -0600, Jens Axboe wrote:
>>>
>>> Hence the patch I sent is wrong; the code actually looks fine. Which
>>> means we're back to trying to figure out what is going on here. It'd
>>> be great to have a test case...
>>
>> We don't have an easy test case yet. But the customer has confirmed
>> that the problem occurs with upstream 4.17.5, too. We also confirmed
>> again that the problem occurs when the kernel uses the kmalloc() code
>> path in __blkdev_direct_IO_simple().
>>
>> My personal suggestion would be to ditch __blkdev_direct_IO_simple()
>> altogether. After all, it's not _that_ much simpler than
>> __blkdev_direct_IO(), and it seems to be broken in a subtle way.
> 
> That's not a great suggestion at all; we need to find out why we're
> hitting the issue. For all you know, the bug could be elsewhere and
> we're just going to be hitting it differently some other way. The
> head-in-the-sand approach is rarely a win long term.
> 
> It's saving an allocation per IO, and that's definitely measurable on
> faster storage. For reads, it's also not causing a context
> switch for dirtying pages. I'm not a huge fan of multiple cases
> in general, but this one is definitely warranted in an era where
> 1 usec is a lot of extra time for an IO.
> 
>> However, so far I've only identified a minor problem (see below); it
>> doesn't explain the data corruption we're seeing.
> 
> What would help is trying to boil down a test case. So far it's a lot
> of hand waving, and nothing that can really help narrow down what is
> going on here.

When someone reports to you that some kind of data corruption occurs,
you need to find out as many details as you can. Ideally they can give
you a test case, but if they can't, you ask as many questions as
possible to help YOU build a test case.

- What is the nature of the corruption? Is it reads or writes? Is it
  zeroes, random data, or data from somewhere else? How big is the
  corrupted area? (The offset-stamping trick in the sketch below helps
  answer most of this.)

- What does the workload look like that reproduces it? If they don't
  really know and they can't give you a reproducer, you help them with
  tracing to give you the information you need.
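
To make both of those concrete, here's the shape of reproducer I'd
start from. It's purely a guess at the workload, not a known trigger:
a placeholder scratch device, sync O_DIRECT writes with read-back
verification, and each sector stamped with its device offset plus a
run ID so that a miscompare immediately shows what kind of data
landed there:

#define _GNU_SOURCE		/* for O_DIRECT */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define DEV	"/dev/sdX"	/* placeholder scratch device, gets overwritten */

/*
 * Stamp each 512-byte sector with its device offset and a per-run tag,
 * so a miscompare shows whether the bad range is zeroes, random junk,
 * stale data from an earlier run, or data that belongs at another offset.
 */
static void stamp_pattern(char *buf, size_t len, uint64_t dev_off,
			  uint64_t run_id)
{
	size_t off;

	for (off = 0; off < len; off += 512) {
		uint64_t hdr[2] = { dev_off + off, run_id };

		memset(buf + off, 0x5a, 512);
		memcpy(buf + off, hdr, sizeof(hdr));
	}
}

int main(void)
{
	uint64_t run_id = getpid();
	char *wbuf, *rbuf;
	size_t len;
	int fd;

	fd = open(DEV, O_RDWR | O_DIRECT);
	if (fd < 0) {
		perror("open");
		return 1;
	}
	if (posix_memalign((void **)&wbuf, 4096, 1 << 20) ||
	    posix_memalign((void **)&rbuf, 4096, 1 << 20)) {
		perror("posix_memalign");
		return 1;
	}

	/* Sizes above 16k and below 1MB, per the reasoning below. */
	for (len = 20 * 1024; len < 1 << 20; len += 16 * 1024) {
		stamp_pattern(wbuf, len, 0, run_id);

		if (pwrite(fd, wbuf, len, 0) != (ssize_t)len ||
		    pread(fd, rbuf, len, 0) != (ssize_t)len) {
			perror("io");
			return 1;
		}
		if (memcmp(wbuf, rbuf, len))
			printf("miscompare at len %zu\n", len);
	}
	return 0;
}

Run it against a scratch device only; it scribbles over the start of
the disk.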

Those questions are a great start. Right now we have zero information,
other than the fact that reverting that patch fixes it. That means
we're likely dealing with IO larger than 16k, since IOs of 16k or less
hit the same path whether or not the patch is reverted. We also know
it's less than 1MB, since bigger IOs don't take this path at all. But
that's it. I don't even know if it's writes, since your (and Hannes's)
reports have no details at all.
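
For reference, and paraphrasing from memory of 4.17's fs/block_dev.c
(check the actual tree, this is a sketch rather than a verbatim quote),
the boundary comes from DIO_INLINE_BIO_VECS being 4, i.e. 16k with 4k
pages:

	/* in __blkdev_direct_IO_simple(), roughly */
	if (nr_pages <= DIO_INLINE_BIO_VECS)
		vecs = inline_vecs;		/* on-stack, <= 16k */
	else {
		/* the kmalloc path the revert takes out of play */
		vecs = kmalloc(nr_pages * sizeof(struct bio_vec),
			       GFP_KERNEL);
		if (!vecs)
			return -ENOMEM;
	}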

Go look at what you ask customers to provide for bug reports, then apply
some of that to this case...

-- 
Jens Axboe
