On Fri, May 18, 2018 at 10:28:04AM -0600, Keith Busch wrote:
> On Thu, May 17, 2018 at 04:23:45PM +0200, Johannes Thumshirn wrote:
> > > Agreed. Alternatively possibly call the driver's reset_preparei/done
> > > callbacks.
> >
> > Exactly, but as long as we can issue the reset via sysfs the test-case
> > is still valid.

On Fri, May 18, 2018 at 05:45:00PM -0600, Keith Busch wrote:
> On Sat, May 19, 2018 at 06:26:28AM +0800, Ming Lei wrote:
> > So could we please face to the real issue instead of working around
> > test case?
>
> Yes, that's why I want you to stop referencing the broken test.
Unfortunately I don't …

On Sat, May 19, 2018 at 06:26:28AM +0800, Ming Lei wrote:
> So could we please face to the real issue instead of working around
> test case?
Yes, that's why I want you to stop referencing the broken test.

On Fri, May 18, 2018 at 07:57:51AM -0600, Keith Busch wrote:
> On Fri, May 18, 2018 at 08:20:05AM +0800, Ming Lei wrote:
> > What I think block/011 is helpful is that it can trigger IO timeout
> > during reset, which can be triggered in reality too.
>
> As I mentioned earlier, there is nothing wrong with the spirit of
> the test. …

On 5/18/18 7:57 AM, Keith Busch wrote:
> On Fri, May 18, 2018 at 08:20:05AM +0800, Ming Lei wrote:
>> What I think block/011 is helpful is that it can trigger IO timeout
>> during reset, which can be triggered in reality too.
>
> As I mentioned earlier, there is nothing wrong with the spirit of
> the test. …

On Thu, May 17, 2018 at 04:23:45PM +0200, Johannes Thumshirn wrote:
> > Agreed. Alternatively possibly call the driver's reset_preparei/done
> > callbacks.
>
> Exactly, but as long as we can issue the reset via sysfs the test-case
> is still valid.
I disagree the test case is valid. The test writ…

On Fri, May 18, 2018 at 08:20:05AM +0800, Ming Lei wrote:
> What I think block/011 is helpful is that it can trigger IO timeout
> during reset, which can be triggered in reality too.
As I mentioned earlier, there is nothing wrong with the spirit of
the test. What's wrong with it is the misguided i…

On Fri, May 18, 2018 at 08:19:58AM +0800, Ming Lei wrote:
> On Wed, May 16, 2018 at 08:20:31PM -0600, Keith Busch wrote:
> > Hi Ming,
> >
> > I'm developing the answers in code to the issues you raised. It will just
> > take a moment to finish fleshing those out. In the meantime I just want
> > to point out why I think block/011 isn't a real test.

On Wed, May 16, 2018 at 08:20:31PM -0600, Keith Busch wrote:
> Hi Ming,
>
> I'm developing the answers in code to the issues you raised. It will just
> take a moment to finish fleshing those out. In the meantime I just want
> to point out why I think block/011 isn't a real test.
>
> On Thu, May 17, …

On Thu, May 17, 2018 at 08:20:51AM -0600, Keith Busch wrote:
> > Heh. But yes, this test and the PCI "enable" interface in sysfs look
> > horribly wrong. pci_disable_device/pci_enable_device aren't something we
> > can just do underneath the driver. Even if injecting the lowlevel
> > config space …

On Thu, May 17, 2018 at 10:41:29AM +0200, Christoph Hellwig wrote:
> On Wed, May 16, 2018 at 08:20:31PM -0600, Keith Busch wrote:
> > On Thu, May 17, 2018 at 07:10:59AM +0800, Ming Lei wrote:
> > > All simulation in block/011 may happen in reality.
> >
> > If this test actually simulates reality, then the following one line
> > patch (plus explanation for why) would be a rea…

On Wed, May 16, 2018 at 08:20:31PM -0600, Keith Busch wrote:
> On Thu, May 17, 2018 at 07:10:59AM +0800, Ming Lei wrote:
> > All simulation in block/011 may happen in reality.
>
> If this test actually simulates reality, then the following one line
> patch (plus explanation for why) would be a rea…

Hi Ming,
I'm developing the answers in code to the issues you raised. It will just
take a moment to finish fleshing those out. In the meantime I just want
to point out why I think block/011 isn't a real test.
On Thu, May 17, 2018 at 07:10:59AM +0800, Ming Lei wrote:
> All simulation in block/011 may happen in reality.

Hi Keith,
On Wed, May 16, 2018 at 08:12:42AM -0600, Keith Busch wrote:
> Hi Ming,
>
> I'm sorry, but I am frankly not on board with introducing yet
> another work-queue into this driver for handling this situation. The
> fact you missed syncing with this queue in the surprise remove case,
> introducing various use-after-free conditions, just demonstrates exactly
> how over-complicated …

Hi Ming,
I'm sorry, but I am frankly not on board with introducing yet
another work-queue into this driver for handling this situation. The
fact you missed syncing with this queue in the surprise remove case,
introducing various use-after-free conditions, just demonstrates exactly
how over-complicated …

When one req is timed out, now nvme_timeout() handles it by the
following way:
    nvme_dev_disable(dev, false);
    nvme_reset_ctrl(&dev->ctrl);
    return BLK_EH_HANDLED;
There are several issues with the above approach:

1) IO may fail during resetting

Admin IO timeout may be tri…