On Wed, 11 Jun 2014, Matias Bjørling wrote:
I've rebased nvmemq_review and added two patches from Jens that add
support for requests with single range virtual addresses.
Keith, will you take it for a spin and see if it fixes 068 for you?
There might still be a problem with some flushes, I'm loo
On Tue, 10 Jun 2014, Jens Axboe wrote:
On 06/10/2014 03:10 PM, Keith Busch wrote:
On Tue, 10 Jun 2014, Jens Axboe wrote:
On 06/10/2014 01:29 PM, Keith Busch wrote:
I have two devices, one formatted 4k, the other 512. The 4k is used as
the TEST_DEV and 512 is used as SCRATCH_DEV. I'm a
On Tue, 10 Jun 2014, Jens Axboe wrote:
On 06/10/2014 01:29 PM, Keith Busch wrote:
I have two devices, one formatted 4k, the other 512. The 4k is used as
the TEST_DEV and 512 is used as SCRATCH_DEV. I'm always hitting a BUG when
unmounting the scratch dev in xfstests generic/068. The bug
On Tue, 10 Jun 2014, Jens Axboe wrote:
On Jun 10, 2014, at 9:52 AM, Keith Busch wrote:
On Tue, 10 Jun 2014, Matias Bjørling wrote:
This converts the current NVMe driver to utilize the blk-mq layer.
I'd like to run xfstests on this, but it is failing mkfs.xfs. I honestly
don't
On Tue, 10 Jun 2014, Matias Bjørling wrote:
This converts the current NVMe driver to utilize the blk-mq layer.
I'd like to run xfstests on this, but it is failing mkfs.xfs. I honestly
don't know much about this area, but I think this may be from the recent
chunk sectors patch causing a __bio_a
On Wed, 4 Jun 2014, Jens Axboe wrote:
On 06/04/2014 12:28 PM, Keith Busch wrote:
Are you testing against 3.13? You really need the current tree for this,
otherwise I'm sure you'll run into issues (as you appear to be :-)
I'm using Matias' current tree:
git://github.com/
On Wed, 4 Jun 2014, Matias Bjørling wrote:
On 06/04/2014 12:27 AM, Keith Busch wrote:
On Tue, 3 Jun 2014, Matias Bjorling wrote:
Keith, will you take the nvmemq_wip_v6 branch for a spin? Thanks!
BTW, if you want to test this out yourself, it's pretty simple to
recreate. I just run a s
On Tue, 3 Jun 2014, Matias Bjorling wrote:
Keith, will you take the nvmemq_wip_v6 branch for a spin? Thanks!
BTW, if you want to test this out yourself, it's pretty simple to
recreate. I just run a simple user admin program sending nvme passthrough
commands in a tight loop, then run:
# echo
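For context, the "simple user admin program sending nvme passthrough commands in a tight loop" can be sketched in userspace C. This is a hypothetical reconstruction, not Keith's actual tool: the device path `/dev/nvme0`, the helper name, and the choice of Identify Controller (admin opcode 0x06, CNS=1) are assumptions for illustration; the ioctl and struct come from the kernel's NVMe passthrough UAPI.

```c
#include <stdint.h>
#include <string.h>
#include <linux/nvme_ioctl.h>   /* struct nvme_admin_cmd, NVME_IOCTL_ADMIN_CMD */

/* Build an Identify Controller admin command (opcode 0x06, CNS = 1)
 * pointing at a caller-supplied 4 KiB buffer. Helper name and layout
 * are illustrative, not taken from the original mail. */
static struct nvme_admin_cmd build_identify_ctrl_cmd(void *buf)
{
	struct nvme_admin_cmd cmd;

	memset(&cmd, 0, sizeof(cmd));
	cmd.opcode   = 0x06;                    /* Identify */
	cmd.nsid     = 0;                       /* controller-scoped, no namespace */
	cmd.addr     = (uint64_t)(uintptr_t)buf;
	cmd.data_len = 4096;
	cmd.cdw10    = 1;                       /* CNS = 1: Identify Controller */
	return cmd;
}

/* Tight-loop usage (assumes /dev/nvme0 exists; error handling elided):
 *
 *     int fd = open("/dev/nvme0", O_RDWR);
 *     char buf[4096];
 *     struct nvme_admin_cmd cmd = build_identify_ctrl_cmd(buf);
 *     for (;;)
 *         ioctl(fd, NVME_IOCTL_ADMIN_CMD, &cmd);
 */
```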
On Tue, 3 Jun 2014, Matias Bjorling wrote:
Keith, will you take the nvmemq_wip_v6 branch for a spin? Thanks!
Still fails as before:
[ 88.933881] BUG: unable to handle kernel NULL pointer dereference at 0014
[ 88.942900] IP: [] blk_mq_map_queue+0xf/0x1e
[ 88.949605] PGD 427b
768531] ---[ end trace 785048a51785f51e ]---
On Mon, 2 Jun 2014, Keith Busch wrote:
On Mon, 2 Jun 2014, Matias Bjørling wrote:
Hi Matthew and Keith,
Here is an updated patch with the feedback from the previous days. It's against
Jens' for-3.16/core tree. You may use the nvmemq_wip_
On Mon, 2 Jun 2014, Matias Bjørling wrote:
Hi Matthew and Keith,
Here is an updated patch with the feedback from the previous days. It's against
Jens' for-3.16/core tree. You may use the nvmemq_wip_review branch at:
I'm testing this on my normal hardware now. As I feared, hot removal
doesn't w
On Thu, 29 May 2014, Matias Bjørling wrote:
This converts the current NVMe driver to utilize the blk-mq layer.
I'm pretty darn sure this new nvme_remove can cause a process
with an open reference to use queues after they're freed in the
nvme_submit_sync_command path, maybe even the admin tags t
On Thu, 29 May 2014, Jens Axboe wrote:
On 2014-05-28 21:07, Keith Busch wrote:
Barring any bugs in the code, yes, this should work. On the scsi-mq
side, extensive error injection and device pulling have been done, and it
seems to hold up fine now. The ioctl path would need to be audited.
It
On Wed, 28 May 2014, Matias Bjørling wrote:
This converts the current NVMe driver to utilize the blk-mq layer.
I am concerned about device hot removal since the h/w queues can be
freed at any time. I *think* blk-mq helps with this in that the driver
will not see a new request after calling blk_
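The hot-removal ordering being discussed can be sketched roughly as below. This is a hypothetical outline, not the actual patch: the function names follow the blk-mq API of that period (blk_mq_stop_hw_queues(), blk_cleanup_queue()), and the nvme_* helpers are placeholders.

```c
/* Hypothetical removal-path ordering: stop dispatch before freeing
 * the hardware queues, so no new request can reach freed memory. */
static void nvme_remove_sketch(struct nvme_dev *dev)
{
	/* 1. Tell blk-mq to stop calling ->queue_rq on our hctxs. */
	blk_mq_stop_hw_queues(dev->queue);

	/* 2. Drain or fail requests already handed to the driver,
	 *    then tear down and free the h/w queues. */
	nvme_drain_outstanding(dev);	/* placeholder */
	nvme_free_queues(dev);		/* placeholder */

	/* 3. Finally release the request_queue itself. */
	blk_cleanup_queue(dev->queue);
}
```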
an implementation to the nvm-express driver
here so there's at least one user, assuming this is acceptable.
Keith Busch (2):
driver-core: allow asynchronous device shutdown
NVMe: Complete shutdown asynchronously
drivers/base/core.c | 4
drivers
INTx irq if performing the shutdown asynchronously.
Signed-off-by: Keith Busch
---
drivers/block/nvme-core.c | 28 ++--
include/linux/nvme.h      |  1 +
2 files changed, 27 insertions(+), 2 deletions(-)
diff --git a/drivers/block/nvme-core.c b/drivers/block/nvme-c
ld up indefinitely.
Signed-off-by: Keith Busch
---
drivers/base/core.c    | 4
include/linux/device.h | 1 +
2 files changed, 5 insertions(+)
diff --git a/drivers/base/core.c b/drivers/base/core.c
index 20da3ad..71b83bb 100644
--- a/drivers/base/core.c
+++ b/drivers/base/core.c
@@ -10,6
Looks good to me. This won't apply in linux-nvme yet and it may be a
little while before it does, so this might be considered to go upstream
through a different tree if you want this in sooner.
On Tue, 4 Mar 2014, Paul Bolle wrote:
Building nvme-core.o on 32 bit x86 triggers a rather impressive
On Fri, 28 Feb 2014, Kent Overstreet wrote:
On Thu, Feb 27, 2014 at 12:22:54PM -0500, Matthew Wilcox wrote:
On Wed, Feb 26, 2014 at 03:39:49PM -0800, Kent Overstreet wrote:
We do this by adding calls to blk_queue_split() to the various
make_request functions that need it - a few can already han
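The mechanism under discussion looks roughly like this inside a driver's make_request function. A sketch against the proposed interface, where blk_queue_split() takes the queue, a pointer to the bio, and a bio_set to allocate splits from; the driver name is illustrative:

```c
static void my_make_request(struct request_queue *q, struct bio *bio)
{
	/* Split any bio that exceeds the queue's limits; on return,
	 * 'bio' points at a front piece the driver can handle, and
	 * the remainder has been resubmitted to the block layer. */
	blk_queue_split(q, &bio, q->bio_split);

	/* ... normal per-bio processing continues here ... */
}
```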
On Thu, 20 Feb 2014, Paul Bolle wrote:
On Tue, 2014-02-18 at 10:02 +0100, Geert Uytterhoeven wrote:
And these popped up in v3.14-rc1 on 32 bit x86. This patch makes these
warnings go away. Compile tested only (on 32 and 64 bit x86).
Review is appreciated, because the code I'm touching here is
-by: Alexander Gordeev
Reviewed-by: Keith Busch
On Tue, 21 Jan 2014, Alexander Gordeev wrote:
This is an attempt to keep handling of the admin queue within a
single scope. This update also fixes an IRQ leak in case
nvme_setup_io_queues() failed to allocate enough iomem
and bailed out with -ENOMEM errno.
Signed-off-by: Alexander Gordeev
---
+static
On Mon, 20 Jan 2014, Alexander Gordeev wrote:
This is an attempt to keep handling of the admin queue within a
single scope. This update also fixes an IRQ leak in case
nvme_setup_io_queues() failed to allocate enough iomem
and bailed out with -ENOMEM errno.
This definitely seems to improve the code flow,
On Mon, 20 Jan 2014, Alexander Gordeev wrote:
This update fixes an oddity where a device was first added to
dev_list and then removed on initialization failure, instead of
being added only on success.
Signed-off-by: Alexander Gordeev
---
drivers/block/nvme-core.c | 19
On Fri, 17 Jan 2014, Bjorn Helgaas wrote:
On Fri, Jan 17, 2014 at 9:02 AM, Alexander Gordeev wrote:
If both MSI-X and MSI initialization fail, the function
irq_set_affinity_hint() is called with an uninitialized value
in dev->entry[0].vector. This update fixes the issue.
dev->entry[0].vector i
On Tue, 22 Oct 2013, Matias Bjorling wrote:
On 22-10-2013 18:55, Keith Busch wrote:
On Fri, 18 Oct 2013, Matias Bjørling wrote:
On 10/18/2013 05:13 PM, Keith Busch wrote:
On Fri, 18 Oct 2013, Matias Bjorling wrote:
The nvme driver implements itself as a bio-based driver. This is primarily
On Fri, 18 Oct 2013, Matias Bjørling wrote:
On 10/18/2013 05:13 PM, Keith Busch wrote:
On Fri, 18 Oct 2013, Matias Bjorling wrote:
The nvme driver implements itself as a bio-based driver. This is primarily
because of high lock congestion for high-performance nvm devices. To
remove the congestion
On Fri, 18 Oct 2013, Matias Bjorling wrote:
The nvme driver implements itself as a bio-based driver. This is primarily
because of high lock congestion for high-performance nvm devices. To
remove the congestion within the traditional block layer, a multi-queue
block layer is being implemented.
This
On Fri, 11 Oct 2013, Matias Bjorling wrote:
The doorbell code is repeated various places. Refactor it into its own function
for clarity.
Signed-off-by: Matias Bjorling
Looks good to me.
Reviewed-by: Keith Busch
---
drivers/block/nvme-core.c | 29 +
1 file
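The refactor amounts to centralizing the repeated MMIO write in one helper. A plausible shape for it, with illustrative names not copied from the patch:

```c
/* Ring a queue's doorbell: publish the new submission-queue tail
 * (or completion-queue head) index to the device in one MMIO write. */
static inline void nvme_ring_doorbell(u16 value, u32 __iomem *db)
{
	writel(value, db);
}
```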
On Tue, 8 Oct 2013, Jens Axboe wrote:
On Tue, Oct 08 2013, Matthew Wilcox wrote:
On Tue, Oct 08, 2013 at 11:34:20AM +0200, Matias Bjørling wrote:
The nvme driver implements itself as a bio-based driver. This is primarily because
of high lock congestion for high-performance nvm devices. To remove t
On Tue, 8 Oct 2013, Matias Bjørling wrote:
Convert the driver to blk mq.
The patch consists of:
* Initialization of mq data structures.
* Convert function calls from bio to request data structures.
* IO queues are split into an admin queue and io queues.
* bio splits are removed as it should be h
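For readers unfamiliar with the interface, the data-structure setup in such a conversion follows the usual blk-mq pattern. The sketch below uses hypothetical handler names and values; the real patch's details differ:

```c
static struct blk_mq_ops nvme_mq_ops = {
	.queue_rq  = nvme_queue_rq,	/* hypothetical: dispatch one request */
	.map_queue = blk_mq_map_queue,	/* default cpu-to-hctx mapping */
	.init_hctx = nvme_init_hctx,	/* hypothetical: bind hctx to an nvme queue */
};

/* One tag set for the I/O queues; the admin queue gets its own. */
static int nvme_setup_tag_set_sketch(struct blk_mq_tag_set *set, int nr_io_queues)
{
	set->ops          = &nvme_mq_ops;
	set->nr_hw_queues = nr_io_queues;
	set->queue_depth  = 1024;			  /* illustrative */
	set->cmd_size     = sizeof(struct nvme_cmd_info); /* per-request pdu */
	return blk_mq_alloc_tag_set(set);
	/* then, per namespace: ns->queue = blk_mq_init_queue(set); */
}
```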