This is a performance optimization that allows the hardware to work on
a command in parallel with the kernel's stats and timeout tracking.
Signed-off-by: Keith Busch <keith.bu...@intel.com>
---
drivers/nvme/host/pci.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/d
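A minimal sketch of the ordering the description above implies (hypothetical helper name standing in for the real submission path; the actual one-line diff is truncated above): the command is handed to the device before the kernel-side bookkeeping runs.

	/* sketch only: let the device chew on the command while the CPU
	 * does the stats and timeout accounting for it */
	nvme_submit_sq_entry(nvmeq, &cmnd);	/* hypothetical: write SQ entry + ring doorbell */
	blk_mq_start_request(req);		/* stats + timeout tracking */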
When a request completion is polled, the completion task wakes itself
up. This is unnecessary, as the task can just set itself back to
running.
Signed-off-by: Keith Busch <keith.bu...@intel.com>
---
fs/block_dev.c | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)
diff --gi
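A sketch of the idea (hypothetical end_io callback, not the actual fs/block_dev.c diff): when the completion is found by the polling task itself, there is nobody to wake, so the task just marks itself runnable.

static void polled_bio_end_io_sketch(struct bio *bio)
{
	struct task_struct *waiter = bio->bi_private;

	if (waiter == current)
		__set_current_state(TASK_RUNNING);	/* polling task found its own completion */
	else
		wake_up_process(waiter);		/* completed from interrupt context */
}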
Ahh, I incorporated non-multipath disks into the mix and am observing some
trouble. Details below:
On Thu, Nov 09, 2017 at 06:44:47PM +0100, Christoph Hellwig wrote:
> +#ifdef CONFIG_NVME_MULTIPATH
> + if (ns->head->disk) {
> + sprintf(disk_name, "nvme%dc%dn%d",
. If this option is enabled only a single
> +/dev/nvneXnY device will show up for each NVMe namespaces,
Minor typo: should be /dev/nvmeXnY.
Otherwise, everything in the series looks good to me and testing on my
dual port devices hasn't found any problems.
For the whole series:
Reviewed-by: Keith Busch <keith.bu...@intel.com>
Looks good.
Reviewed-by: Keith Busch <keith.bu...@intel.com>
Looks good.
Reviewed-by: Keith Busch <keith.bu...@intel.com>
Looks good.
Reviewed-by: Keith Busch <keith.bu...@intel.com>
Looks good.
Reviewed-by: Keith Busch <keith.bu...@intel.com>
Looks good.
Reviewed-by: Keith Busch <keith.bu...@intel.com>
On Mon, Nov 06, 2017 at 10:13:24AM +0100, Christoph Hellwig wrote:
> On Sat, Nov 04, 2017 at 09:38:45AM -0600, Keith Busch wrote:
> > That's not quite right. For non-PI metadata formats, we use the
> > 'nop_profile', which gets the metadata buffer allocated so we can safely
>
On Sat, Nov 04, 2017 at 09:18:25AM +0100, Christoph Hellwig wrote:
> On Fri, Nov 03, 2017 at 09:02:04AM -0600, Keith Busch wrote:
> > If the namespace has metadata, but the request doesn't have a metadata
> > payload attached to it for whatever reason, we can't constr
just prints the same as the 'ph' format, which would look like this:
01 02 03 04 05 06 07 08
The change will make it look like this:
01-02-03-04-05-06-07-08
I think that was the original intention.
Reviewed-by: Keith Busch <keith.bu...@intel.com>
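A trivial userspace illustration of the two output formats in question (not the actual change):

#include <stdio.h>

int main(void)
{
	unsigned char id[8] = { 1, 2, 3, 4, 5, 6, 7, 8 };
	int i;

	for (i = 0; i < 8; i++)		/* old 'ph'-style output: 01 02 03 ... */
		printf("%02x%c", id[i], i < 7 ? ' ' : '\n');
	for (i = 0; i < 8; i++)		/* new output: 01-02-03-... */
		printf("%02x%c", id[i], i < 7 ? '-' : '\n');
	return 0;
}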
On Fri, Nov 03, 2017 at 01:53:40PM +0100, Christoph Hellwig wrote:
> > - if (ns && ns->ms &&
> > + if (ns->ms &&
> > (!ns->pi_type || ns->ms != sizeof(struct t10_pi_tuple)) &&
> > !blk_integrity_rq(req) && !blk_rq_is_passthrough(req))
> > return BLK_STS_NOTSUPP;
>
This is a good cleanup, and I'd support this patch going in ahead of
this series on its own if you want to apply to 4.15 immediately.
Reviewed-by: Keith Busch <keith.bu...@intel.com>
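For reference, the quoted check restated with comments (same logic as the hunk above, annotations added):

	if (ns->ms &&					/* namespace formats per-block metadata */
	    (!ns->pi_type ||				/* metadata is not protection information, */
	     ns->ms != sizeof(struct t10_pi_tuple)) &&	/* or its size is not exactly one PI tuple */
	    !blk_integrity_rq(req) &&			/* no integrity payload attached to the request */
	    !blk_rq_is_passthrough(req))		/* and it is not a passthrough command */
		return BLK_STS_NOTSUPP;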
With the series, 'modprobe -r nvme && modprobe nvme' is failing. I can
get that to pass with the following:
---
@@ -1904,7 +1907,11 @@ static void nvme_free_subsystem(struct device *dev)
static void nvme_put_subsystem(struct nvme_subsystem *subsys)
{
put_device(&subsys->dev);
+
+ if
<mwi...@suse.com>
Looks good.
Reviewed-by: Keith Busch <keith.bu...@intel.com>
groups
> and adds them to the device before sending out uevents.
>
> Signed-off-by: Martin Wilck <mwi...@suse.com>
Is NVMe the only one having this problem? Was putting our attributes in
the disk's kobj a bad choice?
Anyway, looks fine to me.
Reviewed-by: Keith Busch <keith.bu...@intel.com>
On Mon, Sep 25, 2017 at 03:40:30PM +0200, Christoph Hellwig wrote:
> The new block devices nodes for multipath access will show up as
>
> /dev/nvm-subXnZ
Just thinking ahead ... Once this goes in, someone will want to boot their
OS from a multipath target. It was a pain getting installers
> Signed-off-by: Christoph Hellwig <h...@lst.de>
Looks good.
Reviewed-by: Keith Busch <keith.bu...@intel.com>
> Note that these create the new persistent names. Overriding the existing
> nvme ones would be nicer, but while that works for the first path, the
> normal rule will override it again for each subsequent path.
>
> Signed-off-by: Christoph Hellwig <h...@lst.de>
Looks good.
Reviewed-by: Keith Busch <keith.bu...@intel.com>
to struct nvme_ns_link or similar and use the nvme_ns name for the
> new structure. But that would involve a lot of churn.
>
> Signed-off-by: Christoph Hellwig <h...@lst.de>
Looks good; I can live with 'nvme_ns_head'.
Reviewed-by: Keith Busch <keith.bu...@intel.com>
On Mon, Sep 25, 2017 at 03:40:28PM +0200, Christoph Hellwig wrote:
> This allows us to manage the various unique namespace identifiers
> together instead of needing various variables and arguments.
>
> Signed-off-by: Christoph Hellwig <h...@lst.de>
Looks good.
Reviewed-by: Kei
Ns unless
> the involved subsystems support multiple controllers.
>
> Signed-off-by: Christoph Hellwig <h...@lst.de>
Looks good.
Reviewed-by: Keith Busch <keith.bu...@intel.com>
On Mon, Sep 25, 2017 at 04:05:17PM +0200, Hannes Reinecke wrote:
> On 09/25/2017 03:50 PM, Christoph Hellwig wrote:
> > On Mon, Sep 25, 2017 at 03:47:43PM +0200, Hannes Reinecke wrote:
> >> Can't we make the multipath support invisible to the host?
> >> IE check the shared namespaces before
On Mon, Sep 18, 2017 at 04:14:53PM -0700, Christoph Hellwig wrote:
> +static void nvme_failover_req(struct request *req)
> +{
> + struct nvme_ns *ns = req->q->queuedata;
> + unsigned long flags;
> +
> + spin_lock_irqsave(&ns->head->requeue_lock, flags);
> +
On Mon, Sep 18, 2017 at 04:14:50PM -0700, Christoph Hellwig wrote:
> +static int nvme_init_subsystem(struct nvme_ctrl *ctrl, struct nvme_id_ctrl
> *id)
> +{
> + struct nvme_subsystem *subsys, *found;
> +
> + subsys = kzalloc(sizeof(*subsys), GFP_KERNEL);
> + if (!subsys)
> +
On Thu, Sep 21, 2017 at 01:52:50AM +0200, Christoph Hellwig wrote:
> I noticed the odd renaming in sysfs and thought about getting rid
> of the /dev/nvme/ directory. I just need to come up with a good
> name for the device nodes - the name can't contain /dev/nvme* as
> nvme-cli would break if it
On Thu, Sep 21, 2017 at 04:37:48PM +0200, Christoph Hellwig wrote:
> On Thu, Sep 21, 2017 at 07:22:17AM +0200, Johannes Thumshirn wrote:
> > > But head also has connotations in the SAN world. Maybe nvme_ns_chain?
> >
> > I know that's why I didn't really like it all too much in the first place
On Mon, Sep 18, 2017 at 04:14:53PM -0700, Christoph Hellwig wrote:
This is awesome! Looks great, just a minor comment:
> + sprintf(head->disk->disk_name, "nvme/ns%d", head->instance);
If you name it 'nvme/ns<#>', kobject_set_name_vargs is going to change that
'/' into a '!', so the sysfs entry
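A small userspace illustration of the mangling being pointed out (kobject names cannot contain '/', so kobject_set_name_vargs() rewrites it):

#include <stdio.h>

int main(void)
{
	char name[] = "nvme/ns1";
	char *s;

	for (s = name; *s; s++)
		if (*s == '/')
			*s = '!';	/* what kobject_set_name_vargs() does */
	printf("%s\n", name);		/* prints: nvme!ns1 */
	return 0;
}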
On Tue, Sep 19, 2017 at 03:18:45PM +, Bart Van Assche wrote:
> On Tue, 2017-09-19 at 11:07 -0400, Keith Busch wrote:
> > The problem is when blk-mq's timeout handler prevents the request from
> > completing, and doesn't leave any indication the driver requested to
>
On Tue, Sep 19, 2017 at 11:22:20PM +0800, Ming Lei wrote:
> On Tue, Sep 19, 2017 at 11:07 PM, Keith Busch <keith.bu...@intel.com> wrote:
> > On Tue, Sep 19, 2017 at 12:16:31PM +0800, Ming Lei wrote:
> >> On Tue, Sep 19, 2017 at 7:08 AM, Keith Busch <k
On Tue, Sep 19, 2017 at 12:16:31PM +0800, Ming Lei wrote:
> On Tue, Sep 19, 2017 at 7:08 AM, Keith Busch <keith.bu...@intel.com> wrote:
> >
> > Indeed that prevents .complete from running concurrently with the
> > timeout handler, but scsi_mq_done and nvme_h
On Mon, Sep 18, 2017 at 11:14:38PM +, Bart Van Assche wrote:
> On Mon, 2017-09-18 at 19:08 -0400, Keith Busch wrote:
> > On Mon, Sep 18, 2017 at 10:53:12PM +, Bart Van Assche wrote:
> > > Are you sure that scenario can happen? The blk-mq core calls
>
On Mon, Sep 18, 2017 at 10:53:12PM +, Bart Van Assche wrote:
> On Mon, 2017-09-18 at 18:39 -0400, Keith Busch wrote:
> > The nvme driver's use of blk_mq_reinit_tagset only happens during
> > controller initialisation, but I'm seeing lost commands well after that
> > dur
On Mon, Sep 18, 2017 at 10:07:58PM +, Bart Van Assche wrote:
> On Mon, 2017-09-18 at 18:03 -0400, Keith Busch wrote:
> > I think we've always known it's possible to lose a request during timeout
> > handling, but just accepted that possibility. It seems to be causing
>
. The block
layer's timeout handler will then complete the command if it observes
the started flag is no longer set.
Note it's possible to lose the command even with this patch. It's just
less likely to happen.
Signed-off-by: Keith Busch <keith.bu...@intel.com>
---
block/blk-mq.
On Tue, Sep 12, 2017 at 10:48:22PM -0700, Anish Jhaveri wrote:
> On Tue, Sep 12, 2017 at 12:00:44PM -0400, Keith Busch wrote:
> >
> > I find this patch series confusing to review. You declare these failover
> > functions in patch 1, use them in patch 2, but they're not defi
On Tue, Aug 29, 2017 at 04:55:59PM +0200, Christoph Hellwig wrote:
> On Tue, Aug 29, 2017 at 10:54:17AM -0400, Keith Busch wrote:
> > It also looks like new submissions will get a new path only from the
> > fact that the original/primary is being reset. The controller reset
> &g
On Wed, Aug 23, 2017 at 07:58:15PM +0200, Christoph Hellwig wrote:
> + /* Anything else could be a path failure, so should be retried */
> + spin_lock_irqsave(&ns->head->requeue_lock, flags);
> + blk_steal_bios(&ns->head->requeue_list, req);
> + spin_unlock_irqrestore(&ns->head->requeue_lock,
Looks good.
Reviewed-by: Keith Busch <keith.bu...@intel.com>
Looks good.
Reviewed-by: Keith Busch <keith.bu...@intel.com>
Looks good.
Reviewed-by: Keith Busch <keith.bu...@intel.com>
Looks good.
Reviewed-by: Keith Busch <keith.bu...@intel.com>
Looks good.
Reviewed-by: Keith Busch <keith.bu...@intel.com>
Looks good.
Reviewed-by: Keith Busch <keith.bu...@intel.com>
The bi_bdev field was replaced with the gendisk. This patch just fixes
an omission.
Cc: Christoph Hellwig <h...@lst.de>
Signed-off-by: Keith Busch <keith.bu...@intel.com>
---
block/bio-integrity.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/block/bio-integri
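A hedged sketch of the kind of one-liner being described (the actual diff is truncated above): with bios now carrying a gendisk instead of a block_device, bio-integrity should look the profile up through the gendisk.

-	struct blk_integrity *bi = bdev_get_integrity(bio->bi_bdev);
+	struct blk_integrity *bi = blk_get_integrity(bio->bi_disk);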
On Wed, Aug 23, 2017 at 07:58:15PM +0200, Christoph Hellwig wrote:
>
> TODO: implement sysfs interfaces for the new subsystem and
> subsystem-namespace object. Unless we can come up with something
> better than sysfs here..
Can we get symlinks from the multipath'ed nvme block device to the
Looks great. A few minor comments below.
On Wed, Aug 23, 2017 at 07:58:11PM +0200, Christoph Hellwig wrote:
> +static struct nvme_subsystem *__nvme_find_get_subsystem(const char
> *subsysnqn)
> +{
> + struct nvme_subsystem *subsys;
> +
> + lockdep_assert_held(&nvme_subsystems_lock);
> +
> +
On Thu, Aug 17, 2017 at 02:17:08PM -0600, Jens Axboe wrote:
> On 08/17/2017 02:15 PM, Keith Busch wrote:
> > On Thu, Aug 17, 2017 at 01:32:20PM -0600, Jens Axboe wrote:
> >> We currently have an issue with nvme when polling is used. Just
> >> ran some testing on
On Fri, Jul 21, 2017 at 07:07:06PM +0200, Benoit Depail wrote:
> On 07/21/17 18:07, Roger Pau Monné wrote:
> >
> > Hm, I'm not sure I follow either. AFAIK this problem came from
> > changing the Linux version in the Dom0 (where the backend, blkback is
> > running), rather than in the DomU right?
On Fri, Jul 21, 2017 at 12:19:39PM +0200, Benoit Depail wrote:
> On 07/20/17 19:36, Keith Busch wrote:
> >
> > As a test, could you throttle the xvdb queue's max_sectors_kb? If I
> > followed xen-blkfront correctly, the default should have it set to 44.
> > Try settin
On Thu, Jul 20, 2017 at 05:12:33PM +0200, Benoit Depail wrote:
>
> The main issue we are seeing is degraded write performance on storage
> devices of Xen PV DomUs, about half (or even a third on our production
> setup where NFS is involved) of what we used to have.
Read performance is unchanged?
On Sun, Jul 02, 2017 at 08:31:51AM -0700, Christoph Hellwig wrote:
> Please CC the linux-nvme list on any nvme issues. Also this
> code is getting a little too fancy for living in nvme, I think we
> need to move it into the PCI core, ensure we properly take drv->lock
> to synchronize it, and
ng
global RCU")
> Signed-off-by: Jon Derrick <jonathan.derr...@intel.com>
> Acked-by: Keith Busch <keith.bu...@intel.com>
> Cc: <sta...@vger.kernel.org> # 4.11
> ---
> drivers/pci/host/vmd.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
>
the drivers do
not have to be modified in order to be safe.
Fixes: 705cda97e ("blk-mq: Make it safe to use RCU to iterate over blk_mq_tag_set.tag_list")
Reported-by: Gabriel Krisman Bertazi <kris...@collabora.co.uk>
Reviewed-by: Bart Van Assche <bart.vanass...@sandisk.com>
On Tue, May 30, 2017 at 02:00:44PM -0300, Gabriel Krisman Bertazi wrote:
> Since the merge window for 4.12, one of the machines in Intel's CI
> started to hit the WARN_ON below at blk_mq_update_nr_hw_queues during an
> nvme_reset_work. The issue persists with the latest 4.12-rc3, and full
> dmesg
Looks good.
Reviewed-by: Keith Busch <keith.bu...@intel.com>
Looks good.
Reviewed-by: Keith Busch <keith.bu...@intel.com>
Looks good.
Reviewed-by: Keith Busch <keith.bu...@intel.com>
On Fri, May 19, 2017 at 08:52:45PM +0800, Ming Lei wrote:
> But I still think it may be better to move nvme_kill_queues() into
> nvme_remove_dead_ctrl() as an improvement because during this small
> window page cache can be used up by write application, and no writeback
> can move on meantime.
On Thu, May 18, 2017 at 11:35:43PM +0800, Ming Lei wrote:
> On Thu, May 18, 2017 at 03:49:31PM +0200, Christoph Hellwig wrote:
> > On Wed, May 17, 2017 at 09:27:29AM +0800, Ming Lei wrote:
> > > If some writeback requests are submitted just before queue is killed,
> > > and these requests may not
On Tue, May 09, 2017 at 12:15:25AM +0800, Ming Lei wrote:
> This patch looks working, but seems any 'goto out' in this function
> may have risk to cause the same race too.
The goto was really intended for handling totally broken controllers,
which isn't the case if someone requested to remove
On Wed, Apr 05, 2017 at 04:18:55PM +0200, Christoph Hellwig wrote:
> The way NVMe uses this field is entirely different from the older
> SCSI/BLOCK_PC usage, so move it into struct nvme_request.
>
> Also reduce the size of the field to an unsigned char so that we leave space
> for additional
itly remap the queues.
>
> Fixes: 4e68a011428a ("blk-mq: don't redistribute hardware queues on a CPU
> hotplug event")
> Signed-off-by: Omar Sandoval <osan...@fb.com>
This looks good to me.
Reviewed-by: Keith Busch <keith.bu...@intel.com>
On Fri, Mar 31, 2017 at 01:30:15PM -0700, Omar Sandoval wrote:
> On Fri, Mar 31, 2017 at 04:30:44PM -0400, Keith Busch wrote:
> > On Fri, Mar 31, 2017 at 11:59:24AM -0700, Omar Sandoval wrote:
> > > @@ -2629,11 +2639,12 @@ void blk_mq_update_nr_hw_queues(struct
> > &g
corruption or double allocation[1][2],
> > when doing I/O and removing NVMe device at the sametime.
>
> I agree, completing it looks bogus. If the request is in a scheduler or
> on a software queue, this won't end well at all. Looks like it was
> introduced by this patch:
>
>
unnecessarily fail them. Once the controller has been disabled,
the queues will be restarted to force remaining entered requests to end
in failure so that blk-mq's hot cpu notifier may progress.
Signed-off-by: Keith Busch <keith.bu...@intel.com>
---
drivers/nvme/host/core.
A driver may wish to take corrective action if queued requests do not
complete within a set time.
Signed-off-by: Keith Busch <keith.bu...@intel.com>
---
block/blk-mq.c | 9 +++++++++
include/linux/blk-mq.h | 2 ++
2 files changed, 11 insertions(+)
diff --git a/block/blk-mq.c b/blo
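A sketch of what the new helper boils down to (the merged form may differ slightly): a bounded wait for the queue's usage counter to drain.

int blk_mq_freeze_queue_wait_timeout(struct request_queue *q,
				     unsigned long timeout)
{
	return wait_event_timeout(q->mq_freeze_wq,
				  percpu_ref_is_zero(&q->q_usage_counter),
				  timeout);
}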
Drivers can start a freeze, so this provides a way to wait for frozen.
Signed-off-by: Keith Busch <keith.bu...@intel.com>
Reviewed-by: Christoph Hellwig <h...@lst.de>
Signed-off-by: Sagi Grimberg <s...@grimberg.me>
---
block/blk-mq.c | 3 ++-
include/linux/blk-mq.h | 1
this for several hours with fio running buffered writes
in the background and rtcwake running suspend/resume at intervals.
This succeeded with no fio errors.
Keith Busch (3):
blk-mq: Export blk_mq_freeze_queue_wait
blk-mq: Provide queue freeze wait timeout
nvme: Complete all stuck requests
block
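A hedged sketch of how a driver might combine the exported helpers during suspend (hypothetical driver code, not the actual nvme change):

static int example_ctrl_suspend(struct request_queue *q)
{
	blk_freeze_queue_start(q);		/* stop new requests from entering */
	if (!blk_mq_freeze_queue_wait_timeout(q, 5 * HZ))
		return -ETIMEDOUT;		/* stuck requests: take corrective action */
	return 0;
}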
On Tue, Feb 28, 2017 at 08:42:19AM +0100, Artur Paszkiewicz wrote:
>
> I'm observing the same thing when hibernating during mdraid resync on
> nvme - it hangs in blk_mq_freeze_queue_wait() after "Disabling non-boot
> CPUs ...".
The patch guarantees forward progress for blk-mq's hot-cpu notifier
On Mon, Feb 27, 2017 at 07:27:51PM +0200, Sagi Grimberg wrote:
> OK, I think we can get it for fabrics too, need to figure out how to
> handle it there too.
>
> Do you have a reproducer?
To repro, I have to run a buffered writer workload then put the system into S3.
This fio job seems to
On Mon, Feb 27, 2017 at 08:35:06PM +0200, Sagi Grimberg wrote:
> > On Sat, Feb 25, 2017 at 08:16:04PM +0100, Matias Bjørling wrote:
> > > On 02/25/2017 07:21 PM, Christoph Hellwig wrote:
> > > > No way in hell. vs is vendor specific and we absolutely can't overload
> > > > it with any sort of
On Mon, Feb 27, 2017 at 03:46:09PM +0200, Sagi Grimberg wrote:
> On 24/02/17 02:36, Keith Busch wrote:
> > If the block layer has entered requests and gets a CPU hot plug event
> > prior to the resume event, it will wait for those requests to exit. If
> > the nvme dr
Drivers can start a freeze, so this provides a way to wait for frozen.
Signed-off-by: Keith Busch <keith.bu...@intel.com>
---
block/blk-mq.c | 3 ++-
include/linux/blk-mq.h | 1 +
2 files changed, 3 insertions(+), 1 deletion(-)
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 9
, the driver
waits for freeze to complete before completing the controller shutdown.
On resume, the driver will unfreeze the queue for new requests to enter
once the hardware contexts are reinitialized.
Signed-off-by: Keith Busch <keith.bu...@intel.com>
---
v1 -> v2:
Simplified t
On Wed, Feb 22, 2017 at 09:47:53AM -0700, Jens Axboe wrote:
> I see, I found it now. Guys, let's get this process streamlined a bit
> more. This whole thing has been a flurry of patches and patchseries,
> posted by either you or Jon. Previous patch series was 1-4 patches
> posted by you, and then
On Fri, Feb 17, 2017 at 01:59:37PM +0100, Christoph Hellwig wrote:
> Hi all,
>
> this contains a few more OPAL-related fixups. It tones down warnings a bit,
> allocates the OPAL-specific data structure in a separate dynamic allocation,
> checks for support of Security Send/Receive in NVMe before
On Fri, Feb 17, 2017 at 01:59:41PM +0100, Christoph Hellwig wrote:
> @@ -1789,7 +1789,8 @@ static void nvme_reset_work(struct work_struct *work)
> if (result)
> goto out;
>
> - if ((dev->ctrl.oacs & NVME_CTRL_OACS_SEC_SUPP) && !dev->ctrl.opal_dev) {
> +
On Tue, Feb 07, 2017 at 05:46:58PM +0100, Christoph Hellwig wrote:
> @@ -1233,6 +1243,8 @@ static void nvme_set_queue_limits(struct nvme_ctrl
> *ctrl,
> if (ctrl->vwc & NVME_CTRL_VWC_PRESENT)
> vwc = true;
> blk_queue_write_cache(q, vwc, vwc);
> +
On Sun, Feb 05, 2017 at 05:40:23PM +0100, Christoph Hellwig wrote:
> Hi Joe,
>
> On Fri, Feb 03, 2017 at 08:58:09PM -0500, Joe Korty wrote:
> > IIRC, some years ago I ran across a customer system where
> > the #cpus_present was twice as big as #cpus_possible.
> >
> > Hyperthreading was turned
On Wed, Jan 18, 2017 at 02:21:48PM -0800, Jens Axboe wrote:
> On 01/18/2017 02:16 PM, Jens Axboe wrote:
> > On 01/18/2017 02:21 PM, Keith Busch wrote:
> >> Signed-off-by: Keith Busch <keith.bu...@intel.com>
> >> Reviewed-by: Christoph Hellwig <h...@lst.de
Signed-off-by: Keith Busch <keith.bu...@intel.com>
Reviewed-by: Christoph Hellwig <h...@lst.de>
Reviewed-by: Sagi Grimberg <s...@grimberg.me>
---
block/blk-mq.c | 1 -
1 file changed, 1 deletion(-)
diff --git a/block/blk-mq.c b/block/blk-mq.c
index a8e67a1..c3400b5 100644
--
Signed-off-by: Keith Busch <keith.bu...@intel.com>
---
block/blk-mq.c | 1 -
1 file changed, 1 deletion(-)
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 0c9a2a3..fae9651 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -916,7 +916,6 @@ bool blk_mq_dispatch_rq_list(struct blk_mq_
to something.
Signed-off-by: Keith Busch <keith.bu...@intel.com>
---
kernel/irq/affinity.c | 9 ++++++---
1 file changed, 6 insertions(+), 3 deletions(-)
diff --git a/kernel/irq/affinity.c b/kernel/irq/affinity.c
index 4544b11..b25dce0 100644
--- a/kernel/irq/affinity.c
+++ b/kern
We need to leave the block queues stopped if we're changing the tagset's
number of queues.
Signed-off-by: Keith Busch <keith.bu...@intel.com>
---
drivers/nvme/host/pci.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
The offline CPUs need to be assigned to something in case they come online
later; otherwise anyone using the mapping for things other than affinity
will have blank entries for that newly onlined CPU.
Signed-off-by: Keith Busch <keith.bu...@intel.com>
---
kernel/irq/affinity.c | 8
1 file c
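A minimal sketch of the idea (map[] and nvecs are hypothetical names, not the actual kernel/irq/affinity.c change): after spreading the online CPUs, every remaining possible CPU still gets assigned some vector so the mapping has no blank entries.

	for_each_possible_cpu(cpu) {
		if (cpumask_test_cpu(cpu, cpu_online_mask))
			continue;		/* already assigned above */
		map[cpu] = cpu % nvecs;		/* arbitrary, but never blank */
	}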
On Thu, Dec 01, 2016 at 10:53:43AM -0700, Scott Bauer wrote:
> > Maybe. I need to look at the TCG spec again (oh my good, what a fucking
> > mess), but if I remember the context if it is the whole nvme controller
> > and not just a namespace, so a block_device might be the wrong context.
> > Then
On Tue, Nov 29, 2016 at 02:52:00PM -0700, Scott Bauer wrote:
> +struct opal_dev {
> + dev_t majmin;
> + sed_sec_submit *submit_fn;
> + void *submit_data;
> + struct opal_lock_unlock lkul;
> + const opal_step *funcs;
> + void **func_data;
> + bool resume_from_suspend;
>
On Tue, Nov 29, 2016 at 02:52:01PM -0700, Scott Bauer wrote:
> +static int nvme_sec_submit(void *data, u16 spsp, u8 secp, void *buffer,
> +size_t len, bool send)
> +{
> + struct request_queue *q;
> + struct request *req;
> + struct nvme_ns *ns;
> + struct
On Thu, Nov 17, 2016 at 02:17:11PM -0800, Chaitanya Kulkarni wrote:
> From: Chaitanya Kulkarni
>
> This adds a new block layer operation to zero out a range of
> LBAs. This allows to implement zeroing for devices that don't use
> either discard with a predictable
On Wed, Nov 16, 2016 at 04:17:27PM -0700, Scott Bauer wrote:
> +int opal_unlock_from_suspend(struct opal_suspend_unlk *data)
> +{
> + const char *diskname = data->name;
> + struct opal_dev *iter, *dev = NULL;
> + struct opal_completion *completion;
> + void *func_data[3] = { NULL
On Tue, Nov 15, 2016 at 10:50:36PM -0800, Chaitanya Kulkarni wrote:
> This adds a new block layer operation to zero out a range of
> LBAs. This allows to implement zeroing for devices that don't use
> either discard with a predictable zero pattern or WRITE SAME of zeroes.
> The prominent example
Yeah, we've been depending on the values of BLK_MQ_RQ_QUEUE_[ERROR|BUSY]
not being zero without this. Looks good.
Reviewed-by: Keith Busch <keith.bu...@intel.com>
> ---
> drivers/nvme/host/core.c | 4 ++--
> drivers/nvme/host/pci.c | 8
> drivers/nvme/host/r
On Thu, Nov 10, 2016 at 04:01:31PM -0700, Scott Bauer wrote:
> On Tue, Nov 01, 2016 at 06:57:05AM -0700, Christoph Hellwig wrote:
> > blk_execute_rq_nowait is the API to use - blk_mq_insert_request isn't
> > even exported.
>
> I remember now, after I changed it to use rq_nowait, why we added this
On Wed, Nov 09, 2016 at 01:43:55AM +, Alana Alexander-Rutledge wrote:
> Hi,
>
> I have been profiling the performance of the NVMe and SAS IO stacks on Linux.
> I used blktrace and blkparse to collect block layer trace points and a
> custom analysis script on the trace points to average out
On Wed, Oct 19, 2016 at 04:51:18PM -0700, Bart Van Assche wrote:
>
> I assume that line 498 in blk-mq.c corresponds to BUG_ON(blk_queued_rq(rq))?
> Anyway, it seems to me like this is a bug in the NVMe code and also that
> this bug is completely unrelated to my patch series. In nvme_complete_rq()
Hi Bart,
I'm running linux 4.9-rc1 + linux-block/for-linus, and alternating tests
with and without this series.
Without this, I'm not seeing any problems in a link-down test while
running fio after ~30 runs.
With this series, I only see the test pass infrequently. Most of the
time I observe one
On Tue, Sep 27, 2016 at 05:25:36PM +0800, Ming Lei wrote:
> On Mon, 26 Sep 2016 19:00:30 -0400
> Keith Busch <keith.bu...@intel.com> wrote:
>
> > The only user of polling requires its original request be completed in
> > its entirety before continuing execution. I
if
the remaining transfer has not yet completed.
This patch has blk-mq return an invalid cookie if a bio requires splitting
so that polling does not occur.
Signed-off-by: Keith Busch <keith.bu...@intel.com>
---
block/blk-mq.c | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/blo
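A sketch of the described behavior (hypothetical variable names; the real block/blk-mq.c hunk is truncated above):

	if (bio_was_split)		/* the submit path had to split this bio */
		cookie = BLK_QC_T_NONE;	/* invalid cookie: the caller must not poll */
	return cookie;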