Compiling ide-atapi fails when the "DEBUG" macro is defined
...
|drivers/ide/ide-atapi.c:285:52: error: 'struct request' has
no member named 'cmd'; did you mean 'csd'?
| debug_log("%s: rq->cmd[0]: 0x%x\n", __func__, rq->cmd[0]);
...
Since we split the scsi_request out of struct request, this call site missed the conversion.
On 11/10/2017 4:30 PM, Ingo Molnar wrote:
* Byungchul Park wrote:
> Event C depends on event A.
> Event A depends on event B.
> Event B depends on event C.
>
> - NOTE: Precisely speaking, a dependency is one between whether a
> - waiter for an event can be woken up and whether anot
* Byungchul Park wrote:
> Event C depends on event A.
> Event A depends on event B.
> Event B depends on event C.
>
> - NOTE: Precisely speaking, a dependency is one between whether a
> - waiter for an event can be woken up and whether another waiter for
> - another event can
On 11/09/2017 06:44 PM, Christoph Hellwig wrote:
> This patch adds native multipath support to the nvme driver. For each
> namespace we create only a single block device node, which can be used
> to access that namespace through any of the controllers that refer to it.
> The gendisk for each control
On Fri, Nov 10, 2017 at 01:53:18PM +0800, Ming Lei wrote:
> On Thu, Nov 09, 2017 at 09:32:58AM -0700, Jens Axboe wrote:
> > On 11/09/2017 08:30 AM, Jens Axboe wrote:
> > > On 11/09/2017 03:00 AM, Ming Lei wrote:
> > >> On Thu, Nov 09, 2017 at 11:41:40AM +0800, Ming Lei wrote:
> > >>> On Wed, Nov 08
On Sun, Nov 05, 2017 at 08:10:08PM +0800, Ming Lei wrote:
> blk-mq never respects queue dead, and this may cause use-after-free on
> any kind of queue resources. This patch respects the rule by calling
> blk_mq_quiesce_queue() when queue is marked as DEAD.
>
> This patch fixes the following kernel
On Thu, Nov 09, 2017 at 09:32:58AM -0700, Jens Axboe wrote:
> On 11/09/2017 08:30 AM, Jens Axboe wrote:
> > On 11/09/2017 03:00 AM, Ming Lei wrote:
> >> On Thu, Nov 09, 2017 at 11:41:40AM +0800, Ming Lei wrote:
> >>> On Wed, Nov 08, 2017 at 03:48:51PM -0700, Jens Axboe wrote:
> This patch atte
On Fri, Nov 10, 2017 at 05:52:36AM +0100, Christoph Hellwig wrote:
> > If we have CMIC capabilities, we'll use the subsys->instance; if we don't
> > have CMIC, we use the ctrl->instance.
> >
> > Since the two instances are independent of each other, they can create
> > duplicate names.
> >
> > To
On Thu, Nov 09, 2017 at 04:22:17PM -0500, Mike Snitzer wrote:
> Your 0th header speaks to the NVMe multipath IO path leveraging NVMe's
> lack of partial completion but I think it'd be useful to have this
> header (that actually gets committed) speak to it.
There is a comment above blk_steal_bios t
> If we have CMIC capabilities, we'll use the subsys->instance; if we don't
> have CMIC, we use the ctrl->instance.
>
> Since the two instances are independent of each other, they can create
> duplicate names.
>
> To fix, I think we'll need to always use the subsys instance for
> consistency if CO
On Thu, 2017-11-09 at 09:32 -0700, Jens Axboe wrote:
> It's been running happily for > 1 hour now, no issues observed.
The same null_blk test runs fine on my setup. But what's weird is that if
I run the srp-test software I again see a lockup in sd_probe_async().
That happens not only with tod
If we run out of driver tags, we currently treat shared and non-shared
tags the same - both cases hook into the tag waitqueue. This is a bit
more costly than it needs to be on unshared tags, since we have to both
grab the hctx lock, and the waitqueue lock (and disable interrupts).
For the non-share
On Thu, Nov 09 2017 at 12:44pm -0500,
Christoph Hellwig wrote:
> This patch adds native multipath support to the nvme driver. For each
> namespace we create only a single block device node, which can be used
> to access that namespace through any of the controllers that refer to it.
> The gendisk
Ahh, I incorporated non-multipath disks into the mix and am observing some
trouble. Details below:
On Thu, Nov 09, 2017 at 06:44:47PM +0100, Christoph Hellwig wrote:
> +#ifdef CONFIG_NVME_MULTIPATH
> +	if (ns->head->disk) {
> +		sprintf(disk_name, "nvme%dc%dn%d", ctrl->subsys->instance
On 11/09/2017 11:49 AM, Bart Van Assche wrote:
> Hello Jens,
>
> It is known that during the resume following a hibernate, especially when
> using an md RAID1 array created on top of SCSI devices, sometimes the system
> hangs instead of resuming properly. This patch series fixes that
> problem.
Christoph,
> From: Hannes Reinecke
>
> When creating nvme multipath devices we should populate the 'slaves'
> and 'holders' directories properly to aid userspace topology detection.
Reviewed-by: Martin K. Petersen
--
Martin K. Petersen Oracle Linux Engineering
Christoph,
> From: Hannes Reinecke
>
> When creating nvme multipath devices we should populate the 'slaves'
> and 'holders' directories properly to aid userspace topology detection.
Reviewed-by: Martin K. Petersen
--
Martin K. Petersen Oracle Linux Engineering
Christoph,
> We do this by adding a helper that returns the ns_head for a device
> that can belong to either the per-controller or per-subsystem block
> device nodes, and otherwise reuse all the existing code.
Reviewed-by: Martin K. Petersen
--
Martin K. Petersen Oracle Linux Engineering
Christoph,
> This patch adds native multipath support to the nvme driver. For each
> namespace we create only a single block device node, which can be used
> to access that namespace through any of the controllers that refer to
> it. The gendisk for each controller's path to the namespace still
>
Christoph,
> Introduce a new struct nvme_ns_head that holds information about an
> actual namespace, unlike struct nvme_ns, which only holds the
> per-controller namespace information. For private namespaces there is
> a 1:1 relation of the two, but for shared namespaces this lets us
> discover
Christoph,
> This allows us to manage the various unique namespace identifiers
> together instead of needing various variables and arguments.
Reviewed-by: Martin K. Petersen
--
Martin K. Petersen Oracle Linux Engineering
Christoph,
> This adds a new nvme_subsystem structure so that we can track multiple
> controllers that belong to a single subsystem. For now we only use it
> to store the NQN, and to check that we don't have duplicate NQNs
> unless the involved subsystems support multiple controllers.
>
> Includ
The contexts from which a SCSI device can be quiesced or resumed are:
* Writing into /sys/class/scsi_device/*/device/state.
* SCSI parallel (SPI) domain validation.
* The SCSI device power management methods. See also scsi_bus_pm_ops.
It is essential during suspend and resume that neither the file
This flag will be used in the next patch to let the block layer
core know whether or not a SCSI request queue has been quiesced.
This is because a quiesced SCSI queue only processes RQF_PREEMPT requests.
Signed-off-by: Bart Van Assche
Reviewed-by: Hannes Reinecke
Tested-by: Martin Steigerwald
Tested-by:
Convert blk_get_request(q, op, __GFP_RECLAIM) into
blk_get_request_flags(q, op, BLK_MQ_PREEMPT). This patch does not
change any functionality.
Signed-off-by: Bart Van Assche
Tested-by: Martin Steigerwald
Acked-by: David S. Miller [ for IDE ]
Acked-by: Martin K. Petersen
Reviewed-by: Hannes Rei
Several block layer and NVMe core functions accept a combination
of BLK_MQ_REQ_* flags through the 'flags' argument but there is
no verification at compile time whether the right type of block
layer flags is passed. Make it possible for sparse to verify this.
This patch does not change any function
A side effect of this patch is that the GFP mask that is passed to
several allocation functions in the legacy block layer is changed
from GFP_KERNEL into __GFP_DIRECT_RECLAIM.
Signed-off-by: Bart Van Assche
Reviewed-by: Hannes Reinecke
Tested-by: Martin Steigerwald
Tested-by: Oleksandr Natalenk
Hello Jens,
It is known that during the resume following a hibernate, especially when
using an md RAID1 array created on top of SCSI devices, sometimes the system
hangs instead of resuming properly. This patch series fixes that
problem. These patches have been tested on top of the block layer f
Set RQF_PREEMPT if BLK_MQ_REQ_PREEMPT is passed to
blk_get_request_flags().
Signed-off-by: Bart Van Assche
Reviewed-by: Hannes Reinecke
Tested-by: Martin Steigerwald
Tested-by: Oleksandr Natalenko
Cc: Christoph Hellwig
Cc: Ming Lei
Cc: Johannes Thumshirn
---
block/blk-core.c | 4 +++-
From: Ming Lei
This patch makes it possible to pause request allocation for
the legacy block layer by calling blk_mq_freeze_queue() and
blk_mq_unfreeze_queue().
Signed-off-by: Ming Lei
[ bvanassche: Combined two patches into one, edited a comment and made sure
REQ_NOWAIT is handled properly i
On Wed, Nov 08, 2017 at 03:48:51PM -0700, Jens Axboe wrote:
> This patch attempts to make the case of hctx re-running on driver tag
> failure more robust. Without this patch, it's pretty easy to trigger a
> stall condition with shared tags. An example is using null_blk like
> this:
>
> modprobe nu
Hello,
I was doing some cleanup work on rbd BLKROSET handler and discovered
that we ignore partition rw/ro setting (hd_struct->policy) for pretty
much everything but straight writes.
David (CCed) has blktests patches standing by.
(Another aspect of this is that we don't enforce open(2) mode. Te
Similar to blkdev_write_iter(), return -EPERM if the partition is
read-only. This covers ioctl(), fallocate() and most in-kernel users
but isn't meant to be exhaustive -- everything else will be caught in
generic_make_request_checks(), fail with -EIO and can be fixed later.
Signed-off-by: Ilya Dr
Regular block device writes go through blkdev_write_iter(), which does
bdev_read_only(), while zeroout/discard/etc requests are never checked,
both userspace- and kernel-triggered. Add a generic catch-all check to
generic_make_request_checks() to actually enforce ioctl(BLKROSET) and
set_disk_ro(),
On Thu, Nov 09, 2017 at 06:44:47PM +0100, Christoph Hellwig wrote:
> +config NVME_MULTIPATH
> +	bool "NVMe multipath support"
> +	depends on NVME_CORE
> +	---help---
> +	  This option enables support for multipath access to NVMe
> +	  subsystems. If this option is enabled onl
This adds a new nvme_subsystem structure so that we can track multiple
controllers that belong to a single subsystem. For now we only use it
to store the NQN, and to check that we don't have duplicate NQNs unless
the involved subsystems support multiple controllers.
Includes code originally from
This patch adds native multipath support to the nvme driver. For each
namespace we create only a single block device node, which can be used
to access that namespace through any of the controllers that refer to it.
The gendisk for each controller's path to the namespace still exists
inside the kerne
We do this by adding a helper that returns the ns_head for a device that
can belong to either the per-controller or per-subsystem block device
nodes, and otherwise reuse all the existing code.
Signed-off-by: Christoph Hellwig
Reviewed-by: Keith Busch
Reviewed-by: Sagi Grimberg
Reviewed-by: Joha
Introduce a new struct nvme_ns_head that holds information about an actual
namespace, unlike struct nvme_ns, which only holds the per-controller
namespace information. For private namespaces there is a 1:1 relation of
the two, but for shared namespaces this lets us discover all the paths to
it. F
Hi all,
this series adds support for multipathing, that is, accessing nvme
namespaces through multiple controllers, to the nvme core driver.
I think we are pretty much done, with very few changes in
the last reposts. Unless I hear objections I plan to send this
to Jens tomorrow with the rem
From: Hannes Reinecke
When creating nvme multipath devices we should populate the 'slaves' and
'holders' directories properly to aid userspace topology detection.
Signed-off-by: Hannes Reinecke
[hch: split from a larger patch, compile fix for disable multipath code]
Signed-off-by: Christoph Hell
This allows us to manage the various unique namespace identifiers
together instead of needing various variables and arguments.
Signed-off-by: Christoph Hellwig
Reviewed-by: Keith Busch
Reviewed-by: Sagi Grimberg
Reviewed-by: Hannes Reinecke
---
drivers/nvme/host/core.c | 69 +++
From: Hannes Reinecke
When creating nvme multipath devices we should populate the 'slaves' and
'holders' directories properly to aid userspace topology detection.
Signed-off-by: Hannes Reinecke
[hch: split from a larger patch]
Signed-off-by: Christoph Hellwig
---
block/genhd.c | 14 +++
Then,
Reported-by: Oleksandr Natalenko
Tested-by: Oleksandr Natalenko
On Thursday, 9 November 2017 17:55:58 CET Jens Axboe wrote:
> On 11/09/2017 09:54 AM, Bart Van Assche wrote:
> > On Thu, 2017-11-09 at 07:16 +0100, Oleksandr Natalenko wrote:
> >> is this something known to you, or is it just
On 11/09/2017 09:54 AM, Bart Van Assche wrote:
> On Thu, 2017-11-09 at 07:16 +0100, Oleksandr Natalenko wrote:
>> is this something known to you, or is it just my fault applying this series
>> to v4.13? Apart from this warning, suspend/resume works for me:
>>
>> [ 27.383846] sd 0:0:0:0: [
On Thu, 2017-11-09 at 07:16 +0100, Oleksandr Natalenko wrote:
> is this something known to you, or is it just my fault applying this series
> to v4.13? Apart from this warning, suspend/resume works for me:
>
> [ 27.383846] sd 0:0:0:0: [sda] Starting disk
> [ 27.383976] sd 1:0:0:0: [sdb]
On 11/09/2017 08:30 AM, Jens Axboe wrote:
> On 11/09/2017 03:00 AM, Ming Lei wrote:
>> On Thu, Nov 09, 2017 at 11:41:40AM +0800, Ming Lei wrote:
>>> On Wed, Nov 08, 2017 at 03:48:51PM -0700, Jens Axboe wrote:
This patch attempts to make the case of hctx re-running on driver tag
failure mo
>> To allow lockless path lookup the list of nvme_ns structures per
>> nvme_ns_head is protected by SRCU, which requires freeing the nvme_ns
>> structure through call_srcu.
>
> Can you remind me why isn't rcu sufficient? Can looking up a
> path (ns from head->list) block?

blk_mq_make_request can block.
O
On 11/9/2017 5:20 PM, Tony Yang wrote:
Hi, All
I downloaded the nvme multipath kernel; the kernel version is
4.14 and I encountered a problem. I use the Mellanox ConnectX-3 InfiniBand
driver, but the 4.14 kernel version is too new to install the InfiniBand
driver. Has anyone encountered the same situation?
Tony,
2017-11-09 16:20 GMT+01:00 Tony Yang :
> Hi, All
> I downloaded the nvme multipath kernel; the kernel version is
> 4.14 and I encountered a problem. I use the Mellanox ConnectX-3 InfiniBand
> driver, but the 4.14 kernel version is too new to install the InfiniBand
> driver. Has anyone encountered the same situation?
Hi Tony,
Hi, All
I downloaded the nvme multipath kernel; the kernel version is
4.14 and I encountered a problem. I use the Mellanox ConnectX-3 InfiniBand
driver, but the 4.14 kernel version is too new to install the InfiniBand
driver. Has anyone encountered the same situation?
How to solve?
On Thu, Nov 9, 2017 at 11:42 AM, Adrian Hunter wrote:
> On 08/11/17 10:54, Linus Walleij wrote:
>> On Fri, Nov 3, 2017 at 2:20 PM, Adrian Hunter
>> wrote:
>> At least you could do what I did and break out a helper like
>> this:
>>
>> /*
>> * This reports status back to the block layer for a fi
On Thu, Nov 09, 2017 at 04:44:32PM +0100, Hannes Reinecke wrote:
> - We don't have the topology information in sysfs;
We have all the topology information in sysfs, but you seem to look
for the wrong thing.
> while the namespace
> device has the 'slaves' and 'holders' directories, they remain emp
On 11/02/2017 07:30 PM, Christoph Hellwig wrote:
> This patch adds native multipath support to the nvme driver. For each
> namespace we create only a single block device node, which can be used
> to access that namespace through any of the controllers that refer to it.
> The gendisk for each control
On 09/11/17 14:34, Linus Walleij wrote:
> On Thu, Nov 9, 2017 at 8:27 AM, Adrian Hunter wrote:
>> On 08/11/17 11:28, Linus Walleij wrote:
>>> On Fri, Nov 3, 2017 at 2:20 PM, Adrian Hunter
>>> wrote:
>>>
For blk-mq, add support for completing requests directly in the ->done
callback. Th
On 11/09/2017 03:00 AM, Ming Lei wrote:
> On Thu, Nov 09, 2017 at 11:41:40AM +0800, Ming Lei wrote:
>> On Wed, Nov 08, 2017 at 03:48:51PM -0700, Jens Axboe wrote:
>>> This patch attempts to make the case of hctx re-running on driver tag
>>> failure more robust. Without this patch, it's pretty easy
On 09/11/17 15:36, Ulf Hansson wrote:
> On 3 November 2017 at 14:20, Adrian Hunter wrote:
>> card_busy_detect() doesn't set a correct timeout, and it doesn't take care
>> of error status bits. Stop using it for blk-mq.
>
> I think this changelog isn't very descriptive. Could you please work
> on
Hi, All
I downloaded the nvme multipath kernel; the kernel version is
4.14 and I encountered a problem. I use the Mellanox ConnectX-3 InfiniBand
driver, but the 4.14 kernel version is too new to install the InfiniBand
driver. Has anyone encountered the same situation?
How to solve? Thank you.
UFS partitions from newer versions of FreeBSD 10 and 11 use relative addressing
for their subpartitions. But older versions of FreeBSD still use absolute
addressing just like OpenBSD and NetBSD.
Instead of simply testing for a FreeBSD partition, the code needs to also
test if the starting offset
On 09/11/17 15:41, Ulf Hansson wrote:
> On 3 November 2017 at 14:20, Adrian Hunter wrote:
>> From: Venkat Gopalakrishnan
>>
>> This patch adds CMDQ support for command-queue compatible
>> hosts.
>>
>> Command queue is added in eMMC-5.1 specification. This
>> enables the controller to process up to
Some blkcg policies may not implement all operations in struct blkcg_policy,
so the code contains many "if (pol->xxx)" checks; add wrappers for these pol->xxx_fn callbacks.
Signed-off-by: weiping zhang
---
block/blk-cgroup.c | 55 +--
include/linux/blk-cgroup.h | 72 ++
On 3 November 2017 at 14:20, Adrian Hunter wrote:
> From: Venkat Gopalakrishnan
>
> This patch adds CMDQ support for command-queue compatible
> hosts.
>
> Command queue is added in eMMC-5.1 specification. This
> enables the controller to process up to 32 requests at
> a time.
>
> Adrian Hunter con
On 3 November 2017 at 14:20, Adrian Hunter wrote:
> Add CQHCI initialization and implement CQHCI operations for Intel GLK.
>
> Signed-off-by: Adrian Hunter
This looks good to me!
Kind regards
Uffe
> ---
> drivers/mmc/host/Kconfig | 1 +
> drivers/mmc/host/sdhci-pci-core.c | 155
>
On 3 November 2017 at 14:20, Adrian Hunter wrote:
> card_busy_detect() doesn't set a correct timeout, and it doesn't take care
> of error status bits. Stop using it for blk-mq.
I think this changelog isn't very descriptive. Could you please work
on that for the next version.
>
> Signed-off-by: A
On Thu, Nov 09, 2017 at 02:59:50PM +0200, Sagi Grimberg wrote:
>
>>>> To allow lockless path lookup the list of nvme_ns structures per
>>>> nvme_ns_head is protected by SRCU, which requires freeing the nvme_ns
>>>> structure through call_srcu.
>>>
>>> Can you remind me why isn't rcu sufficient? Can
On 09/11/17 15:07, Ulf Hansson wrote:
> On 3 November 2017 at 14:20, Adrian Hunter wrote:
>> For blk-mq, add support for completing requests directly in the ->done
>> callback. That means that error handling and urgent background operations
>> must be handled by recovery_work in that case.
>
> As
On 3 November 2017 at 14:20, Adrian Hunter wrote:
> For blk-mq, add support for completing requests directly in the ->done
> callback. That means that error handling and urgent background operations
> must be handled by recovery_work in that case.
As the mmc docs suck, I think it's important tha
On 09/11/17 14:52, Linus Walleij wrote:
> On Thu, Nov 9, 2017 at 8:56 AM, Adrian Hunter wrote:
>> On 08/11/17 11:30, Linus Walleij wrote:
>>> On Fri, Nov 3, 2017 at 2:20 PM, Adrian Hunter
>>> wrote:
>>>
Recovery is simpler to understand if it is only used for errors. Create a
separate function for card polling.
> Any reason to do all this before we know if we found an existing subsystem?
We'd either have to do all the initialization including the memory
allocation and ida_simple_get under nvme_subsystems_lock, or search
the list first, then allocate, then search again.
Given that the not found case is
>> To allow lockless path lookup the list of nvme_ns structures per
>> nvme_ns_head is protected by SRCU, which requires freeing the nvme_ns
>> structure through call_srcu.
>
> Can you remind me why isn't rcu sufficient? Can looking up a
> path (ns from head->list) block?

blk_mq_make_request can block.
O
On 09/11/17 14:26, Linus Walleij wrote:
> On Wed, Nov 8, 2017 at 3:14 PM, Adrian Hunter wrote:
>> On 08/11/17 11:22, Linus Walleij wrote:
>>> On Fri, Nov 3, 2017 at 2:20 PM, Adrian Hunter
>>> wrote:
>
>>> (...)
>
>>>> +EXPORT_SYMBOL(cqhci_resume);
>>>
>>> Why would the CQE case require special
On Thu, Nov 09, 2017 at 01:33:16PM +0200, Sagi Grimberg wrote:
> Any reason to do all this before we know if we found an existing subsystem?
We'd either have to do all the initialization including the memory
allocation and ida_simple_get under nvme_subsystems_lock, or search
the list first, then
On Thu, Nov 9, 2017 at 8:56 AM, Adrian Hunter wrote:
> On 08/11/17 11:30, Linus Walleij wrote:
>> On Fri, Nov 3, 2017 at 2:20 PM, Adrian Hunter
>> wrote:
>>
>>> Recovery is simpler to understand if it is only used for errors. Create a
>>> separate function for card polling.
>>>
>>> Signed-off-by
On Thu, Nov 09, 2017 at 01:37:43PM +0200, Sagi Grimberg wrote:
>
>> To allow lockless path lookup the list of nvme_ns structures per
>> nvme_ns_head is protected by SRCU, which requires freeing the nvme_ns
>> structure through call_srcu.
>
> Can you remind me why isn't rcu sufficient? Can looking u
On Thu, Nov 9, 2017 at 8:43 AM, Adrian Hunter wrote:
> On 08/11/17 11:38, Linus Walleij wrote:
>> On Fri, Nov 3, 2017 at 2:20 PM, Adrian Hunter
>> wrote:
>>
>>> There are only a few things the recovery needs to do. Primarily, it just
>>> needs to:
>>> Determine the number of bytes transf
On 09/11/17 14:04, Linus Walleij wrote:
> On Wed, Nov 8, 2017 at 2:20 PM, Adrian Hunter wrote:
>> On 08/11/17 11:00, Linus Walleij wrote:
>
>>> This and other bits gives me the feeling CQE is now actually ONLY
>>> working on the MQ path.
>>
>> I was not allowed to support non-mq.
>
> Fair enough
On Thu, Nov 9, 2017 at 8:27 AM, Adrian Hunter wrote:
> On 08/11/17 11:28, Linus Walleij wrote:
>> On Fri, Nov 3, 2017 at 2:20 PM, Adrian Hunter
>> wrote:
>>
>>> For blk-mq, add support for completing requests directly in the ->done
>>> callback. That means that error handling and urgent backgrou
On Wed, Nov 8, 2017 at 3:14 PM, Adrian Hunter wrote:
> On 08/11/17 11:22, Linus Walleij wrote:
>> On Fri, Nov 3, 2017 at 2:20 PM, Adrian Hunter
>> wrote:
>> (...)
>>> +EXPORT_SYMBOL(cqhci_resume);
>>
>> Why would the CQE case require special suspend/resume
>> functionality?
>
> Seems like a ve
On Wed, Nov 8, 2017 at 2:20 PM, Adrian Hunter wrote:
> On 08/11/17 11:00, Linus Walleij wrote:
>> This and other bits gives me the feeling CQE is now actually ONLY
>> working on the MQ path.
>
> I was not allowed to support non-mq.
Fair enough.
>> That is good. We only add new functionality on
> To allow lockless path lookup the list of nvme_ns structures per
> nvme_ns_head is protected by SRCU, which requires freeing the nvme_ns
> structure through call_srcu.

Can you remind me why isn't rcu sufficient? Can looking up a
path (ns from head->list) block?
+static int nvme_init_subsystem(struct nvme_ctrl *ctrl, struct nvme_id_ctrl *id)
+{
+	struct nvme_subsystem *subsys, *found;
+	int ret;
+
+	subsys = kzalloc(sizeof(*subsys), GFP_KERNEL);
+	if (!subsys)
+		return -ENOMEM;
+	ret = ida_simple_get(&nvme_sub
On 08/11/17 10:54, Linus Walleij wrote:
> On Fri, Nov 3, 2017 at 2:20 PM, Adrian Hunter wrote:
>
>> Define and use a blk-mq queue. Discards and flushes are processed
>> synchronously, but reads and writes asynchronously. In order to support
>> slow DMA unmapping, DMA unmapping is not done until a
On Thu, Nov 09, 2017 at 11:41:40AM +0800, Ming Lei wrote:
> On Wed, Nov 08, 2017 at 03:48:51PM -0700, Jens Axboe wrote:
> > This patch attempts to make the case of hctx re-running on driver tag
> > failure more robust. Without this patch, it's pretty easy to trigger a
> > stall condition with share
On Wed, Nov 08, 2017 at 12:18:32PM +0100, Hannes Reinecke wrote:
> On 11/08/2017 09:54 AM, Christoph Hellwig wrote:
> > Can I get a review for this one? The only changes vs the previously
> > reviewed versions is that we don't use the multipath code at all for
> > subsystems that aren't multiporte
On Tue, Nov 7, 2017 at 4:38 PM, Yu Chen wrote:
> Hi all,
> We are using 4.13.5-100.fc25.x86_64 and a panic was found during
> resume from hibernation, the backtrace is illustrated as below, would
> someone please take a look if this has already been fixed or is this issue
> still
> in the upstrea