On Mon, Aug 17, 2020 at 03:50:11PM +0200, Ahmed S. Darwish wrote:
> Hello,
>
> The v5.9-rc1 commit below reliably breaks my boot on a Thinkpad e480
> laptop. PCI nvme detection fails, and the kernel is no longer able
> to find the rootfs / parse "root=".
>
> Bisecting v5.8=>v5.9-rc1 blames
On Tue, Aug 18, 2020 at 11:50:33AM +0200, Javier Gonzalez wrote:
> a number of customers are requiring the use of normal writes, which we
> want to support.
A device that supports append is completely usable for those customers,
too. There's no need to create divergence in this driver.
On Tue, Aug 18, 2020 at 07:29:12PM +0200, Javier Gonzalez wrote:
> On 18.08.2020 09:58, Keith Busch wrote:
> > On Tue, Aug 18, 2020 at 11:50:33AM +0200, Javier Gonzalez wrote:
> > > a number of customers are requiring the use of normal writes, which we
> > > want to
Following is console output.
Thanks, this looks good to me.
Reviewed-by: Keith Busch
On Mon, Aug 31, 2020 at 11:29:03AM -0400, Sasha Levin wrote:
> From: Keith Busch
>
> [ Upstream commit c41ad98bebb8f4f0335b3c50dbb7583a6149dce4 ]
>
> Zoned block devices reuse the chunk_sectors queue limit to define zone
> boundaries. If such a device happens to also
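For background, with a power-of-two zone size the boundary math reduces to
shift/mask arithmetic. A minimal illustrative sketch, not the kernel's code
(power-of-two chunk_sectors assumed, as the block layer required at the time):

/* Illustrative only: locate a sector relative to its zone. */
static inline unsigned long long zone_index(unsigned long long sector,
                                            unsigned int chunk_sectors)
{
        return sector / chunk_sectors;          /* which zone */
}

static inline unsigned long long zone_offset(unsigned long long sector,
                                             unsigned int chunk_sectors)
{
        return sector & (chunk_sectors - 1);    /* offset inside the zone */
}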
On Fri, Aug 14, 2020 at 03:14:31AM -0400, Tong Zhang wrote:
> diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
> index ba725ae47305..c4f1ce0ee1e3 100644
> --- a/drivers/nvme/host/pci.c
> +++ b/drivers/nvme/host/pci.c
> @@ -1249,8 +1249,8 @@ static enum blk_eh_timer_return
On Fri, Aug 14, 2020 at 11:37:20AM -0400, Tong Zhang wrote:
> On Fri, Aug 14, 2020 at 11:04 AM Keith Busch wrote:
> >
> > On Fri, Aug 14, 2020 at 03:14:31AM -0400, Tong Zhang wrote:
> > > diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
> > > index
: 61f3b8963097 ("nvme-pci: use unsigned for io queue depth")
> Signed-off-by: John Garry
Looks good to me.
Reviewed-by: Keith Busch
On Thu, Sep 17, 2020 at 11:32:12PM -0400, Tong Zhang wrote:
> Please correct me if I am wrong.
> After a bit more digging I found out that it is indeed a corrupted
> command_id that is causing this problem. Although the tag and command_id
> range is checked like you said, the elements in rqs cannot
On Fri, Sep 18, 2020 at 06:44:20PM +0800, Xianting Tian wrote:
> @@ -940,7 +940,9 @@ static inline void nvme_handle_cqe(struct nvme_queue
> *nvmeq, u16 idx)
> struct nvme_completion *cqe = &nvmeq->cqes[idx];
> struct request *req;
>
> - if (unlikely(cqe->command_id >= nvmeq->q_depth))
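For orientation, a hedged sketch of the completion-path validation being
discussed; queue_tagset() is a placeholder for however the queue's tag set
is reached, not a real helper:

/* Sketch only: validate a completion by range *and* by whether a
 * request is actually outstanding for that command id. */
static void handle_cqe_sketch(struct nvme_queue *nvmeq, u16 idx)
{
        struct nvme_completion *cqe = &nvmeq->cqes[idx];
        struct request *req;

        if (unlikely(cqe->command_id >= nvmeq->q_depth)) {
                dev_warn(nvmeq->dev->ctrl.device,
                         "invalid id %d completed\n", cqe->command_id);
                return;
        }

        req = blk_mq_tag_to_rq(queue_tagset(nvmeq), cqe->command_id);
        if (unlikely(!req)) {   /* id in range, but nothing outstanding */
                dev_warn(nvmeq->dev->ctrl.device,
                         "req not found for id %d\n", cqe->command_id);
                return;
        }
        /* ... hand req to the block layer for completion ... */
}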
On Thu, Oct 29, 2020 at 02:20:27AM +0000, Gloria Tsai wrote:
> Corrected the description of this bug: the SSD will not do GC after receiving
> a shutdown cmd.
> Do GC before shutdown -> delete IO Q -> shutdown from host -> breakup GC ->
> D3hot -> enter PS4 -> have a chance swap block -> use wrong
On Thu, Oct 29, 2020 at 11:33:06AM +0900, Keith Busch wrote:
> On Thu, Oct 29, 2020 at 02:20:27AM +0000, Gloria Tsai wrote:
> > Corrected the description of this bug: the SSD will not do GC after
> > receiving a shutdown cmd.
> > Do GC before shutdown -> delete IO Q -> sh
The commit subject is too long. We should really try to keep these to
50 characters or less.
nvme-pci: fix NULL req in completion handler
Otherwise, looks fine.
Reviewed-by: Keith Busch
On Wed, Feb 03, 2021 at 12:22:31PM +0100, Filippo Sironi wrote:
>
> On 2/3/21 12:15 PM, Christoph Hellwig wrote:
> >
> > On Wed, Feb 03, 2021 at 12:12:31PM +0100, Filippo Sironi wrote:
> > > I don't disagree on the first part of your sentence, this is a big
> > > oversight.
> >
> > But it is
On Thu, Jan 28, 2021 at 12:15:28PM -0500, Konrad Rzeszutek Wilk wrote:
> On Wed, Jan 27, 2021 at 04:38:28PM -0800, Jianxiong Gao wrote:
> > For devices that need to preserve address offset on mapping through
> > swiotlb, this patch adds offset preserving based on page_offset_mask
> > and keeps the
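The idea, as I read the quoted patch, in a hedged sketch; the helper and its
names are illustrative, not the swiotlb API:

/* Illustrative: keep the low-order offset bits of the original DMA
 * address when mapping into a bounce-buffer slot, so devices that
 * derive meaning from those bits (e.g. NVMe PRPs) keep working. */
static unsigned long long bounce_addr(unsigned long long orig_addr,
                                      unsigned long long slot_base,
                                      unsigned long long page_offset_mask)
{
        return slot_base + (orig_addr & page_offset_mask);
}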
On Mon, Sep 16, 2019 at 12:13:24PM +, Baldyga, Robert wrote:
> Ok, fair enough. We want to keep things hidden behind certain layers,
> and that's definitely a good thing. But there is a problem with these
> layers - they do not expose all the features. For example AFAIK there
> is no clear way
On Wed, Sep 11, 2019 at 06:42:33PM -0500, Mario Limonciello wrote:
> The action of saving the PCI state will cause numerous PCI configuration
> space reads which depending upon the vendor implementation may cause
> the drive to exit the deepest NVMe state.
>
> In these cases ASPM will typically
On Wed, Sep 18, 2019 at 03:26:11PM +0200, Christoph Hellwig wrote:
> Even if we had a use case for that the bounce buffer is just too ugly
> to live. And I'm really sick and tired of Intel wasting our time for
> their out of tree monster given that they haven't even tried helping
> to improve the
On Wed, Sep 11, 2019 at 06:42:33PM -0500, Mario Limonciello wrote:
> ---
> drivers/nvme/host/pci.c | 13 +++--
> 1 file changed, 7 insertions(+), 6 deletions(-)
>
> diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
> index 732d5b6..9b3fed4 100644
> ---
On Thu, Sep 19, 2019 at 01:47:50PM +, Bharat Kumar Gogada wrote:
> Hi All,
>
> We are testing NVMe cards on ARM64 platform, the card uses MSI-X interrupts.
> We are hitting following case in drivers/nvme/host/pci.c
> /*
> * Did we miss an interrupt?
> */
> if
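The check being quoted, in a hedged sketch; nvme_poll_cq_for_tag() stands in
for the driver's actual poll routine:

/* Sketch: before declaring a timeout, poll the completion queue in
 * case the interrupt was simply missed. */
static enum blk_eh_timer_return timeout_sketch(struct nvme_queue *nvmeq,
                                               struct request *req)
{
        /* Did we miss an interrupt? */
        if (nvme_poll_cq_for_tag(nvmeq, req->tag))
                return BLK_EH_DONE;     /* completion was already in the CQ */

        /* No: escalate to abort/reset handling. */
        return BLK_EH_RESET_TIMER;
}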
Looks good to me.
Reviewed-by: Keith Busch
Fixes: fa46c6fb5d61 ("nvme/pci: move cqe check after device shutdown")
On Mon, Jan 22, 2018 at 10:02:12PM +0100, Paul Menzel wrote:
> Dear Linux folks,
>
>
> Benchmarking the ACPI S3 suspend and resume times with `sleepgraph.py
> -config config/suspend-callgraph.cfg` [1], shows that the NVMe disk SAMSUNG
> MZVKW512HMJP-0 in the TUXEDO Book BU1406 takes between
On Mon, Jan 22, 2018 at 09:14:23PM +0100, Christoph Hellwig wrote:
> > Link: https://lkml.org/lkml/2018/1/19/68
> > Suggested-by: Keith Busch
> > Signed-off-by: Keith Busch
> > Signed-off-by: Jianchao Wang
>
> Why does this have a signoff from Keith?
Right, I
On Wed, Jan 24, 2018 at 11:29:12PM +0100, Paul Menzel wrote:
> Am 22.01.2018 um 22:30 schrieb Keith Busch:
> > The nvme spec guides toward longer times than that. I don't see the
> > point of warning users about things operating within spec.
>
> I quickly glanced over NVM
On Mon, Jan 29, 2018 at 09:55:41PM +0200, Sagi Grimberg wrote:
> > Thanks for the fix. It looks like we still have a problem, though.
> > Commands submitted with the "shutdown_lock" held need to be able to make
> > forward progress without relying on a completion, but this one could
> > block
On Tue, Jan 30, 2018 at 11:41:07AM +0800, jianchao.wang wrote:
> Another point that confuses me is whether nvme_set_host_mem is necessary
> in nvme_dev_disable?
> As the comment:
>
> /*
>* If the controller is still alive tell it to stop using the
>
> with current code we do not acknowledge the
> interrupt and we get dpc interrupt storm.
> This patch acknowledges the interrupt in interrupt handler.
>
> Signed-off-by: Oza Pawandeep
Thanks, looks good to me.
Reviewed-by: Keith Busch
> and keeps all of these symbols grouped together.
>
> Signed-off-by: Randy Dunlap
Thanks, looks good.
Reviewed-by: Keith Busch
On Wed, Feb 07, 2018 at 02:09:38PM -0600, wenxi...@linux.vnet.ibm.com wrote:
> @@ -1189,6 +1183,12 @@ static enum blk_eh_timer_return nvme_timeout(struct
> request *req, bool reserved)
> struct nvme_command cmd;
> u32 csts = readl(dev->bar + NVME_REG_CSTS);
>
> + /* If PCI error
On Thu, Feb 08, 2018 at 10:17:00PM +0800, jianchao.wang wrote:
> There is a dangerous scenario which is caused by nvme_wait_freeze in
> nvme_reset_work.
> Please consider it.
>
> nvme_reset_work
> -> nvme_start_queues
> -> nvme_wait_freeze
>
> if the controller does not respond, we have to rely on
On Sun, May 30, 2021 at 09:51:06AM +0530, Nitesh Shetty wrote:
> This removes the dependency on interrupts to wake up task. Set task
> state as TASK_RUNNING, if need_resched() returns true,
> while polling for IO completion.
> Earlier, polling task used to sleep, relying on interrupt to wake it
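A hedged sketch of that polling pattern; the queue type and
completion_ready() are placeholders:

/* Sketch: a poller that never sleeps waiting for an IRQ; if it must
 * yield, it stays TASK_RUNNING so no wakeup is ever needed. */
static int poll_for_completion(struct my_poll_queue *q)
{
        for (;;) {
                if (completion_ready(q))        /* placeholder check */
                        return 0;
                if (need_resched()) {
                        __set_current_state(TASK_RUNNING);
                        cond_resched();
                }
        }
}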
On Thu, Feb 08, 2018 at 05:56:49PM +0200, Sagi Grimberg wrote:
> Given the discussion on this set, you plan to respin again
> for 4.16?
With the exception of maybe patch 1, this needs more consideration than
I'd feel okay with for the 4.16 release.
On Fri, Feb 09, 2018 at 09:50:58AM +0800, jianchao.wang wrote:
>
> if we set NVME_REQ_CANCELLED and return BLK_EH_HANDLED as the RESETTING case,
> nvme_reset_work will hang forever, because no one could complete the entered
> requests.
Except it's no longer in the "RESETTING" case since you
over 200 iterations that used to
fail within only a few. I'd say the problem is cured. Thanks!
Tested-by: Keith Busch
On Wed, Jan 17, 2018 at 08:27:39AM -0800, Sinan Kaya wrote:
> On 1/17/2018 5:37 AM, Oza Pawandeep wrote:
> > +static bool dpc_wait_link_active(struct pci_dev *pdev)
> > +{
>
> I think you can also make this function common instead of making another copy
> here.
> Of course, this would be another
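A hedged sketch of what such a shared helper could look like, built on the
standard PCIe capability accessors (the timeout value is arbitrary):

/* Sketch: poll the Link Status register until the Data Link Layer
 * Link Active bit is set, or give up after a timeout. */
static bool wait_link_active_sketch(struct pci_dev *pdev)
{
        unsigned long timeout = jiffies + HZ;   /* ~1s, illustrative */
        u16 lnksta;

        for (;;) {
                pcie_capability_read_word(pdev, PCI_EXP_LNKSTA, &lnksta);
                if (lnksta & PCI_EXP_LNKSTA_DLLLA)
                        return true;
                if (time_after(jiffies, timeout))
                        return false;
                msleep(10);
        }
}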
Looks good.
Reviewed-by: Keith Busch
Looks good.
Reviewed-by: Keith Busch
On Thu, Jan 18, 2018 at 09:10:43AM +0100, Thomas Gleixner wrote:
> Can you please provide the output of
>
> # cat /sys/kernel/debug/irq/irqs/$ONE_I40_IRQ
# cat /sys/kernel/debug/irq/irqs/48
handler: handle_edge_irq
device: 0000:1a:00.0
status: 0x00000000
istate: 0x00000000
ddepth: 0
On Thu, Jan 18, 2018 at 11:35:59AM -0500, Sinan Kaya wrote:
> On 1/18/2018 12:32 AM, p...@codeaurora.org wrote:
> > On 2018-01-18 08:26, Keith Busch wrote:
> >> On Wed, Jan 17, 2018 at 08:27:39AM -0800, Sinan Kaya wrote:
> >>> On 1/17/2018 5:37 AM, Oza Pawande
On Thu, Jan 18, 2018 at 06:10:02PM +0800, Jianchao Wang wrote:
> + * - When the ctrl.state is NVME_CTRL_RESETTING, the expired
> + * request should come from the previous work and we handle
> + * it as nvme_cancel_request.
> + * - When the ctrl.state is
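A hedged sketch of the state-dependent handling being debated, using the
era's BLK_EH_HANDLED return code; this is the shape under discussion, not
necessarily what was merged:

/* Sketch: treat an expired request differently depending on the
 * controller state at the moment the timeout fires. */
static enum blk_eh_timer_return timeout_by_state(struct nvme_ctrl *ctrl,
                                                 struct request *req)
{
        switch (ctrl->state) {
        case NVME_CTRL_RESETTING:
                /* Left over from before the reset: mark it cancelled
                 * and let the reset path complete it. */
                nvme_req(req)->flags |= NVME_REQ_CANCELLED;
                return BLK_EH_HANDLED;
        default:
                /* Live controller: give the abort path a chance. */
                return BLK_EH_RESET_TIMER;
        }
}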
On Fri, Jan 19, 2018 at 01:55:29PM +0800, jianchao.wang wrote:
> On 01/19/2018 12:59 PM, Keith Busch wrote:
> > On Thu, Jan 18, 2018 at 06:10:02PM +0800, Jianchao Wang wrote:
> >> + * - When the ctrl.state is NVME_CTRL_RESETTING, the expired
> >> + * request sh
On Thu, Jan 18, 2018 at 06:10:00PM +0800, Jianchao Wang wrote:
> Hello
>
> Please consider the following scenario.
> nvme_reset_ctrl
> -> set state to RESETTING
> -> queue reset_work
> (scheduling)
> nvme_reset_work
> -> nvme_dev_disable
> -> quiesce queues
> ->
On Fri, Jan 19, 2018 at 04:14:02PM +0800, jianchao.wang wrote:
> On 01/19/2018 04:01 PM, Keith Busch wrote:
> > The nvme_dev_disable routine makes forward progress without depending on
> > timeout handling to complete expired commands. Once controller disabling
> > completes,
On Fri, Jan 19, 2018 at 05:02:06PM +0800, jianchao.wang wrote:
> We should not use blk_sync_queue here; the requeue_work and run_work will be
> canceled.
> Just flush_work(&q->timeout_work) should be ok.
I agree flushing timeout_work is sufficient. All the other work had
already better not be
On Fri, Jan 19, 2018 at 09:56:48PM +0800, jianchao.wang wrote:
> In nvme_dev_disable, the outstanding requests will be requeued finally.
> I'm afraid the requests requeued on the q->requeue_list will be blocked until
> another requeue
> occurs, if we cancel the requeue work before it get
This is all way over my head, but the part that obviously shows
something's gone wrong:
kworker/u674:3-1421 [028] d... 335.307051: irq_matrix_reserve_managed:
bit=56 cpu=0 online=1 avl=86 alloc=116 managed=3 online_maps=112
global_avl=22084, global_rsvd=157, total_alloc=570
On Tue, Jan 16, 2018 at 12:20:18PM +0100, Thomas Gleixner wrote:
> What we want is s/i + 1/i/
>
> That's correct because x86_vector_free_irqs() does:
>
>for (i = 0; i < nr; i++)
>
>
> So if we fail at the first irq, then the loop will do nothing. Failing on
> the
On Tue, Jan 16, 2018 at 03:28:19PM +0100, Johannes Thumshirn wrote:
> Add tracepoints for nvme command submission and completion. The tracepoints
> are modeled after SCSI's trace_scsi_dispatch_cmd_start() and
> trace_scsi_dispatch_cmd_done() tracepoints and fulfil a similar purpose,
> namely a
+ 1);
> + x86_vector_free_irqs(domain, virq, i);
> return err;
> }
>
The patch does indeed fix all the warnings and allows device binding to
succeed, albeit in a degraded performance mode. Despite that, this is
a good fix, and looks applicable to 4.4-stable, so:
Tested-by: Keith Busch
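To make the off-by-one concrete, a generic hedged sketch of the cleanup
pattern; all names here are placeholders:

/* Sketch: if initializing entry i fails, only entries [0, i) were
 * fully set up, so the unwind count must be i, not i + 1. */
static int alloc_all(struct entry *entries, int nr)
{
        int i, err;

        for (i = 0; i < nr; i++) {
                err = init_entry(&entries[i]);  /* placeholder init */
                if (err) {
                        free_entries(entries, i);       /* frees [0, i) */
                        return err;
                }
        }
        return 0;
}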
On Wed, Jan 17, 2018 at 08:34:22AM +0100, Thomas Gleixner wrote:
> Can you trace the matrix allocations from the very beginning or tell me how
> to reproduce. I'd like to figure out why this is happening.
Sure, I'll get the irq_matrix events.
I reproduce this on a machine with 112 CPUs and 3
;
> Suggested-by: James Smart
> Reviewed-by: James Smart
> Signed-off-by: Jianchao Wang
This looks fine. Thank you for your patience.
Reviewed-by: Keith Busch
On Mon, Jan 29, 2018 at 11:07:35AM +0800, Jianchao Wang wrote:
> nvme_set_host_mem will invoke nvme_alloc_request without the NOWAIT
> flag, which is unsafe for nvme_dev_disable. The adminq driver tags
> may have been used up when the previous outstanding adminq requests
> cannot be completed due to some
On Fri, May 24, 2019 at 07:45:00AM +1000, Stephen Rothwell wrote:
> Commits
>
> 5fb4aac756ac ("nvme: release namespace SRCU protection before performing
> controller ioctls")
> 90ec611adcf2 ("nvme: merge nvme_ns_ioctl into nvme_ioctl")
> 3f98bcc58cd5 ("nvme: remove the ifdef around
On Fri, May 24, 2019 at 05:22:30PM +0200, Jiri Kosina wrote:
> Hi,
>
> Something is broken in Linus' tree (4dde821e429) with respect to
> hibernation on my thinkpad x270, and it seems to be nvme related.
>
> I reliably see the warning below during hibernation, and then sometimes
> resume sort
On Thu, Jul 25, 2019 at 02:51:41AM -0700, Rafael J. Wysocki wrote:
> Hi Keith,
>
> Unfortunately,
>
> commit d916b1be94b6dc8d293abed2451f3062f6af7551
> Author: Keith Busch
> Date: Thu May 23 09:27:35 2019 -0600
>
> nvme-pci: use host managed power sta
On Tue, Jul 23, 2019 at 01:21:50PM -0700,
sathyanarayanan.kuppusw...@linux.intel.com wrote:
> From: Kuppuswamy Sathyanarayanan
>
> Currently, in native mode, DPC driver is configured to trigger DPC only
> for FATAL errors and hence it only supports port recovery for FATAL
> errors. But with
On Thu, Jul 25, 2019 at 09:48:57PM +0200, Rafael J. Wysocki wrote:
> NVME Identify Controller:
> vid : 0x1c5c
> ssvid : 0x1c5c
> sn : MS92N171312902J0N
> mn : PC401 NVMe SK hynix 256GB
> fr : 80007E00
> rab : 2
> ieee: ace42e
> cmic: 0
> mdts:
On Thu, Jul 25, 2019 at 02:28:28PM -0600, Logan Gunthorpe wrote:
>
>
> On 2019-07-25 1:58 p.m., Keith Busch wrote:
> > On Thu, Jul 25, 2019 at 11:54:18AM -0600, Logan Gunthorpe wrote:
> >>
> >>
> >> On 2019-07-25 11:50 a.m., Matthew Wilcox wrote:
>
On Wed, Jul 31, 2019 at 11:25:51PM +0200, Rafael J. Wysocki wrote:
>
> A couple of remarks if you will.
>
> First, we don't know which case is the majority at this point. For
> now, there is one example of each, but it may very well turn out that
> the SK Hynix BC501 above needs to be quirked.
On Thu, Aug 01, 2019 at 02:05:54AM -0700, Kai-Heng Feng wrote:
> at 06:33, Rafael J. Wysocki wrote:
> > On Thu, Aug 1, 2019 at 12:22 AM Keith Busch wrote:
> >
> >> In which case we do need to reintroduce the HMB handling.
> >
> > Right.
>
> The pa
On Wed, Jul 03, 2019 at 01:46:19PM -0700,
sathyanarayanan.kuppusw...@linux.intel.com wrote:
> +#ifdef CONFIG_PCI_PRI
> +static void pci_pri_init(struct pci_dev *pdev)
> +{
> + u32 max_requests;
> + int pos;
> +
> + /*
> + * As per PCIe r4.0, sec 9.3.7.11, only PF is permitted to
On Thu, Aug 01, 2019 at 02:21:07PM -0700, sathyanarayanan kuppuswamy wrote:
> On 8/1/19 2:09 PM, Keith Busch wrote:
> > Rather than surround the call to pci_pri_init() with the #ifdef, you
> > should provide an empty function implementation when CONFIG_PCI_PRI is
> > no
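The suggested structure, sketched:

/* Keep the call site unconditional; compile the body out when
 * CONFIG_PCI_PRI is disabled. */
#ifdef CONFIG_PCI_PRI
static void pci_pri_init(struct pci_dev *pdev)
{
        /* real PRI setup, as in the quoted patch */
}
#else
static inline void pci_pri_init(struct pci_dev *pdev) { }
#endif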
ot;proper" way
> sometimes this week, adding a way to shrink the AQ down to something
> like 3 (one admin request, one async event (AEN), and the empty slot)
> by making a bunch of the constants involved variables instead.
I don't feel too strongly about it. I think your patch is fine, so
Acked-by: Keith Busch
a memory notifier callback and register the memory attributes
the first time its node is brought online if it wasn't registered.
Signed-off-by: Keith Busch
---
drivers/acpi/hmat/hmat.c | 75
1 file changed, 57 insertions(+), 18 deletions(-)
diff
Instead of registering the hmat cache attributes in line with parsing
the table, save the attributes in the memory target and register them
after parsing completes. This will make it easier to register the
attributes later when hot add is supported.
Tested-by: Brice Goglin
Signed-off-by: Keith Busch
From: Keith Busch
Hi Rafael,
These are just some fixes from a while ago to work correctly with memory
node onlining, but haven't been merged in yet. I've included a fix from
Dan, but had to modify it slightly for conflicts. I think it makes most
sense for this to go through the acpi tree
: Invalid table"
...result for HMAT parsing.
Reviewed-by: Dave Hansen
Reviewed-by: Keith Busch
Acked-by: Rafael J. Wysocki
Signed-off-by: Dan Williams
---
drivers/acpi/hmat/hmat.c | 14 ++
1 file changed, 10 insertions(+), 4 deletions(-)
diff --git a/drivers/acpi/hmat/hmat.c
g
> > > like 3 (one admin request, one async event (AEN), and the empty slot)
> > > by making a bunch of the constants involved variables instead.
> >
> > I don't feel too strongly about it. I think your patch is fine, so
> >
> > Acked-by: Keith Busch
>
> Shou
On Fri, Aug 30, 2019 at 06:01:39PM -0600, Logan Gunthorpe wrote:
> To fix this, assign the subsystem's instance based on the instance
> number of the controller's instance that first created it. There should
> always be fewer subsystems than controllers so there should not be a need
> to create
SetFeatures has been
> called. This has been proven to resolve the issue across a 5000 sample
> test on previously failing disk/system combinations.
>
> Signed-off-by: Mario Limonciello
This looks good. It clashes with something I posted yesterday, but
I'll rebase after this one.
Reviewed-by: Keith Busch
On Wed, Aug 14, 2019 at 09:05:49AM -0700, Mario Limonciello wrote:
> diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
> index 8f3fbe5..47c7754 100644
> --- a/drivers/nvme/host/core.c
> +++ b/drivers/nvme/host/core.c
> @@ -2251,6 +2251,29 @@ static const struct nvme_core_quirk_entry
managed power state for suspend")
> Link: http://lists.infradead.org/pipermail/linux-nvme/2019-July/thread.html
> Signed-off-by: Mario Limonciello
> Signed-off-by: Charles Hyde
Looks fine to me.
Reviewed-by: Keith Busch
On Wed, Aug 14, 2019 at 01:14:50PM -0700, Sagi Grimberg wrote:
> Mario,
>
> Can you please respin a patch that applies cleanly on nvme-5.4?
This fixes a regression we introduced in 5.3, so it should go in
5.3-rc. For this to apply cleanly, though, we'll need to resync to Linus'
tree to get
On Fri, Oct 04, 2019 at 11:36:42AM -0400, Tyler Ramer wrote:
> Here's a failure we had which represents the issue the patch is
> intended to solve:
>
> Aug 26 15:00:56 testhost kernel: nvme nvme4: async event result 00010300
> Aug 26 15:01:27 testhost kernel: nvme nvme4: controller is down; will
On Tue, Aug 27, 2019 at 05:09:27PM +0800, Ming Lei wrote:
> On Tue, Aug 27, 2019 at 11:06:20AM +0200, Johannes Thumshirn wrote:
> > On 27/08/2019 10:53, Ming Lei wrote:
> > [...]
> > > + char *devname;
> > > + const struct cpumask *mask;
> > > + unsigned long irqflags =
upt.
>
> Cc: Long Li
> Cc: Ingo Molnar ,
> Cc: Peter Zijlstra
> Cc: Keith Busch
> Cc: Jens Axboe
> Cc: Christoph Hellwig
> Cc: Sagi Grimberg
> Cc: John Garry
> Cc: Thomas Gleixner
> Cc: Hannes Reinecke
> Cc: linux-n...@lists.infradead.org
> Cc: l
On Tue, Aug 27, 2019 at 08:34:21AM -0600, Keith Busch wrote:
> I think you should probably just have pci_irq_get_affinity() take a flags
> argument, or make a new function like __pci_irq_get_affinity() that
> pci_irq_get_affinity() can call with a default flag.
Sorry, copied the wrong
> left nodes if 'numvecs' vectors
> have been spread.
>
> Also, if the specified cpumask for one numa node is empty, simply do not
> spread vectors on this node.
>
> Cc: Christoph Hellwig
> Cc: Keith Busch
> Cc: linux-n...@lists.infradead.org,
> Cc: Jon De
On Fri, Aug 09, 2019 at 01:05:42AM -0700, Rafael J. Wysocki wrote:
> On Fri, Aug 9, 2019 at 12:16 AM Keith Busch wrote:
> >
> > The v3 series looks good to me.
> >
> > Reviewed-by: Keith Busch
> >
> > Bjorn,
> >
> > If you're okay with the ser
On Fri, Aug 09, 2019 at 12:28:43PM +0200, Lukas Wunner wrote:
> A sysfs request to enable or disable a PCIe hotplug slot should not
> return before it has been carried out. That is sought to be achieved
> by waiting until the controller's "pending_events" have been cleared.
>
> However the IRQ
> irq 33, cpu list 0-1
> irq 34, cpu list 3,5
> irq 35, cpu list 6-7
> irq 36, cpu list 8-9
> irq 37, cpu list 11,13
> irq 38, cpu list 14-15
>
> Without this patch, a kernel warning is triggered in the above situation, and
> allocation resu
On Wed, Aug 21, 2019 at 7:34 PM Ming Lei wrote:
> On Wed, Aug 21, 2019 at 04:27:00PM +, Long Li wrote:
> > Here is the command to benchmark it:
> >
> > fio --bs=4k --ioengine=libaio --iodepth=128
> >
> least 1 vector for remaining nodes if 'numvecs' vectors
> have been handled already.
>
> Also, if the specified cpumask for one numa node is empty, simply do not
> spread vectors on this node.
>
> Cc: Christoph Hellwig
> Cc: Keith Busch
> Cc: linux-n...@lists.infradead.org,
> Cc:
irq 36, cpu list 8-9
> irq 37, cpu list 11,13
> irq 38, cpu list 14-15
>
> Without this patch, a kernel warning is triggered in the above situation, and
> allocation result was supposed to be 4 vectors for each node.
>
> Cc: Christoph Hellwig
> Cc: Keith B
On Mon, Aug 19, 2019 at 04:33:45PM -0700, Ashton Holmes wrote:
> When playing certain games on my PC dmesg will start spitting out NVME
> timeout messages, this eventually results in BTRFS throwing errors and
> remounting itself as read only. The drive passes smart's health check and
> works fine
On Mon, Oct 07, 2019 at 11:13:12AM -0400, Tyler Ramer wrote:
> > Setting the shutdown to true is
> > usually just to get the queues flushed, but the nvme_kill_queues() that
> > we call accomplishes the same thing.
>
> The intention of this patch was to clean up another location where
>
On Mon, Oct 07, 2019 at 01:50:11PM -0400, Tyler Ramer wrote:
> Shutdown the controller when nvme_remove_dead_controller is
> reached.
>
> If nvme_remove_dead_controller() is called, the controller won't
> be coming back online, so we should shut it down rather than just
> disabling.
>
> Remove
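A minimal sketch of the proposed change, assuming the driver's existing
helpers:

/* Sketch: a dead controller is not coming back, so do a full
 * shutdown (second argument true) instead of a bare disable. */
static void remove_dead_ctrl_sketch(struct nvme_dev *dev)
{
        nvme_dev_disable(dev, true);
        nvme_kill_queues(&dev->ctrl);   /* fail any queued I/O */
}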
On Wed, Apr 29, 2020 at 05:20:09AM +, Williams, Dan J wrote:
> On Tue, 2020-04-28 at 08:27 -0700, David E. Box wrote:
> > On Tue, 2020-04-28 at 16:22 +0200, Christoph Hellwig wrote:
> > > On Tue, Apr 28, 2020 at 07:09:59AM -0700, David E. Box wrote:
> > > > > I'm not sure who came up with the
Signed-off-by: Dan Carpenter
Thanks, patch looks good.
Reviewed-by: Keith Busch
On Tue, Sep 24, 2019 at 11:05:36AM -0700, Sagi Grimberg wrote:
> Looks fine to me,
>
> Reviewed-by: Sagi Grimberg
>
> Keith, Christoph?
Looks good to me, too.
Reviewed-by: Keith Busch
in that
subsystem.
Reviewed-by: Logan Gunthorpe
Signed-off-by: Keith Busch
---
v1 -> v2:
Changelog: reduce sensationalism, fix spelling
drivers/nvme/host/core.c | 21 ++---
1 file changed, 10 insertions(+), 11 deletions(-)
diff --git a/drivers/nvme/host/core.c b/drivers/nvme/h
On Fri, Sep 06, 2019 at 09:48:21AM +0800, Ming Lei wrote:
> When one IRQ flood happens on one CPU:
>
> 1) softirq handling on this CPU can't make progress
>
> 2) kernel thread bound to this CPU can't make progress
>
> For example, network may require softirq to xmit packets, or another irq
>
On Fri, Sep 06, 2019 at 11:30:57AM -0700, Sagi Grimberg wrote:
>
> >
> > Ok, so the real problem is per-cpu bounded tasks.
> >
> > I share Thomas opinion about a NAPI like approach.
>
> We already have that, it's irq_poll, but it seems that for this
> use-case, we get lower performance for some
On Sat, Sep 07, 2019 at 06:19:21AM +0800, Ming Lei wrote:
> On Fri, Sep 06, 2019 at 05:50:49PM +, Long Li wrote:
> > >Subject: Re: [PATCH 1/4] softirq: implement IRQ flood detection mechanism
> > >
> > >Why are all 8 nvmes sharing the same CPU for interrupt handling?
> > >Shouldn't
On Wed, Jul 29, 2020 at 07:29:08PM +, Lach wrote:
> Hello
>
> I caught a regression in the nvme driver, which shows itself on some
> controllers (in my case, 126f:2263)
Fix is staged for the next 5.8 pull;
On Wed, Jun 24, 2020 at 09:34:08AM +0800, Baolin Wang wrote:
> OK, I understood your concern. Now we will select the RR arbitration as
> default in nvme_enable_ctrl(), but for some cases, we will not set the
> arbitration burst values from userspace, and we still want to use the
> default
On Tue, Jun 23, 2020 at 09:24:32PM +0800, Baolin Wang wrote:
> +void nvme_set_arbitration_burst(struct nvme_ctrl *ctrl)
> +{
> + u32 result;
> + int status;
> +
> + if (!ctrl->rab)
> + return;
> +
> + /*
> + * The Arbitration Burst setting indicates the maximum
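A hedged completion of the quoted helper to show where it is headed; the
rab field and the 0x7 burst mask are assumptions taken from this patch,
not the merged form:

/* Sketch: program the Arbitration feature's burst field (dword11
 * bits 2:0) to the maximum burst the controller reports (RAB). */
void nvme_set_arbitration_burst_sketch(struct nvme_ctrl *ctrl)
{
        u32 result;
        int status;

        if (!ctrl->rab)
                return;

        status = nvme_set_features(ctrl, NVME_FEAT_ARBITRATION,
                                   ctrl->rab & 0x7, NULL, 0, &result);
        if (status)
                dev_warn(ctrl->device,
                         "failed to set arbitration burst (%d)\n", status);
}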
On Tue, Jun 23, 2020 at 06:27:51PM +0200, Christoph Hellwig wrote:
> On Tue, Jun 23, 2020 at 09:24:33PM +0800, Baolin Wang wrote:
> > Introduce a new capability macro to indicate if the controller
> > supports the memory buffer or not, instead of reading the
> > NVME_REG_CMBSZ register.
>
> This
On Mon, Jul 20, 2020 at 05:01:19PM -0600, Logan Gunthorpe wrote:
> On 2020-07-20 4:35 p.m., Sagi Grimberg wrote:
>
> > passthru commands are in essence REQ_OP_DRV_IN/REQ_OP_DRV_OUT, which
> > means that the driver shouldn't need the ns at all. So if you have a
> > dedicated request queue (mapped