On Thu, May 24, 2018 at 05:51:34PM +0800, Jianchao Wang wrote:
> result = adapter_alloc_sq(dev, qid, nvmeq);
> - if (result < 0)
> + /*
> + * If return -EINTR, it means the allocate sq command times out and is
> + * completed with NVME_REQ_CANCELLED. At the time, the
This looks fine.
Reviewed-by: Keith Busch <keith.bu...@intel.com>
Thanks, applied.
Thanks for the ping. I started a new branch, nvme-4.18-2, based off of
Jens' for-next with this being the first new commit.
I'm certain we're still missing a lot of reviewed commits. I'll try to
go through the mail history and apply by the end of the week, but any
friendly reminders would not be
On Thu, May 17, 2018 at 11:15:59AM +, Bharat Kumar Gogada wrote:
> > > Hi,
> > >
> > > As per NVME specification:
> > > 7.5.1.1 Host Software Interrupt Handling It is recommended that host
> > > software utilize the Interrupt Mask Set and Interrupt Mask Clear
> > > (INTMS/INTMC) registers to
On Wed, May 16, 2018 at 06:44:22PM -0400, Sinan Kaya wrote:
> On 5/16/2018 5:33 PM, Alexandru Gagniuc wrote:
> > AER status bits are sticky, and they survive system resets. Downstream
> > devices are usually taken care of after re-enumerating the downstream
> > busses, as the AER bits are cleared
On Wed, May 16, 2018 at 12:35:15PM +, Bharat Kumar Gogada wrote:
> Hi,
>
> As per NVME specification:
> 7.5.1.1 Host Software Interrupt Handling
> It is recommended that host software utilize the Interrupt Mask Set and
> Interrupt Mask Clear (INTMS/INTMC)
> registers to efficiently handle
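The INTMS/INTMC pattern the spec recommends can be sketched as follows. This is an illustrative stand-in, not code from the thread: the register offsets follow the NVMe controller register map, but the MMIO space and mask-state tracking are mocked so the flow is visible.

```c
#include <assert.h>
#include <stdint.h>

/* Register offsets from the NVMe controller register map:
 * INTMS at 0x0C, INTMC at 0x10. Writing a 1 bit to INTMS masks that
 * interrupt vector; writing a 1 bit to INTMC unmasks it. */
#define NVME_REG_INTMS 0x0C
#define NVME_REG_INTMC 0x10

/* Mock MMIO space standing in for the controller BAR. */
static uint32_t mock_bar[0x40];
static uint32_t irq_mask; /* effective mask state, for illustration */

static void writel(uint32_t val, int reg)
{
    mock_bar[reg / 4] = val;
    if (reg == NVME_REG_INTMS)
        irq_mask |= val;  /* set bits mask interrupts */
    else if (reg == NVME_REG_INTMC)
        irq_mask &= ~val; /* set bits unmask interrupts */
}

/* Pattern recommended by NVMe 7.5.1.1 for pin-based/INTx interrupts:
 * mask in the hard IRQ handler, reap completions, then unmask. */
static void isr_top_half(void)
{
    writel(1u << 0, NVME_REG_INTMS); /* mask vector 0 */
    /* ... schedule bottom half to process the completion queue ... */
}

static void isr_bottom_half(void)
{
    /* ... reap completion queue entries, ring the CQ doorbell ... */
    writel(1u << 0, NVME_REG_INTMC); /* unmask vector 0 */
}
```

The point of the mask/unmask bracket is that the level-triggered line cannot re-fire while completions are still being drained.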
On Fri, May 11, 2018 at 11:26:11AM -0600, Keith Busch wrote:
> I trust you know the offsets here, but it's hard to tell what this
> is doing with hard-coded addresses. Just to be safe and for clarity,
> I recommend the 'CAP_*+' with a mask.
>
> For example, disabling ASPM L1.
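The 'CAP_*+offset with a mask' form being recommended looks like this with setpci. The device address and register values here are hypothetical; Link Control sits at offset 0x10 into the PCIe capability, and its ASPM Control field occupies bits [1:0] (bit 1 = L1 enable).

```shell
# setpci's value:mask form writes only the masked bits, so the rest of
# Link Control is left alone (device address 01:00.0 is hypothetical):
#
#   setpci -s 01:00.0 CAP_EXP+0x10.w=0x0000:0x0002   # clear ASPM L1 enable
#
# The same read-modify-write, spelled out with shell arithmetic:
old=0x0043   # example Link Control value with L1 enabled (bit 1 set)
mask=0x0002  # ASPM L1 enable bit
printf '0x%04x\n' $(( old & ~mask ))
```

Compared with a raw hard-coded config offset, `CAP_EXP+0x10` keeps working even when the capability list is laid out differently from device to device.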
On Fri, May 11, 2018 at 11:57:52AM -0500, Bjorn Helgaas wrote:
> We reported several corrected errors before the nvme timeout:
>
> [12750.281158] nvme nvme0: controller is down; will reset: CSTS=0x,
> PCI_STATUS=0x10
> [12750.297594] nvme nvme0: I/O 455 QID 2 timeout, disable
and nvme_dev.
> Second, it makes it clearer what error is being passed on:
> 'return -ENODEV' vs 'goto out', where 'result' happens to be -ENODEV
>
> CC: Keith Busch <keith.bu...@intel.com>
> Signed-off-by: Alexandru Gagniuc <mr.nuke...@gmail.com>
Ah, that's just wrapping a function that has a single out. The challenge
is to f
On Mon, May 07, 2018 at 06:57:54AM +, Bharat Kumar Gogada wrote:
> Hi,
>
> Does anyone have any inputs ?
Hi,
I recall we did observe issues like this when legacy interrupts were
used, so the driver does try to use MSI/MSIx if possible.
The nvme_timeout() is called from the block layer when
Thank you, applied for the next nvme 4.17-rc.
On Thu, May 03, 2018 at 05:00:35PM +0200, Johannes Thumshirn wrote:
> After commit bb06ec31452f ("nvme: expand nvmf_check_if_ready checks")
> resetting of the loopback nvme target failed as we forgot to switch
> its state to NVME_CTRL_CONNECTING before we reconnect the admin
> queues. Therefore
On Thu, Apr 26, 2018 at 02:25:15PM -0600, Johannes Thumshirn wrote:
> Keith reported that command submission and command completion
> tracepoints have the order of the cmdid and qid fields swapped.
>
> While it isn't easily possible to change the command submission
> tracepoint, as there is a
Thanks, staged for 4.18.
On Wed, Apr 18, 2018 at 03:32:47PM +0800, Jianchao Wang wrote:
> With lockdep enabled, when trigger nvme_remove, suspicious RCU
> usage warning will be printed out.
> Fix it with adding srcu_read_lock/unlock in it.
>
> Signed-off-by: Jianchao Wang
> ---
> drivers/nvme/host/nvme.h | 9 +++--
On Tue, Apr 17, 2018 at 04:15:38PM -0600, Jens Axboe wrote:
> >> Looks good to me.
> >>
> >> Reviewed-by: Matias Bjørling
> >>
> >> Keith, when convenient can you pick this up for 4.18?
> >
> > This looks safe for 4.17-rc2, no? Unless you want to wait for the next
> > release.
>
> It should
On Tue, Apr 17, 2018 at 08:16:25AM +0200, Matias Bjørling wrote:
> On 4/17/18 3:55 AM, Wei Xu wrote:
> > Add a new lightnvm quirk to identify CNEX’s Granby controller.
> >
> > Signed-off-by: Wei Xu
> > ---
> > drivers/nvme/host/pci.c | 2 ++
> > 1 file changed, 2 insertions(+)
> >
> > diff
Thanks, applied for 4.17-rc1.
I was a little surprised git was able to apply this since the patch
format is off, but it worked!
On Thu, Apr 12, 2018 at 12:27:20PM -0400, Sinan Kaya wrote:
> On 4/12/2018 11:02 AM, Keith Busch wrote:
> >
> > Also, I thought the plan was to keep hotplug and non-hotplug the same,
> > except for the very end: if not a hotplug bridge, initiate the rescan
> > automat
On Thu, Apr 12, 2018 at 08:39:54AM -0600, Keith Busch wrote:
> On Thu, Apr 12, 2018 at 10:34:37AM -0400, Sinan Kaya wrote:
> > On 4/12/2018 10:06 AM, Bjorn Helgaas wrote:
> > >
> > > I think the scenario you are describing is two systems that are
> > >
On Thu, Apr 12, 2018 at 10:34:37AM -0400, Sinan Kaya wrote:
> On 4/12/2018 10:06 AM, Bjorn Helgaas wrote:
> >
> > I think the scenario you are describing is two systems that are
> > identical except that in the first, the endpoint is below a hotplug
> > bridge, while in the second, it's below a
On Mon, Apr 09, 2018 at 10:41:52AM -0400, Oza Pawandeep wrote:
> +static int find_dpc_dev_iter(struct device *device, void *data)
> +{
> + struct pcie_port_service_driver *service_driver;
> + struct device **dev;
> +
> + dev = (struct device **) data;
> +
> + if (device->bus ==
On Mon, Apr 09, 2018 at 10:41:53AM -0400, Oza Pawandeep wrote:
> +/**
> + * pcie_wait_for_link - Wait for link till it's active/inactive
> + * @pdev: Bridge device
> + * @active: waiting for active or inactive ?
> + *
> + * Use this to wait till link becomes active or inactive.
> + */
> +bool
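A minimal model of what a helper with this kernel-doc does: poll the Link Status register's Data Link Layer Link Active bit until it matches the requested state or a timeout expires. The mock register read and the poll budget below are stand-ins for real config-space reads and an msleep()-based timeout.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define PCI_EXP_LNKSTA_DLLLA 0x2000 /* Data Link Layer Link Active */
#define LINK_WAIT_MAX_POLLS  10

/* Mock: number of polls before the link reports active (test knob). */
static int polls_until_active;

static uint16_t read_link_status(void)
{
    if (polls_until_active > 0) {
        polls_until_active--;
        return 0;
    }
    return PCI_EXP_LNKSTA_DLLLA;
}

/* Wait until the link is active (active=true) or inactive
 * (active=false); returns true if the requested state was reached
 * before the poll budget ran out. */
static bool wait_for_link(bool active)
{
    for (int i = 0; i < LINK_WAIT_MAX_POLLS; i++) {
        bool up = (read_link_status() & PCI_EXP_LNKSTA_DLLLA) != 0;
        if (up == active)
            return true;
        /* real code would msleep() between polls */
    }
    return false;
}
```

Making the helper take the desired state as a parameter is what lets the same loop serve both the "wait for link up after reset" and "wait for link down before power-off" callers.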
On Mon, Apr 09, 2018 at 10:41:51AM -0400, Oza Pawandeep wrote:
> This patch implements generic pcie_port_find_service() routine.
>
> Signed-off-by: Oza Pawandeep <p...@codeaurora.org>
Looks good.
Reviewed-by: Keith Busch <keith.bu...@intel.com>
> Signed-off-by: Oza Pawandeep <p...@codeaurora.org>
Looks fine.
Reviewed-by: Keith Busch <keith.bu...@intel.com>
On Mon, Apr 09, 2018 at 10:41:49AM -0400, Oza Pawandeep wrote:
> This patch renames error recovery to generic name with pcie prefix
>
> Signed-off-by: Oza Pawandeep <p...@codeaurora.org>
Looks fine.
Reviewed-by: Keith Busch <keith.bu...@intel.com>
On Fri, Mar 30, 2018 at 09:04:46AM +, Eric H. Chang wrote:
> We internally call PCIe-retimer as HBA. It's not a real Host Bus Adapter that
> translates the interface from PCIe to SATA or SAS. Sorry for the confusion.
Please don't call a PCIe retimer an "HBA"! :)
While your experiment is
Thanks, I've applied the patch with a simpler changelog explaining
the bug.
On Mon, Apr 02, 2018 at 11:49:41AM -0300, Rodrigo R. Galvao wrote:
> When trying to issue write_zeroes command against TARGET with a 4K block
> size, it ends up hitting the following condition at __blkdev_issue_zeroout:
>
> if ((sector | nr_sects) & bs_mask)
> return -EINVAL;
On Sat, Mar 31, 2018 at 05:34:26PM -0500, Bjorn Helgaas wrote:
> From: Bjorn Helgaas <bhelg...@google.com>
>
> Rename from pcie-dpc.c to dpc.c. The path "drivers/pci/pcie/pcie-dpc.c"
> has more occurrences of "pci" than necessary.
>
> Signed-off-by: Bjorn Helgaas <bhelg...@google.com>
Looks good.
Acked-by: Keith Busch <keith.bu...@intel.com>
On Mon, Apr 02, 2018 at 10:47:10AM -0300, Rodrigo Rosatti Galvao wrote:
> One thing that I just forgot to explain previously, but I think its
> relevant:
>
> 1. The command is failing with 4k logical block size, but works with 512B
>
> 2. With the patch, the command is working for both 512B and
On Fri, Mar 30, 2018 at 06:18:50PM -0300, Rodrigo R. Galvao wrote:
> sector = le64_to_cpu(write_zeroes->slba) <<
> (req->ns->blksize_shift - 9);
> nr_sector = (((sector_t)le16_to_cpu(write_zeroes->length)) <<
> - (req->ns->blksize_shift - 9)) + 1;
> +
On Fri, Mar 30, 2018 at 06:18:50PM -0300, Rodrigo R. Galvao wrote:
> When trying to issue write_zeroes command against TARGET the nr_sector is
> being incremented by 1, which ends up hitting the following condition at
> __blkdev_issue_zeroout:
>
> if ((sector | nr_sects) & bs_mask)
>
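The off-by-one under discussion is easy to reproduce in isolation. The helper names below are illustrative; the fixed variant assumes the NVMe length field is 0's-based, so the +1 has to happen before the block-to-sector shift, not after it.

```c
#include <assert.h>
#include <stdint.h>

typedef uint64_t sector_t;

/* __blkdev_issue_zeroout() rejects ranges not aligned to the logical
 * block size: if ((sector | nr_sects) & bs_mask) return -EINVAL; */
static int zeroout_aligned(sector_t sector, sector_t nr_sects,
                           unsigned bs_mask)
{
    return !((sector | nr_sects) & bs_mask);
}

/* Conversion quoted above: adding 1 after the shift yields a sector
 * count that is never a whole multiple of a >512B logical block. */
static sector_t nr_sector_buggy(uint16_t length, unsigned blksize_shift)
{
    return ((sector_t)length << (blksize_shift - 9)) + 1;
}

/* Making the 0's-based count 1's-based before shifting keeps the
 * result block-aligned. */
static sector_t nr_sector_fixed(uint16_t length, unsigned blksize_shift)
{
    return ((sector_t)length + 1) << (blksize_shift - 9);
}
```

With 512B blocks (shift 9) bs_mask is 0, so the stray +1 still passes the check; that matches the observation in the thread that only the 4K target fails.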
On Wed, Mar 28, 2018 at 10:06:46AM +0200, Christoph Hellwig wrote:
> For PCIe devices the right policy is not a round robin but to use
> the pcie device closer to the node. I did a prototype for that
> long ago and the concept can work. Can you look into that and
> also make that policy used
On Wed, Mar 28, 2018 at 10:06:46AM +0200, Christoph Hellwig wrote:
> For PCIe devices the right policy is not a round robin but to use
> the pcie device closer to the node. I did a prototype for that
> long ago and the concept can work. Can you look into that and
> also make that policy used
Thanks, applied.
Thanks, applied.
Thanks, applied.
On Wed, Mar 28, 2018 at 03:57:47PM +0200, Arnd Bergmann wrote:
> @@ -2233,8 +2233,8 @@ int nvme_get_log_ext(struct nvme_ctrl *ctrl, struct
> nvme_ns *ns,
> c.get_log_page.lid = log_page;
> c.get_log_page.numdl = cpu_to_le16(dwlen & ((1 << 16) - 1));
> c.get_log_page.numdu =
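The quoted hunk splits a 32-bit dword length across the Get Log Page command's two 16-bit fields, NUMDL and NUMDU. Stripped of the cpu_to_le16() byte-swapping (elided here; these are host-endian values), the computation is:

```c
#include <assert.h>
#include <stdint.h>

/* Get Log Page carries its dword count in two 16-bit halves:
 * NUMDL holds the lower 16 bits, NUMDU the upper 16 bits. */
static void split_numd(uint32_t dwlen, uint16_t *numdl, uint16_t *numdu)
{
    *numdl = (uint16_t)(dwlen & ((1u << 16) - 1));
    *numdu = (uint16_t)(dwlen >> 16);
}
```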
On Tue, Mar 27, 2018 at 08:00:33PM +0200, Matias Bjørling wrote:
> Compiling on 32 bits system produces a warning for the shift width
> when shifting 32 bit integer with 64bit integer.
>
> Make sure that offset always is 64bit, and use macros for retrieving
> lower and upper bits of the offset.
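The fix described in this commit message can be modeled with macros in the style of the kernel's upper_32_bits()/lower_32_bits(), redefined locally here for illustration. Casting to a 64-bit type before shifting avoids the undefined 32-bit shift the 32-bit build warned about.

```c
#include <assert.h>
#include <stdint.h>

/* Widen to 64 bits first, then extract each half; a plain (n) >> 32 on
 * a 32-bit operand is what triggered the shift-width warning. */
#define lower_32_bits(n) ((uint32_t)((uint64_t)(n) & 0xffffffffu))
#define upper_32_bits(n) ((uint32_t)((uint64_t)(n) >> 32))
```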
On Wed, Mar 21, 2018 at 08:27:07PM +0100, Matias Bjørling wrote:
> Enable the lightnvm integration to use the nvme_get_log_ext()
> function.
>
> Signed-off-by: Matias Bjørling
Thanks, applied to nvme-4.17.
On Wed, Mar 21, 2018 at 11:48:09PM +0800, Ming Lei wrote:
> On Wed, Mar 21, 2018 at 01:10:31PM +0100, Marta Rybczynska wrote:
> > > On Wed, Mar 21, 2018 at 12:00:49PM +0100, Marta Rybczynska wrote:
> > >> NVMe driver uses threads for the work at device reset, including enabling
> > >> the PCIe
On Wed, Mar 21, 2018 at 03:06:05AM -0700, Matias Bjørling wrote:
> > outside of nvme core so that we can use it form lightnvm.
> >
> > Signed-off-by: Javier González
> > ---
> > drivers/lightnvm/core.c | 11 +++
> > drivers/nvme/host/core.c | 6 ++--
> >
On Wed, Mar 14, 2018 at 02:52:30PM -0600, Keith Busch wrote:
>
> Reviewed-by: Keith Busch
y INT interrupt.
>
> With current code we do not acknowledge the interrupt back in dpc_irq()
> and we get dpc interrupt storm.
>
> This patch acknowledges the interrupt in interrupt handler.
>
> Signed-off-by: Oza Pawandeep <p...@codeaurora.org>
Thanks, this looks good to me.
Reviewed-by: Keith Busch
On Mon, Mar 12, 2018 at 11:47:12PM -0400, Sinan Kaya wrote:
>
> The spec is recommending code to use "Hotplug Surprise" to differentiate
> these two cases we are looking for.
>
> The use case Keith is looking for is for hotplug support.
> The case I and Oza are more interested is for error
Thanks, applied for 4.17.
On Tue, Mar 13, 2018 at 06:45:00PM +0800, Ming Lei wrote:
> On Tue, Mar 13, 2018 at 05:58:08PM +0800, Jianchao Wang wrote:
> > Currently, adminq and ioq1 share the same irq vector which is set
> > affinity to cpu0. If a system allows cpu0 to be offlined, the adminq
> > will not be able work any
On Mon, Mar 12, 2018 at 02:47:30PM -0500, Bjorn Helgaas wrote:
> [+cc Alex]
>
> On Mon, Mar 12, 2018 at 08:25:51AM -0600, Keith Busch wrote:
> > On Sun, Mar 11, 2018 at 11:03:58PM -0400, Sinan Kaya wrote:
> > > On 3/11/2018 6:03 PM, Bjorn Helgaas wrote:
> > > >
Hi Jianchao,
The patch tests fine on all hardware I had. I'd like to queue this up
for the next 4.16-rc. Could you send a v3 with the cleanup changes Andy
suggested and a changelog aligned with Ming's insights?
Thanks,
Keith
On Mon, Mar 12, 2018 at 11:09:34AM -0700, Alexander Duyck wrote:
> On Mon, Mar 12, 2018 at 10:40 AM, Keith Busch <keith.bu...@intel.com> wrote:
> > On Mon, Mar 12, 2018 at 10:21:29AM -0700, Alexander Duyck wrote:
> >> diff --git a/include/linux/pci.h b/include/linux/pci.h