On Fri, Dec 11, 2020 at 07:21:38PM +0530, SelvaKumar S wrote:
> +int blk_copy_emulate(struct block_device *bdev, struct blk_copy_payload
> *payload,
> + gfp_t gfp_mask)
> +{
> + struct request_queue *q = bdev_get_queue(bdev);
> + struct bio *bio;
> + void *buf = NULL;
> +
On Thu, Mar 25, 2021 at 09:48:37AM +, Niklas Cassel wrote:
> From: Niklas Cassel
>
> When a passthru command targets a specific namespace, the ns parameter to
> nvme_user_cmd()/nvme_user_cmd64() is set. However, there is currently no
> validation that the nsid specified in the passthru comman
On Tue, Mar 30, 2021 at 10:34:25AM -0700, Sagi Grimberg wrote:
>
> > > It is, but in this situation, the controller is sending a second
> > > completion that results in a use-after-free, which makes the
> > > transport irrelevant. Unless there is some other flow (which is
> > > unclear
> > > to me
For the subject, s/superflues/superfluous
A 'false' return means the value was safely set, so the comment should
say 'true' for when it is not considered safe.
Cc: Jason Gunthorpe
Cc: Kees Cook
Signed-off-by: Keith Busch
---
include/linux/overflow.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff -
On Thu, Apr 01, 2021 at 12:00:39PM +0300, Dan Carpenter wrote:
> Hi Keith,
>
> I've been trying to figure out ways Smatch can check for device managed
> resources. Like adding rules that if we call dev_set_name(&foo->bar)
> then it's device managed and if there is a kfree(foo) without calling
>
On Mon, Apr 05, 2021 at 11:42:31PM +, Chuck Lever III wrote:
> > On Apr 5, 2021, at 4:07 PM, Jason Gunthorpe wrote:
> > On Mon, Apr 05, 2021 at 03:41:15PM +0200, Christoph Hellwig wrote:
> >> On Mon, Apr 05, 2021 at 08:23:54AM +0300, Leon Romanovsky wrote:
> >>> From: Leon Romanovsky
> >>>
>
On Sat, Feb 20, 2021 at 06:01:56PM +, David Laight wrote:
> From: SelvaKumar S
> > Sent: 19 February 2021 12:45
> >
> > This patchset tries to add support for TP4065a ("Simple Copy Command"),
> > v2020.05.04 ("Ratified")
> >
> > The Specification can be found in following link.
> > https://nv
On Sat, Feb 20, 2021 at 05:10:18PM +0800, Yang Li wrote:
> fixed the following coccicheck:
> ./drivers/nvme/host/core.c:3440:60-61: WARNING opportunity for
> kobj_to_dev()
> ./drivers/nvme/host/core.c:3679:60-61: WARNING opportunity for
> kobj_to_dev()
>
> Reported-by: Abaci Robot
> Signed-off-by
On Wed, Dec 09, 2020 at 06:32:27PM -0300, Enzo Matsumiya wrote:
> +void nvme_hwmon_exit(struct nvme_ctrl *ctrl)
> +{
> + hwmon_device_unregister(ctrl->dev);
> +}
The hwmon registration uses the devm_ version, so don't we need to use
the devm_hwmon_device_unregister() here?
On Thu, Nov 19, 2020 at 10:59:19AM -0800, Tom Roeder wrote:
> This patch changes the NVMe PCI implementation to cache host_mem_descs
> in non-DMA memory instead of depending on descriptors stored in DMA
> memory. This change is needed under the malicious-hypervisor threat
> model assumed by the AMD
On Fri, Nov 20, 2020 at 09:02:43AM +0100, Christoph Hellwig wrote:
> On Thu, Nov 19, 2020 at 05:27:37PM -0800, Tom Roeder wrote:
> > This patch changes the NVMe PCI implementation to cache host_mem_descs
> > in non-DMA memory instead of depending on descriptors stored in DMA
> > memory. This change
On Fri, Dec 04, 2020 at 11:25:12AM +, Damien Le Moal wrote:
> On 2020/12/04 20:02, SelvaKumar S wrote:
> > This patchset tries to add support for TP4065a ("Simple Copy Command"),
> > v2020.05.04 ("Ratified")
> >
> > The Specification can be found in following link.
> > https://nvmexpress.org/w
On Tue, Dec 01, 2020 at 11:09:49AM +0530, SelvaKumar S wrote:
> +static void nvme_config_copy(struct gendisk *disk, struct nvme_ns *ns,
> +struct nvme_id_ns *id)
> +{
> + struct nvme_ctrl *ctrl = ns->ctrl;
> + struct request_queue *queue = disk->queue;
>
On Mon, Mar 01, 2021 at 02:55:30PM +0100, Hannes Reinecke wrote:
> On 3/1/21 2:26 PM, Daniel Wagner wrote:
> > On Sat, Feb 27, 2021 at 02:19:01AM +0900, Keith Busch wrote:
> >> Crashing is bad, silent data corruption is worse. Is there truly no
> >> defense against that
On Wed, Mar 10, 2021 at 02:41:10PM +0100, Christoph Hellwig wrote:
> On Wed, Mar 10, 2021 at 02:21:56PM +0100, Christoph Hellwig wrote:
> > Can you try this patch instead?
> >
> > http://lists.infradead.org/pipermail/linux-nvme/2021-February/023183.html
>
> Actually, please try the patch below in
On Mon, Mar 01, 2021 at 05:53:25PM +0100, Hannes Reinecke wrote:
> On 3/1/21 5:05 PM, Keith Busch wrote:
> > On Mon, Mar 01, 2021 at 02:55:30PM +0100, Hannes Reinecke wrote:
> > > On 3/1/21 2:26 PM, Daniel Wagner wrote:
> > > > On Sat, Feb 27, 2021 at 02:19
On Tue, Mar 02, 2021 at 08:18:40AM +0100, Hannes Reinecke wrote:
> On 3/1/21 9:59 PM, Keith Busch wrote:
> > On Mon, Mar 01, 2021 at 05:53:25PM +0100, Hannes Reinecke wrote:
> >> On 3/1/21 5:05 PM, Keith Busch wrote:
> >>> On Mon, Mar 01, 2021 at 02:55:30PM +0100, Ha
Looks good to me. This won't apply in linux-nvme yet, and it may be a
little while before it does, so consider sending this upstream through a
different tree if you want it in sooner.
On Tue, 4 Mar 2014, Paul Bolle wrote:
Building nvme-core.o on 32 bit x86 triggers a rather impressive
On Fri, 28 Feb 2014, Kent Overstreet wrote:
On Thu, Feb 27, 2014 at 12:22:54PM -0500, Matthew Wilcox wrote:
On Wed, Feb 26, 2014 at 03:39:49PM -0800, Kent Overstreet wrote:
We do this by adding calls to blk_queue_split() to the various
make_request functions that need it - a few can already han
-by: Alexander Gordeev
Reviewed-by: Keith Busch
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/
On Thu, 20 Feb 2014, Paul Bolle wrote:
On Tue, 2014-02-18 at 10:02 +0100, Geert Uytterhoeven wrote:
And these popped up in v3.14-rc1 on 32 bit x86. This patch makes these
warnings go away. Compile tested only (on 32 and 64 bit x86).
Review is appreciated, because the code I'm touching here is
On Tue, 21 Jan 2014, Alexander Gordeev wrote:
This is an attempt to make handling of the admin queue in a
single scope. This update also fixes an IRQ leak in case
nvme_setup_io_queues() failed to allocate enough iomem
and bailed out with -ENOMEM errno.
Signed-off-by: Alexander Gordeev
---
+static
On Fri, 18 Oct 2013, Matias Bjorling wrote:
The nvme driver implements itself as a bio-based driver. This primarily
because of high lock congestion for high-performance nvm devices. To
remove the congestion within the traditional block layer, a multi-queue
block layer is being implemented.
This
On Fri, 11 Oct 2013, Matias Bjorling wrote:
The doorbell code is repeated various places. Refactor it into its own function
for clarity.
Signed-off-by: Matias Bjorling
Looks good to me.
Reviewed-by: Keith Busch
---
drivers/block/nvme-core.c | 29 +
1 file
On Fri, 17 Jan 2014, Bjorn Helgaas wrote:
On Fri, Jan 17, 2014 at 9:02 AM, Alexander Gordeev wrote:
In case MSI-X and MSI initialization failed the function
irq_set_affinity_hint() is called with uninitialized value
in dev->entry[0].vector. This update fixes the issue.
dev->entry[0].vector i
On Mon, 20 Jan 2014, Alexander Gordeev wrote:
This update fixes an oddity where a device is first added to dev_list
and then removed in case of initialization failure, instead of being
added only in case of success.
Signed-off-by: Alexander Gordeev
---
drivers/block/nvme-core.c | 19
On Mon, 20 Jan 2014, Alexander Gordeev wrote:
This is an attempt to make handling of the admin queue in a
single scope. This update also fixes an IRQ leak in case
nvme_setup_io_queues() failed to allocate enough iomem
and bailed out with -ENOMEM errno.
This definitely seems to improve the code flow,
On Tue, 30 Sep 2014, Matias Bjørling wrote:
@@ -1967,27 +1801,30 @@ static struct nvme_ns *nvme_alloc_ns(struct nvme_dev
*dev, unsigned nsid,
{
...
- ns->queue->queue_flags = QUEUE_FLAG_DEFAULT;
+ queue_flag_set_unlocked(QUEUE_FLAG_DEFAULT, ns->queue);
Instead of the above, you w
ld up indefinitely.
Signed-off-by: Keith Busch
---
drivers/base/core.c|4
include/linux/device.h |1 +
2 files changed, 5 insertions(+)
diff --git a/drivers/base/core.c b/drivers/base/core.c
index 20da3ad..71b83bb 100644
--- a/drivers/base/core.c
+++ b/drivers/base/core.c
@@ -10,6
INTx irq if performing the shutdown asynchronously.
Signed-off-by: Keith Busch
---
drivers/block/nvme-core.c | 28 ++--
include/linux/nvme.h |1 +
2 files changed, 27 insertions(+), 2 deletions(-)
diff --git a/drivers/block/nvme-core.c b/drivers/block/nvme-c
an implementation to the nvm-express driver
here so there's at least one user, assuming this is acceptable.
Keith Busch (2):
driver-core: allow asynchronous device shutdown
NVMe: Complete shutdown asynchronously
drivers/base/core.c |4
drivers
On Mon, 30 Jun 2014, David Rientjes wrote:
On Mon, 30 Jun 2014, Keith Busch wrote:
Signed-off-by: Keith Busch
Cc: Thomas Gleixner
Cc: x...@kernel.org
Acked-by: David Rientjes
This is definitely a fix for "genirq: Provide generic hwirq allocation
facility", but the changel
irq_free_hwirqs() always calls irq_free_descs() with a cnt == 0
which makes it a no-op since the interrupt count to free is
decremented in itself.
Fixes: 7b6ef1262549f6afc5c881aaef80beb8fd15f908
Signed-off-by: Keith Busch
Cc: Thomas Gleixner
Acked-by: David Rientjes
---
kernel/irq/irqdesc.c
Signed-off-by: Keith Busch
Cc: Thomas Gleixner
Cc: x...@kernel.org
---
kernel/irq/irqdesc.c |4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/kernel/irq/irqdesc.c b/kernel/irq/irqdesc.c
index 7339e42..1487a12 100644
--- a/kernel/irq/irqdesc.c
+++ b/kernel/irq/irqdesc.c
On Thu, 10 Jul 2014, Bjorn Helgaas wrote:
[+cc LKML, Greg KH for driver core async shutdown question]
On Tue, Jun 24, 2014 at 10:48:57AM -0600, Keith Busch wrote:
To provide context why I want to do this asynchronously, NVM-Express has
one PCI device per controller, of which there could be
On Tue, 10 Jun 2014, Matias Bjørling wrote:
This converts the current NVMe driver to utilize the blk-mq layer.
I'd like to run xfstests on this, but it is failing mkfs.xfs. I honestly
don't know much about this area, but I think this may be from the recent
chunk sectors patch causing a __bio_a
On Tue, 10 Jun 2014, Jens Axboe wrote:
On Jun 10, 2014, at 9:52 AM, Keith Busch wrote:
On Tue, 10 Jun 2014, Matias Bjørling wrote:
This converts the current NVMe driver to utilize the blk-mq layer.
I'd like to run xfstests on this, but it is failing mkfs.xfs. I honestly
don't
On Tue, 10 Jun 2014, Jens Axboe wrote:
On 06/10/2014 01:29 PM, Keith Busch wrote:
I have two devices, one formatted 4k, the other 512. The 4k is used as
the TEST_DEV and 512 is used as SCRATCH_DEV. I'm always hitting a BUG when
unmounting the scratch dev in xfstests generic/068. The bug
On Tue, 10 Jun 2014, Jens Axboe wrote:
On 06/10/2014 03:10 PM, Keith Busch wrote:
On Tue, 10 Jun 2014, Jens Axboe wrote:
On 06/10/2014 01:29 PM, Keith Busch wrote:
I have two devices, one formatted 4k, the other 512. The 4k is used as
the TEST_DEV and 512 is used as SCRATCH_DEV. I'm a
On Wed, 11 Jun 2014, Matias Bjørling wrote:
I've rebased nvmemq_review and added two patches from Jens that add
support for requests with single range virtual addresses.
Keith, will you take it for a spin and see if it fixes 068 for you?
There might still be a problem with some flushes, I'm loo
On Thu, 12 Jun 2014, Matias Bjørling wrote:
On 06/12/2014 12:51 AM, Keith Busch wrote:
So far so good: it passed the test that was previously failing. I'll
let the remaining xfstests run and see what happens.
Great.
The flushes were a fluke. I haven't been able to reproduce.
Coo
On Thu, 12 Jun 2014, Keith Busch wrote:
On Thu, 12 Jun 2014, Matias Bjørling wrote:
On 06/12/2014 12:51 AM, Keith Busch wrote:
So far so good: it passed the test that was previously failing. I'll
let the remaining xfstests run and see what happens.
Great.
The flushes were a fluke. I ha
On Fri, 13 Jun 2014, Jens Axboe wrote:
On 06/12/2014 06:06 PM, Keith Busch wrote:
When cancelling IOs, we have to check if the hwctx has a valid tags
for some reason. I have 32 cores in my system and as many queues, but
It's because unused queues are torn down, to save memory.
blk-
On Fri, 13 Jun 2014, Jens Axboe wrote:
On 06/13/2014 09:05 AM, Keith Busch wrote:
Here are the performance drops observed with blk-mq with the existing
driver as baseline:
CPU : Drop
  0 : -6%
  8 : -36%
 16 : -12%
We need the hints back for sure, I'll run some of the
On Wed, 28 May 2014, Matias Bjørling wrote:
This converts the current NVMe driver to utilize the blk-mq layer.
I am concerned about device hot removal since the h/w queues can be
freed at any time. I *think* blk-mq helps with this in that the driver
will not see a new request after calling blk_
On Thu, 29 May 2014, Jens Axboe wrote:
On 2014-05-28 21:07, Keith Busch wrote:
Barring any bugs in the code, then yes, this should work. On the scsi-mq
side, extensive error injection and pulling has been done, and it seems to
hold up fine now. The ioctl path would need to be audited.
It
On Thu, 29 May 2014, Matias Bjørling wrote:
This converts the current NVMe driver to utilize the blk-mq layer.
I'm pretty darn sure this new nvme_remove can cause a process
with an open reference to use queues after they're freed in the
nvme_submit_sync_command path, maybe even the admin tags t
On Mon, 2 Jun 2014, Matias Bjørling wrote:
Hi Matthew and Keith,
Here is an updated patch with the feedback from the previous days. It's against
Jens' for-3.16/core tree. You may use the nvmemq_wip_review branch at:
I'm testing this on my normal hardware now. As I feared, hot removal
doesn't w
768531] ---[ end trace 785048a51785f51e ]---
On Mon, 2 Jun 2014, Keith Busch wrote:
On Mon, 2 Jun 2014, Matias Bjørling wrote:
Hi Matthew and Keith,
Here is an updated patch with the feedback from the previous days. It's
against
Jens' for-3.16/core tree. You may use the nvmemq_wip_
On Tue, 3 Jun 2014, Matias Bjorling wrote:
Keith, will you take the nvmemq_wip_v6 branch for a spin? Thanks!
Still fails as before:
[ 88.933881] BUG: unable to handle kernel NULL pointer dereference at 0014
[ 88.942900] IP: [] blk_mq_map_queue+0xf/0x1e
[ 88.949605] PGD 427b
On Tue, 3 Jun 2014, Matias Bjorling wrote:
Keith, will you take the nvmemq_wip_v6 branch for a spin? Thanks!
BTW, if you want to test this out yourself, it's pretty simple to
recreate. I just run a simple user admin program sending nvme passthrough
commands in a tight loop, then run:
# echo
On Wed, 4 Jun 2014, Matias Bjørling wrote:
On 06/04/2014 12:27 AM, Keith Busch wrote:
On Tue, 3 Jun 2014, Matias Bjorling wrote:
Keith, will you take the nvmemq_wip_v6 branch for a spin? Thanks!
BTW, if you want to test this out yourself, it's pretty simple to
recreate. I just run a s
On Wed, 4 Jun 2014, Jens Axboe wrote:
On 06/04/2014 12:28 PM, Keith Busch wrote:
Are you testing against 3.13? You really need the current tree for this,
otherwise I'm sure you'll run into issues (as you appear to be :-)
I'm using Matias' current tree:
git://github.com/
On Fri, 13 Jun 2014, Jens Axboe wrote:
OK, same setup as mine. The affinity hint is really screwing us over, no
question about it. We just need a:
irq_set_affinity_hint(dev->entry[nvmeq->cq_vector].vector, hctx->cpumask);
in the ->init_hctx() methods to fix that up.
That brings us to roughly t
On Fri, 13 Jun 2014, Matias Bjørling wrote:
This converts the current NVMe driver to utilize the blk-mq layer.
static void nvme_reset_notify(struct pci_dev *pdev, bool prepare)
{
- struct nvme_dev *dev = pci_get_drvdata(pdev);
+ struct nvme_dev *dev = pci_get_drvdata(pdev);
-
On Tue, 24 Jun 2014, Matias Bjorling wrote:
Den 16-06-2014 17:57, Keith Busch skrev:
This latest is otherwise stable on my dev machine.
May I add an Acked-by from you?
Totally up to Willy, but my feeling is "not just yet". I see the value
this driver provides, but I would need to
On Tue, 24 Jun 2014, Matias Bjørling wrote:
On Tue, Jun 24, 2014 at 10:33 PM, Keith Busch wrote:
On Tue, 24 Jun 2014, Matias Bjorling wrote:
Den 16-06-2014 17:57, Keith Busch skrev:
This latest is otherwise stable on my dev machine.
May I add an Acked-by from you?
Totally up to Willy
On Fri, 18 Oct 2013, Matias Bjørling wrote:
On 10/18/2013 05:13 PM, Keith Busch wrote:
On Fri, 18 Oct 2013, Matias Bjorling wrote:
The nvme driver implements itself as a bio-based driver. This primarily
because of high lock congestion for high-performance nvm devices. To
remove the congestion
On Tue, 22 Oct 2013, Matias Bjorling wrote:
Den 22-10-2013 18:55, Keith Busch skrev:
On Fri, 18 Oct 2013, Matias Bjørling wrote:
On 10/18/2013 05:13 PM, Keith Busch wrote:
On Fri, 18 Oct 2013, Matias Bjorling wrote:
The nvme driver implements itself as a bio-based driver. This primarily
On Thu, 21 Aug 2014, Matias Bjørling wrote:
On 08/19/2014 12:49 AM, Keith Busch wrote:
I see the driver's queue suspend logic is removed, but I didn't mean to
imply it was safe to do so without replacing it with something else. I
thought maybe we could use the blk_stop/start_queue()
e's opened inode so that two different disks that have
a major/minor collision can coexist.
Signed-off-by: Keith Busch
---
Maybe this is terrible idea!?
This came from proposals to the nvme driver that remove the dynamic
partitioning that was recently added, and I wanted to know why exactly
On Fri, 22 Aug 2014, Christoph Hellwig wrote:
On Fri, Aug 22, 2014 at 10:28:16AM -0600, Keith Busch wrote:
When using the GENHD_FL_EXT_DEVT disk flags, a newly added device may
be assigned the same major/minor as one that was previously removed but
opened, and the pesky userspace refuses to
On Fri, 22 Aug 2014, Keith Busch wrote:
On Fri, 22 Aug 2014, Christoph Hellwig wrote:
On Fri, Aug 22, 2014 at 10:28:16AM -0600, Keith Busch wrote:
When using the GENHD_FL_EXT_DEVT disk flags, a newly added device may
be assigned the same major/minor as one that was previously removed but
Signed-off-by: Keith Busch
---
This was briefly discussed here:
http://lists.infradead.org/pipermail/linux-nvme/2014-August/001120.html
This patch goes one step further and fixes the same problem for partitions
and disks.
block/genhd.c | 18 +-
block/partition-generic
Signed-off-by: Keith Busch
---
v1->v2:
Applied comments from Willy: fixed gfp mask in idr_alloc to not wait,
and preload.
block/genhd.c | 24 ++--
block/partition-generic.c |2 +-
2 files changed, 15 insertions(+), 11 deletions(-)
diff --git a/blo
On Fri, 15 Aug 2014, Matias Bjørling wrote:
* NVMe queues are merged with the tags structure of blk-mq.
I see the driver's queue suspend logic is removed, but I didn't mean to
imply it was safe to do so without replacing it with something else. I
thought maybe we could use the blk_stop/start_
On Sun, 10 Aug 2014, Matias Bjørling wrote:
On Sat, Jul 26, 2014 at 11:07 AM, Matias Bjørling wrote:
This converts the NVMe driver to a blk-mq request-based driver.
Willy, do you need me to make any changes to the conversion? Can you
pick it up for 3.17?
Hi Matias,
I'm starting to get a l
On Thu, 14 Aug 2014, Jens Axboe wrote:
On 08/14/2014 02:25 AM, Matias Bjørling wrote:
The result is set to BLK_MQ_RQ_QUEUE_ERROR, or am I mistaken?
Looks OK to me, looking at the code, 'result' is initialized to
BLK_MQ_RQ_QUEUE_BUSY though. Which looks correct, we don't want to error
on a susp
On Thu, 14 Aug 2014, Matias Bjorling wrote:
nr_tags must be uninitialized or screwed up somehow, otherwise I don't
see how that kmalloc() could warn on being too large. Keith, are you
running with slab debugging? Matias, might be worth trying.
The queue's tags were freed in 'blk_mq_map_swqueue'
On Thu, 14 Aug 2014, Jens Axboe wrote:
nr_tags must be uninitialized or screwed up somehow, otherwise I don't
see how that kmalloc() could warn on being too large. Keith, are you
running with slab debugging? Matias, might be worth trying.
The allocation and freeing of blk-mq parts seems a bit a
On Tue, 4 Aug 2015, Christoph Hellwig wrote:
NVMe support currently isn't included as I don't have a multihost
NVMe setup to test on, but if I can find a volunteer to test it I'm
happy to write the code for it.
Looks pretty good so far. I'd be happy to give try it out with NVMe
subsystems.
--
T
From: Dave Jiang
This is in preparation for un-exporting the pcie_set_mps() function
symbol. A driver should not be changing the MPS as that is the
responsibility of the PCI subsystem.
Signed-off-by: Dave Jiang
---
drivers/infiniband/hw/qib/qib_pcie.c | 27 +--
1 file
do nothing" to update the
downstream port to match the upstream port if it is capable.
Dave Jiang (2):
QIB: Removing usage of pcie_set_mps()
PCIE: Remove symbol export for pcie_set_mps()
Keith Busch (1):
pci: Default MPS tuning to match upstream port
arch/arm/kernel/bios32.c
From: Dave Jiang
The setting of PCIe MPS should be left to the PCI subsystem and not
the driver. An ill-configured MPS by the driver could cause the device
to not function or destabilize the entire system. Removing the exported
symbol.
Signed-off-by: Dave Jiang
---
drivers/pci/pci.c |1 -
1
hot adding a switch, or explicit request to rescan.
Signed-off-by: Keith Busch
Cc: Dave Jiang
Cc: Austin Bolen
Cc: Myron Stowe
Cc: Jon Mason
Cc: Bjorn Helgaas
---
arch/arm/kernel/bios32.c | 12
arch/powerpc/kernel/pci-common.c |7 ---
arch/tile/kernel/pci_
On Mon, 17 Aug 2015, Bjorn Helgaas wrote:
On Wed, Jul 29, 2015 at 04:18:53PM -0600, Keith Busch wrote:
The new pcie tuning will check the device's MPS against the parent bridge
when it is initially added to the pci subsystem, prior to attaching
to a driver. If MPS is mismatched, the downs
On Wed, May 11, 2016 at 11:25:16AM +0200, Johannes Thumshirn wrote:
> What ever happened to this patch?
> I can easily reproduce the bug using
> while [ true ]; do rmmod nvme nvme_core; modprobe nvme; done
This patch was supposed to fix using a doorbell between resets when the
driver had BAR0 unma
On Sat, May 21, 2016 at 01:36:00AM +0300, Alexey Khoroshilov wrote:
> kref_put(&ns->kref, nvme_free_ns) is called in nvme_get_ns_from_disk()
> under dev_list_lock spinlock, while nvme_free_ns() locks the spinlock
> by itself. This can lead to a deadlock.
>
> The patch moves try_module_get() and it
On Tue, May 24, 2016 at 02:19:17AM -0700, Christoph Hellwig wrote:
> On Tue, May 24, 2016 at 11:15:52AM +0200, Johannes Thumshirn wrote:
> > As I've probably missed v4.7, is it possible to get it for v4.8?
> > Or should I take on the PCI helper functions Christoph suggested first?
>
> Let's get th
On Thu, Nov 12, 2015 at 11:37:54PM -0800, Christoph Hellwig wrote:
> Jens, Keith: any chance to get this to Linux for 4.4 (and -stable)?
I agreed, looks good to me.
Acked-by: Keith Busch
From: Liu Jiang
Previously msi_domain_alloc() assumes MSI irqdomains always have parent
irqdomains, but that's not true for the new Intel VMD devices. So relax
msi_domain_alloc() to support parentless MSI irqdomains.
Signed-off-by: Jiang Liu
Tested-by: Keith Busch
---
kernel/irq/msi.
And use the max bus resource from the parent rather than assume 255.
Signed-off-by: Keith Busch
---
drivers/pci/probe.c | 10 --
1 file changed, 8 insertions(+), 2 deletions(-)
diff --git a/drivers/pci/probe.c b/drivers/pci/probe.c
index f14a970..ae5a4b3 100644
--- a/drivers/pci
ngs.
Commented on the potential list corruption if NMI, interrupt, and irq
teardown occur concurrently.
Using raw spinlock for irq list manipulation.
Fix IRQ flags: removed IRQF_SHARED.
Fixed the SoB in patch 1, added my Tested-by.
Keith Busch (5):
pci: skip child bus with conflicting resour
orts. Devices or drivers
requiring these features should either not be placed below VMD-owned
root ports, or VMD should be disabled by BIOS for such endpoints.
Signed-off-by: Keith Busch
---
arch/x86/Kconfig | 13 +
arch/x86/include/asm/hw_irq.h |
PCI-e segments will continue to use the lower 16 bits as required by
ACPI. Special domains may use the full 32-bits.
Signed-off-by: Keith Busch
---
lib/filter.c |2 +-
lib/pci.h|2 +-
2 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/lib/filter.c b/lib/filter.c
index
New x86 pci h/w will require dma operations specific to that domain. This
patch allows those domains to register their operations, and sets devices
as they are discovered in that domain to use them.
Signed-off-by: Keith Busch
---
arch/x86/include/asm/device.h | 10 ++
arch/x86/pci
Signed-off-by: Keith Busch
---
drivers/pci/msi.c | 2 ++
kernel/irq/irqdomain.c | 1 +
2 files changed, 3 insertions(+)
diff --git a/drivers/pci/msi.c b/drivers/pci/msi.c
index 45a5148..1aa1ad4 100644
--- a/drivers/pci/msi.c
+++ b/drivers/pci/msi.c
@@ -1120,6 +1120,7 @@ struct pci_dev
On Fri, Nov 13, 2015 at 04:27:07PM -0500, Thomas Gleixner wrote:
> On Fri, 13 Nov 2015, Keith Busch wrote:
> > +/**
> > + * struct vmd_irq_list - list of driver requested irq's mapping to a vmd
> > vector
> > + * &irq_list: list of child irq's t
New x86 pci h/w will require dma operations specific to that domain. This
patch allows those domains to register their operations, and sets devices
as they are discovered in that domain to use them.
Signed-off-by: Keith Busch
---
arch/x86/include/asm/device.h | 10 ++
arch/x86/pci
Does not allocate a child bus if the new bus number does not fit in the
parent's bus resource window.
Signed-off-by: Keith Busch
---
drivers/pci/probe.c | 6 ++
1 file changed, 6 insertions(+)
diff --git a/drivers/pci/probe.c b/drivers/pci/probe.c
index edb1984..6e29f7a 100644
orts. Devices or drivers
requiring these features should either not be placed below VMD-owned
root ports, or VMD should be disabled by BIOS for such endpoints.
Signed-off-by: Keith Busch
---
arch/x86/Kconfig | 13 +
arch/x86/include/asm/hw_irq.h |
PCI-e segments will continue to use the lower 16 bits as required by
ACPI. Special domains may use the full 32-bits.
Signed-off-by: Keith Busch
---
lib/filter.c |2 +-
lib/pci.h|2 +-
2 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/lib/filter.c b/lib/filter.c
index
New pci device provides additional pci domains that start above what 16
bits can address.
Signed-off-by: Keith Busch
---
drivers/pci/pcie/aer/aer_inject.c | 16
1 file changed, 8 insertions(+), 8 deletions(-)
diff --git a/drivers/pci/pcie/aer/aer_inject.c
b/drivers/pci/pcie
ved
subordinate bus. The way to fix that is to call "pci_remove_bus_device"
instead, but I don't think we want to remove the bridge dev since it
is accessible, albeit not very useful as a bridge device.
Keith Busch (6):
pci: child bus alloc fix on constrained resource
Export msi and
Signed-off-by: Keith Busch
---
drivers/pci/msi.c | 2 ++
kernel/irq/irqdomain.c | 1 +
2 files changed, 3 insertions(+)
diff --git a/drivers/pci/msi.c b/drivers/pci/msi.c
index 53e4632..0fec654 100644
--- a/drivers/pci/msi.c
+++ b/drivers/pci/msi.c
@@ -1126,6 +1126,7 @@ struct pci_dev
From: Liu Jiang
Previously msi_domain_alloc() assumes MSI irqdomains always have parent
irqdomains, but that's not true for the new Intel VMD devices. So relax
msi_domain_alloc() to support parentless MSI irqdomains.
Signed-off-by: Jiang Liu
Signed-off-by: Keith Busch
---
kernel/irq/
On Thu, Dec 17, 2015 at 11:15:45AM -0600, Bjorn Helgaas wrote:
> > @@ -45,7 +45,7 @@ pci_filter_parse_slot_v33(struct pci_filter *f, char *str)
> > if (str[0] && strcmp(str, "*"))
> > {
> > long int x = strtol(str, &e, 16);
> > - if ((e && *e) || (x < 0 || x > 0x
On Thu, Dec 17, 2015 at 11:27:18AM -0600, Bjorn Helgaas wrote:
> On Mon, Dec 07, 2015 at 02:32:24PM -0700, Keith Busch wrote:
> > + if (busnr > parent->busn_res.end) {
> > + dev_printk(KERN_DEBUG, &parent->dev,
> > + "c
On Thu, Dec 17, 2015 at 11:46:15AM -0600, Bjorn Helgaas wrote:
> On Mon, Dec 07, 2015 at 02:32:28PM -0700, Keith Busch wrote:
> > - u16 domain;
> > + int domain;
>
> If you want 32 bits explicitly, why don't you use u32 here?
It matches the types already defined