On 3/12/2018 3:35 PM, Logan Gunthorpe wrote:
> +int pci_p2pdma_add_client(struct list_head *head, struct device *dev)
It feels like this code tried to be a generic p2pdma provider first, then got
converted to PCI, yet all the dev parameters are still struct device.
Maybe the dev parameter should also be struct pci_dev?
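For illustration, a minimal sketch of what the suggested signature could look like (the helper and its caller below are hypothetical, not part of the series):

#include <linux/pci.h>
#include <linux/list.h>

/* Hypothetical alternative signature, as suggested above: take the PCI
 * device directly instead of a plain struct device. */
int pci_p2pdma_add_client_pci(struct list_head *head, struct pci_dev *pdev);

/* A caller that only holds a struct device would then convert explicitly: */
static int example_add_client(struct list_head *head, struct device *dev)
{
	if (!dev_is_pci(dev))		/* non-PCI clients cannot participate */
		return -EINVAL;

	return pci_p2pdma_add_client_pci(head, to_pci_dev(dev));
}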
Hi Thomas,
At 03/09/2018 11:08 PM, Thomas Gleixner wrote:
[...]
I'm not sure if there is a clear indicator whether physical hotplug is
supported or not, but the ACPI folks (x86) and architecture maintainers
+cc Rafael
should be able to answer that question. I have a machine which says:
Stephen,
> In preparation to enabling -Wvla, remove VLAs and replace them with
> fixed-length arrays instead.
>
> scsi_dh_{alua,emc,rdac} use variable-length array declarations to
> store command blocks, with the appropriate size as determined by
> COMMAND_SIZE. This patch replaces these with fixed-length arrays.
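For illustration, a minimal sketch of the transformation being described (not the actual scsi_dh diff; the function below is made up):

#include <scsi/scsi.h>
#include <scsi/scsi_cmnd.h>

static void example_build_cdb(unsigned char opcode)
{
	/* before: u8 cdb[COMMAND_SIZE(opcode)];  -- a VLA once -Wvla is enabled */
	u8 cdb[MAX_COMMAND_SIZE] = { };		/* worst-case fixed size (16 bytes) */
	int len = COMMAND_SIZE(opcode);		/* actual length for this opcode */

	cdb[0] = opcode;
	/* ... fill the remaining len - 1 bytes as the command requires ... */
	(void)len;
}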
On 3/12/2018 9:55 PM, Sinan Kaya wrote:
> On 3/12/2018 3:35 PM, Logan Gunthorpe wrote:
>> - if (nvmeq->sq_cmds_io)
>
> I think you should keep the code as it is for the case where
> (!nvmeq->sq_cmds_is_io && nvmeq->sq_cmds_io)
Never mind. I misunderstood the code.
>
> You are changing the behavior for NVMe drives with CMB buffers.
On 3/12/2018 3:35 PM, Logan Gunthorpe wrote:
> - if (nvmeq->sq_cmds_io)
I think you should keep the code as it is for the case where
(!nvmeq->sq_cmds_is_io && nvmeq->sq_cmds_io)
You are changing the behavior for NVMe drives with CMB buffers.
You can change the if statement here with the state
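For reference, the behaviour being discussed is the submission-path copy, paraphrased below from drivers/nvme/host/pci.c around v4.16 (struct nvme_queue is driver-internal, so this is a sketch rather than a standalone example):

/* When the submission queue lives in the controller's CMB it is reached
 * through an ioremapped pointer and must be written with memcpy_toio();
 * otherwise the queue is ordinary host memory. */
static void example_copy_sqe(struct nvme_queue *nvmeq, u16 tail,
			     struct nvme_command *cmd)
{
	if (nvmeq->sq_cmds_io)
		memcpy_toio(&nvmeq->sq_cmds_io[tail], cmd, sizeof(*cmd));
	else
		memcpy(&nvmeq->sq_cmds[tail], cmd, sizeof(*cmd));
}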
On 3/12/2018 1:41 PM, Jonathan Corbet wrote:
This all seems good, but...could we consider moving this documentation to
driver-api/PCI as it's converted to RST? That would keep it together with
similar materials and bring a bit more coherence to Documentation/ as a
whole.
Yup, I'll change this
On Mon, 12 Mar 2018 15:41:26, Bart Van Assche wrote:
> On Sat, 2018-03-10 at 14:14 +0100, Stephen Kitt wrote:
> > The two patches I sent were supposed to be alternative solutions; see
> > https://marc.info/?l=linux-scsi&m=152063671005295&w=2 for the
> > introduction (I seem to have messed up the headers, so the mails didn't
> > end up threaded properly).
On Mon, 12 Mar 2018 13:35:19 -0600
Logan Gunthorpe wrote:
> Add a restructured text file describing how to write drivers
> with support for P2P DMA transactions. The document describes
> how to use the APIs that were added in the previous few
> commits.
>
> Also adds an index for the PCI documentation tree even though this
> is the only PCI document that has been converted.
For peer-to-peer transactions to work, the downstream ports in each
switch must not have the ACS flags set. At this time there is no way
to dynamically change the flags and update the corresponding IOMMU
groups so this is done at enumeration time before the groups are
assigned.
This effectively mea
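Concretely, clearing the ACS peer-to-peer redirect bits on a downstream port looks roughly like the sketch below (illustrative only, not the exact helper added by the series):

#include <linux/pci.h>

static void example_disable_acs_redir(struct pci_dev *pdev)
{
	u16 ctrl;
	int pos = pci_find_ext_capability(pdev, PCI_EXT_CAP_ID_ACS);

	if (!pos)
		return;		/* no ACS capability, nothing to do */

	/* clear P2P Request Redirect and Completion Redirect so peer-to-peer
	 * TLPs are routed directly between the downstream ports */
	pci_read_config_word(pdev, pos + PCI_ACS_CTRL, &ctrl);
	ctrl &= ~(PCI_ACS_RR | PCI_ACS_CR);
	pci_write_config_word(pdev, pos + PCI_ACS_CTRL, ctrl);
}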
QUEUE_FLAG_PCI_P2P is introduced meaning a driver's request queue
supports targeting P2P memory.
REQ_PCI_P2P is introduced to indicate a particular bio request is
directed to/from PCI P2P memory. A request with this flag is not
accepted unless the corresponding queues have the QUEUE_FLAG_PCI_P2P
flag set.
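As an illustrative check using the flag names proposed here (the helper and its placement are assumptions):

#include <linux/blkdev.h>
#include <linux/blk_types.h>

static bool example_bio_p2p_allowed(struct request_queue *q, struct bio *bio)
{
	if (!(bio->bi_opf & REQ_PCI_P2P))
		return true;			/* ordinary bio, nothing to check */

	/* P2P-directed bios are only accepted on queues that declared support */
	return test_bit(QUEUE_FLAG_PCI_P2P, &q->queue_flags);
}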
In order to use PCI P2P memory, the pci_p2pmem_[un]map_sg() functions must be
called to map the correct PCI bus address.
To do this, check the first page in the scatter list to see if it is P2P
memory or not. At the moment, scatter lists that contain P2P memory must
be homogeneous, so if the first page is P2P memory the rest of the list is
treated as P2P memory as well.
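A rough caller-side sketch of that dispatch (function names as used in this series; the exact signatures and the page-type helper are assumptions):

#include <linux/dma-mapping.h>
#include <linux/mm.h>
#include <linux/pci.h>
#include <linux/scatterlist.h>

static int example_map_sgl(struct device *dma_dev, struct pci_dev *p2p_dev,
			   struct scatterlist *sgl, int nents,
			   enum dma_data_direction dir)
{
	/* the list is homogeneous, so the first page decides for all of it */
	if (is_pci_p2pdma_page(sg_page(sgl)))
		return pci_p2pmem_map_sg(p2p_dev, sgl, nents);	/* PCI bus addresses */

	return dma_map_sg(dma_dev, sgl, nents, dir);	/* ordinary DMA mapping */
}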
We create a configfs attribute in each nvme-fabrics target port to
enable p2p memory use. When enabled, the port will only then use the
p2p memory if a p2p memory device can be found which is behind the
same switch as the RDMA port and all the block devices in use. If
the user enabled it and no devi
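A minimal sketch of such a configfs attribute (names and the file-scope flag are illustrative; the real code would hang the flag off the nvmet port structure):

#include <linux/configfs.h>
#include <linux/kernel.h>
#include <linux/string.h>

static bool example_use_p2pmem;

static ssize_t example_p2pmem_show(struct config_item *item, char *page)
{
	return sprintf(page, "%d\n", example_use_p2pmem);
}

static ssize_t example_p2pmem_store(struct config_item *item,
				    const char *page, size_t count)
{
	bool enable;

	if (kstrtobool(page, &enable))
		return -EINVAL;

	example_use_p2pmem = enable;	/* consulted when the port is enabled */
	return count;
}

CONFIGFS_ATTR(example_, p2pmem);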
Register the CMB buffer as p2pmem and use the appropriate allocation
functions to create and destroy the IO SQ.
If the CMB supports WDS and RDS, publish it for use as P2P memory
by other devices.
Signed-off-by: Logan Gunthorpe
---
drivers/nvme/host/pci.c | 75 +++
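A sketch of the allocation side described above, using the p2pmem allocator names from this series (the header and exact signatures are assumptions):

#include <linux/pci.h>
#include <linux/pci-p2pdma.h>

static void *example_alloc_sq_from_cmb(struct pci_dev *pdev, size_t sq_size,
				       pci_bus_addr_t *bus_addr)
{
	void *sq_cmds = pci_alloc_p2pmem(pdev, sq_size);

	if (!sq_cmds)
		return NULL;	/* caller falls back to a host-memory allocation */

	/* the controller must be told the PCI bus address, not a CPU address */
	*bus_addr = pci_p2pmem_virt_to_bus(pdev, sq_cmds);
	return sq_cmds;
}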
The DMA address used when mapping PCI P2P memory must be the PCI bus
address. Thus, introduce pci_p2pmem_[un]map_sg() to map the correct
addresses when using P2P memory.
For this, we assume that an SGL passed to these functions contains all
P2P memory or no P2P memory.
Signed-off-by: Logan Gunthorpe
Add a restructured text file describing how to write drivers
with support for P2P DMA transactions. The document describes
how to use the APIs that were added in the previous few
commits.
Also adds an index for the PCI documentation tree even though this
is the only PCI document that has been converted.
Hi Everyone,
Here's v3 of our series to introduce P2P based copy offload to NVMe
fabrics. This version has been rebased onto v4.16-rc5.
Thanks,
Logan
Changes in v3:
* Many more fixes and minor cleanups that were spotted by Bjorn
* Additional explanation of the ACS change in both the commit m
Introduce a quirk to use CMB-like memory on older devices that have
an exposed BAR but do not advertise support for using CMBLOC and
CMBSIZE.
We'd like to use some of these older cards to test P2P memory.
Signed-off-by: Logan Gunthorpe
Reviewed-by: Sagi Grimberg
---
drivers/nvme/host/nvme.h |
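For illustration, this is roughly how such a quirk is wired up in the NVMe PCI id table (the flag and the device IDs below are placeholders, not the ones in the patch):

#include <linux/pci.h>

#define EXAMPLE_QUIRK_CMB_LIKE_BAR	(1 << 0)	/* placeholder flag */

static const struct pci_device_id example_id_table[] = {
	{ PCI_DEVICE(0x1234, 0x5678),			/* hypothetical older device */
	  .driver_data = EXAMPLE_QUIRK_CMB_LIKE_BAR, },
	{ 0, }
};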
Add a sysfs group to display statistics about P2P memory that is
registered in each PCI device.
Attributes in the group display the total amount of P2P memory, the
amount available and whether it is published or not.
Signed-off-by: Logan Gunthorpe
---
Documentation/ABI/testing/sysfs-bus-pci | 2
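A trimmed sketch of such an attribute group, with one read-only attribute shown (the names and the stubbed accounting helper are assumptions):

#include <linux/device.h>
#include <linux/sysfs.h>

/* Stub standing in for the real accounting; the actual code would read the
 * provider's allocator statistics. */
static size_t example_p2pmem_total(struct device *dev)
{
	return 0;
}

static ssize_t size_show(struct device *dev, struct device_attribute *attr,
			 char *buf)
{
	return sprintf(buf, "%zu\n", example_p2pmem_total(dev));
}
static DEVICE_ATTR_RO(size);

static struct attribute *example_p2pmem_attrs[] = {
	&dev_attr_size.attr,
	NULL,
};

static const struct attribute_group example_p2pmem_group = {
	.name = "p2pmem",		/* group directory under the PCI device */
	.attrs = example_p2pmem_attrs,
};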
For P2P requests, we must use the pci_p2pmem_[un]map_sg() functions
instead of the dma_map_sg functions.
With that, we can then indicate PCI_P2P support in the request queue.
For this, we create an NVME_F_PCI_P2P flag which tells the core to
set QUEUE_FLAG_PCI_P2P in the request queue.
Signed-off-by: Logan Gunthorpe
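A paraphrased sketch of the core-side change (flag names as proposed here; struct nvme_ctrl and nvme_ns come from drivers/nvme/host/nvme.h, and blk_queue_flag_set() is the modern helper, so this is illustrative only):

/* In the nvme core: when the transport advertises NVME_F_PCI_P2P, mark the
 * namespace request queue as able to accept P2P-directed requests. */
static void example_mark_p2p_queue(struct nvme_ctrl *ctrl, struct nvme_ns *ns)
{
	if (ctrl->ops->flags & NVME_F_PCI_P2P)
		blk_queue_flag_set(QUEUE_FLAG_PCI_P2P, ns->queue);
}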
Some PCI devices may have memory mapped in a BAR space that's
intended for use in peer-to-peer transactions. In order to enable
such transactions the memory must be registered with ZONE_DEVICE pages
so it can be used by DMA interfaces in existing drivers.
Add an interface for other subsystems to find and allocate chunks of P2P memory.
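A provider-side sketch of that interface, using the function names from this series (the header and signatures are assumptions):

#include <linux/pci.h>
#include <linux/pci-p2pdma.h>

static int example_register_p2pmem(struct pci_dev *pdev, int bar, size_t size)
{
	int rc;

	/* back the BAR region with ZONE_DEVICE pages so DMA code can use it */
	rc = pci_p2pdma_add_resource(pdev, bar, size, 0);
	if (rc)
		return rc;

	pci_p2pmem_publish(pdev, true);	/* allow other drivers to allocate from it */
	return 0;
}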
On 3/12/18 2:54 AM, Nikolay Borisov wrote:
>
>
> On 23.02.2018 13:45, Nikolay Borisov wrote:
>> This flag was added by 6039257378e4 ("direct-io: add flag to allow aio
>> writes beyond i_size") to support XFS. However, with the rework of
>> XFS' DIO's path to use iomap in acdda3aae146 ("xfs: use iomap_dio_rw")
From: Josef Bacik
I messed up changing the size of an NBD device while it was connected by
not actually updating the device or doing the uevent. Fix this by
updating everything if we're connected and we change the size.
cc: stable@vger.kernel.org
Fixes: 639812a ("nbd: don't set the device size until we're connected")
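The fix amounts to something like the sketch below (paraphrased against the block APIs of that era, not the literal patch):

#include <linux/blkdev.h>
#include <linux/genhd.h>
#include <linux/kobject.h>

static void example_nbd_size_changed(struct gendisk *disk,
				     struct block_device *bdev,
				     loff_t new_bytes)
{
	set_capacity(disk, new_bytes >> 9);	/* gendisk capacity is in 512B sectors */
	if (bdev)
		bd_set_size(bdev, new_bytes);	/* keep the bdev inode size in sync */
	kobject_uevent(&disk_to_dev(disk)->kobj, KOBJ_CHANGE);	/* notify udev */
}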
On Sat, 2018-03-10 at 14:14 +0100, Stephen Kitt wrote:
> The two patches I sent were supposed to be alternative solutions; see
> https://marc.info/?l=linux-scsi&m=152063671005295&w=2 for the introduction (I
> seem to have messed up the headers, so the mails didn’t end up threaded
> properly).
The
> -----Original Message-----
> From: Ming Lei [mailto:ming@redhat.com]
> Sent: Thursday, March 08, 2018 9:32 PM
> To: James Bottomley ; Jens Axboe
> ; Martin K . Petersen
> Cc: Christoph Hellwig ; linux-scsi@vger.kernel.org; linux-
> block@vger.kernel.org; Meelis Roos ; Don Brace
> ; Kashyap Desai
On 23.02.2018 13:45, Nikolay Borisov wrote:
> This flag was added by 6039257378e4 ("direct-io: add flag to allow aio
> writes beyond i_size") to support XFS. However, with the rework of
> XFS' DIO's path to use iomap in acdda3aae146 ("xfs: use iomap_dio_rw")
> it became redundant. So let's remove it.
On Mon, Mar 12, 2018 at 08:52:02AM +0100, Christoph Hellwig wrote:
> On Sat, Mar 10, 2018 at 11:01:43PM +0800, Ming Lei wrote:
> > > I really dislike this being open coded in drivers. It really should
> > > be a helper shared with the blk-mq map building that drivers just use.
> > >
> > > For now just have a low-level blk_pci_map_queues that
On Fri, Mar 09, 2018 at 10:24:45AM -0700, Keith Busch wrote:
> On Thu, Mar 08, 2018 at 08:42:20AM +0100, Christoph Hellwig wrote:
> >
> > So I suspect we'll need to go with a patch like this, just with a way
> > better changelog.
>
> I have to agree this is required for that use case. I'll run so
On Sat, Mar 10, 2018 at 11:15:20AM +0100, Christoph Hellwig wrote:
> This looks generally fine to me:
>
> Reviewed-by: Christoph Hellwig
>
> As a follow on we should probably kill virtscsi_queuecommand_single and
> thus virtscsi_host_template_single as well.
> > Given storage IO is always C/S mo
On Sat, Mar 10, 2018 at 11:01:43PM +0800, Ming Lei wrote:
> > I really dislike this being open coded in drivers. It really should
> > be a helper shared with the blk-mq map building that drivers just use.
> >
> > For now just have a low-level blk_pci_map_queues that
> > blk_mq_pci_map_queues, hpsa
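For reference, the existing blk-mq helper this is pointing at can already be called from a driver's ->map_queues path, roughly like the sketch below (signature as in kernels of that era; it later grew an offset argument):

#include <linux/blk-mq-pci.h>
#include <linux/pci.h>

static int example_map_queues(struct blk_mq_tag_set *set, struct pci_dev *pdev)
{
	/* derive the CPU-to-hw-queue map from the device's MSI-X affinity */
	return blk_mq_pci_map_queues(set, pdev);
}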
On Fri, 2018-03-09 at 11:32 +0800, Ming Lei wrote:
> From 84676c1f21 (genirq/affinity: assign vectors to all possible CPUs),
> one msix vector can be created without any online CPU mapped, then one
> command's completion may not be notified.
>
> This patch setups ma