Re: [PATCH v3 01/11] PCI/P2PDMA: Support peer-to-peer memory

2018-03-23 Thread Bjorn Helgaas
On Fri, Mar 23, 2018 at 03:59:14PM -0600, Logan Gunthorpe wrote: > On 23/03/18 03:50 PM, Bjorn Helgaas wrote: > > Popping way up the stack, my original point was that I'm trying to > > remove restrictions on what devices can participate in > > peer-to-peer DMA. I think it's fairly clear that in co…
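The "same PCI hierarchy" notion in this exchange can be made concrete with a small helper that walks each device up to its root bus. The sketch below is illustrative only; the helper names are invented here and are not taken from the patchset:

#include <linux/pci.h>

static struct pci_bus *root_bus_of(struct pci_dev *dev)
{
	struct pci_bus *bus = dev->bus;

	/* Walk upward until the bus with no parent, i.e. the root bus. */
	while (bus->parent)
		bus = bus->parent;
	return bus;
}

/* Conventional-PCI reading of "same hierarchy": both devices sit below
 * the same host bridge, so transactions can be routed between them. */
static bool same_pci_hierarchy(struct pci_dev *a, struct pci_dev *b)
{
	return root_bus_of(a) == root_bus_of(b);
}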

[PATCH 2/3] nvme-pci: Remove unused queue parameter

2018-03-23 Thread Keith Busch
All nvme queue memory is allocated up front. We don't take the node into consideration when creating queues anymore, so remove the now-unused parameter. Signed-off-by: Keith Busch --- drivers/nvme/host/pci.c | 10 +++--- 1 file changed, 3 insertions(+), 7 deletions(-) diff --git a/drivers/nvm…

[PATCH 3/3] nvme-pci: Separate IO and admin queue IRQ vectors

2018-03-23 Thread Keith Busch
From: Jianchao Wang The admin and first IO queues shared the first irq vector, which has an affinity mask including cpu0. If a system allows cpu0 to be offlined, the admin queue may not be usable if no other CPUs in the affinity mask are online. This is a problem since unlike IO queues, there is…
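The usual remedy for this pattern is to keep the admin vector out of the managed-affinity set so it can follow CPU hotplug. A minimal sketch using the generic PCI IRQ API (vector counts assumed, not copied from the patch):

#include <linux/interrupt.h>
#include <linux/pci.h>

static int alloc_queue_vectors(struct pci_dev *pdev, unsigned int nr_io_queues)
{
	/* Vector 0 is reserved for the admin queue and stays outside the
	 * affinity spreading, so it is not pinned to a mask that contains
	 * only offlinable CPUs. */
	struct irq_affinity affd = {
		.pre_vectors = 1,
	};

	return pci_alloc_irq_vectors_affinity(pdev, 1, nr_io_queues + 1,
					      PCI_IRQ_ALL_TYPES | PCI_IRQ_AFFINITY,
					      &affd);
}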

[PATCH 1/3] blk-mq: Allow PCI vector offset for mapping queues

2018-03-23 Thread Keith Busch
The PCI interrupt vectors intended to be associated with a queue may not start at 0. This patch adds an offset parameter so blk-mq may find the intended affinity mask. The default value is 0 so existing drivers that don't care about this parameter don't need to change. Signed-off-by: Keith Busch…
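A driver that reserves vector 0 for its admin queue would then pass 1 as the offset from its .map_queues callback; a loose sketch modeled on the nvme driver, not the exact hunk:

#include <linux/blk-mq-pci.h>

static int nvme_pci_map_queues(struct blk_mq_tag_set *set)
{
	struct nvme_dev *dev = set->driver_data;

	/* IO queue vectors start at 1; vector 0 belongs to the admin
	 * queue, so blk-mq must skip it when reading affinity masks. */
	return blk_mq_pci_map_queues(set, to_pci_dev(dev->dev), 1);
}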

Re: [PATCH v3 01/11] PCI/P2PDMA: Support peer-to-peer memory

2018-03-23 Thread Logan Gunthorpe
On 23/03/18 03:50 PM, Bjorn Helgaas wrote: > Popping way up the stack, my original point was that I'm trying to > remove restrictions on what devices can participate in peer-to-peer > DMA. I think it's fairly clear that in conventional PCI, any devices > in the same PCI hierarchy, i.e., below th…

Re: [PATCH v3 01/11] PCI/P2PDMA: Support peer-to-peer memory

2018-03-23 Thread Bjorn Helgaas
On Thu, Mar 22, 2018 at 10:57:32PM +0000, Stephen Bates wrote: > > I've seen the response that peers directly below a Root Port could not > > DMA to each other through the Root Port because of the "route to self" > > issue, and I'm not disputing that. > > Bjorn > > You asked me for a referen…

Multi-Actuator SAS HDD First Look

2018-03-23 Thread Tim Walker
Seagate announced their split-actuator SAS drive, which will probably require some kernel changes for full support. It's targeted at cloud-provider JBODs and RAID. Here are some of the drive's architectural points. Since the two LUNs share many common components (e.g., the spindle), Seagate allocated so…

Re: problem with bio handling on raid5 and pblk

2018-03-23 Thread Javier González
> On 22 Mar 2018, at 18.00, Matias Bjørling wrote: > > On 03/22/2018 03:34 PM, Javier González wrote: >> Hi, >> I have been looking into a bug report when using pblk with raid5 on top, >> and I am having trouble understanding whether the problem is in pblk's bio >> handling or in raid5's bio assumption…

Re: [PATCH 4/4] nvme: lightnvm: add late setup of block size and metadata

2018-03-23 Thread Matias Bjørling
On 02/05/2018 01:15 PM, Matias Bjørling wrote: The nvme driver sets up the size of the nvme namespace in two steps. First it initializes the device with standard logical block and metadata sizes, and then it sets the correct logical block and metadata sizes. Because the OCSSD 2.0 specification relies…
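A sketch of the late-setup step being described; the geometry field names here are assumptions modeled on the lightnvm code of that era, not the actual patch:

#include <linux/log2.h>

static void nvm_late_setup_size(struct nvme_ns *ns, struct nvm_geo *geo)
{
	/* Replace the standard defaults from the first setup step with
	 * the sizes reported by the OCSSD geometry. */
	ns->lba_shift = ilog2(geo->sec_size);	/* logical block size */
	ns->ms = geo->oob_size;			/* metadata (OOB) size */
}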

[PATCH] lightnvm: remove function name in strings

2018-03-23 Thread Matias Bjørling
For the sysfs functions, the function names are embedded into their error strings. If the function name later changes, the string may not be updated accordingly. Update the strings to use __func__ to avoid this. Signed-off-by: Matias Bjørling --- drivers/nvme/host/lightnvm.c | 12 ++-- 1…
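The before/after shape of the change, with an illustrative message rather than the patch's exact strings:

#include <linux/printk.h>

static void show_attr_error(void)
{
	/* Before: the function name is spelled out by hand and silently
	 * goes stale if the function is ever renamed. */
	pr_err("nvm: show_attr_error: unknown attribute\n");

	/* After: __func__ expands to the enclosing function's name, so
	 * the message tracks renames automatically. */
	pr_err("nvm: %s: unknown attribute\n", __func__);
}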