On Fri, Mar 23, 2018 at 03:59:14PM -0600, Logan Gunthorpe wrote:
> On 23/03/18 03:50 PM, Bjorn Helgaas wrote:
> > Popping way up the stack, my original point was that I'm trying to
> > remove restrictions on what devices can participate in
> > peer-to-peer DMA. I think it's fairly clear that in co
All nvme queue memory is allocated up front. We no longer take the
node into consideration when creating queues, so remove the unused
parameter.
Signed-off-by: Keith Busch
---
drivers/nvme/host/pci.c | 10 +++-------
1 file changed, 3 insertions(+), 7 deletions(-)
diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
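The change is essentially a signature cleanup. A sketch of its shape
(the helper name follows the driver, but this is an illustration, not
the actual hunk):

    /* before: the NUMA node argument was threaded through but never
     * used, since all queue memory is allocated up front */
    static int nvme_alloc_queue(struct nvme_dev *dev, int qid,
                                int depth, int node);

    /* after: the dead argument is dropped and callers stop passing it */
    static int nvme_alloc_queue(struct nvme_dev *dev, int qid, int depth);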
From: Jianchao Wang
The admin and first IO queues share the first irq vector, which has an
affinity mask including cpu0. If a system allows cpu0 to be offlined,
the admin queue may not be usable if no other CPUs in the affinity mask
are online. This is a problem since unlike IO queues, there is
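For reference, the usual way to give the admin queue its own vector is
to reserve a pre-vector when allocating the MSI-X range, so affinity
spreading covers only the IO queues. A minimal sketch, assuming the
standard PCI IRQ affinity API rather than the exact patch:

    #include <linux/interrupt.h>
    #include <linux/pci.h>

    static int alloc_queue_irqs(struct pci_dev *pdev,
                                unsigned int nr_io_queues)
    {
            /* Vector 0 is excluded from affinity spreading and can be
             * pointed at any online CPU, so the admin queue keeps
             * working even with cpu0 offline. */
            struct irq_affinity affd = { .pre_vectors = 1 };

            return pci_alloc_irq_vectors_affinity(pdev, 1,
                            nr_io_queues + 1,
                            PCI_IRQ_ALL_TYPES | PCI_IRQ_AFFINITY, &affd);
    }

With this layout, the IO queue vectors no longer start at 0, which is
exactly what the follow-on blk-mq patch below accounts for.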
The PCI interrupt vectors intended to be associated with a queue may
not start at 0. This patch adds an offset parameter so blk-mq may find
the intended affinity mask. The default value is 0 so existing drivers
that don't care about this parameter don't need to change.
Signed-off-by: Keith Busch
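A minimal sketch of a driver using the new parameter; the surrounding
struct is a stand-in for whatever the driver keeps in set->driver_data.
With one pre-vector reserved for the admin queue, the IO queue vectors
begin at 1:

    #include <linux/blk-mq.h>
    #include <linux/blk-mq-pci.h>
    #include <linux/pci.h>

    struct my_ctrl {                    /* hypothetical driver state */
            struct pci_dev *pdev;
    };

    static int my_map_queues(struct blk_mq_tag_set *set)
    {
            struct my_ctrl *ctrl = set->driver_data;

            /* Offset 1 skips the dedicated admin vector, so blk-mq
             * reads the affinity masks of the IO queue vectors. */
            return blk_mq_pci_map_queues(set, ctrl->pdev, 1);
    }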
On 23/03/18 03:50 PM, Bjorn Helgaas wrote:
> Popping way up the stack, my original point was that I'm trying to
> remove restrictions on what devices can participate in peer-to-peer
> DMA. I think it's fairly clear that in conventional PCI, any devices
> in the same PCI hierarchy, i.e., below th
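One way to picture the "same hierarchy" condition is a walk up the
bridges: two endpoints can route to each other below the point where
their upstream chains meet. A rough illustration only, not the
in-kernel p2pdma code:

    #include <linux/pci.h>

    static bool have_common_upstream_bridge(struct pci_dev *a,
                                            struct pci_dev *b)
    {
            struct pci_dev *up_a, *up_b;

            /* Compare every upstream bridge of 'a' against every
             * upstream bridge of 'b'; a match means the two devices
             * sit in the same PCI hierarchy below that bridge. */
            for (up_a = pci_upstream_bridge(a); up_a;
                 up_a = pci_upstream_bridge(up_a))
                    for (up_b = pci_upstream_bridge(b); up_b;
                         up_b = pci_upstream_bridge(up_b))
                            if (up_a == up_b)
                                    return true;

            return false;
    }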
On Thu, Mar 22, 2018 at 10:57:32PM +0000, Stephen Bates wrote:
> > I've seen the response that peers directly below a Root Port could not
> > DMA to each other through the Root Port because of the "route to self"
> > issue, and I'm not disputing that.
>
> Bjorn
>
> You asked me for a referen
Seagate announced their split actuator SAS drive, which will probably
require some kernel changes for full support. It's targeted at cloud
provider JBODs and RAID.
Here are some of the drive's architectural points. Since the two LUNs
share many common components (e.g., the spindle), Seagate allocated so
> On 22 Mar 2018, at 18.00, Matias Bjørling wrote:
>
> On 03/22/2018 03:34 PM, Javier González wrote:
>> Hi,
>> I have been looking into a bug report about using pblk with raid5 on
>> top, and I am having trouble understanding whether the problem is in
>> pblk's bio handling or in raid5's bio assumption
On 02/05/2018 01:15 PM, Matias Bjørling wrote:
The nvme driver sets up the size of the nvme namespace in two steps.
First it initializes the device with standard logical block and
metadata sizes, and then sets the correct logical block and metadata
size. Because the OCSSD 2.0 specification relies
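A minimal sketch of the two-step pattern being described; the helper
names here are hypothetical, not the driver's actual symbols:

    #include <linux/blkdev.h>
    #include <linux/genhd.h>

    /* Step 1 (hypothetical helper): bring the namespace up with a
     * standard 512-byte logical block and no capacity yet. */
    static void ns_set_default_format(struct gendisk *disk)
    {
            blk_queue_logical_block_size(disk->queue, 512);
            set_capacity(disk, 0);
    }

    /* Step 2 (hypothetical helper): apply the logical block size and
     * namespace size reported by the device. */
    static void ns_apply_identify(struct gendisk *disk,
                                  unsigned int lba_size, sector_t nsze)
    {
            blk_queue_logical_block_size(disk->queue, lba_size);
            /* capacity is in 512-byte sectors */
            set_capacity(disk, nsze * (lba_size >> 9));
    }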
For the sysfs functions, the function names are embedded into their
error strings. If the function name later changes, the string may
not be updated accordingly. Update the strings to use __func__
to avoid this.
Signed-off-by: Matias Bjørling
---
drivers/nvme/host/lightnvm.c | 12 ++--
1 file changed, 6 insertions(+), 6 deletions(-)
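The pattern looks like this (hedged illustration; the message text is a
stand-in, not one of lightnvm's actual strings):

    #include <linux/device.h>
    #include <linux/kernel.h>

    static ssize_t nvm_dev_attr_show(struct device *dev,
                                     struct device_attribute *attr,
                                     char *page)
    {
            /* before: pr_err("nvm: nvm_dev_attr_show: ...") would go
             * stale if the function were ever renamed */
            pr_err("nvm: %s: unknown attribute\n", __func__);
            return -EINVAL;
    }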