RE: [PATCH v4] cxlflash: Base support for IBM CXL Flash Adapter

2015-06-08 Thread Stephen Bates
Hi, I just wanted to add my support for this patchset. There are others considering implementing block IO devices that use the CAPI (CXL) interface, and having this patch set upstream will be very useful in the future. Supported-by: Stephen Bates Cheers Stephen

Re: [LSF/MM TOPIC][LSF/MM ATTEND] NAPI polling for block drivers

2017-01-11 Thread Stephen Bates
>> I'd like to attend LSF/MM and would like to discuss polling for block drivers.
>>
>> Currently there is blk-iopoll, but it is not as widely used as NAPI is in the networking field, and according to Sagi's findings in [1] performance with polling is not on par with IRQ usage.
>>
>> On …
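For context, NAPI-style interrupt mitigation means the IRQ handler masks the device interrupt and defers completion handling to a budgeted poll loop, falling back to interrupts once the queue drains. Below is a minimal user-space sketch of that pattern only; all names (cq_t, cq_pop, irq_enable, POLL_BUDGET) are hypothetical stand-ins, not the kernel's NAPI or blk-iopoll API.

#include <stdbool.h>
#include <stdio.h>

#define POLL_BUDGET 64                  /* max completions handled per poll pass */

typedef struct { int pending; } cq_t;   /* stand-in for a completion queue */

static bool cq_pop(cq_t *cq)            /* consume one completion, if any */
{
	if (cq->pending == 0)
		return false;
	cq->pending--;
	return true;
}

static void irq_enable(cq_t *cq)        /* unmask the device interrupt */
{
	(void)cq;
}

/*
 * Runs in softirq-like context after the IRQ handler masked the interrupt
 * and scheduled us.  Returns true if we exhausted the budget and must be
 * rescheduled; false once the queue is drained and IRQs are re-armed.
 */
static bool cq_poll(cq_t *cq)
{
	int done = 0;

	while (done < POLL_BUDGET && cq_pop(cq))
		done++;

	if (done == POLL_BUDGET)
		return true;            /* more work: stay in polling mode */

	irq_enable(cq);                 /* drained: fall back to interrupts */
	return false;
}

int main(void)
{
	cq_t cq = { .pending = 150 };
	int passes = 1;

	while (cq_poll(&cq))            /* simulate repeated softirq passes */
		passes++;
	printf("drained in %d polling passes\n", passes);
	return 0;
}

The budget is what bounds per-pass work and keeps one busy queue from starving the rest of the system, which is the property NAPI brought to networking.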

[LSF/MM TOPIC][LSF/MM ATTEND] IO completion polling for block drivers

2017-01-11 Thread Stephen Bates
… CPU load and average completion times [2]. Stephen Bates

[1] http://marc.info/?l=linux-block&m=146307410101827&w=2
[2] http://marc.info/?l=linux-block&m=147803441801858&w=2
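The trade-off these numbers capture is completion latency versus CPU load: a polled wait spins on the completion state instead of sleeping until an IRQ arrives. A minimal sketch of the polled side, with hypothetical names (a real block driver would spin on the device's completion queue, e.g. an NVMe CQ, not a flag):

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

/* Pretend to kick off an IO; the completion lands in *done. */
static void submit_io(atomic_bool *done)
{
	atomic_store_explicit(done, true, memory_order_release);
}

/*
 * Polled wait: spin on the completion instead of sleeping for an IRQ.
 * Completion latency drops (no interrupt + wakeup path), but CPU load
 * rises because the core burns cycles for the whole device round-trip.
 */
static void wait_polled(atomic_bool *done)
{
	while (!atomic_load_explicit(done, memory_order_acquire))
		;       /* spin */
}

int main(void)
{
	atomic_bool done = false;

	submit_io(&done);
	wait_polled(&done);
	printf("IO complete\n");
	return 0;
}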

Re: [LSF/MM TOPIC][LSF/MM ATTEND] NAPI polling for block drivers

2017-01-11 Thread Stephen Bates
> This is a separate topic. The initial proposal is about polling for interrupt mitigation; you are talking about polling in the context of polling for completion of an IO.
>
> We can definitely talk about this form of polling as well, but it should be a separate topic and probably proposed …

[LSF/MM TOPIC] BPF for Block Devices

2019-02-07 Thread Stephen Bates
Hi All

> A BPF track will join the annual LSF/MM Summit this year! Please read the updated description and CFP information below.

Well, if we are adding BPF to LSF/MM I have to submit a request to discuss BPF for block devices, please! There has been quite a bit of activity around the concept …
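For a sense of what BPF can already see in the block layer today, the sketch below counts request issues per device from the existing block:block_rq_issue tracepoint, in libbpf/clang style. The context struct layout is assumed from the tracepoint's format file, and the LSF/MM proposal is about going further than this kind of read-only observation (toward hooks that could filter or steer IO), none of which is assumed here.

#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

struct {
	__uint(type, BPF_MAP_TYPE_HASH);
	__uint(max_entries, 64);
	__type(key, __u32);     /* dev_t of the request's disk */
	__type(value, __u64);   /* number of requests issued */
} issue_count SEC(".maps");

/* Minimal view of the tracepoint buffer: 8 bytes of common header,
 * then the dev field (see its tracefs format file). */
struct block_rq_issue_args {
	__u64 common;
	__u32 dev;
	/* sector, nr_sector, ... unused in this sketch */
};

SEC("tracepoint/block/block_rq_issue")
int count_rq_issue(struct block_rq_issue_args *ctx)
{
	__u32 dev = ctx->dev;
	__u64 one = 1, *cnt;

	cnt = bpf_map_lookup_elem(&issue_count, &dev);
	if (cnt)
		__sync_fetch_and_add(cnt, 1);
	else
		bpf_map_update_elem(&issue_count, &dev, &one, BPF_ANY);
	return 0;
}

char LICENSE[] SEC("license") = "GPL";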

Re: [RFC 6/8] nvmet: Be careful about using iomem accesses when dealing with p2pmem

2017-04-07 Thread Stephen Bates
On 2017-04-06, 6:33 AM, "Sagi Grimberg" wrote:

> Say it's connected via 2 legs: the BAR is accessed from leg A and the data from the disk comes via leg B. In this case, the data is heading towards the p2p device via leg B (which might be congested), the completion goes directly to the RC, and the …
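The correctness issue behind this sub-thread's subject line is that p2pmem lives in a PCI BAR, so the mapping is __iomem and must be touched with the io accessors rather than plain memcpy. A minimal kernel-style sketch of that distinction (the struct and flag are hypothetical illustrations, not the actual nvmet patch):

#include <linux/io.h>
#include <linux/string.h>
#include <linux/types.h>

struct xfer_buf {
	bool is_iomem;                  /* hypothetical: buffer is p2pmem BAR space */
	union {
		void *cpu_addr;         /* regular, cacheable host memory */
		void __iomem *io_addr;  /* ioremap()ed PCI BAR space */
	};
};

static void copy_to_buf(struct xfer_buf *b, const void *src, size_t len)
{
	if (b->is_iomem)
		memcpy_toio(b->io_addr, src, len);   /* iomem-safe accessor */
	else
		memcpy(b->cpu_addr, src, len);       /* host memory only */
}

On some architectures a plain memcpy against mapped BAR space can generate access sizes or orderings the bus does not support, which is why the accessor split matters even when it happens to work on x86.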

Re: [RFC 0/8] Copy Offload with Peer-to-Peer PCI Memory

2017-04-20 Thread Stephen Bates
> Yes, this makes sense. I think we really just want to distinguish host memory or not in terms of the dev_pagemap type.

I would like to see mutually exclusive flags for host memory (or not) and persistence (or not). Stephen
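A standalone sketch of the flag scheme being suggested here, with two independent axes and exactly one value required on each; the names are hypothetical and are not the kernel's actual MEMORY_DEVICE_* types:

#include <assert.h>
#include <stdbool.h>

#define PGMAP_HOST_MEM   (1u << 0)      /* backed by host memory */
#define PGMAP_DEV_MEM    (1u << 1)      /* backed by device (e.g. PCI BAR) memory */
#define PGMAP_VOLATILE   (1u << 2)
#define PGMAP_PERSISTENT (1u << 3)

/* Exactly one choice per axis: host/device and volatile/persistent. */
static bool pgmap_flags_valid(unsigned int f)
{
	bool one_loc  = !(f & PGMAP_HOST_MEM) != !(f & PGMAP_DEV_MEM);
	bool one_pers = !(f & PGMAP_VOLATILE) != !(f & PGMAP_PERSISTENT);

	return one_loc && one_pers;
}

int main(void)
{
	assert(pgmap_flags_valid(PGMAP_DEV_MEM | PGMAP_VOLATILE));    /* p2pmem */
	assert(pgmap_flags_valid(PGMAP_HOST_MEM | PGMAP_PERSISTENT)); /* pmem */
	assert(!pgmap_flags_valid(PGMAP_HOST_MEM | PGMAP_DEV_MEM));   /* ambiguous */
	return 0;
}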

Re: [RFC 0/8] Copy Offload with Peer-to-Peer PCI Memory

2017-04-20 Thread Stephen Bates
>> Yes, this makes sense. I think we really just want to distinguish host memory or not in terms of the dev_pagemap type.
>>
>> I would like to see mutually exclusive flags for host memory (or not) and persistence (or not).
>
> Why persistence? It has zero meaning to the mm.

I like the idea …

Re: [RFC 0/8] Copy Offload with Peer-to-Peer PCI Memory

2017-04-25 Thread Stephen Bates
> My first reflex when reading this thread was to think that this whole domain lends itself excellently to testing via QEMU. Could it be that doing this in the opposite direction might be a safer approach in the long run, even though (significantly) more work up-front?

While the idea of QEMU …

Re: [RFC 0/8] Copy Offload with Peer-to-Peer PCI Memory

2017-04-25 Thread Stephen Bates
>> Yes, that's why I used 'significant'. One good thing is that, given resources, it can easily be done in parallel with other development, and will give additional insight of some form.
>
> Yup, well if someone wants to start working on an emulated RDMA device that actually simulates proper …