dmaengine support for PMEM

2018-08-21 Thread Stephen Bates
Hi Dave

I hope you are well. Logan and I were looking at adding DMA support to PMEM and
were then informed that you had proposed some patches to do just that for the
ioat DMA engine. The latest version of those I can see is v7 from August 2017.
Is there a more recent version? What happened to that series?

https://lists.01.org/pipermail/linux-nvdimm/2017-August/012208.html

Cheers
 
Stephen
 



Re: dmaengine support for PMEM

2018-08-21 Thread Dave Jiang


On 08/21/2018 10:37 AM, Stephen Bates wrote:
> Hi Dave
> 
> I hope you are well. Logan and I were looking at adding DMA support to PMEM
> and were then informed that you had proposed some patches to do just that for
> the ioat DMA engine. The latest version of those I can see is v7 from August
> 2017. Is there a more recent version? What happened to that series?
> 
> https://lists.01.org/pipermail/linux-nvdimm/2017-August/012208.html
> 
> Cheers
>  
> Stephen
>  
> 

Hi guys. Nothing has happened yet; it's just on hold for now.

Here's where I left it last:
https://git.kernel.org/pub/scm/linux/kernel/git/djiang/linux.git/log/?h=pmem_blk_dma

I do think we need to do some rework of dmaengine in order to get
better efficiency as well. At some point I would like to see a call in
dmaengine that takes a whole request (similar to blk-mq) and operates
on it, submitting all the descriptors in a single call. I think that
could let us deprecate the whole host of function pointers dmaengine
has today. I'm hoping to find some time to look at some of this work
towards the end of the year. But I'd be highly interested if you guys
have ideas and thoughts on this topic. And you're welcome to take my
patches and run with them.
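
To make that concrete, here's roughly the per-segment dance a client has
to go through today, with each step bouncing through its own driver
function pointer (a sketch only, not code from the branch above; the
callback name is made up):

#include <linux/dmaengine.h>

/*
 * One prep/submit/issue round trip per segment. A request-level call
 * could collapse all of this into a single driver entry point.
 */
static int pmem_dma_copy_seg(struct dma_chan *chan, dma_addr_t dst,
			     dma_addr_t src, size_t len,
			     dma_async_tx_callback done, void *arg)
{
	struct dma_async_tx_descriptor *tx;
	dma_cookie_t cookie;

	tx = dmaengine_prep_dma_memcpy(chan, dst, src, len,
				       DMA_PREP_INTERRUPT);
	if (!tx)
		return -ENOMEM;

	tx->callback = done;	/* e.g. a made-up pmem_dma_done() */
	tx->callback_param = arg;

	cookie = dmaengine_submit(tx);
	if (dma_submit_error(cookie))
		return -EIO;

	dma_async_issue_pending(chan);	/* kick the hardware */
	return 0;
}

A request-level entry point would take the whole request and do the
segment walk inside the driver, instead of making the client repeat this
once per descriptor.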


Re: dmaengine support for PMEM

2018-08-21 Thread Stephen Bates
> Here's where I left it last:
>
> https://git.kernel.org/pub/scm/linux/kernel/git/djiang/linux.git/log/?h=pmem_blk_dma

Thanks Dave. I'll certainly rebase these on 4.18.x and do some testing! 

> I do think we need to do some rework of dmaengine in order to get
> better efficiency as well. At some point I would like to see a call in
> dmaengine that takes a whole request (similar to blk-mq) and operates
> on it, submitting all the descriptors in a single call. I think that
> could let us deprecate the whole host of function pointers dmaengine
> has today. I'm hoping to find some time to look at some of this work
> towards the end of the year. But I'd be highly interested if you guys
> have ideas and thoughts on this topic. And you're welcome to take my
> patches and run with them.

OK, we were experimenting with a single PMEM driver that decides between DMA
and memcpy based on I/O size, rather than forcing the user to choose which
driver to use.
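
Something along these lines, purely as a sketch (the 16k cutoff and
pmem_copy_dma() are made-up names, not from your branch):

#include <linux/highmem.h>
#include <linux/string.h>

#define PMEM_DMA_THRESHOLD	(16 * 1024)	/* made-up cutoff, needs tuning */

/* Small copies stay on the CPU; larger ones go to the DMA engine. */
static void pmem_do_copy(void *pmem_addr, struct page *page,
			 unsigned int off, unsigned int len)
{
	if (len < PMEM_DMA_THRESHOLD) {
		void *mem = kmap_atomic(page);

		memcpy_flushcache(pmem_addr, mem + off, len);
		kunmap_atomic(mem);
	} else {
		pmem_copy_dma(pmem_addr, page, off, len);	/* made up */
	}
}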

Stephen




Re: dmaengine support for PMEM

2018-08-21 Thread Logan Gunthorpe



On 21/08/18 12:11 PM, Dave Jiang wrote:
> 
> 
> On 08/21/2018 11:07 AM, Stephen Bates wrote:
>>> Here's where I left it last:
>>>
>>> https://git.kernel.org/pub/scm/linux/kernel/git/djiang/linux.git/log/?h=pmem_blk_dma
>>
>> Thanks Dave. I'll certainly rebase these on 4.18.x and do some testing! 
>> 
>>> I do think we need to do some rework of dmaengine in order to get
>>> better efficiency as well. At some point I would like to see a call in
>>> dmaengine that takes a whole request (similar to blk-mq) and operates
>>> on it, submitting all the descriptors in a single call. I think that
>>> could let us deprecate the whole host of function pointers dmaengine
>>> has today. I'm hoping to find some time to look at some of this work
>>> towards the end of the year. But I'd be highly interested if you guys
>>> have ideas and thoughts on this topic. And you're welcome to take my
>>> patches and run with them.
>>
>> OK, we were experimenting with a single PMEM driver that decides between
>> DMA and memcpy based on I/O size, rather than forcing the user to choose
>> which driver to use.
> 
> Oh yeah. Also, I think what we discovered is that the block layer will
> not hand us segments larger than 4k in the scatterlists. So unless your
> DMA engine is very efficient at processing 4k transfers you don't get
> great performance. Not sure how to get around that, since existing DMA
> engines tend to prefer larger buffers for better performance.

Yeah, that's exactly what we were running up against. Then we found your
patch set, which dealt with a lot of the problems we were seeing.
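
For reference, the walk in question looks roughly like this; on the
kernels we were testing each bvec covers at most one page, so a 1MB
write still turns into 256 separate 4k descriptors (pmem_dma_copy_bvec()
is a made-up helper):

#include <linux/bio.h>

/* Sketch of a bio-based submission path feeding a DMA engine. */
static void pmem_submit_bio_dma(struct bio *bio)
{
	struct bio_vec bvec;
	struct bvec_iter iter;

	bio_for_each_segment(bvec, bio, iter) {
		/* bvec.bv_len never exceeds PAGE_SIZE here */
		pmem_dma_copy_bvec(&bvec, iter.bi_sector);	/* made up */
	}
}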

From a code perspective, I like the split modules, but I guess it puts a
burden on the user to blacklist one or the other to choose DMA or not,
which may depend on the workload.
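
i.e. something like this on the user's side (module names are
hypothetical, I don't recall what the branch actually calls them):

# /etc/modprobe.d/pmem.conf
# Keep the DMA-capable pmem driver from binding so the CPU-copy one
# takes over; swap the name to get the opposite behaviour.
blacklist pmem_dma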

Logan