On Wed, Aug 23, 2017 at 12:56 PM, Dave Jiang <dave.ji...@intel.com> wrote:
>
>
> On 08/23/2017 11:39 AM, Dan Williams wrote:
>> On Mon, Aug 21, 2017 at 2:11 PM, Dave Jiang <dave.ji...@intel.com> wrote:
>>> Adding a DMA supported blk-mq driver for pmem.
>>
>> "Add support for offloading pmem block-device I/O operations to a DMA 
>> engine."
>>
>>> This provides signficant CPU
>>
>> *significant
>>
>>> utilization reduction.
>>
>> "at the cost of some increased latency and bandwidth reduction in some 
>> cases."
>>
>>> By default the pmem driver will be using blk-mq with
>>
>> "By default the current cpu-copy based pmem driver will load, but this
>> driver can be manually selected with a modprobe configuration."
>>
>>> DMA through the dmaengine API. DMA can be turned off with use_dma=0 kernel
>>> parameter.
>>
>> Do we need the module option? It seems for debug / testing a user can
>> simply unload the ioatdma driver, otherwise we should use dma by
>> default.
>>
>>> Additional kernel parameters are provided:
>>>
>>> queue_depth: The queue depth for blk-mq. Typically in relation to what the
>>>              DMA engine can provide per queue/channel. This needs to take
>>>              num_sg into account as well for some DMA engines, i.e.
>>>              num_sg * queue_depth < total descriptors available per queue or
>>>              channel.
>>>
>>> q_per_node: Hardware queues per node. Typically the number of channels the
>>>             DMA engine can provide per socket.
>>> num_sg: Number of scatterlist entries we can handle per I/O request.
>>
>> Why do these need to be configurable?
>
> The concern is that other arches/platforms have different DMA
> engines, so the configuration would be platform dependent.

...but these are answers we should be able to get from dmaengine and
the specific DMA drivers in use. An end user has no chance of guessing
the right values.
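
For q_per_node, something along these lines could work (a rough sketch
only, not against the actual patch; pmem_dma_queues_on_node is a
made-up helper and the 16-channel cap is arbitrary): count the
DMA_MEMCPY-capable channels that sit on the same NUMA node as the pmem
region and use that as the number of hardware queues, instead of
taking it from the user.

#include <linux/dmaengine.h>
#include <linux/device.h>
#include <linux/kernel.h>

/* Hypothetical helper: derive hw queue count from local memcpy channels. */
static int pmem_dma_queues_on_node(int node)
{
        struct dma_chan *chans[16];     /* arbitrary cap for the sketch */
        dma_cap_mask_t mask;
        int i, nr = 0, count = 0;

        dma_cap_zero(mask);
        dma_cap_set(DMA_MEMCPY, mask);

        /* Grab the available memcpy channels and check their locality. */
        while (nr < ARRAY_SIZE(chans)) {
                struct dma_chan *chan = dma_request_channel(mask, NULL, NULL);

                if (!chan)
                        break;
                chans[nr++] = chan;
                if (dev_to_node(chan->device->dev) == node)
                        count++;
        }

        for (i = 0; i < nr; i++)
                dma_release_channel(chans[i]);

        return count ? count : 1;
}

queue_depth / num_sg are harder; as far as I can tell dmaengine doesn't
expose a per-channel descriptor count today, but that argues for adding
such a capability rather than asking the end user to guess.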