> -----Original Message-----
> From: Linux-nvdimm [mailto:linux-nvdimm-boun...@lists.01.org] On Behalf Of
> Dave Jiang
> Sent: Monday, August 7, 2017 11:39 AM
> To: vinod.k...@intel.com; dan.j.willi...@intel.com
> Cc: dmaeng...@vger.kernel.org; linux-nvdimm@lists.01.org
> Subject: [PATCH v4 0/8] Adding blk-mq and DMA support to pmem block driver
> 
...
> The following series adds blk-mq support to the pmem block driver, along
> with infrastructure code in ioatdma and dmaengine to support copying to
> and from scatterlists when processing block requests submitted through
> blk-mq. Using the DMA engines available on certain platforms drastically
> reduces CPU utilization while maintaining good enough performance.
> Experiments on a DRAM-backed pmem block device showed that using the DMA
> engine is beneficial. Users can revert to the original behavior by
> passing queue_mode=0 to the nd_pmem kernel module if desired.
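
(For context, the dmaengine path described above boils down to a
prep/submit/issue sequence per copy. A minimal sketch of one offloaded
copy, assuming a channel was already obtained with dma_request_chan() and
the buffers are already DMA-mapped; the function name pmem_dma_copy_one is
purely illustrative, not code from the series:)

#include <linux/dmaengine.h>

/* Illustrative only: offload a single memcpy to a dmaengine channel. */
static int pmem_dma_copy_one(struct dma_chan *chan, dma_addr_t dst,
			     dma_addr_t src, size_t len)
{
	struct dma_async_tx_descriptor *tx;
	dma_cookie_t cookie;

	tx = dmaengine_prep_dma_memcpy(chan, dst, src, len,
				       DMA_PREP_INTERRUPT);
	if (!tx)
		return -ENOMEM;		/* caller falls back to CPU copy */

	cookie = dmaengine_submit(tx);
	if (dma_submit_error(cookie))
		return -EIO;

	dma_async_issue_pending(chan);
	/* completion is reported asynchronously via the descriptor callback */
	return 0;
}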

This needs the same error handling as memcpy_mcsafe():
* if pmem is the source, skip over known bad addresses already in the
  ARS list (don't purposely force the DMA engine to run into errors)
* add any newly detected bad addresses that the DMA engine finds to
  the ARS list (so they can be avoided in the future)
* if pmem is the destination, clear those addresses from the ARS list
  (since fresh new data is being written)
A rough mapping of these rules onto kernel helpers is sketched after
this list.
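
(Concretely, the three rules above map fairly naturally onto the kernel's
badblocks helpers - badblocks_check/set/clear - which back the ARS-derived
bad-block list in the pmem driver. A rough sketch, assuming @bb is the
driver's badblocks instance, e.g. pmem->bb; the pmem_* wrapper names are
mine, not from the driver:)

#include <linux/badblocks.h>

/* Source is pmem: don't feed the DMA engine a range already known bad. */
static bool pmem_range_is_bad(struct badblocks *bb, sector_t sector, int nr)
{
	sector_t first_bad;
	int num_bad;

	return badblocks_check(bb, sector, nr, &first_bad, &num_bad) != 0;
}

/* The DMA engine tripped over a new uncorrectable error: record it. */
static void pmem_note_new_badblock(struct badblocks *bb, sector_t sector,
				   int nr)
{
	badblocks_set(bb, sector, nr, 1 /* acknowledged */);
}

/* Destination is pmem: fresh data overwrites the range, so forget it. */
static void pmem_forget_badblock(struct badblocks *bb, sector_t sector,
				 int nr)
{
	badblocks_clear(bb, sector, nr);
}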

If the DMA engine handles uncorrectable memory errors well and the
CPU does not survive UCEs, it would be preferable to use the DMA
engine instead of the CPU for all transfers - not just large
transfers.  

---
Robert Elliott, HPE Persistent Memory

