Make blkdev_issue_flush() behave correctly according to its required
semantics - all volatile cached data is flushed to stable storage.

Eventually this needs to be replaced with something much more precise that
tracks dirty DAX entries via the radix tree in struct address_space, but for
now this gives us correctness even if the performance is quite bad: every
flush writes back the CPU cache for the entire namespace.
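
As a rough illustration (not part of this patch; the function, file name and
sizes are made up), this is the coarse path the change covers: an application
persists an mmap'd update with msync()/fsync(), the filesystem's sync path
typically ends in blkdev_issue_flush(), and the resulting REQ_FLUSH bio is
what the pmem driver now handles with wb_cache_pmem():

#include <fcntl.h>
#include <string.h>
#include <unistd.h>
#include <sys/mman.h>

int persist_update(const char *path, const char *buf, size_t len)
{
	char *p;
	int fd, ret;

	fd = open(path, O_RDWR);
	if (fd < 0)
		return -1;

	p = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
	if (p == MAP_FAILED) {
		close(fd);
		return -1;
	}

	memcpy(p, buf, len);

	/*
	 * MS_SYNC goes through the filesystem's fsync path, which
	 * typically finishes with blkdev_issue_flush(); that is the
	 * REQ_FLUSH bio handled below in pmem_make_request().
	 */
	ret = msync(p, len, MS_SYNC);

	munmap(p, len);
	close(fd);
	return ret;
}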

Userspace applications looking to avoid the fsync/msync penalty should
consider more fine-grained flushing via the NVML library:

https://github.com/pmem/nvml
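
For comparison, a hedged sketch of that finer-grained path using libpmem from
NVML (the function, file layout and fallback handling are illustrative): map a
DAX file, dirty only the bytes that changed, and flush just those cache lines
with pmem_persist() instead of paying for a full-device writeback:

#include <fcntl.h>
#include <string.h>
#include <unistd.h>
#include <sys/mman.h>
#include <libpmem.h>

int update_record(const char *path, size_t off, const void *buf, size_t len)
{
	char *base;
	int fd;

	fd = open(path, O_RDWR);
	if (fd < 0)
		return -1;

	base = mmap(NULL, off + len, PROT_READ | PROT_WRITE, MAP_SHARED,
		    fd, 0);
	close(fd);
	if (base == MAP_FAILED)
		return -1;

	memcpy(base + off, buf, len);

	if (pmem_is_pmem(base, off + len))
		pmem_persist(base + off, len);	/* flush + fence, no syscall */
	else
		pmem_msync(base + off, len);	/* fall back to msync() */

	munmap(base, off + len);
	return 0;
}

Built against libpmem (-lpmem), pmem_persist() on a true DAX mapping stays
entirely in userspace, which is the fsync/msync penalty mentioned above.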

Signed-off-by: Ross Zwisler <ross.zwis...@linux.intel.com>
---
 drivers/nvdimm/pmem.c | 10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)

diff --git a/drivers/nvdimm/pmem.c b/drivers/nvdimm/pmem.c
index 0ba6a97..eea7997 100644
--- a/drivers/nvdimm/pmem.c
+++ b/drivers/nvdimm/pmem.c
@@ -80,7 +80,14 @@ static void pmem_make_request(struct request_queue *q, struct bio *bio)
        if (do_acct)
                nd_iostat_end(bio, start);
 
-       if (bio_data_dir(bio))
+       if (bio->bi_rw & REQ_FLUSH) {
+               void __pmem *addr = pmem->virt_addr + pmem->data_offset;
+               size_t size = pmem->size - pmem->data_offset;
+
+               wb_cache_pmem(addr, size);
+       }
+
+       if (bio_data_dir(bio) || (bio->bi_rw & REQ_FLUSH))
                wmb_pmem();
 
        bio_endio(bio);
@@ -189,6 +196,7 @@ static int pmem_attach_disk(struct device *dev,
        blk_queue_physical_block_size(pmem->pmem_queue, PAGE_SIZE);
        blk_queue_max_hw_sectors(pmem->pmem_queue, UINT_MAX);
        blk_queue_bounce_limit(pmem->pmem_queue, BLK_BOUNCE_ANY);
+       blk_queue_flush(pmem->pmem_queue, REQ_FLUSH);
        queue_flag_set_unlocked(QUEUE_FLAG_NONROT, pmem->pmem_queue);
 
        disk = alloc_disk(0);
-- 
2.1.0
