On Thu, Oct 29, 2015 at 7:24 AM, Jeff Moyer <jmo...@redhat.com> wrote:
> Ross Zwisler <ross.zwis...@linux.intel.com> writes:
>
>> This series implements the very slow but correct handling for
>> blkdev_issue_flush() with DAX mappings, as discussed here:
>>
>> https://lkml.org/lkml/2015/10/26/116
>>
>> I don't think that we can actually do the
>>
>>   on_each_cpu(sync_cache, ...);
>>
>> ...where sync_cache is something like:
>>
>>   cache_disable();
>>   wbinvd();
>>   pcommit();
>>   cache_enable();
>>
>> solution as proposed by Dan because WBINVD + PCOMMIT doesn't guarantee
>> that your writes actually make it durably onto the DIMMs.  I believe you
>> really do need to loop through the cache lines, flush them with CLWB,
>> then fence and PCOMMIT.
>
> *blink*
> *blink*
>
> So much for not violating the principle of least surprise.  I suppose
> you've asked the hardware folks, and they've sent you down this path?
The SDM states that wbinvd only asynchronously "signals" L3 to flush.

>> I do worry that the cost of blindly flushing the entire PMEM namespace
>> on each fsync or msync will be prohibitively expensive, and that we'll
>> be very incentivized to move to the radix tree based dirty page
>> tracking as soon as possible. :)
>
> Sure, but wbinvd would be quite costly as well.  Either way I think a
> better solution will be required in the near term.

As Peter points out, the irqoff latency that wbinvd introduces also makes
it not optimal.
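
For reference, the "slow but correct" sequence Ross describes (walk the
range one cache line at a time with CLWB, fence, then PCOMMIT) would look
roughly like the sketch below.  Illustrative only, not the actual patch:
flush_pmem_range() is a made-up name, and it leans on the x86 clwb() /
pcommit_sfence() helpers and boot_cpu_data.x86_clflush_size for the cache
line size.

#include <linux/types.h>        /* size_t */
#include <asm/barrier.h>        /* wmb() */
#include <asm/processor.h>      /* boot_cpu_data */
#include <asm/special_insns.h>  /* clwb(), pcommit_sfence() */

/* Hypothetical helper: durably write back [addr, addr + len) to pmem. */
static void flush_pmem_range(void *addr, size_t len)
{
        unsigned long clsize = boot_cpu_data.x86_clflush_size;
        void *end = addr + len;
        void *p = (void *)((unsigned long)addr & ~(clsize - 1));

        /* write back every cache line covering the range */
        for (; p < end; p += clsize)
                clwb(p);

        wmb();                  /* order the CLWBs ahead of PCOMMIT */
        pcommit_sfence();       /* PCOMMIT + SFENCE: push writes to the DIMMs */
}

Whether walking a whole namespace this way ends up cheaper than wbinvd is
exactly the cost question above, which is what the radix tree based dirty
page tracking would avoid.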