On Thu, Jan 19, 2017 at 8:40 PM, Xiong Zhou <[email protected]> wrote:
> Hi,
>
> At first, I am not sure whether this is an issue.
>
> mmap a file in a DAX mountpoint, open another file
> in a non-DAX mountpoint with O_DIRECT, write the
> mapped area to the other file.
>
> This write succeeds on a pmem ramdisk (memmap=2G!20G style).
> This write fails (Bad address) on nvdimm pmem devices.
> This write fails (Bad address) on a brd-based ramdisk.
>
> If we skip the O_DIRECT flag, all tests pass.
>
> If we write from DAX to DAX, all tests pass.
> If we write from non-DAX to DAX, all tests pass.
>
> Kernel version: Linus tree commit 44b4b46.
>
> I have checked back to v4.6, testing on nvdimm devices,
> all with the same results. I do remember that this test
> passed on nvdimms back in May 2016, and I have some
> notes for that. However, things have changed a lot since
> then: the test scripts, the kernel code, even the nvdimm
> and machine firmware.
>
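
For reference, the reported reproduction can be sketched in a few lines of Python. The paths are placeholders standing in for a file on the DAX mount and a file on the non-DAX mount; the EFAULT comment reflects the behavior described in this thread:

```python
import mmap
import os

PAGE = mmap.PAGESIZE

def copy_mapped_to_odirect(src_path, dst_path, length=None):
    """mmap src_path, then write the mapping into dst_path opened O_DIRECT."""
    src_fd = os.open(src_path, os.O_RDONLY)
    try:
        length = length or os.fstat(src_fd).st_size
        buf = mmap.mmap(src_fd, length, prot=mmap.PROT_READ)
    finally:
        os.close(src_fd)
    try:
        # O_DIRECT requires an aligned buffer: the mmap'ed region is
        # page-aligned, and `length` should be a multiple of the logical
        # block size of the destination device.
        dst_fd = os.open(dst_path,
                         os.O_WRONLY | os.O_CREAT | os.O_TRUNC | os.O_DIRECT,
                         0o600)
        try:
            # When the source mapping is DAX on a raw-mode namespace,
            # this write fails with EFAULT ("Bad address"): the kernel
            # cannot pin pages that have no struct page backing.
            return os.write(dst_fd, buf)
        finally:
            os.close(dst_fd)
    finally:
        buf.close()
```

Called as, say, `copy_mapped_to_odirect("/mnt/dax/src", "/mnt/ext4/dst")` (hypothetical mount points), this reproduces the failing case; dropping `os.O_DIRECT` corresponds to the passing non-direct variant.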

This is expected and is the difference between a namespace in "raw"
mode and a namespace in "memory" mode. An O_DIRECT write must pin the
source pages with get_user_pages(), which requires struct page entries
that a raw-mode namespace does not have. You can check your namespace's
mode with "ndctl list" (ndctl is packaged in Fedora).
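
That check is easy to script. The sample below is illustrative only: `ndctl list` emits JSON (in a real script you would capture it with something like `subprocess.check_output(["ndctl", "list"])`), but the exact fields on your system may differ:

```python
import json

# Illustrative stand-in for the JSON printed by "ndctl list".
SAMPLE = '[{"dev":"namespace0.0", "mode":"raw", "size":2147483648}]'

def namespace_mode(ndctl_json, name):
    """Return the mode of the named namespace, or None if it is absent."""
    data = json.loads(ndctl_json)
    if isinstance(data, dict):  # a single namespace may print as one object
        data = [data]
    return next((ns.get("mode") for ns in data if ns.get("dev") == name), None)
```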

The reason memmap=ss!nn namespaces work by default is that we assume
they are relatively small and can afford to allocate struct page in
system memory. We don't make the same assumption for NFIT-defined
namespaces. They might be so large that trying to allocate struct page
for them could consume all of system memory. So you have to convert
them to "memory" mode and decide at that time whether to use a portion
of the pmem capacity as struct page storage, or to allocate struct
page from system memory. By default ndctl will opt to reserve space
from pmem, with a command like:

    ndctl create-namespace --reconfig=namespace0.0 --mode=memory --force
_______________________________________________
Linux-nvdimm mailing list
[email protected]
https://lists.01.org/mailman/listinfo/linux-nvdimm
