Re: [PATCH 17/17] xfs: support for synchronous DAX faults

2017-10-24 Thread Dave Chinner
On Tue, Oct 24, 2017 at 05:24:14PM +0200, Jan Kara wrote:
> From: Christoph Hellwig 
> 
> Return IOMAP_F_DIRTY from xfs_file_iomap_begin() when asked to prepare
> blocks for writing and the inode is pinned, and has dirty fields other
> than the timestamps.

That's "fdatasync dirty", not "fsync dirty".

IOMAP_F_DIRTY needs a far better description of its semantics than
"/* block mapping is not yet on persistent storage */" so we know
exactly what filesystems are supposed to be implementing here. I
suspect that what it really is meant to say is:

/*
 * IOMAP_F_DIRTY indicates the inode has uncommitted metadata needed to
 * access the written data and requires fdatasync to commit it to
 * persistent storage.
 */

[...]

> diff --git a/fs/xfs/xfs_iomap.c b/fs/xfs/xfs_iomap.c
> index f179bdf1644d..b43be199fbdf 100644
> --- a/fs/xfs/xfs_iomap.c
> +++ b/fs/xfs/xfs_iomap.c
> @@ -33,6 +33,7 @@
>  #include "xfs_error.h"
>  #include "xfs_trans.h"
>  #include "xfs_trans_space.h"
> +#include "xfs_inode_item.h"
>  #include "xfs_iomap.h"
>  #include "xfs_trace.h"
>  #include "xfs_icache.h"
> @@ -1086,6 +1087,10 @@ xfs_file_iomap_begin(
>   trace_xfs_iomap_found(ip, offset, length, 0, &imap);
>   }
>  
> + if ((flags & IOMAP_WRITE) && xfs_ipincount(ip) &&
> + (ip->i_itemp->ili_fsync_fields & ~XFS_ILOG_TIMESTAMP))
> + iomap->flags |= IOMAP_F_DIRTY;

This is the very definition of an inode that is "fdatasync dirty".

Hmmm, shouldn't this also be set for read faults, too?

Cheers,

Dave.
-- 
Dave Chinner
da...@fromorbit.com
___
Linux-nvdimm mailing list
Linux-nvdimm@lists.01.org
https://lists.01.org/mailman/listinfo/linux-nvdimm



Re: [PATCH 17/17] xfs: support for synchronous DAX faults

2017-10-24 Thread Ross Zwisler
On Tue, Oct 24, 2017 at 05:24:14PM +0200, Jan Kara wrote:
> From: Christoph Hellwig 
> 
> Return IOMAP_F_DIRTY from xfs_file_iomap_begin() when asked to prepare
> blocks for writing and the inode is pinned, and has dirty fields other
> than the timestamps.  In __xfs_filemap_fault() we then detect this case
> and call dax_finish_sync_fault() to make sure all metadata is committed,
> and to insert the page table entry.
> 
> Note that this will also dirty corresponding radix tree entry which is
> what we want - fsync(2) will still provide data integrity guarantees for
> applications not using userspace flushing. And applications using
> userspace flushing can avoid calling fsync(2) and thus avoid the
> performance overhead.
> 
> [JK: Added VM_SYNC flag handling]
> 
> Signed-off-by: Christoph Hellwig 
> Signed-off-by: Jan Kara 

I don't know enough about XFS dirty inode handling to be able to comment on
the changes in xfs_file_iomap_begin(), but the rest looks good.

Reviewed-by: Ross Zwisler 


Re: [PATCH 01/17] mm: introduce MAP_SHARED_VALIDATE, a mechanism to safely define new mmap flags

2017-10-24 Thread Ross Zwisler
On Tue, Oct 24, 2017 at 05:23:58PM +0200, Jan Kara wrote:
> From: Dan Williams 
> 
> The mmap(2) syscall suffers from the ABI anti-pattern of not validating
> unknown flags. However, proposals like MAP_SYNC need a mechanism to
> define new behavior that is known to fail on older kernels without the
> support. Define a new MAP_SHARED_VALIDATE flag pattern that is
> guaranteed to fail on all legacy mmap implementations.
> 
> It is worth noting that the original proposal was for a standalone
> MAP_VALIDATE flag. However, when that could not be supported by all
> archs, Linus observed:
> 
> I see why you *think* you want a bitmap. You think you want
> a bitmap because you want to make MAP_VALIDATE be part of MAP_SYNC
> etc, so that people can do
> 
> ret = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED
>   | MAP_SYNC, fd, 0);
> 
> and "know" that MAP_SYNC actually takes.
> 
> And I'm saying that whole wish is bogus. You're fundamentally
> depending on special semantics, just make it explicit. It's already
> not portable, so don't try to make it so.
> 
> Rename that MAP_VALIDATE as MAP_SHARED_VALIDATE, make it have a value
> of 0x3, and make people do
> 
> ret = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED_VALIDATE
>   | MAP_SYNC, fd, 0);
> 
> and then the kernel side is easier too (none of that random garbage
> playing games with looking at the "MAP_VALIDATE bit", but just another
> case statement in that map type thing.
> 
> Boom. Done.
> 
> Similar to ->fallocate() we also want the ability to validate the
> support for new flags on a per ->mmap() 'struct file_operations'
> instance basis.  Towards that end arrange for flags to be generically
> validated against a mmap_supported_flags exported by 'struct
> file_operations'. By default all existing flags are implicitly
> supported, but new flags require MAP_SHARED_VALIDATE and
> per-instance-opt-in.
> 
> Cc: Jan Kara 
> Cc: Arnd Bergmann 
> Cc: Andy Lutomirski 
> Cc: Andrew Morton 
> Suggested-by: Christoph Hellwig 
> Suggested-by: Linus Torvalds 
> Signed-off-by: Dan Williams 
> Signed-off-by: Jan Kara 

Looks great.

Reviewed-by: Ross Zwisler 


Re: [PATCH] mmap.2: Add description of MAP_SHARED_VALIDATE and MAP_SYNC

2017-10-24 Thread Ross Zwisler
On Tue, Oct 24, 2017 at 05:24:15PM +0200, Jan Kara wrote:
> Signed-off-by: Jan Kara 

This looks unchanged since the previous version?

> ---
>  man2/mmap.2 | 30 ++
>  1 file changed, 30 insertions(+)
> 
> diff --git a/man2/mmap.2 b/man2/mmap.2
> index 47c3148653be..598ff0c64f7f 100644
> --- a/man2/mmap.2
> +++ b/man2/mmap.2
> @@ -125,6 +125,21 @@ are carried through to the underlying file.
>  to the underlying file requires the use of
>  .BR msync (2).)
>  .TP
> +.B MAP_SHARED_VALIDATE
> +The same as
> +.B MAP_SHARED
> +except that
> +.B MAP_SHARED
> +mappings ignore unknown flags in
> +.IR flags .
> +In contrast, when creating a mapping of
> +.B MAP_SHARED_VALIDATE
> +mapping type, the kernel verifies all passed flags are known and fails the
> +mapping with
> +.BR EOPNOTSUPP
> +otherwise. This mapping type is also required to be able to use some mapping
> +flags.
> +.TP
>  .B MAP_PRIVATE
>  Create a private copy-on-write mapping.
>  Updates to the mapping are not visible to other processes
> @@ -352,6 +367,21 @@ option.
>  Because of the security implications,
>  that option is normally enabled only on embedded devices
>  (i.e., devices where one has complete control of the contents of user memory).
> +.TP
> +.BR MAP_SYNC " (since Linux 4.15)"
> +This flag is available only with
> +.B MAP_SHARED_VALIDATE
> +mapping type. Mappings of
> +.B MAP_SHARED
> +type will silently ignore this flag.
> +This flag is supported only for files supporting DAX (direct mapping of persistent
> +memory). For other files, creating mapping with this flag results in
> +.B EOPNOTSUPP
> +error. Shared file mappings with this flag provide the guarantee that while
> +some memory is writeably mapped in the address space of the process, it will
> +be visible in the same file at the same offset even after the system crashes or
> +is rebooted. This allows users of such mappings to make data modifications
> +persistent in a more efficient way using appropriate CPU instructions.
>  .PP
>  Of the above flags, only
>  .B MAP_FIXED
> -- 
> 2.12.3
> 


Re: [PATCH 2/2] dm log writes: add support for DAX

2017-10-24 Thread Ross Zwisler
On Tue, Oct 24, 2017 at 03:22:23PM -0400, Mike Snitzer wrote:
> On Fri, Oct 20 2017 at  1:24am -0400,
> Ross Zwisler  wrote:
> 
> > Now that we have the ability to log filesystem writes using a flat buffer, add
> > support for DAX.  Unfortunately we can't easily track data that has been
> > written via mmap() now that the dax_flush() abstraction was removed by this
> > commit:
> > 
> > commit c3ca015fab6d ("dax: remove the pmem_dax_ops->flush abstraction")
> > 
> > Otherwise we could just treat each flush as a big write, and store the data
> > that is being synced to media.  It may be worthwhile to add the dax_flush()
> > entry point back, just as a notifier so we can do this logging.
> > 
> > The motivation for this support is the need for an xfstest that can test
> > the new MAP_SYNC DAX flag.  By logging the filesystem activity with
> > dm-log-writes we can show that the MAP_SYNC page faults are writing out
> > their metadata as they happen, instead of requiring an explicit
> > msync/fsync.
> > 
> > Signed-off-by: Ross Zwisler 
> 
> I've picked this up, please see:
> https://git.kernel.org/pub/scm/linux/kernel/git/device-mapper/linux-dm.git/commit/?h=for-4.15/dm&id=ae613bbb0144e84cb3c0ebfa9f4fd4d1507c2f0e
> 
> I tweaked the header and tweaked a couple whitespace nits.  Also
> switched version bump from 1.0.1 to 1.1.0.
> 
> Thanks,
> Mike

Sure, your tweaks look fine.  Thanks!


Re: [PATCH 2/2] dm log writes: add support for DAX

2017-10-24 Thread Mike Snitzer
On Fri, Oct 20 2017 at  1:24am -0400,
Ross Zwisler  wrote:

> Now that we have the ability to log filesystem writes using a flat buffer, add
> support for DAX.  Unfortunately we can't easily track data that has been
> written via mmap() now that the dax_flush() abstraction was removed by this
> commit:
> 
> commit c3ca015fab6d ("dax: remove the pmem_dax_ops->flush abstraction")
> 
> Otherwise we could just treat each flush as a big write, and store the data
> that is being synced to media.  It may be worthwhile to add the dax_flush()
> entry point back, just as a notifier so we can do this logging.
> 
> The motivation for this support is the need for an xfstest that can test
> the new MAP_SYNC DAX flag.  By logging the filesystem activity with
> dm-log-writes we can show that the MAP_SYNC page faults are writing out
> their metadata as they happen, instead of requiring an explicit
> msync/fsync.
> 
> Signed-off-by: Ross Zwisler 

I've picked this up, please see:
https://git.kernel.org/pub/scm/linux/kernel/git/device-mapper/linux-dm.git/commit/?h=for-4.15/dm&id=ae613bbb0144e84cb3c0ebfa9f4fd4d1507c2f0e

I tweaked the header and tweaked a couple whitespace nits.  Also
switched version bump from 1.0.1 to 1.1.0.

Thanks,
Mike


Re: [PATCH 1/2] dm log writes: Add support for inline data buffers

2017-10-24 Thread Mike Snitzer
On Fri, Oct 20 2017 at  1:24am -0400,
Ross Zwisler  wrote:

> Currently dm-log-writes supports writing filesystem data via BIOs, and
> writing internal metadata from a flat buffer via write_metadata().
> 
> For DAX writes, though, we won't have a BIO, but will instead have an
> iterator that we'll want to use to fill a flat data buffer.
> 
> So, create write_inline_data() which allows us to write filesystem data
> using a flat buffer as a source, and wire it up in log_one_block().
> 
> Signed-off-by: Ross Zwisler 

Hi,

I picked this up but tweaked some whitespace and couple style nits, see:
https://git.kernel.org/pub/scm/linux/kernel/git/device-mapper/linux-dm.git/commit/?h=for-4.15/dm&id=6a697d036324c7fbe63fb49599027269006161e7

Thanks,
Mike


[PATCH 14/17] dax: Implement dax_finish_sync_fault()

2017-10-24 Thread Jan Kara
Implement a function that filesystems can call to finish handling of
synchronous page faults. It takes care of syncing the appropriate file
range and of inserting the page table entry.

Reviewed-by: Ross Zwisler 
Reviewed-by: Christoph Hellwig 
Signed-off-by: Jan Kara 
---
 fs/dax.c  | 83 +++
 include/linux/dax.h   |  2 ++
 include/trace/events/fs_dax.h |  2 ++
 3 files changed, 87 insertions(+)

diff --git a/fs/dax.c b/fs/dax.c
index bb9ff907738c..78233c716757 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -1492,3 +1492,86 @@ int dax_iomap_fault(struct vm_fault *vmf, enum page_entry_size pe_size,
}
 }
 EXPORT_SYMBOL_GPL(dax_iomap_fault);
+
+/**
+ * dax_insert_pfn_mkwrite - insert PTE or PMD entry into page tables
+ * @vmf: The description of the fault
+ * @pe_size: Size of entry to be inserted
+ * @pfn: PFN to insert
+ *
+ * This function inserts writeable PTE or PMD entry into page tables for mmaped
+ * DAX file.  It takes care of marking corresponding radix tree entry as dirty
+ * as well.
+ */
+static int dax_insert_pfn_mkwrite(struct vm_fault *vmf,
+ enum page_entry_size pe_size,
+ pfn_t pfn)
+{
+   struct address_space *mapping = vmf->vma->vm_file->f_mapping;
+   void *entry, **slot;
+   pgoff_t index = vmf->pgoff;
+   int vmf_ret, error;
+
+   spin_lock_irq(&mapping->tree_lock);
+   entry = get_unlocked_mapping_entry(mapping, index, &slot);
+   /* Did we race with someone splitting entry or so? */
+   if (!entry ||
+   (pe_size == PE_SIZE_PTE && !dax_is_pte_entry(entry)) ||
+   (pe_size == PE_SIZE_PMD && !dax_is_pmd_entry(entry))) {
+   put_unlocked_mapping_entry(mapping, index, entry);
+   spin_unlock_irq(&mapping->tree_lock);
+   trace_dax_insert_pfn_mkwrite_no_entry(mapping->host, vmf,
+ VM_FAULT_NOPAGE);
+   return VM_FAULT_NOPAGE;
+   }
+   radix_tree_tag_set(&mapping->page_tree, index, PAGECACHE_TAG_DIRTY);
+   entry = lock_slot(mapping, slot);
+   spin_unlock_irq(&mapping->tree_lock);
+   switch (pe_size) {
+   case PE_SIZE_PTE:
+   error = vm_insert_mixed_mkwrite(vmf->vma, vmf->address, pfn);
+   vmf_ret = dax_fault_return(error);
+   break;
+#ifdef CONFIG_FS_DAX_PMD
+   case PE_SIZE_PMD:
+   vmf_ret = vmf_insert_pfn_pmd(vmf->vma, vmf->address, vmf->pmd,
+   pfn, true);
+   break;
+#endif
+   default:
+   vmf_ret = VM_FAULT_FALLBACK;
+   }
+   put_locked_mapping_entry(mapping, index);
+   trace_dax_insert_pfn_mkwrite(mapping->host, vmf, vmf_ret);
+   return vmf_ret;
+}
+
+/**
+ * dax_finish_sync_fault - finish synchronous page fault
+ * @vmf: The description of the fault
+ * @pe_size: Size of entry to be inserted
+ * @pfn: PFN to insert
+ *
+ * This function ensures that the file range touched by the page fault is
+ * stored persistently on the media and handles inserting of appropriate page
+ * table entry.
+ */
+int dax_finish_sync_fault(struct vm_fault *vmf, enum page_entry_size pe_size,
+ pfn_t pfn)
+{
+   int err;
+   loff_t start = ((loff_t)vmf->pgoff) << PAGE_SHIFT;
+   size_t len = 0;
+
+   if (pe_size == PE_SIZE_PTE)
+   len = PAGE_SIZE;
+   else if (pe_size == PE_SIZE_PMD)
+   len = PMD_SIZE;
+   else
+   WARN_ON_ONCE(1);
+   err = vfs_fsync_range(vmf->vma->vm_file, start, start + len - 1, 1);
+   if (err)
+   return VM_FAULT_SIGBUS;
+   return dax_insert_pfn_mkwrite(vmf, pe_size, pfn);
+}
+EXPORT_SYMBOL_GPL(dax_finish_sync_fault);
diff --git a/include/linux/dax.h b/include/linux/dax.h
index e7fa4b8f45bc..d403f78b706c 100644
--- a/include/linux/dax.h
+++ b/include/linux/dax.h
@@ -96,6 +96,8 @@ ssize_t dax_iomap_rw(struct kiocb *iocb, struct iov_iter *iter,
const struct iomap_ops *ops);
 int dax_iomap_fault(struct vm_fault *vmf, enum page_entry_size pe_size,
pfn_t *pfnp, const struct iomap_ops *ops);
+int dax_finish_sync_fault(struct vm_fault *vmf, enum page_entry_size pe_size,
+ pfn_t pfn);
 int dax_delete_mapping_entry(struct address_space *mapping, pgoff_t index);
 int dax_invalidate_mapping_entry_sync(struct address_space *mapping,
  pgoff_t index);
diff --git a/include/trace/events/fs_dax.h b/include/trace/events/fs_dax.h
index 88a9d19b8ff8..7725459fafef 100644
--- a/include/trace/events/fs_dax.h
+++ b/include/trace/events/fs_dax.h
@@ -190,6 +190,8 @@ DEFINE_EVENT(dax_pte_fault_class, name, \
 DEFINE_PTE_FAULT_EVENT(dax_pte_fault);
 DEFINE_PTE_FAULT_EVENT(dax_pte_fault_done);
 DEFINE_PTE_FAULT_EVENT(dax_load_hole);

[PATCH 13/17] dax, iomap: Add support for synchronous faults

2017-10-24 Thread Jan Kara
Add a flag to the iomap interface informing the caller that the inode needs
fdatasync(2) for the returned extent to become persistent and use it in DAX
fault code so that we don't map such extents into page tables
immediately. Instead we propagate the information that fdatasync(2) is
necessary from dax_iomap_fault() with a new VM_FAULT_NEEDDSYNC flag.
Filesystem fault handler is then responsible for calling fdatasync(2)
and inserting pfn into page tables.

Reviewed-by: Ross Zwisler 
Reviewed-by: Christoph Hellwig 
Signed-off-by: Jan Kara 
---
 fs/dax.c  | 39 +--
 include/linux/iomap.h |  1 +
 include/linux/mm.h|  6 +-
 3 files changed, 43 insertions(+), 3 deletions(-)

diff --git a/fs/dax.c b/fs/dax.c
index efc210ff6665..bb9ff907738c 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -1091,6 +1091,7 @@ static int dax_iomap_pte_fault(struct vm_fault *vmf, pfn_t *pfnp,
unsigned flags = IOMAP_FAULT;
int error, major = 0;
bool write = vmf->flags & FAULT_FLAG_WRITE;
+   bool sync;
int vmf_ret = 0;
void *entry;
pfn_t pfn;
@@ -1169,6 +1170,8 @@ static int dax_iomap_pte_fault(struct vm_fault *vmf, pfn_t *pfnp,
goto finish_iomap;
}
 
+   sync = (vma->vm_flags & VM_SYNC) && (iomap.flags & IOMAP_F_DIRTY);
+
switch (iomap.type) {
case IOMAP_MAPPED:
if (iomap.flags & IOMAP_F_NEW) {
@@ -1182,12 +1185,27 @@ static int dax_iomap_pte_fault(struct vm_fault *vmf, pfn_t *pfnp,
 
entry = dax_insert_mapping_entry(mapping, vmf, entry,
 dax_iomap_sector(&iomap, pos),
-0, write);
+0, write && !sync);
if (IS_ERR(entry)) {
error = PTR_ERR(entry);
goto error_finish_iomap;
}
 
+   /*
+* If we are doing synchronous page fault and inode needs fsync,
+* we can insert PTE into page tables only after that happens.
+* Skip insertion for now and return the pfn so that caller can
+* insert it after fsync is done.
+*/
+   if (sync) {
+   if (WARN_ON_ONCE(!pfnp)) {
+   error = -EIO;
+   goto error_finish_iomap;
+   }
+   *pfnp = pfn;
+   vmf_ret = VM_FAULT_NEEDDSYNC | major;
+   goto finish_iomap;
+   }
trace_dax_insert_mapping(inode, vmf, entry);
if (write)
error = vm_insert_mixed_mkwrite(vma, vaddr, pfn);
@@ -1287,6 +1305,7 @@ static int dax_iomap_pmd_fault(struct vm_fault *vmf, pfn_t *pfnp,
struct address_space *mapping = vma->vm_file->f_mapping;
unsigned long pmd_addr = vmf->address & PMD_MASK;
bool write = vmf->flags & FAULT_FLAG_WRITE;
+   bool sync;
unsigned int iomap_flags = (write ? IOMAP_WRITE : 0) | IOMAP_FAULT;
struct inode *inode = mapping->host;
int result = VM_FAULT_FALLBACK;
@@ -1371,6 +1390,8 @@ static int dax_iomap_pmd_fault(struct vm_fault *vmf, pfn_t *pfnp,
if (iomap.offset + iomap.length < pos + PMD_SIZE)
goto finish_iomap;
 
+   sync = (vma->vm_flags & VM_SYNC) && (iomap.flags & IOMAP_F_DIRTY);
+
switch (iomap.type) {
case IOMAP_MAPPED:
error = dax_iomap_pfn(&iomap, pos, PMD_SIZE, &pfn);
@@ -1379,10 +1400,24 @@ static int dax_iomap_pmd_fault(struct vm_fault *vmf, pfn_t *pfnp,
 
entry = dax_insert_mapping_entry(mapping, vmf, entry,
dax_iomap_sector(&iomap, pos),
-   RADIX_DAX_PMD, write);
+   RADIX_DAX_PMD, write && !sync);
if (IS_ERR(entry))
goto finish_iomap;
 
+   /*
+* If we are doing synchronous page fault and inode needs fsync,
+* we can insert PMD into page tables only after that happens.
+* Skip insertion for now and return the pfn so that caller can
+* insert it after fsync is done.
+*/
+   if (sync) {
+   if (WARN_ON_ONCE(!pfnp))
+   goto finish_iomap;
+   *pfnp = pfn;
+   result = VM_FAULT_NEEDDSYNC;
+   goto finish_iomap;
+   }
+
trace_dax_pmd_insert_mapping(inode, vmf, PMD_SIZE, pfn, entry);
result = vmf_insert_pfn_pmd(vma, vmf->address, vmf->pmd, pfn,
write);
diff --git 

[PATCH 09/17] dax: Fix comment describing dax_iomap_fault()

2017-10-24 Thread Jan Kara
Add missing argument description.

Reviewed-by: Christoph Hellwig 
Reviewed-by: Ross Zwisler 
Signed-off-by: Jan Kara 
---
 fs/dax.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/fs/dax.c b/fs/dax.c
index 675fab8ec41f..5214ed9ba508 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -1435,7 +1435,8 @@ static int dax_iomap_pmd_fault(struct vm_fault *vmf,
 /**
  * dax_iomap_fault - handle a page fault on a DAX file
  * @vmf: The description of the fault
- * @ops: iomap ops passed from the file system
+ * @pe_size: Size of the page to fault in
+ * @ops: Iomap ops passed from the file system
  *
  * When a page fault occurs, filesystems may call this helper in
  * their fault handler for DAX files. dax_iomap_fault() assumes the caller
-- 
2.12.3



[PATCH 17/17] xfs: support for synchronous DAX faults

2017-10-24 Thread Jan Kara
From: Christoph Hellwig 

Return IOMAP_F_DIRTY from xfs_file_iomap_begin() when asked to prepare
blocks for writing and the inode is pinned, and has dirty fields other
than the timestamps.  In __xfs_filemap_fault() we then detect this case
and call dax_finish_sync_fault() to make sure all metadata is committed,
and to insert the page table entry.

Note that this will also dirty corresponding radix tree entry which is
what we want - fsync(2) will still provide data integrity guarantees for
applications not using userspace flushing. And applications using
userspace flushing can avoid calling fsync(2) and thus avoid the
performance overhead.

[JK: Added VM_SYNC flag handling]

Signed-off-by: Christoph Hellwig 
Signed-off-by: Jan Kara 
---
 fs/xfs/xfs_file.c  | 15 ++-
 fs/xfs/xfs_iomap.c |  5 +
 2 files changed, 19 insertions(+), 1 deletion(-)

diff --git a/fs/xfs/xfs_file.c b/fs/xfs/xfs_file.c
index 7c6b8def6eed..02093df4b314 100644
--- a/fs/xfs/xfs_file.c
+++ b/fs/xfs/xfs_file.c
@@ -44,6 +44,7 @@
 #include 
 #include 
 #include 
+#include <linux/mman.h>
 
 static const struct vm_operations_struct xfs_file_vm_ops;
 
@@ -1040,7 +1041,11 @@ __xfs_filemap_fault(
 
xfs_ilock(XFS_I(inode), XFS_MMAPLOCK_SHARED);
if (IS_DAX(inode)) {
-   ret = dax_iomap_fault(vmf, pe_size, NULL, &xfs_iomap_ops);
+   pfn_t pfn;
+
+   ret = dax_iomap_fault(vmf, pe_size, &pfn, &xfs_iomap_ops);
+   if (ret & VM_FAULT_NEEDDSYNC)
+   ret = dax_finish_sync_fault(vmf, pe_size, pfn);
} else {
if (write_fault)
ret = iomap_page_mkwrite(vmf, &xfs_iomap_ops);
@@ -1131,6 +1136,13 @@ xfs_file_mmap(
struct file *filp,
struct vm_area_struct *vma)
 {
+   /*
+* We don't support synchronous mappings for non-DAX files. At least
+* until someone comes with a sensible use case.
+*/
+   if (!IS_DAX(file_inode(filp)) && (vma->vm_flags & VM_SYNC))
+   return -EOPNOTSUPP;
+
file_accessed(filp);
vma->vm_ops = &xfs_file_vm_ops;
if (IS_DAX(file_inode(filp)))
@@ -1149,6 +1161,7 @@ const struct file_operations xfs_file_operations = {
.compat_ioctl   = xfs_file_compat_ioctl,
 #endif
.mmap   = xfs_file_mmap,
+   .mmap_supported_flags = MAP_SYNC,
.open   = xfs_file_open,
.release= xfs_file_release,
.fsync  = xfs_file_fsync,
diff --git a/fs/xfs/xfs_iomap.c b/fs/xfs/xfs_iomap.c
index f179bdf1644d..b43be199fbdf 100644
--- a/fs/xfs/xfs_iomap.c
+++ b/fs/xfs/xfs_iomap.c
@@ -33,6 +33,7 @@
 #include "xfs_error.h"
 #include "xfs_trans.h"
 #include "xfs_trans_space.h"
+#include "xfs_inode_item.h"
 #include "xfs_iomap.h"
 #include "xfs_trace.h"
 #include "xfs_icache.h"
@@ -1086,6 +1087,10 @@ xfs_file_iomap_begin(
trace_xfs_iomap_found(ip, offset, length, 0, &imap);
}
 
+   if ((flags & IOMAP_WRITE) && xfs_ipincount(ip) &&
+   (ip->i_itemp->ili_fsync_fields & ~XFS_ILOG_TIMESTAMP))
+   iomap->flags |= IOMAP_F_DIRTY;
+
xfs_bmbt_to_iomap(ip, iomap, &imap);
 
if (shared)
-- 
2.12.3



[PATCH] mmap.2: Add description of MAP_SHARED_VALIDATE and MAP_SYNC

2017-10-24 Thread Jan Kara
Signed-off-by: Jan Kara 
---
 man2/mmap.2 | 30 ++
 1 file changed, 30 insertions(+)

diff --git a/man2/mmap.2 b/man2/mmap.2
index 47c3148653be..598ff0c64f7f 100644
--- a/man2/mmap.2
+++ b/man2/mmap.2
@@ -125,6 +125,21 @@ are carried through to the underlying file.
 to the underlying file requires the use of
 .BR msync (2).)
 .TP
+.B MAP_SHARED_VALIDATE
+The same as
+.B MAP_SHARED
+except that
+.B MAP_SHARED
+mappings ignore unknown flags in
+.IR flags .
+In contrast, when creating a mapping of
+.B MAP_SHARED_VALIDATE
+mapping type, the kernel verifies all passed flags are known and fails the
+mapping with
+.BR EOPNOTSUPP
+otherwise. This mapping type is also required to be able to use some mapping
+flags.
+.TP
 .B MAP_PRIVATE
 Create a private copy-on-write mapping.
 Updates to the mapping are not visible to other processes
@@ -352,6 +367,21 @@ option.
 Because of the security implications,
 that option is normally enabled only on embedded devices
 (i.e., devices where one has complete control of the contents of user memory).
+.TP
+.BR MAP_SYNC " (since Linux 4.15)"
+This flag is available only with
+.B MAP_SHARED_VALIDATE
+mapping type. Mappings of
+.B MAP_SHARED
+type will silently ignore this flag.
+This flag is supported only for files supporting DAX (direct mapping of persistent
+memory). For other files, creating mapping with this flag results in
+.B EOPNOTSUPP
+error. Shared file mappings with this flag provide the guarantee that while
+some memory is writeably mapped in the address space of the process, it will
+be visible in the same file at the same offset even after the system crashes or
+is rebooted. This allows users of such mappings to make data modifications
+persistent in a more efficient way using appropriate CPU instructions.
 .PP
 Of the above flags, only
 .B MAP_FIXED
-- 
2.12.3



[PATCH 15/17] ext4: Simplify error handling in ext4_dax_huge_fault()

2017-10-24 Thread Jan Kara
If transaction starting fails, just bail out of the function immediately
instead of checking for that condition throughout the function.

Reviewed-by: Ross Zwisler 
Signed-off-by: Jan Kara 
---
 fs/ext4/file.c | 13 +++--
 1 file changed, 7 insertions(+), 6 deletions(-)

diff --git a/fs/ext4/file.c b/fs/ext4/file.c
index 3cec0b95672f..208adfc3e673 100644
--- a/fs/ext4/file.c
+++ b/fs/ext4/file.c
@@ -302,16 +302,17 @@ static int ext4_dax_huge_fault(struct vm_fault *vmf,
down_read(&EXT4_I(inode)->i_mmap_sem);
handle = ext4_journal_start_sb(sb, EXT4_HT_WRITE_PAGE,
   EXT4_DATA_TRANS_BLOCKS(sb));
+   if (IS_ERR(handle)) {
+   up_read(&EXT4_I(inode)->i_mmap_sem);
+   sb_end_pagefault(sb);
+   return VM_FAULT_SIGBUS;
+   }
} else {
down_read(&EXT4_I(inode)->i_mmap_sem);
}
-   if (!IS_ERR(handle))
-   result = dax_iomap_fault(vmf, pe_size, NULL, &ext4_iomap_ops);
-   else
-   result = VM_FAULT_SIGBUS;
result = dax_iomap_fault(vmf, pe_size, NULL, &ext4_iomap_ops);
if (write) {
-   if (!IS_ERR(handle))
-   ext4_journal_stop(handle);
+   ext4_journal_stop(handle);
up_read(&EXT4_I(inode)->i_mmap_sem);
sb_end_pagefault(sb);
} else {
-- 
2.12.3



[PATCH 06/17] dax: Create local variable for vmf->flags & FAULT_FLAG_WRITE test

2017-10-24 Thread Jan Kara
There are already two users and more are coming.

Reviewed-by: Christoph Hellwig 
Reviewed-by: Ross Zwisler 
Signed-off-by: Jan Kara 
---
 fs/dax.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/fs/dax.c b/fs/dax.c
index c09465884bbe..5ea71381dba0 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -1116,6 +1116,7 @@ static int dax_iomap_pte_fault(struct vm_fault *vmf,
struct iomap iomap = { 0 };
unsigned flags = IOMAP_FAULT;
int error, major = 0;
+   bool write = vmf->flags & FAULT_FLAG_WRITE;
int vmf_ret = 0;
void *entry;
 
@@ -1130,7 +1131,7 @@ static int dax_iomap_pte_fault(struct vm_fault *vmf,
goto out;
}
 
-   if ((vmf->flags & FAULT_FLAG_WRITE) && !vmf->cow_page)
+   if (write && !vmf->cow_page)
flags |= IOMAP_WRITE;
 
entry = grab_mapping_entry(mapping, vmf->pgoff, 0);
@@ -1207,7 +1208,7 @@ static int dax_iomap_pte_fault(struct vm_fault *vmf,
break;
case IOMAP_UNWRITTEN:
case IOMAP_HOLE:
-   if (!(vmf->flags & FAULT_FLAG_WRITE)) {
+   if (!write) {
vmf_ret = dax_load_hole(mapping, entry, vmf);
goto finish_iomap;
}
-- 
2.12.3



[PATCH 0/17 v5] dax, ext4, xfs: Synchronous page faults

2017-10-24 Thread Jan Kara
Hello,

here is the fifth version of my patches to implement synchronous page faults
for DAX mappings to make flushing of DAX mappings possible from userspace so
that they can be flushed on finer than page granularity and also avoid the
overhead of a syscall.

We use a new mmap flag MAP_SYNC to indicate that page faults for the mapping
should be synchronous.  The guarantee provided by this flag is: While a block
is writeably mapped into page tables of this mapping, it is guaranteed to be
visible in the file at that offset also after a crash.

How I implement this is that ->iomap_begin() indicates by a flag that inode
block mapping metadata is unstable and may need flushing (use the same test as
whether fdatasync() has metadata to write). If yes, the DAX fault handler refrains
from inserting / write-enabling the page table entry and returns special flag
VM_FAULT_NEEDDSYNC together with a PFN to map to the filesystem fault handler.
The handler then calls fdatasync() (vfs_fsync_range()) for the affected range
and after that calls DAX code to update the page table entry appropriately.

I did some basic performance testing on the patches over ramdisk - timed
latency of page faults when faulting 512 pages. I did several tests: with file
preallocated / with file empty, with background file copying going on / without
it, with / without MAP_SYNC (so that we get comparison).  The results are
(numbers are in microseconds):

File preallocated, no background load no MAP_SYNC:
min=9 avg=10 max=46
8 - 15 us: 508
16 - 31 us: 3
32 - 63 us: 1

File preallocated, no background load, MAP_SYNC:
min=9 avg=10 max=47
8 - 15 us: 508
16 - 31 us: 2
32 - 63 us: 2

File empty, no background load, no MAP_SYNC:
min=21 avg=22 max=70
16 - 31 us: 506
32 - 63 us: 5
64 - 127 us: 1

File empty, no background load, MAP_SYNC:
min=40 avg=124 max=242
32 - 63 us: 1
64 - 127 us: 333
128 - 255 us: 178

File empty, background load, no MAP_SYNC:
min=21 avg=23 max=67
16 - 31 us: 507
32 - 63 us: 4
64 - 127 us: 1

File empty, background load, MAP_SYNC:
min=94 avg=112 max=181
64 - 127 us: 489
128 - 255 us: 23

So here we can see the difference between MAP_SYNC vs non MAP_SYNC is about
100-200 us when we need to wait for transaction commit in this setup. 

Anyway, here are the patches and since Ross already posted his patches to test
the functionality, I think we are ready to get this merged. I've talked with
Dan and he said he could take the patches through his tree, I'd just like to
get a final ack from Christoph on the patch modifying mmap(2).  Comments are
welcome.

Changes since v4:
* fixed couple of minor things in the manpage
* make legacy mmap flags always supported, remove them from mask declared
  to be supported by ext4 and xfs

Changes since v3:
* updated some changelogs
* folded fs support for VM_SYNC flag into patches implementing the
  functionality
* removed ->mmap_validate, use ->mmap_supported_flags instead
* added some Reviewed-by tags
* added manpage patch

Changes since v2:
* avoid unnecessary flushing of faulted page (Ross) - I've realized it makes no
  sense to remeasure my benchmark results (after actually doing that and seeing
  no difference, sigh) since I use ramdisk and not real PMEM HW and so flushes
  are ignored.
* handle nojournal mode of ext4
* other smaller cleanups & fixes (Ross)
* factor larger part of finishing of synchronous fault into a helper (Christoph)
* reorder pfnp argument of dax_iomap_fault() (Christoph)
* add XFS support from Christoph
* use proper MAP_SYNC support in mmap(2)
* rebased on top of 4.14-rc4

Changes since v1:
* switched to using mmap flag MAP_SYNC
* cleaned up fault handlers to avoid passing pfn in vmf->orig_pte
* switched to not touching page tables before we are ready to insert final
  entry as it was unnecessary and not really simplifying anything
* renamed fault flag to VM_FAULT_NEEDDSYNC
* other smaller fixes found by reviewers

Honza
___
Linux-nvdimm mailing list
Linux-nvdimm@lists.01.org
https://lists.01.org/mailman/listinfo/linux-nvdimm


[PATCH 07/17] dax: Inline dax_insert_mapping() into the callsite

2017-10-24 Thread Jan Kara
dax_insert_mapping() has only one callsite and we will need to further
fine tune what it does for synchronous faults. Just inline it into the
callsite so that we don't have to pass awkward bools around.

Reviewed-by: Christoph Hellwig 
Reviewed-by: Ross Zwisler 
Signed-off-by: Jan Kara 
---
 fs/dax.c | 46 +++---
 1 file changed, 19 insertions(+), 27 deletions(-)

diff --git a/fs/dax.c b/fs/dax.c
index 5ea71381dba0..5b20c6456926 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -858,32 +858,6 @@ static int dax_iomap_pfn(struct iomap *iomap, loff_t pos, size_t size,
return rc;
 }
 
-static int dax_insert_mapping(struct vm_fault *vmf, struct iomap *iomap,
- loff_t pos, void *entry)
-{
-   const sector_t sector = dax_iomap_sector(iomap, pos);
-   struct vm_area_struct *vma = vmf->vma;
-   struct address_space *mapping = vma->vm_file->f_mapping;
-   unsigned long vaddr = vmf->address;
-   void *ret;
-   int rc;
-   pfn_t pfn;
-
-   rc = dax_iomap_pfn(iomap, pos, PAGE_SIZE, &pfn);
-   if (rc < 0)
-   return rc;
-
-   ret = dax_insert_mapping_entry(mapping, vmf, entry, sector, 0);
-   if (IS_ERR(ret))
-   return PTR_ERR(ret);
-
-   trace_dax_insert_mapping(mapping->host, vmf, ret);
-   if (vmf->flags & FAULT_FLAG_WRITE)
-   return vm_insert_mixed_mkwrite(vma, vaddr, pfn);
-   else
-   return vm_insert_mixed(vma, vaddr, pfn);
-}
-
 /*
  * The user has performed a load from a hole in the file.  Allocating a new
  * page in the file would cause excessive storage usage for workloads with
@@ -1119,6 +1093,7 @@ static int dax_iomap_pte_fault(struct vm_fault *vmf,
bool write = vmf->flags & FAULT_FLAG_WRITE;
int vmf_ret = 0;
void *entry;
+   pfn_t pfn;
 
trace_dax_pte_fault(inode, vmf, vmf_ret);
/*
@@ -1201,7 +1176,24 @@ static int dax_iomap_pte_fault(struct vm_fault *vmf,
count_memcg_event_mm(vma->vm_mm, PGMAJFAULT);
major = VM_FAULT_MAJOR;
}
-   error = dax_insert_mapping(vmf, &iomap, pos, entry);
+   error = dax_iomap_pfn(&iomap, pos, PAGE_SIZE, &pfn);
+   if (error < 0)
+   goto error_finish_iomap;
+
+   entry = dax_insert_mapping_entry(mapping, vmf, entry,
+dax_iomap_sector(&iomap, pos),
+0);
+   if (IS_ERR(entry)) {
+   error = PTR_ERR(entry);
+   goto error_finish_iomap;
+   }
+
+   trace_dax_insert_mapping(inode, vmf, entry);
+   if (write)
+   error = vm_insert_mixed_mkwrite(vma, vaddr, pfn);
+   else
+   error = vm_insert_mixed(vma, vaddr, pfn);
+
/* -EBUSY is fine, somebody else faulted on the same PTE */
if (error == -EBUSY)
error = 0;
-- 
2.12.3



[PATCH 08/17] dax: Inline dax_pmd_insert_mapping() into the callsite

2017-10-24 Thread Jan Kara
dax_pmd_insert_mapping() has only one callsite and we will need to
further fine tune what it does for synchronous faults. Just inline it
into the callsite so that we don't have to pass awkward bools around.

Reviewed-by: Christoph Hellwig 
Reviewed-by: Ross Zwisler 
Signed-off-by: Jan Kara 
---
 fs/dax.c  | 47 +--
 include/trace/events/fs_dax.h |  1 -
 2 files changed, 19 insertions(+), 29 deletions(-)

diff --git a/fs/dax.c b/fs/dax.c
index 5b20c6456926..675fab8ec41f 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -1235,33 +1235,11 @@ static int dax_iomap_pte_fault(struct vm_fault *vmf,
 }
 
 #ifdef CONFIG_FS_DAX_PMD
-static int dax_pmd_insert_mapping(struct vm_fault *vmf, struct iomap *iomap,
-   loff_t pos, void *entry)
-{
-   struct address_space *mapping = vmf->vma->vm_file->f_mapping;
-   const sector_t sector = dax_iomap_sector(iomap, pos);
-   struct inode *inode = mapping->host;
-   void *ret = NULL;
-   pfn_t pfn = {};
-   int rc;
-
-   rc = dax_iomap_pfn(iomap, pos, PMD_SIZE, &pfn);
-   if (rc < 0)
-   goto fallback;
-
-   ret = dax_insert_mapping_entry(mapping, vmf, entry, sector,
-   RADIX_DAX_PMD);
-   if (IS_ERR(ret))
-   goto fallback;
-
-   trace_dax_pmd_insert_mapping(inode, vmf, PMD_SIZE, pfn, ret);
-   return vmf_insert_pfn_pmd(vmf->vma, vmf->address, vmf->pmd,
-   pfn, vmf->flags & FAULT_FLAG_WRITE);
-
-fallback:
-   trace_dax_pmd_insert_mapping_fallback(inode, vmf, PMD_SIZE, pfn, ret);
-   return VM_FAULT_FALLBACK;
-}
+/*
+ * The 'colour' (ie low bits) within a PMD of a page offset.  This comes up
+ * more often than one might expect in the below functions.
+ */
+#define PG_PMD_COLOUR  ((PMD_SIZE >> PAGE_SHIFT) - 1)
 
 static int dax_pmd_load_hole(struct vm_fault *vmf, struct iomap *iomap,
void *entry)
@@ -1317,6 +1295,7 @@ static int dax_iomap_pmd_fault(struct vm_fault *vmf,
void *entry;
loff_t pos;
int error;
+   pfn_t pfn;
 
/*
 * Check whether offset isn't beyond end of file now. Caller is
@@ -1394,7 +1373,19 @@ static int dax_iomap_pmd_fault(struct vm_fault *vmf,
 
switch (iomap.type) {
case IOMAP_MAPPED:
-   result = dax_pmd_insert_mapping(vmf, &iomap, pos, entry);
+   error = dax_iomap_pfn(&iomap, pos, PMD_SIZE, &pfn);
+   if (error < 0)
+   goto finish_iomap;
+
+   entry = dax_insert_mapping_entry(mapping, vmf, entry,
+   dax_iomap_sector(&iomap, pos),
+   RADIX_DAX_PMD);
+   if (IS_ERR(entry))
+   goto finish_iomap;
+
+   trace_dax_pmd_insert_mapping(inode, vmf, PMD_SIZE, pfn, entry);
+   result = vmf_insert_pfn_pmd(vma, vmf->address, vmf->pmd, pfn,
+   write);
break;
case IOMAP_UNWRITTEN:
case IOMAP_HOLE:
diff --git a/include/trace/events/fs_dax.h b/include/trace/events/fs_dax.h
index fbc4a06f7310..88a9d19b8ff8 100644
--- a/include/trace/events/fs_dax.h
+++ b/include/trace/events/fs_dax.h
@@ -148,7 +148,6 @@ DEFINE_EVENT(dax_pmd_insert_mapping_class, name, \
TP_ARGS(inode, vmf, length, pfn, radix_entry))
 
 DEFINE_PMD_INSERT_MAPPING_EVENT(dax_pmd_insert_mapping);
-DEFINE_PMD_INSERT_MAPPING_EVENT(dax_pmd_insert_mapping_fallback);
 
 DECLARE_EVENT_CLASS(dax_pte_fault_class,
TP_PROTO(struct inode *inode, struct vm_fault *vmf, int result),
-- 
2.12.3



[PATCH 12/17] mm: Define MAP_SYNC and VM_SYNC flags

2017-10-24 Thread Jan Kara
Define new MAP_SYNC flag and corresponding VMA VM_SYNC flag. As the
MAP_SYNC flag is not part of LEGACY_MAP_MASK, currently it will be
refused by all MAP_SHARED_VALIDATE map attempts and silently ignored for
everything else.

Reviewed-by: Ross Zwisler 
Reviewed-by: Christoph Hellwig 
Signed-off-by: Jan Kara 
---
 fs/proc/task_mmu.c  | 1 +
 include/linux/mm.h  | 1 +
 include/linux/mman.h| 8 ++--
 include/uapi/asm-generic/mman.h | 1 +
 4 files changed, 9 insertions(+), 2 deletions(-)

diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
index 5589b4bd4b85..ea78b37deeaa 100644
--- a/fs/proc/task_mmu.c
+++ b/fs/proc/task_mmu.c
@@ -664,6 +664,7 @@ static void show_smap_vma_flags(struct seq_file *m, struct vm_area_struct *vma)
[ilog2(VM_ACCOUNT)] = "ac",
[ilog2(VM_NORESERVE)]   = "nr",
[ilog2(VM_HUGETLB)] = "ht",
+   [ilog2(VM_SYNC)]= "sf",
[ilog2(VM_ARCH_1)]  = "ar",
[ilog2(VM_WIPEONFORK)]  = "wf",
[ilog2(VM_DONTDUMP)]= "dd",
diff --git a/include/linux/mm.h b/include/linux/mm.h
index ca72b67153d5..5411cb7442de 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -189,6 +189,7 @@ extern unsigned int kobjsize(const void *objp);
#define VM_ACCOUNT 0x00100000  /* Is a VM accounted object */
#define VM_NORESERVE   0x00200000  /* should the VM suppress accounting */
#define VM_HUGETLB 0x00400000  /* Huge TLB Page VM */
+#define VM_SYNC        0x00800000  /* Synchronous page faults */
#define VM_ARCH_1  0x01000000  /* Architecture-specific flag */
#define VM_WIPEONFORK  0x02000000  /* Wipe VMA contents in child. */
#define VM_DONTDUMP    0x04000000  /* Do not include in the core dump */
diff --git a/include/linux/mman.h b/include/linux/mman.h
index 94b63b4d71ff..8f7cc87828e6 100644
--- a/include/linux/mman.h
+++ b/include/linux/mman.h
@@ -9,7 +9,7 @@
 
 /*
  * Arrange for legacy / undefined architecture specific flags to be
- * ignored by default in LEGACY_MAP_MASK.
+ * ignored by mmap handling code.
  */
 #ifndef MAP_32BIT
 #define MAP_32BIT 0
@@ -23,6 +23,9 @@
 #ifndef MAP_UNINITIALIZED
 #define MAP_UNINITIALIZED 0
 #endif
+#ifndef MAP_SYNC
+#define MAP_SYNC 0
+#endif
 
 /*
  * The historical set of flags that all mmap implementations implicitly
@@ -125,7 +128,8 @@ calc_vm_flag_bits(unsigned long flags)
 {
return _calc_vm_trans(flags, MAP_GROWSDOWN,  VM_GROWSDOWN ) |
   _calc_vm_trans(flags, MAP_DENYWRITE,  VM_DENYWRITE ) |
-  _calc_vm_trans(flags, MAP_LOCKED, VM_LOCKED);
+  _calc_vm_trans(flags, MAP_LOCKED, VM_LOCKED) |
+  _calc_vm_trans(flags, MAP_SYNC,   VM_SYNC  );
 }
 
 unsigned long vm_commit_limit(void);
diff --git a/include/uapi/asm-generic/mman.h b/include/uapi/asm-generic/mman.h
index 7162cd4cca73..00e55627d2df 100644
--- a/include/uapi/asm-generic/mman.h
+++ b/include/uapi/asm-generic/mman.h
@@ -12,6 +12,7 @@
#define MAP_NONBLOCK   0x10000 /* do not block on IO */
#define MAP_STACK  0x20000 /* give out an address that is best suited for process/thread stacks */
#define MAP_HUGETLB    0x40000 /* create a huge page mapping */
+#define MAP_SYNC   0x80000 /* perform synchronous page faults for the mapping */
 
 /* Bits [26:31] are reserved, see mman-common.h for MAP_HUGETLB usage */
 
-- 
2.12.3



[PATCH 02/17] mm: Remove VM_FAULT_HWPOISON_LARGE_MASK

2017-10-24 Thread Jan Kara
It is unused.

Reviewed-by: Ross Zwisler 
Reviewed-by: Christoph Hellwig 
Signed-off-by: Jan Kara 
---
 include/linux/mm.h | 2 --
 1 file changed, 2 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 065d99deb847..ca72b67153d5 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1182,8 +1182,6 @@ static inline void clear_page_pfmemalloc(struct page *page)
#define VM_FAULT_FALLBACK 0x0800   /* huge page fault failed, fall back to small */
#define VM_FAULT_DONE_COW   0x1000 /* ->fault has fully handled COW */

-#define VM_FAULT_HWPOISON_LARGE_MASK 0xf000 /* encodes hpage index for large hwpoison */
-
 #define VM_FAULT_ERROR (VM_FAULT_OOM | VM_FAULT_SIGBUS | VM_FAULT_SIGSEGV | \
 VM_FAULT_HWPOISON | VM_FAULT_HWPOISON_LARGE | \
 VM_FAULT_FALLBACK)
-- 
2.12.3



[PATCH 03/17] dax: Simplify arguments of dax_insert_mapping()

2017-10-24 Thread Jan Kara
dax_insert_mapping() has lots of arguments and many of them are actually
duplicated by passing the vm_fault structure as well. Change the function to
take the same arguments as dax_pmd_insert_mapping().

Reviewed-by: Ross Zwisler 
Reviewed-by: Christoph Hellwig 
Signed-off-by: Jan Kara 
---
 fs/dax.c | 32 
 1 file changed, 16 insertions(+), 16 deletions(-)

diff --git a/fs/dax.c b/fs/dax.c
index f001d8c72a06..0bc42ac294ca 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -820,23 +820,30 @@ int dax_writeback_mapping_range(struct address_space *mapping,
 }
 EXPORT_SYMBOL_GPL(dax_writeback_mapping_range);
 
-static int dax_insert_mapping(struct address_space *mapping,
-   struct block_device *bdev, struct dax_device *dax_dev,
-   sector_t sector, size_t size, void *entry,
-   struct vm_area_struct *vma, struct vm_fault *vmf)
+static sector_t dax_iomap_sector(struct iomap *iomap, loff_t pos)
+{
+   return iomap->blkno + (((pos & PAGE_MASK) - iomap->offset) >> 9);
+}
+
+static int dax_insert_mapping(struct vm_fault *vmf, struct iomap *iomap,
+ loff_t pos, void *entry)
 {
+   const sector_t sector = dax_iomap_sector(iomap, pos);
+   struct vm_area_struct *vma = vmf->vma;
+   struct address_space *mapping = vma->vm_file->f_mapping;
unsigned long vaddr = vmf->address;
void *ret, *kaddr;
pgoff_t pgoff;
int id, rc;
pfn_t pfn;
 
-   rc = bdev_dax_pgoff(bdev, sector, size, &pgoff);
+   rc = bdev_dax_pgoff(iomap->bdev, sector, PAGE_SIZE, &pgoff);
if (rc)
return rc;
 
id = dax_read_lock();
-   rc = dax_direct_access(dax_dev, pgoff, PHYS_PFN(size), &kaddr, &pfn);
+   rc = dax_direct_access(iomap->dax_dev, pgoff, PHYS_PFN(PAGE_SIZE),
+  &kaddr, &pfn);
if (rc < 0) {
dax_read_unlock(id);
return rc;
@@ -936,11 +943,6 @@ int __dax_zero_page_range(struct block_device *bdev,
 }
 EXPORT_SYMBOL_GPL(__dax_zero_page_range);
 
-static sector_t dax_iomap_sector(struct iomap *iomap, loff_t pos)
-{
-   return iomap->blkno + (((pos & PAGE_MASK) - iomap->offset) >> 9);
-}
-
 static loff_t
 dax_iomap_actor(struct inode *inode, loff_t pos, loff_t length, void *data,
struct iomap *iomap)
@@ -1087,7 +1089,6 @@ static int dax_iomap_pte_fault(struct vm_fault *vmf,
struct inode *inode = mapping->host;
unsigned long vaddr = vmf->address;
loff_t pos = (loff_t)vmf->pgoff << PAGE_SHIFT;
-   sector_t sector;
struct iomap iomap = { 0 };
unsigned flags = IOMAP_FAULT;
int error, major = 0;
@@ -1140,9 +1141,9 @@ static int dax_iomap_pte_fault(struct vm_fault *vmf,
goto error_finish_iomap;
}
 
-   sector = dax_iomap_sector(&iomap, pos);
-
if (vmf->cow_page) {
+   sector_t sector = dax_iomap_sector(&iomap, pos);
+
switch (iomap.type) {
case IOMAP_HOLE:
case IOMAP_UNWRITTEN:
@@ -1175,8 +1176,7 @@ static int dax_iomap_pte_fault(struct vm_fault *vmf,
count_memcg_event_mm(vmf->vma->vm_mm, PGMAJFAULT);
major = VM_FAULT_MAJOR;
}
-   error = dax_insert_mapping(mapping, iomap.bdev, iomap.dax_dev,
-   sector, PAGE_SIZE, entry, vmf->vma, vmf);
+   error = dax_insert_mapping(vmf, &iomap, pos, entry);
/* -EBUSY is fine, somebody else faulted on the same PTE */
if (error == -EBUSY)
error = 0;
-- 
2.12.3



[PATCH 01/17] mm: introduce MAP_SHARED_VALIDATE, a mechanism to safely define new mmap flags

2017-10-24 Thread Jan Kara
From: Dan Williams 

The mmap(2) syscall suffers from the ABI anti-pattern of not validating
unknown flags. However, proposals like MAP_SYNC need a mechanism to
define new behavior that is known to fail on older kernels without the
support. Define a new MAP_SHARED_VALIDATE flag pattern that is
guaranteed to fail on all legacy mmap implementations.

It is worth noting that the original proposal was for a standalone
MAP_VALIDATE flag. However, when that could not be supported by all
archs Linus observed:

I see why you *think* you want a bitmap. You think you want
a bitmap because you want to make MAP_VALIDATE be part of MAP_SYNC
etc, so that people can do

ret = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED
| MAP_SYNC, fd, 0);

and "know" that MAP_SYNC actually takes.

And I'm saying that whole wish is bogus. You're fundamentally
depending on special semantics, just make it explicit. It's already
not portable, so don't try to make it so.

Rename that MAP_VALIDATE as MAP_SHARED_VALIDATE, make it have a value
of 0x3, and make people do

ret = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED_VALIDATE
| MAP_SYNC, fd, 0);

and then the kernel side is easier too (none of that random garbage
playing games with looking at the "MAP_VALIDATE bit", but just another
case statement in that map type thing.

Boom. Done.

Similar to ->fallocate() we also want the ability to validate the
support for new flags on a per ->mmap() 'struct file_operations'
instance basis.  Towards that end arrange for flags to be generically
validated against a mmap_supported_flags exported by 'struct
file_operations'. By default all existing flags are implicitly
supported, but new flags require MAP_SHARED_VALIDATE and
per-instance-opt-in.

Cc: Jan Kara 
Cc: Arnd Bergmann 
Cc: Andy Lutomirski 
Cc: Andrew Morton 
Suggested-by: Christoph Hellwig 
Suggested-by: Linus Torvalds 
Signed-off-by: Dan Williams 
Signed-off-by: Jan Kara 
---
 arch/alpha/include/uapi/asm/mman.h   |  1 +
 arch/mips/include/uapi/asm/mman.h|  1 +
 arch/parisc/include/uapi/asm/mman.h  |  1 +
 arch/xtensa/include/uapi/asm/mman.h  |  1 +
 include/linux/fs.h   |  1 +
 include/linux/mman.h | 39 
 include/uapi/asm-generic/mman-common.h   |  1 +
 mm/mmap.c| 15 +++
 tools/include/uapi/asm-generic/mman-common.h |  1 +
 9 files changed, 61 insertions(+)

diff --git a/arch/alpha/include/uapi/asm/mman.h b/arch/alpha/include/uapi/asm/mman.h
index 3b26cc62dadb..f6d118aaedb9 100644
--- a/arch/alpha/include/uapi/asm/mman.h
+++ b/arch/alpha/include/uapi/asm/mman.h
@@ -11,6 +11,7 @@
 
 #define MAP_SHARED 0x01/* Share changes */
 #define MAP_PRIVATE0x02/* Changes are private */
+#define MAP_SHARED_VALIDATE 0x03   /* share + validate extension flags */
 #define MAP_TYPE   0x0f/* Mask for type of mapping (OSF/1 is 
_wrong_) */
 #define MAP_FIXED  0x100   /* Interpret addr exactly */
 #define MAP_ANONYMOUS  0x10/* don't use a file */
diff --git a/arch/mips/include/uapi/asm/mman.h b/arch/mips/include/uapi/asm/mman.h
index da3216007fe0..93268e4cd3c7 100644
--- a/arch/mips/include/uapi/asm/mman.h
+++ b/arch/mips/include/uapi/asm/mman.h
@@ -28,6 +28,7 @@
  */
 #define MAP_SHARED 0x001   /* Share changes */
 #define MAP_PRIVATE0x002   /* Changes are private */
+#define MAP_SHARED_VALIDATE 0x003  /* share + validate extension flags */
 #define MAP_TYPE   0x00f   /* Mask for type of mapping */
 #define MAP_FIXED  0x010   /* Interpret addr exactly */
 
diff --git a/arch/parisc/include/uapi/asm/mman.h b/arch/parisc/include/uapi/asm/mman.h
index 775b5d5e41a1..bca652aa1677 100644
--- a/arch/parisc/include/uapi/asm/mman.h
+++ b/arch/parisc/include/uapi/asm/mman.h
@@ -11,6 +11,7 @@
 
 #define MAP_SHARED 0x01/* Share changes */
 #define MAP_PRIVATE0x02/* Changes are private */
+#define MAP_SHARED_VALIDATE 0x03   /* share + validate extension flags */
 #define MAP_TYPE   0x03/* Mask for type of mapping */
 #define MAP_FIXED  0x04/* Interpret addr exactly */
 #define MAP_ANONYMOUS  0x10/* don't use a file */
diff --git a/arch/xtensa/include/uapi/asm/mman.h b/arch/xtensa/include/uapi/asm/mman.h
index b15b278aa314..9ab426374714 100644
--- a/arch/xtensa/include/uapi/asm/mman.h
+++ b/arch/xtensa/include/uapi/asm/mman.h
@@ -35,6 +35,7 @@
  */
 #define MAP_SHARED 0x001   /* Share changes */
 #define MAP_PRIVATE0x002   /* Changes are private */

[PATCH 04/17] dax: Factor out getting of pfn out of iomap

2017-10-24 Thread Jan Kara
Factor out code to get pfn out of iomap that is shared between PTE and
PMD fault path.

Reviewed-by: Christoph Hellwig 
Reviewed-by: Ross Zwisler 
Signed-off-by: Jan Kara 
---
 fs/dax.c | 83 +---
 1 file changed, 43 insertions(+), 40 deletions(-)

diff --git a/fs/dax.c b/fs/dax.c
index 0bc42ac294ca..116eef8d6c69 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -825,30 +825,53 @@ static sector_t dax_iomap_sector(struct iomap *iomap, loff_t pos)
return iomap->blkno + (((pos & PAGE_MASK) - iomap->offset) >> 9);
 }
 
-static int dax_insert_mapping(struct vm_fault *vmf, struct iomap *iomap,
- loff_t pos, void *entry)
+static int dax_iomap_pfn(struct iomap *iomap, loff_t pos, size_t size,
+pfn_t *pfnp)
 {
const sector_t sector = dax_iomap_sector(iomap, pos);
-   struct vm_area_struct *vma = vmf->vma;
-   struct address_space *mapping = vma->vm_file->f_mapping;
-   unsigned long vaddr = vmf->address;
-   void *ret, *kaddr;
pgoff_t pgoff;
+   void *kaddr;
int id, rc;
-   pfn_t pfn;
+   long length;
 
-   rc = bdev_dax_pgoff(iomap->bdev, sector, PAGE_SIZE, &pgoff);
+   rc = bdev_dax_pgoff(iomap->bdev, sector, size, &pgoff);
if (rc)
return rc;
-
id = dax_read_lock();
-   rc = dax_direct_access(iomap->dax_dev, pgoff, PHYS_PFN(PAGE_SIZE),
-  &kaddr, &pfn);
-   if (rc < 0) {
-   dax_read_unlock(id);
-   return rc;
+   length = dax_direct_access(iomap->dax_dev, pgoff, PHYS_PFN(size),
+  &kaddr, pfnp);
+   if (length < 0) {
+   rc = length;
+   goto out;
}
+   rc = -EINVAL;
+   if (PFN_PHYS(length) < size)
+   goto out;
+   if (pfn_t_to_pfn(*pfnp) & (PHYS_PFN(size)-1))
+   goto out;
+   /* For larger pages we need devmap */
+   if (length > 1 && !pfn_t_devmap(*pfnp))
+   goto out;
+   rc = 0;
+out:
dax_read_unlock(id);
+   return rc;
+}
+
+static int dax_insert_mapping(struct vm_fault *vmf, struct iomap *iomap,
+ loff_t pos, void *entry)
+{
+   const sector_t sector = dax_iomap_sector(iomap, pos);
+   struct vm_area_struct *vma = vmf->vma;
+   struct address_space *mapping = vma->vm_file->f_mapping;
+   unsigned long vaddr = vmf->address;
+   void *ret;
+   int rc;
+   pfn_t pfn;
+
+   rc = dax_iomap_pfn(iomap, pos, PAGE_SIZE, &pfn);
+   if (rc < 0)
+   return rc;
 
ret = dax_insert_mapping_entry(mapping, vmf, entry, sector, 0);
if (IS_ERR(ret))
@@ -1223,46 +1246,26 @@ static int dax_pmd_insert_mapping(struct vm_fault *vmf, struct iomap *iomap,
 {
struct address_space *mapping = vmf->vma->vm_file->f_mapping;
const sector_t sector = dax_iomap_sector(iomap, pos);
-   struct dax_device *dax_dev = iomap->dax_dev;
-   struct block_device *bdev = iomap->bdev;
struct inode *inode = mapping->host;
-   const size_t size = PMD_SIZE;
-   void *ret = NULL, *kaddr;
-   long length = 0;
-   pgoff_t pgoff;
+   void *ret = NULL;
pfn_t pfn = {};
-   int id;
+   int rc;
 
-   if (bdev_dax_pgoff(bdev, sector, size, &pgoff) != 0)
+   rc = dax_iomap_pfn(iomap, pos, PMD_SIZE, &pfn);
+   if (rc < 0)
goto fallback;
 
-   id = dax_read_lock();
-   length = dax_direct_access(dax_dev, pgoff, PHYS_PFN(size), &kaddr, &pfn);
-   if (length < 0)
-   goto unlock_fallback;
-   length = PFN_PHYS(length);
-
-   if (length < size)
-   goto unlock_fallback;
-   if (pfn_t_to_pfn(pfn) & PG_PMD_COLOUR)
-   goto unlock_fallback;
-   if (!pfn_t_devmap(pfn))
-   goto unlock_fallback;
-   dax_read_unlock(id);
-
ret = dax_insert_mapping_entry(mapping, vmf, entry, sector,
RADIX_DAX_PMD);
if (IS_ERR(ret))
goto fallback;
 
-   trace_dax_pmd_insert_mapping(inode, vmf, length, pfn, ret);
+   trace_dax_pmd_insert_mapping(inode, vmf, PMD_SIZE, pfn, ret);
return vmf_insert_pfn_pmd(vmf->vma, vmf->address, vmf->pmd,
pfn, vmf->flags & FAULT_FLAG_WRITE);
 
-unlock_fallback:
-   dax_read_unlock(id);
 fallback:
-   trace_dax_pmd_insert_mapping_fallback(inode, vmf, length, pfn, ret);
+   trace_dax_pmd_insert_mapping_fallback(inode, vmf, PMD_SIZE, pfn, ret);
return VM_FAULT_FALLBACK;
 }
 
-- 
2.12.3



Re: [PATCH] mmap.2: Add description of MAP_SHARED_VALIDATE and MAP_SYNC

2017-10-24 Thread Ross Zwisler
On Tue, Oct 24, 2017 at 03:27:13PM +0200, Jan Kara wrote:
> On Fri 20-10-17 15:47:53, Ross Zwisler wrote:
> > On Thu, Oct 19, 2017 at 02:58:17PM +0200, Jan Kara wrote:
> > > Signed-off-by: Jan Kara 
> > > ---
> > >  man2/mmap.2 | 30 ++
> > >  1 file changed, 30 insertions(+)
> > > 
> > > diff --git a/man2/mmap.2 b/man2/mmap.2
> > > index 47c3148653be..598ff0c64f7f 100644
> > > --- a/man2/mmap.2
> > > +++ b/man2/mmap.2
> > > @@ -125,6 +125,21 @@ are carried through to the underlying file.
> > >  to the underlying file requires the use of
> > >  .BR msync (2).)
> > >  .TP
> > > +.B MAP_SHARED_VALIDATE
> > > +The same as
> > > +.B MAP_SHARED
> > > +except that
> > > +.B MAP_SHARED
> > > +mappings ignore unknown flags in
> > > +.IR flags .
> > > +In contrast when creating mapping of
> > > +.B MAP_SHARED_VALIDATE
> > > +mapping type, the kernel verifies all passed flags are known and fails 
> > > the
> > > +mapping with
> > > +.BR EOPNOTSUPP
> > > +otherwise. This mapping type is also required to be able to use some 
> > > mapping
> > > +flags.
> > > +.TP
> > 
> > Some small nits:
> > 
> > I think you should maybe include a "(since Linux 4.15)" type note after the
> > MAP_SHARED_VALIDATE header.  You also need to update the following line:
> > 
> >Both of these flags are described in POSIX.1-2001 and POSIX.1-2008.
> > 
> > Which used to refer to MAP_SYNC and MAP_PRIVATE.
> 
> Thanks, I've fixed these.

Cool, you can add my:

Reviewed-by: Ross Zwisler 


Re: [PATCH] mmap.2: Add description of MAP_SHARED_VALIDATE and MAP_SYNC

2017-10-24 Thread Jan Kara
On Fri 20-10-17 15:47:53, Ross Zwisler wrote:
> On Thu, Oct 19, 2017 at 02:58:17PM +0200, Jan Kara wrote:
> > Signed-off-by: Jan Kara 
> > ---
> >  man2/mmap.2 | 30 ++
> >  1 file changed, 30 insertions(+)
> > 
> > diff --git a/man2/mmap.2 b/man2/mmap.2
> > index 47c3148653be..598ff0c64f7f 100644
> > --- a/man2/mmap.2
> > +++ b/man2/mmap.2
> > @@ -125,6 +125,21 @@ are carried through to the underlying file.
> >  to the underlying file requires the use of
> >  .BR msync (2).)
> >  .TP
> > +.B MAP_SHARED_VALIDATE
> > +The same as
> > +.B MAP_SHARED
> > +except that
> > +.B MAP_SHARED
> > +mappings ignore unknown flags in
> > +.IR flags .
> > +In contrast when creating mapping of
> > +.B MAP_SHARED_VALIDATE
> > +mapping type, the kernel verifies all passed flags are known and fails the
> > +mapping with
> > +.BR EOPNOTSUPP
> > +otherwise. This mapping type is also required to be able to use some 
> > mapping
> > +flags.
> > +.TP
> 
> Some small nits:
> 
> I think you should maybe include a "(since Linux 4.15)" type note after the
> MAP_SHARED_VALIDATE header.  You also need to update the following line:
> 
>Both of these flags are described in POSIX.1-2001 and POSIX.1-2008.
> 
> Which used to refer to MAP_SYNC and MAP_PRIVATE.

Thanks, I've fixed these.

Honza

-- 
Jan Kara 
SUSE Labs, CR


Re: [PATCH 01/17] mm: introduce MAP_SHARED_VALIDATE, a mechanism to safely define new mmap flags

2017-10-24 Thread Jan Kara
On Fri 20-10-17 00:27:07, Christoph Hellwig wrote:
> > if (file) {
> > struct inode *inode = file_inode(file);
> > +   unsigned long flags_mask = file->f_op->mmap_supported_flags;
> > +
> > +   if (!flags_mask)
> > +   flags_mask = LEGACY_MAP_MASK;
> >  
> > switch (flags & MAP_TYPE) {
> > case MAP_SHARED:
> > +   /*
> > +* Silently ignore unsupported flags - MAP_SHARED has
> > +* traditionally behaved like that and we don't want
> > +* to break compatibility.
> > +*/
> > +   flags &= flags_mask;
> > +   /*
> > +* Force use of MAP_SHARED_VALIDATE with non-legacy
> > +* flags. E.g. MAP_SYNC is dangerous to use with
> > +* MAP_SHARED as you don't know which consistency model
> > +* you will get.
> > +*/
> > +   flags &= LEGACY_MAP_MASK;
> > +   /* fall through */
> > +   case MAP_SHARED_VALIDATE:
> > +   if (flags & ~flags_mask)
> > +   return -EOPNOTSUPP;
> 
> Hmmm.  I'd expect this to worth more like:
> 
>   case MAP_SHARED:
>   /* Ignore all new flags that need validation: */
>   flags &= LEGACY_MAP_MASK;
>   /*FALLTHROUGH*/
>   case MAP_SHARED_VALIDATE:
>   if (flags & ~file->f_op->mmap_supported_flags)
>   return -EOPNOTSUPP;
> 
> with the legacy mask always implicitly support as indicated in my
> comment to the XFS patch.

I was thinking about this. Originally I thought that mmap_supported_flags
would allow also to declare some legacy flags as unsupported and also it
seemed as a nicer symmetric interface to me. But I guess the need to mask
out legacy flags is mostly theoretical so I'm fine giving that up. So I'll
change this as you suggest.

> Although even the ignoring in MAP_SHARED seems dangerous, but I guess
> we need that to keep strict backwards compatibility.  In world I'd
> rather do
> 
>   case MAP_SHARED:
>   if (flags & ~LEGACY_MAP_MASK)
>   return -EINVAL;

Yes, I think just ignoring new flags for MAP_SHARED is safer...

Honza

-- 
Jan Kara 
SUSE Labs, CR