On Wednesday, November 02, 2016 6:36 AM Jan Kara wrote:
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index 8e8b76d11bb4..2a4ebe3c67c6 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -297,8 +297,6 @@ struct vm_fault {
> gfp_t gfp_mask; /* gfp mask
Hi,
I forgot to add Kirill on CC since this modifies the fault path he changed
recently. I don't want to resend the whole series just because of this, so
I'm at least pinging him like this...
Honza
On Tue 01-11-16 23:36:06, Jan Kara
On Wed, Nov 02, 2016 at 09:12:35AM +1100, Dave Chinner wrote:
> On Tue, Nov 01, 2016 at 10:06:10PM +0100, Jan Kara wrote:
> > Hello,
> >
> > this patch set converts ext4 DAX IO paths to the new iomap framework and
> > removes the old bh-based DAX functions. As a result ext4 gains PMD page
> >
Currently PTE gets updated in wp_pfn_shared() after dax_pfn_mkwrite()
has released corresponding radix tree entry lock. When we want to
writeprotect PTE on cache flush, we need PTE modification to happen
under radix tree entry lock to ensure consistent updates of PTE and radix
tree (standard faults
Add an orig_pte field to the vm_fault structure to allow ->page_mkwrite
handlers to fully handle the fault. This also saves some passing of
extra arguments around.
Signed-off-by: Jan Kara
---
include/linux/mm.h | 4 +--
mm/internal.h | 2 +-
mm/khugepaged.c | 7
Currently we never clear dirty tags in DAX mappings and thus address
ranges to flush accumulate. Now that we have locking of radix tree
entries, we have all the locking necessary to reliably clear the radix
tree dirty tag when flushing caches for corresponding address range.
Similarly to
Currently finish_mkwrite_fault() returns 0 when the PTE got changed before
we acquired the PTE lock, and VM_FAULT_WRITE when we succeeded in modifying
the PTE. This is somewhat confusing since 0 generally means success; it
is also inconsistent with finish_fault(), which returns 0 on success.
Change
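The return-value convention being discussed can be sketched in a small userspace mock. This is an illustration of the convention, not the kernel's actual code: the flag value and the assumption that the "PTE changed under us" case reports VM_FAULT_NOPAGE are stand-ins for the real definitions.

```c
/* Hypothetical stand-in for the kernel's fault return flag. */
#define VM_FAULT_NOPAGE 0x0100

/* Sketch of the revised convention: 0 means the PTE was updated
 * successfully (consistent with finish_fault()); a non-zero flag
 * means the PTE changed under us and nothing was installed. */
static unsigned int finish_mkwrite_fault_sketch(int pte_changed_under_us)
{
    if (pte_changed_under_us)
        return VM_FAULT_NOPAGE;
    /* ... update the PTE here ... */
    return 0; /* success */
}
```

With this shape, callers can treat 0 uniformly as "fault handled, page installed" across both helpers.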
We will need more information in the ->page_mkwrite() helper for DAX to
be able to fully finish faults there. Pass vm_fault structure to
do_page_mkwrite() and use it there so that information propagates
properly from upper layers.
Reviewed-by: Ross Zwisler
To allow full handling of COW faults, add a memcg field to struct vm_fault
and a return value of the ->fault() handler meaning that the COW fault is
fully handled and the memcg charge must not be canceled. This will allow us
to remove knowledge about special DAX locking from the generic fault code.
Reviewed-by:
DAX will need to implement its own version of page_check_address(). To
avoid duplicating page table walking code, export follow_pte() which
does what we need.
Signed-off-by: Jan Kara
---
include/linux/mm.h | 2 ++
mm/memory.c | 4 ++--
2 files changed, 4 insertions(+), 2
Provide a helper function for finishing write faults due to the PTE being
read-only. The helper will be used by DAX to avoid complicating generic
MM code with DAX locking specifics.
Reviewed-by: Ross Zwisler
Signed-off-by: Jan Kara
---
Every single user of vmf->virtual_address cast that entry to unsigned
long before doing anything with it. So just change the type of that
entry to unsigned long directly.
Signed-off-by: Jan Kara
---
arch/powerpc/platforms/cell/spufs/file.c | 4 ++--
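The pattern the commit message describes can be shown with mock structs. These are hypothetical stand-ins, not the kernel's struct vm_fault; they only illustrate why moving the cast into the field's type removes boilerplate at every call site.

```c
/* Before: handlers cast a void * on every use (mock struct). */
struct vm_fault_old { void *virtual_address; };
/* After: the field carries the type every user actually wanted. */
struct vm_fault_new { unsigned long address; };

static unsigned long page_base_old(struct vm_fault_old *vmf)
{
    /* The cast every caller had to repeat. */
    return (unsigned long)vmf->virtual_address & ~0xfffUL;
}

static unsigned long page_base_new(struct vm_fault_new *vmf)
{
    return vmf->address & ~0xfffUL; /* no cast needed */
}
```

Both compute the same page-aligned address; the second simply stops restating a conversion that was always implied.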
Introduce finish_fault() as a helper function for finishing page
faults. It is a rather thin wrapper around alloc_set_pte(), but since
we'd want to call this from DAX code or filesystems, it is still useful
to avoid some boilerplate code.
Reviewed-by: Ross Zwisler
Implement a basic iomap_begin function that handles reading, and use it
for DAX reads.
Signed-off-by: Jan Kara
---
fs/ext4/ext4.h | 2 ++
fs/ext4/file.c | 40 +++-
fs/ext4/inode.c | 54 ++
3
No one uses the functions based on the get_block callback anymore. Rip
them out.
Signed-off-by: Jan Kara
---
fs/dax.c | 315
include/linux/dax.h | 12 --
2 files changed, 327 deletions(-)
diff --git a/fs/dax.c b/fs/dax.c
Currently we don't allow unaligned writes without inode_lock. This is
because zeroing of partial blocks could cause data corruption for racing
unaligned writes to the same block. However DAX handles zeroing during
block allocation and thus zeroing of partial blocks cannot race. Allow
DAX unaligned
Hello,
this patch set converts ext4 DAX IO paths to the new iomap framework and
removes the old bh-based DAX functions. As a result ext4 gains PMD page
fault support, and some other minor bugs get fixed. The patch set is based
on Ross' DAX PMD page fault support series [1]. It passes xfstests
The last user of ext2_get_blocks() for DAX inodes was
dax_truncate_page(). Convert that to iomap_zero_range() so that all DAX
IO uses the iomap path.
Signed-off-by: Jan Kara
---
fs/ext2/inode.c | 11 ---
1 file changed, 8 insertions(+), 3 deletions(-)
diff --git
Currently mapping of blocks for DAX writes happens with the
EXT4_GET_BLOCKS_PRE_IO flag set. As a result, each
ext4_map_blocks() call creates a separate written extent, although it
could be merged with the neighboring extents in the extent tree. The
reason for using this flag is that in case
Currently iomap_end() doesn't do anything for DAX page faults in either
ext2 or XFS. ext2_iomap_end() just checks for a write underrun, and
xfs_file_iomap_end() checks to see if it needs to finish a delayed
allocation. However, in the future iomap_end() calls might be needed to
make sure we have
To be able to correctly calculate the sector from a file position and a
struct iomap there is a complex little bit of logic that currently happens
in both dax_iomap_actor() and dax_iomap_fault(). This will need to be
repeated yet again in the DAX PMD fault handler when it is added, so break
it
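One plausible shape of the factored-out sector calculation can be sketched in plain C. The struct and field names here (offset as the file position where the mapping starts, blkno as the starting 512-byte sector) are assumptions for illustration, not the kernel's exact struct iomap API.

```c
typedef long long loff_sketch_t;          /* file position, bytes */
typedef unsigned long long sector_sketch_t; /* 512-byte sectors */

/* Minimal stand-in for struct iomap. */
struct iomap_sketch {
    loff_sketch_t offset;   /* file offset the mapping starts at */
    sector_sketch_t blkno;  /* starting sector of the mapping */
};

/* Translate a file position into a device sector via the mapping:
 * take the byte distance into the mapping and convert to sectors. */
static sector_sketch_t dax_iomap_sector_sketch(const struct iomap_sketch *iomap,
                                               loff_sketch_t pos)
{
    return iomap->blkno + ((pos - iomap->offset) >> 9);
}
```

Centralizing this in one helper is exactly what avoids repeating the arithmetic in dax_iomap_actor(), dax_iomap_fault(), and the future PMD fault handler.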
No functional change.
The static functions put_locked_mapping_entry() and
put_unlocked_mapping_entry() will soon be used in error cases in
grab_mapping_entry(), so move their definitions above this function.
Signed-off-by: Ross Zwisler
Reviewed-by: Jan Kara
DAX radix tree locking currently locks entries based on the unique
combination of the 'mapping' pointer and the pgoff_t 'index' for the entry.
This works for PTEs, but as we move to PMDs we will need to have all the
offsets within the range covered by the PMD to map to the same bit lock.
To
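The "same bit lock for the whole PMD range" idea reduces to rounding a page offset down to a PMD boundary. A minimal sketch, assuming 4K pages and 2M PMDs (so 512 page offsets per PMD); this is the alignment concept only, not the kernel's radix tree locking code.

```c
/* Assumption: 4K pages, 2M PMDs => 512 page offsets per PMD. */
#define PG_PMD_NR 512UL

/* All page offsets within one PMD map to the same lock index by
 * rounding down to the PMD boundary. */
static unsigned long pmd_lock_index(unsigned long index)
{
    return index & ~(PG_PMD_NR - 1);
}
```

Any two offsets inside the same 2M-aligned range thus contend on one entry lock, which is what PMD entries require.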
On 10/28/2016 04:54 AM, Boylston, Brian wrote:
> Boaz Harrosh wrote on 2016-10-26:
>> On 10/26/2016 06:50 PM, Brian Boylston wrote:
>>> Introduce memcpy_nocache() as a memcpy() that avoids the processor cache
>>> if possible. Without arch-specific support, this defaults to just
>>> memcpy(). For
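The quoted description of the fallback behavior can be sketched as follows. The macro ARCH_HAS_NOCACHE_MEMCPY and the function arch_memcpy_nocache() are hypothetical names used only to illustrate the "arch hook with a plain memcpy() fallback" structure being discussed.

```c
#include <string.h>

/* Sketch: without arch-specific support, a cache-avoiding copy
 * defaults to an ordinary memcpy(). */
static void *memcpy_nocache_sketch(void *dst, const void *src, size_t n)
{
#ifdef ARCH_HAS_NOCACHE_MEMCPY      /* hypothetical arch hook */
    return arch_memcpy_nocache(dst, src, n);
#else
    return memcpy(dst, src, n);     /* portable fallback */
#endif
}
```

The copy is correct either way; the arch hook only changes whether the destination lines are left in the processor cache.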