From: Ira Weiny
To support kmap_atomic_prot(), all architectures need to support
protections passed to their kmap_atomic_high() function. Pass
protections into kmap_atomic_high() and change the name to
kmap_atomic_high_prot() to match.
Then define kmap_atomic_prot() as a core function which
From: Ira Weiny
The kmap infrastructure has been copied almost verbatim to every architecture.
This series consolidates obvious duplicated code by defining core functions
which call into the architectures only when needed.
Some of the k[un]map_atomic() implementations have some similarities but
From: Ira Weiny
kmap_atomic_prot() is now exported by all architectures. Use this
function rather than open coding a driver specific kmap_atomic.
Reviewed-by: Christian König
Reviewed-by: Christoph Hellwig
Signed-off-by: Ira Weiny
---
drivers/gpu/drm/ttm/ttm_bo_util.c | 56
From: Ira Weiny
Move the kmap() build bug to kmap_init() to facilitate patches to lift
kmap() to the core.
Reviewed-by: Christoph Hellwig
Signed-off-by: Ira Weiny
---
Changes from V1:
combine code onto 1 line.
---
arch/xtensa/include/asm/highmem.h | 5 -
arch/xtensa/mm/highmem.c
On Mon, May 04, 2020 at 02:35:09AM +0100, Al Viro wrote:
> On Sun, May 03, 2020 at 06:09:01PM -0700, ira.we...@intel.com wrote:
> > From: Ira Weiny
> >
> > The kmap infrastructure has been copied almost verbatim to every
> > architecture.
> > This series consol
From: Ira Weiny
Continue the kmap clean up with 2 follow on patches
These apply after the kmap cleanup V2 series:
https://lore.kernel.org/lkml/20200504010912.982044-1-ira.we...@intel.com/
Ira Weiny (2):
kmap: Remove kmap_atomic_to_page()
parisc/kmap: Remove duplicate kmap code
arch/csky
From: Ira Weiny
parisc reimplements the kmap calls except to flush its dcache. This is
arguably an abuse of kmap but, regardless, it is messy and confusing.
Remove the duplicate code and have parisc define
ARCH_HAS_FLUSH_ON_KUNMAP for a kunmap_flush_on_unmap() architecture
specific ca
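Under the scheme described, parisc keeps only its dcache flush and opts in via the new hook; a sketch (flush_kernel_dcache_page_addr() is parisc's existing flush helper, and this is not the series' verbatim code):

```c
/* parisc: opt in to a flush on every kunmap() */
#define ARCH_HAS_FLUSH_ON_KUNMAP
static inline void kunmap_flush_on_unmap(void *addr)
{
	flush_kernel_dcache_page_addr(addr);
}
```

The core kunmap() then calls kunmap_flush_on_unmap() only when ARCH_HAS_FLUSH_ON_KUNMAP is defined, so other architectures pay nothing.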
From: Ira Weiny
kmap_atomic_to_page() has no callers and is only defined on 1 arch and
declared on another. Remove it.
Suggested-by: Al Viro
Signed-off-by: Ira Weiny
---
arch/csky/include/asm/highmem.h | 1 -
arch/csky/mm/highmem.c | 13 -
arch/nds32/include/asm
From: Ira Weiny
All architectures do exactly the same thing for kunmap(); remove all the
duplicate definitions and lift the call to the core.
This also has the benefit of changing kunmap() on a number of
architectures to be an inline call rather than an actual function.
Signed-off-by: Ira
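Lifted to the core, the shared kunmap() looks roughly like this sketch (names taken from the snippets above; not verbatim from the series):

```c
static inline void kunmap(struct page *page)
{
	might_sleep();
	if (!PageHighMem(page))
		return;
	kunmap_high(page);
}
```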
From: Ira Weiny
To support kmap_atomic_prot() on all architectures, each arch must
support protections passed in to it.
Change csky, mips, nds32 and xtensa to use their global kmap_prot value
rather than a hard coded value which was equal.
Signed-off-by: Ira Weiny
---
arch/csky/mm/highmem.c
From: Ira Weiny
Move the kmap() build bug to kmap_init() to facilitate patches to lift
kmap() to the core.
Signed-off-by: Ira Weiny
---
arch/xtensa/include/asm/highmem.h | 5 -
arch/xtensa/mm/highmem.c | 5 +
2 files changed, 5 insertions(+), 5 deletions(-)
diff --git a/arch
From: Ira Weiny
We want to support kmap_atomic_prot() on all architectures and it makes
sense to define kmap_atomic() to use the default kmap_prot.
So we ensure all arch's have a globally available kmap_prot either as a
define or exported symbol.
Signed-off-by: Ira Weiny
---
arch/micro
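Making kmap_prot globally available can take either of the forms the text mentions; a sketch of the two options (assumed shapes, not verbatim arch code):

```c
/* option 1: a compile-time constant (most arches) */
#define kmap_prot PAGE_KERNEL

/* option 2: a runtime value, made visible to modules and the core */
pgprot_t kmap_prot;
EXPORT_SYMBOL(kmap_prot);
```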
From: Ira Weiny
Replace the use of BUG_ON(in_interrupt()) in kmap() and kunmap()
with might_sleep().
Besides the benefits of might_sleep(), this normalizes the
implementations such that they can be made generic in subsequent
patches.
Reviewed-by: Dan Williams
Signed-off-by: Ira
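The substitution is mechanical; a sketch of the before/after inside kmap():

```c
/* before: only rejects interrupt context */
BUG_ON(in_interrupt());

/* after: documents that kmap() may sleep, and (with
 * CONFIG_DEBUG_ATOMIC_SLEEP) warns in any atomic context
 */
might_sleep();
```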
From: Ira Weiny
The kmap infrastructure has been copied almost verbatim to every architecture.
This series consolidates obvious duplicated code by defining core functions
which call into the architectures only when needed.
Some of the k[un]map_atomic() implementations have some similarities but
From: Ira Weiny
To support kmap_atomic_prot(), all architectures need to support
protections passed to their kmap_atomic_high() function. Pass
protections into kmap_atomic_high() and change the name to
kmap_atomic_high_prot() to match.
Then define kmap_atomic_prot() as a core function which
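The resulting core function would look something like this sketch, consistent with the renamed arch hook above (not the series' verbatim code):

```c
static inline void *kmap_atomic_prot(struct page *page, pgprot_t prot)
{
	preempt_disable();
	pagefault_disable();
	if (!PageHighMem(page))
		return page_address(page);
	return kmap_atomic_high_prot(page, prot);
}

/* kmap_atomic() becomes the default-protection case */
#define kmap_atomic(page) kmap_atomic_prot(page, kmap_prot)
```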
From: Ira Weiny
Every single architecture (including !CONFIG_HIGHMEM) calls...
pagefault_enable();
preempt_enable();
... before returning from __kunmap_atomic(). Lift this code into the
kunmap_atomic() macro.
While we are at it, rename __kunmap_atomic() to kunmap_atomic_high
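After the lift, the macro would read roughly like this sketch (a simplified shape, not verbatim):

```c
#define kunmap_atomic(addr)			\
do {						\
	kunmap_atomic_high(addr);		\
	pagefault_enable();			\
	preempt_enable();			\
} while (0)
```

Every architecture's __kunmap_atomic() then shrinks to just the arch-specific unmap work.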
From: Ira Weiny
kmap_atomic_prot() is now exported by all architectures. Use this
function rather than open coding a driver specific kmap_atomic.
Signed-off-by: Ira Weiny
---
drivers/gpu/drm/ttm/ttm_bo_util.c | 56 ++--
drivers/gpu/drm/vmwgfx/vmwgfx_blit.c | 16
From: Ira Weiny
The kmap code for all the architectures is almost 100% identical.
Lift the common code to the core. Use ARCH_HAS_KMAP_FLUSH_TLB to
indicate if an arch defines kmap_flush_tlb() and call it if needed.
This also has the benefit of changing kmap() on a number of
architectures to
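A sketch of the lifted kmap(), with the opt-in TLB flush hook described above (assumed shape, not verbatim):

```c
static inline void *kmap(struct page *page)
{
	void *addr;

	might_sleep();
	if (!PageHighMem(page))
		addr = page_address(page);
	else
		addr = kmap_high(page);
#ifdef ARCH_HAS_KMAP_FLUSH_TLB
	kmap_flush_tlb((unsigned long)addr);
#endif
	return addr;
}
```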
From: Ira Weiny
Every arch has the same code to ensure atomic operations and a check
for a !HIGHMEM page.
Remove the duplicate code by defining a core kmap_atomic() which only
calls the arch specific kmap_atomic_high() when the page is high memory.
Signed-off-by: Ira Weiny
---
Changes from V0
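The core kmap_atomic() described here would look roughly like this sketch (arch hook name per the text above; not verbatim from the series):

```c
static inline void *kmap_atomic(struct page *page)
{
	preempt_disable();
	pagefault_disable();
	if (!PageHighMem(page))
		return page_address(page);
	return kmap_atomic_high(page);
}
```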
On Fri, May 01, 2020 at 12:31:54AM +0530, Souptick Joarder wrote:
> Document path Documentation/vm/pin_user_pages.rst is not a correct
> reference and it should be Documentation/core-api/pin_user_pages.rst.
>
> Signed-off-by: Souptick Joarder
Reviewed-by: Ira Weiny
> ---
>
On Fri, May 01, 2020 at 01:41:58AM +0530, Souptick Joarder wrote:
> As per documentation, pin_user_pages_fast() & get_user_pages_fast()
> will return 0, if nr_pages <= 0. But this can be figure out only after
> going inside the internal_get_user_pages_fast().
Why is nr_pages not unsigned? I seem
On Tue, Oct 22, 2019 at 02:32:04PM +0300, Boaz Harrosh wrote:
> On 20/10/2019 18:59, ira.we...@intel.com wrote:
> > From: Ira Weiny
> >
> > In order for users to determine if a file is currently operating in DAX
> > mode (effective DAX). Define a statx attribute value
On Tue, Apr 28, 2020 at 01:27:38PM -0700, Darrick J. Wong wrote:
> On Mon, Apr 27, 2020 at 05:21:35PM -0700, ira.we...@intel.com wrote:
> > From: Ira Weiny
> >
[snip]
> > +
> > + 3. If the persistent FS_XFLAG_DAX flag is set on a directory, this flag
> >
On Tue, Apr 28, 2020 at 12:44:43PM -0700, Matthew Wilcox wrote:
> From: "Matthew Wilcox (Oracle)"
>
> x86 uses page->lru of the pages used for pgds, but that's not immediately
> obvious to anyone looking to make changes. Add a struct list_head to
> the union so it's clearly in use for pgds.
>
>
From: Ira Weiny
Update the Usage section to reflect the new individual dax selection
functionality.
Signed-off-by: Ira Weiny
---
Changes from V11:
Minor changes from Darrick
Changes from V10:
Clarifications from Dave
Add '-c' to xfs_io examples
Chang
From: Ira Weiny
Update the Usage section to reflect the new individual dax selection
functionality.
Signed-off-by: Ira Weiny
---
Changes from V11:
Minor changes from Darrick
Changes from V10:
Clarifications from Dave
Add '-c' to xfs_io examples
Chang
Sorry ignore this one...
I got the 'reply to' wrong...
Ira
On Tue, Apr 28, 2020 at 03:19:42PM -0700, 'Ira Weiny' wrote:
> From: Ira Weiny
>
> Update the Usage section to reflect the new individual dax selection
> functionality.
>
> Signed-off-by:
On Tue, Apr 28, 2020 at 07:21:18PM -0700, Randy Dunlap wrote:
> On 4/28/20 3:21 PM, ira.we...@intel.com wrote:
> > From: Ira Weiny
> >
> > Update the Usage section to reflect the new individual dax selection
> > functionality.
> >
> > Signed-off-by: Ira W
From: Ira Weiny
Update the Usage section to reflect the new individual dax selection
functionality.
Signed-off-by: Ira Weiny
---
Changes from V11.1:
Make filesystem/file system consistently filesystem
grammatical fixes
Changes from V11:
Minor changes from Darrick
On Tue, Apr 28, 2020 at 03:52:51PM -0700, Matthew Wilcox wrote:
> On Tue, Apr 28, 2020 at 02:41:09PM -0700, Ira Weiny wrote:
> > On Tue, Apr 28, 2020 at 12:44:43PM -0700, Matthew Wilcox wrote:
> > > x86 uses page->lru of the pages used for pgds, but that's not immediate
On Mon, Oct 14, 2019 at 08:14:04PM +0530, Aneesh Kumar K.V wrote:
> On 10/14/19 7:22 PM, Kirill A. Shutemov wrote:
> > On Sun, Oct 13, 2019 at 11:43:10PM -0700, John Hubbard wrote:
> > > On 10/13/19 11:12 PM, kbuild test robot wrote:
> > > > Hi John,
> > > >
> > > > Thank you for the patch! Yet so
to avoid checkpatch line length
> complaints, and another line to fix another oversight
> that checkpatch called out: missing "int" on pdshift.
>
> Fixes: b798bec4741b ("mm/gup: change write parameter to flags in fast walk")
> Reported-by: kbuild test robot
et device subsystems add local
> lockdep coverage")
> Signed-off-by: Dan Carpenter
Reviewed-by: Ira Weiny
> ---
> drivers/acpi/nfit/core.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/drivers/acpi/nfit/core.c b/drivers/acpi/nfit/core.c
> index
From: Ira Weiny
xfs_ioctl_setattr_dax_invalidate() currently checks if the DAX flag is
changing as a quick check.
But the implementation mixes the physical (XFS_DIFLAG2_DAX) and
effective (S_DAX) DAX flags.
Remove the use of the effective flag when determining if a change of the
physical flag
From: Ira Weiny
xfs_inode_supports_dax() should reflect if the inode can support DAX,
not that it is enabled for DAX. Leave that to other helper functions.
Change the caller of xfs_inode_supports_dax() to call
xfs_inode_use_dax() which reflects new logic to override the effective
DAX flag with
From: Ira Weiny
Rather than open coding xfs_inode_supports_dax() in
xfs_ioctl_setattr_dax_invalidate() export xfs_inode_supports_dax() and
call it in preparation for swapping dax flags.
This also means updating xfs_inode_supports_dax() to return true for a
directory.
Signed-off-by: Ira Weiny
From: Ira Weiny
In order for users to determine if a file is currently operating in DAX
mode (effective DAX), define a statx attribute value and set that
attribute if the effective DAX flag is set.
To go along with this we propose the following addition to the statx man
page:
STATX_ATTR_DAX
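From userspace, the attribute would be queried via statx(2); a sketch (is_dax() is a hypothetical helper, not part of the patch):

```c
#define _GNU_SOURCE
#include <fcntl.h>	/* AT_FDCWD */
#include <sys/stat.h>	/* statx(), struct statx, STATX_ATTR_DAX */

/* Returns 1 if the file is operating in DAX mode, 0 if not, -1 on error.
 * stx_attributes is filled regardless of the request mask.
 */
static int is_dax(const char *path)
{
	struct statx stx;

	if (statx(AT_FDCWD, path, 0, 0, &stx) != 0)
		return -1;
	return !!(stx.stx_attributes & STATX_ATTR_DAX);
}
```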
From: Ira Weiny
At LSF/MM'19 [1] [2] we discussed applications that overestimate memory
consumption due to their inability to detect whether the kernel will
instantiate page cache for a file, and cases where a global dax enable via a
mount option is too coarse.
The following patch s
From: Ira Weiny
Switching between DAX and non-DAX on a file is racy with respect to data
operations. However, if no data is involved the flag is safe to switch.
Allow toggling the physical flag if a file is empty. The file length
check is not racy with respect to other operations as it is
On Mon, Oct 21, 2019 at 11:26:21AM +1100, Dave Chinner wrote:
> On Sun, Oct 20, 2019 at 08:59:32AM -0700, ira.we...@intel.com wrote:
> > From: Ira Weiny
> >
> > xfs_ioctl_setattr_dax_invalidate() currently checks if the DAX flag is
> > changing as a quick check.
>
On Mon, Oct 21, 2019 at 11:45:36AM +1100, Dave Chinner wrote:
> On Sun, Oct 20, 2019 at 08:59:35AM -0700, ira.we...@intel.com wrote:
> > @@ -1232,12 +1233,10 @@ xfs_diflags_to_linux(
> > inode->i_flags |= S_NOATIME;
> > else
> > inode->i_flags &= ~S_NOATIME;
> > -#if 0
On Wed, Feb 13, 2019 at 11:00:06PM -0700, Jason Gunthorpe wrote:
> On Wed, Feb 13, 2019 at 05:53:14PM -0800, Ira Weiny wrote:
> > On Mon, Feb 11, 2019 at 03:54:47PM -0700, Jason Gunthorpe wrote:
> > > On Mon, Feb 11, 2019 at 05:44:32PM -0500, Daniel Jordan wrote:
> > >
On Thu, Feb 14, 2019 at 01:12:31PM -0700, Jason Gunthorpe wrote:
> On Thu, Feb 14, 2019 at 11:33:53AM -0800, Ira Weiny wrote:
>
> > > I think it had to do with double accounting pinned and mlocked pages
> > > and thus delivering a lower than expected limit to userspace.
>
race.
>
>
> [1] https://lkml.kernel.org/r/20190204052135.25784-1-jhubb...@nvidia.com
>
> Cc: Christian Benvenuti
> Cc: Christoph Hellwig
> Cc: Christopher Lameter
> Cc: Dan Williams
> Cc: Dave Chinner
> Cc: Dennis Dalessandro
> Cc: Doug Ledford
>
ange get_user_pages() to use the new FOLL_LONGTERM flag and
> remove the specialized get_user_pages_longterm call.
>
> [1] https://lkml.org/lkml/2019/2/11/237
> [2] https://lkml.org/lkml/2019/2/11/1789
Any comments on this series? I've touched a lot of subsystems which I think
require
From: Ira Weiny
DAX pages were previously unprotected from longterm pins when users
called get_user_pages_fast().
Use the new FOLL_LONGTERM flag to check for DEVMAP pages and fall
back to regular GUP processing if a DEVMAP page is encountered.
Signed-off-by: Ira Weiny
---
mm/gup.c | 24
From: Ira Weiny
To facilitate additional options to get_user_pages_fast() change the
singular write parameter to be gup_flags.
This patch does not change any functionality. New functionality will
follow in subsequent patches.
Some of the get_user_pages_fast() call sites were unchanged because
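The call-site change is mechanical; a sketch of one converted caller (start, nr_pages, and pages are placeholder names):

```c
/* before: boolean 'write' parameter */
ret = get_user_pages_fast(start, nr_pages, 1, pages);

/* after: a flags parameter; FOLL_WRITE passed where write was 1 */
ret = get_user_pages_fast(start, nr_pages, FOLL_WRITE, pages);
```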
From: Ira Weiny
Pass the new FOLL_LONGTERM flag to get_user_pages_fast() to protect
against FS DAX pages being mapped.
Signed-off-by: Ira Weiny
---
drivers/infiniband/hw/hfi1/user_pages.c | 6 --
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/drivers/infiniband/hw/hfi1
From: Ira Weiny
Pass the new FOLL_LONGTERM flag to get_user_pages_fast() to protect
against FS DAX pages being mapped.
Signed-off-by: Ira Weiny
---
drivers/infiniband/hw/mthca/mthca_memfree.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/infiniband/hw/mthca
From: Ira Weiny
Pass the new FOLL_LONGTERM flag to get_user_pages_fast() to protect
against FS DAX pages being mapped.
Signed-off-by: Ira Weiny
---
drivers/infiniband/hw/qib/qib_user_sdma.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/infiniband/hw/qib/qib_user_sdma.c
From: Ira Weiny
In order to support more options in the GUP fast walk, change
the write parameter to flags throughout the call stack.
This patch does not change functionality and passes FOLL_WRITE
where write was previously used.
Signed-off-by: Ira Weiny
---
mm/gup.c | 52
From: Ira Weiny
Rather than have a separate get_user_pages_longterm() call,
introduce FOLL_LONGTERM and change the longterm callers to use
it.
This patch does not change any functionality.
FOLL_LONGTERM can only be supported with get_user_pages() as it
requires vmas to determine if DAX is in
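The conversion described reads roughly like this sketch (argument names are placeholders, not the exact signatures):

```c
/* before: a dedicated entry point for long-lived pins */
ret = get_user_pages_longterm(start, nr_pages, gup_flags, pages, vmas);

/* after: the generic call, with the behavior selected by a flag;
 * vmas are still required so DAX mappings can be detected
 */
ret = get_user_pages(start, nr_pages, gup_flags | FOLL_LONGTERM,
		     pages, vmas);
```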
From: Ira Weiny
Resending these as I had only 1 minor comment which I believe we have covered
in this series. I was anticipating these going through the mm tree as they
depend on a cleanup patch there and the IB changes are very minor. But they
could just as well go through the IB tree.
NOTE
On Wed, Feb 20, 2019 at 07:19:30AM -0800, Christoph Hellwig wrote:
> On Tue, Feb 19, 2019 at 09:30:33PM -0800, ira.we...@intel.com wrote:
> > From: Ira Weiny
> >
> > Resending these as I had only 1 minor comment which I believe we have
> > covered
> > in this
On Sun, Feb 03, 2019 at 09:21:33PM -0800, john.hubb...@gmail.com wrote:
> From: John Hubbard
>
[snip]
>
> +/*
> + * GUP_PIN_COUNTING_BIAS, and the associated functions that use it, overload
> + * the page's refcount so that two separate items are tracked: the original
> page
> + * reference
On Wed, Feb 06, 2019 at 10:23:10PM -0700, Jason Gunthorpe wrote:
> On Thu, Feb 07, 2019 at 02:52:58PM +1100, Dave Chinner wrote:
>
> > Requiring ODP capable hardware and applications that control RDMA
> > access to use file leases and be able to cancel/recall client side
> > delegations (like NFS
On Thu, Feb 07, 2019 at 10:28:05AM -0500, Tom Talpey wrote:
> On 2/7/2019 10:04 AM, Chuck Lever wrote:
> >
> >
> > > On Feb 7, 2019, at 12:23 AM, Jason Gunthorpe wrote:
> > >
> > > On Thu, Feb 07, 2019 at 02:52:58PM +1100, Dave Chinner wrote:
> > >
> > > > Requiring ODP capable hardware and ap
On Wed, Feb 06, 2019 at 07:13:16PM -0800, Dan Williams wrote:
> On Wed, Feb 6, 2019 at 6:42 PM Doug Ledford wrote:
> >
> > On Wed, 2019-02-06 at 14:44 -0800, Dan Williams wrote:
> > > On Wed, Feb 6, 2019 at 2:25 PM Doug Ledford wrote:
> > > > Can someone give me a real world scenario that someone
On Thu, Feb 07, 2019 at 04:55:37PM +, Christopher Lameter wrote:
> One approach that may be a clean way to solve this:
>
> 1. Long term GUP usage requires the virtual mapping to the pages be fixed
>for the duration of the GUP Map. There never has been a way to break
>the pinnning and t
On Thu, Feb 07, 2019 at 03:54:58PM -0800, Dan Williams wrote:
> On Thu, Feb 7, 2019 at 9:17 AM Jason Gunthorpe wrote:
> >
> > Insisting to run RDMA & DAX without ODP and building an elaborate
> > revoke mechanism to support non-ODP HW is inherently baroque.
> >
> > Use the HW that supports ODP.
>
From: Ira Weiny
write is unused in gup_fast_permitted, so remove it.
Acked-by: Kirill A. Shutemov
Signed-off-by: Ira Weiny
---
arch/x86/include/asm/pgtable_64.h | 3 +--
mm/gup.c | 6 +++---
2 files changed, 4 insertions(+), 5 deletions(-)
diff --git a/arch/x86
On Tue, Jan 29, 2019 at 03:26:24PM +0200, Joel Nider wrote:
> ib_umem_get is a core function used by drivers that support RDMA.
> The 'owner' parameter signifies the process that owns the memory.
> Until now, it was assumed that the owning process was the current
> process. This adds the flexibilit
On Tue, Jan 29, 2019 at 10:44:48AM -0600, Steve Wise wrote:
>
> On 1/29/2019 7:26 AM, Joel Nider wrote:
> > As discussed at LPC'18, there is a need to be able to register a memory
> > region (MR) on behalf of another process. One example is the case of
> > post-copy container migration, in which C
er_pages was not taking into
> > > account the current value of pinned_vm.
> > >
> > > Cc: dennis.dalessan...@intel.com
> > > Cc: mike.marcinis...@intel.com
> > > Reviewed-by: Ira Weiny
> > > Signed-off-by: Davidlohr Bueso
> > > dri
ice-DAX
> sub-systems.
>
> The linux-nvdimm mailing hosts a patchwork instance for both DAX and
> NVDIMM patches.
>
> Cc: Jan Kara
> Cc: Ira Weiny
> Cc: Ross Zwisler
> Cc: Keith Busch
> Cc: Matthew Wilcox
> Signed-off-by: Dan Williams
Acked-by: Ira Weiny
> ---
On Fri, May 10, 2019 at 09:36:12AM -0700, Matthew Wilcox wrote:
> On Fri, May 10, 2019 at 10:12:40AM +0800, Huang, Ying wrote:
> > > + nr_reclaimed += (1 << compound_order(page));
> >
> > How about to change this to
> >
> >
> > nr_reclaimed += hpage_nr_pages(page);
>
> Please do
rath Vedartham
Reviewed-by: Ira Weiny
> ---
> mm/gup.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/mm/gup.c b/mm/gup.c
> index 91819b8..e6f3b7f 100644
> --- a/mm/gup.c
> +++ b/mm/gup.c
> @@ -409,7 +409,7 @@ static struct page *follow
//lwn.net/Articles/753027/ : "The Trouble with get_user_pages()"
>
> Cc: Doug Ledford
> Cc: Jason Gunthorpe
> Cc: Mike Marciniszyn
> Cc: Dennis Dalessandro
> Cc: Christian Benvenuti
>
> Reviewed-by: Jan Kara
> Reviewed-by: Dennis Dalessandro
> Ack
On Thu, May 23, 2019 at 10:46:38AM -0700, John Hubbard wrote:
> On 5/23/19 10:32 AM, Jason Gunthorpe wrote:
> > On Thu, May 23, 2019 at 10:28:52AM -0700, Ira Weiny wrote:
> > > > @@ -686,8 +686,8 @@ int ib_umem_odp_map_dma_pages(struct ib_umem_odp
> > >
From: Ira Weiny
Signed-off-by: Ira Weiny
---
Documentation/x86/exception-tables.txt | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/Documentation/x86/exception-tables.txt
b/Documentation/x86/exception-tables.txt
index e396bcd8d830..001c0f1ad935 100644
--- a/Documentation
On Mon, May 20, 2019 at 05:00:07PM +0300, Kirill Tkhai wrote:
> Similar to process_vm_readv() and process_vm_writev(),
> add declarations of a new syscall, which will allow
> to map memory from or to another process.
Shouldn't this be the last patch in the series so that the syscall is actually
im
On Thu, May 23, 2019 at 12:13:59PM -0700, John Hubbard wrote:
> On 5/23/19 12:04 PM, Ira Weiny wrote:
> > On Thu, May 23, 2019 at 10:46:38AM -0700, John Hubbard wrote:
> > > On 5/23/19 10:32 AM, Jason Gunthorpe wrote:
> > > > On Thu, May 23, 2019 at 10:
From: Ira Weiny
Device pages can be more than type MEMORY_DEVICE_PUBLIC.
Handle all device pages within release_pages()
This was found via code inspection while determining if release_pages()
and the new put_user_pages() could be interchangeable.
Cc: Jérôme Glisse
Cc: Dan Williams
Cc
On Thu, May 23, 2019 at 08:58:12PM -0700, Dan Williams wrote:
> On Thu, May 23, 2019 at 3:37 PM wrote:
> >
> > From: Ira Weiny
> >
> > Device pages can be more than type MEMORY_DEVICE_PUBLIC.
> >
> > Handle all device pages within release_pages()
> >
From: Ira Weiny
Device pages can be more than type MEMORY_DEVICE_PUBLIC.
Handle all device pages within release_pages()
This was found via code inspection while determining if release_pages()
and the new put_user_pages() could be interchangeable.
Cc: Jérôme Glisse
Cc: Michal Hocko
Reviewed
From: Ira Weiny
RFC I have no idea if this is correct or not. But looking at
release_pages() I see a call to both __ClearPageActive() and
__ClearPageWaiters() while in __page_cache_release() I do not.
Is this a bug which needs to be fixed? Did I miss clearing active
somewhere else in the call
On Thu, Apr 04, 2019 at 03:23:47PM +0800, Huang Shijie wrote:
> When CONFIG_HAVE_GENERIC_GUP is defined, the kernel will use its own
> get_user_pages_fast().
>
> In the following scenario, we will may meet the bug in the DMA case:
> .
> get_user_pages_fast(s
On Sun, Apr 07, 2019 at 03:11:00PM -0700, Dan Williams wrote:
> On Thu, Apr 4, 2019 at 2:47 AM Robin Murphy wrote:
> >
> > On 04/04/2019 06:04, Dan Williams wrote:
> > > On Wed, Apr 3, 2019 at 9:42 PM Anshuman Khandual
> > > wrote:
> > >>
> > >>
> > >>
> > >> On 04/03/2019 07:28 PM, Robin Murphy
On Mon, Mar 25, 2019 at 10:40:02AM -0400, Jerome Glisse wrote:
> From: Jérôme Glisse
>
> Every time i read the code to check that the HMM structure does not
> vanish before it should thanks to the many lock protecting its removal
> i get a headache. Switch to reference counting instead it is much
On Mon, Mar 25, 2019 at 10:40:06AM -0400, Jerome Glisse wrote:
> From: Jérôme Glisse
>
> A common use case for HMM mirror is user trying to mirror a range
> and before they could program the hardware it get invalidated by
> some core mm event. Instead of having user re-try right away to
> mirror
entries in pfns array.
>
> Changes since v1:
> - updated documentation
> - reformated some comments
>
> Signed-off-by: Jérôme Glisse
> Reviewed-by: Ralph Campbell
> Reviewed-by: John Hubbard
Reviewed-by: Ira Weiny
> Cc: Andrew Morton
> Cc: Dan Wil
On Mon, Mar 25, 2019 at 10:40:05AM -0400, Jerome Glisse wrote:
> From: Jérôme Glisse
>
> Rename for consistency between code, comments and documentation. Also
> improves the comments on all the possible returns values. Improve the
> function by returning the number of populated entries in pfns ar
On Mon, Mar 25, 2019 at 10:40:06AM -0400, Jerome Glisse wrote:
> From: Jérôme Glisse
>
> A common use case for HMM mirror is user trying to mirror a range
> and before they could program the hardware it get invalidated by
> some core mm event. Instead of having user re-try right away to
> mirror
On Thu, Mar 28, 2019 at 04:28:47PM -0700, John Hubbard wrote:
> On 3/28/19 4:21 PM, Jerome Glisse wrote:
> > On Thu, Mar 28, 2019 at 03:40:42PM -0700, John Hubbard wrote:
> >> On 3/28/19 3:31 PM, Jerome Glisse wrote:
> >>> On Thu, Mar 28, 2019 at 03:19:06PM -0700, John Hubbard wrote:
> On 3/28
On Thu, Mar 28, 2019 at 05:39:26PM -0700, John Hubbard wrote:
> On 3/28/19 2:21 PM, Jerome Glisse wrote:
> > On Thu, Mar 28, 2019 at 01:43:13PM -0700, John Hubbard wrote:
> >> On 3/28/19 12:11 PM, Jerome Glisse wrote:
> >>> On Thu, Mar 28, 2019 at 04:07:20AM -0700,
t hmm_vma_walk *hmm_vma_walk = walk->private;
> struct hmm_range *range = hmm_vma_walk->range;
> uint64_t *pfns = range->pfns;
> - unsigned long i;
> + unsigned long i, page_size;
>
> hmm_vma_walk->last = addr;
> - i = (addr - range->sta
On Mon, Mar 25, 2019 at 10:40:09AM -0400, Jerome Glisse wrote:
> From: Jérôme Glisse
>
> HMM mirror is a device driver helpers to mirror range of virtual address.
> It means that the process jobs running on the device can access the same
> virtual address as the CPU threads of that process. This
On Thu, Mar 28, 2019 at 09:50:03PM -0400, Jerome Glisse wrote:
> On Thu, Mar 28, 2019 at 06:18:35PM -0700, John Hubbard wrote:
> > On 3/28/19 6:00 PM, Jerome Glisse wrote:
> > > On Thu, Mar 28, 2019 at 09:57:09AM -0700, Ira Weiny wrote:
> > >> On Thu, Mar 28, 2019 at
On Thu, Mar 28, 2019 at 04:34:04PM -0700, John Hubbard wrote:
> On 3/28/19 4:24 PM, Jerome Glisse wrote:
> > On Thu, Mar 28, 2019 at 04:20:37PM -0700, John Hubbard wrote:
> >> On 3/28/19 4:05 PM, Jerome Glisse wrote:
> >>> On Thu, Mar 28, 2019 at 03:43:33PM -0700, John Hubbard wrote:
> On 3/28
On Thu, Mar 28, 2019 at 08:56:54PM -0400, Jerome Glisse wrote:
> On Thu, Mar 28, 2019 at 09:12:21AM -0700, Ira Weiny wrote:
> > On Mon, Mar 25, 2019 at 10:40:06AM -0400, Jerome Glisse wrote:
> > > From: Jérôme Glisse
> > >
[snip]
> > > +/*
> > > +
From: Ira Weiny
In order to support taking and/or checking for a LONGTERM lease on a FS
DAX inode, these calls need to know if FOLL_LONGTERM was specified.
This patch passes the flags down but does not use them. It does this in
prep for 2 future patches.
---
mm/gup.c | 26
From: Ira Weiny
Now that there is a mechanism for users to safely take LONGTERM pins on
FS DAX pages, remove the FS DAX exclusion from GUP with FOLL_LONGTERM.
Special processing remains in effect for CONFIG_CMA
---
mm/gup.c | 65 ++--
1 file
From: Ira Weiny
Now that the taking of LONGTERM leases is in place, we can facilitate
sending a SIGBUS to a process if a file truncate or hole punch is
performed and it does not respond by releasing the lease.
The standard file lease_break_time is used to time out the LONGTERM
lease which is
From: Ira Weiny
Honestly I think I should remove this patch. It is removed later in the
series and ensuring the lease is there at GUP time does not guarantee
the lease is held. The user could remove the lease???
Regardless the code in GUP to take the lease holds it even if the user
does try
From: Ira Weiny
GUP longterm pins of non-pagecache file system pages (FS DAX) are
currently disallowed because they are unsafe.
The danger for pinning these pages comes from the fact that hole punch
and/or truncate of those files results in the pages being mapped and
pinned by a user space
From: Ira Weiny
---
fs/locks.c | 5 +
include/trace/events/filelock.h | 37 -
2 files changed, 41 insertions(+), 1 deletion(-)
diff --git a/fs/locks.c b/fs/locks.c
index ae508d192223..58c6d7a411b6 100644
--- a/fs/locks.c
+++ b/fs
From: Ira Weiny
If a user has failed to take an F_LONGTERM lease on a file and they
do a longterm pin on the pages associated with that file, take an
FL_LONGTERM lease for them.
If the user has not taken a lease on the file they are trying to pin,
create an FL_LONGTERM lease and attach it to the inode
From: Ira Weiny
Signed-off-by: Ira Weiny
---
fs/locks.c | 20 ++-
include/trace/events/filelock.h | 35 +
2 files changed, 50 insertions(+), 5 deletions(-)
diff --git a/fs/locks.c b/fs/locks.c
index eaa1cfaf73b0
From: Ira Weiny
---
fs/locks.c | 1 +
include/trace/events/filelock.h | 4 +++-
2 files changed, 4 insertions(+), 1 deletion(-)
diff --git a/fs/locks.c b/fs/locks.c
index c77eee081d11..42b96bfc71fa 100644
--- a/fs/locks.c
+++ b/fs/locks.c
@@ -1592,6 +1592,7 @@ static void
From: Ira Weiny
In order to support longterm lease breaking operations, lease break
code in the file systems needs to know if a mapping is DAX.
Split out the logic to determine if a mapping is DAX and export it.
---
fs/dax.c| 23 ---
include/linux/dax.h | 6