[Nouveau] [Bug 110931] Timeout initializing Falcon after cold boot

2019-06-17 Thread bugzilla-daemon
https://bugs.freedesktop.org/show_bug.cgi?id=110931

--- Comment #5 from Andrey Taranov  ---
Created attachment 144584
  --> https://bugs.freedesktop.org/attachment.cgi?id=144584&action=edit
VBIOS

Produced with
# cp /sys/kernel/debug/dri/1/vbios.rom .

In my case, it is dri/1:
# cat /sys/kernel/debug/dri/1/name
nouveau dev=:01:00.0 master=pci::01:00.0 unique=:01:00.0

-- 
You are receiving this mail because:
You are the assignee for the bug.
___
Nouveau mailing list
Nouveau@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/nouveau

[Nouveau] [Bug 110931] Timeout initializing Falcon after cold boot

2019-06-17 Thread bugzilla-daemon
https://bugs.freedesktop.org/show_bug.cgi?id=110931

--- Comment #4 from Andrey Taranov  ---
Created attachment 144583
  --> https://bugs.freedesktop.org/attachment.cgi?id=144583&action=edit
Successful Xorg log


[Nouveau] [Bug 110931] Timeout initializing Falcon after cold boot

2019-06-17 Thread bugzilla-daemon
https://bugs.freedesktop.org/show_bug.cgi?id=110931

--- Comment #3 from Andrey Taranov  ---
Created attachment 144582
  --> https://bugs.freedesktop.org/attachment.cgi?id=144582&action=edit
Successful kernel log


[Nouveau] [Bug 110931] Timeout initializing Falcon after cold boot

2019-06-17 Thread bugzilla-daemon
https://bugs.freedesktop.org/show_bug.cgi?id=110931

--- Comment #2 from Andrey Taranov  ---
Created attachment 144581
  --> https://bugs.freedesktop.org/attachment.cgi?id=144581&action=edit
Failing Xorg log


[Nouveau] [Bug 110931] Timeout initializing Falcon after cold boot

2019-06-17 Thread bugzilla-daemon
https://bugs.freedesktop.org/show_bug.cgi?id=110931

Andrey Taranov  changed:

           What                |Removed |Added
 Attachment #144579 is obsolete|0       |1

--- Comment #1 from Andrey Taranov  ---
Created attachment 144580
  --> https://bugs.freedesktop.org/attachment.cgi?id=144580&action=edit
Failing kernel log


[Nouveau] [Bug 110931] New: Timeout initializing Falcon after cold boot

2019-06-17 Thread bugzilla-daemon
https://bugs.freedesktop.org/show_bug.cgi?id=110931

Bug ID: 110931
   Summary: Timeout initializing Falcon after cold boot
   Product: xorg
   Version: unspecified
  Hardware: x86-64 (AMD64)
OS: Linux (All)
Status: NEW
  Severity: normal
  Priority: medium
 Component: Driver/nouveau
  Assignee: nouveau@lists.freedesktop.org
  Reporter: taranov.and...@gmail.com
QA Contact: xorg-t...@lists.x.org

Created attachment 144579
  --> https://bugs.freedesktop.org/attachment.cgi?id=144579&action=edit
Failing kernel log

I've got a fairly recent Dell/Alienware GTX 1080. It fails while
initializing/resetting the Falcon during boot, and is subsequently unusable by
Xorg. See the *-bad.log attachments.

Workaround: boot into Windows, and reboot into Linux. Everything works fine
after that. See *-good.log attachments.

The initialization timeout happens in acr_ls_sec2_post_run(), in
ls_ucode_msgqueue.c. The corresponding Xorg driver error message is "Failed to
initialise context object: 2D_NVC0 (0)".


Re: [Nouveau] [PATCH 08/25] memremap: move dev_pagemap callbacks into a separate structure

2019-06-17 Thread Dan Williams
On Mon, Jun 17, 2019 at 12:59 PM Christoph Hellwig  wrote:
>
> On Mon, Jun 17, 2019 at 10:51:35AM -0700, Dan Williams wrote:
> > > -   struct dev_pagemap *pgmap = _pgmap;
> >
> > Whoops, needed to keep this line to avoid:
> >
> > tools/testing/nvdimm/test/iomap.c:109:11: error: ‘pgmap’ undeclared
> > (first use in this function); did you mean ‘_pgmap’?
>
> So I really shouldn't be tripping over this anymore, but can we somehow
> fix this mess?
>
>  - at least add it to the normal build system and kconfig deps instead
>of stashing it away so that things like buildbot can build it?
>  - at least allow building it (under COMPILE_TEST) if needed even when
>pmem.ko and friends are built in the kernel?

Done: https://patchwork.kernel.org/patch/11000477/

Re: [Nouveau] [PATCH 07/25] memremap: validate the pagemap type passed to devm_memremap_pages

2019-06-17 Thread Dan Williams
On Mon, Jun 17, 2019 at 12:59 PM Christoph Hellwig  wrote:
>
> On Mon, Jun 17, 2019 at 12:02:09PM -0700, Dan Williams wrote:
> > Need a lead in patch that introduces MEMORY_DEVICE_DEVDAX, otherwise:
>
> Or maybe a MEMORY_DEVICE_DEFAULT = 0 shared by fsdax and p2pdma?

I thought about that, but it seems is_pci_p2pdma_page() needs the
distinction between the 2 types.

Re: [Nouveau] [PATCH 08/25] memremap: move dev_pagemap callbacks into a separate structure

2019-06-17 Thread Christoph Hellwig
On Mon, Jun 17, 2019 at 02:08:14PM -0600, Logan Gunthorpe wrote:
> I just noticed this is missing a line to set pgmap->ops to
> pci_p2pdma_pagemap_ops. I must have gotten confused by the other users
> in my original review. Though I'm not sure how this compiles as the new
> struct is static and unused. However, it is rendered moot in Patch 16
> when this is all removed.

It probably was there in the original and got lost in the merge conflicts
from the rebase.  I should have dropped all the reviewed-bys for patches
with non-trivial merge resolution, sorry.

Re: [PATCH 16/25] PCI/P2PDMA: use the dev_pagemap internal refcount

2019-06-17 Thread Logan Gunthorpe



On 2019-06-17 6:27 a.m., Christoph Hellwig wrote:
> The functionality is identical to the one currently open coded in
> p2pdma.c.
> 
> Signed-off-by: Christoph Hellwig 

Reviewed-by: Logan Gunthorpe 

I also did a quick test with the full patch-set to ensure that the setup
and tear down paths for p2pdma still work correctly and it all does.

Thanks,

Logan

> ---
>  drivers/pci/p2pdma.c | 56 
>  1 file changed, 4 insertions(+), 52 deletions(-)
> 
> diff --git a/drivers/pci/p2pdma.c b/drivers/pci/p2pdma.c
> index 48a88158e46a..608f84df604a 100644
> --- a/drivers/pci/p2pdma.c
> +++ b/drivers/pci/p2pdma.c
> @@ -24,12 +24,6 @@ struct pci_p2pdma {
>   bool p2pmem_published;
>  };
>  
> -struct p2pdma_pagemap {
> - struct dev_pagemap pgmap;
> - struct percpu_ref ref;
> - struct completion ref_done;
> -};
> -
>  static ssize_t size_show(struct device *dev, struct device_attribute *attr,
>char *buf)
>  {
> @@ -78,32 +72,6 @@ static const struct attribute_group p2pmem_group = {
>   .name = "p2pmem",
>  };
>  
> -static struct p2pdma_pagemap *to_p2p_pgmap(struct percpu_ref *ref)
> -{
> - return container_of(ref, struct p2pdma_pagemap, ref);
> -}
> -
> -static void pci_p2pdma_percpu_release(struct percpu_ref *ref)
> -{
> - struct p2pdma_pagemap *p2p_pgmap = to_p2p_pgmap(ref);
> -
> - complete(&p2p_pgmap->ref_done);
> -}
> -
> -static void pci_p2pdma_percpu_kill(struct dev_pagemap *pgmap)
> -{
> - percpu_ref_kill(pgmap->ref);
> -}
> -
> -static void pci_p2pdma_percpu_cleanup(struct dev_pagemap *pgmap)
> -{
> - struct p2pdma_pagemap *p2p_pgmap =
> - container_of(pgmap, struct p2pdma_pagemap, pgmap);
> -
> - wait_for_completion(&p2p_pgmap->ref_done);
> - percpu_ref_exit(&p2p_pgmap->ref);
> -}
> -
>  static void pci_p2pdma_release(void *data)
>  {
>   struct pci_dev *pdev = data;
> @@ -153,11 +121,6 @@ static int pci_p2pdma_setup(struct pci_dev *pdev)
>   return error;
>  }
>  
> -static const struct dev_pagemap_ops pci_p2pdma_pagemap_ops = {
> - .kill   = pci_p2pdma_percpu_kill,
> - .cleanup= pci_p2pdma_percpu_cleanup,
> -};
> -
>  /**
>   * pci_p2pdma_add_resource - add memory for use as p2p memory
>   * @pdev: the device to add the memory to
> @@ -171,7 +134,6 @@ static const struct dev_pagemap_ops 
> pci_p2pdma_pagemap_ops = {
>  int pci_p2pdma_add_resource(struct pci_dev *pdev, int bar, size_t size,
>   u64 offset)
>  {
> - struct p2pdma_pagemap *p2p_pgmap;
>   struct dev_pagemap *pgmap;
>   void *addr;
>   int error;
> @@ -194,22 +156,12 @@ int pci_p2pdma_add_resource(struct pci_dev *pdev, int 
> bar, size_t size,
>   return error;
>   }
>  
> - p2p_pgmap = devm_kzalloc(&pdev->dev, sizeof(*p2p_pgmap), GFP_KERNEL);
> - if (!p2p_pgmap)
> + pgmap = devm_kzalloc(&pdev->dev, sizeof(*pgmap), GFP_KERNEL);
> + if (!pgmap)
>   return -ENOMEM;
> -
> - init_completion(&p2p_pgmap->ref_done);
> - error = percpu_ref_init(&p2p_pgmap->ref,
> - pci_p2pdma_percpu_release, 0, GFP_KERNEL);
> - if (error)
> - goto pgmap_free;
> -
> - pgmap = &p2p_pgmap->pgmap;
> -
>   pgmap->res.start = pci_resource_start(pdev, bar) + offset;
>   pgmap->res.end = pgmap->res.start + size - 1;
>   pgmap->res.flags = pci_resource_flags(pdev, bar);
> - pgmap->ref = &p2p_pgmap->ref;
>   pgmap->type = MEMORY_DEVICE_PCI_P2PDMA;
>   pgmap->pci_p2pdma_bus_offset = pci_bus_address(pdev, bar) -
>   pci_resource_start(pdev, bar);
> @@ -223,7 +175,7 @@ int pci_p2pdma_add_resource(struct pci_dev *pdev, int 
> bar, size_t size,
>   error = gen_pool_add_owner(pdev->p2pdma->pool, (unsigned long)addr,
>   pci_bus_address(pdev, bar) + offset,
>   resource_size(&pgmap->res), dev_to_node(&pdev->dev),
> - &p2p_pgmap->ref);
> + pgmap->ref);
>   if (error)
>   goto pages_free;
>  
> @@ -235,7 +187,7 @@ int pci_p2pdma_add_resource(struct pci_dev *pdev, int 
> bar, size_t size,
>  pages_free:
>   devm_memunmap_pages(&pdev->dev, pgmap);
>  pgmap_free:
> - devm_kfree(&pdev->dev, p2p_pgmap);
> + devm_kfree(&pdev->dev, pgmap);
>   return error;
>  }
>  EXPORT_SYMBOL_GPL(pci_p2pdma_add_resource);
> 


Re: [Nouveau] [PATCH 08/25] memremap: move dev_pagemap callbacks into a separate structure

2019-06-17 Thread Christoph Hellwig
On Mon, Jun 17, 2019 at 10:51:35AM -0700, Dan Williams wrote:
> > -   struct dev_pagemap *pgmap = _pgmap;
> 
> Whoops, needed to keep this line to avoid:
> 
> tools/testing/nvdimm/test/iomap.c:109:11: error: ‘pgmap’ undeclared
> (first use in this function); did you mean ‘_pgmap’?

So I really shouldn't be tripping over this anymore, but can we somehow
fix this mess?

 - at least add it to the normal build system and kconfig deps instead
   of stashing it away so that things like buildbot can build it?
 - at least allow building it (under COMPILE_TEST) if needed even when
   pmem.ko and friends are built in the kernel?

Re: [Nouveau] [PATCH] drm/nouveau/svm: Convert to use hmm_range_fault()

2019-06-17 Thread Jerome Glisse
On Sat, Jun 08, 2019 at 12:14:50AM +0530, Souptick Joarder wrote:
> Hi Jason,
> 
> On Tue, May 21, 2019 at 12:27 AM Souptick Joarder  
> wrote:
> >
> > Convert to use hmm_range_fault().
> >
> > Signed-off-by: Souptick Joarder 
> 
> Would you like to take it through your new hmm tree or do I
> need to resend it ?

This patch is wrong, as the API is different between the two. See hmm.h for
the differences between hmm_vma_fault() and hmm_range_fault(); a simple
rename breaks things.

> 
> > ---
> >  drivers/gpu/drm/nouveau/nouveau_svm.c | 2 +-
> >  1 file changed, 1 insertion(+), 1 deletion(-)
> >
> > diff --git a/drivers/gpu/drm/nouveau/nouveau_svm.c 
> > b/drivers/gpu/drm/nouveau/nouveau_svm.c
> > index 93ed43c..8d56bd6 100644
> > --- a/drivers/gpu/drm/nouveau/nouveau_svm.c
> > +++ b/drivers/gpu/drm/nouveau/nouveau_svm.c
> > @@ -649,7 +649,7 @@ struct nouveau_svmm {
> > range.values = nouveau_svm_pfn_values;
> > range.pfn_shift = NVIF_VMM_PFNMAP_V0_ADDR_SHIFT;
> >  again:
> > -   ret = hmm_vma_fault(&range, true);
> > +   ret = hmm_range_fault(&range, true);
> > if (ret == 0) {
> > mutex_lock(&svmm->mutex);
> > if (!hmm_vma_range_done(&range)) {
> > --
> > 1.9.1
> >

Re: [Nouveau] [PATCH 07/25] memremap: validate the pagemap type passed to devm_memremap_pages

2019-06-17 Thread Christoph Hellwig
On Mon, Jun 17, 2019 at 12:02:09PM -0700, Dan Williams wrote:
> Need a lead in patch that introduces MEMORY_DEVICE_DEVDAX, otherwise:

Or maybe a MEMORY_DEVICE_DEFAULT = 0 shared by fsdax and p2pdma?

Re: [Nouveau] [PATCH 10/25] memremap: lift the devmap_enable manipulation into devm_memremap_pages

2019-06-17 Thread Dan Williams
On Mon, Jun 17, 2019 at 5:28 AM Christoph Hellwig  wrote:
>
> Just check if there is a ->page_free operation set and take care of the
> static key enable, as well as the put using device managed resources.
> Also check that a ->page_free is provided for the pgmaps types that
> require it, and check for a valid type as well while we are at it.
>
> Note that this also fixes the fact that hmm never called
> dev_pagemap_put_ops and thus would leave the slow path enabled forever,
> even after a device driver unload or disable.
>
> Signed-off-by: Christoph Hellwig 
> ---
>  drivers/nvdimm/pmem.c | 23 +++--
>  include/linux/mm.h| 10 
>  kernel/memremap.c | 57 ++-
>  mm/hmm.c  |  2 --
>  4 files changed, 39 insertions(+), 53 deletions(-)
>
[..]
> diff --git a/kernel/memremap.c b/kernel/memremap.c
> index ba7156bd52d1..7272027fbdd7 100644
> --- a/kernel/memremap.c
> +++ b/kernel/memremap.c
[..]
> @@ -190,6 +219,12 @@ void *devm_memremap_pages(struct device *dev, struct 
> dev_pagemap *pgmap)
> return ERR_PTR(-EINVAL);
> }
>
> +   if (pgmap->type != MEMORY_DEVICE_PCI_P2PDMA) {

Once we have MEMORY_DEVICE_DEVDAX then this check needs to be fixed up
to skip that case as well, otherwise:

 Missing page_free method
 WARNING: CPU: 19 PID: 1518 at kernel/memremap.c:33
devm_memremap_pages+0x745/0x7d0
 RIP: 0010:devm_memremap_pages+0x745/0x7d0
 Call Trace:
  dev_dax_probe+0xc6/0x1e0 [device_dax]
  really_probe+0xef/0x390
  ? driver_allows_async_probing+0x50/0x50
  driver_probe_device+0xb4/0x100

Re: [Nouveau] [PATCH 07/25] memremap: validate the pagemap type passed to devm_memremap_pages

2019-06-17 Thread Dan Williams
On Mon, Jun 17, 2019 at 5:27 AM Christoph Hellwig  wrote:
>
> Most pgmap types are only supported when certain config options are
> enabled.  Check for a type that is valid for the current configuration
> before setting up the pagemap.
>
> Signed-off-by: Christoph Hellwig 
> ---
>  kernel/memremap.c | 27 +++
>  1 file changed, 27 insertions(+)
>
> diff --git a/kernel/memremap.c b/kernel/memremap.c
> index 6e1970719dc2..6a2dd31a6250 100644
> --- a/kernel/memremap.c
> +++ b/kernel/memremap.c
> @@ -157,6 +157,33 @@ void *devm_memremap_pages(struct device *dev, struct 
> dev_pagemap *pgmap)
> pgprot_t pgprot = PAGE_KERNEL;
> int error, nid, is_ram;
>
> +   switch (pgmap->type) {
> +   case MEMORY_DEVICE_PRIVATE:
> +   if (!IS_ENABLED(CONFIG_DEVICE_PRIVATE)) {
> +   WARN(1, "Device private memory not supported\n");
> +   return ERR_PTR(-EINVAL);
> +   }
> +   break;
> +   case MEMORY_DEVICE_PUBLIC:
> +   if (!IS_ENABLED(CONFIG_DEVICE_PUBLIC)) {
> +   WARN(1, "Device public memory not supported\n");
> +   return ERR_PTR(-EINVAL);
> +   }
> +   break;
> +   case MEMORY_DEVICE_FS_DAX:
> +   if (!IS_ENABLED(CONFIG_ZONE_DEVICE) ||
> +   IS_ENABLED(CONFIG_FS_DAX_LIMITED)) {
> +   WARN(1, "File system DAX not supported\n");
> +   return ERR_PTR(-EINVAL);
> +   }
> +   break;
> +   case MEMORY_DEVICE_PCI_P2PDMA:

Need a lead in patch that introduces MEMORY_DEVICE_DEVDAX, otherwise:

 Invalid pgmap type 0
 WARNING: CPU: 6 PID: 1316 at kernel/memremap.c:183
devm_memremap_pages+0x1d8/0x700
 [..]
 RIP: 0010:devm_memremap_pages+0x1d8/0x700
 [..]
 Call Trace:
  dev_dax_probe+0xc7/0x1e0 [device_dax]
  really_probe+0xef/0x390
  driver_probe_device+0xb4/0x100
  device_driver_attach+0x4f/0x60

Re: [PATCH 06/59] drm/prime: Actually remove DRIVER_PRIME everywhere

2019-06-17 Thread Emil Velikov
On 2019/06/14, Daniel Vetter wrote:
> Split out to make the functional changes stick out more.
> 
Since this patch flew by as a standalone one (intentionally or not), I'd
add something vaguely like:

"Core users of DRIVER_PRIME were removed from core with prior patches."

HTH
Emil


Re: [PATCH 08/25] memremap: move dev_pagemap callbacks into a separate structure

2019-06-17 Thread Dan Williams
On Mon, Jun 17, 2019 at 5:27 AM Christoph Hellwig  wrote:
>
> The dev_pagemap structure is growing too many callbacks.  Move them into a
> separate ops structure so that they are not duplicated for multiple
> instances, and an attacker can't easily overwrite them.
>
> Signed-off-by: Christoph Hellwig 
> Reviewed-by: Logan Gunthorpe 
> Reviewed-by: Jason Gunthorpe 
> ---
>  drivers/dax/device.c  | 11 ++
>  drivers/dax/pmem/core.c   |  2 +-
>  drivers/nvdimm/pmem.c | 19 +---
>  drivers/pci/p2pdma.c  |  9 +---
>  include/linux/memremap.h  | 36 +--
>  kernel/memremap.c | 18 
>  mm/hmm.c  | 10 ++---
>  tools/testing/nvdimm/test/iomap.c |  9 
>  8 files changed, 65 insertions(+), 49 deletions(-)
>
[..]
> diff --git a/tools/testing/nvdimm/test/iomap.c 
> b/tools/testing/nvdimm/test/iomap.c
> index 219dd0a1cb08..a667d974155e 100644
> --- a/tools/testing/nvdimm/test/iomap.c
> +++ b/tools/testing/nvdimm/test/iomap.c
> @@ -106,11 +106,10 @@ EXPORT_SYMBOL(__wrap_devm_memremap);
>
>  static void nfit_test_kill(void *_pgmap)
>  {
> -   struct dev_pagemap *pgmap = _pgmap;

Whoops, needed to keep this line to avoid:

tools/testing/nvdimm/test/iomap.c:109:11: error: ‘pgmap’ undeclared
(first use in this function); did you mean ‘_pgmap’?


Re: [Nouveau] [PATCH 06/25] mm: factor out a devm_request_free_mem_region helper

2019-06-17 Thread Christoph Hellwig
On Mon, Jun 17, 2019 at 07:40:18PM +0200, Christoph Hellwig wrote:
> On Mon, Jun 17, 2019 at 10:37:12AM -0700, Dan Williams wrote:
> > > +struct resource *devm_request_free_mem_region(struct device *dev,
> > > +   struct resource *base, unsigned long size);
> > 
> > This appears to need a 'static inline' helper stub in the
> > CONFIG_DEVICE_PRIVATE=n case, otherwise this compile error triggers:
> > 
> > ld: mm/hmm.o: in function `hmm_devmem_add':
> > /home/dwillia2/git/linux/mm/hmm.c:1427: undefined reference to
> > `devm_request_free_mem_region'
> 
> *sigh* - hmm_devmem_add already only works for device private memory,
> so it shouldn't be built if that option is not enabled, but in the
> current code it is.  And a few patches later in the series we just
> kill it off entirely, and the only real caller of this function
> already depends on CONFIG_DEVICE_PRIVATE.  So I'm tempted to just
> ignore the strict bisectability requirement here instead of making
> things messy by either adding the proper ifdefs in hmm.c or providing
> a stub we don't really need.

Actually, I could just move the patch to mark CONFIG_DEVICE_PUBLIC
broken earlier, which would force hmm_devmem_add to only be built
when CONFIG_DEVICE_PRIVATE is set.

Re: [PATCH 06/25] mm: factor out a devm_request_free_mem_region helper

2019-06-17 Thread Christoph Hellwig
On Mon, Jun 17, 2019 at 10:37:12AM -0700, Dan Williams wrote:
> > +struct resource *devm_request_free_mem_region(struct device *dev,
> > +   struct resource *base, unsigned long size);
> 
> This appears to need a 'static inline' helper stub in the
> CONFIG_DEVICE_PRIVATE=n case, otherwise this compile error triggers:
> 
> ld: mm/hmm.o: in function `hmm_devmem_add':
> /home/dwillia2/git/linux/mm/hmm.c:1427: undefined reference to
> `devm_request_free_mem_region'

*sigh* - hmm_devmem_add already only works for device private memory,
so it shouldn't be built if that option is not enabled, but in the
current code it is.  And a few patches later in the series we just
kill it off entirely, and the only real caller of this function
already depends on CONFIG_DEVICE_PRIVATE.  So I'm tempted to just
ignore the strict bisectability requirement here instead of making
things messy by either adding the proper ifdefs in hmm.c or providing
a stub we don't really need.


Re: [PATCH 06/25] mm: factor out a devm_request_free_mem_region helper

2019-06-17 Thread Dan Williams
On Mon, Jun 17, 2019 at 5:27 AM Christoph Hellwig  wrote:
>
> Keep the physical address allocation that hmm_add_device does with the
> rest of the resource code, and allow future reuse of it without the hmm
> wrapper.
>
> Signed-off-by: Christoph Hellwig 
> Reviewed-by: Jason Gunthorpe 
> Reviewed-by: John Hubbard 
> ---
>  include/linux/ioport.h |  2 ++
>  kernel/resource.c  | 39 +++
>  mm/hmm.c   | 33 -
>  3 files changed, 45 insertions(+), 29 deletions(-)
>
> diff --git a/include/linux/ioport.h b/include/linux/ioport.h
> index da0ebaec25f0..76a33ae3bf6c 100644
> --- a/include/linux/ioport.h
> +++ b/include/linux/ioport.h
> @@ -286,6 +286,8 @@ static inline bool resource_overlaps(struct resource *r1, 
> struct resource *r2)
> return (r1->start <= r2->end && r1->end >= r2->start);
>  }
>
> +struct resource *devm_request_free_mem_region(struct device *dev,
> +   struct resource *base, unsigned long size);

This appears to need a 'static inline' helper stub in the
CONFIG_DEVICE_PRIVATE=n case, otherwise this compile error triggers:

ld: mm/hmm.o: in function `hmm_devmem_add':
/home/dwillia2/git/linux/mm/hmm.c:1427: undefined reference to
`devm_request_free_mem_region'


[PATCH] drm/prime: Actually remove DRIVER_PRIME everywhere

2019-06-17 Thread Daniel Vetter
Split out to make the functional changes stick out more.

v2: amdgpu gained DRIVER_SYNCOBJ_TIMELINE.

v3: amdgpu lost DRIVER_SYNCOBJ_TIMELINE.

v4: Don't add a space in i915_drv.c (Sam)

Cc: Sam Ravnborg 
Reviewed-by: Eric Anholt 
Signed-off-by: Daniel Vetter 
Cc: amd-...@lists.freedesktop.org
Cc: etna...@lists.freedesktop.org
Cc: freedr...@lists.freedesktop.org
Cc: intel-...@lists.freedesktop.org
Cc: l...@lists.freedesktop.org
Cc: linux-amlo...@lists.infradead.org
Cc: linux-arm-ker...@lists.infradead.org
Cc: linux-arm-...@vger.kernel.org
Cc: linux-asp...@lists.ozlabs.org
Cc: linux-renesas-...@vger.kernel.org
Cc: linux-rockc...@lists.infradead.org
Cc: linux-samsung-...@vger.kernel.org
Cc: linux-st...@st-md-mailman.stormreply.com
Cc: linux-te...@vger.kernel.org
Cc: nouveau@lists.freedesktop.org
Cc: NXP Linux Team 
Cc: spice-de...@lists.freedesktop.org
Cc: virtualizat...@lists.linux-foundation.org
Cc: VMware Graphics 
Cc: xen-de...@lists.xenproject.org
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c | 2 +-
 drivers/gpu/drm/arc/arcpgu_drv.c| 3 +--
 drivers/gpu/drm/arm/display/komeda/komeda_kms.c | 2 +-
 drivers/gpu/drm/arm/hdlcd_drv.c | 4 +---
 drivers/gpu/drm/arm/malidp_drv.c| 3 +--
 drivers/gpu/drm/armada/armada_drv.c | 3 +--
 drivers/gpu/drm/aspeed/aspeed_gfx_drv.c | 3 +--
 drivers/gpu/drm/atmel-hlcdc/atmel_hlcdc_dc.c| 4 +---
 drivers/gpu/drm/bochs/bochs_drv.c   | 3 +--
 drivers/gpu/drm/cirrus/cirrus.c | 2 +-
 drivers/gpu/drm/etnaviv/etnaviv_drv.c   | 4 +---
 drivers/gpu/drm/exynos/exynos_drm_drv.c | 2 +-
 drivers/gpu/drm/fsl-dcu/fsl_dcu_drm_drv.c   | 3 +--
 drivers/gpu/drm/hisilicon/kirin/kirin_drm_drv.c | 3 +--
 drivers/gpu/drm/i915/i915_drv.c | 2 +-
 drivers/gpu/drm/imx/imx-drm-core.c  | 3 +--
 drivers/gpu/drm/lima/lima_drv.c | 2 +-
 drivers/gpu/drm/mcde/mcde_drv.c | 2 +-
 drivers/gpu/drm/mediatek/mtk_drm_drv.c  | 3 +--
 drivers/gpu/drm/meson/meson_drv.c   | 4 +---
 drivers/gpu/drm/msm/msm_drv.c   | 1 -
 drivers/gpu/drm/mxsfb/mxsfb_drv.c   | 3 +--
 drivers/gpu/drm/nouveau/nouveau_drm.c   | 2 +-
 drivers/gpu/drm/omapdrm/omap_drv.c  | 2 +-
 drivers/gpu/drm/panfrost/panfrost_drv.c | 3 +--
 drivers/gpu/drm/pl111/pl111_drv.c   | 2 +-
 drivers/gpu/drm/qxl/qxl_drv.c   | 3 +--
 drivers/gpu/drm/radeon/radeon_drv.c | 2 +-
 drivers/gpu/drm/rcar-du/rcar_du_drv.c   | 3 +--
 drivers/gpu/drm/rockchip/rockchip_drm_drv.c | 3 +--
 drivers/gpu/drm/shmobile/shmob_drm_drv.c| 3 +--
 drivers/gpu/drm/sti/sti_drv.c   | 3 +--
 drivers/gpu/drm/stm/drv.c   | 3 +--
 drivers/gpu/drm/sun4i/sun4i_drv.c   | 2 +-
 drivers/gpu/drm/tegra/drm.c | 2 +-
 drivers/gpu/drm/tilcdc/tilcdc_drv.c | 3 +--
 drivers/gpu/drm/tinydrm/hx8357d.c   | 2 +-
 drivers/gpu/drm/tinydrm/ili9225.c   | 3 +--
 drivers/gpu/drm/tinydrm/ili9341.c   | 2 +-
 drivers/gpu/drm/tinydrm/mi0283qt.c  | 3 +--
 drivers/gpu/drm/tinydrm/repaper.c   | 3 +--
 drivers/gpu/drm/tinydrm/st7586.c| 3 +--
 drivers/gpu/drm/tinydrm/st7735r.c   | 3 +--
 drivers/gpu/drm/tve200/tve200_drv.c | 3 +--
 drivers/gpu/drm/udl/udl_drv.c   | 2 +-
 drivers/gpu/drm/v3d/v3d_drv.c   | 1 -
 drivers/gpu/drm/vboxvideo/vbox_drv.c| 2 +-
 drivers/gpu/drm/vc4/vc4_drv.c   | 1 -
 drivers/gpu/drm/vgem/vgem_drv.c | 3 +--
 drivers/gpu/drm/virtio/virtgpu_drv.c| 2 +-
 drivers/gpu/drm/vmwgfx/vmwgfx_drv.c | 2 +-
 drivers/gpu/drm/xen/xen_drm_front.c | 3 +--
 drivers/gpu/drm/zte/zx_drm_drv.c| 3 +--
 include/drm/drm_drv.h   | 6 --
 54 files changed, 50 insertions(+), 94 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
index 0a577a389024..8e1b269351e8 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
@@ -1309,7 +1309,7 @@ static struct drm_driver kms_driver = {
.driver_features =
DRIVER_USE_AGP | DRIVER_ATOMIC |
DRIVER_GEM |
-   DRIVER_PRIME | DRIVER_RENDER | DRIVER_MODESET | DRIVER_SYNCOBJ,
+   DRIVER_RENDER | DRIVER_MODESET | DRIVER_SYNCOBJ,
.load = amdgpu_driver_load_kms,
.open = amdgpu_driver_open_kms,
.postclose = amdgpu_driver_postclose_kms,
diff --git a/drivers/gpu/drm/arc/arcpgu_drv.c b/drivers/gpu/drm/arc/arcpgu_drv.c
index af60c6d7a5f4..74240cc1c300 100644
--- a/drivers/gpu/drm/arc/arcpgu_drv.c
+++ b/drivers/gpu/drm/arc/arcpgu_drv.c
@@ -135,8 +135,7 @@ static int arcpgu_debugfs_init(struct drm_m

[Nouveau] [PATCH 16/25] PCI/P2PDMA: use the dev_pagemap internal refcount

2019-06-17 Thread Christoph Hellwig
The functionality is identical to the one currently open coded in
p2pdma.c.

Signed-off-by: Christoph Hellwig 
---
 drivers/pci/p2pdma.c | 56 
 1 file changed, 4 insertions(+), 52 deletions(-)

diff --git a/drivers/pci/p2pdma.c b/drivers/pci/p2pdma.c
index 48a88158e46a..608f84df604a 100644
--- a/drivers/pci/p2pdma.c
+++ b/drivers/pci/p2pdma.c
@@ -24,12 +24,6 @@ struct pci_p2pdma {
bool p2pmem_published;
 };
 
-struct p2pdma_pagemap {
-   struct dev_pagemap pgmap;
-   struct percpu_ref ref;
-   struct completion ref_done;
-};
-
 static ssize_t size_show(struct device *dev, struct device_attribute *attr,
 char *buf)
 {
@@ -78,32 +72,6 @@ static const struct attribute_group p2pmem_group = {
.name = "p2pmem",
 };
 
-static struct p2pdma_pagemap *to_p2p_pgmap(struct percpu_ref *ref)
-{
-   return container_of(ref, struct p2pdma_pagemap, ref);
-}
-
-static void pci_p2pdma_percpu_release(struct percpu_ref *ref)
-{
-   struct p2pdma_pagemap *p2p_pgmap = to_p2p_pgmap(ref);
-
-   complete(&p2p_pgmap->ref_done);
-}
-
-static void pci_p2pdma_percpu_kill(struct dev_pagemap *pgmap)
-{
-   percpu_ref_kill(pgmap->ref);
-}
-
-static void pci_p2pdma_percpu_cleanup(struct dev_pagemap *pgmap)
-{
-   struct p2pdma_pagemap *p2p_pgmap =
-   container_of(pgmap, struct p2pdma_pagemap, pgmap);
-
-   wait_for_completion(&p2p_pgmap->ref_done);
-   percpu_ref_exit(&p2p_pgmap->ref);
-}
-
 static void pci_p2pdma_release(void *data)
 {
struct pci_dev *pdev = data;
@@ -153,11 +121,6 @@ static int pci_p2pdma_setup(struct pci_dev *pdev)
return error;
 }
 
-static const struct dev_pagemap_ops pci_p2pdma_pagemap_ops = {
-   .kill   = pci_p2pdma_percpu_kill,
-   .cleanup= pci_p2pdma_percpu_cleanup,
-};
-
 /**
  * pci_p2pdma_add_resource - add memory for use as p2p memory
  * @pdev: the device to add the memory to
@@ -171,7 +134,6 @@ static const struct dev_pagemap_ops pci_p2pdma_pagemap_ops 
= {
 int pci_p2pdma_add_resource(struct pci_dev *pdev, int bar, size_t size,
u64 offset)
 {
-   struct p2pdma_pagemap *p2p_pgmap;
struct dev_pagemap *pgmap;
void *addr;
int error;
@@ -194,22 +156,12 @@ int pci_p2pdma_add_resource(struct pci_dev *pdev, int 
bar, size_t size,
return error;
}
 
-   p2p_pgmap = devm_kzalloc(&pdev->dev, sizeof(*p2p_pgmap), GFP_KERNEL);
-   if (!p2p_pgmap)
+   pgmap = devm_kzalloc(&pdev->dev, sizeof(*pgmap), GFP_KERNEL);
+   if (!pgmap)
return -ENOMEM;
-
-   init_completion(&p2p_pgmap->ref_done);
-   error = percpu_ref_init(&p2p_pgmap->ref,
-   pci_p2pdma_percpu_release, 0, GFP_KERNEL);
-   if (error)
-   goto pgmap_free;
-
-   pgmap = &p2p_pgmap->pgmap;
-
pgmap->res.start = pci_resource_start(pdev, bar) + offset;
pgmap->res.end = pgmap->res.start + size - 1;
pgmap->res.flags = pci_resource_flags(pdev, bar);
-   pgmap->ref = &p2p_pgmap->ref;
pgmap->type = MEMORY_DEVICE_PCI_P2PDMA;
pgmap->pci_p2pdma_bus_offset = pci_bus_address(pdev, bar) -
pci_resource_start(pdev, bar);
@@ -223,7 +175,7 @@ int pci_p2pdma_add_resource(struct pci_dev *pdev, int bar, 
size_t size,
error = gen_pool_add_owner(pdev->p2pdma->pool, (unsigned long)addr,
pci_bus_address(pdev, bar) + offset,
resource_size(&pgmap->res), dev_to_node(&pdev->dev),
-   &p2p_pgmap->ref);
+   pgmap->ref);
if (error)
goto pages_free;
 
@@ -235,7 +187,7 @@ int pci_p2pdma_add_resource(struct pci_dev *pdev, int bar, 
size_t size,
 pages_free:
devm_memunmap_pages(&pdev->dev, pgmap);
 pgmap_free:
-   devm_kfree(&pdev->dev, p2p_pgmap);
+   devm_kfree(&pdev->dev, pgmap);
return error;
 }
 EXPORT_SYMBOL_GPL(pci_p2pdma_add_resource);
-- 
2.20.1


[Nouveau] [PATCH 17/25] nouveau: use alloc_page_vma directly

2019-06-17 Thread Christoph Hellwig
hmm_vma_alloc_locked_page is scheduled to go away; use the proper
mm function directly.

Signed-off-by: Christoph Hellwig 
Reviewed-by: Jason Gunthorpe 
---
 drivers/gpu/drm/nouveau/nouveau_dmem.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/nouveau/nouveau_dmem.c 
b/drivers/gpu/drm/nouveau/nouveau_dmem.c
index 40c47d6a7d78..a50f6fd2fe24 100644
--- a/drivers/gpu/drm/nouveau/nouveau_dmem.c
+++ b/drivers/gpu/drm/nouveau/nouveau_dmem.c
@@ -148,11 +148,12 @@ nouveau_dmem_fault_alloc_and_copy(struct vm_area_struct 
*vma,
if (!spage || !(src_pfns[i] & MIGRATE_PFN_MIGRATE))
continue;
 
-   dpage = hmm_vma_alloc_locked_page(vma, addr);
+   dpage = alloc_page_vma(GFP_HIGHUSER, vma, addr);
if (!dpage) {
dst_pfns[i] = MIGRATE_PFN_ERROR;
continue;
}
+   lock_page(dpage);
 
dst_pfns[i] = migrate_pfn(page_to_pfn(dpage)) |
  MIGRATE_PFN_LOCKED;
-- 
2.20.1


[Nouveau] [PATCH 18/25] nouveau: use devm_memremap_pages directly

2019-06-17 Thread Christoph Hellwig
Just use devm_memremap_pages instead of hmm_devmem_add to allow
killing that wrapper, which doesn't provide a whole lot of benefit.

Signed-off-by: Christoph Hellwig 
---
 drivers/gpu/drm/nouveau/nouveau_dmem.c | 82 --
 1 file changed, 38 insertions(+), 44 deletions(-)

diff --git a/drivers/gpu/drm/nouveau/nouveau_dmem.c 
b/drivers/gpu/drm/nouveau/nouveau_dmem.c
index a50f6fd2fe24..0fb7a44b8bc4 100644
--- a/drivers/gpu/drm/nouveau/nouveau_dmem.c
+++ b/drivers/gpu/drm/nouveau/nouveau_dmem.c
@@ -72,7 +72,8 @@ struct nouveau_dmem_migrate {
 };
 
 struct nouveau_dmem {
-   struct hmm_devmem *devmem;
+   struct nouveau_drm *drm;
+   struct dev_pagemap pagemap;
struct nouveau_dmem_migrate migrate;
struct list_head chunk_free;
struct list_head chunk_full;
@@ -80,6 +81,11 @@ struct nouveau_dmem {
struct mutex mutex;
 };
 
+static inline struct nouveau_dmem *page_to_dmem(struct page *page)
+{
+   return container_of(page->pgmap, struct nouveau_dmem, pagemap);
+}
+
 struct nouveau_dmem_fault {
struct nouveau_drm *drm;
struct nouveau_fence *fence;
@@ -96,8 +102,7 @@ struct nouveau_migrate {
unsigned long dma_nr;
 };
 
-static void
-nouveau_dmem_free(struct hmm_devmem *devmem, struct page *page)
+static void nouveau_dmem_page_free(struct page *page)
 {
struct nouveau_dmem_chunk *chunk;
unsigned long idx;
@@ -260,29 +265,21 @@ static const struct migrate_vma_ops 
nouveau_dmem_fault_migrate_ops = {
.finalize_and_map   = nouveau_dmem_fault_finalize_and_map,
 };
 
-static vm_fault_t
-nouveau_dmem_fault(struct hmm_devmem *devmem,
-  struct vm_area_struct *vma,
-  unsigned long addr,
-  const struct page *page,
-  unsigned int flags,
-  pmd_t *pmdp)
+static vm_fault_t nouveau_dmem_migrate_to_ram(struct vm_fault *vmf)
 {
-   struct drm_device *drm_dev = dev_get_drvdata(devmem->device);
+   struct nouveau_dmem *dmem = page_to_dmem(vmf->page);
unsigned long src[1] = {0}, dst[1] = {0};
-   struct nouveau_dmem_fault fault = {0};
+   struct nouveau_dmem_fault fault = { .drm = dmem->drm };
int ret;
 
-
-
/*
 * FIXME what we really want is to find some heuristic to migrate more
 * than just one page on CPU fault. When such fault happens it is very
 * likely that more surrounding page will CPU fault too.
 */
-   fault.drm = nouveau_drm(drm_dev);
-   ret = migrate_vma(&nouveau_dmem_fault_migrate_ops, vma, addr,
- addr + PAGE_SIZE, src, dst, &fault);
+   ret = migrate_vma(&nouveau_dmem_fault_migrate_ops, vmf->vma,
+   vmf->address, vmf->address + PAGE_SIZE,
+   src, dst, &fault);
if (ret)
return VM_FAULT_SIGBUS;
 
@@ -292,10 +289,9 @@ nouveau_dmem_fault(struct hmm_devmem *devmem,
return 0;
 }
 
-static const struct hmm_devmem_ops
-nouveau_dmem_devmem_ops = {
-   .free = nouveau_dmem_free,
-   .fault = nouveau_dmem_fault,
+static const struct dev_pagemap_ops nouveau_dmem_pagemap_ops = {
+   .page_free  = nouveau_dmem_page_free,
+   .migrate_to_ram = nouveau_dmem_migrate_to_ram,
 };
 
 static int
@@ -581,7 +577,8 @@ void
 nouveau_dmem_init(struct nouveau_drm *drm)
 {
struct device *device = drm->dev->dev;
-   unsigned long i, size;
+   struct resource *res;
+   unsigned long i, size, pfn_first;
int ret;
 
/* This only make sense on PASCAL or newer */
@@ -591,6 +588,7 @@ nouveau_dmem_init(struct nouveau_drm *drm)
if (!(drm->dmem = kzalloc(sizeof(*drm->dmem), GFP_KERNEL)))
return;
 
+   drm->dmem->drm = drm;
mutex_init(&drm->dmem->mutex);
INIT_LIST_HEAD(&drm->dmem->chunk_free);
INIT_LIST_HEAD(&drm->dmem->chunk_full);
@@ -600,11 +598,8 @@ nouveau_dmem_init(struct nouveau_drm *drm)
 
/* Initialize migration dma helpers before registering memory */
ret = nouveau_dmem_migrate_init(drm);
-   if (ret) {
-   kfree(drm->dmem);
-   drm->dmem = NULL;
-   return;
-   }
+   if (ret)
+   goto out_free;
 
/*
 * FIXME we need some kind of policy to decide how much VRAM we
@@ -612,14 +607,16 @@ nouveau_dmem_init(struct nouveau_drm *drm)
 * and latter if we want to do thing like over commit then we
 * could revisit this.
 */
-   drm->dmem->devmem = hmm_devmem_add(&nouveau_dmem_devmem_ops,
-  device, size);
-   if (IS_ERR(drm->dmem->devmem)) {
-   kfree(drm->dmem);
-   drm->dmem = NULL;
-   return;
-   }
-
+   res = devm_request_free_mem_region(device, &iomem_resource, size);
+   if (IS_ERR(res))
+   goto out_free;
+   drm->dmem

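The `page_to_dmem()` helper introduced above relies on the `container_of()` idiom: because `struct dev_pagemap` is embedded inside `struct nouveau_dmem`, a pointer to the pagemap can be converted back to the enclosing structure with fixed pointer arithmetic. A self-contained userspace sketch (structure contents simplified; a minimal `container_of` is reproduced so it compiles standalone):

```c
#include <assert.h>
#include <stddef.h>

/* Minimal version of the kernel's container_of() macro. */
#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

struct dev_pagemap { int dummy; };

struct nouveau_dmem {
	int id;
	struct dev_pagemap pagemap;	/* embedded, as in the patch */
};

/* Equivalent of page_to_dmem(): recover the enclosing nouveau_dmem
 * from a pointer to its embedded pagemap member. */
static struct nouveau_dmem *pagemap_to_dmem(struct dev_pagemap *pgmap)
{
	return container_of(pgmap, struct nouveau_dmem, pagemap);
}
```

This is what lets the `migrate_to_ram` fault path find the driver context from `page->pgmap` alone, without any HMM-private bookkeeping.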
[PATCH 24/25] mm: remove the HMM config option

2019-06-17 Thread Christoph Hellwig
All the mm/hmm.c code is better keyed off HMM_MIRROR.  Also let nouveau
depend on it instead of the mix of a dummy dependency symbol plus the
actually selected one.  Drop various odd dependencies, as the code is
pretty portable.

Signed-off-by: Christoph Hellwig 
---
 drivers/gpu/drm/nouveau/Kconfig |  3 +--
 include/linux/hmm.h | 10 +-
 include/linux/mm_types.h|  2 +-
 mm/Kconfig  | 30 +-
 mm/Makefile |  2 +-
 mm/hmm.c|  2 --
 6 files changed, 9 insertions(+), 40 deletions(-)

diff --git a/drivers/gpu/drm/nouveau/Kconfig b/drivers/gpu/drm/nouveau/Kconfig
index 6303d203ab1d..66c839d8e9d1 100644
--- a/drivers/gpu/drm/nouveau/Kconfig
+++ b/drivers/gpu/drm/nouveau/Kconfig
@@ -84,11 +84,10 @@ config DRM_NOUVEAU_BACKLIGHT
 
 config DRM_NOUVEAU_SVM
bool "(EXPERIMENTAL) Enable SVM (Shared Virtual Memory) support"
-   depends on ARCH_HAS_HMM
depends on DEVICE_PRIVATE
depends on DRM_NOUVEAU
+   depends on HMM_MIRROR
depends on STAGING
-   select HMM_MIRROR
default n
help
  Say Y here if you want to enable experimental support for
diff --git a/include/linux/hmm.h b/include/linux/hmm.h
index 454be41f2eaf..ffc52820d976 100644
--- a/include/linux/hmm.h
+++ b/include/linux/hmm.h
@@ -62,7 +62,7 @@
 #include 
 #include 
 
-#if IS_ENABLED(CONFIG_HMM)
+#ifdef CONFIG_HMM_MIRROR
 
 #include 
 #include 
@@ -334,9 +334,6 @@ static inline uint64_t hmm_pfn_from_pfn(const struct 
hmm_range *range,
return hmm_device_entry_from_pfn(range, pfn);
 }
 
-
-
-#if IS_ENABLED(CONFIG_HMM_MIRROR)
 /*
  * Mirroring: how to synchronize device page table with CPU page table.
  *
@@ -586,9 +583,4 @@ static inline void hmm_mm_destroy(struct mm_struct *mm) {}
 static inline void hmm_mm_init(struct mm_struct *mm) {}
 #endif /* IS_ENABLED(CONFIG_HMM_MIRROR) */
 
-#else /* IS_ENABLED(CONFIG_HMM) */
-static inline void hmm_mm_destroy(struct mm_struct *mm) {}
-static inline void hmm_mm_init(struct mm_struct *mm) {}
-#endif /* IS_ENABLED(CONFIG_HMM) */
-
 #endif /* LINUX_HMM_H */
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index f33a1289c101..8d37182f8dbe 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -501,7 +501,7 @@ struct mm_struct {
 #endif
struct work_struct async_put_work;
 
-#if IS_ENABLED(CONFIG_HMM)
+#ifdef CONFIG_HMM_MIRROR
/* HMM needs to track a few things per mm */
struct hmm *hmm;
 #endif
diff --git a/mm/Kconfig b/mm/Kconfig
index 4dbd718c8cf4..7fa785551f96 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -669,37 +669,18 @@ config ZONE_DEVICE
 
  If FS_DAX is enabled, then say Y.
 
-config ARCH_HAS_HMM_MIRROR
-   bool
-   default y
-   depends on (X86_64 || PPC64)
-   depends on MMU && 64BIT
-
-config ARCH_HAS_HMM
-   bool
-   depends on (X86_64 || PPC64)
-   depends on ZONE_DEVICE
-   depends on MMU && 64BIT
-   depends on MEMORY_HOTPLUG
-   depends on MEMORY_HOTREMOVE
-   depends on SPARSEMEM_VMEMMAP
-   default y
-
 config MIGRATE_VMA_HELPER
bool
 
 config DEV_PAGEMAP_OPS
bool
 
-config HMM
-   bool
-   select MMU_NOTIFIER
-   select MIGRATE_VMA_HELPER
-
 config HMM_MIRROR
bool "HMM mirror CPU page table into a device page table"
-   depends on ARCH_HAS_HMM
-   select HMM
+   depends on (X86_64 || PPC64)
+   depends on MMU && 64BIT
+   select MMU_NOTIFIER
+   select MIGRATE_VMA_HELPER
help
  Select HMM_MIRROR if you want to mirror range of the CPU page table 
of a
  process into a device page table. Here, mirror means "keep 
synchronized".
@@ -719,9 +700,8 @@ config DEVICE_PRIVATE
 
 config DEVICE_PUBLIC
bool "Addressable device memory (like GPU memory)"
-   depends on ARCH_HAS_HMM
depends on BROKEN
-   select HMM
+   depends on ZONE_DEVICE
select DEV_PAGEMAP_OPS
 
help
diff --git a/mm/Makefile b/mm/Makefile
index ac5e5ba78874..91c99040065c 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -102,5 +102,5 @@ obj-$(CONFIG_FRAME_VECTOR) += frame_vector.o
 obj-$(CONFIG_DEBUG_PAGE_REF) += debug_page_ref.o
 obj-$(CONFIG_HARDENED_USERCOPY) += usercopy.o
 obj-$(CONFIG_PERCPU_STATS) += percpu-stats.o
-obj-$(CONFIG_HMM) += hmm.o
+obj-$(CONFIG_HMM_MIRROR) += hmm.o
 obj-$(CONFIG_MEMFD_CREATE) += memfd.o
diff --git a/mm/hmm.c b/mm/hmm.c
index 17ed080d9c32..cefeec5c58aa 100644
--- a/mm/hmm.c
+++ b/mm/hmm.c
@@ -25,7 +25,6 @@
 #include 
 #include 
 
-#if IS_ENABLED(CONFIG_HMM_MIRROR)
 static const struct mmu_notifier_ops hmm_mmu_notifier_ops;
 
 static inline struct hmm *mm_get_hmm(struct mm_struct *mm)
@@ -1323,4 +1322,3 @@ long hmm_range_dma_unmap(struct hmm_range *range,
return cpages;
 }
 EXPORT_SYMBOL(hmm_range_dma_unmap);
-#endif /* IS_ENABLED(CONFIG_HMM_MIRROR) */
-- 
2.20.1

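The switch from `select HMM_MIRROR` to `depends on HMM_MIRROR` above matters because the two Kconfig keywords behave differently: `select` forces a symbol on regardless of that symbol's own dependencies, while `depends on` only makes the option available once the symbol is already enabled. A hypothetical fragment (the `EXAMPLE_DRIVER_SVM` symbol is illustrative, not from the patch):

```
config EXAMPLE_DRIVER_SVM
	bool "Example SVM support"
	depends on HMM_MIRROR        # user must enable HMM_MIRROR first;
	                             # its own dependencies are respected
	select MIGRATE_VMA_HELPER    # forced on alongside this option,
	                             # bypassing any dependency checks
```

Using `depends on` for symbols with nontrivial dependency chains (like HMM_MIRROR) avoids the broken configurations that `select` can create.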


[PATCH 20/25] mm: remove hmm_devmem_add

2019-06-17 Thread Christoph Hellwig
There isn't really much value added by the hmm_devmem_add wrapper any
more, as using devm_memremap_pages directly is now just as simple.

Signed-off-by: Christoph Hellwig 
Reviewed-by: Jason Gunthorpe 
---
 Documentation/vm/hmm.rst |  26 
 include/linux/hmm.h  | 129 ---
 mm/hmm.c | 110 -
 3 files changed, 265 deletions(-)

diff --git a/Documentation/vm/hmm.rst b/Documentation/vm/hmm.rst
index 7b6eeda5a7c0..b1c960fe246d 100644
--- a/Documentation/vm/hmm.rst
+++ b/Documentation/vm/hmm.rst
@@ -336,32 +336,6 @@ directly using struct page for device memory which left 
most kernel code paths
 unaware of the difference. We only need to make sure that no one ever tries to
 map those pages from the CPU side.
 
-HMM provides a set of helpers to register and hotplug device memory as a new
-region needing a struct page. This is offered through a very simple API::
-
- struct hmm_devmem *hmm_devmem_add(const struct hmm_devmem_ops *ops,
-   struct device *device,
-   unsigned long size);
- void hmm_devmem_remove(struct hmm_devmem *devmem);
-
-The hmm_devmem_ops is where most of the important things are::
-
- struct hmm_devmem_ops {
- void (*free)(struct hmm_devmem *devmem, struct page *page);
- vm_fault_t (*fault)(struct hmm_devmem *devmem,
-  struct vm_area_struct *vma,
-  unsigned long addr,
-  struct page *page,
-  unsigned flags,
-  pmd_t *pmdp);
- };
-
-The first callback (free()) happens when the last reference on a device page is
-dropped. This means the device page is now free and no longer used by anyone.
-The second callback happens whenever the CPU tries to access a device page
-which it cannot do. This second callback must trigger a migration back to
-system memory.
-
 
 Migration to and from device memory
 ===
diff --git a/include/linux/hmm.h b/include/linux/hmm.h
index 89571e8d9c63..50ef29958604 100644
--- a/include/linux/hmm.h
+++ b/include/linux/hmm.h
@@ -587,135 +587,6 @@ static inline void hmm_mm_init(struct mm_struct *mm) {}
 #endif /* IS_ENABLED(CONFIG_HMM_MIRROR) */
 
 #if IS_ENABLED(CONFIG_DEVICE_PRIVATE) ||  IS_ENABLED(CONFIG_DEVICE_PUBLIC)
-struct hmm_devmem;
-
-/*
- * struct hmm_devmem_ops - callback for ZONE_DEVICE memory events
- *
- * @free: call when refcount on page reach 1 and thus is no longer use
- * @fault: call when there is a page fault to unaddressable memory
- *
- * Both callback happens from page_free() and page_fault() callback of struct
- * dev_pagemap respectively. See include/linux/memremap.h for more details on
- * those.
- *
- * The hmm_devmem_ops callback are just here to provide a coherent and
- * uniq API to device driver and device driver should not register their
- * own page_free() or page_fault() but rely on the hmm_devmem_ops call-
- * back.
- */
-struct hmm_devmem_ops {
-   /*
-* free() - free a device page
-* @devmem: device memory structure (see struct hmm_devmem)
-* @page: pointer to struct page being freed
-*
-* Call back occurs whenever a device page refcount reach 1 which
-* means that no one is holding any reference on the page anymore
-* (ZONE_DEVICE page have an elevated refcount of 1 as default so
-* that they are not release to the general page allocator).
-*
-* Note that callback has exclusive ownership of the page (as no
-* one is holding any reference).
-*/
-   void (*free)(struct hmm_devmem *devmem, struct page *page);
-   /*
-* fault() - CPU page fault or get user page (GUP)
-* @devmem: device memory structure (see struct hmm_devmem)
-* @vma: virtual memory area containing the virtual address
-* @addr: virtual address that faulted or for which there is a GUP
-* @page: pointer to struct page backing virtual address (unreliable)
-* @flags: FAULT_FLAG_* (see include/linux/mm.h)
-* @pmdp: page middle directory
-* Return: VM_FAULT_MINOR/MAJOR on success or one of VM_FAULT_ERROR
-*   on error
-*
-* The callback occurs whenever there is a CPU page fault or GUP on a
-* virtual address. This means that the device driver must migrate the
-* page back to regular memory (CPU accessible).
-*
-* The device driver is free to migrate more than one page from the
-* fault() callback as an optimization. However if the device decides
-* to migrate more than one page it must always priotirize the faulting
-* address over the others.
-*
-* The struct page pointer is only given as a hint to allow quick
-* lookup of internal device driver data. A concurrent migration
-* might have already freed that page and the virtual addr

[PATCH 23/25] mm: sort out the DEVICE_PRIVATE Kconfig mess

2019-06-17 Thread Christoph Hellwig
The ZONE_DEVICE support doesn't depend on anything HMM related, just on
various bits of arch support as indicated by the architecture.  Also
don't select the option from nouveau as it isn't present in many setups,
and depend on it instead.

Signed-off-by: Christoph Hellwig 
---
 drivers/gpu/drm/nouveau/Kconfig | 2 +-
 mm/Kconfig  | 5 ++---
 2 files changed, 3 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/nouveau/Kconfig b/drivers/gpu/drm/nouveau/Kconfig
index dba2613f7180..6303d203ab1d 100644
--- a/drivers/gpu/drm/nouveau/Kconfig
+++ b/drivers/gpu/drm/nouveau/Kconfig
@@ -85,10 +85,10 @@ config DRM_NOUVEAU_BACKLIGHT
 config DRM_NOUVEAU_SVM
bool "(EXPERIMENTAL) Enable SVM (Shared Virtual Memory) support"
depends on ARCH_HAS_HMM
+   depends on DEVICE_PRIVATE
depends on DRM_NOUVEAU
depends on STAGING
select HMM_MIRROR
-   select DEVICE_PRIVATE
default n
help
  Say Y here if you want to enable experimental support for
diff --git a/mm/Kconfig b/mm/Kconfig
index 406fa45e9ecc..4dbd718c8cf4 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -677,13 +677,13 @@ config ARCH_HAS_HMM_MIRROR
 
 config ARCH_HAS_HMM
bool
-   default y
depends on (X86_64 || PPC64)
depends on ZONE_DEVICE
depends on MMU && 64BIT
depends on MEMORY_HOTPLUG
depends on MEMORY_HOTREMOVE
depends on SPARSEMEM_VMEMMAP
+   default y
 
 config MIGRATE_VMA_HELPER
bool
@@ -709,8 +709,7 @@ config HMM_MIRROR
 
 config DEVICE_PRIVATE
bool "Unaddressable device memory (GPU memory, ...)"
-   depends on ARCH_HAS_HMM
-   select HMM
+   depends on ZONE_DEVICE
select DEV_PAGEMAP_OPS
 
help
-- 
2.20.1



[Nouveau] [PATCH 22/25] mm: simplify ZONE_DEVICE page private data

2019-06-17 Thread Christoph Hellwig
Remove the clumsy hmm_devmem_page_{get,set}_drvdata helpers, and
instead just access the page directly.  Also make the page data
a void pointer, and thus much easier to use.

Signed-off-by: Christoph Hellwig 
---
 drivers/gpu/drm/nouveau/nouveau_dmem.c | 18 +++--
 include/linux/hmm.h| 27 --
 include/linux/mm_types.h   |  2 +-
 mm/page_alloc.c|  8 
 4 files changed, 12 insertions(+), 43 deletions(-)

diff --git a/drivers/gpu/drm/nouveau/nouveau_dmem.c 
b/drivers/gpu/drm/nouveau/nouveau_dmem.c
index 0fb7a44b8bc4..42c026010938 100644
--- a/drivers/gpu/drm/nouveau/nouveau_dmem.c
+++ b/drivers/gpu/drm/nouveau/nouveau_dmem.c
@@ -104,11 +104,8 @@ struct nouveau_migrate {
 
 static void nouveau_dmem_page_free(struct page *page)
 {
-   struct nouveau_dmem_chunk *chunk;
-   unsigned long idx;
-
-   chunk = (void *)hmm_devmem_page_get_drvdata(page);
-   idx = page_to_pfn(page) - chunk->pfn_first;
+   struct nouveau_dmem_chunk *chunk = page->zone_device_data;
+   unsigned long idx = page_to_pfn(page) - chunk->pfn_first;
 
/*
 * FIXME:
@@ -200,7 +197,7 @@ nouveau_dmem_fault_alloc_and_copy(struct vm_area_struct 
*vma,
 
dst_addr = fault->dma[fault->npages++];
 
-   chunk = (void *)hmm_devmem_page_get_drvdata(spage);
+   chunk = spage->zone_device_data;
src_addr = page_to_pfn(spage) - chunk->pfn_first;
src_addr = (src_addr << PAGE_SHIFT) + chunk->bo->bo.offset;
 
@@ -633,9 +630,8 @@ nouveau_dmem_init(struct nouveau_drm *drm)
list_add_tail(&chunk->list, &drm->dmem->chunk_empty);
 
page = pfn_to_page(chunk->pfn_first);
-   for (j = 0; j < DMEM_CHUNK_NPAGES; ++j, ++page) {
-   hmm_devmem_page_set_drvdata(page, (long)chunk);
-   }
+   for (j = 0; j < DMEM_CHUNK_NPAGES; ++j, ++page)
+   page->zone_device_data = chunk;
}
 
NV_INFO(drm, "DMEM: registered %ldMB of device memory\n", size >> 20);
@@ -698,7 +694,7 @@ nouveau_dmem_migrate_alloc_and_copy(struct vm_area_struct 
*vma,
if (!dpage || dst_pfns[i] == MIGRATE_PFN_ERROR)
continue;
 
-   chunk = (void *)hmm_devmem_page_get_drvdata(dpage);
+   chunk = dpage->zone_device_data;
dst_addr = page_to_pfn(dpage) - chunk->pfn_first;
dst_addr = (dst_addr << PAGE_SHIFT) + chunk->bo->bo.offset;
 
@@ -862,7 +858,7 @@ nouveau_dmem_convert_pfn(struct nouveau_drm *drm,
continue;
}
 
-   chunk = (void *)hmm_devmem_page_get_drvdata(page);
+   chunk = page->zone_device_data;
addr = page_to_pfn(page) - chunk->pfn_first;
addr = (addr + chunk->bo->bo.mem.start) << PAGE_SHIFT;
 
diff --git a/include/linux/hmm.h b/include/linux/hmm.h
index 50ef29958604..454be41f2eaf 100644
--- a/include/linux/hmm.h
+++ b/include/linux/hmm.h
@@ -586,33 +586,6 @@ static inline void hmm_mm_destroy(struct mm_struct *mm) {}
 static inline void hmm_mm_init(struct mm_struct *mm) {}
 #endif /* IS_ENABLED(CONFIG_HMM_MIRROR) */
 
-#if IS_ENABLED(CONFIG_DEVICE_PRIVATE) ||  IS_ENABLED(CONFIG_DEVICE_PUBLIC)
-/*
- * hmm_devmem_page_set_drvdata - set per-page driver data field
- *
- * @page: pointer to struct page
- * @data: driver data value to set
- *
- * Because page can not be on lru we have an unsigned long that driver can use
- * to store a per page field. This just a simple helper to do that.
- */
-static inline void hmm_devmem_page_set_drvdata(struct page *page,
-  unsigned long data)
-{
-   page->hmm_data = data;
-}
-
-/*
- * hmm_devmem_page_get_drvdata - get per page driver data field
- *
- * @page: pointer to struct page
- * Return: driver data value
- */
-static inline unsigned long hmm_devmem_page_get_drvdata(const struct page 
*page)
-{
-   return page->hmm_data;
-}
-#endif /* CONFIG_DEVICE_PRIVATE || CONFIG_DEVICE_PUBLIC */
 #else /* IS_ENABLED(CONFIG_HMM) */
 static inline void hmm_mm_destroy(struct mm_struct *mm) {}
 static inline void hmm_mm_init(struct mm_struct *mm) {}
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 8ec38b11b361..f33a1289c101 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -158,7 +158,7 @@ struct page {
struct {/* ZONE_DEVICE pages */
/** @pgmap: Points to the hosting device page map. */
struct dev_pagemap *pgmap;
-   unsigned long hmm_data;
+   void *zone_device_data;
unsigned long _zd_pad_1;/* uses mapping */
};
 
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 17a39d40a556..c0e031c52db5 100644
--- a/mm/page_alloc.c

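The `hmm_data` → `zone_device_data` change above is mostly about type: an `unsigned long` forces a cast on every store and load of the driver pointer, while a `void *` is assignable directly. A small sketch of the before/after (field names from the patch, structures otherwise simplified):

```c
#include <assert.h>

struct chunk { int id; };

/* Before the patch: driver data squeezed into an unsigned long. */
struct page_before { unsigned long hmm_data; };

/* After the patch: a plain void pointer. */
struct page_after { void *zone_device_data; };

static struct chunk *get_chunk_before(struct page_before *page)
{
	return (struct chunk *)page->hmm_data;	/* cast required */
}

static struct chunk *get_chunk_after(struct page_after *page)
{
	return page->zone_device_data;	/* implicit pointer conversion */
}
```

This is exactly why the nouveau hunks above shrink from `(void *)hmm_devmem_page_get_drvdata(page)` to a bare `page->zone_device_data`.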
[Nouveau] [PATCH 19/25] mm: remove hmm_vma_alloc_locked_page

2019-06-17 Thread Christoph Hellwig
The only user of it has just been removed, and there wasn't really any need
to wrap a basic memory allocator to start with.

Signed-off-by: Christoph Hellwig 
---
 include/linux/hmm.h |  3 ---
 mm/hmm.c| 14 --
 2 files changed, 17 deletions(-)

diff --git a/include/linux/hmm.h b/include/linux/hmm.h
index e64824334b85..89571e8d9c63 100644
--- a/include/linux/hmm.h
+++ b/include/linux/hmm.h
@@ -589,9 +589,6 @@ static inline void hmm_mm_init(struct mm_struct *mm) {}
 #if IS_ENABLED(CONFIG_DEVICE_PRIVATE) ||  IS_ENABLED(CONFIG_DEVICE_PUBLIC)
 struct hmm_devmem;
 
-struct page *hmm_vma_alloc_locked_page(struct vm_area_struct *vma,
-  unsigned long addr);
-
 /*
  * struct hmm_devmem_ops - callback for ZONE_DEVICE memory events
  *
diff --git a/mm/hmm.c b/mm/hmm.c
index 307c12d7531c..0ef1a1921afb 100644
--- a/mm/hmm.c
+++ b/mm/hmm.c
@@ -1327,20 +1327,6 @@ EXPORT_SYMBOL(hmm_range_dma_unmap);
 
 
 #if IS_ENABLED(CONFIG_DEVICE_PRIVATE) ||  IS_ENABLED(CONFIG_DEVICE_PUBLIC)
-struct page *hmm_vma_alloc_locked_page(struct vm_area_struct *vma,
-  unsigned long addr)
-{
-   struct page *page;
-
-   page = alloc_page_vma(GFP_HIGHUSER, vma, addr);
-   if (!page)
-   return NULL;
-   lock_page(page);
-   return page;
-}
-EXPORT_SYMBOL(hmm_vma_alloc_locked_page);
-
-
 static void hmm_devmem_ref_release(struct percpu_ref *ref)
 {
struct hmm_devmem *devmem;
-- 
2.20.1


[Nouveau] [PATCH 21/25] mm: mark DEVICE_PUBLIC as broken

2019-06-17 Thread Christoph Hellwig
The code hasn't been used since it was added to the tree, and doesn't
appear to actually be usable.  Mark it as BROKEN until either a user
comes along or we finally give up on it.

Signed-off-by: Christoph Hellwig 
Reviewed-by: Jason Gunthorpe 
---
 mm/Kconfig | 1 +
 1 file changed, 1 insertion(+)

diff --git a/mm/Kconfig b/mm/Kconfig
index 0d2ba7e1f43e..406fa45e9ecc 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -721,6 +721,7 @@ config DEVICE_PRIVATE
 config DEVICE_PUBLIC
bool "Addressable device memory (like GPU memory)"
depends on ARCH_HAS_HMM
+   depends on BROKEN
select HMM
select DEV_PAGEMAP_OPS
 
-- 
2.20.1


[Nouveau] [PATCH 25/25] mm: don't select MIGRATE_VMA_HELPER from HMM_MIRROR

2019-06-17 Thread Christoph Hellwig
The migrate_vma helper is only used by nouveau to migrate device private
pages around.  Other HMM_MIRROR users like amdgpu or infiniband don't
need it.

Signed-off-by: Christoph Hellwig 
Reviewed-by: Jason Gunthorpe 
---
 drivers/gpu/drm/nouveau/Kconfig | 1 +
 mm/Kconfig  | 1 -
 2 files changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/nouveau/Kconfig b/drivers/gpu/drm/nouveau/Kconfig
index 66c839d8e9d1..96b9814e6d06 100644
--- a/drivers/gpu/drm/nouveau/Kconfig
+++ b/drivers/gpu/drm/nouveau/Kconfig
@@ -88,6 +88,7 @@ config DRM_NOUVEAU_SVM
depends on DRM_NOUVEAU
depends on HMM_MIRROR
depends on STAGING
+   select MIGRATE_VMA_HELPER
default n
help
  Say Y here if you want to enable experimental support for
diff --git a/mm/Kconfig b/mm/Kconfig
index 7fa785551f96..55c9c661e2ee 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -680,7 +680,6 @@ config HMM_MIRROR
depends on (X86_64 || PPC64)
depends on MMU && 64BIT
select MMU_NOTIFIER
-   select MIGRATE_VMA_HELPER
help
  Select HMM_MIRROR if you want to mirror range of the CPU page table 
of a
  process into a device page table. Here, mirror means "keep 
synchronized".
-- 
2.20.1


[Nouveau] [PATCH 15/25] device-dax: use the dev_pagemap internal refcount

2019-06-17 Thread Christoph Hellwig
The functionality is identical to the one currently open-coded in
device-dax.

Signed-off-by: Christoph Hellwig 
---
 drivers/dax/dax-private.h |  4 
 drivers/dax/device.c  | 43 ---
 2 files changed, 47 deletions(-)

diff --git a/drivers/dax/dax-private.h b/drivers/dax/dax-private.h
index a45612148ca0..ed04a18a35be 100644
--- a/drivers/dax/dax-private.h
+++ b/drivers/dax/dax-private.h
@@ -51,8 +51,6 @@ struct dax_region {
  * @target_node: effective numa node if dev_dax memory range is onlined
  * @dev - device core
  * @pgmap - pgmap for memmap setup / lifetime (driver owned)
- * @ref: pgmap reference count (driver owned)
- * @cmp: @ref final put completion (driver owned)
  */
 struct dev_dax {
struct dax_region *region;
@@ -60,8 +58,6 @@ struct dev_dax {
int target_node;
struct device dev;
struct dev_pagemap pgmap;
-   struct percpu_ref ref;
-   struct completion cmp;
 };
 
 static inline struct dev_dax *to_dev_dax(struct device *dev)
diff --git a/drivers/dax/device.c b/drivers/dax/device.c
index 17b46c1a76b4..a9d7c90ecf1e 100644
--- a/drivers/dax/device.c
+++ b/drivers/dax/device.c
@@ -14,36 +14,6 @@
 #include "dax-private.h"
 #include "bus.h"
 
-static struct dev_dax *ref_to_dev_dax(struct percpu_ref *ref)
-{
-   return container_of(ref, struct dev_dax, ref);
-}
-
-static void dev_dax_percpu_release(struct percpu_ref *ref)
-{
-   struct dev_dax *dev_dax = ref_to_dev_dax(ref);
-
-   dev_dbg(&dev_dax->dev, "%s\n", __func__);
-   complete(&dev_dax->cmp);
-}
-
-static void dev_dax_percpu_exit(struct dev_pagemap *pgmap)
-{
-   struct dev_dax *dev_dax = container_of(pgmap, struct dev_dax, pgmap);
-
-   dev_dbg(&dev_dax->dev, "%s\n", __func__);
-   wait_for_completion(&dev_dax->cmp);
-   percpu_ref_exit(pgmap->ref);
-}
-
-static void dev_dax_percpu_kill(struct dev_pagemap *pgmap)
-{
-   struct dev_dax *dev_dax = container_of(pgmap, struct dev_dax, pgmap);
-
-   dev_dbg(&dev_dax->dev, "%s\n", __func__);
-   percpu_ref_kill(pgmap->ref);
-}
-
 static int check_vma(struct dev_dax *dev_dax, struct vm_area_struct *vma,
const char *func)
 {
@@ -441,11 +411,6 @@ static void dev_dax_kill(void *dev_dax)
kill_dev_dax(dev_dax);
 }
 
-static const struct dev_pagemap_ops dev_dax_pagemap_ops = {
-   .kill   = dev_dax_percpu_kill,
-   .cleanup= dev_dax_percpu_exit,
-};
-
 int dev_dax_probe(struct device *dev)
 {
struct dev_dax *dev_dax = to_dev_dax(dev);
@@ -463,14 +428,6 @@ int dev_dax_probe(struct device *dev)
return -EBUSY;
}
 
-   init_completion(&dev_dax->cmp);
-   rc = percpu_ref_init(&dev_dax->ref, dev_dax_percpu_release, 0,
-   GFP_KERNEL);
-   if (rc)
-   return rc;
-
-   dev_dax->pgmap.ref = &dev_dax->ref;
-   dev_dax->pgmap.ops = &dev_dax_pagemap_ops;
addr = devm_memremap_pages(dev, &dev_dax->pgmap);
if (IS_ERR(addr))
return PTR_ERR(addr);
-- 
2.20.1

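The code deleted above is a hand-rolled reference-count lifecycle: init a counter, kill it, wait for the release callback, tear down. A minimal userspace analog of that pattern, which the `dev_pagemap` internal refcount now provides for free (the real `percpu_ref` is per-CPU and RCU-managed; none of that is modeled here):

```c
#include <assert.h>

/* A counter whose release callback fires exactly once, when the last
 * reference is dropped. */
struct simple_ref {
	int count;
	int released;
	void (*release)(struct simple_ref *ref);
};

static void mark_released(struct simple_ref *ref)
{
	ref->released = 1;
}

static void ref_init(struct simple_ref *ref,
		     void (*release)(struct simple_ref *ref))
{
	ref->count = 1;		/* caller holds the initial reference */
	ref->released = 0;
	ref->release = release;
}

static void ref_get(struct simple_ref *ref)
{
	ref->count++;
}

static void ref_put(struct simple_ref *ref)
{
	if (--ref->count == 0)
		ref->release(ref);
}
```

Every driver that open-codes this boilerplate (the `ref_to_dev_dax`/`dev_dax_percpu_*` functions removed above) duplicates the same init/kill/wait sequence, which is why moving it into devm_memremap_pages pays off across the series.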

[Nouveau] [PATCH 11/25] memremap: add a migrate_to_ram method to struct dev_pagemap_ops

2019-06-17 Thread Christoph Hellwig
This replaces the hacky ->fault callback, which is currently called
directly from common code through an HMM-specific data structure, as an
exercise in layering violations.

Signed-off-by: Christoph Hellwig 
Reviewed-by: Ralph Campbell 
---
 include/linux/hmm.h  |  6 --
 include/linux/memremap.h |  6 ++
 include/linux/swapops.h  | 15 ---
 kernel/memremap.c| 35 ---
 mm/hmm.c | 13 +
 mm/memory.c  |  9 ++---
 6 files changed, 17 insertions(+), 67 deletions(-)

diff --git a/include/linux/hmm.h b/include/linux/hmm.h
index 31e1c5347331..e64824334b85 100644
--- a/include/linux/hmm.h
+++ b/include/linux/hmm.h
@@ -694,11 +694,6 @@ struct hmm_devmem_ops {
  * chunk, as an optimization. It must, however, prioritize the faulting address
  * over all the others.
  */
-typedef vm_fault_t (*dev_page_fault_t)(struct vm_area_struct *vma,
-   unsigned long addr,
-   const struct page *page,
-   unsigned int flags,
-   pmd_t *pmdp);
 
 struct hmm_devmem {
struct completion   completion;
@@ -709,7 +704,6 @@ struct hmm_devmem {
struct dev_pagemap  pagemap;
const struct hmm_devmem_ops *ops;
struct percpu_ref   ref;
-   dev_page_fault_tpage_fault;
 };
 
 /*
diff --git a/include/linux/memremap.h b/include/linux/memremap.h
index cec02d5400f1..72a8a1a9303b 100644
--- a/include/linux/memremap.h
+++ b/include/linux/memremap.h
@@ -80,6 +80,12 @@ struct dev_pagemap_ops {
 * Wait for refcount in struct dev_pagemap to be idle and reap it.
 */
void (*cleanup)(struct dev_pagemap *pgmap);
+
+   /*
+* Used for private (un-addressable) device memory only.  Must migrate
+* the page back to a CPU accessible page.
+*/
+   vm_fault_t (*migrate_to_ram)(struct vm_fault *vmf);
 };
 
 /**
diff --git a/include/linux/swapops.h b/include/linux/swapops.h
index 4d961668e5fc..15bdb6fe71e5 100644
--- a/include/linux/swapops.h
+++ b/include/linux/swapops.h
@@ -129,12 +129,6 @@ static inline struct page 
*device_private_entry_to_page(swp_entry_t entry)
 {
return pfn_to_page(swp_offset(entry));
 }
-
-vm_fault_t device_private_entry_fault(struct vm_area_struct *vma,
-  unsigned long addr,
-  swp_entry_t entry,
-  unsigned int flags,
-  pmd_t *pmdp);
 #else /* CONFIG_DEVICE_PRIVATE */
 static inline swp_entry_t make_device_private_entry(struct page *page, bool 
write)
 {
@@ -164,15 +158,6 @@ static inline struct page 
*device_private_entry_to_page(swp_entry_t entry)
 {
return NULL;
 }
-
-static inline vm_fault_t device_private_entry_fault(struct vm_area_struct *vma,
-unsigned long addr,
-swp_entry_t entry,
-unsigned int flags,
-pmd_t *pmdp)
-{
-   return VM_FAULT_SIGBUS;
-}
 #endif /* CONFIG_DEVICE_PRIVATE */
 
 #ifdef CONFIG_MIGRATION
diff --git a/kernel/memremap.c b/kernel/memremap.c
index 7272027fbdd7..5245c25b10e3 100644
--- a/kernel/memremap.c
+++ b/kernel/memremap.c
@@ -11,7 +11,6 @@
 #include 
 #include 
 #include 
-#include 
 
 static DEFINE_XARRAY(pgmap_array);
 #define SECTION_MASK ~((1UL << PA_SECTION_SHIFT) - 1)
@@ -46,36 +45,6 @@ static int dev_pagemap_get_ops(struct device *dev, struct 
dev_pagemap *pgmap)
 }
 #endif /* CONFIG_DEV_PAGEMAP_OPS */
 
-#if IS_ENABLED(CONFIG_DEVICE_PRIVATE)
-vm_fault_t device_private_entry_fault(struct vm_area_struct *vma,
-  unsigned long addr,
-  swp_entry_t entry,
-  unsigned int flags,
-  pmd_t *pmdp)
-{
-   struct page *page = device_private_entry_to_page(entry);
-   struct hmm_devmem *devmem;
-
-   devmem = container_of(page->pgmap, typeof(*devmem), pagemap);
-
-   /*
-* The page_fault() callback must migrate page back to system memory
-* so that CPU can access it. This might fail for various reasons
-* (device issue, device was unsafely unplugged, ...). When such
-* error conditions happen, the callback must return VM_FAULT_SIGBUS.
-*
-* Note that because memory cgroup charges are accounted to the device
-* memory, this should never fail because of memory restrictions (but
-* allocation of regular system page might still fail because we are
-* out of memory).
-*
-* There is a more in-depth description of what that callback can and
-* cannot do, in include/linux/memremap.h
-*/
-   return devmem->page_fault(vma, addr, page, flags, pmdp);
-}
-#endif /* CONFIG_DEVICE_PRIVATE */
-
 static void pgmap_array_delete(struct 

[Nouveau] [PATCH 10/25] memremap: lift the devmap_enable manipulation into devm_memremap_pages

2019-06-17 Thread Christoph Hellwig
Just check if there is a ->page_free operation set and take care of the
static key enable, as well as the put, using device-managed resources.
Also check that a ->page_free is provided for the pgmap types that
require it, and check for a valid type as well while we are at it.

Note that this also fixes the fact that hmm never called
dev_pagemap_put_ops and thus would leave the slow path enabled forever,
even after a device driver unload or disable.

Signed-off-by: Christoph Hellwig 
---
 drivers/nvdimm/pmem.c | 23 +++--
 include/linux/mm.h| 10 
 kernel/memremap.c | 57 ++-
 mm/hmm.c  |  2 --
 4 files changed, 39 insertions(+), 53 deletions(-)

diff --git a/drivers/nvdimm/pmem.c b/drivers/nvdimm/pmem.c
index 469a0f5b3380..85364c59c607 100644
--- a/drivers/nvdimm/pmem.c
+++ b/drivers/nvdimm/pmem.c
@@ -342,11 +342,6 @@ static void pmem_release_disk(void *__pmem)
put_disk(pmem->disk);
 }
 
-static void pmem_release_pgmap_ops(void *__pgmap)
-{
-   dev_pagemap_put_ops();
-}
-
 static void pmem_pagemap_page_free(struct page *page, void *data)
 {
wake_up_var(&page->_refcount);
@@ -358,16 +353,6 @@ static const struct dev_pagemap_ops fsdax_pagemap_ops = {
.cleanup= pmem_pagemap_cleanup,
 };
 
-static int setup_pagemap_fsdax(struct device *dev, struct dev_pagemap *pgmap)
-{
-   dev_pagemap_get_ops();
-   if (devm_add_action_or_reset(dev, pmem_release_pgmap_ops, pgmap))
-   return -ENOMEM;
-   pgmap->type = MEMORY_DEVICE_FS_DAX;
-   pgmap->ops = &fsdax_pagemap_ops;
-   return 0;
-}
-
 static int pmem_attach_disk(struct device *dev,
struct nd_namespace_common *ndns)
 {
@@ -423,8 +408,8 @@ static int pmem_attach_disk(struct device *dev,
pmem->pfn_flags = PFN_DEV;
pmem->pgmap.ref = &q->q_usage_counter;
if (is_nd_pfn(dev)) {
-   if (setup_pagemap_fsdax(dev, &pmem->pgmap))
-   return -ENOMEM;
+   pmem->pgmap.type = MEMORY_DEVICE_FS_DAX;
+   pmem->pgmap.ops = &fsdax_pagemap_ops;
addr = devm_memremap_pages(dev, &pmem->pgmap);
pfn_sb = nd_pfn->pfn_sb;
pmem->data_offset = le64_to_cpu(pfn_sb->dataoff);
@@ -436,8 +421,8 @@ static int pmem_attach_disk(struct device *dev,
} else if (pmem_should_map_pages(dev)) {
memcpy(&pmem->pgmap.res, &nsio->res, sizeof(pmem->pgmap.res));
pmem->pgmap.altmap_valid = false;
-   if (setup_pagemap_fsdax(dev, &pmem->pgmap))
-   return -ENOMEM;
+   pmem->pgmap.type = MEMORY_DEVICE_FS_DAX;
+   pmem->pgmap.ops = &fsdax_pagemap_ops;
addr = devm_memremap_pages(dev, &pmem->pgmap);
pmem->pfn_flags |= PFN_MAP;
memcpy(&bb_res, &pmem->pgmap.res, sizeof(bb_res));
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 0e8834ac32b7..edcf2b821647 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -921,8 +921,6 @@ static inline bool is_zone_device_page(const struct page 
*page)
 #endif
 
 #ifdef CONFIG_DEV_PAGEMAP_OPS
-void dev_pagemap_get_ops(void);
-void dev_pagemap_put_ops(void);
 void __put_devmap_managed_page(struct page *page);
 DECLARE_STATIC_KEY_FALSE(devmap_managed_key);
 static inline bool put_devmap_managed_page(struct page *page)
@@ -969,14 +967,6 @@ static inline bool is_pci_p2pdma_page(const struct page 
*page)
 #endif /* CONFIG_PCI_P2PDMA */
 
 #else /* CONFIG_DEV_PAGEMAP_OPS */
-static inline void dev_pagemap_get_ops(void)
-{
-}
-
-static inline void dev_pagemap_put_ops(void)
-{
-}
-
 static inline bool put_devmap_managed_page(struct page *page)
 {
return false;
diff --git a/kernel/memremap.c b/kernel/memremap.c
index ba7156bd52d1..7272027fbdd7 100644
--- a/kernel/memremap.c
+++ b/kernel/memremap.c
@@ -17,6 +17,35 @@ static DEFINE_XARRAY(pgmap_array);
 #define SECTION_MASK ~((1UL << PA_SECTION_SHIFT) - 1)
 #define SECTION_SIZE (1UL << PA_SECTION_SHIFT)
 
+#ifdef CONFIG_DEV_PAGEMAP_OPS
+DEFINE_STATIC_KEY_FALSE(devmap_managed_key);
+EXPORT_SYMBOL(devmap_managed_key);
+static atomic_t devmap_enable;
+
+static void dev_pagemap_put_ops(void *data)
+{
+   if (atomic_dec_and_test(&devmap_enable))
+   static_branch_disable(&devmap_managed_key);
+}
+
+static int dev_pagemap_get_ops(struct device *dev, struct dev_pagemap *pgmap)
+{
+   if (!pgmap->ops->page_free) {
+   WARN(1, "Missing page_free method\n");
+   return -EINVAL;
+   }
+
+   if (atomic_inc_return(&devmap_enable) == 1)
+   static_branch_enable(&devmap_managed_key);
+   return devm_add_action_or_reset(dev, dev_pagemap_put_ops, NULL);
+}
+#else
+static int dev_pagemap_get_ops(struct device *dev, struct dev_pagemap *pgmap)
+{
+   return -EINVAL;
+}
+#endif /* CONFIG_DEV_PAGEMAP_OPS */
+
 #if IS_ENABLED(CONFIG_DEVICE_PRIVAT

[Nouveau] [PATCH 02/25] mm: remove the struct hmm_device infrastructure

2019-06-17 Thread Christoph Hellwig
This code is a trivial wrapper around device model helpers, which
should have been integrated into the driver device model usage from
the start.  That is assuming it ever had users, which it never did in
the more than one and a half years since the code was added.

Signed-off-by: Christoph Hellwig 
Reviewed-by: Jason Gunthorpe 
Reviewed-by: John Hubbard 
---
 include/linux/hmm.h | 20 
 mm/hmm.c| 80 -
 2 files changed, 100 deletions(-)

diff --git a/include/linux/hmm.h b/include/linux/hmm.h
index 7007123842ba..c92f353d701a 100644
--- a/include/linux/hmm.h
+++ b/include/linux/hmm.h
@@ -753,26 +753,6 @@ static inline unsigned long 
hmm_devmem_page_get_drvdata(const struct page *page)
 {
return page->hmm_data;
 }
-
-
-/*
- * struct hmm_device - fake device to hang device memory onto
- *
- * @device: device struct
- * @minor: device minor number
- */
-struct hmm_device {
-   struct device   device;
-   unsigned intminor;
-};
-
-/*
- * A device driver that wants to handle multiple devices memory through a
- * single fake device can use hmm_device to do so. This is purely a helper and
- * it is not strictly needed, in order to make use of any HMM functionality.
- */
-struct hmm_device *hmm_device_new(void *drvdata);
-void hmm_device_put(struct hmm_device *hmm_device);
 #endif /* CONFIG_DEVICE_PRIVATE || CONFIG_DEVICE_PUBLIC */
 #else /* IS_ENABLED(CONFIG_HMM) */
 static inline void hmm_mm_destroy(struct mm_struct *mm) {}
diff --git a/mm/hmm.c b/mm/hmm.c
index 4c770a734c0d..f3350fc567ab 100644
--- a/mm/hmm.c
+++ b/mm/hmm.c
@@ -1525,84 +1525,4 @@ struct hmm_devmem *hmm_devmem_add_resource(const struct 
hmm_devmem_ops *ops,
return devmem;
 }
 EXPORT_SYMBOL_GPL(hmm_devmem_add_resource);
-
-/*
- * A device driver that wants to handle multiple devices memory through a
- * single fake device can use hmm_device to do so. This is purely a helper
- * and it is not needed to make use of any HMM functionality.
- */
-#define HMM_DEVICE_MAX 256
-
-static DECLARE_BITMAP(hmm_device_mask, HMM_DEVICE_MAX);
-static DEFINE_SPINLOCK(hmm_device_lock);
-static struct class *hmm_device_class;
-static dev_t hmm_device_devt;
-
-static void hmm_device_release(struct device *device)
-{
-   struct hmm_device *hmm_device;
-
-   hmm_device = container_of(device, struct hmm_device, device);
-   spin_lock(&hmm_device_lock);
-   clear_bit(hmm_device->minor, hmm_device_mask);
-   spin_unlock(&hmm_device_lock);
-
-   kfree(hmm_device);
-}
-
-struct hmm_device *hmm_device_new(void *drvdata)
-{
-   struct hmm_device *hmm_device;
-
-   hmm_device = kzalloc(sizeof(*hmm_device), GFP_KERNEL);
-   if (!hmm_device)
-   return ERR_PTR(-ENOMEM);
-
-   spin_lock(&hmm_device_lock);
-   hmm_device->minor = find_first_zero_bit(hmm_device_mask, 
HMM_DEVICE_MAX);
-   if (hmm_device->minor >= HMM_DEVICE_MAX) {
-   spin_unlock(&hmm_device_lock);
-   kfree(hmm_device);
-   return ERR_PTR(-EBUSY);
-   }
-   set_bit(hmm_device->minor, hmm_device_mask);
-   spin_unlock(&hmm_device_lock);
-
-   dev_set_name(&hmm_device->device, "hmm_device%d", hmm_device->minor);
-   hmm_device->device.devt = MKDEV(MAJOR(hmm_device_devt),
-   hmm_device->minor);
-   hmm_device->device.release = hmm_device_release;
-   dev_set_drvdata(&hmm_device->device, drvdata);
-   hmm_device->device.class = hmm_device_class;
-   device_initialize(&hmm_device->device);
-
-   return hmm_device;
-}
-EXPORT_SYMBOL(hmm_device_new);
-
-void hmm_device_put(struct hmm_device *hmm_device)
-{
-   put_device(&hmm_device->device);
-}
-EXPORT_SYMBOL(hmm_device_put);
-
-static int __init hmm_init(void)
-{
-   int ret;
-
-   ret = alloc_chrdev_region(&hmm_device_devt, 0,
- HMM_DEVICE_MAX,
- "hmm_device");
-   if (ret)
-   return ret;
-
-   hmm_device_class = class_create(THIS_MODULE, "hmm_device");
-   if (IS_ERR(hmm_device_class)) {
-   unregister_chrdev_region(hmm_device_devt, HMM_DEVICE_MAX);
-   return PTR_ERR(hmm_device_class);
-   }
-   return 0;
-}
-
-device_initcall(hmm_init);
 #endif /* CONFIG_DEVICE_PRIVATE || CONFIG_DEVICE_PUBLIC */
-- 
2.20.1

___
Nouveau mailing list
Nouveau@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/nouveau

[Nouveau] [PATCH 13/25] memremap: replace the altmap_valid field with a PGMAP_ALTMAP_VALID flag

2019-06-17 Thread Christoph Hellwig
Add a flags field to struct dev_pagemap, replacing the altmap_valid
boolean, to be a little more extensible.  Also add a pgmap_altmap()
helper to find the optional altmap, and use it to clean up the code
that accesses the altmap.
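The shape of the pgmap_altmap() helper can be sketched in standalone C. The structs below are trimmed to what the example needs; only PGMAP_ALTMAP_VALID and the helper itself mirror the patch, everything else is simplified.

```c
#include <assert.h>
#include <stddef.h>

#define PGMAP_ALTMAP_VALID	(1 << 0)

struct vmem_altmap {
	unsigned long reserve;
};

/* trimmed-down stand-in for the kernel structure */
struct dev_pagemap {
	struct vmem_altmap altmap;
	unsigned int flags;
};

/* Replaces the old "bool altmap_valid" test: callers ask for the
 * altmap and get NULL when none was set up, instead of checking a
 * flag and a member separately. */
static struct vmem_altmap *pgmap_altmap(struct dev_pagemap *pgmap)
{
	if (pgmap->flags & PGMAP_ALTMAP_VALID)
		return &pgmap->altmap;
	return NULL;
}
```

Callers such as arch_remove_memory() then reduce to one expression, e.g. `page = pfn_to_page(start_pfn) + vmem_altmap_offset(altmap)`, as the diffs below show.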

Signed-off-by: Christoph Hellwig 
---
 arch/powerpc/mm/mem.c | 10 +-
 arch/x86/mm/init_64.c |  8 ++--
 drivers/nvdimm/pfn_devs.c |  3 +--
 drivers/nvdimm/pmem.c |  1 -
 include/linux/memremap.h  | 12 +++-
 kernel/memremap.c | 26 ++
 mm/hmm.c  |  1 -
 mm/memory_hotplug.c   |  6 ++
 mm/page_alloc.c   |  5 ++---
 9 files changed, 29 insertions(+), 43 deletions(-)

diff --git a/arch/powerpc/mm/mem.c b/arch/powerpc/mm/mem.c
index cba29131bccc..f774d80df025 100644
--- a/arch/powerpc/mm/mem.c
+++ b/arch/powerpc/mm/mem.c
@@ -131,17 +131,9 @@ void __ref arch_remove_memory(int nid, u64 start, u64 size,
 {
unsigned long start_pfn = start >> PAGE_SHIFT;
unsigned long nr_pages = size >> PAGE_SHIFT;
-   struct page *page;
+   struct page *page = pfn_to_page(start_pfn) + vmem_altmap_offset(altmap);
int ret;
 
-   /*
-* If we have an altmap then we need to skip over any reserved PFNs
-* when querying the zone.
-*/
-   page = pfn_to_page(start_pfn);
-   if (altmap)
-   page += vmem_altmap_offset(altmap);
-
__remove_pages(page_zone(page), start_pfn, nr_pages, altmap);
 
/* Remove htab bolted mappings for this section of memory */
diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index 693aaf28d5fe..3139e992ef9d 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -1211,13 +1211,9 @@ void __ref arch_remove_memory(int nid, u64 start, u64 
size,
 {
unsigned long start_pfn = start >> PAGE_SHIFT;
unsigned long nr_pages = size >> PAGE_SHIFT;
-   struct page *page = pfn_to_page(start_pfn);
-   struct zone *zone;
+   struct page *page = pfn_to_page(start_pfn) + vmem_altmap_offset(altmap);
+   struct zone *zone = page_zone(page);
 
-   /* With altmap the first mapped page is offset from @start */
-   if (altmap)
-   page += vmem_altmap_offset(altmap);
-   zone = page_zone(page);
__remove_pages(zone, start_pfn, nr_pages, altmap);
kernel_physical_mapping_remove(start, start + size);
 }
diff --git a/drivers/nvdimm/pfn_devs.c b/drivers/nvdimm/pfn_devs.c
index 01f40672507f..24924a442129 100644
--- a/drivers/nvdimm/pfn_devs.c
+++ b/drivers/nvdimm/pfn_devs.c
@@ -630,7 +630,6 @@ static int __nvdimm_setup_pfn(struct nd_pfn *nd_pfn, struct 
dev_pagemap *pgmap)
if (offset < reserve)
return -EINVAL;
nd_pfn->npfns = le64_to_cpu(pfn_sb->npfns);
-   pgmap->altmap_valid = false;
} else if (nd_pfn->mode == PFN_MODE_PMEM) {
nd_pfn->npfns = PFN_SECTION_ALIGN_UP((resource_size(res)
- offset) / PAGE_SIZE);
@@ -642,7 +641,7 @@ static int __nvdimm_setup_pfn(struct nd_pfn *nd_pfn, struct 
dev_pagemap *pgmap)
memcpy(altmap, &__altmap, sizeof(*altmap));
altmap->free = PHYS_PFN(offset - reserve);
altmap->alloc = 0;
-   pgmap->altmap_valid = true;
+   pgmap->flags |= PGMAP_ALTMAP_VALID;
} else
return -ENXIO;
 
diff --git a/drivers/nvdimm/pmem.c b/drivers/nvdimm/pmem.c
index 1ff4b1c4c7c3..7c3d388cf2f7 100644
--- a/drivers/nvdimm/pmem.c
+++ b/drivers/nvdimm/pmem.c
@@ -420,7 +420,6 @@ static int pmem_attach_disk(struct device *dev,
bb_res.start += pmem->data_offset;
} else if (pmem_should_map_pages(dev)) {
memcpy(&pmem->pgmap.res, &nsio->res, sizeof(pmem->pgmap.res));
-   pmem->pgmap.altmap_valid = false;
pmem->pgmap.type = MEMORY_DEVICE_FS_DAX;
pmem->pgmap.ops = &fsdax_pagemap_ops;
addr = devm_memremap_pages(dev, &pmem->pgmap);
diff --git a/include/linux/memremap.h b/include/linux/memremap.h
index 036c637f0150..7289eb091b04 100644
--- a/include/linux/memremap.h
+++ b/include/linux/memremap.h
@@ -88,6 +88,8 @@ struct dev_pagemap_ops {
vm_fault_t (*migrate_to_ram)(struct vm_fault *vmf);
 };
 
+#define PGMAP_ALTMAP_VALID (1 << 0)
+
 /**
  * struct dev_pagemap - metadata for ZONE_DEVICE mappings
  * @altmap: pre-allocated/reserved memory for vmemmap allocations
@@ -96,19 +98,27 @@ struct dev_pagemap_ops {
  * @dev: host device of the mapping for debug
  * @data: private data pointer for page_free()
  * @type: memory type: see MEMORY_* in memory_hotplug.h
+ * @flags: PGMAP_* flags to specify detailed behavior
  * @ops: method table
  */
 struct dev_pagemap {
struct vmem_altmap altmap;
-   bool altmap_valid;
struct resource res;
struct percpu_ref *ref;
struct device *dev;
enum memory_type type;

[Nouveau] [PATCH 06/25] mm: factor out a devm_request_free_mem_region helper

2019-06-17 Thread Christoph Hellwig
Keep the physical address allocation that hmm_devmem_add does with the
rest of the resource code, and allow future reuse of it without the hmm
wrapper.
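The search strategy inside devm_request_free_mem_region() can be sketched in userspace C: walk downward from the top of the host range in size-sized steps and claim the first span that does not intersect an occupied region. This is a hypothetical model only; `find_free_region`, `struct range`, and the `occupied` array are invented stand-ins for the iomem resource tree, and the real helper additionally section-aligns the size and registers the region with devm.

```c
#include <assert.h>
#include <stddef.h>

struct range { unsigned long start, end; };	/* inclusive bounds */

static int intersects(struct range a, struct range b)
{
	return a.start <= b.end && a.end >= b.start;
}

/* Top-down first-fit search, mirroring the loop in the patch. */
static long find_free_region(struct range base, unsigned long size,
			     const struct range *occupied, size_t n)
{
	unsigned long addr = base.end - size + 1;

	for (; addr > size && addr >= base.start; addr -= size) {
		struct range cand = { addr, addr + size - 1 };
		size_t i;

		for (i = 0; i < n; i++)
			if (intersects(cand, occupied[i]))
				break;
		if (i == n)
			return (long)addr;	/* disjoint: take it */
	}
	return -1;	/* no free range (the kernel returns -ERANGE) */
}
```

Searching from the top keeps device-private ranges away from the low physical addresses where real RAM usually lives.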

Signed-off-by: Christoph Hellwig 
Reviewed-by: Jason Gunthorpe 
Reviewed-by: John Hubbard 
---
 include/linux/ioport.h |  2 ++
 kernel/resource.c  | 39 +++
 mm/hmm.c   | 33 -
 3 files changed, 45 insertions(+), 29 deletions(-)

diff --git a/include/linux/ioport.h b/include/linux/ioport.h
index da0ebaec25f0..76a33ae3bf6c 100644
--- a/include/linux/ioport.h
+++ b/include/linux/ioport.h
@@ -286,6 +286,8 @@ static inline bool resource_overlaps(struct resource *r1, 
struct resource *r2)
return (r1->start <= r2->end && r1->end >= r2->start);
 }
 
+struct resource *devm_request_free_mem_region(struct device *dev,
+   struct resource *base, unsigned long size);
 
 #endif /* __ASSEMBLY__ */
 #endif /* _LINUX_IOPORT_H */
diff --git a/kernel/resource.c b/kernel/resource.c
index 158f04ec1d4f..d22423e85cf8 100644
--- a/kernel/resource.c
+++ b/kernel/resource.c
@@ -1628,6 +1628,45 @@ void resource_list_free(struct list_head *head)
 }
 EXPORT_SYMBOL(resource_list_free);
 
+#ifdef CONFIG_DEVICE_PRIVATE
+/**
+ * devm_request_free_mem_region - find free region for device private memory
+ *
+ * @dev: device struct to bind the resource to
+ * @size: size in bytes of the device memory to add
+ * @base: resource tree to look in
+ *
+ * This function tries to find an empty range of physical address big enough to
+ * contain the new resource, so that it can later be hotplugged as ZONE_DEVICE
+ * memory, which in turn allocates struct pages.
+ */
+struct resource *devm_request_free_mem_region(struct device *dev,
+   struct resource *base, unsigned long size)
+{
+   resource_size_t end, addr;
+   struct resource *res;
+
+   size = ALIGN(size, 1UL << PA_SECTION_SHIFT);
+   end = min_t(unsigned long, base->end, (1UL << MAX_PHYSMEM_BITS) - 1);
+   addr = end - size + 1UL;
+
+   for (; addr > size && addr >= base->start; addr -= size) {
+   if (region_intersects(addr, size, 0, IORES_DESC_NONE) !=
+   REGION_DISJOINT)
+   continue;
+
+   res = devm_request_mem_region(dev, addr, size, dev_name(dev));
+   if (!res)
+   return ERR_PTR(-ENOMEM);
+   res->desc = IORES_DESC_DEVICE_PRIVATE_MEMORY;
+   return res;
+   }
+
+   return ERR_PTR(-ERANGE);
+}
+EXPORT_SYMBOL_GPL(devm_request_free_mem_region);
+#endif /* CONFIG_DEVICE_PRIVATE */
+
 static int __init strict_iomem(char *str)
 {
if (strstr(str, "relaxed"))
diff --git a/mm/hmm.c b/mm/hmm.c
index 64e788bb1211..172d695dcb8b 100644
--- a/mm/hmm.c
+++ b/mm/hmm.c
@@ -25,8 +25,6 @@
 #include 
 #include 
 
-#define PA_SECTION_SIZE (1UL << PA_SECTION_SHIFT)
-
 #if IS_ENABLED(CONFIG_HMM_MIRROR)
 static const struct mmu_notifier_ops hmm_mmu_notifier_ops;
 
@@ -1405,7 +1403,6 @@ struct hmm_devmem *hmm_devmem_add(const struct 
hmm_devmem_ops *ops,
  unsigned long size)
 {
struct hmm_devmem *devmem;
-   resource_size_t addr;
void *result;
int ret;
 
@@ -1427,32 +1424,10 @@ struct hmm_devmem *hmm_devmem_add(const struct 
hmm_devmem_ops *ops,
if (ret)
return ERR_PTR(ret);
 
-   size = ALIGN(size, PA_SECTION_SIZE);
-   addr = min((unsigned long)iomem_resource.end,
-  (1UL << MAX_PHYSMEM_BITS) - 1);
-   addr = addr - size + 1UL;
-
-   /*
-* FIXME add a new helper to quickly walk resource tree and find free
-* range
-*
-* FIXME what about ioport_resource resource ?
-*/
-   for (; addr > size && addr >= iomem_resource.start; addr -= size) {
-   ret = region_intersects(addr, size, 0, IORES_DESC_NONE);
-   if (ret != REGION_DISJOINT)
-   continue;
-
-   devmem->resource = devm_request_mem_region(device, addr, size,
-  dev_name(device));
-   if (!devmem->resource)
-   return ERR_PTR(-ENOMEM);
-   break;
-   }
-   if (!devmem->resource)
-   return ERR_PTR(-ERANGE);
-
-   devmem->resource->desc = IORES_DESC_DEVICE_PRIVATE_MEMORY;
+   devmem->resource = devm_request_free_mem_region(device, &iomem_resource,
+   size);
+   if (IS_ERR(devmem->resource))
+   return ERR_CAST(devmem->resource);
devmem->pfn_first = devmem->resource->start >> PAGE_SHIFT;
devmem->pfn_last = devmem->pfn_first +
   (resource_size(devmem->resource) >> PAGE_SHIFT);
-- 
2.20.1


[Nouveau] [PATCH 08/25] memremap: move dev_pagemap callbacks into a separate structure

2019-06-17 Thread Christoph Hellwig
struct dev_pagemap is growing too many callbacks.  Move them into a
separate ops structure so that they are not duplicated for multiple
instances, and so that an attacker can't easily overwrite them.
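The refactor boils down to a standard C idiom: replace per-instance function pointers with one shared const operations table. The sketch below uses invented example names (`example_kill`, `example_ops`); in the kernel a const table can end up in read-only memory, which is the hardening benefit mentioned above.

```c
#include <assert.h>

struct dev_pagemap;

/* one table type holding all callbacks */
struct dev_pagemap_ops {
	void (*kill)(struct dev_pagemap *pgmap);
};

struct dev_pagemap {
	const struct dev_pagemap_ops *ops;	/* was: per-instance pointers */
	int killed;
};

static void example_kill(struct dev_pagemap *pgmap)
{
	pgmap->killed = 1;
}

/* one shared, write-protected table instead of N mutable copies */
static const struct dev_pagemap_ops example_ops = {
	.kill = example_kill,
};
```

Every pagemap instance then points at the same table, as the dev_dax and pmem conversions in the diff below do with `dev_dax_pagemap_ops` and `fsdax_pagemap_ops`.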

Signed-off-by: Christoph Hellwig 
Reviewed-by: Logan Gunthorpe 
Reviewed-by: Jason Gunthorpe 
---
 drivers/dax/device.c  | 11 ++
 drivers/dax/pmem/core.c   |  2 +-
 drivers/nvdimm/pmem.c | 19 +---
 drivers/pci/p2pdma.c  |  9 +---
 include/linux/memremap.h  | 36 +--
 kernel/memremap.c | 18 
 mm/hmm.c  | 10 ++---
 tools/testing/nvdimm/test/iomap.c |  9 
 8 files changed, 65 insertions(+), 49 deletions(-)

diff --git a/drivers/dax/device.c b/drivers/dax/device.c
index 8465d12fecba..cd483050a775 100644
--- a/drivers/dax/device.c
+++ b/drivers/dax/device.c
@@ -36,9 +36,8 @@ static void dev_dax_percpu_exit(struct percpu_ref *ref)
percpu_ref_exit(ref);
 }
 
-static void dev_dax_percpu_kill(struct percpu_ref *data)
+static void dev_dax_percpu_kill(struct percpu_ref *ref)
 {
-   struct percpu_ref *ref = data;
struct dev_dax *dev_dax = ref_to_dev_dax(ref);
 
dev_dbg(&dev_dax->dev, "%s\n", __func__);
@@ -442,6 +441,11 @@ static void dev_dax_kill(void *dev_dax)
kill_dev_dax(dev_dax);
 }
 
+static const struct dev_pagemap_ops dev_dax_pagemap_ops = {
+   .kill   = dev_dax_percpu_kill,
+   .cleanup= dev_dax_percpu_exit,
+};
+
 int dev_dax_probe(struct device *dev)
 {
struct dev_dax *dev_dax = to_dev_dax(dev);
@@ -466,8 +470,7 @@ int dev_dax_probe(struct device *dev)
return rc;
 
dev_dax->pgmap.ref = &dev_dax->ref;
-   dev_dax->pgmap.kill = dev_dax_percpu_kill;
-   dev_dax->pgmap.cleanup = dev_dax_percpu_exit;
+   dev_dax->pgmap.ops = &dev_dax_pagemap_ops;
addr = devm_memremap_pages(dev, &dev_dax->pgmap);
if (IS_ERR(addr))
return PTR_ERR(addr);
diff --git a/drivers/dax/pmem/core.c b/drivers/dax/pmem/core.c
index f9f51786d556..6eb6dfdf19bf 100644
--- a/drivers/dax/pmem/core.c
+++ b/drivers/dax/pmem/core.c
@@ -16,7 +16,7 @@ struct dev_dax *__dax_pmem_probe(struct device *dev, enum 
dev_dax_subsys subsys)
struct dev_dax *dev_dax;
struct nd_namespace_io *nsio;
struct dax_region *dax_region;
-   struct dev_pagemap pgmap = { 0 };
+   struct dev_pagemap pgmap = { };
struct nd_namespace_common *ndns;
struct nd_dax *nd_dax = to_nd_dax(dev);
struct nd_pfn *nd_pfn = &nd_dax->nd_pfn;
diff --git a/drivers/nvdimm/pmem.c b/drivers/nvdimm/pmem.c
index c4f5a808b9da..1a9986dc4dc6 100644
--- a/drivers/nvdimm/pmem.c
+++ b/drivers/nvdimm/pmem.c
@@ -311,7 +311,7 @@ static const struct attribute_group 
*pmem_attribute_groups[] = {
NULL,
 };
 
-static void __pmem_release_queue(struct percpu_ref *ref)
+static void pmem_pagemap_cleanup(struct percpu_ref *ref)
 {
struct request_queue *q;
 
@@ -321,10 +321,10 @@ static void __pmem_release_queue(struct percpu_ref *ref)
 
 static void pmem_release_queue(void *ref)
 {
-   __pmem_release_queue(ref);
+   pmem_pagemap_cleanup(ref);
 }
 
-static void pmem_freeze_queue(struct percpu_ref *ref)
+static void pmem_pagemap_kill(struct percpu_ref *ref)
 {
struct request_queue *q;
 
@@ -347,19 +347,24 @@ static void pmem_release_pgmap_ops(void *__pgmap)
dev_pagemap_put_ops();
 }
 
-static void fsdax_pagefree(struct page *page, void *data)
+static void pmem_pagemap_page_free(struct page *page, void *data)
 {
wake_up_var(&page->_refcount);
 }
 
+static const struct dev_pagemap_ops fsdax_pagemap_ops = {
+   .page_free  = pmem_pagemap_page_free,
+   .kill   = pmem_pagemap_kill,
+   .cleanup= pmem_pagemap_cleanup,
+};
+
 static int setup_pagemap_fsdax(struct device *dev, struct dev_pagemap *pgmap)
 {
dev_pagemap_get_ops();
if (devm_add_action_or_reset(dev, pmem_release_pgmap_ops, pgmap))
return -ENOMEM;
pgmap->type = MEMORY_DEVICE_FS_DAX;
-   pgmap->page_free = fsdax_pagefree;
-
+   pgmap->ops = &fsdax_pagemap_ops;
return 0;
 }
 
@@ -417,8 +422,6 @@ static int pmem_attach_disk(struct device *dev,
 
pmem->pfn_flags = PFN_DEV;
pmem->pgmap.ref = &q->q_usage_counter;
-   pmem->pgmap.kill = pmem_freeze_queue;
-   pmem->pgmap.cleanup = __pmem_release_queue;
if (is_nd_pfn(dev)) {
if (setup_pagemap_fsdax(dev, &pmem->pgmap))
return -ENOMEM;
diff --git a/drivers/pci/p2pdma.c b/drivers/pci/p2pdma.c
index a98126ad9c3a..e083567d26ef 100644
--- a/drivers/pci/p2pdma.c
+++ b/drivers/pci/p2pdma.c
@@ -100,7 +100,7 @@ static void pci_p2pdma_percpu_cleanup(struct percpu_ref 
*ref)
struct p2pdma_pagemap *p2p_pgmap = to_p2p_pgmap(ref);
 
wait_for_compl

[Nouveau] [PATCH 01/25] mm: remove the unused ARCH_HAS_HMM_DEVICE Kconfig option

2019-06-17 Thread Christoph Hellwig
Signed-off-by: Christoph Hellwig 
Reviewed-by: Jason Gunthorpe 
---
 mm/Kconfig | 10 --
 1 file changed, 10 deletions(-)

diff --git a/mm/Kconfig b/mm/Kconfig
index f0c76ba47695..0d2ba7e1f43e 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -675,16 +675,6 @@ config ARCH_HAS_HMM_MIRROR
depends on (X86_64 || PPC64)
depends on MMU && 64BIT
 
-config ARCH_HAS_HMM_DEVICE
-   bool
-   default y
-   depends on (X86_64 || PPC64)
-   depends on MEMORY_HOTPLUG
-   depends on MEMORY_HOTREMOVE
-   depends on SPARSEMEM_VMEMMAP
-   depends on ARCH_HAS_ZONE_DEVICE
-   select XARRAY_MULTI
-
 config ARCH_HAS_HMM
bool
default y
-- 
2.20.1


[Nouveau] [PATCH 14/25] memremap: provide an optional internal refcount in struct dev_pagemap

2019-06-17 Thread Christoph Hellwig
Provide internal refcounting logic if no ->ref field is provided in
the pagemap passed into devm_memremap_pages, so that callers don't
have to reinvent it poorly.
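The fallback can be sketched in simplified userspace C. This is an assumption-laden model: a plain int stands in for the percpu_ref, `setup_pagemap_ref` is an invented name, and the completion-based teardown in the real patch is omitted.

```c
#include <assert.h>
#include <stddef.h>

struct dev_pagemap {
	int *ref;		/* caller-supplied refcount, may be NULL */
	int internal_ref;	/* fallback owned by the core */
};

/* If the caller supplied no refcount, point pgmap->ref at the
 * internal one so the rest of the code uses pgmap->ref
 * unconditionally. */
static int setup_pagemap_ref(struct dev_pagemap *pgmap)
{
	if (!pgmap->ref) {
		pgmap->internal_ref = 1;
		pgmap->ref = &pgmap->internal_ref;
	}
	return 0;
}
```

After setup, both the caller-managed and the internal case look identical to the kill/cleanup paths, which is what lets dev_pagemap_kill() and dev_pagemap_cleanup() in the diff handle either.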

Signed-off-by: Christoph Hellwig 
---
 include/linux/memremap.h  |  4 ++
 kernel/memremap.c | 64 ---
 tools/testing/nvdimm/test/iomap.c | 17 ++--
 3 files changed, 68 insertions(+), 17 deletions(-)

diff --git a/include/linux/memremap.h b/include/linux/memremap.h
index 7289eb091b04..7e0f072ddce7 100644
--- a/include/linux/memremap.h
+++ b/include/linux/memremap.h
@@ -95,6 +95,8 @@ struct dev_pagemap_ops {
  * @altmap: pre-allocated/reserved memory for vmemmap allocations
  * @res: physical address range covered by @ref
  * @ref: reference count that pins the devm_memremap_pages() mapping
+ * @internal_ref: internal reference if @ref is not provided by the caller
+ * @done: completion for @internal_ref
  * @dev: host device of the mapping for debug
  * @data: private data pointer for page_free()
  * @type: memory type: see MEMORY_* in memory_hotplug.h
@@ -105,6 +107,8 @@ struct dev_pagemap {
struct vmem_altmap altmap;
struct resource res;
struct percpu_ref *ref;
+   struct percpu_ref internal_ref;
+   struct completion done;
struct device *dev;
enum memory_type type;
unsigned int flags;
diff --git a/kernel/memremap.c b/kernel/memremap.c
index b41d98a64ebf..60693a1e8e92 100644
--- a/kernel/memremap.c
+++ b/kernel/memremap.c
@@ -29,7 +29,7 @@ static void dev_pagemap_put_ops(void *data)
 
 static int dev_pagemap_get_ops(struct device *dev, struct dev_pagemap *pgmap)
 {
-   if (!pgmap->ops->page_free) {
+   if (!pgmap->ops || !pgmap->ops->page_free) {
WARN(1, "Missing page_free method\n");
return -EINVAL;
}
@@ -75,6 +75,24 @@ static unsigned long pfn_next(unsigned long pfn)
 #define for_each_device_pfn(pfn, map) \
for (pfn = pfn_first(map); pfn < pfn_end(map); pfn = pfn_next(pfn))
 
+static void dev_pagemap_kill(struct dev_pagemap *pgmap)
+{
+   if (pgmap->ops && pgmap->ops->kill)
+   pgmap->ops->kill(pgmap);
+   else
+   percpu_ref_kill(pgmap->ref);
+}
+
+static void dev_pagemap_cleanup(struct dev_pagemap *pgmap)
+{
+   if (pgmap->ops && pgmap->ops->cleanup) {
+   pgmap->ops->cleanup(pgmap);
+   } else {
+   wait_for_completion(&pgmap->done);
+   percpu_ref_exit(pgmap->ref);
+   }
+}
+
 static void devm_memremap_pages_release(void *data)
 {
struct dev_pagemap *pgmap = data;
@@ -84,10 +102,10 @@ static void devm_memremap_pages_release(void *data)
unsigned long pfn;
int nid;
 
-   pgmap->ops->kill(pgmap);
+   dev_pagemap_kill(pgmap);
for_each_device_pfn(pfn, pgmap)
put_page(pfn_to_page(pfn));
-   pgmap->ops->cleanup(pgmap);
+   dev_pagemap_cleanup(pgmap);
 
/* pages are dead and unused, undo the arch mapping */
align_start = res->start & ~(SECTION_SIZE - 1);
@@ -114,20 +132,29 @@ static void devm_memremap_pages_release(void *data)
  "%s: failed to free all reserved pages\n", __func__);
 }
 
+static void dev_pagemap_percpu_release(struct percpu_ref *ref)
+{
+   struct dev_pagemap *pgmap =
+   container_of(ref, struct dev_pagemap, internal_ref);
+
+   complete(&pgmap->done);
+}
+
 /**
  * devm_memremap_pages - remap and provide memmap backing for the given 
resource
  * @dev: hosting device for @res
  * @pgmap: pointer to a struct dev_pagemap
  *
  * Notes:
- * 1/ At a minimum the res, ref and type and ops members of @pgmap must be
- *initialized by the caller before passing it to this function
+ * 1/ At a minimum the res and type members of @pgmap must be initialized
+ *by the caller before passing it to this function
  *
  * 2/ The altmap field may optionally be initialized, in which case
  *PGMAP_ALTMAP_VALID must be set in pgmap->flags.
  *
- * 3/ pgmap->ref must be 'live' on entry and will be killed and reaped
- *at devm_memremap_pages_release() time, or if this routine fails.
+ * 3/ The ref field may optionally be provided, in which case pgmap->ref must be
+ *'live' on entry and will be killed and reaped at
+ *devm_memremap_pages_release() time, or if this routine fails.
  *
  * 4/ res is expected to be a host memory range that could feasibly be
  *treated as a "System RAM" range, i.e. not a device mmio range, but
@@ -178,10 +205,21 @@ void *devm_memremap_pages(struct device *dev, struct 
dev_pagemap *pgmap)
break;
}
 
-   if (!pgmap->ref || !pgmap->ops || !pgmap->ops->kill ||
-   !pgmap->ops->cleanup) {
-   WARN(1, "Missing reference count teardown definition\n");
-   return ERR_PTR(-EINVAL);
+   if (!pgmap->ref) {
+   if (pgmap->ops && (pgmap->ops->kill || pgmap->ops->cleanup))
+  

[Nouveau] [PATCH 12/25] memremap: remove the data field in struct dev_pagemap

2019-06-17 Thread Christoph Hellwig
struct dev_pagemap is always embedded into a containing structure, so
there is no need for an additional private data field.
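The replacement for the data pointer is the container_of() pattern, which works because the pagemap is embedded by value. A self-contained userspace sketch, with `devmem_id` as an invented accessor and the structs trimmed to the minimum:

```c
#include <assert.h>
#include <stddef.h>

/* minimal userspace definition of the kernel macro */
#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

struct dev_pagemap {
	int type;
};

struct hmm_devmem {
	int id;
	struct dev_pagemap pagemap;	/* embedded, not pointed to */
};

/* Recover the containing driver structure from the pagemap pointer,
 * instead of stashing a private "data" pointer alongside it. */
static int devmem_id(struct dev_pagemap *pgmap)
{
	struct hmm_devmem *devmem =
		container_of(pgmap, struct hmm_devmem, pagemap);

	return devmem->id;
}
```

This is exactly the substitution the hmm_devmem_free() and hmm_devmem_migrate_to_ram() hunks below perform.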

Signed-off-by: Christoph Hellwig 
Reviewed-by: Jason Gunthorpe 
---
 drivers/nvdimm/pmem.c| 2 +-
 include/linux/memremap.h | 3 +--
 kernel/memremap.c| 2 +-
 mm/hmm.c | 9 +
 4 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/drivers/nvdimm/pmem.c b/drivers/nvdimm/pmem.c
index 85364c59c607..1ff4b1c4c7c3 100644
--- a/drivers/nvdimm/pmem.c
+++ b/drivers/nvdimm/pmem.c
@@ -342,7 +342,7 @@ static void pmem_release_disk(void *__pmem)
put_disk(pmem->disk);
 }
 
-static void pmem_pagemap_page_free(struct page *page, void *data)
+static void pmem_pagemap_page_free(struct page *page)
 {
wake_up_var(&page->_refcount);
 }
diff --git a/include/linux/memremap.h b/include/linux/memremap.h
index 72a8a1a9303b..036c637f0150 100644
--- a/include/linux/memremap.h
+++ b/include/linux/memremap.h
@@ -69,7 +69,7 @@ struct dev_pagemap_ops {
 * reach 0 refcount unless there is a refcount bug. This allows the
 * device driver to implement its own memory management.)
 */
-   void (*page_free)(struct page *page, void *data);
+   void (*page_free)(struct page *page);
 
/*
 * Transition the refcount in struct dev_pagemap to the dead state.
@@ -104,7 +104,6 @@ struct dev_pagemap {
struct resource res;
struct percpu_ref *ref;
struct device *dev;
-   void *data;
enum memory_type type;
u64 pci_p2pdma_bus_offset;
const struct dev_pagemap_ops *ops;
diff --git a/kernel/memremap.c b/kernel/memremap.c
index 5245c25b10e3..9dd5ccdb1adb 100644
--- a/kernel/memremap.c
+++ b/kernel/memremap.c
@@ -379,7 +379,7 @@ void __put_devmap_managed_page(struct page *page)
 
mem_cgroup_uncharge(page);
 
-   page->pgmap->ops->page_free(page, page->pgmap->data);
+   page->pgmap->ops->page_free(page);
} else if (!count)
__put_page(page);
 }
diff --git a/mm/hmm.c b/mm/hmm.c
index 2e5642dc6b04..8a0e04bbeee6 100644
--- a/mm/hmm.c
+++ b/mm/hmm.c
@@ -1365,15 +1365,17 @@ static void hmm_devmem_ref_kill(struct dev_pagemap 
*pgmap)
 
 static vm_fault_t hmm_devmem_migrate_to_ram(struct vm_fault *vmf)
 {
-   struct hmm_devmem *devmem = vmf->page->pgmap->data;
+   struct hmm_devmem *devmem =
+   container_of(vmf->page->pgmap, struct hmm_devmem, pagemap);
 
return devmem->ops->fault(devmem, vmf->vma, vmf->address, vmf->page,
vmf->flags, vmf->pmd);
 }
 
-static void hmm_devmem_free(struct page *page, void *data)
+static void hmm_devmem_free(struct page *page)
 {
-   struct hmm_devmem *devmem = data;
+   struct hmm_devmem *devmem =
+   container_of(page->pgmap, struct hmm_devmem, pagemap);
 
devmem->ops->free(devmem, page);
 }
@@ -1439,7 +1441,6 @@ struct hmm_devmem *hmm_devmem_add(const struct 
hmm_devmem_ops *ops,
devmem->pagemap.ops = &hmm_pagemap_ops;
devmem->pagemap.altmap_valid = false;
devmem->pagemap.ref = &devmem->ref;
-   devmem->pagemap.data = devmem;
 
result = devm_memremap_pages(devmem->device, &devmem->pagemap);
if (IS_ERR(result))
-- 
2.20.1


[Nouveau] [PATCH 09/25] memremap: pass a struct dev_pagemap to ->kill and ->cleanup

2019-06-17 Thread Christoph Hellwig
Passing the actual typed structure leads to more understandable code
than just passing the ref member.

Reported-by: Logan Gunthorpe 
Signed-off-by: Christoph Hellwig 
Reviewed-by: Logan Gunthorpe 
Reviewed-by: Jason Gunthorpe 
Reviewed-by: Dan Williams 
---
 drivers/dax/device.c  | 12 ++--
 drivers/nvdimm/pmem.c | 18 +-
 drivers/pci/p2pdma.c  | 11 ++-
 include/linux/memremap.h  |  4 ++--
 kernel/memremap.c |  8 
 mm/hmm.c  | 10 +-
 tools/testing/nvdimm/test/iomap.c |  4 ++--
 7 files changed, 34 insertions(+), 33 deletions(-)

diff --git a/drivers/dax/device.c b/drivers/dax/device.c
index cd483050a775..17b46c1a76b4 100644
--- a/drivers/dax/device.c
+++ b/drivers/dax/device.c
@@ -27,21 +27,21 @@ static void dev_dax_percpu_release(struct percpu_ref *ref)
complete(&dev_dax->cmp);
 }
 
-static void dev_dax_percpu_exit(struct percpu_ref *ref)
+static void dev_dax_percpu_exit(struct dev_pagemap *pgmap)
 {
-   struct dev_dax *dev_dax = ref_to_dev_dax(ref);
+   struct dev_dax *dev_dax = container_of(pgmap, struct dev_dax, pgmap);
 
dev_dbg(&dev_dax->dev, "%s\n", __func__);
wait_for_completion(&dev_dax->cmp);
-   percpu_ref_exit(ref);
+   percpu_ref_exit(pgmap->ref);
 }
 
-static void dev_dax_percpu_kill(struct percpu_ref *ref)
+static void dev_dax_percpu_kill(struct dev_pagemap *pgmap)
 {
-   struct dev_dax *dev_dax = ref_to_dev_dax(ref);
+   struct dev_dax *dev_dax = container_of(pgmap, struct dev_dax, pgmap);
 
dev_dbg(&dev_dax->dev, "%s\n", __func__);
-   percpu_ref_kill(ref);
+   percpu_ref_kill(pgmap->ref);
 }
 
 static int check_vma(struct dev_dax *dev_dax, struct vm_area_struct *vma,
diff --git a/drivers/nvdimm/pmem.c b/drivers/nvdimm/pmem.c
index 1a9986dc4dc6..469a0f5b3380 100644
--- a/drivers/nvdimm/pmem.c
+++ b/drivers/nvdimm/pmem.c
@@ -311,24 +311,24 @@ static const struct attribute_group 
*pmem_attribute_groups[] = {
NULL,
 };
 
-static void pmem_pagemap_cleanup(struct percpu_ref *ref)
+static void pmem_pagemap_cleanup(struct dev_pagemap *pgmap)
 {
-   struct request_queue *q;
+   struct request_queue *q =
+   container_of(pgmap->ref, struct request_queue, q_usage_counter);
 
-   q = container_of(ref, typeof(*q), q_usage_counter);
blk_cleanup_queue(q);
 }
 
-static void pmem_release_queue(void *ref)
+static void pmem_release_queue(void *pgmap)
 {
-   pmem_pagemap_cleanup(ref);
+   pmem_pagemap_cleanup(pgmap);
 }
 
-static void pmem_pagemap_kill(struct percpu_ref *ref)
+static void pmem_pagemap_kill(struct dev_pagemap *pgmap)
 {
-   struct request_queue *q;
+   struct request_queue *q =
+   container_of(pgmap->ref, struct request_queue, q_usage_counter);
 
-   q = container_of(ref, typeof(*q), q_usage_counter);
blk_freeze_queue_start(q);
 }
 
@@ -443,7 +443,7 @@ static int pmem_attach_disk(struct device *dev,
memcpy(&bb_res, &pmem->pgmap.res, sizeof(bb_res));
} else {
if (devm_add_action_or_reset(dev, pmem_release_queue,
-   &q->q_usage_counter))
+   &pmem->pgmap))
return -ENOMEM;
addr = devm_memremap(dev, pmem->phys_addr,
pmem->size, ARCH_MEMREMAP_PMEM);
diff --git a/drivers/pci/p2pdma.c b/drivers/pci/p2pdma.c
index e083567d26ef..48a88158e46a 100644
--- a/drivers/pci/p2pdma.c
+++ b/drivers/pci/p2pdma.c
@@ -90,17 +90,18 @@ static void pci_p2pdma_percpu_release(struct percpu_ref *ref)
complete(&p2p_pgmap->ref_done);
 }
 
-static void pci_p2pdma_percpu_kill(struct percpu_ref *ref)
+static void pci_p2pdma_percpu_kill(struct dev_pagemap *pgmap)
 {
-   percpu_ref_kill(ref);
+   percpu_ref_kill(pgmap->ref);
 }
 
-static void pci_p2pdma_percpu_cleanup(struct percpu_ref *ref)
+static void pci_p2pdma_percpu_cleanup(struct dev_pagemap *pgmap)
 {
-   struct p2pdma_pagemap *p2p_pgmap = to_p2p_pgmap(ref);
+   struct p2pdma_pagemap *p2p_pgmap =
+   container_of(pgmap, struct p2pdma_pagemap, pgmap);
 
wait_for_completion(&p2p_pgmap->ref_done);
-   percpu_ref_exit(ref);
+   percpu_ref_exit(&p2p_pgmap->ref);
 }
 
 static void pci_p2pdma_release(void *data)
diff --git a/include/linux/memremap.h b/include/linux/memremap.h
index 1cdcfd595770..cec02d5400f1 100644
--- a/include/linux/memremap.h
+++ b/include/linux/memremap.h
@@ -74,12 +74,12 @@ struct dev_pagemap_ops {
/*
 * Transition the refcount in struct dev_pagemap to the dead state.
 */
-   void (*kill)(struct percpu_ref *ref);
+   void (*kill)(struct dev_pagemap *pgmap);
 
/*
 * Wait for refcount in struct dev_pagemap to be idle and reap it.
 */
-   void (*cleanup)(struct percpu_ref *ref);
+   void (*cleanup)(struct dev_pagemap *pgmap);

[Nouveau] [PATCH 04/25] mm: don't clear ->mapping in hmm_devmem_free

2019-06-17 Thread Christoph Hellwig
->mapping isn't even used by HMM users, and the field at the same offset
in the zone_device part of the union is declared as pad.  (Which btw is
rather confusing, as DAX uses ->pgmap and ->mapping from two different
sides of the union, but DAX doesn't use hmm_devmem_free).

Signed-off-by: Christoph Hellwig 
Reviewed-by: Jason Gunthorpe 
Reviewed-by: John Hubbard 
---
 mm/hmm.c | 2 --
 1 file changed, 2 deletions(-)

diff --git a/mm/hmm.c b/mm/hmm.c
index dc251c51803a..64e788bb1211 100644
--- a/mm/hmm.c
+++ b/mm/hmm.c
@@ -1380,8 +1380,6 @@ static void hmm_devmem_free(struct page *page, void *data)
 {
struct hmm_devmem *devmem = data;
 
-   page->mapping = NULL;
-
devmem->ops->free(devmem, page);
 }
 
-- 
2.20.1

___
Nouveau mailing list
Nouveau@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/nouveau

[Nouveau] [PATCH 05/25] mm: export alloc_pages_vma

2019-06-17 Thread Christoph Hellwig
nouveau is currently using this through an odd hmm wrapper, and I plan
to switch it to the real thing later in this series.

Signed-off-by: Christoph Hellwig 
Reviewed-by: John Hubbard 
---
 mm/mempolicy.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 01600d80ae01..f9023b5fba37 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -2098,6 +2098,7 @@ alloc_pages_vma(gfp_t gfp, int order, struct vm_area_struct *vma,
 out:
return page;
 }
+EXPORT_SYMBOL_GPL(alloc_pages_vma);
 
 /**
  * alloc_pages_current - Allocate pages.
-- 
2.20.1


[Nouveau] [PATCH 07/25] memremap: validate the pagemap type passed to devm_memremap_pages

2019-06-17 Thread Christoph Hellwig
Most pgmap types are only supported when certain config options are
enabled.  Check for a type that is valid for the current configuration
before setting up the pagemap.

Signed-off-by: Christoph Hellwig 
---
 kernel/memremap.c | 27 +++
 1 file changed, 27 insertions(+)

diff --git a/kernel/memremap.c b/kernel/memremap.c
index 6e1970719dc2..6a2dd31a6250 100644
--- a/kernel/memremap.c
+++ b/kernel/memremap.c
@@ -157,6 +157,33 @@ void *devm_memremap_pages(struct device *dev, struct dev_pagemap *pgmap)
pgprot_t pgprot = PAGE_KERNEL;
int error, nid, is_ram;
 
+   switch (pgmap->type) {
+   case MEMORY_DEVICE_PRIVATE:
+   if (!IS_ENABLED(CONFIG_DEVICE_PRIVATE)) {
+   WARN(1, "Device private memory not supported\n");
+   return ERR_PTR(-EINVAL);
+   }
+   break;
+   case MEMORY_DEVICE_PUBLIC:
+   if (!IS_ENABLED(CONFIG_DEVICE_PUBLIC)) {
+   WARN(1, "Device public memory not supported\n");
+   return ERR_PTR(-EINVAL);
+   }
+   break;
+   case MEMORY_DEVICE_FS_DAX:
+   if (!IS_ENABLED(CONFIG_ZONE_DEVICE) ||
+   IS_ENABLED(CONFIG_FS_DAX_LIMITED)) {
+   WARN(1, "File system DAX not supported\n");
+   return ERR_PTR(-EINVAL);
+   }
+   break;
+   case MEMORY_DEVICE_PCI_P2PDMA:
+   break;
+   default:
+   WARN(1, "Invalid pgmap type %d\n", pgmap->type);
+   break;
+   }
+
if (!pgmap->ref || !pgmap->kill || !pgmap->cleanup) {
WARN(1, "Missing reference count teardown definition\n");
return ERR_PTR(-EINVAL);
-- 
2.20.1


[Nouveau] [PATCH 03/25] mm: remove hmm_devmem_add_resource

2019-06-17 Thread Christoph Hellwig
This function has never been used since it was first added to the kernel
more than a year and a half ago, and if we ever grow a consumer of the
MEMORY_DEVICE_PUBLIC infrastructure it can easily use devm_memremap_pages
directly.

Signed-off-by: Christoph Hellwig 
Reviewed-by: Jason Gunthorpe 
Reviewed-by: John Hubbard 
---
 include/linux/hmm.h |  3 ---
 mm/hmm.c | 50 -
 2 files changed, 53 deletions(-)

diff --git a/include/linux/hmm.h b/include/linux/hmm.h
index c92f353d701a..31e1c5347331 100644
--- a/include/linux/hmm.h
+++ b/include/linux/hmm.h
@@ -724,9 +724,6 @@ struct hmm_devmem {
 struct hmm_devmem *hmm_devmem_add(const struct hmm_devmem_ops *ops,
  struct device *device,
  unsigned long size);
-struct hmm_devmem *hmm_devmem_add_resource(const struct hmm_devmem_ops *ops,
-  struct device *device,
-  struct resource *res);
 
 /*
  * hmm_devmem_page_set_drvdata - set per-page driver data field
diff --git a/mm/hmm.c b/mm/hmm.c
index f3350fc567ab..dc251c51803a 100644
--- a/mm/hmm.c
+++ b/mm/hmm.c
@@ -1475,54 +1475,4 @@ struct hmm_devmem *hmm_devmem_add(const struct hmm_devmem_ops *ops,
return devmem;
 }
 EXPORT_SYMBOL_GPL(hmm_devmem_add);
-
-struct hmm_devmem *hmm_devmem_add_resource(const struct hmm_devmem_ops *ops,
-  struct device *device,
-  struct resource *res)
-{
-   struct hmm_devmem *devmem;
-   void *result;
-   int ret;
-
-   if (res->desc != IORES_DESC_DEVICE_PUBLIC_MEMORY)
-   return ERR_PTR(-EINVAL);
-
-   dev_pagemap_get_ops();
-
-   devmem = devm_kzalloc(device, sizeof(*devmem), GFP_KERNEL);
-   if (!devmem)
-   return ERR_PTR(-ENOMEM);
-
-   init_completion(&devmem->completion);
-   devmem->pfn_first = -1UL;
-   devmem->pfn_last = -1UL;
-   devmem->resource = res;
-   devmem->device = device;
-   devmem->ops = ops;
-
-   ret = percpu_ref_init(&devmem->ref, &hmm_devmem_ref_release,
- 0, GFP_KERNEL);
-   if (ret)
-   return ERR_PTR(ret);
-
-   devmem->pfn_first = devmem->resource->start >> PAGE_SHIFT;
-   devmem->pfn_last = devmem->pfn_first +
-  (resource_size(devmem->resource) >> PAGE_SHIFT);
-   devmem->page_fault = hmm_devmem_fault;
-
-   devmem->pagemap.type = MEMORY_DEVICE_PUBLIC;
-   devmem->pagemap.res = *devmem->resource;
-   devmem->pagemap.page_free = hmm_devmem_free;
-   devmem->pagemap.altmap_valid = false;
-   devmem->pagemap.ref = &devmem->ref;
-   devmem->pagemap.data = devmem;
-   devmem->pagemap.kill = hmm_devmem_ref_kill;
-   devmem->pagemap.cleanup = hmm_devmem_ref_exit;
-
-   result = devm_memremap_pages(devmem->device, &devmem->pagemap);
-   if (IS_ERR(result))
-   return result;
-   return devmem;
-}
-EXPORT_SYMBOL_GPL(hmm_devmem_add_resource);
 #endif /* CONFIG_DEVICE_PRIVATE || CONFIG_DEVICE_PUBLIC */
-- 
2.20.1


dev_pagemap related cleanups v2

2019-06-17 Thread Christoph Hellwig
Hi Dan, Jérôme and Jason,

below is a series that cleans up the dev_pagemap interface so that
it is more easily usable. This removes the need to wrap it in hmm,
and thus allows us to kill a lot of code.

Note: this series is on top of the rdma/hmm branch + the dev_pagemap
release fix series from Dan that went into 5.2-rc5.

Git tree:

git://git.infradead.org/users/hch/misc.git hmm-devmem-cleanup.2

Gitweb:


http://git.infradead.org/users/hch/misc.git/shortlog/refs/heads/hmm-devmem-cleanup.2

Changes since v1:
 - rebase
 - also switch p2pdma to the internal refcount
 - add type checking for pgmap->type
 - rename the migrate method to migrate_to_ram
 - cleanup the altmap_valid flag
 - various tidbits from the reviews