Hi Nick,

On 10/06/2019 at 05:08, Nicholas Piggin wrote:
__ioremap_at error handling is wonky: it requires the caller to clean up
after it. Implement a helper that does the map and the error cleanup, and
remove the requirement from the caller.

Signed-off-by: Nicholas Piggin <npig...@gmail.com>
---

This series is a different approach to the problem: it uses the generic
ioremap_page_range directly, which reduces the added code and moves the
radix-specific code into radix files. Thanks to Christophe for pointing
out various problems with the previous patch.

  arch/powerpc/mm/pgtable_64.c | 27 ++++++++++++++++++++-------
  1 file changed, 20 insertions(+), 7 deletions(-)

diff --git a/arch/powerpc/mm/pgtable_64.c b/arch/powerpc/mm/pgtable_64.c
index d2d976ff8a0e..6bd3660388aa 100644
--- a/arch/powerpc/mm/pgtable_64.c
+++ b/arch/powerpc/mm/pgtable_64.c

[...]

@@ -182,8 +197,6 @@ void __iomem * __ioremap_caller(phys_addr_t addr, unsigned long size,
 	area->phys_addr = paligned;
 	ret = __ioremap_at(paligned, area->addr, size, prot);
-	if (!ret)
-		vunmap(area->addr);

AFAICS, ioremap_range() calls unmap_kernel_range() in the error case,
but I can't see that that function does the vunmap(), does it? If not,
who frees the area allocated by __get_vm_area_caller()? A rough sketch
of the kind of error path I would expect is below.
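
To illustrate what I mean (a sketch only, not your patch: the helper name
map_or_free() is made up, ioremap_page_range() is the generic helper your
cover letter mentions, and IOREMAP_BASE/IOREMAP_END stand in for whatever
range the arch actually uses): unmap_kernel_range() only tears down the
page table entries, so on failure the struct vm_struct returned by
__get_vm_area_caller() still has to be released, e.g. with free_vm_area()
(or a vunmap() of the whole mapping, as the removed lines did):

#include <linux/io.h>
#include <linux/vmalloc.h>

/*
 * Rough sketch, not the actual patch: map a physical range into a freshly
 * allocated vm area and clean everything up if the mapping fails.
 */
static void __iomem *map_or_free(phys_addr_t paligned, unsigned long size,
				 pgprot_t prot, const void *caller)
{
	struct vm_struct *area;
	unsigned long va;

	area = __get_vm_area_caller(size, VM_IOREMAP, IOREMAP_BASE,
				    IOREMAP_END, caller);
	if (!area)
		return NULL;

	area->phys_addr = paligned;
	va = (unsigned long)area->addr;

	if (ioremap_page_range(va, va + size, paligned, prot)) {
		/*
		 * Tear down any partial mapping, then free the vm_struct:
		 * unmap_kernel_range() alone does not release the area
		 * allocated by __get_vm_area_caller().
		 */
		unmap_kernel_range(va, size);
		free_vm_area(area);
		return NULL;
	}

	return (void __iomem *)area->addr;
}

Otherwise I don't see anything on the failure path that releases the area.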

Christophe
