Clean up what we partially added in case vmemmap_populate() fails part
way through. For vmem, this is already handled by vmem_add_mapping().
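
For reference, the vmem path already cleans up after itself in roughly
the following way (a simplified sketch of vmem_add_mapping(); the range
checks and the exact name of the add helper in the tree may differ
slightly, especially after this series):

	int vmem_add_mapping(unsigned long start, unsigned long size)
	{
		int ret;

		mutex_lock(&vmem_mutex);
		/* Establish the identity mapping for the new range. */
		ret = vmem_add_range(start, size);
		/* On failure, tear down whatever was partially mapped. */
		if (ret)
			vmem_remove_range(start, size);
		mutex_unlock(&vmem_mutex);
		return ret;
	}

vmemmap_populate() now follows the same pattern.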

Cc: Heiko Carstens <heiko.carst...@de.ibm.com>
Cc: Vasily Gorbik <g...@linux.ibm.com>
Cc: Christian Borntraeger <borntrae...@de.ibm.com>
Cc: Gerald Schaefer <gerald.schae...@de.ibm.com>
Signed-off-by: David Hildenbrand <da...@redhat.com>
---
 arch/s390/mm/vmem.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/arch/s390/mm/vmem.c b/arch/s390/mm/vmem.c
index 43fe1e2eb90ea..be32a38bb91fd 100644
--- a/arch/s390/mm/vmem.c
+++ b/arch/s390/mm/vmem.c
@@ -332,8 +332,13 @@ static void vmem_remove_range(unsigned long start, unsigned long size)
 int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node,
                struct vmem_altmap *altmap)
 {
+       int ret;
+
        /* We don't care about the node, just use NUMA_NO_NODE on allocations */
-       return add_pagetable(start, end, false);
+       ret = add_pagetable(start, end, false);
+       if (ret)
+               remove_pagetable(start, end, false);
+       return ret;
 }
 
 void vmemmap_free(unsigned long start, unsigned long end,
-- 
2.26.2