From: Mika Penttilä <mika.pentt...@nextfour.com>

This makes set_memory_xx() consistent with x86 when called with
numpages == 0: the call becomes a no-op and returns 0.

The arm64 part is rebased on 4.5.0-rc1 with Ard's patch
lkml.kernel.org/g/<1453125665-26627-1-git-send-email-ard.biesheu...@linaro.org>
applied.

Signed-off-by: Mika Penttilä <mika.pentt...@nextfour.com>
Reviewed-by: Laura Abbott <labb...@redhat.com>
Acked-by: David Rientjes <rient...@google.com>
---
 arch/arm/mm/pageattr.c   | 3 +++
 arch/arm64/mm/pageattr.c | 3 +++
 2 files changed, 6 insertions(+)

diff --git a/arch/arm/mm/pageattr.c b/arch/arm/mm/pageattr.c
index cf30daf..d19b1ad 100644
--- a/arch/arm/mm/pageattr.c
+++ b/arch/arm/mm/pageattr.c
@@ -49,6 +49,9 @@ static int change_memory_common(unsigned long addr, int numpages,
 		WARN_ON_ONCE(1);
 	}
 
+	if (!numpages)
+		return 0;
+
 	if (start < MODULES_VADDR || start >= MODULES_END)
 		return -EINVAL;
 
diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
index 1360a02..b582fc2 100644
--- a/arch/arm64/mm/pageattr.c
+++ b/arch/arm64/mm/pageattr.c
@@ -53,6 +53,9 @@ static int change_memory_common(unsigned long addr, int numpages,
 		WARN_ON_ONCE(1);
 	}
 
+	if (!numpages)
+		return 0;
+
 	/*
 	 * Kernel VA mappings are always live, and splitting live section
 	 * mappings into page mappings may cause TLB conflicts. This means
-- 
1.9.1