On Wed, 03 Mar 2021 11:29:34 +0000,
Will Deacon <w...@kernel.org> wrote:
> 
> On Wed, Mar 03, 2021 at 10:42:25AM +0800, Jia He wrote:
> > If the start addr is not aligned with the granule size of that level,
> > the loop step size should be adjusted to the next boundary instead of
> > a simple kvm_granule_size(level) increment. Otherwise, some MMU
> > entries might miss the chance to be walked through.
> > E.g., assume the unmap range [data->addr, data->end] is
> > [0xff00ab2000, 0xff00cb2000], walked at level 2 and NOT a block
> > mapping. The first part of that range covered by the pmd entry is
> > [0xff00ab2000, 0xff00c00000], and the pmd value is 0x83fbd2c1002 (not
> > a valid entry). In this case, data->addr should be adjusted to
> > 0xff00c00000 instead of 0xff00cb2000.
> > 
> > Without this fix, a userspace "segmentation fault" error can easily
> > be triggered by running simple gVisor runsc cases on an Ampere Altra
> > server:
> >     docker run --runtime=runsc -it --rm  ubuntu /bin/bash
> > 
> > In container:
> >     for i in `seq 1 100`;do ls;done
> > 
> > Reported-by: Howard Zhang <howard.zh...@arm.com>
> > Signed-off-by: Jia He <justin...@arm.com>
> > ---
> >  arch/arm64/kvm/hyp/pgtable.c | 1 +
> >  1 file changed, 1 insertion(+)
> > 
> > diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
> > index bdf8e55ed308..4d99d07c610c 100644
> > --- a/arch/arm64/kvm/hyp/pgtable.c
> > +++ b/arch/arm64/kvm/hyp/pgtable.c
> > @@ -225,6 +225,7 @@ static inline int __kvm_pgtable_visit(struct kvm_pgtable_walk_data *data,
> >             goto out;
> >  
> >     if (!table) {
> > +           data->addr = ALIGN_DOWN(data->addr, kvm_granule_size(level));
> >             data->addr += kvm_granule_size(level);
> 
> Can you replace both of these lines with:
> 
>       data->addr = ALIGN(data->addr, kvm_granule_size(level));
> 
> instead?

Seems like a good option. I also took the liberty to rewrite the
commit message in an effort to make it a bit clearer.
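
As a sanity check, here's a quick userspace sketch of the stepping
arithmetic (not kernel code; it assumes a 4K page size, hence a 2MiB
level-2 granule, takes the misaligned address from the report above,
and restates the ALIGN macros locally so the snippet stands alone):

#include <stdio.h>

#define GRANULE			0x200000UL	/* assumed 2MiB level-2 granule */
#define ALIGN_DOWN(x, a)	((x) & ~((a) - 1))
#define ALIGN(x, a)		(((x) + (a) - 1) & ~((a) - 1))

int main(void)
{
	unsigned long addr = 0xff00ab2000UL;	/* misaligned start from the report */

	/* current stepping: lands in the middle of the next 2MiB entry */
	printf("addr + granule:             0x%lx\n", addr + GRANULE);
	/* v1 of the patch: round down, then step to the next boundary */
	printf("ALIGN_DOWN(addr) + granule: 0x%lx\n", ALIGN_DOWN(addr, GRANULE) + GRANULE);
	/* the single ALIGN(): same boundary for this misaligned address */
	printf("ALIGN(addr, granule):       0x%lx\n", ALIGN(addr, GRANULE));

	return 0;
}

which on a 64-bit box prints 0xff00cb2000, 0xff00c00000 and
0xff00c00000 respectively.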

Jia, please let me know if you are OK with these cosmetic changes.


Thanks,

        M.

From e0524b41a71e0f17d6dc8f197e421e677d584e72 Mon Sep 17 00:00:00 2001
From: Jia He <justin...@arm.com>
Date: Wed, 3 Mar 2021 10:42:25 +0800
Subject: [PATCH] KVM: arm64: Fix range alignment when walking page tables

When walking the page tables at a given level, and if the start
address for the range isn't aligned for that level, we propagate
the misalignment on each iteration at that level.

This results in the walker ignoring a number of entries (depending
on the original misalignment) on each subsequent iteration.

Properly aligning the address before the next iteration addresses the
issue.
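
As an illustration (a userspace sketch, not part of the change itself;
it assumes a 4K page size, hence a 2MiB level-2 granule, reuses the
range from the original report, and treats every entry as the non-table
case for simplicity), this is which level-2 slots each stepping visits:

#include <stdio.h>

#define GRANULE			0x200000UL	/* assumed 2MiB level-2 granule */
#define ALIGN_DOWN(x, a)	((x) & ~((a) - 1))

/* walk [addr, end) one level-2 entry at a time, printing each slot base */
static void walk(unsigned long addr, unsigned long end, int aligned)
{
	while (addr < end) {
		printf("  visits slot 0x%lx\n", ALIGN_DOWN(addr, GRANULE));
		if (aligned)
			/* step to the next level-2 boundary */
			addr = ALIGN_DOWN(addr, GRANULE) + GRANULE;
		else
			/* old stepping: the misalignment is carried over */
			addr += GRANULE;
	}
}

int main(void)
{
	printf("old stepping:\n");	/* only visits slot 0xff00a00000 */
	walk(0xff00ab2000UL, 0xff00cb2000UL, 0);
	printf("aligned stepping:\n");	/* visits 0xff00a00000, then 0xff00c00000 */
	walk(0xff00ab2000UL, 0xff00cb2000UL, 1);
	return 0;
}

With the old stepping, the slot at 0xff00c00000 is never visited even
though part of the range falls inside it; those are exactly the entries
the walker ends up ignoring.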

Cc: sta...@vger.kernel.org
Reported-by: Howard Zhang <howard.zh...@arm.com>
Signed-off-by: Jia He <justin...@arm.com>
Fixes: b1e57de62cfb ("KVM: arm64: Add stand-alone page-table walker infrastructure")
[maz: rewrite commit message]
Signed-off-by: Marc Zyngier <m...@kernel.org>
Link: https://lore.kernel.org/r/20210303024225.2591-1-justin...@arm.com
---
 arch/arm64/kvm/hyp/pgtable.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index 4d177ce1d536..124cd2f93020 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -223,7 +223,7 @@ static inline int __kvm_pgtable_visit(struct kvm_pgtable_walk_data *data,
                goto out;
 
        if (!table) {
-               data->addr += kvm_granule_size(level);
+               data->addr = ALIGN(data->addr, kvm_granule_size(level));
                goto out;
        }
 
-- 
2.30.0


-- 
Without deviation from the norm, progress is not possible.