Re: [PATCH 2/5] KVM: arm64: Avoid mapping size adjustment on permission fault

2021-07-23 Thread Marc Zyngier
On Fri, 23 Jul 2021 16:55:39 +0100,
Alexandru Elisei  wrote:
> 
> Hi Marc,
> 
> On 7/17/21 10:55 AM, Marc Zyngier wrote:
> > Since we only support PMD-sized mappings for THP, getting
> > a permission fault on a level that results in a mapping
> > being larger than PAGE_SIZE is a sure indication that we have
> > already upgraded our mapping to a PMD.
> >
> > In this case, there is no need to try and parse userspace page
> > tables, as the fault information already tells us everything.
> >
> > Signed-off-by: Marc Zyngier 
> > ---
> >  arch/arm64/kvm/mmu.c | 11 ---
> >  1 file changed, 8 insertions(+), 3 deletions(-)
> >
> > diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
> > index db6314b93e99..c036a480ca27 100644
> > --- a/arch/arm64/kvm/mmu.c
> > +++ b/arch/arm64/kvm/mmu.c
> > @@ -1088,9 +1088,14 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
> >  * If we are not forced to use page mapping, check if we are
> >  * backed by a THP and thus use block mapping if possible.
> >  */
> > -   if (vma_pagesize == PAGE_SIZE && !(force_pte || device))
> > -   vma_pagesize = transparent_hugepage_adjust(kvm, memslot, hva,
> > -  &pfn, &fault_ipa);
> > +   if (vma_pagesize == PAGE_SIZE && !force_pte) {
> 
> Looks like now it's possible to call transparent_hugepage_adjust()
> for devices (if fault_status != FSC_PERM). Commit 2aa53d68cee6
> ("KVM: arm64: Try stage2 block mapping for host device MMIO") makes
> a good case for the !device check. Was the check removed
> unintentionally?

That's what stupid bugs are made of. I must have resolved a rebase
conflict the wrong way, and lost this crucial bit. Thanks for spotting
this! Now fixed.

> 
> > +   if (fault_status == FSC_PERM && fault_granule > PAGE_SIZE)
> > +   vma_pagesize = fault_granule;
> > +   else
> > +   vma_pagesize = transparent_hugepage_adjust(kvm, memslot,
> > +  hva, &pfn,
> > +  &fault_ipa);
> > +   }
> 
> This change makes sense to me - we can only get stage 2 permission
> faults on a leaf entry since stage 2 tables don't have the
> APTable/XNTable/PXNTable bits. The biggest block mapping size that
> we support at stage 2 is PMD size (from
> transparent_hugepage_adjust()), therefore if fault_granule is larger
> than PAGE_SIZE, then it must be PMD_SIZE.

Yup, exactly.

Thanks again,

M.

-- 
Without deviation from the norm, progress is not possible.
___
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm


Re: [PATCH 2/5] KVM: arm64: Avoid mapping size adjustment on permission fault

2021-07-23 Thread Alexandru Elisei
Hi Marc,

On 7/17/21 10:55 AM, Marc Zyngier wrote:
> Since we only support PMD-sized mappings for THP, getting
> a permission fault on a level that results in a mapping
> being larger than PAGE_SIZE is a sure indication that we have
> already upgraded our mapping to a PMD.
>
> In this case, there is no need to try and parse userspace page
> tables, as the fault information already tells us everything.
>
> Signed-off-by: Marc Zyngier 
> ---
>  arch/arm64/kvm/mmu.c | 11 ---
>  1 file changed, 8 insertions(+), 3 deletions(-)
>
> diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
> index db6314b93e99..c036a480ca27 100644
> --- a/arch/arm64/kvm/mmu.c
> +++ b/arch/arm64/kvm/mmu.c
> @@ -1088,9 +1088,14 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>* If we are not forced to use page mapping, check if we are
>* backed by a THP and thus use block mapping if possible.
>*/
> - if (vma_pagesize == PAGE_SIZE && !(force_pte || device))
> - vma_pagesize = transparent_hugepage_adjust(kvm, memslot, hva,
> -&pfn, &fault_ipa);
> + if (vma_pagesize == PAGE_SIZE && !force_pte) {

Looks like now it's possible to call transparent_hugepage_adjust() for devices (if
fault_status != FSC_PERM). Commit 2aa53d68cee6 ("KVM: arm64: Try stage2 block
mapping for host device MMIO") makes a good case for the !device check. Was the
check removed unintentionally?

> + if (fault_status == FSC_PERM && fault_granule > PAGE_SIZE)
> + vma_pagesize = fault_granule;
> + else
> + vma_pagesize = transparent_hugepage_adjust(kvm, memslot,
> +hva, &pfn,
> +&fault_ipa);
> + }

This change makes sense to me - we can only get stage 2 permission faults on a
leaf entry since stage 2 tables don't have the APTable/XNTable/PXNTable bits. The
biggest block mapping size that we support at stage 2 is PMD size (from
transparent_hugepage_adjust()), therefore if fault_granule is larger than
PAGE_SIZE, then it must be PMD_SIZE.

Thanks,

Alex

>  
>   if (fault_status != FSC_PERM && !device && kvm_has_mte(kvm)) {
>   /* Check the VMM hasn't introduced a new VM_SHARED VMA */


[PATCH 2/5] KVM: arm64: Avoid mapping size adjustment on permission fault

2021-07-17 Thread Marc Zyngier
Since we only support PMD-sized mappings for THP, getting
a permission fault on a level that results in a mapping
being larger than PAGE_SIZE is a sure indication that we have
already upgraded our mapping to a PMD.

In this case, there is no need to try and parse userspace page
tables, as the fault information already tells us everything.

Signed-off-by: Marc Zyngier 
---
 arch/arm64/kvm/mmu.c | 11 ---
 1 file changed, 8 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index db6314b93e99..c036a480ca27 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1088,9 +1088,14 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 * If we are not forced to use page mapping, check if we are
 * backed by a THP and thus use block mapping if possible.
 */
-   if (vma_pagesize == PAGE_SIZE && !(force_pte || device))
-   vma_pagesize = transparent_hugepage_adjust(kvm, memslot, hva,
-  &pfn, &fault_ipa);
+   if (vma_pagesize == PAGE_SIZE && !force_pte) {
+   if (fault_status == FSC_PERM && fault_granule > PAGE_SIZE)
+   vma_pagesize = fault_granule;
+   else
+   vma_pagesize = transparent_hugepage_adjust(kvm, memslot,
+  hva, &pfn,
+  &fault_ipa);
+   }
 
if (fault_status != FSC_PERM && !device && kvm_has_mte(kvm)) {
/* Check the VMM hasn't introduced a new VM_SHARED VMA */
-- 
2.30.2
