Re: [PATCH] KVM: arm64: fix ptrauth ID register masking logic

2019-05-02 Thread Dave Martin
On Wed, May 01, 2019 at 05:20:49PM +0100, Marc Zyngier wrote:
> On 01/05/2019 17:10, Kristina Martsenko wrote:
> > When a VCPU doesn't have pointer auth, we want to hide all four pointer
> > auth ID register fields from the guest, not just one of them.
> > 
> > Fixes: 384b40caa8af ("KVM: arm/arm64: Context-switch ptrauth registers")
> > Reported-by: Andrew Murray 
> > Fsck-up-by: Marc Zyngier 
> > Signed-off-by: Kristina Martsenko 
> > ---
> >  arch/arm64/kvm/sys_regs.c | 8 
> >  1 file changed, 4 insertions(+), 4 deletions(-)
> > 
> > diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> > index 9d02643bc601..857b226bcdde 100644
> > --- a/arch/arm64/kvm/sys_regs.c
> > +++ b/arch/arm64/kvm/sys_regs.c
> > @@ -1088,10 +1088,10 @@ static u64 read_id_reg(const struct kvm_vcpu *vcpu,
> > if (id == SYS_ID_AA64PFR0_EL1 && !vcpu_has_sve(vcpu)) {
> > val &= ~(0xfUL << ID_AA64PFR0_SVE_SHIFT);
> > } else if (id == SYS_ID_AA64ISAR1_EL1 && !vcpu_has_ptrauth(vcpu)) {
> > -   val &= ~(0xfUL << ID_AA64ISAR1_APA_SHIFT) |
> > -   (0xfUL << ID_AA64ISAR1_API_SHIFT) |
> > -   (0xfUL << ID_AA64ISAR1_GPA_SHIFT) |
> > -   (0xfUL << ID_AA64ISAR1_GPI_SHIFT);
> > +   val &= ~((0xfUL << ID_AA64ISAR1_APA_SHIFT) |
> > +(0xfUL << ID_AA64ISAR1_API_SHIFT) |
> > +(0xfUL << ID_AA64ISAR1_GPA_SHIFT) |
> > +(0xfUL << ID_AA64ISAR1_GPI_SHIFT));
> > }
> >  
> > return val;
> > 
> 
> Applied and pushed to -next. Thanks Andrew for reporting it, and
> Kristina for putting me right!

I was worried this was my mistake... but it looks like my original
suggestion did have the extra ().
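
(For anyone else who trips over this: ~ binds tighter than |, so without
the parentheses only the APA nibble was actually cleared. A standalone
illustration, using the ID_AA64ISAR1 field offsets but otherwise assuming
nothing from the kernel headers:

	#include <stdio.h>

	#define APA_SHIFT	 4	/* ID_AA64ISAR1_EL1.APA, bits [7:4] */
	#define API_SHIFT	 8	/* ID_AA64ISAR1_EL1.API, bits [11:8] */
	#define GPA_SHIFT	24	/* ID_AA64ISAR1_EL1.GPA, bits [27:24] */
	#define GPI_SHIFT	28	/* ID_AA64ISAR1_EL1.GPI, bits [31:28] */

	int main(void)
	{
		unsigned long val = ~0UL;

		/* old: ~ applies to the APA mask only */
		unsigned long old = val & (~(0xfUL << APA_SHIFT) |
					    (0xfUL << API_SHIFT) |
					    (0xfUL << GPA_SHIFT) |
					    (0xfUL << GPI_SHIFT));
		/* new: ~ applies to the combined mask */
		unsigned long new = val & ~((0xfUL << APA_SHIFT) |
					    (0xfUL << API_SHIFT) |
					    (0xfUL << GPA_SHIFT) |
					    (0xfUL << GPI_SHIFT));

		printf("bits cleared, old: %016lx\n", ~old);	/* 00000000000000f0 */
		printf("bits cleared, new: %016lx\n", ~new);	/* 00000000ff000ff0 */
		return 0;
	}

i.e. the old expression only hid APA, which is exactly the bug the commit
message describes.)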

Anyway, FWIW,

Acked-by: Dave Martin 


Re: [PATCH v5 5/5] KVM: arm/arm64: support chained PMU counters

2019-05-02 Thread Julien Thierry
Hi Andrew,

I have a few remarks. I don't think any of them are major, nor that the
approach needs to be changed.

On 01/05/2019 17:31, Andrew Murray wrote:
> ARMv8 provides support for chained PMU counters: when an event type of
> 0x001E is set for an odd-numbered counter, that counter will increment
> by one for each overflow of the preceding even-numbered counter. Let's
> emulate this in KVM by creating a 64 bit perf counter when a user
> chains two emulated counters together.
> 
> For chained events we only support generating an overflow interrupt
> on the high counter. We use the attributes of the low counter to
> determine the attributes of the perf event.
> 
> Suggested-by: Marc Zyngier 
> Signed-off-by: Andrew Murray 
> ---
>  include/kvm/arm_pmu.h |   2 +
>  virt/kvm/arm/pmu.c| 242 --
>  2 files changed, 214 insertions(+), 30 deletions(-)
> 
> diff --git a/include/kvm/arm_pmu.h b/include/kvm/arm_pmu.h
> index b73f31baca52..8b434745500a 100644
> --- a/include/kvm/arm_pmu.h
> +++ b/include/kvm/arm_pmu.h
> @@ -22,6 +22,7 @@
>  #include 
>  
>  #define ARMV8_PMU_CYCLE_IDX  (ARMV8_PMU_MAX_COUNTERS - 1)
> +#define ARMV8_PMU_MAX_COUNTER_PAIRS  ((ARMV8_PMU_MAX_COUNTERS + 1) >> 1)
>  
>  #ifdef CONFIG_KVM_ARM_PMU
>  
> @@ -34,6 +35,7 @@ struct kvm_pmc {
>  struct kvm_pmu {
>   int irq_num;
>   struct kvm_pmc pmc[ARMV8_PMU_MAX_COUNTERS];
> + DECLARE_BITMAP(chained, ARMV8_PMU_MAX_COUNTER_PAIRS);
>   bool ready;
>   bool created;
>   bool irq_level;
> diff --git a/virt/kvm/arm/pmu.c b/virt/kvm/arm/pmu.c
> index ae1e886d4a1a..91f098ca6b28 100644
> --- a/virt/kvm/arm/pmu.c
> +++ b/virt/kvm/arm/pmu.c
> @@ -25,28 +25,130 @@
>  #include 
>  
>  static void kvm_pmu_create_perf_event(struct kvm_vcpu *vcpu, u64 select_idx);
> +
> +#define PERF_ATTR_CFG1_KVM_PMU_CHAINED 0x1
> +
> +static struct kvm_vcpu *kvm_pmc_to_vcpu(struct kvm_pmc *pmc)
> +{
> + struct kvm_pmu *pmu;
> + struct kvm_vcpu_arch *vcpu_arch;
> +
> + pmc -= pmc->idx;
> + pmu = container_of(pmc, struct kvm_pmu, pmc[0]);

Nit: The only caller of this seems to only require access to the pmu
that the pmc belongs to. So we could change the function and stop here.
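
Something like this, perhaps (just a sketch, untested):

	static struct kvm_pmu *kvm_pmc_to_pmu(struct kvm_pmc *pmc)
	{
		pmc -= pmc->idx;
		return container_of(pmc, struct kvm_pmu, pmc[0]);
	}

and kvm_pmu_pmc_is_chained() could then test
test_bit(pmc->idx >> 1, kvm_pmc_to_pmu(pmc)->chained) directly.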


> + vcpu_arch = container_of(pmu, struct kvm_vcpu_arch, pmu);
> + return container_of(vcpu_arch, struct kvm_vcpu, arch);
> +}
> +
>  /**
> - * kvm_pmu_get_counter_value - get PMU counter value
> + * kvm_pmu_pmc_is_chained - determine if the pmc is chained
> + * @pmc: The PMU counter pointer
> + */
> +static bool kvm_pmu_pmc_is_chained(struct kvm_pmc *pmc)
> +{
> + struct kvm_vcpu *vcpu = kvm_pmc_to_vcpu(pmc);
> +
> + return test_bit(pmc->idx >> 1, vcpu->arch.pmu.chained);
> +}
> +
> +/**
> + * kvm_pmu_pmc_is_high_counter - determine if select_idx is a high/low counter
> + * @select_idx: The counter index
> + */
> +static bool kvm_pmu_pmc_is_high_counter(u64 select_idx)
> +{
> + return select_idx & 0x1;
> +}
> +
> +/**
> + * kvm_pmu_get_canonical_pmc - obtain the canonical pmc
> + * @pmc: The PMU counter pointer
> + *
> + * When a pair of PMCs are chained together we use the low counter (canonical)
> + * to hold the underlying perf event.
> + */
> +static struct kvm_pmc *kvm_pmu_get_canonical_pmc(struct kvm_pmc *pmc)
> +{
> + if (kvm_pmu_pmc_is_chained(pmc) && (pmc->idx & 1))

Nit: It's a bit odd that you have the function right above, which tests
whether an index is a high counter, but don't use it here.
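
i.e. (untested):

	if (kvm_pmu_pmc_is_chained(pmc) && kvm_pmu_pmc_is_high_counter(pmc->idx))
		return pmc - 1;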

> + return pmc - 1;
> +
> + return pmc;
> +}
> +
> +/**
> + * kvm_pmu_event_is_chained - determine if the event type is chain
>   * @vcpu: The vcpu pointer
>   * @select_idx: The counter index
>   */
> -u64 kvm_pmu_get_counter_value(struct kvm_vcpu *vcpu, u64 select_idx)
> +static bool kvm_pmu_event_is_chained(struct kvm_vcpu *vcpu, u64 select_idx)

Nit: I found myself easily confused between kvm_pmu_event_is_chained()
and kvm_pmu_pmc_is_chained(). Naming is also a bit odd because
kvm_pmu_event_is_chained() doesn't operate on a perf_event.

Could we name it kvm_pmu_idx_has_chain_evtype() or
kvm_pmu_cnt_in_chain_evtype() ?

>  {
> - u64 counter, reg, enabled, running;
> - struct kvm_pmu *pmu = &vcpu->arch.pmu;
> - struct kvm_pmc *pmc = &pmu->pmc[select_idx];
> + u64 eventsel, reg;
>  
> - reg = (select_idx == ARMV8_PMU_CYCLE_IDX)
> -   ? PMCCNTR_EL0 : PMEVCNTR0_EL0 + select_idx;
> - counter = __vcpu_sys_reg(vcpu, reg);
> + select_idx |= 0x1;
> +
> + if (select_idx == ARMV8_PMU_CYCLE_IDX)
> + return false;
> +
> + reg = PMEVTYPER0_EL0 + select_idx;
> + eventsel = __vcpu_sys_reg(vcpu, reg) & ARMV8_PMU_EVTYPE_EVENT;
> +
> + return armv8pmu_evtype_is_chain(eventsel);
> +}
> +
> +/**
> + * kvm_pmu_get_pair_counter_value - get PMU counter value
> + * @vcpu: The vcpu pointer
> + * @pmc: The PMU counter pointer
> + */
> +static u64 kvm_pmu_get_pair_counter_value(struct kvm_vcpu *vcpu,
> +					  struct kvm_pmc *pmc)

Re: [PATCH v7 05/23] iommu: Introduce cache_invalidate API

2019-05-02 Thread Jean-Philippe Brucker
On 02/05/2019 07:58, Auger Eric wrote:
> Hi Jean-Philippe,
> 
> On 5/1/19 12:38 PM, Jean-Philippe Brucker wrote:
>> On 08/04/2019 13:18, Eric Auger wrote:
>>> +int iommu_cache_invalidate(struct iommu_domain *domain, struct device *dev,
>>> +  struct iommu_cache_invalidate_info *inv_info)
>>> +{
>>> +   int ret = 0;
>>> +
>>> +   if (unlikely(!domain->ops->cache_invalidate))
>>> +   return -ENODEV;
>>> +
>>> +   ret = domain->ops->cache_invalidate(domain, dev, inv_info);
>>> +
>>> +   return ret;
>>
>> Nit: you don't really need ret
>>
>> The UAPI looks good to me, so
>>
>> Reviewed-by: Jean-Philippe Brucker 
> Just to make sure, do you accept changes proposed by Jacob in
> https://lkml.org/lkml/2019/4/29/659 ie.
> - the addition of NR_IOMMU_INVAL_GRANU in enum iommu_inv_granularity and
> - the addition of NR_IOMMU_CACHE_TYPE

Ah sorry, I forgot about that, I'll review the next version. Yes they
can be useful (maybe call them IOMMU_INV_GRANU_NR and
IOMMU_CACHE_INV_TYPE_NR?). I guess it's legal to export in UAPI values
that will change over time, as VFIO also does it in its enums.
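
(As a sketch of the sentinel idea, with illustrative member names rather
than necessarily the exact ones from the v7 posting:

	enum iommu_inv_granularity {
		IOMMU_INV_GRANU_DOMAIN,	/* domain-selective invalidation */
		IOMMU_INV_GRANU_PASID,	/* PASID-selective invalidation */
		IOMMU_INV_GRANU_ADDR,	/* page-selective invalidation */
		IOMMU_INV_GRANU_NR,	/* number of granularities */
	};

the _NR entry just counts the members, so its value may grow in later
kernel versions, which is what makes exporting it in UAPI a bit unusual.)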

Thanks,
Jean


Re: [PATCH AUTOSEL 5.0 29/98] KVM: arm/arm64: Enforce PTE mappings at stage2 when needed

2019-05-02 Thread Sasha Levin

On Tue, Apr 23, 2019 at 10:27:26AM +0100, Suzuki K Poulose wrote:

Hi Sasha,

On 04/22/2019 08:40 PM, Sasha Levin wrote:

From: Suzuki K Poulose 

[ Upstream commit a80868f398554842b14d07060012c06efb57c456 ]

commit 6794ad5443a2118 ("KVM: arm/arm64: Fix unintended stage 2 PMD mappings")
made the checks to skip huge mappings stricter. However, it introduced
a bug where we still use huge mappings, ignoring the flag to
use PTE mappings, by not resetting the vma_pagesize to PAGE_SIZE.

Also, the checks do not cover PUD huge pages, which were under review
during the same period. This patch fixes both issues.

Fixes: 6794ad5443a2118 ("KVM: arm/arm64: Fix unintended stage 2 PMD mappings")
Reported-by: Zenghui Yu 
Cc: Zenghui Yu 
Cc: Christoffer Dall 
Signed-off-by: Suzuki K Poulose 
Signed-off-by: Marc Zyngier 
Signed-off-by: Sasha Levin (Microsoft) 


Please be aware that we need a follow-up fix for this patch to fix the
problem for THP-backed memory.

http://lists.infradead.org/pipermail/linux-arm-kernel/2019-April/645324.html


It should appear upstream soon.


Since it's not upstream yet, I'll drop this patch for now and queue it
for a later release.

--
Thanks,
Sasha


Re: [PATCH v7 05/23] iommu: Introduce cache_invalidate API

2019-05-02 Thread Jacob Pan
On Thu, 2 May 2019 11:53:34 +0100
Jean-Philippe Brucker  wrote:

> On 02/05/2019 07:58, Auger Eric wrote:
> > Hi Jean-Philippe,
> > 
> > On 5/1/19 12:38 PM, Jean-Philippe Brucker wrote:  
> >> On 08/04/2019 13:18, Eric Auger wrote:  
> >>> +int iommu_cache_invalidate(struct iommu_domain *domain, struct device *dev,
> >>> +        struct iommu_cache_invalidate_info *inv_info)
> >>> +{
> >>> + int ret = 0;
> >>> +
> >>> + if (unlikely(!domain->ops->cache_invalidate))
> >>> + return -ENODEV;
> >>> +
> >>> + ret = domain->ops->cache_invalidate(domain, dev, inv_info);
> >>> +
> >>> + return ret;
> >>
> >> Nit: you don't really need ret
> >>
> >> The UAPI looks good to me, so
> >>
> >> Reviewed-by: Jean-Philippe Brucker 
> > Just to make sure, do you accept changes proposed by Jacob in
> > https://lkml.org/lkml/2019/4/29/659 ie.
> > - the addition of NR_IOMMU_INVAL_GRANU in enum iommu_inv_granularity and
> > - the addition of NR_IOMMU_CACHE_TYPE
> 
> Ah sorry, I forgot about that, I'll review the next version. Yes they
> can be useful (maybe call them IOMMU_INV_GRANU_NR and
> IOMMU_CACHE_INV_TYPE_NR?). I guess it's legal to export in UAPI values
> that will change over time, as VFIO also does it in its enums.
> 
I am fine with the names. Maybe you can put this patch in your sva/api
branch once you reviewed it? Having a common branch for common code
makes life so much easier.