Re: [PATCH 0/3] KVM: arm64: Normalize cache configuration

2022-12-13 Thread Oliver Upton
Hi Akihiko,

On Sun, Dec 11, 2022 at 02:16:57PM +0900, Akihiko Odaki wrote:
> Before this change, the cache configuration of the physical CPU was
> exposed to vcpus. This is problematic because the cache configuration a
> vcpu sees varies when it migrates between physical CPUs with different
> cache configurations.
> 
> Fabricate cache configuration from arm64_ftr_reg_ctrel0.sys_val, which
> holds the CTR_EL0 value the userspace sees regardless of which physical
> CPU it resides on.
> 
> HCR_TID2 is now always set, as it is troublesome to detect differences
> in cache configuration among physical CPUs.
> 
> CSSELR_EL1 is now held in memory instead of the corresponding physical
> register, as the fabricated cache configuration may have a cache level
> which does not exist on the physical CPU, and setting the physical
> CSSELR_EL1 to such a level results in UNKNOWN behavior.
> 
> CLIDR_EL1 and CCSIDR_EL1 are now writable from userspace so that the
> VMM can restore the values saved with the old kernel.
> 
> Akihiko Odaki (3):
>   arm64/sysreg: Add CCSIDR2_EL1
>   arm64/cache: Move CLIDR macro definitions
>   KVM: arm64: Normalize cache configuration

Next time you do a respin can you please bump the version number? I.e.
the next version should be v3.

Additionally, it is tremendously helpful to reviewers if you can provide
(1) a summary of what has changed in the current revision and (2) a
lore.kernel.org link to the last series you mailed out.

--
Thanks,
Oliver
___
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm


Re: [PATCH 06/14] KVM: selftests: Rename UNAME_M to ARCH_DIR, fill explicitly for x86

2022-12-13 Thread Sean Christopherson
+David

On Tue, Dec 13, 2022, Sean Christopherson wrote:
> Rename UNAME_M to ARCH_DIR and explicitly set it directly for x86.  At
> this point, the name of the arch directory really doesn't have anything
> to do with `uname -m`, and UNAME_M is unnecessarily confusing given that
> its purpose is purely to identify the arch specific directory.
> 
> Signed-off-by: Sean Christopherson 
> ---
> -# No change necessary for x86_64
> -UNAME_M := $(shell uname -m)
> -
> -# Set UNAME_M for arm64 compile/install to work
> -ifeq ($(ARCH),arm64)
> - UNAME_M := aarch64
> -endif
> -# Set UNAME_M s390x compile/install to work
> -ifeq ($(ARCH),s390)
> - UNAME_M := s390x
> -endif
> -# Set UNAME_M riscv compile/install to work
> -ifeq ($(ARCH),riscv)
> - UNAME_M := riscv
> +ifeq ($(ARCH),x86)

As discovered by David, this breaks doing "ARCH=x86_64 make", which is an
allowed/supported variant in the kernel proper, so this needs to be:

  ifneq (,$(filter $(ARCH),x86 x86_64))

or alternatively

  ifeq ($(ARCH),x86_64)
  ARCH := x86
  endif

Hmm, unless there's a reason to keep ARCH=x86_64, the latter appears to be the
better option as lib.mak doesn't play nice with x86_64 either, e.g. `ARCH=x86_64
LLVM=1 make` fails.  That's arguably a lib.mak bug, but it's trivial to handle
in KVM's makefile so forcing lib.mak to handle both seems unnecessary.
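A sketch of what the preferred option could look like in the selftests Makefile (illustrative only, combining the normalization Sean suggests with the ARCH_DIR dispatch from the patch; not the committed code):

```make
# Accept the kernel-proper spelling by normalizing ARCH=x86_64 to x86 first,
# then map ARCH to KVM selftests' historical `uname -m` directory names.
ifeq ($(ARCH),x86_64)
  ARCH := x86
endif

ifeq ($(ARCH),x86)
  ARCH_DIR := x86_64
else ifeq ($(ARCH),arm64)
  ARCH_DIR := aarch64
else ifeq ($(ARCH),s390)
  ARCH_DIR := s390x
else ifeq ($(ARCH),riscv)
  ARCH_DIR := riscv
else
$(error Unknown architecture '$(ARCH)')
endif
```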

I'll also add a comment to call out that $(ARCH) follows the kernel's
terminology for arch/*, whereas for whatever reason KVM selftests
effectively uses `uname -m` terminology.

One last thought/question, what do y'all think about renaming directories to
follow the kernel proper?  I.e. aarch64=>arm64, s390x=>s390, and x86_64=>x86.
Then $(ARCH_DIR) would go away.  The churn would be unfortunate, but it would be
nice to align with arch/ and tools/arch/.

> + ARCH_DIR := x86_64
> +else ifeq ($(ARCH),arm64)
> + ARCH_DIR := aarch64
> +else ifeq ($(ARCH),s390)
> + ARCH_DIR := s390x
> +else ifeq ($(ARCH),riscv)
> + ARCH_DIR := riscv
> +else
> +$(error Unknown architecture '$(ARCH)')
>  endif


Re: [PATCH 1/1] KVM: arm64: PMU: Fix PMCR_EL0 reset value

2022-12-13 Thread Ricardo Koller
On Fri, Dec 09, 2022 at 05:58:31PM +, Oliver Upton wrote:
> On Fri, Dec 09, 2022 at 04:44:46PM +, James Clark wrote:
> > ARMV8_PMU_PMCR_N_MASK is an unshifted value which results in the wrong
> > reset value for PMCR_EL0, so shift it to fix it.
> 
> That's just mean. *_MASK tends to be a shifted mask, although it would
> appear that asm/perf_event.h does not follow this convention. Fixing
> that would be nice (as I'm sure somebody else will get burned by this),
> but for the sake of an immediate fix:
>

Even kvm-unit-tests does this:

arm/pmu.c:
#define PMU_PMCR_N_SHIFT   11
#define PMU_PMCR_N_MASK	0x1f


> > This fixes the following error when running qemu:
> > 
> >   $ qemu-system-aarch64 -cpu host -machine type=virt,accel=kvm -kernel ...
> > 
> >   target/arm/helper.c:1813: pmevcntr_rawwrite: Assertion `counter < pmu_num_counters(env)' failed.
> > 
> > Fixes: 292e8f149476 ("KVM: arm64: PMU: Simplify PMCR_EL0 reset handling")
> > Signed-off-by: James Clark 
> 
> Reviewed-by: Oliver Upton 
> 
> --
> Thanks,
> Oliver
> 
> > ---
> >  arch/arm64/kvm/sys_regs.c | 2 +-
> >  1 file changed, 1 insertion(+), 1 deletion(-)
> > 
> > diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> > index d5ee52d6bf73..c6cbfe6b854b 100644
> > --- a/arch/arm64/kvm/sys_regs.c
> > +++ b/arch/arm64/kvm/sys_regs.c
> > @@ -646,7 +646,7 @@ static void reset_pmcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
> > return;
> >  
> > /* Only preserve PMCR_EL0.N, and reset the rest to 0 */
> > -   pmcr = read_sysreg(pmcr_el0) & ARMV8_PMU_PMCR_N_MASK;
> > -   pmcr = read_sysreg(pmcr_el0) & ARMV8_PMU_PMCR_N_MASK;
> > +   pmcr = read_sysreg(pmcr_el0) & (ARMV8_PMU_PMCR_N_MASK << ARMV8_PMU_PMCR_N_SHIFT);
> > if (!kvm_supports_32bit_el0())
> > pmcr |= ARMV8_PMU_PMCR_LC;
> >  
> > -- 
> > 2.25.1
> > 


Re: [kvm-unit-tests PATCH 3/3] arm: pmu: Add tests for 64-bit overflows

2022-12-13 Thread Ricardo Koller
On Tue, Dec 13, 2022 at 05:03:40PM +, Alexandru Elisei wrote:
> Hi,
> 
> Checked that all places where ALL_SET/PRE_OVERFLOW were used are now taking
> into account the fact that counters are programmed to be 64bit.
> 
> In the case of 64bit counters, the printf format specifier is %ld, which
> means that ALL_SET_64 and PRE_OVERFLOW_64 are now displayed as negative
> numbers. For example:
> 
> INFO: pmu: pmu-sw-incr: 32-bit: SW_INCR counter #0 has value 4294967280
> PASS: pmu: pmu-sw-incr: 32-bit: PWSYNC does not increment if PMCR.E is unset
> [..]
> INFO: pmu: pmu-sw-incr: 64-bit: SW_INCR counter #0 has value -16
> PASS: pmu: pmu-sw-incr: 64-bit: PWSYNC does not increment if PMCR.E is unset
> 

Argh, I didn't notice this. I think this would be more useful if it
printed %llx in all cases.

> I was thinking that the format specifiers could be changed to unsigned
> long.  The counters only increment, they don't decrement, and I can't think
> how printing them as signed could be useful.
> 
> One more comment below.
> 
> On Fri, Dec 02, 2022 at 04:55:27AM +, Ricardo Koller wrote:
> > Modify all tests checking overflows to support both 32 (PMCR_EL0.LP == 0)
> > and 64-bit overflows (PMCR_EL0.LP == 1). 64-bit overflows are only
> > supported on PMUv3p5.
> > 
> > Note that chained tests do not implement "overflow_at_64bits == true".
> > That's because there are no CHAIN events when "PMCR_EL0.LP == 1" (for more
> > details see AArch64.IncrementEventCounter() pseudocode in the ARM ARM DDI
> > 0487H.a, J1.1.1 "aarch64/debug").
> > 
> > Signed-off-by: Ricardo Koller 
> > ---
> >  arm/pmu.c | 91 ---
> >  1 file changed, 60 insertions(+), 31 deletions(-)
> > 
> > diff --git a/arm/pmu.c b/arm/pmu.c
> > index 59e5bfe..3cb563b 100644
> > --- a/arm/pmu.c
> > +++ b/arm/pmu.c
> > @@ -28,6 +28,7 @@
> >  #define PMU_PMCR_X	(1 << 4)
> >  #define PMU_PMCR_DP	(1 << 5)
> >  #define PMU_PMCR_LC	(1 << 6)
> > +#define PMU_PMCR_LP	(1 << 7)
> >  #define PMU_PMCR_N_SHIFT	11
> >  #define PMU_PMCR_N_MASK	0x1f
> >  #define PMU_PMCR_ID_SHIFT  16
> > @@ -55,10 +56,15 @@
> >  #define EXT_COMMON_EVENTS_HIGH 0x403F
> >  
> >  #define ALL_SET			0x00000000FFFFFFFFULL
> > +#define ALL_SET_64		0xFFFFFFFFFFFFFFFFULL
> >  #define ALL_CLEAR		0x0000000000000000ULL
> >  #define PRE_OVERFLOW		0x00000000FFFFFFF0ULL
> > +#define PRE_OVERFLOW_64		0xFFFFFFFFFFFFFFF0ULL
> >  #define PRE_OVERFLOW2		0x00000000FFFFFFDCULL
> >  
> > +#define PRE_OVERFLOW_AT(_64b)	(_64b ? PRE_OVERFLOW_64 : PRE_OVERFLOW)
> > +#define ALL_SET_AT(_64b)	(_64b ? ALL_SET_64 : ALL_SET)
> > +
> >  #define PMU_PPI			23
> >  
> >  struct pmu {
> > @@ -429,8 +435,10 @@ static bool satisfy_prerequisites(uint32_t *events, 
> > unsigned int nb_events,
> >  static void test_basic_event_count(bool overflow_at_64bits)
> >  {
> > uint32_t implemented_counter_mask, non_implemented_counter_mask;
> > -   uint32_t counter_mask;
> > +   uint64_t pre_overflow = PRE_OVERFLOW_AT(overflow_at_64bits);
> > +   uint64_t pmcr_lp = overflow_at_64bits ? PMU_PMCR_LP : 0;
> > uint32_t events[] = {CPU_CYCLES, INST_RETIRED};
> > +   uint32_t counter_mask;
> >  
> > if (!satisfy_prerequisites(events, ARRAY_SIZE(events),
> >overflow_at_64bits))
> > @@ -452,13 +460,13 @@ static void test_basic_event_count(bool 
> > overflow_at_64bits)
> >  * clear cycle and all event counters and allow counter enablement
> >  * through PMCNTENSET. LC is RES1.
> >  */
> > -   set_pmcr(pmu.pmcr_ro | PMU_PMCR_LC | PMU_PMCR_C | PMU_PMCR_P);
> > +   set_pmcr(pmu.pmcr_ro | PMU_PMCR_LC | PMU_PMCR_C | PMU_PMCR_P | pmcr_lp);
> > isb();
> > -   report(get_pmcr() == (pmu.pmcr_ro | PMU_PMCR_LC), "pmcr: reset 
> > counters");
> > +   report(get_pmcr() == (pmu.pmcr_ro | PMU_PMCR_LC | pmcr_lp), "pmcr: 
> > reset counters");
> >  
> > /* Preset counter #0 to pre overflow value to trigger an overflow */
> > -   write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
> > -   report(read_regn_el0(pmevcntr, 0) == PRE_OVERFLOW,
> > +   write_regn_el0(pmevcntr, 0, pre_overflow);
> > +   report(read_regn_el0(pmevcntr, 0) == pre_overflow,
> > "counter #0 preset to pre-overflow value");
> > report(!read_regn_el0(pmevcntr, 1), "counter #1 is 0");
> >  
> > @@ -511,6 +519,8 @@ static void test_mem_access(bool overflow_at_64bits)
> >  {
> > void *addr = malloc(PAGE_SIZE);
> > uint32_t events[] = {MEM_ACCESS, MEM_ACCESS};
> > +   uint64_t pre_overflow = PRE_OVERFLOW_AT(overflow_at_64bits);
> > +   uint64_t pmcr_lp = overflow_at_64bits ? PMU_PMCR_LP : 0;
> >  
> > if (!satisfy_prerequisites(events, ARRAY_SIZE(events),
> >overflow_at_64bits))
> > @@ -522,7 +532,7 @@ static void test_mem_access(bool overflow_at_64bits)
> > write_regn_el0(pmevtyper, 1, MEM_ACCESS | PMEVTY

Re: [kvm-unit-tests PATCH 1/3] arm: pmu: Fix overflow checks for PMUv3p5 long counters

2022-12-13 Thread Ricardo Koller
On Tue, Dec 13, 2022 at 04:43:38PM +, Alexandru Elisei wrote:
> Hi,
> 
> On Tue, Dec 13, 2022 at 08:21:24AM -0800, Ricardo Koller wrote:
> > On Tue, Dec 13, 2022 at 12:36:14PM +, Alexandru Elisei wrote:
> > > Hi,
> > > 
> > > Some more comments below.
> > > 
> > > On Fri, Dec 02, 2022 at 04:55:25AM +, Ricardo Koller wrote:
> > > > PMUv3p5 uses 64-bit counters irrespective of whether the PMU is 
> > > > configured
> > > > for overflowing at 32 or 64-bits. The consequence is that tests that 
> > > > check
> > > > the counter values after overflowing should not assume that values will 
> > > > be
> > > > wrapped around 32-bits: they overflow into the other half of the 64-bit
> > > > counters on PMUv3p5.
> > > > 
> > > > Fix tests by correctly checking overflowing-counters against the 
> > > > expected
> > > > 64-bit value.
> > > > 
> > > > Signed-off-by: Ricardo Koller 
> > > > ---
> > > >  arm/pmu.c | 29 ++---
> > > >  1 file changed, 18 insertions(+), 11 deletions(-)
> > > > 
> > > > diff --git a/arm/pmu.c b/arm/pmu.c
> > > > index cd47b14..eeac984 100644
> > > > --- a/arm/pmu.c
> > > > +++ b/arm/pmu.c
> > > > @@ -54,10 +54,10 @@
> > > >  #define EXT_COMMON_EVENTS_LOW  0x4000
> > > >  #define EXT_COMMON_EVENTS_HIGH 0x403F
> > > >  
> > > > -#define ALL_SET		0xFFFFFFFF
> > > > -#define ALL_CLEAR		0x0
> > > > -#define PRE_OVERFLOW	0xFFFFFFF0
> > > > -#define PRE_OVERFLOW2	0xFFFFFFDC
> > > > +#define ALL_SET		0x00000000FFFFFFFFULL
> > > > +#define ALL_CLEAR		0x0000000000000000ULL
> > > > +#define PRE_OVERFLOW	0x00000000FFFFFFF0ULL
> > > > +#define PRE_OVERFLOW2	0x00000000FFFFFFDCULL
> > > >  
> > > >  #define PMU_PPI		23
> > > >  
> > > > @@ -538,6 +538,7 @@ static void test_mem_access(void)
> > > >  static void test_sw_incr(void)
> > > >  {
> > > > uint32_t events[] = {SW_INCR, SW_INCR};
> > > > +   uint64_t cntr0;
> > > > int i;
> > > >  
> > > > if (!satisfy_prerequisites(events, ARRAY_SIZE(events)))
> > > > @@ -572,9 +573,9 @@ static void test_sw_incr(void)
> > > > write_sysreg(0x3, pmswinc_el0);
> > > >  
> > > > isb();
> > > > -   report(read_regn_el0(pmevcntr, 0)  == 84, "counter #1 after + 
> > > > 100 SW_INCR");
> > > > -   report(read_regn_el0(pmevcntr, 1)  == 100,
> > > > -   "counter #0 after + 100 SW_INCR");
> > > > +   cntr0 = (pmu.version < ID_DFR0_PMU_V3_8_5) ? 84 : PRE_OVERFLOW 
> > > > + 100;
> > > > +   report(read_regn_el0(pmevcntr, 0) == cntr0, "counter #0 after + 
> > > > 100 SW_INCR");
> > > > +   report(read_regn_el0(pmevcntr, 1) == 100, "counter #1 after + 
> > > > 100 SW_INCR");
> > > > report_info("counter values after 100 SW_INCR #0=%ld #1=%ld",
> > > > read_regn_el0(pmevcntr, 0), read_regn_el0(pmevcntr, 
> > > > 1));
> > > > report(read_sysreg(pmovsclr_el0) == 0x1,
> > > > @@ -584,6 +585,7 @@ static void test_sw_incr(void)
> > > >  static void test_chained_counters(void)
> > > >  {
> > > > uint32_t events[] = {CPU_CYCLES, CHAIN};
> > > > +   uint64_t cntr1;
> > > >  
> > > > if (!satisfy_prerequisites(events, ARRAY_SIZE(events)))
> > > > return;
> > > > @@ -618,13 +620,16 @@ static void test_chained_counters(void)
> > > >  
> > > > precise_instrs_loop(22, pmu.pmcr_ro | PMU_PMCR_E);
> > > > report_info("overflow reg = 0x%lx", read_sysreg(pmovsclr_el0));
> > > > -   report(!read_regn_el0(pmevcntr, 1), "CHAIN counter #1 wrapped");
> > > > +   cntr1 = (pmu.version < ID_DFR0_PMU_V3_8_5) ? 0 : ALL_SET + 1;
> > > > +   report(read_regn_el0(pmevcntr, 1) == cntr1, "CHAIN counter #1 
> > > > wrapped");
> > > 
> > > It looks to me like the intention of the test was to check that the 
> > > counter
> > > programmed with the CHAIN event wraps, judging from the report message.
> > > 
> > 
> > Ah, right. Yeah, that message is confusing. It should be the short
> > version of "Incrementing at 32-bits resulted in the right value".
> > 
> > > I think it would be interesting to keep that by programming counter #1 
> > > with
> > > ~0ULL when PMUv3p5 (maybe call it ALL_SET64?) and test the counter value
> > > against 0.
> > 
> > The last commit adds tests using ALL_SET64.  Tests can be run in two
> > modes: overflow_at_64bits and not.  However, this test,
> > test_chained_counters(), and all other chained tests only use the
> > !overflow_at_64bits mode (even after the last commit). The reason is
> > that there are no CHAIN events when overflowing at 64-bits (more details
> > in the commit message). But, don't worry, there are lots of tests that
> > check wrapping at 64-bits (overflow_at_64bits=true).
> 
> What I was suggesting is this (change on top of this series, not on top of
> this patch, to get access to ALL_SET_AT):

Ooh, I see, I agree: it would be better

Re: [kvm-unit-tests PATCH 3/3] arm: pmu: Add tests for 64-bit overflows

2022-12-13 Thread Alexandru Elisei
Hi,

Checked that all places where ALL_SET/PRE_OVERFLOW were used are now taking
into account the fact that counters are programmed to be 64bit.

In the case of 64bit counters, the printf format specifier is %ld, which
means that ALL_SET_64 and PRE_OVERFLOW_64 are now displayed as negative
numbers. For example:

INFO: pmu: pmu-sw-incr: 32-bit: SW_INCR counter #0 has value 4294967280
PASS: pmu: pmu-sw-incr: 32-bit: PWSYNC does not increment if PMCR.E is unset
[..]
INFO: pmu: pmu-sw-incr: 64-bit: SW_INCR counter #0 has value -16
PASS: pmu: pmu-sw-incr: 64-bit: PWSYNC does not increment if PMCR.E is unset

I was thinking that the format specifiers could be changed to unsigned
long.  The counters only increment, they don't decrement, and I can't think
how printing them as signed could be useful.

One more comment below.

On Fri, Dec 02, 2022 at 04:55:27AM +, Ricardo Koller wrote:
> Modify all tests checking overflows to support both 32 (PMCR_EL0.LP == 0)
> and 64-bit overflows (PMCR_EL0.LP == 1). 64-bit overflows are only
> supported on PMUv3p5.
> 
> Note that chained tests do not implement "overflow_at_64bits == true".
> That's because there are no CHAIN events when "PMCR_EL0.LP == 1" (for more
> details see AArch64.IncrementEventCounter() pseudocode in the ARM ARM DDI
> 0487H.a, J1.1.1 "aarch64/debug").
> 
> Signed-off-by: Ricardo Koller 
> ---
>  arm/pmu.c | 91 ---
>  1 file changed, 60 insertions(+), 31 deletions(-)
> 
> diff --git a/arm/pmu.c b/arm/pmu.c
> index 59e5bfe..3cb563b 100644
> --- a/arm/pmu.c
> +++ b/arm/pmu.c
> @@ -28,6 +28,7 @@
>  #define PMU_PMCR_X	(1 << 4)
>  #define PMU_PMCR_DP	(1 << 5)
>  #define PMU_PMCR_LC	(1 << 6)
> +#define PMU_PMCR_LP	(1 << 7)
>  #define PMU_PMCR_N_SHIFT	11
>  #define PMU_PMCR_N_MASK	0x1f
>  #define PMU_PMCR_ID_SHIFT	16
> @@ -55,10 +56,15 @@
>  #define EXT_COMMON_EVENTS_HIGH   0x403F
>  
>  #define ALL_SET			0x00000000FFFFFFFFULL
> +#define ALL_SET_64		0xFFFFFFFFFFFFFFFFULL
>  #define ALL_CLEAR		0x0000000000000000ULL
>  #define PRE_OVERFLOW		0x00000000FFFFFFF0ULL
> +#define PRE_OVERFLOW_64		0xFFFFFFFFFFFFFFF0ULL
>  #define PRE_OVERFLOW2		0x00000000FFFFFFDCULL
>  
> +#define PRE_OVERFLOW_AT(_64b)	(_64b ? PRE_OVERFLOW_64 : PRE_OVERFLOW)
> +#define ALL_SET_AT(_64b)	(_64b ? ALL_SET_64 : ALL_SET)
> +
>  #define PMU_PPI			23
>  
>  struct pmu {
> @@ -429,8 +435,10 @@ static bool satisfy_prerequisites(uint32_t *events, 
> unsigned int nb_events,
>  static void test_basic_event_count(bool overflow_at_64bits)
>  {
>   uint32_t implemented_counter_mask, non_implemented_counter_mask;
> - uint32_t counter_mask;
> + uint64_t pre_overflow = PRE_OVERFLOW_AT(overflow_at_64bits);
> + uint64_t pmcr_lp = overflow_at_64bits ? PMU_PMCR_LP : 0;
>   uint32_t events[] = {CPU_CYCLES, INST_RETIRED};
> + uint32_t counter_mask;
>  
>   if (!satisfy_prerequisites(events, ARRAY_SIZE(events),
>  overflow_at_64bits))
> @@ -452,13 +460,13 @@ static void test_basic_event_count(bool 
> overflow_at_64bits)
>* clear cycle and all event counters and allow counter enablement
>* through PMCNTENSET. LC is RES1.
>*/
> - set_pmcr(pmu.pmcr_ro | PMU_PMCR_LC | PMU_PMCR_C | PMU_PMCR_P);
> + set_pmcr(pmu.pmcr_ro | PMU_PMCR_LC | PMU_PMCR_C | PMU_PMCR_P | pmcr_lp);
>   isb();
> - report(get_pmcr() == (pmu.pmcr_ro | PMU_PMCR_LC), "pmcr: reset 
> counters");
> + report(get_pmcr() == (pmu.pmcr_ro | PMU_PMCR_LC | pmcr_lp), "pmcr: 
> reset counters");
>  
>   /* Preset counter #0 to pre overflow value to trigger an overflow */
> - write_regn_el0(pmevcntr, 0, PRE_OVERFLOW);
> - report(read_regn_el0(pmevcntr, 0) == PRE_OVERFLOW,
> + write_regn_el0(pmevcntr, 0, pre_overflow);
> + report(read_regn_el0(pmevcntr, 0) == pre_overflow,
>   "counter #0 preset to pre-overflow value");
>   report(!read_regn_el0(pmevcntr, 1), "counter #1 is 0");
>  
> @@ -511,6 +519,8 @@ static void test_mem_access(bool overflow_at_64bits)
>  {
>   void *addr = malloc(PAGE_SIZE);
>   uint32_t events[] = {MEM_ACCESS, MEM_ACCESS};
> + uint64_t pre_overflow = PRE_OVERFLOW_AT(overflow_at_64bits);
> + uint64_t pmcr_lp = overflow_at_64bits ? PMU_PMCR_LP : 0;
>  
>   if (!satisfy_prerequisites(events, ARRAY_SIZE(events),
>  overflow_at_64bits))
> @@ -522,7 +532,7 @@ static void test_mem_access(bool overflow_at_64bits)
>   write_regn_el0(pmevtyper, 1, MEM_ACCESS | PMEVTYPER_EXCLUDE_EL0);
>   write_sysreg_s(0x3, PMCNTENSET_EL0);
>   isb();
> - mem_access_loop(addr, 20, pmu.pmcr_ro | PMU_PMCR_E);
> + mem_access_loop(addr, 20, pmu.pmcr_ro | PMU_PMCR_E | pmcr_lp);
>   report_info("counter #0 is %ld (MEM_ACCESS)", read_regn_el0(pmevcntr, 
> 0));
>   repor

Re: [kvm-unit-tests PATCH 1/3] arm: pmu: Fix overflow checks for PMUv3p5 long counters

2022-12-13 Thread Alexandru Elisei
Hi,

On Tue, Dec 13, 2022 at 08:21:24AM -0800, Ricardo Koller wrote:
> On Tue, Dec 13, 2022 at 12:36:14PM +, Alexandru Elisei wrote:
> > Hi,
> > 
> > Some more comments below.
> > 
> > On Fri, Dec 02, 2022 at 04:55:25AM +, Ricardo Koller wrote:
> > > PMUv3p5 uses 64-bit counters irrespective of whether the PMU is configured
> > > for overflowing at 32 or 64-bits. The consequence is that tests that check
> > > the counter values after overflowing should not assume that values will be
> > > wrapped around 32-bits: they overflow into the other half of the 64-bit
> > > counters on PMUv3p5.
> > > 
> > > Fix tests by correctly checking overflowing-counters against the expected
> > > 64-bit value.
> > > 
> > > Signed-off-by: Ricardo Koller 
> > > ---
> > >  arm/pmu.c | 29 ++---
> > >  1 file changed, 18 insertions(+), 11 deletions(-)
> > > 
> > > diff --git a/arm/pmu.c b/arm/pmu.c
> > > index cd47b14..eeac984 100644
> > > --- a/arm/pmu.c
> > > +++ b/arm/pmu.c
> > > @@ -54,10 +54,10 @@
> > >  #define EXT_COMMON_EVENTS_LOW0x4000
> > >  #define EXT_COMMON_EVENTS_HIGH   0x403F
> > >  
> > > -#define ALL_SET		0xFFFFFFFF
> > > -#define ALL_CLEAR		0x0
> > > -#define PRE_OVERFLOW	0xFFFFFFF0
> > > -#define PRE_OVERFLOW2	0xFFFFFFDC
> > > +#define ALL_SET		0x00000000FFFFFFFFULL
> > > +#define ALL_CLEAR		0x0000000000000000ULL
> > > +#define PRE_OVERFLOW	0x00000000FFFFFFF0ULL
> > > +#define PRE_OVERFLOW2	0x00000000FFFFFFDCULL
> > >  
> > >  #define PMU_PPI		23
> > >  
> > > @@ -538,6 +538,7 @@ static void test_mem_access(void)
> > >  static void test_sw_incr(void)
> > >  {
> > >   uint32_t events[] = {SW_INCR, SW_INCR};
> > > + uint64_t cntr0;
> > >   int i;
> > >  
> > >   if (!satisfy_prerequisites(events, ARRAY_SIZE(events)))
> > > @@ -572,9 +573,9 @@ static void test_sw_incr(void)
> > >   write_sysreg(0x3, pmswinc_el0);
> > >  
> > >   isb();
> > > - report(read_regn_el0(pmevcntr, 0)  == 84, "counter #1 after + 100 
> > > SW_INCR");
> > > - report(read_regn_el0(pmevcntr, 1)  == 100,
> > > - "counter #0 after + 100 SW_INCR");
> > > + cntr0 = (pmu.version < ID_DFR0_PMU_V3_8_5) ? 84 : PRE_OVERFLOW + 100;
> > > + report(read_regn_el0(pmevcntr, 0) == cntr0, "counter #0 after + 100 
> > > SW_INCR");
> > > + report(read_regn_el0(pmevcntr, 1) == 100, "counter #1 after + 100 
> > > SW_INCR");
> > >   report_info("counter values after 100 SW_INCR #0=%ld #1=%ld",
> > >   read_regn_el0(pmevcntr, 0), read_regn_el0(pmevcntr, 1));
> > >   report(read_sysreg(pmovsclr_el0) == 0x1,
> > > @@ -584,6 +585,7 @@ static void test_sw_incr(void)
> > >  static void test_chained_counters(void)
> > >  {
> > >   uint32_t events[] = {CPU_CYCLES, CHAIN};
> > > + uint64_t cntr1;
> > >  
> > >   if (!satisfy_prerequisites(events, ARRAY_SIZE(events)))
> > >   return;
> > > @@ -618,13 +620,16 @@ static void test_chained_counters(void)
> > >  
> > >   precise_instrs_loop(22, pmu.pmcr_ro | PMU_PMCR_E);
> > >   report_info("overflow reg = 0x%lx", read_sysreg(pmovsclr_el0));
> > > - report(!read_regn_el0(pmevcntr, 1), "CHAIN counter #1 wrapped");
> > > + cntr1 = (pmu.version < ID_DFR0_PMU_V3_8_5) ? 0 : ALL_SET + 1;
> > > + report(read_regn_el0(pmevcntr, 1) == cntr1, "CHAIN counter #1 wrapped");
> > 
> > It looks to me like the intention of the test was to check that the counter
> > programmed with the CHAIN event wraps, judging from the report message.
> > 
> 
> Ah, right. Yeah, that message is confusing. It should be the short
> version of "Incrementing at 32-bits resulted in the right value".
> 
> > I think it would be interesting to keep that by programming counter #1 with
> > ~0ULL when PMUv3p5 (maybe call it ALL_SET64?) and test the counter value
> > against 0.
> 
> The last commit adds tests using ALL_SET64.  Tests can be run in two
> modes: overflow_at_64bits and not.  However, this test,
> test_chained_counters(), and all other chained tests only use the
> !overflow_at_64bits mode (even after the last commit). The reason is
> that there are no CHAIN events when overflowing at 64-bits (more details
> in the commit message). But, don't worry, there are lots of tests that
> check wrapping at 64-bits (overflow_at_64bits=true).

What I was suggesting is this (change on top of this series, not on top of
this patch, to get access to ALL_SET_AT):

diff --git a/arm/pmu.c b/arm/pmu.c
index 3cb563b1abe4..fd7f20fc6c52 100644
--- a/arm/pmu.c
+++ b/arm/pmu.c
@@ -607,7 +607,6 @@ static void test_sw_incr(bool overflow_at_64bits)
 static void test_chained_counters(bool overflow_at_64bits)
 {
uint32_t events[] = {CPU_CYCLES, CHAIN};
-   uint64_t cntr1;

if (!satisfy_prerequisites(events, ARRAY_SIZE(events),
   overflow_at_64bits))
@@ -639,12 +638,11 @@ static void test_chained_counters(bool overflow_at_64bits)
report(read_

Re: [kvm-unit-tests PATCH 1/3] arm: pmu: Fix overflow checks for PMUv3p5 long counters

2022-12-13 Thread Ricardo Koller
On Tue, Dec 13, 2022 at 12:36:14PM +, Alexandru Elisei wrote:
> Hi,
> 
> Some more comments below.
> 
> On Fri, Dec 02, 2022 at 04:55:25AM +, Ricardo Koller wrote:
> > PMUv3p5 uses 64-bit counters irrespective of whether the PMU is configured
> > for overflowing at 32 or 64-bits. The consequence is that tests that check
> > the counter values after overflowing should not assume that values will be
> > wrapped around 32-bits: they overflow into the other half of the 64-bit
> > counters on PMUv3p5.
> > 
> > Fix tests by correctly checking overflowing-counters against the expected
> > 64-bit value.
> > 
> > Signed-off-by: Ricardo Koller 
> > ---
> >  arm/pmu.c | 29 ++---
> >  1 file changed, 18 insertions(+), 11 deletions(-)
> > 
> > diff --git a/arm/pmu.c b/arm/pmu.c
> > index cd47b14..eeac984 100644
> > --- a/arm/pmu.c
> > +++ b/arm/pmu.c
> > @@ -54,10 +54,10 @@
> >  #define EXT_COMMON_EVENTS_LOW  0x4000
> >  #define EXT_COMMON_EVENTS_HIGH 0x403F
> >  
> > -#define ALL_SET		0xFFFFFFFF
> > -#define ALL_CLEAR		0x0
> > -#define PRE_OVERFLOW	0xFFFFFFF0
> > -#define PRE_OVERFLOW2	0xFFFFFFDC
> > +#define ALL_SET		0x00000000FFFFFFFFULL
> > +#define ALL_CLEAR		0x0000000000000000ULL
> > +#define PRE_OVERFLOW	0x00000000FFFFFFF0ULL
> > +#define PRE_OVERFLOW2	0x00000000FFFFFFDCULL
> >  
> >  #define PMU_PPI		23
> >  
> > @@ -538,6 +538,7 @@ static void test_mem_access(void)
> >  static void test_sw_incr(void)
> >  {
> > uint32_t events[] = {SW_INCR, SW_INCR};
> > +   uint64_t cntr0;
> > int i;
> >  
> > if (!satisfy_prerequisites(events, ARRAY_SIZE(events)))
> > @@ -572,9 +573,9 @@ static void test_sw_incr(void)
> > write_sysreg(0x3, pmswinc_el0);
> >  
> > isb();
> > -   report(read_regn_el0(pmevcntr, 0)  == 84, "counter #1 after + 100 
> > SW_INCR");
> > -   report(read_regn_el0(pmevcntr, 1)  == 100,
> > -   "counter #0 after + 100 SW_INCR");
> > +   cntr0 = (pmu.version < ID_DFR0_PMU_V3_8_5) ? 84 : PRE_OVERFLOW + 100;
> > +   report(read_regn_el0(pmevcntr, 0) == cntr0, "counter #0 after + 100 
> > SW_INCR");
> > +   report(read_regn_el0(pmevcntr, 1) == 100, "counter #1 after + 100 
> > SW_INCR");
> > report_info("counter values after 100 SW_INCR #0=%ld #1=%ld",
> > read_regn_el0(pmevcntr, 0), read_regn_el0(pmevcntr, 1));
> > report(read_sysreg(pmovsclr_el0) == 0x1,
> > @@ -584,6 +585,7 @@ static void test_sw_incr(void)
> >  static void test_chained_counters(void)
> >  {
> > uint32_t events[] = {CPU_CYCLES, CHAIN};
> > +   uint64_t cntr1;
> >  
> > if (!satisfy_prerequisites(events, ARRAY_SIZE(events)))
> > return;
> > @@ -618,13 +620,16 @@ static void test_chained_counters(void)
> >  
> > precise_instrs_loop(22, pmu.pmcr_ro | PMU_PMCR_E);
> > report_info("overflow reg = 0x%lx", read_sysreg(pmovsclr_el0));
> > -   report(!read_regn_el0(pmevcntr, 1), "CHAIN counter #1 wrapped");
> > +   cntr1 = (pmu.version < ID_DFR0_PMU_V3_8_5) ? 0 : ALL_SET + 1;
> > +   report(read_regn_el0(pmevcntr, 1) == cntr1, "CHAIN counter #1 wrapped");
> 
> It looks to me like the intention of the test was to check that the counter
> programmed with the CHAIN event wraps, judging from the report message.
> 

Ah, right. Yeah, that message is confusing. It should be the short
version of "Incrementing at 32-bits resulted in the right value".

> I think it would be interesting to keep that by programming counter #1 with
> ~0ULL when PMUv3p5 (maybe call it ALL_SET64?) and test the counter value
> against 0.

The last commit adds tests using ALL_SET64.  Tests can be run in two
modes: overflow_at_64bits and not.  However, this test,
test_chained_counters(), and all other chained tests only use the
!overflow_at_64bits mode (even after the last commit). The reason is
that there are no CHAIN events when overflowing at 64-bits (more details
in the commit message). But, don't worry, there are lots of tests that
check wrapping at 64-bits (overflow_at_64bits=true).

> Alternatively, the report message can be modified, and "wrapped"
> replaced with "incremented" (or something like that), to avoid confusion.
> 
> > +
> > report(read_sysreg(pmovsclr_el0) == 0x3, "overflow on even and odd 
> > counters");
> >  }
> >  
> >  static void test_chained_sw_incr(void)
> >  {
> > uint32_t events[] = {SW_INCR, CHAIN};
> > +   uint64_t cntr0, cntr1;
> > int i;
> >  
> > if (!satisfy_prerequisites(events, ARRAY_SIZE(events)))
> > @@ -665,10 +670,12 @@ static void test_chained_sw_incr(void)
> > write_sysreg(0x1, pmswinc_el0);
> >  
> > isb();
> > +   cntr0 = (pmu.version < ID_DFR0_PMU_V3_8_5) ? 0 : ALL_SET + 1;
> > +   cntr1 = (pmu.version < ID_DFR0_PMU_V3_8_5) ? 84 : PRE_OVERFLOW + 100;
> > report((read_sysreg(pmovsclr_el0) == 0x3) &&
> > -   (read_regn_el0(pmevcntr, 1) == 0) &&
> >

Re: [kvm-unit-tests PATCH 1/3] arm: pmu: Fix overflow checks for PMUv3p5 long counters

2022-12-13 Thread Alexandru Elisei
Hi,

Some more comments below.

On Fri, Dec 02, 2022 at 04:55:25AM +, Ricardo Koller wrote:
> PMUv3p5 uses 64-bit counters irrespective of whether the PMU is configured
> for overflowing at 32 or 64-bits. The consequence is that tests that check
> the counter values after overflowing should not assume that values will be
> wrapped around 32-bits: they overflow into the other half of the 64-bit
> counters on PMUv3p5.
> 
> Fix tests by correctly checking overflowing-counters against the expected
> 64-bit value.
> 
> Signed-off-by: Ricardo Koller 
> ---
>  arm/pmu.c | 29 ++---
>  1 file changed, 18 insertions(+), 11 deletions(-)
> 
> diff --git a/arm/pmu.c b/arm/pmu.c
> index cd47b14..eeac984 100644
> --- a/arm/pmu.c
> +++ b/arm/pmu.c
> @@ -54,10 +54,10 @@
>  #define EXT_COMMON_EVENTS_LOW0x4000
>  #define EXT_COMMON_EVENTS_HIGH   0x403F
>  
> -#define ALL_SET		0xFFFFFFFF
> -#define ALL_CLEAR		0x0
> -#define PRE_OVERFLOW	0xFFFFFFF0
> -#define PRE_OVERFLOW2	0xFFFFFFDC
> +#define ALL_SET		0x00000000FFFFFFFFULL
> +#define ALL_CLEAR		0x0000000000000000ULL
> +#define PRE_OVERFLOW	0x00000000FFFFFFF0ULL
> +#define PRE_OVERFLOW2	0x00000000FFFFFFDCULL
>  
>  #define PMU_PPI		23
>  
> @@ -538,6 +538,7 @@ static void test_mem_access(void)
>  static void test_sw_incr(void)
>  {
>   uint32_t events[] = {SW_INCR, SW_INCR};
> + uint64_t cntr0;
>   int i;
>  
>   if (!satisfy_prerequisites(events, ARRAY_SIZE(events)))
> @@ -572,9 +573,9 @@ static void test_sw_incr(void)
>   write_sysreg(0x3, pmswinc_el0);
>  
>   isb();
> - report(read_regn_el0(pmevcntr, 0)  == 84, "counter #1 after + 100 
> SW_INCR");
> - report(read_regn_el0(pmevcntr, 1)  == 100,
> - "counter #0 after + 100 SW_INCR");
> + cntr0 = (pmu.version < ID_DFR0_PMU_V3_8_5) ? 84 : PRE_OVERFLOW + 100;
> + report(read_regn_el0(pmevcntr, 0) == cntr0, "counter #0 after + 100 
> SW_INCR");
> + report(read_regn_el0(pmevcntr, 1) == 100, "counter #1 after + 100 
> SW_INCR");
>   report_info("counter values after 100 SW_INCR #0=%ld #1=%ld",
>   read_regn_el0(pmevcntr, 0), read_regn_el0(pmevcntr, 1));
>   report(read_sysreg(pmovsclr_el0) == 0x1,
> @@ -584,6 +585,7 @@ static void test_sw_incr(void)
>  static void test_chained_counters(void)
>  {
>   uint32_t events[] = {CPU_CYCLES, CHAIN};
> + uint64_t cntr1;
>  
>   if (!satisfy_prerequisites(events, ARRAY_SIZE(events)))
>   return;
> @@ -618,13 +620,16 @@ static void test_chained_counters(void)
>  
>   precise_instrs_loop(22, pmu.pmcr_ro | PMU_PMCR_E);
>   report_info("overflow reg = 0x%lx", read_sysreg(pmovsclr_el0));
> - report(!read_regn_el0(pmevcntr, 1), "CHAIN counter #1 wrapped");
> + cntr1 = (pmu.version < ID_DFR0_PMU_V3_8_5) ? 0 : ALL_SET + 1;
> + report(read_regn_el0(pmevcntr, 1) == cntr1, "CHAIN counter #1 wrapped");

It looks to me like the intention of the test was to check that the counter
programmed with the CHAIN event wraps, judging from the report message.

I think it would be worth keeping that by programming counter #1 with
~0ULL on PMUv3p5 (maybe call it ALL_SET64?) and testing the counter value
against 0. Alternatively, the report message could be modified, with "wrapped"
replaced by "incremented" (or something like that), to avoid confusion.

> +
>   report(read_sysreg(pmovsclr_el0) == 0x3, "overflow on even and odd counters");
>  }
>  
>  static void test_chained_sw_incr(void)
>  {
>   uint32_t events[] = {SW_INCR, CHAIN};
> + uint64_t cntr0, cntr1;
>   int i;
>  
>   if (!satisfy_prerequisites(events, ARRAY_SIZE(events)))
> @@ -665,10 +670,12 @@ static void test_chained_sw_incr(void)
>   write_sysreg(0x1, pmswinc_el0);
>  
>   isb();
> + cntr0 = (pmu.version < ID_DFR0_PMU_V3_8_5) ? 0 : ALL_SET + 1;
> + cntr1 = (pmu.version < ID_DFR0_PMU_V3_8_5) ? 84 : PRE_OVERFLOW + 100;
>   report((read_sysreg(pmovsclr_el0) == 0x3) &&
> - (read_regn_el0(pmevcntr, 1) == 0) &&
> - (read_regn_el0(pmevcntr, 0) == 84),
> - "expected overflows and values after 100 SW_INCR/CHAIN");
> +(read_regn_el0(pmevcntr, 1) == cntr0) &&
> +(read_regn_el0(pmevcntr, 0) == cntr1),

This is hard to parse; it would be better if counter 0 were compared against
cntr0 and counter 1 against cntr1.

Also, same suggestion here, looks like the test wants to check that the
odd-numbered counter wraps around when counting the CHAIN event.

Thanks,
Alex

> +"expected overflows and values after 100 SW_INCR/CHAIN");
>   report_info("overflow=0x%lx, #0=%ld #1=%ld", read_sysreg(pmovsclr_el0),
>   read_regn_el0(pmevcntr, 0), read_regn_el0(pmevcntr, 1));
>  }
> -- 
> 2.39.0.rc0.267.gcb52ba06e7-goog
> 

Re: [PATCH 01/14] KVM: selftests: Define literal to asm constraint in aarch64 as unsigned long

2022-12-13 Thread Philippe Mathieu-Daudé

On 13/12/22 01:16, Sean Christopherson wrote:

Define a literal '0' asm input constraint to aarch64/page_fault_test's
guest_cas() as an unsigned long to make clang happy.

   tools/testing/selftests/kvm/aarch64/page_fault_test.c:120:16: error:
     value size does not match register size specified by the constraint
     and modifier [-Werror,-Wasm-operand-widths]
       :: "r" (0), "r" (TEST_DATA), "r" (guest_test_memory));
               ^
   tools/testing/selftests/kvm/aarch64/page_fault_test.c:119:15: note:
     use constraint modifier "w"
       "casal %0, %1, [%2]\n"
              ^~
              %w0

Fixes: 35c581015712 ("KVM: selftests: aarch64: Add aarch64/page_fault_test")
Cc: Ricardo Koller 
Signed-off-by: Sean Christopherson 
---
  tools/testing/selftests/kvm/aarch64/page_fault_test.c | 2 +-
  1 file changed, 1 insertion(+), 1 deletion(-)


Reviewed-by: Philippe Mathieu-Daudé 

___
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm


Re: [PATCH 07/14] KVM: selftests: Use proper function prototypes in probing code

2022-12-13 Thread Philippe Mathieu-Daudé

On 13/12/22 01:16, Sean Christopherson wrote:

Make the main() functions in the probing code proper prototypes so that
compiling the probing code with more strict flags won't generate false
negatives.

   :1:5: error: function declaration isn’t a prototype 
[-Werror=strict-prototypes]

Signed-off-by: Sean Christopherson 
---
  tools/testing/selftests/kvm/Makefile | 4 ++--
  1 file changed, 2 insertions(+), 2 deletions(-)


Reviewed-by: Philippe Mathieu-Daudé 



Re: [PATCH 05/14] KVM: selftests: Fix a typo in x86-64's kvm_get_cpu_address_width()

2022-12-13 Thread Philippe Mathieu-Daudé

On 13/12/22 01:16, Sean Christopherson wrote:

Fix a == vs. = typo in kvm_get_cpu_address_width() that results in
@pa_bits being left unset if the CPU doesn't support enumerating its
MAX_PHY_ADDR.  Flagged by clang's unused-value warning.

lib/x86_64/processor.c:1034:51: warning: expression result unused 
[-Wunused-value]
 *pa_bits == kvm_cpu_has(X86_FEATURE_PAE) ? 36 : 32;

Fixes: 3bd396353d18 ("KVM: selftests: Add X86_FEATURE_PAE and use it calc "fallback" 
MAXPHYADDR")
Signed-off-by: Sean Christopherson 
---
  tools/testing/selftests/kvm/lib/x86_64/processor.c | 2 +-
  1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tools/testing/selftests/kvm/lib/x86_64/processor.c 
b/tools/testing/selftests/kvm/lib/x86_64/processor.c
index c4d368d56cfe..acfa1d01e7df 100644
--- a/tools/testing/selftests/kvm/lib/x86_64/processor.c
+++ b/tools/testing/selftests/kvm/lib/x86_64/processor.c
@@ -1031,7 +1031,7 @@ bool is_amd_cpu(void)
  void kvm_get_cpu_address_width(unsigned int *pa_bits, unsigned int *va_bits)
  {
if (!kvm_cpu_has_p(X86_PROPERTY_MAX_PHY_ADDR)) {
-   *pa_bits == kvm_cpu_has(X86_FEATURE_PAE) ? 36 : 32;
+   *pa_bits = kvm_cpu_has(X86_FEATURE_PAE) ? 36 : 32;


:)

Reviewed-by: Philippe Mathieu-Daudé 



Re: [PATCH 03/14] KVM: selftests: Fix divide-by-zero bug in memslot_perf_test

2022-12-13 Thread Philippe Mathieu-Daudé

On 13/12/22 01:16, Sean Christopherson wrote:

Check that the number of pages per slot is non-zero in get_max_slots()
prior to computing the remaining number of pages.  clang generates code
that uses an actual DIV for calculating the remaining, which causes a #DE
if the total number of pages is less than the number of slots.

   traps: memslot_perf_te[97611] trap divide error ip:4030c4 sp:7ffd18ae58f0
  error:0 in memslot_perf_test[401000+cb000]

Fixes: a69170c65acd ("KVM: selftests: memslot_perf_test: Report optimal memory 
slots")
Signed-off-by: Sean Christopherson 
---
  tools/testing/selftests/kvm/memslot_perf_test.c | 3 +++
  1 file changed, 3 insertions(+)


Reviewed-by: Philippe Mathieu-Daudé 

