> On 5 Jul 2019, at 15:14, Paolo Bonzini <pbonz...@redhat.com> wrote:
> 
> kvm-unit-tests were adjusted to match bare metal behavior, but KVM
> itself was not doing what bare metal does; fix that.
> 
> Signed-off-by: Paolo Bonzini <pbonz...@redhat.com>
> ---
> arch/x86/kvm/lapic.c | 6 +++++-
> 1 file changed, 5 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c
> index d6ca5c4f29f1..2e4470f2685a 100644
> --- a/arch/x86/kvm/lapic.c
> +++ b/arch/x86/kvm/lapic.c
> @@ -1318,7 +1318,7 @@ int kvm_lapic_reg_read(struct kvm_lapic *apic, u32 offset, int len,
>       unsigned char alignment = offset & 0xf;
>       u32 result;
>       /* this bitmask has a bit cleared for each reserved register */
> -     static const u64 rmask = 0x43ff01ffffffe70cULL;
> +     u64 rmask = 0x43ff01ffffffe70cULL;

Why not rename this to “used_bits_mask” and compute it from the register-offset macros?
That seems a lot nicer than carrying a pre-calculated magic constant.
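
Something like this rough, untested sketch (the APIC_REG_MASK()/APIC_REGS_MASK()
helpers are invented here just for illustration; the register offsets and
APIC_ISR_NR come from asm/apicdef.h):

/* One bit per 0x10-aligned APIC register offset */
#define APIC_REG_MASK(reg)		(1ULL << ((reg) >> 4))
/* A bank of @count consecutive registers starting at @first (ISR/TMR/IRR) */
#define APIC_REGS_MASK(first, count) \
	(APIC_REG_MASK(first) * ((1ULL << (count)) - 1))

	/* this bitmask has a bit set for each readable register */
	u64 used_bits_mask = APIC_REG_MASK(APIC_ID) |
		APIC_REG_MASK(APIC_LVR) |
		APIC_REG_MASK(APIC_TASKPRI) |
		APIC_REG_MASK(APIC_ARBPRI) |
		APIC_REG_MASK(APIC_PROCPRI) |
		APIC_REG_MASK(APIC_LDR) |
		APIC_REG_MASK(APIC_DFR) |
		APIC_REG_MASK(APIC_SPIV) |
		APIC_REGS_MASK(APIC_ISR, APIC_ISR_NR) |
		APIC_REGS_MASK(APIC_TMR, APIC_ISR_NR) |
		APIC_REGS_MASK(APIC_IRR, APIC_ISR_NR) |
		APIC_REG_MASK(APIC_ESR) |
		APIC_REG_MASK(APIC_ICR) |
		APIC_REG_MASK(APIC_ICR2) |
		APIC_REG_MASK(APIC_LVTT) |
		APIC_REG_MASK(APIC_LVTTHMR) |
		APIC_REG_MASK(APIC_LVTPC) |
		APIC_REG_MASK(APIC_LVT0) |
		APIC_REG_MASK(APIC_LVT1) |
		APIC_REG_MASK(APIC_LVTERR) |
		APIC_REG_MASK(APIC_TMICT) |
		APIC_REG_MASK(APIC_TMCCT) |
		APIC_REG_MASK(APIC_TDCR);

If I expanded that correctly it evaluates to the same 0x43ff01ffffffe70cULL, and
the x2APIC special case below becomes
used_bits_mask &= ~APIC_REG_MASK(APIC_ARBPRI), which is much easier to review.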

-Liran

> 
>       if ((alignment + len) > 4) {
>               apic_debug("KVM_APIC_READ: alignment error %x %d\n",
> @@ -1326,6 +1326,10 @@ int kvm_lapic_reg_read(struct kvm_lapic *apic, u32 offset, int len,
>               return 1;
>       }
> 
> +     /* ARBPRI is also reserved on x2APIC */
> +     if (apic_x2apic_mode(apic))
> +             rmask &= ~(1 << (APIC_ARBPRI >> 4));
> +
>       if (offset > 0x3f0 || !(rmask & (1ULL << (offset >> 4)))) {
>               apic_debug("KVM_APIC_READ: read reserved register %x\n",
>                          offset);
> -- 
> 1.8.3.1
> 
