Vitaly,

On Mon, Oct 26, 2020 at 09:49:16AM +0100, Vitaly Kuznetsov wrote:
> Currently, KVM doesn't provide an API to make atomic updates to memmap when
> the change touches more than one memory slot, e.g. in case we'd like to
> punch a hole in an existing slot.
>
> Reports are that multi-CPU Q35 VMs booted with OVMF sometimes print something
> like
>
> !!!! X64 Exception Type - 0E(#PF - Page-Fault)  CPU Apic ID - 00000003 !!!!
> ExceptionData - 0000000000000010  I:1 R:0 U:0 W:0 P:0 PK:0 SS:0 SGX:0
> RIP  - 000000007E35FAB6, CS  - 0000000000000038, RFLAGS - 0000000000010006
> RAX  - 0000000000000000, RCX - 000000007E3598F2, RDX - 00000000078BFBFF
> ...
>
> The problem seems to be that TSEG manipulations on one vCPU are not atomic
> from other vCPUs' views. In particular, here's the strace:
>
> Initial creation of the 'problematic' slot:
>
> 10085 ioctl(13, KVM_SET_USER_MEMORY_REGION, {slot=6, flags=0,
>         guest_phys_addr=0x100000, memory_size=2146435072,
>         userspace_addr=0x7fb89bf00000}) = 0
>
> ... and then the update (caused by e.g. mch_update_smram()) later:
>
> 10090 ioctl(13, KVM_SET_USER_MEMORY_REGION, {slot=6, flags=0,
>         guest_phys_addr=0x100000, memory_size=0,
>         userspace_addr=0x7fb89bf00000}) = 0
> 10090 ioctl(13, KVM_SET_USER_MEMORY_REGION, {slot=6, flags=0,
>         guest_phys_addr=0x100000, memory_size=2129657856,
>         userspace_addr=0x7fb89bf00000}) = 0
>
> In case KVM has to handle any event on a different vCPU in between these
> two calls, the #PF will get triggered.
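For reference, the delete-then-recreate pair in the strace above boils
down to roughly the following on the userspace side (a minimal sketch
only; vm_fd and the values are lifted from the trace, not from the
real QEMU code):

  #include <linux/kvm.h>
  #include <sys/ioctl.h>

  /* Sketch of the non-atomic memslot update from the strace above.
   * vm_fd is assumed to be the KVM VM fd (13 in the trace); the values
   * are copied verbatim from the two ioctls. Not the real QEMU code. */
  static int punch_hole_non_atomic(int vm_fd)
  {
          struct kvm_userspace_memory_region region = {
                  .slot            = 6,
                  .flags           = 0,
                  .guest_phys_addr = 0x100000,
                  .memory_size     = 0,        /* step 1: delete the slot */
                  .userspace_addr  = 0x7fb89bf00000,
          };

          if (ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION, &region) < 0)
                  return -1;

          /*
           * Window of non-atomicity: a vCPU touching the old range
           * here finds no memslot at all.
           */

          region.memory_size = 2129657856;     /* step 2: re-create, smaller */
          return ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION, &region);
  }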
A pure question: why a #PF? Is it injected into the guest?

My understanding (which could be wrong) is that the whole thing should
start with a vCPU page fault on the removed range; then, when KVM finds
that the accessed memory is not within any valid memslot (since the slot
has been deleted but not yet added back), it'll become a user exit back
to QEMU, assuming it's an MMIO access. Or am I wrong somewhere?
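In other words, what I'd expect userspace to observe is something like
the below (a sketch only, assuming vcpu_fd and run are a vCPU fd and
its mmap'ed struct kvm_run; not QEMU's actual loop):

  #include <linux/kvm.h>
  #include <sys/ioctl.h>

  static void run_once(int vcpu_fd, struct kvm_run *run)
  {
          ioctl(vcpu_fd, KVM_RUN, 0);

          switch (run->exit_reason) {
          case KVM_EXIT_MMIO:
                  /*
                   * The guest accessed a GPA with no memslot behind it
                   * (run->mmio.phys_addr would fall inside the deleted
                   * range). This is the exit I'd expect instead of a
                   * #PF injected into the guest.
                   */
                  break;
          default:
                  break;
          }
  }

Thanks,

-- 
Peter Xu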