Re: general protection fault in __apic_accept_irq
On Thu, 5 Sep 2019 at 16:53, syzbot wrote:
>
> Hello,
>
> syzbot found the following crash on:
>
> HEAD commit:    3b47fd5c Merge tag 'nfs-for-5.3-4' of git://git.linux-nfs...
> git tree:       upstream
> console output: https://syzkaller.appspot.com/x/log.txt?x=124af12a60
> kernel config:  https://syzkaller.appspot.com/x/.config?x=144488c6c6c6d2b6
> dashboard link: https://syzkaller.appspot.com/bug?extid=dff25ee91f0c7d5c1695
> compiler:       clang version 9.0.0 (/home/glider/llvm/clang 80fee25776c2fb61e74c1ecb1a523375c2500b69)
> syz repro:      https://syzkaller.appspot.com/x/repro.syz?x=1095467660
> C reproducer:   https://syzkaller.appspot.com/x/repro.c?x=1752fe0a60
>
> The bug was bisected to:
>
> commit 0aa67255f54df192d29aec7ac6abb1249d45bda7
> Author: Vitaly Kuznetsov
> Date:   Mon Nov 26 15:47:29 2018 +
>
>     x86/hyper-v: move synic/stimer control structures definitions to
>     hyperv-tlfs.h
>
> bisection log:  https://syzkaller.appspot.com/x/bisect.txt?x=156128c160
> console output: https://syzkaller.appspot.com/x/log.txt?x=136128c160
>
> IMPORTANT: if you fix the bug, please add the following tag to the commit:
> Reported-by: syzbot+dff25ee91f0c7d5c1...@syzkaller.appspotmail.com
> Fixes: 0aa67255f54d ("x86/hyper-v: move synic/stimer control structures
> definitions to hyperv-tlfs.h")
>
> kvm [9347]: vcpu0, guest rIP: 0xcc Hyper-V uhandled wrmsr: 0x4004 data 0x94
> kvm [9347]: vcpu0, guest rIP: 0xcc Hyper-V uhandled wrmsr: 0x4004 data 0x48c
> kvm [9347]: vcpu0, guest rIP: 0xcc Hyper-V uhandled wrmsr: 0x4004 data 0x4ac
> kvm [9347]: vcpu0, guest rIP: 0xcc Hyper-V uhandled wrmsr: 0x4005 data 0x1520
> kvm [9347]: vcpu0, guest rIP: 0xcc Hyper-V uhandled wrmsr: 0x4006 data 0x15d4
> kvm [9347]: vcpu0, guest rIP: 0xcc Hyper-V uhandled wrmsr: 0x4007 data 0x15c4
> kasan: CONFIG_KASAN_INLINE enabled
> kasan: GPF could be caused by NULL-ptr deref or user memory access
> general protection fault: [#1] PREEMPT SMP KASAN
> CPU: 0 PID: 9347 Comm: syz-executor665 Not tainted 5.3.0-rc7+ #0
> Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS
> Google 01/01/2011
> RIP: 0010:__apic_accept_irq+0x46/0x740 arch/x86/kvm/lapic.c:1029

Thanks for the report. I found the root cause and will send a patch soon.
> Code: 89 55 cc 41 89 f4 48 89 fb 49 bd 00 00 00 00 00 fc ff df e8 5c c9 5d
> 00 48 89 5d c0 4c 8d b3 98 00 00 00 4d 89 f7 49 c1 ef 03 <43> 80 3c 2f 00
> 74 08 4c 89 f7 e8 6b c4 96 00 49 8b 06 48 89 45 d0
> RSP: 0018:88808a30f9b0 EFLAGS: 00010202
> RAX: 8115c384 RBX: RCX: 8880977f2140
> RDX: RSI: RDI:
> RBP: 88808a30fa10 R08: R09:
> R10: ed1011461f64 R11: R12:
> R13: dc00 R14: 0098 R15: 0013
> FS:  55e35880() GS:8880aea0() knlGS:
> CS:  0010 DS: ES: CR0: 80050033
> CR2: CR3: 8f96d000 CR4: 001426f0
> DR0: DR1: DR2:
> DR3: DR6: fffe0ff0 DR7: 0400
> Call Trace:
>  kvm_apic_set_irq+0xb4/0x140 arch/x86/kvm/lapic.c:558
>  stimer_notify_direct arch/x86/kvm/hyperv.c:648 [inline]
>  stimer_expiration arch/x86/kvm/hyperv.c:659 [inline]
>  kvm_hv_process_stimers+0x594/0x1650 arch/x86/kvm/hyperv.c:686
>  vcpu_enter_guest+0x2b2a/0x54b0 arch/x86/kvm/x86.c:7896
>  vcpu_run+0x393/0xd40 arch/x86/kvm/x86.c:8152
>  kvm_arch_vcpu_ioctl_run+0x636/0x900 arch/x86/kvm/x86.c:8360
>  kvm_vcpu_ioctl+0x6cf/0xaf0 arch/x86/kvm/../../../virt/kvm/kvm_main.c:2765
>  do_vfs_ioctl+0x744/0x1730 fs/ioctl.c:46
>  ksys_ioctl fs/ioctl.c:713 [inline]
>  __do_sys_ioctl fs/ioctl.c:720 [inline]
>  __se_sys_ioctl fs/ioctl.c:718 [inline]
>  __x64_sys_ioctl+0xe3/0x120 fs/ioctl.c:718
>  do_syscall_64+0xfe/0x140 arch/x86/entry/common.c:296
>  entry_SYSCALL_64_after_hwframe+0x49/0xbe
> RIP: 0033:0x442a19
> Code: 18 89 d0 c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 00 48 89 f8 48 89 f7
> 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff
> ff 0f 83 1b 0c fc ff c3 66 2e 0f 1f 84 00 00 00 00
> RSP: 002b:7ffca3d2a208 EFLAGS: 0246 ORIG_RAX: 0010
> RAX: ffda RBX: 004002c8 RCX: 00442a19
> RDX: RSI: ae80 RDI: 0005
> RBP: 006cd018 R08: 004002c8 R09: 004002c8
> R10: 004002c8 R11: 0246 R12: 00403ac0
> R13: 00403b50 R14: R15:
> Modules linked in:
> ---[ end trace 8515c4c18eb55117 ]---
> RIP: 0010:__apic_accept_irq+0x46/0x740 arch/x86/kvm/lapic.c:1029
> Code: 89 55 cc 41 89 f4 48 89 fb 49 bd 00 00 00 00 00 fc ff df e8 5c c9 5d
> 00 48 89 5d c0 4c 8d b3 98 00 00 00 4d 89 f7 49 c1 ef 03 <43> 80 3c 2f 00
> 74 08 4c
Re: general protection fault in __apic_accept_irq
On Thu, 5 Sep 2019 at 21:11, Vitaly Kuznetsov wrote:
>
> Wanpeng Li writes:
>
> > On Thu, 5 Sep 2019 at 16:53, syzbot
> > wrote:
> >>
> >> Hello,
> >>
> >> syzbot found the following crash on:
> >>
> >> HEAD commit:    3b47fd5c Merge tag 'nfs-for-5.3-4' of
> >> git://git.linux-nfs...
> >> git tree:       upstream
> >> console output: https://syzkaller.appspot.com/x/log.txt?x=124af12a60
> >> kernel config:  https://syzkaller.appspot.com/x/.config?x=144488c6c6c6d2b6
> >> dashboard link: https://syzkaller.appspot.com/bug?extid=dff25ee91f0c7d5c1695
> >> compiler:       clang version 9.0.0 (/home/glider/llvm/clang
> >> 80fee25776c2fb61e74c1ecb1a523375c2500b69)
> >> syz repro:      https://syzkaller.appspot.com/x/repro.syz?x=1095467660
> >> C reproducer:   https://syzkaller.appspot.com/x/repro.c?x=1752fe0a60
> >>
> >> The bug was bisected to:
> >>
> >> commit 0aa67255f54df192d29aec7ac6abb1249d45bda7
> >> Author: Vitaly Kuznetsov
> >> Date:   Mon Nov 26 15:47:29 2018 +
> >>
> >>     x86/hyper-v: move synic/stimer control structures definitions to
> >>     hyperv-tlfs.h
> >>
> >> bisection log:  https://syzkaller.appspot.com/x/bisect.txt?x=156128c160
> >> console output: https://syzkaller.appspot.com/x/log.txt?x=136128c160
> >>
> >> IMPORTANT: if you fix the bug, please add the following tag to the commit:
> >> Reported-by: syzbot+dff25ee91f0c7d5c1...@syzkaller.appspotmail.com
> >> Fixes: 0aa67255f54d ("x86/hyper-v: move synic/stimer control structures
> >> definitions to hyperv-tlfs.h")
> >>
> >> kvm [9347]: vcpu0, guest rIP: 0xcc Hyper-V uhandled wrmsr: 0x4004 data 0x94
> >> kvm [9347]: vcpu0, guest rIP: 0xcc Hyper-V uhandled wrmsr: 0x4004 data 0x48c
> >> kvm [9347]: vcpu0, guest rIP: 0xcc Hyper-V uhandled wrmsr: 0x4004 data 0x4ac
> >> kvm [9347]: vcpu0, guest rIP: 0xcc Hyper-V uhandled wrmsr: 0x4005 data 0x1520
> >> kvm [9347]: vcpu0, guest rIP: 0xcc Hyper-V uhandled wrmsr: 0x4006 data 0x15d4
> >> kvm [9347]: vcpu0, guest rIP: 0xcc Hyper-V uhandled wrmsr: 0x4007 data 0x15c4
> >> kasan: CONFIG_KASAN_INLINE enabled
> >> kasan: GPF could be caused by NULL-ptr deref or user memory access
> >> general protection fault: [#1] PREEMPT SMP KASAN
> >> CPU: 0 PID: 9347 Comm: syz-executor665 Not tainted 5.3.0-rc7+ #0
> >> Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS
> >> Google 01/01/2011
> >> RIP: 0010:__apic_accept_irq+0x46/0x740 arch/x86/kvm/lapic.c:1029
> >
> > Thanks for the report. I found the root cause and will send a patch soon.
>
> I'm really interested in how any issue can be caused by 0aa67255f54d as
> we just moved some definitions from a c file to a common header... (ok,
> we did more than that, some structures gained '__packed' but it all
> still seems legitimate to me and I can't recall any problems with
> genuine Hyper-V...)

Yes, the bisection result is a false positive; we can focus on fixing the bug.

    Wanpeng
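The call trace above already shows the shape of the bug: stimer_notify_direct() hands an expired SynIC timer straight to kvm_apic_set_irq(), and __apic_accept_irq() then dereferences vcpu->arch.apic, which is NULL when userspace never created an in-kernel local APIC. Below is a minimal sketch of the kind of guard that would close the hole, assuming the 5.3-rc7 shape of arch/x86/kvm/hyperv.c; it is an illustration only, not the patch that was actually sent.

/*
 * Sketch only -- not the fix that was eventually merged. Assumes the
 * 5.3-rc7 stimer_notify_direct(): bail out before kvm_apic_set_irq()
 * when there is no in-kernel LAPIC, since __apic_accept_irq() would
 * otherwise dereference the NULL vcpu->arch.apic. Returns 0 on
 * successful delivery, non-zero otherwise, matching the caller's
 * convention in stimer_expiration().
 */
static int stimer_notify_direct(struct kvm_vcpu_hv_stimer *stimer)
{
	struct kvm_vcpu *vcpu = stimer_to_vcpu(stimer);
	struct kvm_lapic_irq irq = {
		.delivery_mode = APIC_DM_FIXED,
		.vector = stimer->config.apic_vector
	};

	/* No in-kernel LAPIC: nothing to deliver the interrupt to. */
	if (!lapic_in_kernel(vcpu))
		return -ENOENT;

	return !kvm_apic_set_irq(vcpu, &irq, NULL);
}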
Re: [PATCH v10 0/9] Hyper-V: paravirtualized remote TLB flushing and hypercall improvements
2017-08-03 0:09 GMT+08:00 Vitaly Kuznetsov :
> Changes since v9:
> - Rebase to 4.13-rc3.
> - Drop PATCH1 as it was already taken by Greg to char-misc tree. There're no
>   functional dependencies on this patch so the series can go through a
>   different tree (and it actually belongs to x86 if I got Ingo's comment right).
> - Add in missing void return type in PATCH1 [Colin King, Ingo Molnar, Greg KH]
> - A few minor fixes in what is now PATCH7: add pr_fmt, tiny style fix in
>   hyperv_flush_tlb_others() [Andy Shevchenko]
> - Fix "error: implicit declaration of function 'virt_to_phys'" in PATCH2
>   reported by kbuild test robot (#include )
> - Add Steven's 'Reviewed-by:' to PATCH9.
>
> Original description:
>
> Hyper-V supports hypercalls for doing local and remote TLB flushing and
> gives its guests hints when using a hypercall is preferred. While doing
> hypercalls for local TLB flushes is probably not practical (and is not
> being suggested by modern Hyper-V versions), remote TLB flush with a
> hypercall brings significant improvement.
>
> To test the series I wrote a special 'TLB trasher': on a 16 vCPU guest I
> was creating 32 threads which were doing 10 mmap/munmaps each on some
> big file. Here are the results:
>
> Before:
> # time ./pthread_mmap ./randfile
> real    3m33.118s
> user    0m3.698s
> sys     3m16.624s
>
> After:
> # time ./pthread_mmap ./randfile
> real    2m19.920s
> user    0m2.662s
> sys     2m9.948s
>
> This series brings a number of small improvements along the way: fast
> hypercall implementation and using it for event signaling, rep hypercalls
> implementation, and a hyperv tracing subsystem (which only traces the newly
> added remote TLB flush for now).

Hi Vitaly,

Could you attach your benchmark? I'm interested in trying the
implementation in paravirt KVM.

Regards,
Wanpeng Li
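For readers who want to try this before the attachment surfaces, here is a hedged reconstruction of such a 'TLB trasher' from the description above: 32 threads, each looping over mmap/touch/munmap of a shared big file, so every munmap() forces a TLB shootdown on the other CPUs the process has run on. The mapping size and per-thread iteration count are guesses (the archive garbled the per-thread count to "10"), and the file is whatever big file you pass as argv[1].

/*
 * pthread_mmap.c -- hedged reconstruction of the 'TLB trasher' above.
 * Each thread maps a chunk of the given file, touches it so a TLB
 * entry exists, then unmaps it; munmap() triggers a remote TLB flush.
 * NTHREADS matches the quoted description; ITERATIONS and MAP_LEN
 * are guesses.
 */
#include <fcntl.h>
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <unistd.h>

#define NTHREADS   32
#define ITERATIONS 100000
#define MAP_LEN    (1 << 20)	/* 1 MiB per mapping */

static int fd;

static void *thrash(void *arg)
{
	(void)arg;
	for (long i = 0; i < ITERATIONS; i++) {
		char *p = mmap(NULL, MAP_LEN, PROT_READ, MAP_SHARED, fd, 0);

		if (p == MAP_FAILED) {
			perror("mmap");
			exit(1);
		}
		(void)*(volatile char *)p;	/* populate a TLB entry */
		munmap(p, MAP_LEN);		/* remote TLB shootdown here */
	}
	return NULL;
}

int main(int argc, char **argv)
{
	pthread_t tids[NTHREADS];
	int i;

	if (argc < 2 || (fd = open(argv[1], O_RDONLY)) < 0) {
		fprintf(stderr, "usage: %s <bigfile>\n", argv[0]);
		return 1;
	}
	for (i = 0; i < NTHREADS; i++)
		pthread_create(&tids[i], NULL, thrash, NULL);
	for (i = 0; i < NTHREADS; i++)
		pthread_join(tids[i], NULL);
	return 0;
}

Build with "gcc -O2 -pthread -o pthread_mmap pthread_mmap.c" and run it exactly as in the quoted results: "time ./pthread_mmap ./randfile", with ./randfile at least one mapping long.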
Re: [PATCH v10 0/9] Hyper-V: paravirtualized remote TLB flushing and hypercall improvements
2017-11-06 17:14 GMT+08:00 Vitaly Kuznetsov :
> Wanpeng Li writes:
>
>> 2017-08-03 0:09 GMT+08:00 Vitaly Kuznetsov :
>>> Changes since v9:
>>> - Rebase to 4.13-rc3.
>>> - Drop PATCH1 as it was already taken by Greg to char-misc tree. There're no
>>>   functional dependencies on this patch so the series can go through a
>>>   different tree (and it actually belongs to x86 if I got Ingo's comment right).
>>> - Add in missing void return type in PATCH1 [Colin King, Ingo Molnar, Greg KH]
>>> - A few minor fixes in what is now PATCH7: add pr_fmt, tiny style fix in
>>>   hyperv_flush_tlb_others() [Andy Shevchenko]
>>> - Fix "error: implicit declaration of function 'virt_to_phys'" in PATCH2
>>>   reported by kbuild test robot (#include )
>>> - Add Steven's 'Reviewed-by:' to PATCH9.
>>>
>>> Original description:
>>>
>>> Hyper-V supports hypercalls for doing local and remote TLB flushing and
>>> gives its guests hints when using a hypercall is preferred. While doing
>>> hypercalls for local TLB flushes is probably not practical (and is not
>>> being suggested by modern Hyper-V versions), remote TLB flush with a
>>> hypercall brings significant improvement.
>>>
>>> To test the series I wrote a special 'TLB trasher': on a 16 vCPU guest I
>>> was creating 32 threads which were doing 10 mmap/munmaps each on some
>>> big file. Here are the results:
>>>
>>> Before:
>>> # time ./pthread_mmap ./randfile
>>> real    3m33.118s
>>> user    0m3.698s
>>> sys     3m16.624s
>>>
>>> After:
>>> # time ./pthread_mmap ./randfile
>>> real    2m19.920s
>>> user    0m2.662s
>>> sys     2m9.948s
>>>
>>> This series brings a number of small improvements along the way: fast
>>> hypercall implementation and using it for event signaling, rep hypercalls
>>> implementation, and a hyperv tracing subsystem (which only traces the newly
>>> added remote TLB flush for now).
>>
>> Hi Vitaly,
>>
>> Could you attach your benchmark? I'm interested in trying the
>> implementation in paravirt KVM.
>
> Oh, this would be cool) I briefly discussed the idea with Radim (one of
> KVM maintainers) during the last KVM Forum and he wasn't opposed to the
> idea. Need to talk to Paolo too. Good thing is that we have everything

I talked with Paolo today and he pointed me to this feature, so I believe
he likes it. :) In addition, I searched the Hypervisor Top Level Functional
Specification v5.0b.pdf from
https://docs.microsoft.com/en-us/virtualization/hyper-v-on-windows/reference/tlfs
but didn't find a section introducing the Hyper-V paravirtualized remote TLB
flushing and hypercall machinery; could you point it out?

Regards,
Wanpeng Li

> in place for guests now (HAVE_RCU_TABLE_FREE is enabled globally on x86).
>
> Please see the microbenchmark attached. Adjust the defines at the beginning
> to match your needs. It is not anything smart, basically just a TLB
> trasher.
>
> In theory, the best result is achieved when we're overcommitting the host
> by running multiple vCPUs on each pCPU. In this case PV TLB flush avoids
> touching vCPUs which are not scheduled and avoids the wait on the main
> CPU.
>
> --
> Vitaly
Re: [PATCH v10 0/9] Hyper-V: paravirtualized remote TLB flushing and hypercall improvements
2017-11-06 18:10 GMT+08:00 Vitaly Kuznetsov :
> Wanpeng Li writes:
>
>> 2017-11-06 17:14 GMT+08:00 Vitaly Kuznetsov :
>>> Wanpeng Li writes:
>>>
>>>> 2017-08-03 0:09 GMT+08:00 Vitaly Kuznetsov :
>>>>> Changes since v9:
>>>>> - Rebase to 4.13-rc3.
>>>>> - Drop PATCH1 as it was already taken by Greg to char-misc tree. There're no
>>>>>   functional dependencies on this patch so the series can go through a
>>>>>   different tree (and it actually belongs to x86 if I got Ingo's comment right).
>>>>> - Add in missing void return type in PATCH1 [Colin King, Ingo Molnar, Greg KH]
>>>>> - A few minor fixes in what is now PATCH7: add pr_fmt, tiny style fix in
>>>>>   hyperv_flush_tlb_others() [Andy Shevchenko]
>>>>> - Fix "error: implicit declaration of function 'virt_to_phys'" in PATCH2
>>>>>   reported by kbuild test robot (#include )
>>>>> - Add Steven's 'Reviewed-by:' to PATCH9.
>>>>>
>>>>> Original description:
>>>>>
>>>>> Hyper-V supports hypercalls for doing local and remote TLB flushing and
>>>>> gives its guests hints when using a hypercall is preferred. While doing
>>>>> hypercalls for local TLB flushes is probably not practical (and is not
>>>>> being suggested by modern Hyper-V versions), remote TLB flush with a
>>>>> hypercall brings significant improvement.
>>>>>
>>>>> To test the series I wrote a special 'TLB trasher': on a 16 vCPU guest I
>>>>> was creating 32 threads which were doing 10 mmap/munmaps each on some
>>>>> big file. Here are the results:
>>>>>
>>>>> Before:
>>>>> # time ./pthread_mmap ./randfile
>>>>> real    3m33.118s
>>>>> user    0m3.698s
>>>>> sys     3m16.624s
>>>>>
>>>>> After:
>>>>> # time ./pthread_mmap ./randfile
>>>>> real    2m19.920s
>>>>> user    0m2.662s
>>>>> sys     2m9.948s
>>>>>
>>>>> This series brings a number of small improvements along the way: fast
>>>>> hypercall implementation and using it for event signaling, rep hypercalls
>>>>> implementation, and a hyperv tracing subsystem (which only traces the newly
>>>>> added remote TLB flush for now).
>>>>
>>>> Hi Vitaly,
>>>>
>>>> Could you attach your benchmark? I'm interested in trying the
>>>> implementation in paravirt KVM.
>>>
>>> Oh, this would be cool) I briefly discussed the idea with Radim (one of
>>> KVM maintainers) during the last KVM Forum and he wasn't opposed to the
>>> idea. Need to talk to Paolo too. Good thing is that we have everything
>>
>> I talked with Paolo today and he pointed me to this feature, so I believe
>> he likes it. :) In addition, I searched the Hypervisor Top Level Functional
>> Specification v5.0b.pdf from
>> https://docs.microsoft.com/en-us/virtualization/hyper-v-on-windows/reference/tlfs
>> but didn't find a section introducing the Hyper-V paravirtualized remote TLB
>> flushing and hypercall machinery; could you point it out?
>
> It's there, search for
> HvFlushVirtualAddressSpace/HvFlushVirtualAddressSpaceEx and
> HvFlushVirtualAddressList/HvFlushVirtualAddressListEx.

Got it, thanks.

Regards,
Wanpeng Li
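For anyone else hunting through the spec: the input layouts of those hypercalls are also visible in the kernel's arch/x86/include/asm/hyperv-tlfs.h (the header that the bisected commit in the first thread above later created). A sketch follows; field names are the kernel's, and the TLFS remains the authority on exact semantics, so treat details here as an approximation.

/* Sketch of the HvFlushVirtualAddressSpace/List hypercall inputs as
 * Linux encodes them (kernel u64 types assumed). The Ex variants swap
 * the flat 64-bit processor mask for a sparse VP set so more than 64
 * virtual processors can be targeted.
 */
struct hv_vpset {
	u64 format;
	u64 valid_bank_mask;
	u64 bank_contents[];
};

struct hv_tlb_flush {		/* non-Ex: up to 64 virtual processors */
	u64 address_space;	/* which address space to flush */
	u64 flags;
	u64 processor_mask;	/* one bit per target VP */
	u64 gva_list[];		/* rep list, used by the ...List variant */
};

struct hv_tlb_flush_ex {	/* Ex: sparse VP set for >64 VPs */
	u64 address_space;
	u64 flags;
	struct hv_vpset hv_vp_set;
	u64 gva_list[];
};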
Re: [PATCH 0/4] x86/hyper-v: optimize PV IPIs
Hi Vitaly, (fix my reply mess this time)

On Sat, 23 Jun 2018 at 01:09, Vitaly Kuznetsov wrote:
>
> When reviewing my "x86/hyper-v: use cheaper HVCALL_FLUSH_VIRTUAL_ADDRESS_
> {LIST,SPACE} hypercalls when possible" patch Michael suggested to apply the
> same idea to PV IPIs. Here we go!
>
> Despite what the Hyper-V TLFS says about the HVCALL_SEND_IPI hypercall, it
> can actually be 'fast' (passing parameters through registers). Use that too.
>
> This series can collide with my "KVM: x86: hyperv: PV IPI support for
> Windows guests" series as I rename the ipi_arg_non_ex/ipi_arg_ex structures
> there. Depending on which one gets in first we may need to do tiny
> adjustments.

Now that Hyper-V PV TLB flush has been merged, are there any other obvious
multicast IPI scenarios? QEMU has supported interrupt remapping for two
years now, so I think a Windows guest can switch to cluster mode after
entering x2APIC and send IPIs per cluster.

In addition, could you also post benchmark results for this PV IPI
optimization, even though it also fixes the bug you mentioned above? I can
post a variant for Linux guest PV IPI if it also makes sense. :)

Regards,
Wanpeng Li
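On the 'fast' point above: the reason HVCALL_SEND_IPI can pass its parameters through registers is that its fixed-format input is only 16 bytes. A sketch of the inputs as the kernel headers shape them (names from hyperv-tlfs.h; treat the exact layout as an assumption and defer to the TLFS):

/* HVCALL_SEND_IPI input: a vector plus a 64-VP mask, small enough for
 * the 'fast' register-based hypercall calling convention.
 */
struct hv_send_ipi {
	u32 vector;
	u32 reserved;
	u64 cpu_mask;
};

/* HVCALL_SEND_IPI_EX input for >64 VPs; the variable-length hv_vpset
 * (same shape as in the TLB-flush sketch earlier) rules out the fast
 * convention here.
 */
struct hv_send_ipi_ex {
	u32 vector;
	u32 reserved;
	struct hv_vpset vp_set;
};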
Re: [PATCH 0/4] x86/hyper-v: optimize PV IPIs
On Wed, 27 Jun 2018 at 17:25, Vitaly Kuznetsov wrote:
>
> Wanpeng Li writes:
>
> > Hi Vitaly, (fix my reply mess this time)
> > On Sat, 23 Jun 2018 at 01:09, Vitaly Kuznetsov wrote:
> >>
> >> When reviewing my "x86/hyper-v: use cheaper HVCALL_FLUSH_VIRTUAL_ADDRESS_
> >> {LIST,SPACE} hypercalls when possible" patch Michael suggested to apply the
> >> same idea to PV IPIs. Here we go!
> >>
> >> Despite what the Hyper-V TLFS says about the HVCALL_SEND_IPI hypercall, it
> >> can actually be 'fast' (passing parameters through registers). Use that too.
> >>
> >> This series can collide with my "KVM: x86: hyperv: PV IPI support for
> >> Windows guests" series as I rename the ipi_arg_non_ex/ipi_arg_ex structures
> >> there. Depending on which one gets in first we may need to do tiny
> >> adjustments.
> >
> > Now that Hyper-V PV TLB flush has been merged, are there any other obvious
> > multicast IPI scenarios? QEMU has supported interrupt remapping for two
> > years now, so I think a Windows guest can switch to cluster mode after
> > entering x2APIC and send IPIs per cluster.
>
> I got confused, which of my patch series are you actually looking at?
> :-)

Yeah, I originally wanted to reply to the "KVM: x86: hyperv: PV IPI support
for Windows guests" thread you sent to the kvm ML, and replied to this one
by mistake since the subjects are similar.

> When we manifest ourselves as Hyper-V, Windows 'forgets' about x2apic
> mode: Hyper-V has a concept of a 'Synthetic interrupt controller' - an
> xapic extension which we also support in KVM. I don't really know any
> obvious scenarios for mass IPIs in Windows besides TLB flush but I'm
> worried they may exist. Without PV IPIs any such attempt will likely
> lead to a crash.
>
> In general, I do care more about completeness and correctness of our
> Hyper-V emulation at this point: Windows is only being tested on 'real'
> Hyper-Vs so when we emulate a subset of enlightenments we're on our own
> when something is not working. It is also very helpful for
> Linux-on-Hyper-V development as we can see how Windows-on-Hyper-V
> behaves :-)
>
> > In addition, could you also post benchmark results for this PV IPI
> > optimization, even though it also fixes the bug you mentioned above?
>
> I'd love to get to know how to trigger mass IPIs in Windows so a
> benchmark can be performed...

I'm also not sure about Windows. I used https://lkml.org/lkml/2017/12/19/141
as a Linux kernel module to evaluate broadcast IPI performance in a Linux
guest last year. :)

> > I can post a variant for Linux guest PV IPI if it also makes
> > sense. :)
>
> With x2apic support I'm actually not sure. Maybe configurations with
> a very large number of vCPUs and IPIs going to > 256 vCPUs can benefit
> from a 'single hypercall' solution.

Each cluster in x2APIC cluster mode supports just 16 unique logical IDs, so
I think a Linux guest can also benefit as long as the VM has more than 16
vCPUs. I will cook up patches to evaluate it. :)

Regards,
Wanpeng Li
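The 16-IDs limit falls out of how x2APIC cluster mode builds the logical destination: the low 4 bits of the x2APIC ID select one of 16 bit positions inside a cluster and the remaining bits select the cluster, so a single cluster-addressed IPI can name at most 16 CPUs. A small sketch of the derivation:

#include <stdint.h>

/* x2APIC cluster mode: LDR = (cluster << 16) | (1 << position), with
 * cluster = x2apic_id >> 4 and position = x2apic_id & 0xf. Only 16
 * distinct destination bits exist per cluster, which is why a single
 * PV-IPI hypercall can still win once a VM has more than 16 vCPUs.
 */
static inline uint32_t x2apic_logical_id(uint32_t x2apic_id)
{
	uint32_t cluster  = x2apic_id >> 4;
	uint32_t position = x2apic_id & 0xf;

	return (cluster << 16) | (1u << position);
}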
Re: [PATCH V3 0/4] KVM/x86/hyper-V: Introduce PV guest address space mapping flush support
On Fri, 20 Jul 2018 at 16:32, Paolo Bonzini wrote:
>
> On 20/07/2018 05:58, KY Srinivasan wrote:
> >
> >> -----Original Message-----
> >> From: Tianyu Lan
> >> Sent: Thursday, July 19, 2018 1:40 AM
> >> Cc: Tianyu Lan ; de...@linuxdriverproject.org;
> >> Haiyang Zhang ; h...@zytor.com;
> >> k...@vger.kernel.org; KY Srinivasan ; linux-
> >> ker...@vger.kernel.org; mi...@redhat.com; pbonz...@redhat.com;
> >> rkrc...@redhat.com; Stephen Hemminger ;
> >> t...@linutronix.de; x...@kernel.org; Michael Kelley (EOSG)
> >> ; vkuzn...@redhat.com
> >> Subject: [PATCH V3 0/4] KVM/x86/hyper-V: Introduce PV guest address
> >> space mapping flush support
> >>
> >> Hyper-V provides a para-virtualization hypercall
> >> HvFlushGuestPhysicalAddressSpace to flush nested VM address space
> >> mappings in the L1 hypervisor and reduce the overhead of flushing EPT
> >> TLBs among vCPUs. The traditional way is to send IPIs to all affected
> >> vCPUs and execute INVEPT on each vCPU, which triggers several vmexits
> >> for IPI and INVEPT emulation. The PV hypercall can flush the specified
> >> EPT table on all vCPUs via one single hypercall.
> >>
> >> Change since v2:
> >>    - Make ept_pointers_match a tristate: "check", "match" and "mismatch".
> >>      Set "check" in vmx_set_cr3(), check all ept table pointers in
> >>      hv_remote_flush_tlb() and call the hypercall when all ept pointers
> >>      are the same.
> >>    - Rename kvm_arch_hv_flush_remote_tlb to kvm_arch_flush_remote_tlb and
> >>      rename kvm_x86_ops->hv_tlb_remote_flush to kvm_x86_ops->tlb_remote_flush
> >>    - Fix issue that ignored updating tlbs_dirty during calling
> >>      kvm_arch_flush_remote_tlbs()
> >>    - Merge patch "KVM/VMX: Add identical ept table pointer check" and
> >>      patch "KVM/x86: Add tlb_remote_flush callback support for vmx"
> >>
> >> Change since v1:
> >>    - Fix compilation error for non-x86 platform.
> >>    - Use ept_pointers_match to check the condition of identical ept
> >>      table pointers and get the ept pointer from struct vcpu_vmx->ept_pointer.
> >>    - Add hyperv_nested_flush_guest_mapping ftrace support
> >>
> >> Lan Tianyu (4):
> >>   X86/Hyper-V: Add flush HvFlushGuestPhysicalAddressSpace hypercall support
> >>   X86/Hyper-V: Add hyperv_nested_flush_guest_mapping ftrace support
> >>   KVM: Add tlb remote flush callback in kvm_x86_ops.
> >>   KVM/x86: Add tlb_remote_flush callback support for vmx
> >>
> >>  arch/x86/hyperv/Makefile            |  2 +-
> >>  arch/x86/hyperv/nested.c            | 67 ++
> >>  arch/x86/include/asm/hyperv-tlfs.h  |  8 +
> >>  arch/x86/include/asm/kvm_host.h     | 11 ++
> >>  arch/x86/include/asm/mshyperv.h     |  2 ++
> >>  arch/x86/include/asm/trace/hyperv.h | 14
> >>  arch/x86/kvm/vmx.c                  | 72 -
> >>  include/linux/kvm_host.h            |  7
> >>  virt/kvm/kvm_main.c                 |  3 +-
> >>  9 files changed, 183 insertions(+), 3 deletions(-)
> >>  create mode 100644 arch/x86/hyperv/nested.c
> >
> > Acked-by: K. Y. Srinivasan
>
> Queued, thanks!

My CONFIG_HYPERV is disabled, and there is a warning when compiling kvm/queue:

warning: ‘hv_remote_flush_tlb’ defined but not used [-Wunused-function]
 static int hv_remote_flush_tlb(struct kvm *kvm)

Regards,
Wanpeng Li
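A sketch of the usual fix for this class of warning, assuming the helper is only referenced from Hyper-V-conditional code; whether the eventual kvm/queue cleanup took exactly this shape is an assumption, and vmx_maybe_enable_remote_flush() below is a made-up name, not a function from the series.

/* Fence the Hyper-V-only helper and the code that registers it, so a
 * !CONFIG_HYPERV build never contains an unused static function.
 */
#if IS_ENABLED(CONFIG_HYPERV)
static int hv_remote_flush_tlb(struct kvm *kvm)
{
	/* ... hypercall-based EPT flush as in the series above ... */
	return 0;
}
#endif

static void vmx_maybe_enable_remote_flush(struct kvm_x86_ops *ops)
{
#if IS_ENABLED(CONFIG_HYPERV)
	/* Only wire up the callback when running on Hyper-V with the
	 * guest-mapping-flush enlightenment available.
	 */
	if (ms_hyperv.nested_features & HV_X64_NESTED_GUEST_MAPPING_FLUSH)
		ops->tlb_remote_flush = hv_remote_flush_tlb;
#endif
}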