On Sat, Nov 08, 2014 at 08:44:42AM -0800, Andy Lutomirski wrote:
> On Sat, Nov 8, 2014 at 8:00 AM, Andy Lutomirski <l...@amacapital.net> wrote:
> > On Nov 8, 2014 4:01 AM, "Gleb Natapov" <g...@kernel.org> wrote:
> >>
> >> On Fri, Nov 07, 2014 at 09:59:55AM -0800, Andy Lutomirski wrote:
> >> > On Thu, Nov 6, 2014 at 11:17 PM, Paolo Bonzini <pbonz...@redhat.com> 
> >> > wrote:
> >> > >
> >> > >
> >> > > On 07/11/2014 07:27, Andy Lutomirski wrote:
> >> > >> Is there an easy benchmark that's sensitive to the time it takes to
> >> > >> round-trip from userspace to guest and back to userspace?  I think I
> >> > >> may have a big speedup.
> >> > >
> >> > > The simplest is vmexit.flat from
> >> > > git://git.kernel.org/pub/scm/virt/kvm/kvm-unit-tests.git
> >> > >
> >> > > Run it with "x86/run x86/vmexit.flat" and look at the inl_from_qemu
> >> > > benchmark.
> >> >
> >> > Thanks!
> >> >
> >> > That test case is slower than I expected.  I think my change is likely
> >> > to save somewhat under 100ns, which is only a couple percent.  I'll
> >> > look for more impressive improvements.
> >> >
> >> > On a barely related note, in the process of poking around with this
> >> > test, I noticed:
> >> >
> >> >     /* On ept, can't emulate nx, and must switch nx atomically */
> >> >     if (enable_ept && ((vmx->vcpu.arch.efer ^ host_efer) & EFER_NX)) {
> >> >         guest_efer = vmx->vcpu.arch.efer;
> >> >         if (!(guest_efer & EFER_LMA))
> >> >             guest_efer &= ~EFER_LME;
> >> >         add_atomic_switch_msr(vmx, MSR_EFER, guest_efer, host_efer);
> >> >         return false;
> >> >     }
> >> >
> >> >     return true;
> >> >
> >> > This heuristic seems wrong to me.  wrmsr is serializing and therefore
> >> > extremely slow, whereas I imagine that, on CPUs that support it,
> >> > atomically switching EFER ought to be reasonably fast.
> >> >
> >> > Indeed, changing vmexit.c to disable NX (thereby forcing atomic EFER
> >> > switching, and having no other relevant effect that I've thought of)
> >> > speeds up inl_from_qemu by ~30% on Sandy Bridge.  Would it make sense
> >> > to always use atomic EFER switching, at least when
> >> > cpu_has_load_ia32_efer?
> >> >
> >> The idea behind the current logic is that we want to avoid writing an MSR
> >> at all for lightweight exits (those that do not exit to userspace). So
> >> if the NX bit is the same for host and guest we can avoid writing EFER on
> >> exit and run with the guest's EFER in the kernel. Only if a userspace exit
> >> is required do we write the host's MSR back, and only if the guest and host
> >> MSRs differ, of course. Which bit has to be restored on userspace exit
> >> in the vmexit tests? Is it SCE? What if you set it instead of clearing NXE?
> >
> > I don't understand.  AFAICT there are really only two cases: EFER
> > switched atomically using the best available mechanism on the host
> > CPU, or EFER switched on userspace exit.  I think there's a
> > theoretical third possibility: if the guest and host EFER match, then
> > EFER doesn't need to be switched at all, but this doesn't seem to be
> > implemented.
> 
> I got this part wrong.  It looks like the user return notifier is
> smart enough not to set EFER at all if the guest and host values
> match.  Indeed, with stock KVM, if I modify vmexit.c to have exactly
> the same EFER as the host (NX and SCE both set), then it runs quickly.
> But I get almost exactly the same performance if NX is clear, which is
> the case where the built-in entry/exit switching is used.
> 
What's the performance difference?

> Admittedly, most guests probably do match the host, so this effect may
> be rare in practice.  But possibly the code should be changed either
> the way I patched it (always use the built-in switching if available)
> or to only do it if the guest and host EFER values differ.  ISTM that,
> on modern CPUs, switching EFER on return to userspace is always a big
> loss.
We should be careful not to optimise for the wrong case. In the common case
userspace exits are extremely rare. Try tracing common workloads with a
Linux guest. Windows as a guest has its share of userspace exits, but
this is due to the lack of PV timer support (was that fixed already?).
So if switching EFER has measurable overhead, doing it on each exit is a
net loss.

> 
> If neither change is made, then maybe the test should change to set
> SCE so that it isn't so misleadingly slow.
>
The purpose of the vmexit test is to show us various overheads, so why not
measure the EFER switch overhead by having two tests, one with an EFER equal
to the host's and another with a different EFER, instead of hiding it?


--
                        Gleb.
--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
