On Tue, Jan 30, 2018 at 06:36:20PM -0500, Paolo Bonzini wrote:
> On 30/01/2018 04:00, David Woodhouse wrote:
> > I believe Ashok sent you a change which made us do IBPB on *every*
> > vmexit; I don't think we need that. It's currently done in vcpu_load()
> > which means we'll definitely have done it between running one vCPU and
> > the next, and when vCPUs are pinned we basically never need to do it.
> > 
> > We know that VMM (e.g. qemu) userspace could be vulnerable to attacks
> > from guest ring 3, because there is no flush between the vmexit and the
> > host kernel "returning" to the userspace thread. Doing a full IBPB on
> > *every* vmexit would protect from that, but it's overkill. If that's
> > the reason, let's come up with something better.
> 
> Certainly not every vmexit!  But doing it on every userspace vmexit and
> every sched_out would not be *that* bad.

Right, agreed. We discussed the different scenarios where doing IBPB
on VMexit would help, and decided it's really not required on every exit.

One obvious case is when there is a VMexit and we return to the Qemu
process (without a real context switch): does that path need to be
protected from a BTB poisoned by the guest?

If Qemu is protected by !dumpable/retpoline, that should cover the return
to userspace. The VM->VM case is covered by the IBPB we already do at
vcpu_load() time.
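
To make that concrete, here's a rough sketch (not the actual KVM code) of
what the vcpu_load()-time barrier amounts to. The per-cpu last_vcpu
tracking variable is hypothetical, and I'm assuming the
indirect_branch_prediction_barrier() helper from the in-flight mitigation
series:

	#include <linux/kvm_host.h>
	#include <linux/percpu.h>
	#include <asm/nospec-branch.h>

	/* Hypothetical per-cpu tracking of the vCPU last run here. */
	static DEFINE_PER_CPU(struct kvm_vcpu *, last_vcpu);

	static void vcpu_load_ibpb(struct kvm_vcpu *vcpu)
	{
		struct kvm_vcpu *prev = __this_cpu_read(last_vcpu);

		/*
		 * Flush indirect branch predictor state only on a real
		 * vCPU switch; a VMexit that resumes the same vCPU on
		 * the same physical CPU needs no barrier, so pinned
		 * vCPUs basically never pay for it.
		 */
		if (prev != vcpu)
			indirect_branch_prediction_barrier();

		__this_cpu_write(last_vcpu, vcpu);
	}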

Cheers,
Ashok

> 
> We try really hard to avoid userspace vmexits for everything remotely
> critical to performance (the main exception that's left is the PMTIMER
> I/O port, that Windows likes to access quite a lot), so they shouldn't
> happen that often.
