On 04/03/2016 08:12, Lan Tianyu wrote:
> > >   /*
> > > -  * wmb: make sure everyone sees our modifications to the page tables
> > > -  * rmb: make sure we see changes to vcpu->mode
> > 
> > You want to leave the comment explaining the memory barriers, and note
> > that kvm_flush_remote_tlbs() contains the smp_mb().
> 
> That sounds more reasonable. Will update. Thanks.

In fact, the reason for kvm_flush_remote_tlbs()'s barrier is exactly
what was in this comment.  So you can:

1) add a comment to kvm_flush_remote_tlbs like:

        /*
         * We want to publish modifications to the page tables before reading
         * mode.  Pairs with a memory barrier in arch-specific code.
         * - x86: smp_mb__after_srcu_read_unlock in vcpu_enter_guest.
         * - powerpc: smp_mb in kvmppc_prepare_to_enter.
         */

2) add a comment to vcpu_enter_guest and kvmppc_prepare_to_enter, saying
that the memory barrier also orders the write to ->mode against any reads
of the page tables done while the VCPU is running.  In other words, on
entry a single memory barrier achieves two purposes (write ->mode before
reading requests, write ->mode before reading the page tables); see the
sketch below.
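
On the x86 side that could look like the following (just a sketch; the
lines around the barrier are quoted from memory and may not match your
tree exactly):

        vcpu->mode = IN_GUEST_MODE;

        srcu_read_unlock(&vcpu->kvm->srcu, vcpu->srcu_idx);

        /*
         * We should set ->mode before checking ->requests.  The same
         * barrier also orders the write to ->mode against any reads
         * of the page tables done while the VCPU is running; see the
         * comment in kvm_flush_remote_tlbs.
         */
        smp_mb__after_srcu_read_unlock();

kvmppc_prepare_to_enter would get a similar comment next to its smp_mb.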

The same should be true in kvm_flush_remote_tlbs().  So you may
investigate removing the explicit barrier from kvm_flush_remote_tlbs,
because kvm_make_all_cpus_request already includes a memory barrier.  As
Thomas suggested, leave a comment in kvm_flush_remote_tlbs() saying which
memory barrier you are relying on and for what.
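
Something along these lines (a sketch only; the tlbs_dirty/cmpxchg
details are from memory and may differ in your tree):

        void kvm_flush_remote_tlbs(struct kvm *kvm)
        {
                long dirty_count = kvm->tlbs_dirty;

                /*
                 * We rely on the memory barrier in
                 * kvm_make_all_cpus_request() to publish our page table
                 * modifications before vcpu->mode is read; no separate
                 * smp_mb() is needed here.
                 */
                if (kvm_make_all_cpus_request(kvm, KVM_REQ_TLB_FLUSH))
                        ++kvm->stat.remote_tlb_flush;
                cmpxchg(&kvm->tlbs_dirty, dirty_count, 0);
        }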

And finally, the memory barrier in kvm_make_all_cpus_request can become
smp_mb__after_atomic, which is free on x86: atomic operations such as
the set_bit done by kvm_make_request already imply a full barrier there,
so only a compiler barrier is left.
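
That is (sketch; the surrounding loop body is from memory):

        kvm_for_each_vcpu(i, vcpu, kvm) {
                kvm_make_request(req, vcpu);
                cpu = vcpu->cpu;

                /*
                 * Set the request bit before reading vcpu->mode below.
                 * kvm_make_request() is an atomic RMW, so
                 * smp_mb__after_atomic() is enough, and on x86 it is
                 * just a compiler barrier.
                 */
                smp_mb__after_atomic();

                if (cpus != NULL && cpu != -1 && cpu != me &&
                    kvm_vcpu_exiting_guest_mode(vcpu) != OUTSIDE_GUEST_MODE)
                        cpumask_set_cpu(cpu, cpus);
        }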

Of course, all this should be done in at least three separate patches.

Thanks!

Paolo