Marcelo Tosatti wrote:
Get rid of the kvm->lock dependency in the coalesced_mmio methods. Use an atomic variable instead to guarantee that only one vcpu is batching
data into the ring at a given time.

Signed-off-by: Marcelo Tosatti <mtosa...@redhat.com>

Index: kvm-irqlock/virt/kvm/coalesced_mmio.c
===================================================================
--- kvm-irqlock.orig/virt/kvm/coalesced_mmio.c
+++ kvm-irqlock/virt/kvm/coalesced_mmio.c
@@ -26,9 +26,12 @@ static int coalesced_mmio_in_range(struc
        if (!is_write)
                return 0;
-	/* kvm->lock is taken by the caller and must be not released before
-         * dev.read/write
-         */
+       /*
+        * Some other vcpu might be batching data into the ring,
+        * fallback to userspace. Ordering not our problem.
+        */
+       if (!atomic_add_unless(&dev->in_use, 1, 1))
+               return 0;

Ordering with simultaneous writes is indeed not our problem, but the ring may contain ordered writes (even by the same vcpu!).

Suggest using our own lock here. in_use is basically a homemade lock; better to use the factory-made ones, which come with a warranty.


--
error compiling committee.c: too many arguments to function
