Hi,

> -----Original Message-----
> From: Michael S. Tsirkin [mailto:m...@redhat.com]
> Sent: Monday, May 12, 2014 6:31 PM
>
> vhost does everything under a VQ lock.
> I think the RCU used for VHOST_SET_MEM_TABLE can be replaced with
> taking and releasing the VQ locks.
>
> Does the below solve the problem for you
> (warning: untested, sorry, busy with other bugs right now)?
>
> Signed-off-by: Michael S. Tsirkin <m...@redhat.com>
>
> ---
>
> diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
> index 78987e4..df2e3eb 100644
> --- a/drivers/vhost/vhost.c
> +++ b/drivers/vhost/vhost.c
> @@ -593,6 +593,7 @@ static long vhost_set_memory(struct vhost_dev *d, struct vhost_memory __user *m)
>  {
>  	struct vhost_memory mem, *newmem, *oldmem;
>  	unsigned long size = offsetof(struct vhost_memory, regions);
> +	int i;
>
>  	if (copy_from_user(&mem, m, size))
>  		return -EFAULT;
> @@ -619,7 +620,14 @@ static long vhost_set_memory(struct vhost_dev *d, struct vhost_memory __user *m)
>  	oldmem = rcu_dereference_protected(d->memory,
>  					   lockdep_is_held(&d->mutex));
>  	rcu_assign_pointer(d->memory, newmem);
> -	synchronize_rcu();
> +
> +	/* All memory accesses are done under some VQ mutex.
> +	 * So below is a faster equivalent of synchronize_rcu().
> +	 */
> +	for (i = 0; i < d->nvqs; ++i) {
> +		mutex_lock(&d->vqs[i]->mutex);
> +		mutex_unlock(&d->vqs[i]->mutex);
> +	}
>  	kfree(oldmem);
>  	return 0;
>  }

Thanks for your advice.
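If I understand the invariant correctly, every reader dereferences
d->memory while holding some VQ mutex, so once vhost_set_memory() has
taken and released every VQ mutex, no reader can still be using the old
table. A reader path would then look roughly like this (my own
simplified sketch to check my understanding; handle_vq_sketch() is made
up, not actual vhost code):

static void handle_vq_sketch(struct vhost_virtqueue *vq)
{
	struct vhost_memory *mem;

	mutex_lock(&vq->mutex);
	/* While vq->mutex is held, a concurrent vhost_set_memory()
	 * cannot complete its lock/unlock pass over the VQs, so the
	 * table fetched here stays valid until the unlock below.
	 */
	mem = rcu_dereference_protected(vq->dev->memory,
					lockdep_is_held(&vq->mutex));
	/* ... translate guest addresses through mem ... */
	mutex_unlock(&vq->mutex);
}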
I suppose taking and releasing the mutexes should generally be faster
than synchronize_rcu(), which has to wait out a grace period (that is,
context switches on every CPU).

One thing I am not sure about: I think d->mutex should get the same
treatment, since some code paths take only that mutex directly, without
any VQ mutex, when they access the memory table. Is this right?

I'll try this approach, thanks.

Best regards,
-Gonglei
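P.S. To convince myself the pattern is safe, I wrote a minimal
userspace analogue. Everything below (struct table, NREADERS,
replace_table()) is made up for illustration, and the memory-ordering
details that rcu_assign_pointer()/rcu_dereference() would handle in the
kernel are glossed over:

#include <pthread.h>
#include <stdlib.h>

#define NREADERS 4

struct table { int entries[16]; };

static pthread_mutex_t reader_mutex[NREADERS] = {
	PTHREAD_MUTEX_INITIALIZER, PTHREAD_MUTEX_INITIALIZER,
	PTHREAD_MUTEX_INITIALIZER, PTHREAD_MUTEX_INITIALIZER,
};
static struct table *current_table;

/* Reader: only ever dereferences current_table under its own mutex. */
static void reader(int id)
{
	pthread_mutex_lock(&reader_mutex[id]);
	struct table *t = current_table;
	(void)t; /* ... look up entries in t ... */
	pthread_mutex_unlock(&reader_mutex[id]);
}

/* Writer: publish the new table, then take and release every reader
 * mutex. This plays the role of synchronize_rcu() in the patch: the
 * pass cannot finish until every reader that might have fetched the
 * old table has unlocked, so freeing it afterwards is safe.
 */
static void replace_table(struct table *newt)
{
	struct table *oldt = current_table;

	current_table = newt;
	for (int i = 0; i < NREADERS; i++) {
		pthread_mutex_lock(&reader_mutex[i]);
		pthread_mutex_unlock(&reader_mutex[i]);
	}
	free(oldt);
}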