On Sun, Mar 03, 2013 at 11:35:27AM +0200, Abel Gordon wrote:
> Stefan Hajnoczi <stefa...@gmail.com> wrote on 01/03/2013 12:54:54 PM:
>
> > On Thu, Feb 28, 2013 at 08:20:08PM +0200, Abel Gordon wrote:
> > > Stefan Hajnoczi <stefa...@gmail.com> wrote on 28/02/2013 04:43:04 PM:
> > > >
> > > > I think extending and tuning the existing mechanisms is the way to go.
> > > > I don't see obvious advantages other than reducing context switches.
> > >
> > > Maybe it is worth checking...
> > > We did experiments using vhost-net and vhost-blk. We measured and
> > > compared the traditional model (kernel thread per VM/virtual device)
> > > to the shared-thread model with fine-grained I/O scheduling (single
> > > kernel thread used to serve multiple VMs). We noticed improvements
> > > up-to 2.5x in throughput and almost half the latency when running
> > > up-to 14 VMs.
> >
> > Can you post patches?
>
> We will publish the code soon but note the patches are for vhost
> kernel back-end and not for the qemu user-space back-end.
Yes, this is one of the fields where the asynchronous interface of vhost
could be helpful, abstracting the threading model away from the application.

The main challenge with sharing threads is handling things like CPU limits
and swap access: these normally put the current thread to sleep and let
another thread run, but if several VMs share one thread, they all stall
together. It could be solved by detecting such conditions and moving the
other VMs to per-VM threads, but I haven't seen such patches yet (a toy
sketch of the hazard is appended below).

> > Also, I wonder if you have time to do a presentation/discussion session
> > so we can get the ball rolling and more people exposed to your approach.
> > There is a weekly QEMU Community Call which we can use as the forum.
>
> Sure. I'll send you a separate email to schedule the
> presentation/discussion.
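To make the shared-thread hazard above concrete, here is a minimal
user-space sketch in plain C with pthreads. It is not the vhost kernel
implementation, and all identifiers in it (vm_work, queue_work,
shared_worker, handle_kick) are invented for the example. One worker
thread drains a queue fed by several VMs; any blocking operation inside a
handler stalls every VM on that thread, which is exactly what a
detect-and-fall-back-to-per-VM-threads scheme would have to catch.

/*
 * Toy illustration only -- NOT the vhost kernel code.  A single shared
 * worker thread drains work items queued by several VMs.  All names
 * here (vm_work, shared_worker, queue_work, ...) are made up for the
 * example.
 */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

struct vm_work {
    int vm_id;                         /* which VM queued this item   */
    void (*handle)(struct vm_work *);  /* e.g. process a vq kick      */
    struct vm_work *next;
};

static struct vm_work *queue_head;     /* simple LIFO, enough here    */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t cond = PTHREAD_COND_INITIALIZER;

/* Called from each VM's context to submit work to the shared worker. */
static void queue_work(struct vm_work *w)
{
    pthread_mutex_lock(&lock);
    w->next = queue_head;
    queue_head = w;
    pthread_cond_signal(&cond);
    pthread_mutex_unlock(&lock);
}

/* The single shared worker: serves all VMs from one loop. */
static void *shared_worker(void *arg)
{
    (void)arg;
    for (;;) {
        struct vm_work *w;

        pthread_mutex_lock(&lock);
        while (!queue_head)
            pthread_cond_wait(&cond, &lock);
        w = queue_head;
        queue_head = w->next;
        pthread_mutex_unlock(&lock);

        /*
         * If this call sleeps -- guest page swapped out, cgroup CPU
         * throttling, etc. -- items queued by *other* VMs sit here
         * unserved until it returns.  With one thread per VM only that
         * VM would be delayed; the "detect and move to per-VM threads"
         * idea above would hand long-blocking VMs their own thread.
         */
        w->handle(w);
    }
    return NULL;
}

static void handle_kick(struct vm_work *w)
{
    printf("served virtqueue kick for VM %d\n", w->vm_id);
}

int main(void)
{
    pthread_t worker;
    struct vm_work w1 = { .vm_id = 1, .handle = handle_kick };
    struct vm_work w2 = { .vm_id = 2, .handle = handle_kick };

    pthread_create(&worker, NULL, shared_worker, NULL);
    queue_work(&w1);
    queue_work(&w2);
    sleep(1);            /* let the worker drain the queue, then exit */
    return 0;
}

Builds with: cc -pthread shared_worker.c (the file name is arbitrary).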