On Sun, Mar 3, 2013 at 10:35 AM, Abel Gordon <ab...@il.ibm.com> wrote:
>
>
> Stefan Hajnoczi <stefa...@gmail.com>  wrote on 01/03/2013 12:54:54 PM:
>
>> On Thu, Feb 28, 2013 at 08:20:08PM +0200, Abel Gordon wrote:
>> > Stefan Hajnoczi <stefa...@gmail.com> wrote on 28/02/2013 04:43:04 PM:
>> > > I think extending and tuning the existing mechanisms is the way
>> > > to go.
>> > > I don't see obvious advantages other than reducing context switches.
>> >
>> > Maybe it is worth checking...
>> > We did experiments using vhost-net and vhost-blk. We measured and
>> > compared the traditional model (kernel thread per VM/virtual device)
>> > to the shared-thread model with fine-grained I/O scheduling (a single
>> > kernel thread used to serve multiple VMs). We noticed improvements of
>> > up to 2.5x in throughput and almost half the latency when running up
>> > to 14 VMs.
>>
>> Can you post patches?
>
> We will publish the code soon, but note that the patches are for the
> vhost kernel back-end and not for the qemu user-space back-end.

That's fine.  The only difference the codebase makes is which mailing list:

 * qemu-devel@nongnu.org - QEMU userspace
 * k...@vger.kernel.org - kvm kernel module
 * virtualizat...@lists.linuxfoundation.org - broader-scope Linux
kernel virtualization (vhost, virtio, hyperv drivers, etc.)

Stefan
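
[Editor's note: for readers unfamiliar with the shared-thread model
mentioned in the quoted experiment above, here is a minimal userspace
sketch of the general idea -- one worker thread draining the work
queues of several "devices" instead of one kernel thread per
VM/virtual device. The device count, queue layout, busy-poll loop and
round-robin policy are illustrative assumptions only; this is not the
vhost-net/vhost-blk patches being discussed.]

/*
 * Illustrative sketch only -- NOT the vhost patches discussed above.
 * One shared worker thread round-robins over the pending work of
 * several "devices", rather than spawning a thread per device.
 * The real implementation lives in the kernel and would sleep/wake
 * on notifications instead of busy-polling as done here.
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>

#define NDEVS 4                       /* pretend we serve 4 virtual devices */

struct dev_queue {
    pthread_mutex_t lock;
    int pending;                      /* stand-in for queued I/O requests */
};

static struct dev_queue devs[NDEVS];
static atomic_bool stop;

/* The single shared worker: polls every device queue in turn. */
static void *shared_worker(void *arg)
{
    (void)arg;
    while (!atomic_load(&stop)) {
        for (int i = 0; i < NDEVS; i++) {
            pthread_mutex_lock(&devs[i].lock);
            if (devs[i].pending > 0) {
                devs[i].pending--;    /* "process" one request */
                printf("worker handled a request for dev %d\n", i);
            }
            pthread_mutex_unlock(&devs[i].lock);
        }
    }
    return NULL;
}

int main(void)
{
    pthread_t worker;

    for (int i = 0; i < NDEVS; i++) {
        pthread_mutex_init(&devs[i].lock, NULL);
        devs[i].pending = 2;          /* seed some fake work */
    }

    pthread_create(&worker, NULL, shared_worker, NULL);

    sleep(1);                         /* let the worker drain the queues */
    atomic_store(&stop, true);
    pthread_join(worker, NULL);
    return 0;
}

[Build with "gcc -pthread sketch.c"; the fine-grained I/O scheduling
referred to above would replace the naive round-robin loop.]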
