On 12/06/2011 11:42 PM, Sridhar Samudrala wrote:
On 12/6/2011 5:15 AM, Stefan Hajnoczi wrote:
On Tue, Dec 6, 2011 at 10:21 AM, Jason Wang <jasow...@redhat.com> wrote:
On 12/06/2011 05:18 PM, Stefan Hajnoczi wrote:
On Tue, Dec 6, 2011 at 6:33 AM, Jason Wang <jasow...@redhat.com> wrote:
On 12/05/2011 06:55 PM, Stefan Hajnoczi wrote:
On Mon, Dec 5, 2011 at 8:59 AM, Jason Wang <jasow...@redhat.com> wrote:
The vcpus are just threads and may not be bound to physical CPUs, so
what is the big picture here?  Is the guest even in the position to
set the best queue mappings today?

I'm not sure it could publish the best mapping, but the idea is to make
sure that the packets of a flow are handled by the same guest vcpu, and
possibly by the same vhost thread, in order to eliminate packet
reordering and lock contention. But this assumption does not account
for the bouncing of vhost or vcpu threads, which would also affect the
result.
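To make that idea concrete, here is a minimal userspace sketch of
hash-based flow steering. This is not the actual virtio-net or tun
code; the queue count, table size, toy hash and all names (flow_hash,
steering_table, select_queue) are made up for illustration. A flow's
4-tuple is hashed and the bucket maps to a fixed queue, so every packet
of the flow keeps hitting the same queue and hence the same vcpu.

/*
 * Illustrative flow steering: same flow -> same hash bucket -> same
 * queue, so its packets stay on one vcpu (and ideally one vhost thread).
 */
#include <stdint.h>
#include <stdio.h>

#define NUM_QUEUES   4      /* e.g. one RX/TX queue pair per vcpu */
#define TABLE_SIZE   256    /* toy steering table with 256 buckets */

static uint8_t steering_table[TABLE_SIZE];

/* Toy 4-tuple hash; a real implementation would use e.g. Toeplitz or jhash. */
static uint32_t flow_hash(uint32_t saddr, uint32_t daddr,
                          uint16_t sport, uint16_t dport)
{
    uint32_t h = saddr ^ daddr;
    h ^= ((uint32_t)sport << 16) | dport;
    h ^= h >> 16;
    return h;
}

/* The guest could fill the table, e.g. spreading buckets across vcpus. */
static void init_steering_table(void)
{
    int i;
    for (i = 0; i < TABLE_SIZE; i++)
        steering_table[i] = i % NUM_QUEUES;
}

/* Pick the queue for a packet; a given flow never moves between queues. */
static int select_queue(uint32_t saddr, uint32_t daddr,
                        uint16_t sport, uint16_t dport)
{
    return steering_table[flow_hash(saddr, daddr, sport, dport) % TABLE_SIZE];
}

int main(void)
{
    init_steering_table();
    /* Two packets of the same TCP flow land on the same queue. */
    printf("queue = %d\n", select_queue(0x0a000001, 0x0a000002, 12345, 80));
    printf("queue = %d\n", select_queue(0x0a000001, 0x0a000002, 12345, 80));
    return 0;
}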
Okay, this is why I'd like to know what the big picture here is.  What
solution are you proposing?  How are we going to have everything from
the guest application, guest kernel, host threads, and host NIC driver
play along so that we get the right steering up the entire stack?  I
think there needs to be an answer to that before changing virtio-net
to add any steering mechanism.


Yes. Also, the current model of one vhost thread per VM interface
doesn't help with packet steering all the way from the guest to the
host physical NIC.

I think we need to have per-CPU vhost thread(s) that can handle packets
to/from the physical NIC's TX/RX queues. Currently we have a single
vhost thread per VM interface that handles all the packets from the
various flows coming in from a multi-queue physical NIC.

Even if we have per-CPU work threads, only one socket is used to queue
the packets, so a multi-queue (multi-socket) tap/macvtap is still
needed.
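As a rough illustration of what a multi-queue (multi-socket) tap would
look like from userspace, the sketch below opens one fd per queue on a
single tap device using the IFF_MULTI_QUEUE tun flag (support of the
kind being discussed here; the device name and queue count are
arbitrary). Each per-CPU vhost/work thread could then be handed its own
queue fd, i.e. its own socket, instead of all flows funnelling through
one.

/*
 * Open NUM_QUEUES descriptors on one tap device; each fd is a separate
 * queue that an independent worker thread could service.
 */
#include <fcntl.h>
#include <net/if.h>
#include <linux/if_tun.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>

#define NUM_QUEUES 4

int main(void)
{
    int fds[NUM_QUEUES];
    struct ifreq ifr;
    int i;

    memset(&ifr, 0, sizeof(ifr));
    ifr.ifr_flags = IFF_TAP | IFF_NO_PI | IFF_MULTI_QUEUE;
    strncpy(ifr.ifr_name, "tap0", IFNAMSIZ - 1);

    /* One fd per queue; each could be passed to its own vhost thread. */
    for (i = 0; i < NUM_QUEUES; i++) {
        fds[i] = open("/dev/net/tun", O_RDWR);
        if (fds[i] < 0 || ioctl(fds[i], TUNSETIFF, &ifr) < 0) {
            perror("tap queue setup");
            return 1;
        }
    }

    printf("opened %d queues on %s\n", NUM_QUEUES, ifr.ifr_name);

    for (i = 0; i < NUM_QUEUES; i++)
        close(fds[i]);
    return 0;
}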

Thanks
Sridhar

