On Tue, Aug 11, 2009 at 08:06:02PM -0400, Gregory Haskins wrote:
Michael S. Tsirkin wrote:
What it is: vhost net is a character device that can be used to reduce
the number of system calls involved in virtio networking.
Existing virtio net code is used in the guest without modification.
On Tue, Aug 11, 2009 at 08:06:02PM -0400, Gregory Haskins wrote:
diff --git a/include/linux/miscdevice.h b/include/linux/miscdevice.h
index 0521177..781a8bb 100644
--- a/include/linux/miscdevice.h
+++ b/include/linux/miscdevice.h
@@ -30,6 +30,7 @@
#define HPET_MINOR 228
Michael S. Tsirkin wrote:
On Tue, Aug 11, 2009 at 07:49:37PM -0400, Gregory Haskins wrote:
Michael S. Tsirkin wrote:
This implements vhost: a kernel-level backend for virtio,
The main motivation for this work is to reduce virtualization
overhead for virtio by removing system calls on data
Michael S. Tsirkin wrote:
On Wed, Aug 12, 2009 at 07:56:05AM -0400, Gregory Haskins wrote:
Michael S. Tsirkin wrote:
snip
1. use a dedicated network interface with SRIOV, program mac to match
that of guest (for testing, you can set promisc mode, but that is
bad for performance)
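The mac programming mentioned in step 1 could be done with iproute2; a minimal sketch, assuming an SRIOV-capable NIC exposed as eth0 (device names and the mac address are illustrative, and both commands need root):

```shell
# Program virtual function 0 of the SRIOV NIC with the guest's mac:
ip link set dev eth0 vf 0 mac 52:54:00:12:34:56

# Fallback for testing on a plain NIC: promiscuous mode
# (bad for performance, as noted above):
ip link set dev eth0 promisc on
```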
On Wednesday 12 August 2009, Gregory Haskins wrote:
Are you saying SRIOV is a requirement, and I can either program the
SRIOV adapter with a mac or use promisc? Or are you saying I can use
SRIOV+programmed mac OR a regular nic + promisc (with a perf penalty).
SRIOV is not a requirement.
On Tuesday 11 August 2009, Paul Congdon (UC Davis) wrote:
The patch from Eric Biederman to allow macvlan to bridge between
its slave ports is at
http://kerneltrap.org/mailarchive/linux-netdev/2009/3/9/5125774
Looking through the discussions here, it does not seem as if a
On Wed, Aug 12, 2009 at 09:01:35AM -0400, Gregory Haskins wrote:
I think I understand what your comment above meant: You don't need to
do synchronize_rcu() because you can flush the workqueue instead to
ensure that all readers have completed.
Yes.
But if that's true, to me, the
On Wednesday 12 August 2009, Michael S. Tsirkin wrote:
If I understand it correctly, you can at least connect a veth pair
to a bridge, right? Something like
       veth0 - veth1 - vhost - guest 1
eth0 - br0 -|
       veth2 - veth3 - vhost - guest 2
Heh, you
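The topology sketched above could be built with standard bridge tooling; a hedged sketch (interface names are illustrative, all commands need root, and veth1/veth3 would then be handed to vhost for guest 1 and guest 2):

```shell
# Two veth pairs; one end of each goes into the bridge:
ip link add veth0 type veth peer name veth1
ip link add veth2 type veth peer name veth3

# Bridge eth0 with the bridge-side veth ends:
brctl addbr br0
brctl addif br0 eth0
brctl addif br0 veth0
brctl addif br0 veth2

# Bring everything up:
ip link set br0 up
ip link set veth0 up
ip link set veth1 up
ip link set veth2 up
ip link set veth3 up
```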
Subject: Re: [PATCH][RFC] net/bridge: add basic VEPA support
On Monday 10 August 2009, Michael S. Tsirkin wrote:
+struct workqueue_struct *vhost_workqueue;
[nitpicking] This could be static.
+/* The virtqueue structure describes a queue attached to a device. */
+struct vhost_virtqueue {
+ struct vhost_dev *dev;
+
+ /* The actual ring of
On Wed, Aug 12, 2009 at 07:03:22PM +0200, Arnd Bergmann wrote:
On Monday 10 August 2009, Michael S. Tsirkin wrote:
+struct workqueue_struct *vhost_workqueue;
[nitpicking] This could be static.
Good catch. Thanks!
+/* The virtqueue structure describes a queue attached to a device. */
On Mon, Aug 10, 2009 at 03:51:12PM -0500, Anthony Liguori wrote:
Michael S. Tsirkin wrote:
This adds support for vhost-net virtio kernel backend.
This is RFC, but works without issues for me.
Still needs to be split up, tested and benchmarked properly,
but posting it here in case people
On Wednesday 12 August 2009, Michael S. Tsirkin wrote:
On Wed, Aug 12, 2009 at 07:03:22PM +0200, Arnd Bergmann wrote:
We discussed this before, and I still think this could be directly derived
from struct virtqueue, in the same way that vring_virtqueue is derived from
struct virtqueue.
I
However, as I've mentioned repeatedly, the reason I won't merge
virtio-serial is that it duplicates functionality with virtio-console.
If the two are converged, I'm happy to merge it. I'm not opposed to
having more functionality.
I strongly agree.
Paul
Michael S. Tsirkin wrote:
On
Why bother switching to userspace for migration? Can't you just have
get/set ioctls for the state?
I have these. But live migration requires dirty page logging.
I do not want to implement it in kernel.
Is it really that difficult? I think it
Michael S. Tsirkin wrote:
We discussed this before, and I still think this could be directly derived
from struct virtqueue, in the same way that vring_virtqueue is derived from
struct virtqueue.
I prefer keeping it simple. Much of the abstraction in virtio is due to the
fact that it needs
Arnd Bergmann wrote:
As I pointed out earlier, most code in virtio net is asymmetrical: guest
provides buffers, host consumes them. Possibly, one could use virtio
rings in a symmetrical way, but support of existing guest virtio net
means there's almost no shared code.
The trick is to