Re: [PATCHv2 2/2] vhost_net: a kernel-level virtio server

2009-08-12 Thread Michael S. Tsirkin
On Tue, Aug 11, 2009 at 08:06:02PM -0400, Gregory Haskins wrote:
 Michael S. Tsirkin wrote:
  What it is: vhost net is a character device that can be used to reduce
  the number of system calls involved in virtio networking.
  Existing virtio net code is used in the guest without modification.
  
  There's similarity with vringfd, with some differences and reduced scope
  - uses eventfd for signalling
  - structures can be moved around in memory at any time (good for migration)
  - support memory table and not just an offset (needed for kvm)
  
  common virtio related code has been put in a separate file vhost.c and
  can be made into a separate module if/when more backends appear.  I used
  Rusty's lguest.c as the source for developing this part : this supplied
  me with witty comments I wouldn't be able to write myself.
  
  What it is not: vhost net is not a bus, and not a generic new system
  call. No assumptions are made on how guest performs hypercalls.
  Userspace hypervisors are supported as well as kvm.
  
  How it works: Basically, we connect virtio frontend (configured by
  userspace) to a backend. The backend could be a network device, or a
  tun-like device. In this version I only support raw socket as a backend,
  which can be bound to e.g. SR IOV, or to macvlan device.  Backend is
  also configured by userspace, including vlan/mac etc.
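
For reference, a minimal userspace sketch (not from this patch; the helper name
is illustrative and error handling is trimmed) of the kind of raw socket backend
described above, bound to a single device such as an SR-IOV VF or a macvlan:

/* Open a raw packet socket and restrict it to one interface; the
 * resulting fd is what a userspace hypervisor could hand to vhost-net.
 * Requires CAP_NET_RAW.
 */
#include <string.h>
#include <sys/socket.h>
#include <linux/if_packet.h>
#include <linux/if_ether.h>
#include <net/if.h>
#include <arpa/inet.h>

static int open_raw_backend(const char *ifname)
{
        struct sockaddr_ll ll;
        int fd = socket(AF_PACKET, SOCK_RAW, htons(ETH_P_ALL));

        if (fd < 0)
                return -1;

        memset(&ll, 0, sizeof(ll));
        ll.sll_family   = AF_PACKET;
        ll.sll_protocol = htons(ETH_P_ALL);
        ll.sll_ifindex  = if_nametoindex(ifname);

        /* Bind so only traffic on this one device is seen. */
        if (bind(fd, (struct sockaddr *)&ll, sizeof(ll)) < 0)
                return -1;

        return fd;
}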
  
  Status:
  This works for me, and I haven't seen any crashes.
  I have not run any benchmarks yet, compared to userspace, I expect to
  see improved latency (as I save up to 4 system calls per packet) but not
  bandwidth/CPU (as TSO and interrupt mitigation are not supported).
  
  Features that I plan to look at in the future:
  - TSO
  - interrupt mitigation
  - zero copy
 
 Only a quick review for now.  Will look closer later.
 
 (see inline)
 
  
  Signed-off-by: Michael S. Tsirkin m...@redhat.com
  
  v2
  ---
   MAINTAINERS|   10 +
   arch/x86/kvm/Kconfig   |1 +
   drivers/Makefile   |1 +
   drivers/vhost/Kconfig  |   11 +
   drivers/vhost/Makefile |2 +
   drivers/vhost/net.c|  411 +++
   drivers/vhost/vhost.c  |  663 
  
   drivers/vhost/vhost.h  |  108 +++
   include/linux/Kbuild   |1 +
   include/linux/miscdevice.h |1 +
   include/linux/vhost.h  |  100 +++
   11 files changed, 1309 insertions(+), 0 deletions(-)
   create mode 100644 drivers/vhost/Kconfig
   create mode 100644 drivers/vhost/Makefile
   create mode 100644 drivers/vhost/net.c
   create mode 100644 drivers/vhost/vhost.c
   create mode 100644 drivers/vhost/vhost.h
   create mode 100644 include/linux/vhost.h
  
  diff --git a/MAINTAINERS b/MAINTAINERS
  index ebc2691..eb0c1da 100644
  --- a/MAINTAINERS
  +++ b/MAINTAINERS
  @@ -6312,6 +6312,16 @@ S:   Maintained
   F: Documentation/filesystems/vfat.txt
   F: fs/fat/
   
  +VIRTIO HOST (VHOST)
  +P: Michael S. Tsirkin
  +M: m...@redhat.com
  +L: k...@vger.kernel.org
  +L: virtualizat...@lists.osdl.org
  +L: net...@vger.kernel.org
  +S: Maintained
  +F: drivers/vhost/
  +F: include/linux/vhost.h
  +
   VIA RHINE NETWORK DRIVER
   P: Roger Luethi
   M: r...@hellgate.ch
  diff --git a/arch/x86/kvm/Kconfig b/arch/x86/kvm/Kconfig
  index b84e571..94f44d9 100644
  --- a/arch/x86/kvm/Kconfig
  +++ b/arch/x86/kvm/Kconfig
  @@ -64,6 +64,7 @@ config KVM_AMD
   
    # OK, it's a little counter-intuitive to do this, but it puts it neatly under
   # the virtualization menu.
  +source drivers/vhost/Kconfig
   source drivers/lguest/Kconfig
   source drivers/virtio/Kconfig
   
  diff --git a/drivers/Makefile b/drivers/Makefile
  index bc4205d..1551ae1 100644
  --- a/drivers/Makefile
  +++ b/drivers/Makefile
  @@ -105,6 +105,7 @@ obj-$(CONFIG_HID)   += hid/
   obj-$(CONFIG_PPC_PS3)  += ps3/
   obj-$(CONFIG_OF)   += of/
   obj-$(CONFIG_SSB)  += ssb/
  +obj-$(CONFIG_VHOST_NET)+= vhost/
   obj-$(CONFIG_VIRTIO)   += virtio/
   obj-$(CONFIG_VLYNQ)+= vlynq/
   obj-$(CONFIG_STAGING)  += staging/
  diff --git a/drivers/vhost/Kconfig b/drivers/vhost/Kconfig
  new file mode 100644
  index 000..d955406
  --- /dev/null
  +++ b/drivers/vhost/Kconfig
  @@ -0,0 +1,11 @@
  +config VHOST_NET
  +   tristate "Host kernel accelerator for virtio net"
  +   depends on NET && EVENTFD
  +   ---help---
  + This kernel module can be loaded in host kernel to accelerate
  + guest networking with virtio_net. Not to be confused with virtio_net
  + module itself which needs to be loaded in guest kernel.
  +
  + To compile this driver as a module, choose M here: the module will
  + be called vhost_net.
  +
  diff --git a/drivers/vhost/Makefile b/drivers/vhost/Makefile
  new file mode 100644
  index 000..72dd020
  --- /dev/null
  +++ b/drivers/vhost/Makefile
  @@ -0,0 +1,2 @@
  +obj-$(CONFIG_VHOST_NET) += vhost_net.o
 

Re: [PATCHv2 2/2] vhost_net: a kernel-level virtio server

2009-08-12 Thread Michael S. Tsirkin
On Tue, Aug 11, 2009 at 08:06:02PM -0400, Gregory Haskins wrote:
  diff --git a/include/linux/miscdevice.h b/include/linux/miscdevice.h
  index 0521177..781a8bb 100644
  --- a/include/linux/miscdevice.h
  +++ b/include/linux/miscdevice.h
  @@ -30,6 +30,7 @@
   #define HPET_MINOR 228
   #define FUSE_MINOR 229
   #define KVM_MINOR  232
  +#define VHOST_NET_MINOR 233
 
 Would recommend using DYNAMIC-MINOR.

Good idea. Thanks!
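
For reference, a minimal sketch of what dynamic-minor registration looks like
(the fops contents and the "vhost-net" name here are illustrative, not taken
from the patch):

#include <linux/fs.h>
#include <linux/miscdevice.h>
#include <linux/module.h>

static const struct file_operations vhost_net_fops = {
        .owner = THIS_MODULE,
        /* .open, .release, .unlocked_ioctl, ... go here */
};

static struct miscdevice vhost_net_misc = {
        .minor = MISC_DYNAMIC_MINOR,    /* let the misc core pick a free minor */
        .name  = "vhost-net",
        .fops  = &vhost_net_fops,
};

static int __init vhost_net_init(void)
{
        return misc_register(&vhost_net_misc);
}
module_init(vhost_net_init);

static void __exit vhost_net_exit(void)
{
        misc_deregister(&vhost_net_misc);
}
module_exit(vhost_net_exit);

With this, no fixed VHOST_NET_MINOR entry in miscdevice.h is needed; the
assigned minor shows up in /proc/misc and the device node is created from the
uevent.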

-- 
MST


Re: [PATCHv2 0/2] vhost: a kernel-level virtio server

2009-08-12 Thread Gregory Haskins
Michael S. Tsirkin wrote:
 On Tue, Aug 11, 2009 at 07:49:37PM -0400, Gregory Haskins wrote:
 Michael S. Tsirkin wrote:
 This implements vhost: a kernel-level backend for virtio,
 The main motivation for this work is to reduce virtualization
 overhead for virtio by removing system calls on data path,
 without guest changes. For virtio-net, this removes up to
 4 system calls per packet: vm exit for kick, reentry for kick,
 iothread wakeup for packet, interrupt injection for packet.

 Some more detailed description attached to the patch itself.

 The patches are against 2.6.31-rc4.  I'd like them to go into linux-next
 and down the road 2.6.32 if possible.  Please comment.
 I will add this series to my benchmark run in the next day or so.  Any
 specific instructions on how to set it up and run?

 Regards,
 -Greg

 
 1. use a dedicated network interface with SRIOV, program mac to match
that of guest (for testing, you can set promisc mode, but that is
bad for performance)

Are you saying SRIOV is a requirement, and I can either program the
SRIOV adapter with a mac or use promis?  Or are you saying I can use
SRIOV+programmed mac OR a regular nic + promisc (with a perf penalty).


 2. disable tso,gso,lro with ethtool

Out of curiosity, wouldn't you only need to disable LRO on the adapter,
since the other two (IIUC) are transmit path and are therefore
influenced by the skb's you generate in vhost?


 3. add vhost=ethX

You mean via ip link I assume?

Regards,
-Greg




Re: [PATCHv2 0/2] vhost: a kernel-level virtio server

2009-08-12 Thread Michael S. Tsirkin
On Wed, Aug 12, 2009 at 07:56:05AM -0400, Gregory Haskins wrote:
 Michael S. Tsirkin wrote:
  On Tue, Aug 11, 2009 at 07:49:37PM -0400, Gregory Haskins wrote:
  Michael S. Tsirkin wrote:
  This implements vhost: a kernel-level backend for virtio,
  The main motivation for this work is to reduce virtualization
  overhead for virtio by removing system calls on data path,
  without guest changes. For virtio-net, this removes up to
  4 system calls per packet: vm exit for kick, reentry for kick,
  iothread wakeup for packet, interrupt injection for packet.
 
  Some more detailed description attached to the patch itself.
 
  The patches are against 2.6.31-rc4.  I'd like them to go into linux-next
  and down the road 2.6.32 if possible.  Please comment.
  I will add this series to my benchmark run in the next day or so.  Any
  specific instructions on how to set it up and run?
 
  Regards,
  -Greg
 
  
  1. use a dedicated network interface with SRIOV, program mac to match
 that of guest (for testing, you can set promisc mode, but that is
 bad for performance)
 
 Are you saying SRIOV is a requirement, and I can either program the
 SRIOV adapter with a mac or use promis?  Or are you saying I can use
 SRIOV+programmed mac OR a regular nic + promisc (with a perf penalty).

SRIOV is not a requirement. And you can also use a dedicated
nic+programmed mac if you are so inclined.

  2. disable tso,gso,lro with ethtool
 
 Out of curiosity, wouldnt you only need to disable LRO on the adapter,
 since the other two (IIUC) are transmit path and are therefore
 influenced by the skb's you generate in vhost?

Hmm, makes sense. I'll check this and let you know.

 
  3. add vhost=ethX
 
 You mean via ip link I assume?

No, that's a new flag for virtio in qemu:

-net nic,model=virtio,vhost=veth0

 Regards,
 -Greg
 




Re: [PATCHv2 0/2] vhost: a kernel-level virtio server

2009-08-12 Thread Gregory Haskins
Michael S. Tsirkin wrote:
 On Wed, Aug 12, 2009 at 07:56:05AM -0400, Gregory Haskins wrote:
 Michael S. Tsirkin wrote:

snip


 1. use a dedicated network interface with SRIOV, program mac to match
that of guest (for testing, you can set promisc mode, but that is
bad for performance)

 Are you saying SRIOV is a requirement, and I can either program the
 SRIOV adapter with a mac or use promis?  Or are you saying I can use
 SRIOV+programmed mac OR a regular nic + promisc (with a perf penalty).
 
 SRIOV is not a requirement. And you can also use a dedicated
 nic+programmed mac if you are so inclined.

Makes sense.  Got it.

I was going to add guest-to-guest to the test matrix, but I assume that
is not supported with vhost unless you have something like a VEPA
enabled bridge?

snip

 3. add vhost=ethX
 You mean via ip link I assume?
 
 No, that's a new flag for virtio in qemu:
 
 -net nic,model=virtio,vhost=veth0

Ah, ok.  Even better.

Thanks!
-Greg




Re: [PATCHv2 0/2] vhost: a kernel-level virtio server

2009-08-12 Thread Arnd Bergmann
On Wednesday 12 August 2009, Gregory Haskins wrote:
  Are you saying SRIOV is a requirement, and I can either program the
  SRIOV adapter with a mac or use promis?  Or are you saying I can use
  SRIOV+programmed mac OR a regular nic + promisc (with a perf penalty).
  
  SRIOV is not a requirement. And you can also use a dedicated
  nic+programmed mac if you are so inclined.
 
 Makes sense.  Got it.
 
 I was going to add guest-to-guest to the test matrix, but I assume that
 is not supported with vhost unless you have something like a VEPA
 enabled bridge?
 

If I understand it correctly, you can at least connect a veth pair
to a bridge, right? Something like

   veth0 - veth1 - vhost - guest 1 
eth0 - br0-|
   veth2 - veth3 - vhost - guest 2
  
It's a bit more complicated than it needs to be, but should work fine.

Arnd 


Re: [PATCHv2 2/2] vhost_net: a kernel-level virtio server

2009-08-12 Thread Gregory Haskins
Michael S. Tsirkin wrote:
 On Tue, Aug 11, 2009 at 08:06:02PM -0400, Gregory Haskins wrote:
 Michael S. Tsirkin wrote:
 What it is: vhost net is a character device that can be used to reduce
 the number of system calls involved in virtio networking.
 Existing virtio net code is used in the guest without modification.

 There's similarity with vringfd, with some differences and reduced scope
 - uses eventfd for signalling
 - structures can be moved around in memory at any time (good for migration)
 - support memory table and not just an offset (needed for kvm)

 common virtio related code has been put in a separate file vhost.c and
 can be made into a separate module if/when more backends appear.  I used
 Rusty's lguest.c as the source for developing this part : this supplied
 me with witty comments I wouldn't be able to write myself.

 What it is not: vhost net is not a bus, and not a generic new system
 call. No assumptions are made on how guest performs hypercalls.
 Userspace hypervisors are supported as well as kvm.

 How it works: Basically, we connect virtio frontend (configured by
 userspace) to a backend. The backend could be a network device, or a
 tun-like device. In this version I only support raw socket as a backend,
 which can be bound to e.g. SR IOV, or to macvlan device.  Backend is
 also configured by userspace, including vlan/mac etc.

 Status:
 This works for me, and I haven't see any crashes.
 I have not run any benchmarks yet, compared to userspace, I expect to
 see improved latency (as I save up to 4 system calls per packet) but not
 bandwidth/CPU (as TSO and interrupt mitigation are not supported).

 Features that I plan to look at in the future:
 - TSO
 - interrupt mitigation
 - zero copy
 Only a quick review for now.  Will look closer later.

 (see inline)

 Signed-off-by: Michael S. Tsirkin m...@redhat.com

 v2
 ---
  MAINTAINERS|   10 +
  arch/x86/kvm/Kconfig   |1 +
  drivers/Makefile   |1 +
  drivers/vhost/Kconfig  |   11 +
  drivers/vhost/Makefile |2 +
  drivers/vhost/net.c|  411 +++
  drivers/vhost/vhost.c  |  663 
 
  drivers/vhost/vhost.h  |  108 +++
  include/linux/Kbuild   |1 +
  include/linux/miscdevice.h |1 +
  include/linux/vhost.h  |  100 +++
  11 files changed, 1309 insertions(+), 0 deletions(-)
  create mode 100644 drivers/vhost/Kconfig
  create mode 100644 drivers/vhost/Makefile
  create mode 100644 drivers/vhost/net.c
  create mode 100644 drivers/vhost/vhost.c
  create mode 100644 drivers/vhost/vhost.h
  create mode 100644 include/linux/vhost.h

 diff --git a/MAINTAINERS b/MAINTAINERS
 index ebc2691..eb0c1da 100644
 --- a/MAINTAINERS
 +++ b/MAINTAINERS
 @@ -6312,6 +6312,16 @@ S:   Maintained
  F: Documentation/filesystems/vfat.txt
  F: fs/fat/
  
 +VIRTIO HOST (VHOST)
 +P: Michael S. Tsirkin
 +M: m...@redhat.com
 +L: k...@vger.kernel.org
 +L: virtualizat...@lists.osdl.org
 +L: net...@vger.kernel.org
 +S: Maintained
 +F: drivers/vhost/
 +F: include/linux/vhost.h
 +
  VIA RHINE NETWORK DRIVER
  P: Roger Luethi
  M: r...@hellgate.ch
 diff --git a/arch/x86/kvm/Kconfig b/arch/x86/kvm/Kconfig
 index b84e571..94f44d9 100644
 --- a/arch/x86/kvm/Kconfig
 +++ b/arch/x86/kvm/Kconfig
 @@ -64,6 +64,7 @@ config KVM_AMD
  
   # OK, it's a little counter-intuitive to do this, but it puts it neatly under
  # the virtualization menu.
 +source drivers/vhost/Kconfig
  source drivers/lguest/Kconfig
  source drivers/virtio/Kconfig
  
 diff --git a/drivers/Makefile b/drivers/Makefile
 index bc4205d..1551ae1 100644
 --- a/drivers/Makefile
 +++ b/drivers/Makefile
 @@ -105,6 +105,7 @@ obj-$(CONFIG_HID)   += hid/
  obj-$(CONFIG_PPC_PS3)  += ps3/
  obj-$(CONFIG_OF)   += of/
  obj-$(CONFIG_SSB)  += ssb/
 +obj-$(CONFIG_VHOST_NET)+= vhost/
  obj-$(CONFIG_VIRTIO)   += virtio/
  obj-$(CONFIG_VLYNQ)+= vlynq/
  obj-$(CONFIG_STAGING)  += staging/
 diff --git a/drivers/vhost/Kconfig b/drivers/vhost/Kconfig
 new file mode 100644
 index 000..d955406
 --- /dev/null
 +++ b/drivers/vhost/Kconfig
 @@ -0,0 +1,11 @@
 +config VHOST_NET
  +   tristate "Host kernel accelerator for virtio net"
  +   depends on NET && EVENTFD
 +   ---help---
 + This kernel module can be loaded in host kernel to accelerate
 + guest networking with virtio_net. Not to be confused with virtio_net
 + module itself which needs to be loaded in guest kernel.
 +
 + To compile this driver as a module, choose M here: the module will
 + be called vhost_net.
 +
 diff --git a/drivers/vhost/Makefile b/drivers/vhost/Makefile
 new file mode 100644
 index 000..72dd020
 --- /dev/null
 +++ b/drivers/vhost/Makefile
 @@ -0,0 +1,2 @@
 +obj-$(CONFIG_VHOST_NET) += vhost_net.o
 +vhost_net-y := vhost.o net.o
 diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
 new file mode 100644
 index 

Re: [PATCHv2 0/2] vhost: a kernel-level virtio server

2009-08-12 Thread Michael S. Tsirkin
On Wed, Aug 12, 2009 at 08:41:31AM -0400, Gregory Haskins wrote:
 Michael S. Tsirkin wrote:
  On Wed, Aug 12, 2009 at 07:56:05AM -0400, Gregory Haskins wrote:
  Michael S. Tsirkin wrote:
 
 snip
 
 
  1. use a dedicated network interface with SRIOV, program mac to match
 that of guest (for testing, you can set promisc mode, but that is
 bad for performance)
 
  Are you saying SRIOV is a requirement, and I can either program the
  SRIOV adapter with a mac or use promis?  Or are you saying I can use
  SRIOV+programmed mac OR a regular nic + promisc (with a perf penalty).
  
  SRIOV is not a requirement. And you can also use a dedicated
  nic+programmed mac if you are so inclined.
 
 Makes sense.  Got it.
 
 I was going to add guest-to-guest to the test matrix, but I assume that
 is not supported with vhost unless you have something like a VEPA
 enabled bridge?
 
 snip

Presumably you mean on the same host?  There were also some patches to
enable local guest to guest for macvlan, that would be a nice
software-only solution.  For back to back, I just tried over veth, seems
to work fine.

  3. add vhost=ethX
  You mean via ip link I assume?
  
  No, that's a new flag for virtio in qemu:
  
  -net nic,model=virtio,vhost=veth0
 
 Ah, ok.  Even better.
 
 Thanks!
 -Greg
 




Re: [PATCH][RFC] net/bridge: add basic VEPA support

2009-08-12 Thread Arnd Bergmann
On Tuesday 11 August 2009, Paul Congdon (UC Davis) wrote:
  
   The patch from Eric Biederman to allow macvlan to bridge between
   its slave ports is at
  
   http://kerneltrap.org/mailarchive/linux-netdev/2009/3/9/5125774
  
  Looking through the discussions here, it does not seem as if a decision
  was made to integrate those patches, because they would make the
  macvlan interface behave too much like a bridge. 

Right, that question is still open, and I don't see it as very important
right now, as long as we can still use it for VEPA.

  Also, it seems as if there was still a problem with doing
  multicast/broadcast delivery when enabling local VM-to-VM
  communication. Is that solved by now?

Not yet, but I guess it comes as a natural extension when I fix
multicast/broadcast delivery from the reflective relay for VEPA.

The logic that I would use there is:

broadcast from a downstream port:
 if (bridge_mode(source_port)) {
  forward_to_upstream(frame);
  for_each_downstream(port) {
    /* deliver to all bridge ports except self, do
       not deliver to any VEPA port. */
    if (bridge_mode(port) && port != source_port) {
       forward_to_downstream(frame, port);
    }
  }
 } else {
  forward_to_upstream(frame);
 }

broadcast from the upstream port:
 if (bridge_mode(frame.source)) {
  /* comes from a port in bridge mode, so has already been
     delivered to all other bridge ports */
  for_each_downstream(port) {
    if (!bridge_mode(port)) {
     forward_to_downstream(frame, port);
    }
  }
 } else if (vepa_mode(frame.source)) {
  /* comes from VEPA port, so need to deliver to all
     bridge and all vepa ports except self */
  for_each_downstream(port) {
    if (port != frame.source)
       forward_to_downstream(frame, port);
  }
 } else {
  /* external source, so flood to everyone */
  for_each_downstream(port) {
    forward_to_downstream(frame, port);
  }
 }

For multicast, we can do the same, or optionally add a per-port filter
as you mentioned, if it becomes a bottleneck. 

Do you think this addresses the problem, or did I miss something important?

 Also, is there a solution, or plans for a solution, to address macvtap
 interfaces that are set to 'promiscuous' mode?  It would seem fairly easy to
 support this for interfaces that are simply trying to listen to the port
 (e.g. Wireshark). 

If you want to use tcpdump or wireshark on all ports simultaneously in a pure
VEPA, you can still attach it to the 'lowerdev', e.g. eth0 or eth0.2 (for
macvlan nested in vlan).
If we allow bridge ports, we might want to extend the local delivery
to also go through all the hooks of the external port, so that you can
attach packet sockets there.

 If the port was being used by something like a firewall
 then the VEPA filtering doesn't work too well.

Not sure what you mean. Are you talking about a firewall separating the guests
from the outside, between the VEPA and the reflective relay, or a firewall 
between
the guests in case of local delivery?

Arnd 


Re: [PATCHv2 2/2] vhost_net: a kernel-level virtio server

2009-08-12 Thread Michael S. Tsirkin
On Wed, Aug 12, 2009 at 09:01:35AM -0400, Gregory Haskins wrote:
 I think I understand what your comment above meant:  You don't need to
 do synchronize_rcu() because you can flush the workqueue instead to
 ensure that all readers have completed.

Yes.

  But if that's true, to me, the
 rcu_dereference itself is gratuitous,

Here's a thesis on what rcu_dereference does (besides documentation):

reader does this

A: sock = n->sock
B: use *sock

Say writer does this:

C: newsock = allocate socket
D: initialize(newsock)
E: n->sock = newsock
F: flush


On Alpha, reads could be reordered.  So, on smp, command A could get
data from point F, and command B - from point D (uninitialized, from
cache).  IOW, you get fresh pointer but stale data.
So we need to stick a barrier in there.

 and that pointer is *not* actually
 RCU protected (nor does it need to be).

Heh, if readers are lockless and writer does init/update/sync,
this to me spells rcu.
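
A minimal sketch of the pattern under discussion (types and names here are
illustrative, not the patch's): rcu_assign_pointer() supplies the write-side
barrier between D and E, rcu_dereference() supplies the read-side dependency
barrier between A and B, and the workqueue flush stands in for the grace
period:

#include <linux/rcupdate.h>
#include <linux/workqueue.h>
#include <linux/net.h>

struct demo_net {
        struct socket *sock;
        struct workqueue_struct *wq;
};

/* Reader: only ever executes as a work item on n->wq. */
static void demo_reader(struct demo_net *n)
{
        /* A: fetch the pointer; rcu_dereference() orders this load
         * against later dependent loads, so B cannot see the socket
         * before its initialization (D) is visible. */
        struct socket *sock = rcu_dereference(n->sock);

        if (sock) {
                /* B: use *sock ... */
        }
}

/* Writer: caller has already done C (allocate) and D (initialize). */
static void demo_set_backend(struct demo_net *n, struct socket *newsock)
{
        /* E: publish; rcu_assign_pointer() orders D before the store. */
        rcu_assign_pointer(n->sock, newsock);

        /* F: instead of synchronize_rcu(), wait for any work item that
         * might still be using the old pointer. */
        flush_workqueue(n->wq);
}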

-- 
MST


Re: [PATCHv2 0/2] vhost: a kernel-level virtio server

2009-08-12 Thread Arnd Bergmann
On Wednesday 12 August 2009, Michael S. Tsirkin wrote:
  If I understand it correctly, you can at least connect a veth pair
  to a bridge, right? Something like
  
 veth0 - veth1 - vhost - guest 1 
  eth0 - br0-|
 veth2 - veth3 - vhost - guest 2
 
 Heh, you don't need a bridge in this picture:
 
 guest 1 - vhost - veth0 - veth1 - vhost guest 2

Sure, but the setup I described is the one that I would expect
to see in practice because it gives you external connectivity.

Measuring two guests communicating over a veth pair is
interesting for finding the bottlenecks, but of little
practical relevance.

Arnd 


Re: [PATCHv2 0/2] vhost: a kernel-level virtio server

2009-08-12 Thread Michael S. Tsirkin
On Wed, Aug 12, 2009 at 03:40:44PM +0200, Arnd Bergmann wrote:
 On Wednesday 12 August 2009, Michael S. Tsirkin wrote:
   If I understand it correctly, you can at least connect a veth pair
   to a bridge, right? Something like
   
  veth0 - veth1 - vhost - guest 1 
   eth0 - br0-|
  veth2 - veth3 - vhost - guest 2
  
  Heh, you don't need a bridge in this picture:
  
  guest 1 - vhost - veth0 - veth1 - vhost guest 2
 
 Sure, but the setup I described is the one that I would expect
 to see in practice because it gives you external connectivity.
 
 Measuring two guests communicating over a veth pair is
 interesting for finding the bottlenecks, but of little
 practical relevance.
 
   Arnd 

Oh, hopefully macvlan will soon allow that.

-- 
MST


Re: [PATCHv2 2/2] vhost_net: a kernel-level virtio server

2009-08-12 Thread Gregory Haskins
Michael S. Tsirkin wrote:
 On Wed, Aug 12, 2009 at 09:01:35AM -0400, Gregory Haskins wrote:
 I think I understand what your comment above meant:  You don't need to
 do synchronize_rcu() because you can flush the workqueue instead to
 ensure that all readers have completed.
 
 Yes.
 
  But if thats true, to me, the
 rcu_dereference itself is gratuitous,
 
 Here's a thesis on what rcu_dereference does (besides documentation):
 
 reader does this
 
   A: sock = n->sock
   B: use *sock
 
 Say writer does this:
 
   C: newsock = allocate socket
   D: initialize(newsock)
   E: n->sock = newsock
   F: flush
 
 
 On Alpha, reads could be reordered.  So, on smp, command A could get
 data from point F, and command B - from point D (uninitialized, from
 cache).  IOW, you get fresh pointer but stale data.
 So we need to stick a barrier in there.

Yes, that is understood.  Perhaps you should just use a normal barrier,
however.  (Or at least a comment that says I am just using this for its
barrier).

 
 and that pointer is *not* actually
 RCU protected (nor does it need to be).
 
 Heh, if readers are lockless and writer does init/update/sync,
 this to me spells rcu.

More correctly: it smells like RCU, but it's not. ;)  It's rcu-like,
but you are not really using the rcu facilities.  I think anyone who
knows RCU and reads your code will likely be scratching their heads as well.

It's probably not a big deal, as I understand your code now.  Just a
suggestion to help clarify it.

Regards,
-Greg




Re: [PATCHv2 2/2] vhost_net: a kernel-level virtio server

2009-08-12 Thread Michael S. Tsirkin
On Wed, Aug 12, 2009 at 09:41:35AM -0400, Gregory Haskins wrote:
 Michael S. Tsirkin wrote:
  On Wed, Aug 12, 2009 at 09:01:35AM -0400, Gregory Haskins wrote:
  I think I understand what your comment above meant:  You don't need to
  do synchronize_rcu() because you can flush the workqueue instead to
  ensure that all readers have completed.
  
  Yes.
  
   But if thats true, to me, the
  rcu_dereference itself is gratuitous,
  
  Here's a thesis on what rcu_dereference does (besides documentation):
  
  reader does this
  
  A: sock = n->sock
  B: use *sock
  
  Say writer does this:
  
  C: newsock = allocate socket
  D: initialize(newsock)
  E: n->sock = newsock
  F: flush
  
  
  On Alpha, reads could be reordered.  So, on smp, command A could get
  data from point F, and command B - from point D (uninitialized, from
  cache).  IOW, you get fresh pointer but stale data.
  So we need to stick a barrier in there.
 
 Yes, that is understood.  Perhaps you should just use a normal barrier,
 however.  (Or at least a comment that says I am just using this for its
 barrier).
 
  
  and that pointer is *not* actually
  RCU protected (nor does it need to be).
  
  Heh, if readers are lockless and writer does init/update/sync,
  this to me spells rcu.
 
 More correctly: it smells like RCU, but its not. ;)  It's rcu-like,
 but you are not really using the rcu facilities.  I think anyone that
 knows RCU and reads your code will likely be scratching their heads as well.
 
 Its probably not a big deal, as I understand your code now.  Just a
 suggestion to help clarify it.
 
 Regards,
 -Greg
 

OK, I'll add some comments about that.
Thanks for the review!

-- 
MST


Re: [PATCHv2 0/2] vhost: a kernel-level virtio server

2009-08-12 Thread Gregory Haskins
Arnd Bergmann wrote:
 On Wednesday 12 August 2009, Michael S. Tsirkin wrote:
 If I understand it correctly, you can at least connect a veth pair
 to a bridge, right? Something like

veth0 - veth1 - vhost - guest 1 
 eth0 - br0-|
veth2 - veth3 - vhost - guest 2

 Heh, you don't need a bridge in this picture:

 guest 1 - vhost - veth0 - veth1 - vhost guest 2
 
 Sure, but the setup I described is the one that I would expect
 to see in practice because it gives you external connectivity.
 
 Measuring two guests communicating over a veth pair is
 interesting for finding the bottlenecks, but of little
 practical relevance.
 
   Arnd 

Yeah, this would be the config I would be interested in.

Regards,
-Greg




Re: [PATCHv2 0/2] vhost: a kernel-level virtio server

2009-08-12 Thread Michael S. Tsirkin
On Wed, Aug 12, 2009 at 09:51:45AM -0400, Gregory Haskins wrote:
 Arnd Bergmann wrote:
  On Wednesday 12 August 2009, Michael S. Tsirkin wrote:
  If I understand it correctly, you can at least connect a veth pair
  to a bridge, right? Something like
 
 veth0 - veth1 - vhost - guest 1 
  eth0 - br0-|
 veth2 - veth3 - vhost - guest 2
 
  Heh, you don't need a bridge in this picture:
 
  guest 1 - vhost - veth0 - veth1 - vhost guest 2
  
  Sure, but the setup I described is the one that I would expect
  to see in practice because it gives you external connectivity.
  
  Measuring two guests communicating over a veth pair is
  interesting for finding the bottlenecks, but of little
  practical relevance.
  
  Arnd 
 
 Yeah, this would be the config I would be interested in.

Hmm, this wouldn't be the config to use for the benchmark though: there
are just too many variables.  If you want both guest to guest and guest
to host, create 2 nics in the guest.

Here's one way to do this:

-net nic,model=virtio,vlan=0 -net user,vlan=0
-net nic,vlan=1,model=virtio,vhost=veth0
-redir tcp:8022::22

-net nic,model=virtio,vlan=0 -net user,vlan=0
 -net nic,vlan=1,model=virtio,vhost=veth1
-redir tcp:8023::22

In guests, for simplicity, configure eth1 and eth0
to use separate subnets.

Long term, I hope macvlan will be extended to support
guest to guest.

 Regards,
 -Greg
 




Re: [PATCHv2 2/2] vhost_net: a kernel-level virtio server

2009-08-12 Thread Michael S. Tsirkin
On Wed, Aug 12, 2009 at 07:11:07AM -0700, Paul E. McKenney wrote:
 On Wed, Aug 12, 2009 at 04:25:40PM +0300, Michael S. Tsirkin wrote:
  On Wed, Aug 12, 2009 at 09:01:35AM -0400, Gregory Haskins wrote:
   I think I understand what your comment above meant:  You don't need to
   do synchronize_rcu() because you can flush the workqueue instead to
   ensure that all readers have completed.
  
  Yes.
  
But if thats true, to me, the
   rcu_dereference itself is gratuitous,
  
  Here's a thesis on what rcu_dereference does (besides documentation):
  
  reader does this
  
  A: sock = n->sock
  B: use *sock
  
  Say writer does this:
  
  C: newsock = allocate socket
  D: initialize(newsock)
  E: n->sock = newsock
  F: flush
  
  
  On Alpha, reads could be reordered.  So, on smp, command A could get
  data from point F, and command B - from point D (uninitialized, from
  cache).  IOW, you get fresh pointer but stale data.
  So we need to stick a barrier in there.
  
   and that pointer is *not* actually
   RCU protected (nor does it need to be).
  
  Heh, if readers are lockless and writer does init/update/sync,
  this to me spells rcu.
 
 If you are using call_rcu(), synchronize_rcu(), or one of the
 similar primitives, then you absolutely need rcu_read_lock() and
 rcu_read_unlock(), or one of the similar pairs of primitives.

Right. I don't use any of these though.

 If you -don't- use rcu_read_lock(), then you are pretty much restricted
 to adding data, but never removing it.
 
 Make sense?  ;-)
 
   Thanx, Paul

Since I only access data from a workqueue, I replaced synchronize_rcu
with workqueue flush. That's why I don't need rcu_read_lock.

-- 
MST


RE: [PATCH][RFC] net/bridge: add basic VEPA support

2009-08-12 Thread Fischer, Anna
 Subject: Re: [PATCH][RFC] net/bridge: add basic VEPA support
 
 On Tuesday 11 August 2009, Paul Congdon (UC Davis) wrote:
   
The patch from Eric Biederman to allow macvlan to bridge between
its slave ports is at
   
http://kerneltrap.org/mailarchive/linux-netdev/2009/3/9/5125774
  
   Looking through the discussions here, it does not seem as if a
 decision
   was made to integrate those patches, because they would make the
   macvlan interface behave too much like a bridge.
 
 Right, that question is still open, and dont't see it as very important
 right now, as long as we can still use it for VEPA.

Yes, for the basic VEPA this is not important. For MultiChannel VEPA, it 
would be nice if a macvlan device could operate as VEPA and as a typical
VEB (VEB = traditional bridge but no learning).
 
Basically, what we would need to be able to support is running a VEB and
a VEPA simultaneously on the same uplink port (e.g. the physical device). 
A new component (called the S-Component) would then multiplex frames
to the VEB or the VEPA based on a tagging scheme.

I could see this potentially working with macvlan, if it can operate in
both VEPA and VEB mode. But you are right that for basic VEPA, it would 
not be an immediate requirement.


  Also, is there a solution, or plans for a solution, to address
 macvtap
  interfaces that are set to 'promiscuous' mode?  It would seem fairly
 easy to
  support this for interfaces that are simply trying to listen to the
 port
  (e.g. Wireshark).
 
 If you want to use tcpdump or wireshark on all ports simulateously in a
 pure
 VEPA, you can still attach it to the 'lowerdev', e.g. eth0 or eth0.2
 (for macvlan
 nested in vlan).
 If we allow bridge ports, we might want to extend the local delivery
 to also go through all the hooks of the external port, so that you can
 attach packet sockets there.

I think the question here was whether there is a way for a macvlan interface
to be set to promiscuous mode. At the moment, I believe a macvlan interface
only receives packets based on its destination address (this is for unicast
packets now). What if a macvlan interface wanted to get all packets that
are being received (either on the physical device, or on a particular
VLAN if using macvlan nested in vlan). Would this work easily? Imagine
you have a virtual machine attached to that macvlan / macvtap device and
this VM wants to do packet inspection or network traffic monitoring on 
all packets flowing through the virtualized server.
 
Anna


Re: [PATCHv2 2/2] vhost_net: a kernel-level virtio server

2009-08-12 Thread Paul E. McKenney
On Wed, Aug 12, 2009 at 04:25:40PM +0300, Michael S. Tsirkin wrote:
 On Wed, Aug 12, 2009 at 09:01:35AM -0400, Gregory Haskins wrote:
  I think I understand what your comment above meant:  You don't need to
  do synchronize_rcu() because you can flush the workqueue instead to
  ensure that all readers have completed.
 
 Yes.
 
   But if thats true, to me, the
  rcu_dereference itself is gratuitous,
 
 Here's a thesis on what rcu_dereference does (besides documentation):
 
 reader does this
 
   A: sock = n->sock
   B: use *sock
 
 Say writer does this:
 
   C: newsock = allocate socket
   D: initialize(newsock)
   E: n->sock = newsock
   F: flush
 
 
 On Alpha, reads could be reordered.  So, on smp, command A could get
 data from point F, and command B - from point D (uninitialized, from
 cache).  IOW, you get fresh pointer but stale data.
 So we need to stick a barrier in there.
 
  and that pointer is *not* actually
  RCU protected (nor does it need to be).
 
 Heh, if readers are lockless and writer does init/update/sync,
 this to me spells rcu.

If you are using call_rcu(), synchronize_rcu(), or one of the
similar primitives, then you absolutely need rcu_read_lock() and
rcu_read_unlock(), or one of the similar pairs of primitives.

If you -don't- use rcu_read_lock(), then you are pretty much restricted
to adding data, but never removing it.

Make sense?  ;-)

Thanx, Paul


Re: [PATCHv2 2/2] vhost_net: a kernel-level virtio server

2009-08-12 Thread Paul E. McKenney
On Wed, Aug 12, 2009 at 05:15:59PM +0300, Michael S. Tsirkin wrote:
 On Wed, Aug 12, 2009 at 07:11:07AM -0700, Paul E. McKenney wrote:
  On Wed, Aug 12, 2009 at 04:25:40PM +0300, Michael S. Tsirkin wrote:
   On Wed, Aug 12, 2009 at 09:01:35AM -0400, Gregory Haskins wrote:
I think I understand what your comment above meant:  You don't need to
do synchronize_rcu() because you can flush the workqueue instead to
ensure that all readers have completed.
   
   Yes.
   
 But if thats true, to me, the
rcu_dereference itself is gratuitous,
   
   Here's a thesis on what rcu_dereference does (besides documentation):
   
   reader does this
   
 A: sock = n->sock
 B: use *sock
   
   Say writer does this:
   
 C: newsock = allocate socket
 D: initialize(newsock)
 E: n->sock = newsock
 F: flush
   
   
   On Alpha, reads could be reordered.  So, on smp, command A could get
   data from point F, and command B - from point D (uninitialized, from
   cache).  IOW, you get fresh pointer but stale data.
   So we need to stick a barrier in there.
   
and that pointer is *not* actually
RCU protected (nor does it need to be).
   
   Heh, if readers are lockless and writer does init/update/sync,
   this to me spells rcu.
  
  If you are using call_rcu(), synchronize_rcu(), or one of the
  similar primitives, then you absolutely need rcu_read_lock() and
  rcu_read_unlock(), or one of the similar pairs of primitives.
 
 Right. I don't use any of these though.
 
  If you -don't- use rcu_read_lock(), then you are pretty much restricted
  to adding data, but never removing it.
  
  Make sense?  ;-)
 
 Since I only access data from a workqueue, I replaced synchronize_rcu
 with workqueue flush. That's why I don't need rcu_read_lock.

Well, you -do- need -something- that takes on the role of rcu_read_lock(),
and in your case you in fact actually do.  Your equivalent of
rcu_read_lock() is the beginning of execution of a workqueue item, and
the equivalent of rcu_read_unlock() is the end of execution of that same
workqueue item.  Implicit, but no less real.

If a couple more uses like this show up, I might need to add this to
Documentation/RCU.  ;-)

Thanx, Paul


Re: [PATCHv2 2/2] vhost_net: a kernel-level virtio server

2009-08-12 Thread Michael S. Tsirkin
On Wed, Aug 12, 2009 at 08:26:39AM -0700, Paul E. McKenney wrote:
 On Wed, Aug 12, 2009 at 05:15:59PM +0300, Michael S. Tsirkin wrote:
  On Wed, Aug 12, 2009 at 07:11:07AM -0700, Paul E. McKenney wrote:
   On Wed, Aug 12, 2009 at 04:25:40PM +0300, Michael S. Tsirkin wrote:
On Wed, Aug 12, 2009 at 09:01:35AM -0400, Gregory Haskins wrote:
 I think I understand what your comment above meant:  You don't need to
 do synchronize_rcu() because you can flush the workqueue instead to
 ensure that all readers have completed.

Yes.

  But if thats true, to me, the
 rcu_dereference itself is gratuitous,

Here's a thesis on what rcu_dereference does (besides documentation):

reader does this

A: sock = n->sock
B: use *sock

Say writer does this:

C: newsock = allocate socket
D: initialize(newsock)
E: n->sock = newsock
F: flush


On Alpha, reads could be reordered.  So, on smp, command A could get
data from point F, and command B - from point D (uninitialized, from
cache).  IOW, you get fresh pointer but stale data.
So we need to stick a barrier in there.

 and that pointer is *not* actually
 RCU protected (nor does it need to be).

Heh, if readers are lockless and writer does init/update/sync,
this to me spells rcu.
   
   If you are using call_rcu(), synchronize_rcu(), or one of the
   similar primitives, then you absolutely need rcu_read_lock() and
   rcu_read_unlock(), or one of the similar pairs of primitives.
  
  Right. I don't use any of these though.
  
   If you -don't- use rcu_read_lock(), then you are pretty much restricted
   to adding data, but never removing it.
   
   Make sense?  ;-)
  
  Since I only access data from a workqueue, I replaced synchronize_rcu
  with workqueue flush. That's why I don't need rcu_read_lock.
 
 Well, you -do- need -something- that takes on the role of rcu_read_lock(),
 and in your case you in fact actually do.  Your equivalent of
 rcu_read_lock() is the beginning of execution of a workqueue item, and
 the equivalent of rcu_read_unlock() is the end of execution of that same
 workqueue item.  Implicit, but no less real.

Well put. I'll add this to comments in my code.

 If a couple more uses like this show up, I might need to add this to
 Documentation/RCU.  ;-)
 
   Thanx, Paul


Re: [PATCHv2 2/2] vhost_net: a kernel-level virtio server

2009-08-12 Thread Paul E. McKenney
On Wed, Aug 12, 2009 at 06:51:54PM +0300, Michael S. Tsirkin wrote:
 On Wed, Aug 12, 2009 at 08:26:39AM -0700, Paul E. McKenney wrote:
  On Wed, Aug 12, 2009 at 05:15:59PM +0300, Michael S. Tsirkin wrote:
   On Wed, Aug 12, 2009 at 07:11:07AM -0700, Paul E. McKenney wrote:
On Wed, Aug 12, 2009 at 04:25:40PM +0300, Michael S. Tsirkin wrote:
 On Wed, Aug 12, 2009 at 09:01:35AM -0400, Gregory Haskins wrote:
  I think I understand what your comment above meant:  You don't need 
  to
  do synchronize_rcu() because you can flush the workqueue instead to
  ensure that all readers have completed.
 
 Yes.
 
   But if thats true, to me, the
  rcu_dereference itself is gratuitous,
 
 Here's a thesis on what rcu_dereference does (besides documentation):
 
 reader does this
 
   A: sock = n->sock
   B: use *sock
 
 Say writer does this:
 
   C: newsock = allocate socket
   D: initialize(newsock)
   E: n->sock = newsock
   F: flush
 
 
 On Alpha, reads could be reordered.  So, on smp, command A could get
 data from point F, and command B - from point D (uninitialized, from
 cache).  IOW, you get fresh pointer but stale data.
 So we need to stick a barrier in there.
 
  and that pointer is *not* actually
  RCU protected (nor does it need to be).
 
 Heh, if readers are lockless and writer does init/update/sync,
 this to me spells rcu.

If you are using call_rcu(), synchronize_rcu(), or one of the
similar primitives, then you absolutely need rcu_read_lock() and
rcu_read_unlock(), or one of the similar pairs of primitives.
   
   Right. I don't use any of these though.
   
If you -don't- use rcu_read_lock(), then you are pretty much restricted
to adding data, but never removing it.

Make sense?  ;-)
   
   Since I only access data from a workqueue, I replaced synchronize_rcu
   with workqueue flush. That's why I don't need rcu_read_lock.
  
  Well, you -do- need -something- that takes on the role of rcu_read_lock(),
  and in your case you in fact actually do.  Your equivalent of
  rcu_read_lock() is the beginning of execution of a workqueue item, and
  the equivalent of rcu_read_unlock() is the end of execution of that same
  workqueue item.  Implicit, but no less real.
 
 Well put. I'll add this to comments in my code.

Very good, thank you!!!

  If a couple more uses like this show up, I might need to add this to
  Documentation/RCU.  ;-)

And I idly wonder if this approach could replace SRCU.  Probably not
for protecting the CPU-hotplug notifier chains, but worth some thought.

Thanx, Paul


Re: [PATCHv2 0/2] vhost: a kernel-level virtio server

2009-08-12 Thread Gregory Haskins
Michael S. Tsirkin wrote:
 On Wed, Aug 12, 2009 at 09:51:45AM -0400, Gregory Haskins wrote:
 Arnd Bergmann wrote:
 On Wednesday 12 August 2009, Michael S. Tsirkin wrote:
 If I understand it correctly, you can at least connect a veth pair
 to a bridge, right? Something like

veth0 - veth1 - vhost - guest 1 
 eth0 - br0-|
veth2 - veth3 - vhost - guest 2

 Heh, you don't need a bridge in this picture:

 guest 1 - vhost - veth0 - veth1 - vhost guest 2
 Sure, but the setup I described is the one that I would expect
 to see in practice because it gives you external connectivity.

 Measuring two guests communicating over a veth pair is
 interesting for finding the bottlenecks, but of little
 practical relevance.

 Arnd 
 Yeah, this would be the config I would be interested in.
 
 Hmm, this wouldn't be the config to use for the benchmark though: there
 are just too many variables.  If you want both guest to guest and guest
 to host, create 2 nics in the guest.
 
 Here's one way to do this:
 
   -net nic,model=virtio,vlan=0 -net user,vlan=0
   -net nic,vlan=1,model=virtio,vhost=veth0
   -redir tcp:8022::22
 
   -net nic,model=virtio,vlan=0 -net user,vlan=0
-net nic,vlan=1,model=virtio,vhost=veth1
   -redir tcp:8023::22
 
 In guests, for simplicity, configure eth1 and eth0
 to use separate subnets.

I can try to do a few variations, but what I am interested in is
performance in a real-world L2 configuration.  This would generally mean
all hosts (virtual or physical) in the same L2 domain.

If I get a chance, though, I will try to also wire them up in isolation
as another data point.

Regards,
-Greg





Re: [PATCHv2 0/2] vhost: a kernel-level virtio server

2009-08-12 Thread Michael S. Tsirkin
On Wed, Aug 12, 2009 at 12:13:43PM -0400, Gregory Haskins wrote:
 Michael S. Tsirkin wrote:
  On Wed, Aug 12, 2009 at 09:51:45AM -0400, Gregory Haskins wrote:
  Arnd Bergmann wrote:
  On Wednesday 12 August 2009, Michael S. Tsirkin wrote:
  If I understand it correctly, you can at least connect a veth pair
  to a bridge, right? Something like
 
 veth0 - veth1 - vhost - guest 1 
  eth0 - br0-|
 veth2 - veth3 - vhost - guest 2
 
  Heh, you don't need a bridge in this picture:
 
  guest 1 - vhost - veth0 - veth1 - vhost guest 2
  Sure, but the setup I described is the one that I would expect
  to see in practice because it gives you external connectivity.
 
  Measuring two guests communicating over a veth pair is
  interesting for finding the bottlenecks, but of little
  practical relevance.
 
Arnd 
  Yeah, this would be the config I would be interested in.
  
  Hmm, this wouldn't be the config to use for the benchmark though: there
  are just too many variables.  If you want both guest to guest and guest
  to host, create 2 nics in the guest.
  
  Here's one way to do this:
  
  -net nic,model=virtio,vlan=0 -net user,vlan=0
  -net nic,vlan=1,model=virtio,vhost=veth0
  -redir tcp:8022::22
  
  -net nic,model=virtio,vlan=0 -net user,vlan=0
   -net nic,vlan=1,model=virtio,vhost=veth1
  -redir tcp:8023::22
  
  In guests, for simplicity, configure eth1 and eth0
  to use separate subnets.
 
 I can try to do a few variations, but what I am interested is in
 performance in a real-world L2 configuration.  This would generally mean
  all hosts (virtual or physical) in the same L2 domain.
 
 If I get a chance, though, I will try to also wire them up in isolation
 as another data point.
 
 Regards,
 -Greg
 
 

Or patch macvlan to support guest to guest:
http://markmail.org/message/sjy74g57qsvdo2wh
That patch needs to be updated to support guest to guest multicast,
but it seems functional enough for your purposes.

-- 
MST


Re: [PATCH 2/2] vhost_net: a kernel-level virtio server

2009-08-12 Thread Arnd Bergmann
On Monday 10 August 2009, Michael S. Tsirkin wrote:

 +struct workqueue_struct *vhost_workqueue;

[nitpicking] This could be static. 

 +/* The virtqueue structure describes a queue attached to a device. */
 +struct vhost_virtqueue {
 + struct vhost_dev *dev;
 +
 + /* The actual ring of buffers. */
 + struct mutex mutex;
 + unsigned int num;
 + struct vring_desc __user *desc;
 + struct vring_avail __user *avail;
 + struct vring_used __user *used;
 + struct file *kick;
 + struct file *call;
 + struct file *error;
 + struct eventfd_ctx *call_ctx;
 + struct eventfd_ctx *error_ctx;
 +
 + struct vhost_poll poll;
 +
 + /* The routine to call when the Guest pings us, or timeout. */
 + work_func_t handle_kick;
 +
 + /* Last available index we saw. */
 + u16 last_avail_idx;
 +
 + /* Last index we used. */
 + u16 last_used_idx;
 +
 + /* Outstanding buffers */
 + unsigned int inflight;
 +
 + /* Is this blocked? */
 + bool blocked;
 +
 + struct iovec iov[VHOST_NET_MAX_SG];
 +
 +} cacheline_aligned;

We discussed this before, and I still think this could be directly derived
from struct virtqueue, in the same way that vring_virtqueue is derived from
struct virtqueue. That would make it possible for simple device drivers
to use the same driver in both host and guest, similar to how Ira Snyder
used virtqueues to make virtio_net run between two hosts running the
same code [1].

Ideally, I guess you should be able to even make virtio_net work in the
host if you do that, but that could bring other complexities.

Arnd 

[1] http://lkml.org/lkml/2009/2/23/353
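
A rough sketch of the layering suggested here (not code from either patch
set), mirroring how drivers/virtio/virtio_ring.c wraps the generic struct
virtqueue and recovers its private state with container_of():

#include <linux/virtio.h>
#include <linux/kernel.h>

struct vhost_virtqueue_sketch {
        struct virtqueue vq;    /* generic, driver-visible part */
        /* ... backend-specific state: user pointers, eventfds, ... */
};

static inline struct vhost_virtqueue_sketch *to_vhost_vq(struct virtqueue *vq)
{
        return container_of(vq, struct vhost_virtqueue_sketch, vq);
}

A driver written against struct virtqueue could then, in principle, be bound
to either the vring implementation or a vhost-style backend.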


Re: [PATCH 2/2] vhost_net: a kernel-level virtio server

2009-08-12 Thread Michael S. Tsirkin
On Wed, Aug 12, 2009 at 07:03:22PM +0200, Arnd Bergmann wrote:
 On Monday 10 August 2009, Michael S. Tsirkin wrote:
 
  +struct workqueue_struct *vhost_workqueue;
 
 [nitpicking] This could be static. 

Good catch. Thanks!

  +/* The virtqueue structure describes a queue attached to a device. */
  +struct vhost_virtqueue {
  +   struct vhost_dev *dev;
  +
  +   /* The actual ring of buffers. */
  +   struct mutex mutex;
  +   unsigned int num;
  +   struct vring_desc __user *desc;
  +   struct vring_avail __user *avail;
  +   struct vring_used __user *used;
  +   struct file *kick;
  +   struct file *call;
  +   struct file *error;
  +   struct eventfd_ctx *call_ctx;
  +   struct eventfd_ctx *error_ctx;
  +
  +   struct vhost_poll poll;
  +
  +   /* The routine to call when the Guest pings us, or timeout. */
  +   work_func_t handle_kick;
  +
  +   /* Last available index we saw. */
  +   u16 last_avail_idx;
  +
  +   /* Last index we used. */
  +   u16 last_used_idx;
  +
  +   /* Outstanding buffers */
  +   unsigned int inflight;
  +
  +   /* Is this blocked? */
  +   bool blocked;
  +
  +   struct iovec iov[VHOST_NET_MAX_SG];
  +
  +} cacheline_aligned;
 
 We discussed this before, and I still think this could be directly derived
 from struct virtqueue, in the same way that vring_virtqueue is derived from
 struct virtqueue.

I prefer keeping it simple. Much of abstraction in virtio is due to the
fact that it needs to work on top of different hardware emulations:
lguest,kvm, possibly others in the future.  vhost is always working on
real hardware, using eventfd as the interface, so it does not need that.

 That would make it possible for simple device drivers
 to use the same driver in both host and guest,

I don't think so. For example, there's a callback field that gets
invoked in guest when buffers are consumed.  It could be overloaded to
mean buffers are available in host but you never handle both
situations in the same way, so what's the point?

 similar to how Ira Snyder used virtqueues to make virtio_net run
 between two hosts running the same code [1].
 Ideally, I guess you should be able to even make virtio_net work in the
 host if you do that, but that could bring other complexities.
 
   Arnd 
 
 [1] http://lkml.org/lkml/2009/2/23/353

As I pointed out earlier, most code in virtio net is asymmetrical: guest
provides buffers, host consumes them.  Possibly, one could use virtio
rings in a symmetrical way, but support of existing guest virtio net
means there's almost no shared code.

-- 
MST


Re: [PATCH 0/3] qemu-kvm: vhost net support

2009-08-12 Thread Michael S. Tsirkin
On Mon, Aug 10, 2009 at 03:51:12PM -0500, Anthony Liguori wrote:
 Michael S. Tsirkin wrote:
 This adds support for vhost-net virtio kernel backend.

 This is RFC, but works without issues for me.

 Still needs to be split up, tested and benchmarked properly,
 but posting it here in case people want to test drive
 the kernel bits I posted.
   

 Any rough idea on performance?  Better or worse than userspace?

 Regards,

 Anthony Liguori

Well, I definitely see some gain in latency.
Here's a simple test over a 1G ethernet link (host to guest):

Native:
[r...@qus18 ~]# netperf -H 11.0.0.1 -t udp_rr
UDP REQUEST/RESPONSE TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 11.0.0.1 
(11.0.0.1) port 0 AF_INET
Local /Remote
Socket Size   Request  Resp.   Elapsed  Trans.
Send   Recv   Size     Size    Time     Rate
bytes  Bytes  bytes    bytes   secs.    per sec

126976 126976 1        1       10.00    10393.23
124928 124928


vhost virtio:
[r...@qus18 ~]# netperf -H 11.0.0.3 -t udp_rr
UDP REQUEST/RESPONSE TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 11.0.0.3 
(11.0.0.3) port 0 AF_INET
Local /Remote
Socket Size   Request  Resp.   Elapsed  Trans.
Send   Recv   Size     Size    Time     Rate
bytes  Bytes  bytes    bytes   secs.    per sec

126976 126976 1        1       10.00    8169.58
124928 124928

Userspace virtio:
[r...@qus18 ~]# netperf -H 11.0.0.3 -t udp_rr
UDP REQUEST/RESPONSE TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 11.0.0.3 
(11.0.0.3) port 0 AF_INET
Local /Remote
Socket Size   Request  Resp.   Elapsed  Trans.
Send   Recv   Size     Size    Time     Rate
bytes  Bytes  bytes    bytes   secs.    per sec

126976 126976 1        1       10.00    2029.49
124928 124928


Part of it might be that tx mitigation does not come into play with vhost. I
need to disable it in qemu and see.
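
For reference, inverting the udp_rr transaction rates above gives the average
round-trip time per transaction: roughly 96 usec native, 122 usec for vhost
virtio, and 493 usec for userspace virtio.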

-- 
MST


Re: [PATCH 2/2] vhost_net: a kernel-level virtio server

2009-08-12 Thread Michael S. Tsirkin
On Wed, Aug 12, 2009 at 10:19:22AM -0700, Ira W. Snyder wrote:
 On Wed, Aug 12, 2009 at 07:03:22PM +0200, Arnd Bergmann wrote:
  On Monday 10 August 2009, Michael S. Tsirkin wrote:
  
   +struct workqueue_struct *vhost_workqueue;
  
  [nitpicking] This could be static. 
  
   +/* The virtqueue structure describes a queue attached to a device. */
   +struct vhost_virtqueue {
   + struct vhost_dev *dev;
   +
   + /* The actual ring of buffers. */
   + struct mutex mutex;
   + unsigned int num;
   + struct vring_desc __user *desc;
   + struct vring_avail __user *avail;
   + struct vring_used __user *used;
   + struct file *kick;
   + struct file *call;
   + struct file *error;
   + struct eventfd_ctx *call_ctx;
   + struct eventfd_ctx *error_ctx;
   +
   + struct vhost_poll poll;
   +
   + /* The routine to call when the Guest pings us, or timeout. */
   + work_func_t handle_kick;
   +
   + /* Last available index we saw. */
   + u16 last_avail_idx;
   +
   + /* Last index we used. */
   + u16 last_used_idx;
   +
   + /* Outstanding buffers */
   + unsigned int inflight;
   +
   + /* Is this blocked? */
   + bool blocked;
   +
   + struct iovec iov[VHOST_NET_MAX_SG];
   +
   +} cacheline_aligned;
  
  We discussed this before, and I still think this could be directly derived
  from struct virtqueue, in the same way that vring_virtqueue is derived from
  struct virtqueue. That would make it possible for simple device drivers
  to use the same driver in both host and guest, similar to how Ira Snyder
  used virtqueues to make virtio_net run between two hosts running the
  same code [1].
  
  Ideally, I guess you should be able to even make virtio_net work in the
  host if you do that, but that could bring other complexities.
 
 I have no comments about the vhost code itself, I haven't reviewed it.
 
 It might be interesting to try using a virtio-net in the host kernel to
 communicate with the virtio-net running in the guest kernel. The lack of
 a management interface is the biggest problem you will face (setting MAC
 addresses, negotiating features, etc. doesn't work intuitively).

That was one of the reasons I decided to move most of the code out to
userspace. My kernel driver only handles the datapath;
it's much smaller than virtio net.

 Getting
 the network interfaces talking is relatively easy.
 
 Ira

Tried this, but:
- guest memory isn't pinned, so copy_to_user has to be used to access it,
  and the resulting errors need to be handled in a sane way
- used/available roles are reversed
- kick/interrupt roles are reversed

So most of the code then looks like

if (host) {
} else {
}
return


The only common part is walking the descriptor list,
but that's like 10 lines of code.

At which point it's better to keep host/guest code separate, IMO.
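
To make that concrete, here is a rough sketch (hypothetical types and
helper names, not code from the patch) of what a shared access helper
would turn into:

/*
 * Illustration only: all names here are made up.  A "shared" host/guest
 * helper has to branch on the role at every step, because the host goes
 * through copy_from_user() on unpinned guest memory (and must handle
 * faults), while the guest just dereferences its own ring.
 */
struct ring_buf {
	bool		host;	/* are we the host side? */
	void __user	*uaddr;	/* host view: guest buffer in user memory */
	void		*kaddr;	/* guest view: its own buffer */
};

static int ring_buf_read(struct ring_buf *b, void *dst, size_t len)
{
	if (b->host) {
		if (copy_from_user(dst, b->uaddr, len))
			return -EFAULT;	/* guest page may be swapped out */
	} else {
		memcpy(dst, b->kaddr, len);
	}
	return 0;
}

The same branch shows up again for publishing used entries and for
signalling (kick vs. interrupt), so the shared part stays tiny.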

-- 
MST


Re: [PATCH 2/2] vhost_net: a kernel-level virtio server

2009-08-12 Thread Arnd Bergmann
On Wednesday 12 August 2009, Michael S. Tsirkin wrote:
 On Wed, Aug 12, 2009 at 07:03:22PM +0200, Arnd Bergmann wrote:
  We discussed this before, and I still think this could be directly derived
  from struct virtqueue, in the same way that vring_virtqueue is derived from
  struct virtqueue.
 
 I prefer keeping it simple. Much of the abstraction in virtio is due to
 the fact that it needs to work on top of different hardware emulations:
 lguest, kvm, possibly others in the future.  vhost is always working on
 real hardware, using eventfd as the interface, so it does not need that.

Well, that was my point: virtio can already work on a number of abstractions,
so adding one more for vhost should not be too hard.

  That would make it possible for simple device drivers
  to use the same driver in both host and guest,
 
 I don't think so. For example, there's a callback field that gets
 invoked in the guest when buffers are consumed.  It could be overloaded to
 mean buffers are available in the host, but you never handle both
 situations in the same way, so what's the point?

...
 
 As I pointed out earlier, most code in virtio net is asymmetrical: guest
 provides buffers, host consumes them.  Possibly, one could use virtio
 rings in a symmetrical way, but support of existing guest virtio net
 means there's almost no shared code.

The trick is to swap the virtqueues instead. virtio-net is actually
mostly symmetric in just the same way that the physical wires on a
twisted pair ethernet are symmetric (I like how that analogy fits).

virtio_net kicks the transmit virtqueue when it has data and
it kicks the receive queue when it has empty buffers to fill,
and it has callbacks when the two are done. You can do the
same in both the guest and the host, but then the guest's input
virtqueue is the host's output virtqueue and vice versa.

Once a virtqueue has been kicked from both sides, the vhost_virtqueue
implementation between the two only needs to do a copy_from_user
or copy_to_user (possibly from a thread if it is in atomic context)
and then call the two callback functions. This is basically the
same thing you do already, except that you use slightly different
names for the components.
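
As a sketch of that flow (the helper names below are invented, this is
not code from the patch):

/*
 * Sketch of the "swapped virtqueues" idea, with invented helper names.
 * The guest's transmit queue is the host's receive side and the guest's
 * receive queue is the host's transmit side; once both have been kicked,
 * the host copies the data across and fires both completion callbacks.
 */
static void relay_one(struct vhost_virtqueue *guest_tx,	/* host receives here */
		      struct vhost_virtqueue *guest_rx)	/* host transmits here */
{
	char buf[2048];
	size_t len;

	/* a filled buffer the guest made available on its transmit ring */
	len = fetch_avail_buf(guest_tx, buf, sizeof(buf));	/* copy_from_user inside */

	/* an empty buffer the guest posted on its receive ring */
	fill_used_buf(guest_rx, buf, len);			/* copy_to_user inside */

	/* run the two "done" callbacks: completes the guest's TX, delivers its RX */
	signal_used(guest_tx);
	signal_used(guest_rx);
}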

Arnd 


Re: [Qemu-devel] Re: virtio-serial: An interface for host-guest communication

2009-08-12 Thread Paul Brook
 However, as I've mentioned repeatedly, the reason I won't merge
 virtio-serial is that it duplicates functionality with virtio-console.
 If the two are converged, I'm happy to merge it.  I'm not opposed to
 having more functionality.

I strongly agree.

Paul


Re: [Qemu-devel] Re: [PATCH] qemu/virtio: move features to an inline function

2009-08-12 Thread Anthony Liguori
Michael S. Tsirkin wrote:
  Why bother switching to userspace for migration?  Can't you just have
  get/set ioctls for the state?

 I have these. But live migration requires dirty page logging.
 I do not want to implement it in the kernel.

Is it really that difficult?  I think it would be better to just do that.

I wonder though if mmu notifiers can be used to make it transparent...

Regards,

Anthony Liguori



Re: [PATCH 2/2] vhost_net: a kernel-level virtio server

2009-08-12 Thread Anthony Liguori
Michael S. Tsirkin wrote:

 We discussed this before, and I still think this could be directly derived
 from struct virtqueue, in the same way that vring_virtqueue is derived from
 struct virtqueue.
 

 I prefer keeping it simple. Much of the abstraction in virtio is due to
 the fact that it needs to work on top of different hardware emulations:
 lguest, kvm, possibly others in the future.  vhost is always working on
 real hardware, using eventfd as the interface, so it does not need that.
   

Actually, vhost may not always be limited to real hardware.

We may one day use vhost as the basis of a driver domain.  There's quite
a lot of interest in this for networking.

At any rate, I'd like to see performance results before we consider 
trying to reuse virtio code.

Regards,

Anthony Liguori


Re: [PATCH 2/2] vhost_net: a kernel-level virtio server

2009-08-12 Thread Anthony Liguori
Arnd Bergmann wrote:
 As I pointed out earlier, most code in virtio net is asymmetrical: guest
 provides buffers, host consumes them.  Possibly, one could use virtio
 rings in a symmetrical way, but support of existing guest virtio net
 means there's almost no shared code.
 

 The trick is to swap the virtqueues instead. virtio-net is actually
 mostly symmetric in just the same way that the physical wires on a
 twisted pair ethernet are symmetric (I like how that analogy fits).
   

It's already been done between two guests.  See 
http://article.gmane.org/gmane.linux.kernel.virtualization/5423

Regards,

Anthony Liguori


Re: [PATCH 2/2] vhost_net: a kernel-level virtio server

2009-08-12 Thread Paul E. McKenney
On Mon, Aug 10, 2009 at 09:53:40PM +0300, Michael S. Tsirkin wrote:
 What it is: vhost net is a character device that can be used to reduce
 the number of system calls involved in virtio networking.
 Existing virtio net code is used in the guest without modification.
 
 There's similarity with vringfd, with some differences and reduced scope
 - uses eventfd for signalling
 - structures can be moved around in memory at any time (good for migration)
 - support memory table and not just an offset (needed for kvm)
 
 common virtio related code has been put in a separate file vhost.c and
 can be made into a separate module if/when more backends appear.  I used
 Rusty's lguest.c as the source for developing this part: this supplied
 me with witty comments I wouldn't be able to write myself.
 
 What it is not: vhost net is not a bus, and not a generic new system
 call. No assumptions are made on how guest performs hypercalls.
 Userspace hypervisors are supported as well as kvm.
 
 How it works: Basically, we connect virtio frontend (configured by
 userspace) to a backend. The backend could be a network device, or a
 tun-like device. In this version I only support raw socket as a backend,
 which can be bound to e.g. SR IOV, or to macvlan device.  Backend is
 also configured by userspace, including vlan/mac etc.
 
 Status:
 This works for me, and I haven't seen any crashes.
 I have not run any benchmarks yet, compared to userspace, I expect to
 see improved latency (as I save up to 4 system calls per packet) but not
 yet bandwidth/CPU (as TSO and interrupt mitigation are not yet supported).
 
 Features that I plan to look at in the future:
 - TSO
 - interrupt mitigation
 - zero copy

Much better -- a couple of documentation nits below.

Thanx, Paul

 Signed-off-by: Michael S. Tsirkin m...@redhat.com
 
 ---
  MAINTAINERS|   10 +
  arch/x86/kvm/Kconfig   |1 +
  drivers/Makefile   |1 +
  drivers/block/virtio_blk.c |3 +
  drivers/vhost/Kconfig  |   11 +
  drivers/vhost/Makefile |2 +
  drivers/vhost/net.c|  462 ++
  drivers/vhost/vhost.c  |  663 
 
  drivers/vhost/vhost.h  |  108 +++
  include/linux/Kbuild   |1 +
  include/linux/miscdevice.h |1 +
  include/linux/vhost.h  |  100 +++
  12 files changed, 1363 insertions(+), 0 deletions(-)
  create mode 100644 drivers/vhost/Kconfig
  create mode 100644 drivers/vhost/Makefile
  create mode 100644 drivers/vhost/net.c
  create mode 100644 drivers/vhost/vhost.c
  create mode 100644 drivers/vhost/vhost.h
  create mode 100644 include/linux/vhost.h
 
 diff --git a/MAINTAINERS b/MAINTAINERS
 index ebc2691..eb0c1da 100644
 --- a/MAINTAINERS
 +++ b/MAINTAINERS
 @@ -6312,6 +6312,16 @@ S: Maintained
  F:   Documentation/filesystems/vfat.txt
  F:   fs/fat/
 
 +VIRTIO HOST (VHOST)
 +P:   Michael S. Tsirkin
 +M:   m...@redhat.com
 +L:   k...@vger.kernel.org
 +L:   virtualizat...@lists.osdl.org
 +L:   net...@vger.kernel.org
 +S:   Maintained
 +F:   drivers/vhost/
 +F:   include/linux/vhost.h
 +
  VIA RHINE NETWORK DRIVER
  P:   Roger Luethi
  M:   r...@hellgate.ch
 diff --git a/arch/x86/kvm/Kconfig b/arch/x86/kvm/Kconfig
 index b84e571..94f44d9 100644
 --- a/arch/x86/kvm/Kconfig
 +++ b/arch/x86/kvm/Kconfig
 @@ -64,6 +64,7 @@ config KVM_AMD
 
  # OK, it's a little counter-intuitive to do this, but it puts it neatly under
  # the virtualization menu.
 +source drivers/vhost/Kconfig
  source drivers/lguest/Kconfig
  source drivers/virtio/Kconfig
 
 diff --git a/drivers/Makefile b/drivers/Makefile
 index bc4205d..1551ae1 100644
 --- a/drivers/Makefile
 +++ b/drivers/Makefile
 @@ -105,6 +105,7 @@ obj-$(CONFIG_HID) += hid/
  obj-$(CONFIG_PPC_PS3)+= ps3/
  obj-$(CONFIG_OF) += of/
  obj-$(CONFIG_SSB)+= ssb/
 +obj-$(CONFIG_VHOST_NET)  += vhost/
  obj-$(CONFIG_VIRTIO) += virtio/
  obj-$(CONFIG_VLYNQ)  += vlynq/
  obj-$(CONFIG_STAGING)+= staging/
 diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
 index aa1a3d5..42e61b0 100644
 --- a/drivers/block/virtio_blk.c
 +++ b/drivers/block/virtio_blk.c
 @@ -293,6 +293,7 @@ static int __devinit virtblk_probe(struct virtio_device 
 *vdev)
   err = PTR_ERR(vblk-vq);
   goto out_free_vblk;
   }
 + printk(KERN_ERR vblk-vq = %p\n, vblk-vq);
 
   vblk-pool = mempool_create_kmalloc_pool(1,sizeof(struct virtblk_req));
   if (!vblk-pool) {
 @@ -383,6 +384,8 @@ static int __devinit virtblk_probe(struct virtio_device 
 *vdev)
   if (!err)
   blk_queue_logical_block_size(vblk-disk-queue, blk_size);
 
 + printk(KERN_ERR virtio_config_val returned %d\n, err);
 +
   add_disk(vblk-disk);
   return 0;
 
 diff --git a/drivers/vhost/Kconfig b/drivers/vhost/Kconfig
 new file mode 

Re: [PATCH 0/3] qemu-kvm: vhost net support

2009-08-12 Thread Gregory Haskins
Michael S. Tsirkin wrote:
 This adds support for vhost-net virtio kernel backend.
 
 This is RFC, but works without issues for me.
 
 Still needs to be split up, tested and benchmarked properly,
 but posting it here in case people want to test drive
 the kernel bits I posted.

This has a large number of rejects against qemu-kvm.git/master.  What
tree does this apply to?

-Greg


