Re: [evb] RE: [PATCH][RFC] net/bridge: add basic VEPA support

2009-08-27 Thread Or Gerlitz
Stephen Hemminger wrote:
 Or Gerlitz ogerl...@voltaire.com wrote:
 Looking in macvlan_set_multicast_list() it acts in a similar manner to 
 macvlan_set_mac_address() in the sense that it calls dev_mc_sync(). I assume 
 what's left is to add macvlan_hash_xxx multicast logic to map/unmap 
 multicast groups to what macvlan devices want to receive them and this way 
 the flooding can be removed, correct?
 The device can just flood all multicast packets, since the filtering is done 
 on the receive path anyway.
For each multicast packet, macvlan_broadcast() is invoked and calls
skb_clone()/netif_rx() for each device. A smarter scheme that takes into
account (hashes) the multicast list of the different macvlan devices
would save those skb_clone() calls, wouldn't it?
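A minimal sketch of that idea (illustrative only, not a posted patch): macvlan_mc_wants()
and macvlan_broadcast_filtered() are hypothetical names, the flat device array stands in
for macvlan's real per-port hash, and the list walked is the per-device one that
dev_mc_sync() keeps up to date on 2.6.31-era kernels:

/* would live in drivers/net/macvlan.c */
#include <linux/etherdevice.h>
#include <linux/netdevice.h>
#include <linux/skbuff.h>

/* Walk the macvlan device's own multicast list (hypothetical helper). */
static bool macvlan_mc_wants(const struct net_device *dev, const u8 *addr)
{
	const struct dev_addr_list *mc;

	for (mc = dev->mc_list; mc; mc = mc->next)
		if (!compare_ether_addr(mc->da_addr, addr))
			return true;
	return false;
}

/* Clone and deliver only to devices that subscribed to the group. */
static void macvlan_broadcast_filtered(struct sk_buff *skb,
				       struct net_device **vlans, int nvlans)
{
	const struct ethhdr *eth = eth_hdr(skb);
	int i;

	for (i = 0; i < nvlans; i++) {
		struct sk_buff *nskb;

		if (is_multicast_ether_addr(eth->h_dest) &&
		    !macvlan_mc_wants(vlans[i], eth->h_dest))
			continue;		/* saves the skb_clone() */

		nskb = skb_clone(skb, GFP_ATOMIC);
		if (!nskb)
			continue;
		nskb->dev = vlans[i];
		nskb->pkt_type = PACKET_MULTICAST;
		netif_rx(nskb);
	}
}

The existing unicast hash in macvlan could presumably be extended along the same
lines, which is what the macvlan_hash_xxx suggestion above refers to.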

Or.



Re: [evb] RE: [PATCH][RFC] net/bridge: add basic VEPA support

2009-08-10 Thread Stephen Hemminger
On Sun, 09 Aug 2009 14:19:08 +0300
Or Gerlitz ogerl...@voltaire.com wrote:

 Stephen Hemminger wrote:
  I have a patch that forwards all multicast packets, and another that does 
 proper forwarding. It should have worked that way in the original macvlan; the 
 current behavior is really a bug.

 Looking in macvlan_set_multicast_list() it acts in a similar manner to
 macvlan_set_mac_address() in the sense that it calls dev_mc_sync(). I
 assume what's left is to add macvlan_hash_xxx multicast logic to
 map/unmap multicast groups to what macvlan devices want to receive them
 and this way the flooding can be removed, correct?

The device can just flood all multicast packets, since the filtering
is done on the receive path anyway.


Re: [evb] Re: [PATCH][RFC] net/bridge: add basic VEPA support

2009-08-10 Thread Arnd Bergmann
On Friday 07 August 2009, Paul Congdon (UC Davis) wrote:
 
 I don't think your scheme works too well because broadcast packets coming
 from other interfaces on br0 would get replicated and sent across the wire
 to ethB multiple times.

Right, that won't work. So the bridge patch for the hairpin turn
is still the best solution. Btw, how will that interact with
the bridge-netfilter (ebtables) setup? Can you apply any filters
that work on current bridges also between two VEPA ports while
doing the hairpin turn?

Arnd 


Re: [evb] RE: [PATCH][RFC] net/bridge: add basic VEPA support

2009-08-10 Thread Arnd Bergmann
On Monday 10 August 2009, Stephen Hemminger wrote:
 On Sun, 09 Aug 2009 14:19:08 +0300, Or Gerlitz ogerl...@voltaire.com wrote:
  Looking in macvlan_set_multicast_list() it acts in a similar manner to
  macvlan_set_mac_address() in the sense that it calls dev_mc_sync(). I
  assume what's left is to add macvlan_hash_xxx multicast logic to
  map/unmap multicast groups to what macvlan devices want to receive them
  and this way the flooding can be removed, correct?
 
 The device can just flood all multicast packets, since the filtering
 is done on the receive path anyway.

But we'd still have to copy the frames to user space (for both
macvtap and raw packet sockets) and exit from the guest to inject
it into its stack, right?

I guess for multicast heavy workloads, we could save a lot of cycles
by throwing the frames away as early as possible. How common are those
setups in virtual servers though?

Arnd 


Re: [evb] RE: [PATCH][RFC] net/bridge: add basic VEPA support

2009-08-10 Thread Yaron Haviv
Paul,

I also think that the bridge may not be the right place for VEPA, but rather a 
simpler sw/hw mux, although the VEPA support may reside in multiple places 
(i.e. also in the bridge).

As Arnd pointed out, Or already added an extension to qemu that allows direct 
guest virtual NIC mapping to an interface device (vs. using tap). This was done 
specifically to address VEPA, and results in much faster performance and lower 
CPU overhead (Or and some others are planning additional meaningful performance 
optimizations).

The interface multiplexing can be achieved using the macvlan driver or using an 
SR-IOV capable NIC (the preferred option); macvlan may need to be extended to 
support VEPA multicast handling, which looks like a rather simple task.

It may be counter-intuitive for some, but we expect the (completed) qemu VEPA 
mode + SR-IOV + certain switches with hairpin (VEPA) mode to perform faster 
than using bridge+tap, even for connecting two VMs on the same host.


Yaron 

Sent from BlackBerry



From: e...@yahoogroups.com 
To: 'Stephen Hemminger' ; 'Fischer, Anna' 
Cc: bri...@lists.linux-foundation.org ; linux-ker...@vger.kernel.org ; 
net...@vger.kernel.org ; virtualization@lists.linux-foundation.org ; 
e...@yahoogroups.com ; da...@davemloft.net ; ka...@trash.net ; 
adobri...@gmail.com ; 'Arnd Bergmann' 
Sent: Fri Aug 07 21:58:00 2009
Subject: [evb] RE: [PATCH][RFC] net/bridge: add basic VEPA support 


  

 
 After reading more about this, I am not convinced this should be part 
 of the bridge code. The bridge code really consists of two parts:
 forwarding table and optional spanning tree. Well, the VEPA code short-circuits 
 both of these; I can't imagine it working with STP turned on. The only part of 
 the bridge code that really gets used by this is the receive packet hooks and 
 the crufty old API.
 
 So instead of adding more stuff to the existing bridge code, why not have 
 a new driver for just VEPA? You could do it with a simple version of the 
 macvlan type driver.

Stephen,

Thanks for your comments and questions. We do believe the bridge code is
the right place for this, so I'd like to elaborate on that a bit more to
help persuade you. Sorry for the long-winded response, but here are some
thoughts:

- First and foremost, VEPA is going to be a standard addition to the IEEE
802.1Q specification. The working group agreed at the last meeting to
pursue a project to augment the bridge standard with hairpin mode (aka
reflective relay) and a remote filtering service (VEPA). See for details:
http://www.ieee802.org/1/files/public/docs2009/new-evb-congdon-evbPar5C-0709-v01.pdf

- The VEPA functionality was really a pretty small change to the code with
low risk and wouldn't seem to warrant an entire new driver or module.

- There are good use cases where VMs will want to have some of their
interfaces attached to bridges and others to bridges operating in VEPA mode.
In other words, we see simultaneous operation of the bridge code and VEPA
occurring, so having as much of the underlying code as common as possible
would seem to be beneficial. 

- By augmenting the bridge code with VEPA there is a great amount of re-use
achieved. It works wherever the bridge code works and doesn't need anything
special to support KVM, XEN, and all the hooks, etc...

- The hardware vendors building SR-IOV NICs with embedded switches will be
adding VEPA mode, so keeping the bridge module in sync would be
consistent with this trend and direction. It will be possible to extend the
hardware implementations by cascading a software bridge and/or VEPA, so
being in sync with the architecture would make this more consistent.

- The forwarding table is still needed and used on inbound traffic to
deliver frames to the correct virtual interfaces and to filter any reflected
frames. A new driver would have to basically implement an equivalent
forwarding table anyway. As I understand the current macvlan type driver,
it wouldn't filter multicast frames properly without such a table.

- It seems the hairpin mode would be needed in the bridge module whether
VEPA was added to the bridge module or a new driver. Having the associated
changes together in the same code could aid in understanding and deployment.

As I understand the macvlan code, it currently doesn't allow two VMs on the
same machine to communicate with one another. I could imagine a hairpin
mode on the adjacent bridge making this possible, but the macvlan code would
need to be updated to filter reflected frames so a source did not receive
its own packet. I could imagine this being done as well, but to also
support selective multicast usage, something similar to the bridge
forwarding table would be needed. I think putting VEPA into a new driver
would cause you to implement many things the bridge code already supports.
Given that we expect the bridge standard to ultimately include VEPA, 

RE: [evb] Re: [PATCH][RFC] net/bridge: add basic VEPA support

2009-08-10 Thread Fischer, Anna
 Subject: Re: [PATCH][RFC] net/bridge: add basic VEPA support
 
 On Friday 07 August 2009, Paul Congdon (UC Davis) wrote:
 
  I don't think your scheme works too well because broadcast packets coming
  from other interfaces on br0 would get replicated and sent across the wire
  to ethB multiple times.
 
 Right, that won't work. So the bridge patch for the hairpin turn
 is still the best solution.

Yes, I think that we should separate the discussion of hairpin
mode on the adjacent bridge from that of the VEPA filtering service
residing within the end-station. The hairpin feature really has to be
implemented in the bridging code.


 Btw, how will that interact with
 the bridge-netfilter (ebtables) setup? Can you apply any filters
 that work on current bridges also between two VEPA ports while
 doing the hairpin turn?

The hairpin mode is implemented on the adjacent bridge. The only 
difference for a hairpin mode port vs. a normal bridge port is that
it can pass frames back out to the same port it came from. All the
netfilter hooks are still in place.
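For reference, the hairpin change boils down to relaxing one check in the bridge
forwarding path; a sketch along those lines (the flag name follows the later
upstream code and may differ from the RFC patch):

/* net/bridge/br_forward.c (sketch): a hairpin port may send a frame
 * back out of the port it arrived on; everything else, including the
 * NF_BR_FORWARD netfilter hook run by __br_forward(), is unchanged. */
static inline int should_deliver(const struct net_bridge_port *p,
				 const struct sk_buff *skb)
{
	return (((p->flags & BR_HAIRPIN_MODE) || skb->dev != p->dev) &&
		p->state == BR_STATE_FORWARDING);
}

Since only this check changes, the forward-path netfilter hooks keep running for
hairpinned frames, which is why existing ebtables rules still apply.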

On the VEPA filtering service side, the only change we have implemented
in the bridging code is that in VEPA mode all frames are passed to the
uplink on TX. However, frames are still passed through the netfilter 
hooks before they go out on the wire. On the inbound path, there are
no changes to the way frames are processed (except the filtering for
the original source port), so netfilter hooks work in the same way
as for a normal bridge.
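In code terms, those two changes could look roughly like this (helper names such
as br_vepa_uplink() are hypothetical, not necessarily the identifiers used in the
RFC patch):

/* TX in VEPA mode: ignore the FDB lookup result and always hand the
 * frame to the uplink port; br_forward() still runs the NF_BR_FORWARD
 * netfilter hooks before the frame hits the wire. */
static void br_vepa_xmit(struct net_bridge *br, struct sk_buff *skb)
{
	struct net_bridge_port *uplink = br_vepa_uplink(br);	/* hypothetical */

	if (uplink)
		br_forward(uplink, skb);
	else
		kfree_skb(skb);
}

/* RX in VEPA mode: a frame whose source MAC belongs to one of our own
 * ports is a reflected copy from the hairpin turn and must not be
 * delivered back to that originating port. */
static bool br_vepa_reflected(struct net_bridge *br, struct sk_buff *skb,
			      struct net_bridge_port *dst)
{
	struct net_bridge_fdb_entry *src;

	src = __br_fdb_get(br, eth_hdr(skb)->h_source);
	return src && src->dst == dst;
}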

If a frame is reflected back because of a hairpin turn, then of course
the incoming port is the VEPA uplink port and not the port that
originally sent the frame. So if you are trying to enforce some
packet filtering on that inbound path, then you would have to do that
based on MAC addresses and not on bridge ports. But I would assume that
you would enforce the filtering already before you send out the frame
to the adjacent bridge. Apart from that, if you enable your bridge to
behave in VEPA mode, then you would typically do packet filtering etc
on the adjacent bridge and not use the netfilter hook. You can still use
both though, if you like.

Anna


Re: [evb] RE: [PATCH][RFC] net/bridge: add basic VEPA support

2009-08-10 Thread Benny Amorsen
Fischer, Anna anna.fisc...@hp.com writes:

 If you do have a SRIOV NIC that supports VEPA, then I would think that
 you do not have QEMU or macvtap in the setup any more though. Simply
 because in that case the VM can directly access the VF on the physical
 device. That would be ideal.

I'm just trying to understand how this all works, so I'm probably asking
a stupid question:

Would a SRIOV NIC with VEPA support show up as multiple devices? I.e.
would I get e.g. eth0-eth7 for a NIC with support for 8 virtual
interfaces? Would they have different MAC addresses?


/Benny



RE: [evb] Re: [PATCH][RFC] net/bridge: add basic VEPA support

2009-08-10 Thread Paul Congdon (UC Davis)
Arnd,

 

I don't think your scheme works too well because broadcast packets coming
from other interfaces on br0 would get replicated and sent across the wire
to ethB multiple times.

 

Paul

That way you should be able to do something
like:

Host A                                Host B

     /- nalvcam0 -\                    /- macvlan0 - 192.168.1.1
br0 -|            |- ethA === ethB ---|
     \- nalvcam1 -/                    \- macvlan1 - 192.168.1.2

Now assuming that macvlan0 and macvlan1 are in different
network namespaces or belong to different KVM guests, these
guests would be able to communicate with each other through
the bridge on host A, which can set the policy (using ebtables)
for this communication and get interface statistics on its
nalvcam interfaces. Also, instead of having the br0, Host A could
assign IP addresses to the two nalvcam interfaces that host
B has, and use IP forwarding between the guests of host B.



 


RE: [evb] RE: [PATCH][RFC] net/bridge: add basic VEPA support

2009-08-10 Thread Paul Congdon (UC Davis)
Yaron,


The interface multiplexing can be achieved using the macvlan driver or using an 
SR-IOV capable NIC (the preferred option); macvlan may need to be extended to 
support VEPA multicast handling, which looks like a rather simple task.

Agreed that the hardware solution is preferred so the macvlan implementation 
doesn’t really matter.  If we are talking SR-IOV, then it is direct mapped, 
regardless of whether there is a VEB or VEPA in the hardware below, so you are 
bypassing the bridge software code also.  

I disagree that adding the multicast handling is simple – while not 
conceptually hard, it will basically require you to put an address table into 
the macvlan implementation – if you have that, then why not have just used the 
one already in the bridge code.  If you hook a VEPA up to a non-hairpin mode 
external bridge, you get the macvlan capability as well.

It also seems to me like the special macvlan interfaces for KVM don’t apply to 
XEN or a non-virtualized environment?  Or more has to be written to make that 
work?  If it is in the bridge code, you get all of this re-use.

 

 


Re: [evb] Re: [PATCH][RFC] net/bridge: add basic VEPA support

2009-08-10 Thread Arnd Bergmann
On Monday 10 August 2009, Fischer, Anna wrote:
 On the VEPA filtering service side, the only change we have implemented
 in the bridging code is that in VEPA mode all frames are passed to the
 uplink on TX. However, frames are still passed through the netfilter 
 hooks before they go out on the wire. On the inbound path, there are
 no changes to the way frames are processed (except the filtering for
 the original source port), so netfilter hooks work in the same way
 as for a normal bridge.

Ah, interesting. I did not realize that the hooks were still active,
although that obviously makes sense. So that would be another
important difference between our implementations.

 If a frame is reflected back because of a hairpin turn, then of course
 the incoming port is the VEPA uplink port and not the port that
 originally sent the frame. So if you are trying to enforce some
 packet filtering on that inbound path, then you would have to do that
 based on MAC addresses and not on bridge ports. But I would assume that
 you would enforce the filtering already before you send out the frame
 to the adjacent bridge. Apart from that, if you enable your bridge to
 behave in VEPA mode, then you would typically do packet filtering etc
 on the adjacent bridge and not use the netfilter hook. You can still use
 both though, if you like.

Right, that was my point. The bridge in VEPA mode would likely be
configured to be completely ignorant of the data going through it
and not do any filtering, and you do all filtering on the adjacent
bridge.

I just wasn't sure that this is possible with ebtables if the
adjacent bridge is a Linux system with the bridge in hairpin turn
mode.

Arnd 


RE: [evb] RE: [PATCH][RFC] net/bridge: add basic VEPA support

2009-08-10 Thread Fischer, Anna
 Subject: Re: [evb] RE: [PATCH][RFC] net/bridge: add basic VEPA support
 
 On Monday 10 August 2009, Stephen Hemminger wrote:
  On Sun, 09 Aug 2009 14:19:08 +0300, Or Gerlitz ogerl...@voltaire.com wrote:
   Looking in macvlan_set_multicast_list() it acts in a similar manner to
   macvlan_set_mac_address() in the sense that it calls dev_mc_sync(). I
   assume what's left is to add macvlan_hash_xxx multicast logic to
   map/unmap multicast groups to what macvlan devices want to receive them
   and this way the flooding can be removed, correct?
 
  The device can just flood all multicast packets, since the filtering
  is done on the receive path anyway.

Is this handled by one of the additional patches? In the current kernel tree
macvlan code it looks as if multicast filtering is only handled by the
physical device driver, but not on particular macvlan devices.
 

 But we'd still have to copy the frames to user space (for both
 macvtap and raw packet sockets) and exit from the guest to inject
 it into its stack, right?

I think it would be nice if you can implement what Or describes for 
macvlan and avoid flooding, and it doesn't sound too hard to do. 

I guess one advantage for macvlan (over the bridge) is that you can 
program in all information you have for the ports attached to it, e.g. 
MAC addresses and multicast addresses. So you could take advantage of
that whereas the bridge always floods multicast frames to all ports.
 
How would this work though, if the OS inside the guest wants to register
to a particular multicast address? Is this propagated through the backend
drivers to the macvlan/macvtap interface?

Anna



Re: [evb] RE: [PATCH][RFC] net/bridge: add basic VEPA support

2009-08-10 Thread Stephen Hemminger
On Mon, 10 Aug 2009 16:32:01 +
Fischer, Anna anna.fisc...@hp.com wrote:

  Subject: Re: [evb] RE: [PATCH][RFC] net/bridge: add basic VEPA support
  
  On Monday 10 August 2009, Stephen Hemminger wrote:
   On Sun, 09 Aug 2009 14:19:08 +0300, Or Gerlitz ogerl...@voltaire.com wrote:
    Looking in macvlan_set_multicast_list() it acts in a similar manner to
    macvlan_set_mac_address() in the sense that it calls dev_mc_sync(). I
    assume what's left is to add macvlan_hash_xxx multicast logic to
    map/unmap multicast groups to what macvlan devices want to receive them
    and this way the flooding can be removed, correct?
  
   The device can just flood all multicast packets, since the filtering
   is done on the receive path anyway.
 
 Is this handled by one of the additional patches? In the current kernel tree
 macvlan code it looks as if multicast filtering is only handled by the
 physical device driver, but not on particular macvlan devices.
  
 
  But we'd still have to copy the frames to user space (for both
  macvtap and raw packet sockets) and exit from the guest to inject
  it into its stack, right?
 
 I think it would be nice if you can implement what Or describes for 
 macvlan and avoid flooding, and it doesn't sound too hard to do. 
 
 I guess one advantage for macvlan (over the bridge) is that you can 
 program in all information you have for the ports attached to it, e.g. 
 MAC addresses and multicast addresses. So you could take advantage of
 that whereas the bridge always floods multicast frames to all ports.
  
 How would this work though, if the OS inside the guest wants to register
 to a particular multicast address? Is this propagated through the backend
 drivers to the macvlan/macvtap interface?

Sure, filtering is better, but multicast performance with a large number
of guests is really a corner case, not the real performance issue.


Re: [evb] RE: [PATCH][RFC] net/bridge: add basic VEPA support

2009-08-10 Thread Arnd Bergmann
On Monday 10 August 2009, Stephen Hemminger wrote:
 On Mon, 10 Aug 2009 16:32:01, Fischer, Anna anna.fisc...@hp.com wrote:
  How would this work though, if the OS inside the guest wants to register
  to a particular multicast address? Is this propagated through the backend
  drivers to the macvlan/macvtap interface?
 
 Sure filtering is better, but multicast performance with large number
 of guests is really a corner case, not the real performance issue.

Well, right now, qemu does not care at all about this; it essentially
leaves the tun device in ALLMULTI state. I should check whether macvtap
at this stage can receive multicast frames at all, but if it does,
it will get them all ;-).

If we want to implement this with kvm, we would have to start with
the qemu virtio-net implementation, to move the receive filter into
the tap device. With tun/tap that will mean less copying to user
space; with macvtap (after implementing TUNSETTXFILTER) we already get
pretty far because we no longer need to have the external interface
in ALLMULTI state. Once that is in place, we can start thinking about
filtering per virtual device.
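As a rough userspace sketch (assuming a tap or macvtap fd is already open, that
macvtap grows the same TUNSETTXFILTER ioctl that tun already has, and that
addrs/count stand in for whatever list the virtio-net backend learns from the
guest), pushing the filter down would look something like:

#include <linux/if_ether.h>
#include <linux/if_tun.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>
#include <sys/ioctl.h>

/* Replace the device's delivery filter with an exact-match list of MAC
 * addresses instead of leaving it in ALLMULTI. */
static int tap_set_mac_filter(int tap_fd, const uint8_t (*addrs)[ETH_ALEN],
			      unsigned int count)
{
	size_t len = sizeof(struct tun_filter) + count * ETH_ALEN;
	struct tun_filter *flt = calloc(1, len);
	int ret;

	if (!flt)
		return -1;
	flt->flags = 0;			/* no TUN_FLT_ALLMULTI */
	flt->count = count;
	memcpy(flt->addr, addrs, count * ETH_ALEN);
	ret = ioctl(tap_fd, TUNSETTXFILTER, flt);
	free(flt);
	return ret;
}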

Arnd 


Re: [evb] RE: [PATCH][RFC] net/bridge: add basic VEPA support

2009-08-09 Thread Or Gerlitz
Stephen Hemminger wrote:
 I have a patch that forwards all multicast packets, and another that does 
 proper forwarding. It should have worked that way in the original macvlan; the 
 current behavior is really a bug.
   
Looking in macvlan_set_multicast_list() it acts in a similar manner to
macvlan_set_mac_address() in the sense that it calls dev_mc_sync(). I
assume what's left is to add macvlan_hash_xxx multicast logic to
map/unmap multicast groups to what macvlan devices want to receive them
and this way the flooding can be removed, correct?


Or.




Re: [evb] RE: [PATCH][RFC] net/bridge: add basic VEPA support

2009-08-08 Thread Arnd Bergmann
On Saturday 08 August 2009, Benny Amorsen wrote:
 Would a SRIOV NIC with VEPA support show up as multiple devices? I.e.
 would I get e.g. eth0-eth7 for a NIC with support for 8 virtual
 interfaces? Would they have different MAC addresses?

It could, but the idea of SR-IOV is that it shows up as 8 PCI
devices. One of them is owned by the host and is seen as eth0
there. The other seven PCI devices (virtual functions) are meant
to be assigned to guests using PCI passthrough and will show
up as the guest's eth0, each one with its own MAC address.

Another mode of operation is VMDq, where the host owns all
interfaces and you might see eth0-eth7 there. You can then attach
a qemu process with a raw packet socket or a single macvtap port
for each of those interfaces. This is not yet implemented in Linux,
so how it will be done is still open. It might all be integrated
into macvlan or some new subsystem alternatively.
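For the raw packet socket variant, the per-interface attachment is just an
AF_PACKET socket bound to that queue's netdev; a sketch (interface name assumed,
error handling trimmed):

#include <arpa/inet.h>
#include <linux/if_ether.h>
#include <linux/if_packet.h>
#include <net/if.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Open a raw socket seeing all frames on one interface, e.g. "eth3". */
static int open_raw_backend(const char *ifname)
{
	struct sockaddr_ll sll;
	int fd = socket(AF_PACKET, SOCK_RAW, htons(ETH_P_ALL));

	if (fd < 0)
		return -1;

	memset(&sll, 0, sizeof(sll));
	sll.sll_family   = AF_PACKET;
	sll.sll_protocol = htons(ETH_P_ALL);
	sll.sll_ifindex  = if_nametoindex(ifname);

	if (bind(fd, (struct sockaddr *)&sll, sizeof(sll)) < 0) {
		close(fd);
		return -1;
	}
	return fd;	/* read()/write() now carry that interface's frames */
}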

AFAIK, every SR-IOV adapter can also be operated as a VMDq adapter,
but there are VMDq adapters that do not support SR-IOV.

Arnd 



RE: [evb] RE: [PATCH][RFC] net/bridge: add basic VEPA support

2009-08-07 Thread Fischer, Anna
Hi Yaron,

Yes, I also believe that VEPA + SRIOV can potentially, in some deployments, 
achieve better performance than a bridge/tap configuration, especially when you 
run multiple VMs and if you want to enable more sophisticated network 
processing in the data path.

If you do have a SRIOV NIC that supports VEPA, then I would think that you do 
not have QEMU or macvtap in the setup any more though. Simply because in that 
case the VM can directly access the VF on the physical device. That would be 
ideal.

I do think that the macvtap driver is a good addition as a simple and fast 
virtual network I/O interface, in case you do not need full bridge 
functionality. It does seem to assume though that the virtualization software 
uses QEMU/tap interfaces. How would this work with a Xen para-virtualized 
network interface? I guess there would need to be yet another driver?

Anna

--

From: Yaron Haviv [mailto:yar...@voltaire.com] 
Sent: 07 August 2009 21:36
To: e...@yahoogroups.com; shemmin...@linux-foundation.org; Fischer, Anna
Cc: bri...@lists.linux-foundation.org; net...@vger.kernel.org; 
virtualization@lists.linux-foundation.org; da...@davemloft.net; 
ka...@trash.net; adobri...@gmail.com; a...@arndb.de
Subject: Re: [evb] RE: [PATCH][RFC] net/bridge: add basic VEPA support

Paul,

I also think that the bridge may not be the right place for VEPA, but rather a 
simpler sw/hw mux, although the VEPA support may reside in multiple places 
(i.e. also in the bridge).

As Arnd pointed out, Or already added an extension to qemu that allows direct 
guest virtual NIC mapping to an interface device (vs. using tap). This was done 
specifically to address VEPA, and results in much faster performance and lower 
CPU overhead (Or and some others are planning additional meaningful performance 
optimizations).

The interface multiplexing can be achieved using the macvlan driver or using an 
SR-IOV capable NIC (the preferred option); macvlan may need to be extended to 
support VEPA multicast handling, which looks like a rather simple task.

It may be counter-intuitive for some, but we expect the (completed) qemu VEPA 
mode + SR-IOV + certain switches with hairpin (VEPA) mode to perform faster 
than using bridge+tap, even for connecting two VMs on the same host.


Yaron 

Sent from BlackBerry

From: e...@yahoogroups.com 
To: 'Stephen Hemminger' ; 'Fischer, Anna' 
Cc: bri...@lists.linux-foundation.org ; linux-ker...@vger.kernel.org ; 
net...@vger.kernel.org ; virtualization@lists.linux-foundation.org ; 
e...@yahoogroups.com ; da...@davemloft.net ; ka...@trash.net ; 
adobri...@gmail.com ; 'Arnd Bergmann' 
Sent: Fri Aug 07 21:58:00 2009
Subject: [evb] RE: [PATCH][RFC] net/bridge: add basic VEPA support 
  
 
 After reading more about this, I am not convinced this should be part 
 of the bridge code. The bridge code really consists of two parts:
 forwarding table and optional spanning tree. Well, the VEPA code short-circuits 
 both of these; I can't imagine it working with STP turned on. The only part of 
 the bridge code that really gets used by this is the receive packet hooks and 
 the crufty old API.
 
 So instead of adding more stuff to the existing bridge code, why not have 
 a new driver for just VEPA? You could do it with a simple version of the 
 macvlan type driver.

Stephen,

Thanks for your comments and questions. We do believe the bridge code is
the right place for this, so I'd like to elaborate on that a bit more to
help persuade you. Sorry for the long-winded response, but here are some
thoughts:

- First and foremost, VEPA is going to be a standard addition to the IEEE
802.1Q specification. The working group agreed at the last meeting to
pursue a project to augment the bridge standard with hairpin mode (aka
reflective relay) and a remote filtering service (VEPA). See for details:
http://www.ieee802.org/1/files/public/docs2009/new-evb-congdon-evbPar5C-0709-v01.pdf

- The VEPA functionality was really a pretty small change to the code with
low risk and wouldn't seem to warrant an entire new driver or module.

- There are good use cases where VMs will want to have some of their
interfaces attached to bridges and others to bridges operating in VEPA mode.
In other words, we see simultaneous operation of the bridge code and VEPA
occurring, so having as much of the underlying code as common as possible
would seem to be beneficial. 

- By augmenting the bridge code with VEPA there is a great amount of re-use
achieved. It works wherever the bridge code works and doesn't need anything
special to support KVM, XEN, and all the hooks, etc...

- The hardware vendors building SR-IOV NICs with embedded switches will be
adding VEPA mode, so keeping the bridge module in sync would be
consistent with this trend and direction. It will be possible to extend the
hardware implementations by cascading a software bridge and/or VEPA, so
being in sync with the architecture would make this more consistent.

- The forwarding

Re: [evb] RE: [PATCH][RFC] net/bridge: add basic VEPA support

2009-08-07 Thread Stephen Hemminger
On Fri, 7 Aug 2009 14:06:58 -0700
Paul Congdon (UC Davis) ptcong...@ucdavis.edu wrote:

 Yaron,
 
 
 The interface multiplexing can be achieved using the macvlan driver or using an 
 SR-IOV capable NIC (the preferred option); macvlan may need to be extended to 
 support VEPA multicast handling, which looks like a rather simple task.
 
 Agreed that the hardware solution is preferred so the macvlan implementation 
 doesn’t really matter.  If we are talking SR-IOV, then it is direct mapped, 
 regardless of whether there is a VEB or VEPA in the hardware below, so you 
 are bypassing the bridge software code also.  
 
 I disagree that adding the multicast handling is simple – while not 
 conceptually hard, it will basically require you to put an address table into 
 the macvlan implementation – if you have that, then why not have just used 
 the one already in the bridge code.  If you hook a VEPA up to a non-hairpin 
 mode external bridge, you get the macvlan capability as well.

I have a patch that forwards all multicast packets, and another that does
proper forwarding. It should have worked that way in the original macvlan; the
current behavior is really a bug.


-- 