> On July 19, 2013, 4:56 p.m., Chiradeep Vittal wrote:
> > scripts/vm/network/vnet/modifyvxlan.sh, line 28
> > <https://reviews.apache.org/r/12623/diff/2/?file=323001#file323001line28>
> >
> >     I think there is a need to prevent the guest vm from spoofing the 
> > multicast?

We don't need to.

A VM can only see the inner frame of a VXLAN packet.
VXLAN uses multicast only for the outer packet, which the VM cannot see or 
manipulate.

Below is the bridging diagram within a KVM host.

InnerFrame:  VM <-> eth*|vnet* <-> brethX-Y <-> vxlanY
                                                  || (*1)
OuterPacket:                                    cloudbr* <-> eth* ==> Outside of the Host


All frames that a VM sends are encapsulated at (*1).
Most frames are encapsulated into unicast packets here, since the vxlanY 
interface learns the mapping between other VMs' MAC addresses and their 
hosts' IP addresses.
Only when the vxlanY interface hasn't learned a mapping yet, or when the 
inner frame is broadcast or multicast, does it use the multicast group 
statically assigned at line 33 of modifyvxlan.sh.

The multicast group is assigned statically to the vxlan interface on the 
host, so a VM cannot spoof the multicast group.
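
For reference, the host-side setup that modifyvxlan.sh performs is roughly 
like the sketch below (the device names, VNI and multicast address are only 
examples for illustration, not the exact values the script uses):

    # create the vxlan device for VNI 10000, bound to a static multicast
    # group; outer packets leave via cloudbr*/eth* (the labeled bridge)
    ip link add vxlan10000 type vxlan id 10000 group 239.0.39.16 dev cloudbr0
    ip link set vxlan10000 up

    # attach it to the per-network guest bridge (brethX-Y in the diagram)
    brctl addbr breth0-10000
    brctl addif breth0-10000 vxlan10000
    ip link set breth0-10000 up

    # inner MAC -> host IP mappings learned by the vxlan device:
    bridge fdb show dev vxlan10000

The guest's vnet* interface is only ever plugged into breth0-10000, so the 
multicast group remains a purely host-side parameter of the vxlan device.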


> On July 19, 2013, 4:56 p.m., Chiradeep Vittal wrote:
> > scripts/vm/network/vnet/modifyvxlan.sh, line 202
> > <https://reviews.apache.org/r/12623/diff/2/?file=323001#file323001line202>
> >
> >     What about vxlan with OVS, will this work?

Unfortunately, it won't work with OVS.
But that's not a problem with my implementation.

The problem is that the current release of OVS doesn't fully support the 
VXLAN protocol.
The lack of multicast support is critical, because this VXLAN isolation 
depends on the multicast learning feature of the VXLAN protocol.

Please see "Q: How much of the VXLAN protocol does Open vSwitch currently 
support?" in the URL below for detail.
http://git.openvswitch.org/cgi-bin/gitweb.cgi?p=openvswitch;a=blob;f=FAQ;h=98d273dd2d4311d16a3fff33051b0c3beed6e6b1;hb=d4c5b6423aa063eaf296ec8cf7d1a50197863cec


VXLAN without multicast learning is very similar to GRE.
That means that if we want to support the VXLAN frame format with the current 
OVS, we need an SDN controller to set up flow rules in OVS.
I think an SDN controller is an unnecessary component for VXLAN isolation, 
since the VXLAN protocol is designed to work well without one.
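
To illustrate, with the current OVS a VXLAN tunnel port has to be given its 
remote endpoint explicitly or be driven by flow rules; the bridge, port name 
and address below are only examples:

    # one tunnel port per remote VTEP; no multicast-based learning of peers
    ovs-vsctl add-port cloudbr0 vxlan-peer1 -- \
        set interface vxlan-peer1 type=vxlan \
        options:remote_ip=192.0.2.11 options:key=10000

Building a full mesh of such ports between hosts, or using flow-based 
tunneling (options:remote_ip=flow) with per-flow rules, is exactly the work 
that would fall on an SDN controller.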

In my opinion, what we have to do is wait for an OVS release that supports 
VXLAN completely.
Once OVS starts supporting the multicast aspect of VXLAN, we can hopefully 
start implementing VXLAN isolation for KVM with OVS and/or for Xen.


On July 19, 2013, 4:56 p.m., Toshiaki Hatano wrote:
> > I just wanted to make sure that you have tested your patch with regular 
> > VLANs as well. 
> > And, what the behavior will be when VxLAN is enabled in the zone, but only 
> > Xen / VMW hypervisors are there
> > Also, some documentation on how the cloud operator can get this feature 
> > (KVM version/Which version of Centos/Ubuntu/etc), configuration of 
> > switches, bridges, etc would be useful.

Yes, I ran the same tests with regular VLANs and it works fine.

I don't have Xen or VMware to test with, but as far as I can tell from the 
code:
The Xen agent would raise CloudRuntimeException("Unable to support this type 
of network broadcast domain: " + nic.getBroadcastUri()) in 
com.cloud.hypervisor.xen.resource.CitrixResourceBase.getNetwork(Connection, 
NicTO) before it actually submits VIF.create to the hypervisor.
The VMware agent would warn("Unrecognized broadcast type in VmwareResource, 
type: " + nicTo.getBroadcastType().toString() + ". Use vlan info from 
labeling: " + defaultVlan) in 
com.cloud.hypervisor.vmware.resource.VmwareResource.getVlanInfo(NicTO, String) 
and assign the VLAN interface using defaultVlan instead of VXLAN.

In short, the Xen agent treats an unknown isolation type as an error and does 
not start the VM.
The VMware agent just ignores an unknown isolation type and starts the VM 
with the default VLAN.


Yes, I will write documentation.
Is that a requirement for this patch to be committed?


- Toshiaki


-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/12623/#review23523
-----------------------------------------------------------


On July 17, 2013, 11:54 p.m., Toshiaki Hatano wrote:
> 
> -----------------------------------------------------------
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/12623/
> -----------------------------------------------------------
> 
> (Updated July 17, 2013, 11:54 p.m.)
> 
> 
> Review request for cloudstack, Alena Prokharchyk, Chiradeep Vittal, Murali 
> Reddy, Hugo Trippaers, and Sheng Yang.
> 
> 
> Bugs: https://issues.apache.org/jira/browse/CLOUDSTACK-2328
> 
> 
> Repository: cloudstack-git
> 
> 
> Description
> -------
> 
> CLOUDSTACK-2328: Linux native VXLAN support on KVM hypervisor
> 
> Initial patch for VXLAN support.
> Fully functional, hopefully, for GuestNetwork - AdvancedZone.
> 
> Patch Note:
>  in cloudstack-server
> - Add isolation method VXLAN
> - Add VxlanGuestNetworkGuru as plugin for VXLAN isolation
> - Modify NetworkServiceImpl to handle extended vNet range for VXLAN isolation
> - Add VXLAN isolation option in zoneWizard UI
> 
>  in cloudstack-agent (kvm)
> - Add modifyvxlan.sh script that handle bridge/vxlan interface manipulation 
> script
> -- Usage is exactly same to modifyvlan.sh
> - BridgeVifDriver will call modifyvxlan.sh instead of modifyvlan.sh when 
> VXLAN is used for isolation
> 
> Database changes:
> - No change in database structure.
> - VXLAN isolation uses same tables that VLAN uses to store vNet allocation 
> status.
> 
> Known Issue:
> - Some resource still says 'VLAN' in log even if VXLAN is used
> - in UI, "Network - GuestNetworks" doesn't display VNI
> -- VLAN ID field displays "N/A"
> 
> 
> Diffs
> -----
> 
>   api/src/com/cloud/network/Networks.java 5aede05 
>   api/src/com/cloud/network/PhysicalNetwork.java f6cb1a6 
>   client/pom.xml 32ab94a 
>   client/tomcatconf/componentContext.xml.in 1fbec61 
>   
> plugins/hypervisors/kvm/src/com/cloud/hypervisor/kvm/resource/BridgeVifDriver.java
>  195cf40 
>   
> plugins/hypervisors/kvm/src/com/cloud/hypervisor/kvm/resource/LibvirtComputingResource.java
>  da86612 
>   plugins/network-elements/vxlan/pom.xml PRE-CREATION 
>   
> plugins/network-elements/vxlan/src/com/cloud/network/guru/VxlanGuestNetworkGuru.java
>  PRE-CREATION 
>   
> plugins/network-elements/vxlan/test/com/cloud/network/guru/VxlanGuestNetworkGuruTest.java
>  PRE-CREATION 
>   plugins/pom.xml 261e8e8 
>   scripts/vm/network/vnet/modifyvlan.sh 8ed3905 
>   scripts/vm/network/vnet/modifyvxlan.sh PRE-CREATION 
>   server/src/com/cloud/network/NetworkManagerImpl.java f6e9a0a 
>   server/src/com/cloud/network/NetworkServiceImpl.java ccd23bf 
>   ui/scripts/ui-custom/zoneWizard.js 877dbc0 
> 
> Diff: https://reviews.apache.org/r/12623/diff/
> 
> 
> Testing
> -------
> 
> #) Test set up
> - Components
>   - 1x management server
>   - 1x nfs storage
>   - 3x Linux KVM host
>   -- CentOS 6.4 based
>   -- Replace kernel by version 3.8.13, VXLAN kernel module built as loadable 
> module
>   -- Replace iproute2 by version iproute2-ss130430
>   -- BridgeVifDriver (Default)
> 1. create advanced zone from zone wizard without security group option
> 2. select hypervisor: KVM
> 3. assign Guest network to separated physical network, isolated by VXLAN
>    specify bridge name (traffic label) for Guest network, this bridge should 
> have IPv4 address (global/private both are OK).
> 4. assign Guest vNet range 10000-20000
> 5. other parameter are normal
> 6. add 2 more hosts into same zone/pod/cluster after zone wizard is finished
> 
> #) Test case 1: start/stop VR
> 1. Create network offering, same configuration as 
> DefaultIsolatedNetworkOfferingWithSourceNatService but persistent
> 2. Create network with network offering which is created in step 0
> 3. Confirm VR is started and bridge/vxlan device created on host
> 4. Delete network which is created in step 1
> 5. Confirm VR is deleted and bridge/vxlan device deleted on host
> 
> #) Test case 2: start/stop an instance (VR is on same host)
> 1. Add an instance from UI, create network during wizard.
> 2. Confirm VM and VR are on the same host
> 3. Confirm it's pingable from VM to VR
> 4. Confirm it's pingable from VM to public network (after opening Egress rule)
> 5. Destroy instance
> 6. Confirm bridge/vxlan device is still on the host
> 7. Delete network after the VM is expunged
> 8. Confirm VR are deleted and bridge/vxlan device deleted on the host
> 
> #) Test case 3: start/stop an instance (VR is on different host)
> 1. Add an instance from UI, create network during wizard.
> 2. Confirm VM and VR are on the different host
> 3. Confirm it's pingable from VM to VR
> 4. Confirm it's pingable from VM to public network (after opening Egress rule)
> 5. Destroy instance, wait for expunging, then delete network
> 6. Confirm VM and VR are deleted and bridge/vxlan device deleted on both host
> 
> #) Test case 4: migrate instance
> 1. Add an instance from UI, create network during wizard.
> 2. Open Egress rule on the network
> 3. Migrate VM from host (A) to empty host (B)
> 4. Confirm it's pingable from VM to public network
> 5. Migrate VM from host (B) to host (C) that has VR
> 6. Confirm it's pingable from VM to public network
> 7. Confirm bridge/vxlan device deleted on the host (B)
> 8. Migrate VM from (C) to empty host (A)
> 9. Confirm it's pingable from VM to public network
> 
> #) Test case 5: plug/unplug Nic
> 1. Add an instance from UI, create network during wizard.
> 2. Create additional network
> 3. Add NIC for network created in step 2 to the VM
> 4. Confirm it's pingable from VM to public network by using both side of NICs
> 5. Delete NIC created in step 3
> 6. Confirm bridge/vxlan device deleted on the host
> 
> 
> Thanks,
> 
> Toshiaki Hatano
> 
>
