-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/12623/#review25492
-----------------------------------------------------------


Commit 34ae32e0c24777f9ac9cf265447688a3e21680c7 in branch refs/heads/master 
from Toshiaki Hatano
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=34ae32e ]

CLOUDSTACK-2328: Linux native VXLAN support on KVM hypervisor

Initial patch for VXLAN support.
Fully functional, hopefully, for GuestNetwork - AdvancedZone.

Patch Note:
 in cloudstack-server
- Add isolation method VXLAN
- Add VxlanGuestNetworkGuru as plugin for VXLAN isolation
- Modify NetworkServiceImpl to handle extended vNet range for VXLAN isolation
- Add VXLAN isolation option in zoneWizard UI

 in cloudstack-agent (kvm)
- Add modifyvxlan.sh script that handles bridge/VXLAN interface manipulation (see the illustrative commands below)
-- Usage is exactly the same as modifyvlan.sh
- BridgeVifDriver will call modifyvxlan.sh instead of modifyvlan.sh when VXLAN
is used for isolation
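
As a rough illustration (my own sketch, not the exact script shipped in this
patch), the bridge/VXLAN manipulation boils down to standard iproute2/brctl
calls; the VNI, multicast group, device and bridge names below are placeholders:

  # VXLAN VNIs are 24-bit (1-16777215), hence the extended vNet range
  # create a VXLAN device for VNI 1000 on top of the physical NIC
  # (multicast-based learning; the group address is only an example)
  ip link add vxlan1000 type vxlan id 1000 group 239.0.0.1 dev eth0
  ip link set vxlan1000 up
  # create the guest bridge and plug the VXLAN device into it
  brctl addbr brvx-1000
  brctl addif brvx-1000 vxlan1000
  ip link set brvx-1000 up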

Database changes:
- No change in database structure.
- VXLAN isolation uses the same tables that VLAN uses to store vNet allocation
status.

Known Issues:
- Some resources still say 'VLAN' in logs even when VXLAN is used
- In the UI, "Network - GuestNetworks" doesn't display the VNI
-- The VLAN ID field displays "N/A"


- ASF Subversion and Git Services


On Aug. 21, 2013, 2:51 a.m., Toshiaki Hatano wrote:
> 
> -----------------------------------------------------------
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/12623/
> -----------------------------------------------------------
> 
> (Updated Aug. 21, 2013, 2:51 a.m.)
> 
> 
> Review request for cloudstack, Alena Prokharchyk, Chiradeep Vittal, Murali 
> Reddy, Hugo Trippaers, and Sheng Yang.
> 
> 
> Bugs: https://issues.apache.org/jira/browse/CLOUDSTACK-2328
> 
> 
> Repository: cloudstack-git
> 
> 
> Description
> -------
> 
> CLOUDSTACK-2328: Linux native VXLAN support on KVM hypervisor
> 
> Initial patch for VXLAN support.
> Fully functional, hopefully, for GuestNetwork - AdvancedZone.
> 
> Patch Note:
>  in cloudstack-server
> - Add isolation method VXLAN
> - Add VxlanGuestNetworkGuru as plugin for VXLAN isolation
> - Modify NetworkServiceImpl to handle extended vNet range for VXLAN isolation
> - Add VXLAN isolation option in zoneWizard UI
> 
>  in cloudstack-agent (kvm)
> - Add modifyvxlan.sh script that handles bridge/VXLAN interface manipulation
> -- Usage is exactly the same as modifyvlan.sh
> - BridgeVifDriver will call modifyvxlan.sh instead of modifyvlan.sh when
> VXLAN is used for isolation
> 
> Database changes:
> - No change in database structure.
> - VXLAN isolation uses the same tables that VLAN uses to store vNet allocation
> status.
> 
> Known Issues:
> - Some resources still say 'VLAN' in logs even when VXLAN is used
> - In the UI, "Network - GuestNetworks" doesn't display the VNI
> -- The VLAN ID field displays "N/A"
> 
> 
> Diffs
> -----
> 
>   api/src/com/cloud/network/Networks.java f8166c6 
>   api/src/com/cloud/network/PhysicalNetwork.java f6cb1a6 
>   client/pom.xml 1afe5b8 
>   client/tomcatconf/componentContext.xml.in f36d0ee 
>   plugins/hypervisors/kvm/src/com/cloud/hypervisor/kvm/resource/BridgeVifDriver.java 195cf40 
>   plugins/hypervisors/kvm/src/com/cloud/hypervisor/kvm/resource/LibvirtComputingResource.java a156ae6 
>   plugins/hypervisors/kvm/src/com/cloud/hypervisor/kvm/resource/OvsVifDriver.java 7038d7e 
>   plugins/network-elements/vxlan/pom.xml PRE-CREATION 
>   plugins/network-elements/vxlan/src/com/cloud/network/guru/VxlanGuestNetworkGuru.java PRE-CREATION 
>   plugins/network-elements/vxlan/test/com/cloud/network/guru/VxlanGuestNetworkGuruTest.java PRE-CREATION 
>   plugins/pom.xml 0812642 
>   scripts/vm/network/vnet/modifyvlan.sh 8ed3905 
>   scripts/vm/network/vnet/modifyvxlan.sh PRE-CREATION 
>   server/src/com/cloud/network/NetworkManagerImpl.java c683809 
>   server/src/com/cloud/network/NetworkServiceImpl.java eadd7bc 
> 
> Diff: https://reviews.apache.org/r/12623/diff/
> 
> 
> Testing
> -------
> 
> #) Test set up
> - Components
>   - 1x management server
>   - 1x nfs storage
>   - 3x Linux KVM host
>   -- CentOS 6.4 based
>   -- Replace kernel with version 3.8.13; VXLAN kernel module built as a loadable module
>   -- Replace iproute2 with version iproute2-ss130430
>   -- BridgeVifDriver (Default)
> 1. Create an advanced zone from the zone wizard without the security group option
> 2. Select hypervisor: KVM
> 3. Assign the Guest network to a separate physical network, isolated by VXLAN;
>    specify a bridge name (traffic label) for the Guest network. This bridge
>    should have an IPv4 address (global/private both are OK); see the check commands after this list.
> 4. Assign Guest vNet range 10000-20000
> 5. Other parameters are left at their defaults
> 6. Add 2 more hosts into the same zone/pod/cluster after the zone wizard is finished
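> 
> As a quick sanity check of the prerequisites above (my own illustrative
> commands, not part of the original test plan; "eth0" and "cloudbr1" are
> placeholder names for the physical NIC and the Guest traffic bridge):
> 
>   # the VXLAN module must load on the replaced 3.8.13 kernel
>   modprobe vxlan && lsmod | grep vxlan
>   # iproute2 must be new enough to create vxlan devices (create/delete probe)
>   ip link add vxprobe0 type vxlan id 42 group 239.0.0.42 dev eth0 && ip link del vxprobe0
>   # the Guest traffic bridge must carry an IPv4 address, as noted in step 3
>   ip -4 addr show dev cloudbr1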
> 
> #) Test case 1: start/stop VR
> 1. Create a network offering with the same configuration as
> DefaultIsolatedNetworkOfferingWithSourceNatService, but persistent
> 2. Create a network with the network offering created in step 1
> 3. Confirm VR is started and the bridge/vxlan device is created on the host
>    (see the check commands below)
> 4. Delete the network created in step 2
> 5. Confirm VR is deleted and the bridge/vxlan device is deleted on the host
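> 
> To make the device checks in steps 3 and 5 concrete (generic commands of my
> own, not taken from the patch; the actual device and bridge names are
> generated by the agent):
> 
>   # list vxlan devices with their details (VNI, group, underlying device)
>   ip -d link show | grep -B1 -A2 vxlan
>   # show which bridge each vxlan device is enslaved to
>   brctl show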
> 
> #) Test case 2: start/stop an instance (VR is on same host)
> 1. Add an instance from UI, create network during wizard.
> 2. Confirm VM and VR are on the same host
> 3. Confirm it's pingable from VM to VR
> 4. Confirm it's pingable from VM to public network (after opening Egress rule)
> 5. Destroy instance
> 6. Confirm bridge/vxlan device is still on the host
> 7. Delete network after the VM is expunged
> 8. Confirm VR is deleted and the bridge/vxlan device is deleted on the host
> 
> #) Test case 3: start/stop an instance (VR is on different host)
> 1. Add an instance from UI, create network during wizard.
> 2. Confirm VM and VR are on different hosts
> 3. Confirm it's pingable from VM to VR
> 4. Confirm it's pingable from VM to public network (after opening Egress rule)
> 5. Destroy instance, wait for expunging, then delete network
> 6. Confirm VM and VR are deleted and the bridge/vxlan device is deleted on both hosts
> 
> #) Test case 4: migrate instance
> 1. Add an instance from UI, create network during wizard.
> 2. Open Egress rule on the network
> 3. Migrate VM from host (A) to empty host (B)
> 4. Confirm it's pingable from VM to public network
> 5. Migrate VM from host (B) to host (C) that has VR
> 6. Confirm it's pingable from VM to public network
> 7. Confirm bridge/vxlan device deleted on the host (B)
> 8. Migrate VM from (C) to empty host (A)
> 9. Confirm it's pingable from VM to public network
> 
> #) Test case 5: plug/unplug Nic
> 1. Add an instance from UI, create network during wizard.
> 2. Create additional network
> 3. Add NIC for network created in step 2 to the VM
> 4. Confirm it's pingable from VM to public network using both NICs
> 5. Delete NIC created in step 3
> 6. Confirm bridge/vxlan device deleted on the host
> 
> 
> Thanks,
> 
> Toshiaki Hatano
> 
>
