Hi all,

My environment: CS 4.1, KVM, CentOS 6.4 (management + node1 + node2), and an
OpenIndiana NFS server as primary and secondary storage.
I use an advanced networking zone. I split management/public/guest traffic
into different VLANs and use KVM network labels (bridge names):
# cat /etc/cloud/agent/agent.properties |grep device
guest.network.device=cloudbrguest
private.network.device=cloudbrmanage
public.network.device=cloudbrpublic

I have the following network configuration:
eth0+eth1=bond0
eth2+eth3=bond1

I use a VLAN with ID 211 on the bond1 interface for guest traffic:
cloudbrguest            8000.90e2ba317614       yes             vlan211
cloudbrmanage           8000.90e2ba317614       yes             bond1.210
cloudbrpublic           8000.90e2ba317614       yes             bond1.221
cloudbrstor             8000.0025908814a4       yes             bond0


The problem appeared after I upgraded CS from 4.0.2 to 4.1.

How it works in 4.0.2:
- a bridge interface cloudVirBr#VLANID is created on the hypervisor, where
#VLANID is a value from 1024 to 4096 (specified when creating the zone),
e.g. cloudVirBr1224
- a VLAN interface vlan211.#VLANID is created on the hypervisor and plugged
into cloudVirBr#VLANID
I only had to permit VLAN ID 211 on the switch ports, and all guest traffic
(VLANs 1024-4096) was encapsulated inside it.
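To make the 4.0.2 behavior concrete, here is a rough sketch of the commands
the agent effectively ran for a guest network with VLAN ID 1224 (device names
taken from my setup; this is my reconstruction, not the agent's actual code):

```shell
# 4.0.2-style setup for guest VLAN 1224 (illustrative only, requires root):
brctl addbr cloudVirBr1224                # guest bridge named after the VLAN ID

# inner tag 1224 is stacked on top of vlan211 (QinQ), not on bond1 directly
ip link add link vlan211 name vlan211.1224 type vlan id 1224
ip link set vlan211.1224 up

brctl addif cloudVirBr1224 vlan211.1224   # plug the stacked VLAN into the bridge
```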

How it works in 4.1:
- a bridge interface br#ETHNAME-#VLANID is created on the hypervisor, where
#VLANID is a value from 1024 to 4096 (specified when creating the zone) and
#ETHNAME is the name of the device on top of which the VLAN will be created,
e.g. brbond1-1224
- a VLAN interface bond1.#VLANID is created on the hypervisor and plugged
into br#ETHNAME-#VLANID
However, the VLAN interface is now created on top of the bond1 interface,
while I would like it to be created on top of vlan211 (bond1.211).
Now I have to permit VLAN IDs 1024-4096 on the switch ports, which is not
convenient.
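For comparison, the 4.1 behavior for the same guest VLAN looks roughly like
this (again my own sketch of the equivalent commands, not the agent's code):

```shell
# 4.1-style setup for guest VLAN 1224 (illustrative only, requires root):
brctl addbr brbond1-1224                  # bridge named after device + VLAN ID

# tag 1224 is created directly on bond1, so the switch sees VLAN 1224 untagged-in-211 no more
ip link add link bond1 name bond1.1224 type vlan id 1224
ip link set bond1.1224 up

brctl addif brbond1-1224 bond1.1224       # plug the VLAN interface into the bridge
```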

How do I configure CS 4.1 so that it works with guest VLANs the same way it
did in CS 4.0.2?

-- 
Regards,
Valery

http://protocol.by/slayer
