Hello all. My name is Tanner Danzey. Just a little introduction since this
will be my first post to this list. I am from Fargo, North Dakota and I
have had fairly significant Linux & Linux server experience for about four
years.

My coworker and I are working on rolling out a CloudStack cloud using Ubuntu
13.10, Ceph, RADOS Gateway (S3), and KVM on commodity hardware. Our Ceph and
RADOS Gateway setup is rock solid as far as our testing can tell, and
installing the management server is a breeze, so neither of those is an
issue. Our problem seems to be of a networking nature.

Our desire to use advanced networking has been complicated by our other
requirements. Originally we planned to use LACP and trunk configurations
throughout to achieve the highest bandwidth and redundancy possible, but we
discovered that our switches (two Catalyst 2960 48-port switches in a
stacked configuration) only support six port channels, which ruled that
plan out. We are still using bonded adapters, but in active-backup mode, so
that no switch-side port channels or other configuration tricks are needed.
I have attached an example of our KVM hypervisor configuration.

We have interface bond0 in active-backup mode with em1 and em2 as its
slaves; both of those ports are connected to switch trunk ports. Here's
where things get silly. Our KVM nodes can be pinged and managed on their
assigned management IPs, and we can create a zone and assign the bridges
their respective traffic types. However, the nodes are not connected to the
public network, and some variations of this configuration result in no
connectivity at all beyond link-local.
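For anyone trying to reproduce or sanity-check this, these are the sorts of commands we have been using to verify bond and bridge state on a node (the interface and bridge names come from our config below; exact output will of course depend on the hardware):

```shell
# confirm which slave is currently active and that both links are up
cat /proc/net/bonding/bond0

# confirm the VLAN subinterface exists and carries the expected tag
ip -d link show bond0.50

# confirm the bridge actually has the VLAN subinterface enslaved
brctl show cloudbr0

# watch whether tagged public traffic is really arriving from the trunk
tcpdump -e -n -i bond0.50
```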

Essentially, we are trying to find the best way to approach this situation.
We are open to using Open vSwitch. Our VLANs will be 50 for public, 100 for
management/storage, and 200-300 for guest traffic, unless there is a more
pragmatic arrangement.
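If we do go the Open vSwitch route, a sketch of what we would try is below. This assumes the openvswitch-switch package is installed and replaces the kernel bonding driver with an OVS bond; the bridge and bond names mirror our attached config, and the management-port name mgmt0 is just an example:

```shell
# create the bridge that CloudStack will be pointed at
ovs-vsctl add-br cloudbr0

# add em1/em2 as an OVS bond in active-backup mode (no switch-side
# port channels needed, same as our current kernel bonding setup)
ovs-vsctl add-bond cloudbr0 bond0 em1 em2 bond_mode=active-backup

# internal port for the management/storage VLAN (tag 100, per our plan)
ovs-vsctl add-port cloudbr0 mgmt0 tag=100 -- set Interface mgmt0 type=internal
```

The management IP (10.100.0.33/24 in our case) would then go on mgmt0 instead of bond0.100.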

Our plan after this deployment is to help brush up documentation where
possible regarding some of these edge-case scenarios and potentially do a
thorough writeup. Any and all help is appreciated, and if you require
compensation for the help I'm sure something can be arranged.

Thanks in advance, sorry for the overly long message :)
auto lo
iface lo inet loopback

auto em1
iface em1 inet manual
  bond-master bond0
  bond-primary em1

auto em2
iface em2 inet manual
  bond-master bond0

auto bond0
iface bond0 inet manual
  bond-mode active-backup
  bond-miimon 100
  bond-slaves em1 em2


# management network interface (VLAN 100)
auto bond0.100
iface bond0.100 inet static
  address 10.100.0.33
  netmask 255.255.255.0
  network 10.100.0.0
  broadcast 10.100.0.255
  gateway 10.100.0.1
  dns-nameservers 10.100.0.4
  dns-search dcnfargo.ntgcloud

#public network
auto cloudbr0
iface cloudbr0 inet manual
        bridge_ports bond0.50
        bridge_fd 5
        bridge_stp off
        bridge_maxwait 1

# private network
auto cloudbr1
iface cloudbr1 inet manual
        bridge_ports bond0.200
        bridge_fd 5
        bridge_stp off
        bridge_maxwait 1
