Here's a simple (not recommended) one-nic setup: http://marcus.mlsorensen.com/cloudstack-extras/cs-4.1-kvm-networking-one-nic.rtf
And a simple two-nic setup: http://marcus.mlsorensen.com/cloudstack-extras/cs-4.1-kvm-networking-two-nic.rtf

Hasty docs put together on the road...

On Thu, Aug 1, 2013 at 11:28 AM, Marcus Sorensen <shadow...@gmail.com> wrote:
> I'm short on time, but here's the KVM advanced networking config we
> use for testing. If someone wants to write a doc based around it, that
> would be nice.
>
> Start out the KVM host with two networks, eth0 and eth1. eth1 is
> intended for public traffic; eth0 will carry the guest vlans and the
> management vlan. Then create a bridge interface for each:
>
> [root@devcloud-kvm ~]# brctl show
> bridge name     bridge id               STP enabled     interfaces
> cloud0          8000.000000000000       no
> br0             8000.5254004eff4f       no              eth0
> br1             8000.52540052b15e       no              eth1
>
> br0       Link encap:Ethernet  HWaddr 52:54:00:4E:FF:4F
>           inet addr:172.17.10.10  Bcast:172.17.10.255  Mask:255.255.255.0
>           inet6 addr: fe80::5054:ff:fe4e:ff4f/64 Scope:Link
>           UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
>           RX packets:127 errors:0 dropped:0 overruns:0 frame:0
>           TX packets:30 errors:0 dropped:0 overruns:0 carrier:0
>           collisions:0 txqueuelen:0
>           RX bytes:5846 (5.7 KiB)  TX bytes:4345 (4.2 KiB)
>
> br1       Link encap:Ethernet  HWaddr 52:54:00:52:B1:5E
>           inet addr:192.168.100.10  Bcast:192.168.100.255  Mask:255.255.255.0
>           inet6 addr: fe80::5054:ff:fe52:b15e/64 Scope:Link
>           UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
>           RX packets:343 errors:0 dropped:0 overruns:0 frame:0
>           TX packets:153 errors:0 dropped:0 overruns:0 carrier:0
>           collisions:0 txqueuelen:0
>           RX bytes:24227 (23.6 KiB)  TX bytes:29108 (28.4 KiB)
>
> eth0      Link encap:Ethernet  HWaddr 52:54:00:4E:FF:4F
>           inet6 addr: fe80::5054:ff:fe4e:ff4f/64 Scope:Link
>           UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
>           RX packets:157 errors:0 dropped:0 overruns:0 frame:0
>           TX packets:38 errors:0 dropped:0 overruns:0 carrier:0
>           collisions:0 txqueuelen:1000
>           RX bytes:12276 (11.9 KiB)  TX bytes:4897 (4.7 KiB)
>
> eth1      Link encap:Ethernet  HWaddr 52:54:00:52:B1:5E
>           inet6 addr: fe80::5054:ff:fe52:b15e/64 Scope:Link
>           UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
>           RX packets:377 errors:0 dropped:0 overruns:0 frame:0
>           TX packets:163 errors:0 dropped:0 overruns:0 carrier:0
>           collisions:0 txqueuelen:1000
>           RX bytes:34044 (33.2 KiB)  TX bytes:29748 (29.0 KiB)
>
> lo        Link encap:Local Loopback
>           inet addr:127.0.0.1  Mask:255.0.0.0
>           inet6 addr: ::1/128 Scope:Host
>           UP LOOPBACK RUNNING  MTU:16436  Metric:1
>           RX packets:863 errors:0 dropped:0 overruns:0 frame:0
>           TX packets:863 errors:0 dropped:0 overruns:0 carrier:0
>           collisions:0 txqueuelen:0
>           RX bytes:120247 (117.4 KiB)  TX bytes:120247 (117.4 KiB)
>
> OK, now the KVM host is ready. Just define the KVM traffic label for
> management traffic to be 'br0', for guest to be 'br0', and for public
> to be 'br1'. CloudStack will create any necessary bridges or vlans.
> You can leave the vlan option empty if you don't want it to create a
> vlan (say, for management). I can perhaps go into more detail later.
>
> On Wed, Jul 31, 2013 at 12:33 PM, Marcus Sorensen <shadow...@gmail.com> wrote:
>> Yes, that's correct. I think we need to update the documentation. The
>> user simply needs to create a bridge where 'public' traffic will work,
>> and then set that bridge name as the traffic label for public traffic.
>> It will then create the vlan device and the bridge necessary for
>> public based on the physical ethernet device of that bridge.
>>
>> Note, in this example, it is only looking for cloudVirBr for
>> compatibility: if there are existing cloudVirBr bridges, then the agent
>> will continue to create cloudVirBr bridges; otherwise, it will create
>> breth bridges, which allow the same vlan number on different physical
>> interfaces.
>>
>> We can easily create some concrete examples for this...
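For anyone reproducing the two-nic layout above on a CentOS-style host, the bridges can be defined persistently with network-scripts along these lines. This is only a sketch: device names and addresses are taken from the example output above, and it assumes the bridge-utils package is installed.

```shell
# /etc/sysconfig/network-scripts/ifcfg-eth0  (guest vlans + management)
DEVICE=eth0
TYPE=Ethernet
ONBOOT=yes
BRIDGE=br0

# /etc/sysconfig/network-scripts/ifcfg-br0
DEVICE=br0
TYPE=Bridge
ONBOOT=yes
BOOTPROTO=static
IPADDR=172.17.10.10
NETMASK=255.255.255.0

# /etc/sysconfig/network-scripts/ifcfg-eth1  (public)
DEVICE=eth1
TYPE=Ethernet
ONBOOT=yes
BRIDGE=br1

# /etc/sysconfig/network-scripts/ifcfg-br1
DEVICE=br1
TYPE=Bridge
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.100.10
NETMASK=255.255.255.0
```

After a network restart, `brctl show` should list eth0 under br0 and eth1 under br1 as in the output above; the vlan devices and per-vlan bridges are then created by the agent, not by hand.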
such as the
>> one represented in devcloud-kvm by
>> tools/devcloud-kvm/devcloud-kvm-advanced.cfg.
>>
>> On Wed, Jul 31, 2013 at 12:06 PM, Edison Su <edison...@citrix.com> wrote:
>>> The KVM installation guide at
>>> http://cloudstack.apache.org/docs/en-US/Apache_CloudStack/4.1.0/html/Installation_Guide/hypervisor-kvm-install-flow.html
>>> is unnecessarily complicated and inaccurate.
>>> For example, users don't need to configure vlans on the KVM host
>>> themselves; cloudstack-agent will create the vlans automatically.
>>> All users need to do is create bridges (if the default bridge created
>>> by cloudstack-agent is not enough), then add these bridge names in the
>>> CloudStack management server UI during zone creation.
>>>
>>> -----Original Message-----
>>> From: Noel Kendall [mailto:noeldkend...@hotmail.com]
>>> Sent: Wednesday, July 31, 2013 9:49 AM
>>> To: users@cloudstack.apache.org
>>> Subject: CS 4.1.0 - this will help a number of people who struggle with
>>> Advanced Networking
>>>
>>> The documentation for installation in a KVM environment is utterly
>>> misleading.
>>> It reads as though one can set up the bridge for the public network
>>> with any name one chooses, the default being cloudbr0.
>>> You cannot use just any old name. That simply will not work.
>>> Suppose I have a public network that I isolate on VLAN 5, which is
>>> interfaced on ethernet adapter eth4. I will need to define an adapter
>>> eth4.5 with VLAN set to yes.
>>> So far, so good.
>>> Next, for the bridge...
>>> By enabling debug output in the log, I was able to see that the code
>>> looks for a bridge named cloudVirBr5 for my public network.
>>> I tried several different approaches; none would work unless I named
>>> my bridge cloudVirBr5 and set the traffic label in the network
>>> configuration to the same.
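The manual setup Noel describes (public traffic on VLAN 5 over eth4, with the agent expecting a bridge named cloudVirBr5) would look roughly like the following network-scripts. A sketch of the legacy cloudVirBr naming only, as discussed earlier in the thread; device and bridge names are taken from Noel's example.

```shell
# /etc/sysconfig/network-scripts/ifcfg-eth4.5  (tagged public interface)
DEVICE=eth4.5
VLAN=yes
ONBOOT=yes
BRIDGE=cloudVirBr5

# /etc/sysconfig/network-scripts/ifcfg-cloudVirBr5  (bridge name the
# older agent code looks for: "cloudVirBr" + vlan id)
DEVICE=cloudVirBr5
TYPE=Bridge
ONBOOT=yes
BOOTPROTO=none
```

With this in place, the public traffic label in the zone configuration must also be set to cloudVirBr5 to match.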
>>> I have seen numerous posts in the mailing lists, blog entries, you
>>> name it, representing the frustrations of throngs of users trying to
>>> validate a CS setup.
>>> The documentation is utterly wrong and misleading.
>>> Summary:
>>> Does not work: traffic label cloudbr0 with eth4.5 pointing to cloudbr0.
>>> The code still tries to create a breth4.5 and enlist eth4.5 to it, but
>>> cannot, because it is already enlisted to cloudbr0.
>>> Good luck everyone with advanced networking with VLAN isolation on
>>> CentOS KVM hosts.
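Putting the thread's answer next to the failure mode: do not enlist the tagged eth4.5 device to your bridge yourself. Bridge the untagged NIC, use that bridge as the public traffic label, and enter the VLAN in the zone wizard so the agent creates the vlan device and per-vlan bridge itself. A hypothetical ad-hoc version of that host prep (run as root, bridge-utils installed; names reuse Noel's example):

```shell
# Bridge only the untagged physical NIC that carries public traffic.
brctl addbr cloudbr0
brctl addif cloudbr0 eth4
ip link set cloudbr0 up

# Then set 'cloudbr0' as the public traffic label in the zone wizard.
# Verify that eth4.5 is NOT already enlisted to cloudbr0, otherwise the
# agent cannot enlist it to the bridge it creates for the vlan:
brctl show cloudbr0    # should list eth4 only, not eth4.5
```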