Dear all,

I have a problem with advanced networking. I can't seem to wrap my head around it; I think I am missing something specific, and I am not sure whether my setup is as intended.

I have the following setup:

- 1 x server with the CloudStack management server installed, IP 192.168.10.11, gateway 192.168.10.1, netmask /23
- 1 x storage server with two NFS exports for primary and secondary storage, IP 192.168.10.21
- 1 x KVM server with an IP in the management network, 192.168.10.101


- The management server runs fine (the network interface is bonded (802.3ad) to two stacked switches).

- The storage runs fine and can be mounted on the KVM host and on the management server (a quick mount check is sketched after the interface config below).

- The template is downloaded and sits in secondary storage.

- The KVM server is configured as follows:

  - 1 x bond (802.3ad) with 4 physical links for the management network (192.168.10.0/23); the IP address 192.168.10.101 actually sits on a bridge on top of the bond:

<-- snip -->

auto bond0
iface bond0 inet manual
        bond-mode 4
        bond-miimon 100
        bond-lacp-rate 1
        bond-slaves em1 em2 em3 em4

auto mgmtbr0
iface mgmtbr0 inet static
        address 192.168.10.101
        netmask 255.255.254.0
        network 192.168.10.0
        broadcast 192.168.11.255
        gateway 192.168.10.1
        dns-nameservers 192.168.10.1
        bridge_ports bond0
        bridge_fd 5
        bridge_stp off
        bridge_maxwait 1

iface mgmtbr0 inet6 static
        address XXXX:XXXX:XXXX:17:0:5:0:101
        netmask 96
        gateway XXXX:XXXX:XXXX:17:0:5:0:1
        dns-nameservers XXXX:XXXX:XXXX:17:0:5:0:1

<-- snip -->
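
Purely as a sketch, this is roughly how I verify the exports from the KVM host; the export paths are placeholders, since I have not listed the real export names here:

<-- snip -->
# quick NFS sanity check from the KVM host (export paths are placeholders)
mkdir -p /mnt/primary-test /mnt/secondary-test
mount -t nfs 192.168.10.21:/export/primary /mnt/primary-test
mount -t nfs 192.168.10.21:/export/secondary /mnt/secondary-test
df -h /mnt/primary-test /mnt/secondary-test
umount /mnt/primary-test /mnt/secondary-test
<-- snip -->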

For the public and guest networks I prepared similar bonds: bond1 has only the "public VLAN" tagged on the switch side, while bond2 has ~50 VLANs tagged on the switch side. In both cases the tagging is correct on the link-aggregation interface; we tested it manually and it works fine (a sketch of that manual test follows the config snippet below).

<-- snip -->

# Public network
auto bond1
iface bond1 inet manual
        bond-mode 4
        bond-miimon 100
        bond-lacp-rate 1
        bond-slaves p3p1 p3p2

auto publicbr0
iface publicbr0 inet manual
    bridge_ports bond1
    bridge_fd 5
    bridge_stp off
    bridge_maxwait 1

# Guest network
auto bond2
iface bond2 inet manual
        bond-mode 4
        bond-miimon 100
        bond-lacp-rate 1
        bond-slaves p3p3 p3p4

auto guestbr0
iface guestbr0 inet manual
    bridge_ports bond2
    bridge_fd 5
    bridge_stp off
    bridge_maxwait 1

<-- snip -->
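
For reference, the manual VLAN test mentioned above looked roughly like this (the test IP and the target host are assumptions; VLAN 1016 is the guest VLAN used further down):

<-- snip -->
# manual VLAN check on the guest bond (test IP is an assumption)
ip link add link bond2 name bond2.1016 type vlan id 1016
ip link set bond2.1016 up
ip addr add 192.168.16.250/24 dev bond2.1016
ping -c 3 192.168.16.1        # or any other host known to be in that VLAN
ip link del bond2.1016        # clean up afterwards
<-- snip -->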

The goal is to have advanced networking with security groups.

The public IPs should be in 192.168.14.0/23.
The guest IPs should start out in 192.168.16.0/24.

During setup I enter the following:

Step 2 - Setup Zone

- DNS: 192.168.14.1
- Internal DNS: 192.168.10.1
- Hypervisor: KVM
- DefaultSharedNetworkOfferingWithSGService selected
- The rest stays empty (a rough API equivalent is sketched below)
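
Purely as a sketch, the zone part of this step should map to something like the following cloudmonkey call (the zone name is made up, and I am going by the API reference, so parameters may differ between versions):

<-- snip -->
create zone name=Zone01 networktype=Advanced securitygroupenabled=true dns1=192.168.14.1 internaldns1=192.168.10.1
<-- snip -->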

Step 3 - Physical Network

- Here I already do not understand why there is no "public network"
- I change "Physical Network 1" to Network01
- Isolation: VLAN
- "Management" and "Storage" go here, both with "mgmtbr0" as the traffic label
- The next network is "Network02"
- One of the "Guest" traffic types goes here and the label is "guestbr0" (rough API equivalent below)
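
Assuming the wizard maps to the usual API calls, the traffic labels above should correspond roughly to this (UUIDs are placeholders):

<-- snip -->
create physicalnetwork zoneid=<zone-uuid> name=Network01 isolationmethods=VLAN
add traffictype physicalnetworkid=<network01-uuid> traffictype=Management kvmnetworklabel=mgmtbr0
add traffictype physicalnetworkid=<network01-uuid> traffictype=Storage kvmnetworklabel=mgmtbr0
create physicalnetwork zoneid=<zone-uuid> name=Network02 isolationmethods=VLAN
add traffictype physicalnetworkid=<network02-uuid> traffictype=Guest kvmnetworklabel=guestbr0
<-- snip -->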

Step 3 - Pod

- Pod name: Pod01
- Reserved system gateway: 192.168.10.1 (it is in the management network)
- Netmask: 255.255.254.0
- Start reserved IP: 192.168.10.150
- End reserved IP: 192.168.10.199
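
Again only as a sketch, the pod step should be roughly equivalent to (the zone UUID is a placeholder):

<-- snip -->
create pod zoneid=<zone-uuid> name=Pod01 gateway=192.168.10.1 netmask=255.255.254.0 startip=192.168.10.150 endip=192.168.10.199
<-- snip -->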

Step 3 - Guest traffic

- Gateway: 192.168.16.1
- Netmask: 255.255.255.0
- Guest start IP: 192.168.16.10
- Guest end IP: 192.168.16.150
- VLAN ID: 1016 (rough API equivalent below)
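
As far as I understand, in an advanced zone with security groups this guest range becomes a shared network, so the step above should be roughly equivalent to (UUIDs and the network name are placeholders):

<-- snip -->
create network zoneid=<zone-uuid> networkofferingid=<uuid of DefaultSharedNetworkOfferingWithSGService> name=guest-vlan1016 displaytext=guest-vlan1016 acltype=Domain vlan=1016 gateway=192.168.16.1 netmask=255.255.255.0 startip=192.168.16.10 endip=192.168.16.150
<-- snip -->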

Step 3 - Storage traffic

- Gateway: 192.168.10.1
- Netmask: 255.255.254.0
- VLAN ID: empty (traffic is not tagged, see the bond0 setup)
- Start IP: 192.168.10.200 (directly after the pod IPs)
- End IP: 192.168.10.249
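
The storage range should correspond roughly to (the pod UUID is a placeholder):

<-- snip -->
create storagenetworkiprange podid=<pod-uuid> gateway=192.168.10.1 netmask=255.255.254.0 startip=192.168.10.200 endip=192.168.10.249
<-- snip -->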

Step 4 - Cluster

- Give it a name

Step 5 - Host

- Hostname: 192.168.10.101 (the KVM server's IP)
- Username and password: root / the_password (a rough API sketch for cluster and host follows below)
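
Assuming the wizard maps to the usual API calls, cluster and host should look roughly like this (UUIDs and the cluster name are placeholders):

<-- snip -->
add cluster zoneid=<zone-uuid> podid=<pod-uuid> clustername=Cluster01 hypervisor=KVM clustertype=CloudManaged
add host zoneid=<zone-uuid> podid=<pod-uuid> clusterid=<cluster-uuid> hypervisor=KVM url=http://192.168.10.101 username=root password=the_password
<-- snip -->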

Step 5 - Storage

- Primary and secondary storage from 192.168.10.21 (different exports; rough API equivalent below)
After this the setup finishes successfully.

And now I am stuck:

- Am I supposed to use the 192.168.16.0/24 network basically as a public network and NAT to these IPs on our firewall?

- Also, when I try to open the console of the secondary storage VM, which is running, I get a connection reset (to reach it I created 192.168.16.1 on the layer-3 switch as a gateway; before I did that I got a timeout).

- If I try to download an ISO file to install something new, the download does not start at all.
Any advice on where to go from here?


And sorry for this exhausting email, but I thought it would be better to be very specific.


Thanks in advance

Soeren


