I think my earlier mail did not get through; I can't find it in the archives. I probably sent it too soon after subscribing.
I started out with CloudStack a couple of days ago and have hit a bit of a brick wall. The installation runs on a system with two network interfaces, a public one and a private one (connected to a local network):

auto eth0
iface eth0 inet manual

auto eth1
iface eth1 inet static
        address 10.158.231.29
        netmask 255.255.255.0
        network 10.158.231.0
        broadcast 10.158.231.255
        gateway 10.158.231.1

auto cloudbr0
iface cloudbr0 inet static
        address 172.16.8.7
        netmask 255.255.255.0
        network 172.16.8.0
        broadcast 172.16.8.255
        up route add -net 172.16.0.0 netmask 255.255.0.0 gw 172.16.8.1
        bridge_ports eth0
        bridge_fd 5
        bridge_stp off
        bridge_maxwait 1

The idea is to connect all the guests to the 172.16 network, because they need access to multicasting devices there.

The systems are running Debian wheezy. I got the system up after fixing the /etc/legal problem (echo Ubuntu) and after finding out from the logs that the 4.3 cloudstack-manager package has a dependency problem.

The biggest problem seems to be that the host is running in a corporate network, and CloudStack auto-configures some things by downloading them over HTTP. This does not work here, since a corporate proxy is required. I downloaded the system images manually, but now it seems I need to do the same inside the secondary storage VM. The VM is running and I can get to the login prompt with virsh (KVM), but there is no password I can find. The trick of connecting over the link-local address does not work either; I fear I have a networking/firewall issue. Whether or not I disable the firewall, I cannot reach the 169.254.x.x network. Pinging the device returns:

ping 169.254.3.236
PING 169.254.3.236 (169.254.3.236) 56(84) bytes of data.
From 169.254.0.1 icmp_seq=1 Destination Host Unreachable
From 169.254.0.1 icmp_seq=2 Destination Host Unreachable
From 169.254.0.1 icmp_seq=3 Destination Host Unreachable

The firewall configuration is firehol:

cat /etc/firehol/firehol.conf
version 5

# Accept all client traffic on any interface
FIREHOL_LOG_MODE="ULOG"

server_cloudstackweb_ports="tcp/8080"
client_cloudstackweb_ports="default"
server_buildbot_ports="tcp/8010"
client_buildbot_ports="default"
server_git_ports="tcp/9418"
client_git_ports="default"

labo_ips="172.16.0.0/16 169.254.0.0/16"

server_cloudstack_ports="tcp/1798"
client_cloudstack_ports="default"
server_libvirt_ports="tcp/16509"
client_libvirt_ports="default"
server_vnc_ports="tcp/5900:6100"
client_vnc_ports="default"
server_libvirtlive_ports="tcp/49152:49216"
client_libvirtlive_ports="default"

# local bridge address
interface cloudbr0 LAN src "${labo_ips}"
        server all accept
        client all accept

interface eth1 WAN src not "${labo_ips}"
        protection strong
        policy reject
        server ssh accept
        server cloudstack accept
        server cloudstackweb accept
        server libvirt accept
        server vnc accept
        server libvirtlive accept
        server buildbot accept
        client all accept

# zeroconf bridge address
interface cloud0 LLBR0
        client all accept
        server all accept

router LAN2WAN inface cloudbr0 outface eth1
        masquerade
        route all accept

I got a bit further by adding the cloud0 interface to the configuration, but still no joy. I need to get into the VM to run a script with an http_proxy environment variable set... I am currently starting to run out of ideas; installing CloudStack went fine, but I entered hell afterwards...
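To be explicit about the trick I mean: system VMs on KVM have no console password; the documented way in is SSH from the host itself over the link-local address, using the management key the agent drops on the hypervisor. A minimal sketch of what I am trying to run once the link-local connection works (the key path and port are the stock CloudStack defaults; the proxy URL is a placeholder for the real corporate proxy):

```shell
# Access sketch for the secondary storage VM, from its own KVM host.
SSVM_IP=169.254.3.236           # link-local address of the SSVM
KEY=/root/.ssh/id_rsa.cloud     # management key installed by the cloudstack agent
PORT=3922                       # system VMs listen for SSH on 3922, not 22

# Printed rather than executed, since it only works on the host itself:
echo "ssh -i ${KEY} -p ${PORT} root@${SSVM_IP}"
# Then, inside the VM, something like (proxy URL is a placeholder):
echo "export http_proxy=http://proxy.example.com:3128"
```

If the link-local routing can be fixed, this removes the need for a password entirely.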
/sbin/ifconfig

cloud0    Link encap:Ethernet  HWaddr fe:00:a9:fe:01:f8
          inet addr:169.254.0.1  Bcast:169.254.255.255  Mask:255.255.0.0
          inet6 addr: fe80::4061:7aff:fe05:f240/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:4278 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:193752 (189.2 KiB)

cloudbr0  Link encap:Ethernet  HWaddr 9c:8e:99:26:6d:e4
          inet addr:172.16.8.7  Bcast:172.16.8.255  Mask:255.255.255.0
          inet6 addr: fe80::9e8e:99ff:fe26:6de4/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:123565 errors:0 dropped:0 overruns:0 frame:0
          TX packets:58009 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:42680449 (40.7 MiB)  TX bytes:236280976 (225.3 MiB)

eth0      Link encap:Ethernet  HWaddr 9c:8e:99:26:6d:e4
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:930187 errors:0 dropped:871 overruns:0 frame:0
          TX packets:699239 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:314733647 (300.1 MiB)  TX bytes:565773302 (539.5 MiB)

eth1      Link encap:Ethernet  HWaddr 9c:8e:99:26:6d:e6
          inet addr:10.158.231.29  Bcast:150.158.231.255  Mask:255.255.255.0
          inet6 addr: fe80::9e8e:99ff:fe26:6de6/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:676481 errors:0 dropped:0 overruns:0 frame:0
          TX packets:471296 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:681352598 (649.7 MiB)  TX bytes:121936012 (116.2 MiB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:8578260 errors:0 dropped:0 overruns:0 frame:0
          TX packets:8578260 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:4369297693 (4.0 GiB)  TX bytes:4369297693 (4.0 GiB)

vnet0     Link encap:Ethernet  HWaddr fe:00:a9:fe:01:f8
          inet6 addr: fe80::fc00:a9ff:fefe:1f8/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:281 errors:0 dropped:3656 overruns:0 carrier:0
          collisions:0 txqueuelen:500
          RX bytes:0 (0.0 B)  TX bytes:14269 (13.9 KiB)

vnet1     Link encap:Ethernet  HWaddr fe:30:ac:00:00:03
          inet6 addr: fe80::fc30:acff:fe00:3/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:284 errors:0 dropped:47331 overruns:0 carrier:0
          collisions:0 txqueuelen:500
          RX bytes:0 (0.0 B)  TX bytes:23713 (23.1 KiB)

vnet2     Link encap:Ethernet  HWaddr fe:c8:d4:00:00:08
          inet6 addr: fe80::fcc8:d4ff:fe00:8/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:284 errors:0 dropped:47331 overruns:0 carrier:0
          collisions:0 txqueuelen:500
          RX bytes:0 (0.0 B)  TX bytes:23713 (23.1 KiB)

vnet3     Link encap:Ethernet  HWaddr fe:00:a9:fe:03:ec
          inet6 addr: fe80::fc00:a9ff:fefe:3ec/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:256 errors:0 dropped:3175 overruns:0 carrier:0
          collisions:0 txqueuelen:500
          RX bytes:0 (0.0 B)  TX bytes:25346 (24.7 KiB)

vnet4     Link encap:Ethernet  HWaddr fe:32:ac:00:00:04
          inet6 addr: fe80::fc32:acff:fe00:4/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:256 errors:0 dropped:9386 overruns:0 carrier:0
          collisions:0 txqueuelen:500
          RX bytes:0 (0.0 B)  TX bytes:20313 (19.8 KiB)

vnet5     Link encap:Ethernet  HWaddr fe:26:52:00:00:18
          inet6 addr: fe80::fc26:52ff:fe00:18/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:256 errors:0 dropped:9386 overruns:0 carrier:0
          collisions:0 txqueuelen:500
          RX bytes:0 (0.0 B)  TX bytes:20313 (19.8 KiB)

vnet6     Link encap:Ethernet  HWaddr fe:e4:38:00:00:05
          inet6 addr: fe80::fce4:38ff:fe00:5/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:256 errors:0 dropped:9386 overruns:0 carrier:0
          collisions:0 txqueuelen:500
          RX bytes:0 (0.0 B)  TX bytes:20313 (19.8 KiB)

/sbin/route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         150.158.231.1   0.0.0.0         UG    0      0        0 eth1
10.158.231.0    0.0.0.0         255.255.255.0   U     0      0        0 eth1
169.254.0.0     0.0.0.0         255.255.0.0     U     0      0        0 cloud0
172.16.0.0      0.0.0.0         255.255.0.0     U     0      0        0 cloudbr0
172.16.8.0      0.0.0.0         255.255.255.0   U     0      0        0 cloudbr0

UPDATE: I've managed to get link-local working by modifying the routing (the firewall is disabled for the moment too):

/sbin/route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         10.158.231.1    0.0.0.0         UG    0      0        0 eth1
150.158.231.0   0.0.0.0         255.255.255.0   U     0      0        0 eth1
169.254.0.0     0.0.0.0         255.255.0.0     U     0      0        0 cloudbr0
172.16.0.0      0.0.0.0         255.255.0.0     U     0      0        0 cloudbr0
172.16.8.0      0.0.0.0         255.255.255.0   U     0      0        0 cloudbr0

However, this only works to OTHER machines; I still cannot reach the secondary storage VM at its link-local address. So link-local addresses work to other physical machines, but not to the VM on the same host I am connecting from (nor from another machine, for that matter).
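One observation that may be relevant: vnet0 shares its MAC (fe:00:a9:fe:01:f8) with cloud0, so the SSVM's link-local NIC seems to hang off the cloud0 bridge. My suspicion is that routing 169.254.0.0/16 via cloudbr0 sends the ARP requests out the wrong bridge, which would explain why other physical machines answer but the local VM never does. The checks I plan to run next are plain iproute2/bridge-utils commands, printed here as a dry-run list:

```shell
# Dry-run list of checks for the "works to other hosts, not to the
# local VM" symptom. What each command would tell me:
#   brctl show             -- which bridge each vnet interface is enslaved to
#   ip route get <ip>      -- which interface the host actually routes 169.254.x.x out of
#   tcpdump -ni cloud0 arp -- whether ARP requests for the SSVM are ever answered
SSVM_IP=169.254.3.236
for cmd in \
    "brctl show" \
    "ip route get ${SSVM_IP}" \
    "tcpdump -ni cloud0 arp"; do
    echo "${cmd}"    # printed for review; run them on the host
done
```

If vnet0 really is on cloud0, the fix would presumably be to keep the 169.254.0.0/16 route on cloud0 and open the firewall there, rather than move the route to cloudbr0.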