Hi,

Have you tried pinging the Console Proxy from the SSVM?

It is possible to log in to these VMs using the following guidance:
http://cloudstack.apache.org/docs/en-US/Apache_CloudStack/4.2.0/html/Admin_Guide/accessing-system-vms.html

Sorry about the old link; I couldn't find this in the new documentation.
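
As a rough sketch (assuming the default CloudStack SSH key location on a KVM
host; adjust the IPs and paths to your setup), you could SSH into the SSVM
from the hypervisor via its link-local address and try to reach the console
proxy from inside it:

  # on the KVM host, using the SSVM's link-local IP (169.254.0.55 in your case)
  ssh -i /root/.ssh/id_rsa.cloud -p 3922 root@169.254.0.55

  # once inside the SSVM, try the console proxy's private and public IPs
  ping -c 3 192.168.22.116
  ping -c 3 192.168.22.3

The SSVM also carries a built-in health-check script; if I remember correctly
it lives at /usr/local/cloud/systemvm/ssvm-check.sh and tests DNS, NFS and
management server reachability.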

Marty

On 17 April 2014 20:23, Ana Paula de Sousa <apso0...@gmail.com> wrote:
> Hi,
> I'm currently trying CloudStack for my college research project and I've
> been struggling with a few things.
> First of all, I have two machines running Ubuntu (with the firewall disabled
> on both of them); one is acting as the hypervisor (with primary local
> storage and KVM) and the other as the management server (with the secondary
> storage). These machines are in a lab where they are connected physically
> to each other and also to the lab's internal network, which provides them
> with internet access.
>
> My network is like this:
>
> - I have the 10.16.22... range to connect the hypervisor and management
> server to the internet;
> - I have the 192.168.22... range to connect the hypervisor with the
> management server.
>
> On my hypervisor I have 4 bridges: cloud0 (created automatically by
> CloudStack, with IP 169.254.0.1), cloudbr1 (with IP 10.16.22.100), cloudbr2
> (with IP 192.168.22.70) and virbr0 (created automatically by KVM, with IP
> 192.168.122.1). When I type brctl show, it prints the following:
>
> bridge name    bridge id        STP enabled    interfaces
> cloud0        8000.00e04c681730    no        eth0
>                             vnet0
>                             vnet3
>                             vnet7
> cloudbr1    8000.1c6f65d74a4b    no        eth2
> cloudbr2    8000.5cd998b16f2d    no        eth1
>                             vnet1
>                             vnet2
>                             vnet4
>                             vnet5
>                             vnet6
> virbr0        8000.000000000000    yes
>
> As we can see, cloud0 is linked to eth0, cloudbr1 is linked to eth2 and
> cloudbr2 is linked to eth1.
>
> On my Management Server I don't have any bridges, but I have 2 interfaces,
> eth0 (with IP 192.168.22.71) and eth2 (with IP 10.16.22.101).
>
> On both machines eth2 is the interface connecting them to the internet, and
> they are linked physically through eth1 (on the hypervisor) and eth0 (on
> the management server).
>
> I created a basic zone with the following information:
>
> - IPv4 DNS1: 8.8.8.8, Internal DNS 1: 8.8.4.4, DefaultSharedNetworkOffering;
> - Pod has the IP range of 192.168.22.100 to 192.168.22.150 and the gateway
> 192.168.22.1;
> - The management traffic has the IP range of 192.168.22.2 to 192.168.22.20
> and also the gateway 192.168.22.1;
> - I'm also using 192.168.22.70 (my hypervisor) as the host IP and
> 192.168.22.71 as the secondary storage IP (since the secondary storage is
> on the management server);
>
> I enabled the zone and it created 2 System VMs.
>
> The Console Proxy VM has the following interfaces/IP addresses:
>
> Public IP Address    192.168.22.3
> Private IP Address    192.168.22.116
> Link Local IP Address    169.254.0.69
>
> And the SSVM:
>
> Public IP Address    192.168.22.2
> Private IP Address    192.168.22.135
> Link Local IP Address    169.254.0.55
>
> I can ping all the interfaces of my hypervisor from the management
> server (except cloud0) and vice versa.
>
> Both System VMs show as "Running", but I'm unable to ping them from my
> hypervisor and I can't figure out why.
>
> If there is any information missing here that could help resolve this
> problem, I would gladly provide it.
>
> Thanks.
>
> --
> Ana Paula de Sousa Oliveira
> Undergraduate student in Computer Science
> Universidade Federal de Goiás
