Make sure you have security groups configured to allow SSH (and ICMP, if you
want ping) access to the VMs.
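
For example, with CloudMonkey (the group name "default" below is just the
default security group – substitute whatever group your VMs were deployed
into), the ingress rules could look something like:

    authorize securitygroupingress securitygroupname=default protocol=TCP startport=22 endport=22 cidrlist=0.0.0.0/0
    authorize securitygroupingress securitygroupname=default protocol=ICMP icmptype=8 icmpcode=0 cidrlist=0.0.0.0/0

The first rule opens SSH (TCP/22) from anywhere, the second allows ICMP echo
requests so the VMs answer ping. The same rules can be added through the UI
under the security group's ingress rules.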

On Thu, Feb 16, 2017 at 4:31 PM, Dag Sonstebo <dag.sonst...@shapeblue.com>
wrote:

> Hi John,
>
> Thanks for clarifying. Got a few more questions regarding your design:
>
> First of all – were you planning on using two zones, or were you planning
> on using one zone with two hypervisors?
>
> Secondly – you’ve mentioned two subnets (rather than two VLANs) –
> 192.168.30.0/24 and 192.168.10.0/24 – how were you planning on using
> these? Which of these is your management network (where your hypervisors,
> management and storage lives) and which one is your guest network? Do you
> have L3 routing between these? How did you map the traffic types
> (management, guest) to your cloudbridges on your KVM hosts?
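>
> (For reference, that mapping is normally expressed either as KVM traffic
> labels on the zone's physical network, or on the host itself in
> /etc/cloudstack/agent/agent.properties – illustrative values only:
>
> guest.network.device=cloudbr1
> private.network.device=cloudbr0
> public.network.device=cloudbr0
>
> so it is worth checking what yours currently point at.)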
>
> With regards to your questions:
>
> > *The good:*
> > I can create VMs on either of the hosts. I'm able to ping the VMs and even
> > ssh into them, but only if I'm on the host or the management server, or
> > from the ACS console itself (within the network).
>
> This doesn’t quite make sense – did you configure security groups to allow
> ICMP and SSH? If not, your networking is not right – you should not be able
> to do this unless you allow the traffic through security groups.
>
> > *The Issue:*
> > I can't ssh to or even ping the VMs from machines on the same network but
> > outside the host environment. What could be the problem?
>
> As above – this all depends on your security groups.
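>
> (A quick way to check on the KVM host itself: in a security-group-enabled
> zone CloudStack programs the rules as iptables chains named after the
> instance's internal name – "i-2-3-VM" below is just a placeholder:
>
> sudo iptables -S | grep i-2-3-VM
>
> Any SSH/ICMP ingress rules you have allowed should show up there; if the
> chain has nothing beyond the defaults, the traffic is dropped before it
> ever reaches the VM.)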
>
> > A. Management Server network config is as below:
> > B. The KVM host network configuration is as below:
>
> OK, I'm not sure what you are trying to achieve. With the resources you have
> listed I would probably do something like the following:
>
> - Configure one zone with one pod, one cluster and two hypervisors.
> - Configure 192.168.10.0/24 as your Management network – put all three
> hosts on this and map the traffic to cloudbr0.
> - Configure 192.168.30.0/24 as your guest network, map traffic to
> cloudbr1.
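>
> As a very rough sketch (assuming a second NIC, em2, carries the guest
> subnet – that NIC name is purely illustrative, substitute your own second
> interface or a tagged VLAN sub-interface), each host's
> /etc/network/interfaces could then look something like:
>
> -----------------------------------
> auto lo
> iface lo inet loopback
>
> # Management network (192.168.10.0/24) -> cloudbr0
> auto em1
> iface em1 inet manual
>
> auto cloudbr0
> iface cloudbr0 inet static
>     address 192.168.10.12
>     netmask 255.255.255.0
>     gateway 192.168.10.254
>     dns-nameservers 192.168.10.254 4.2.2.2
>     bridge_ports em1
>     bridge_fd 5
>     bridge_stp off
>     bridge_maxwait 1
>
> # Guest network (192.168.30.0/24) -> cloudbr1
> auto em2
> iface em2 inet manual
>
> auto cloudbr1
> iface cloudbr1 inet manual
>     bridge_ports em2
>     bridge_fd 5
>     bridge_stp off
>     bridge_maxwait 1
> -----------------------------------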
>
> Hope this makes sense.
>
> Regards,
> Dag Sonstebo
> Cloud Architect
> ShapeBlue
>
> On 15/02/2017, 14:19, "John Adams" <adams.op...@gmail.com> wrote:
>
>     Hi Boris,
>
>     Thanks for your response. Yes I'm building a basic zone, just for
> starters.
>
>
>     --John O. Adams
>
>     On 15 February 2017 at 16:32, Boris Stoyanov <boris.stoya...@shapeblue.com> wrote:
>
>     > Hi John,
>     >
>     > Maybe I misunderstood, are you building advanced or basic zone?
>     >
>     > Thanks,
>     > Boris Stoyanov
>     >
>     >
>     > On Feb 15, 2017, at 12:34 PM, John Adams <adams.op...@gmail.com>
> wrote:
>     >
>     > Hi Boris,
>     >
>     > I think I'm actually using the Shared network offering. The VMs being
>     > created are in the same physical network subnet. Isolation is an
>     > option but I'm not using that at this point.
>     >
>     > Thanks.
>     >
>     >
>     > --John O. Adams
>     >
>     > On 15 February 2017 at 11:50, Boris Stoyanov <boris.stoya...@shapeblue.com> wrote:
>     >
>     >> Hi John,
>     >>
>     >> In isolated networks VMs should be accessed only through the virtual
>     >> router IP.
>     >>
>     >> To access the VM over ssh, you should go to the network settings and
>     >> enable a port on the Virtual Router IP. Then create a port forwarding
>     >> rule from that enabled port to port 22 on the specific VM within that
>     >> network. After that, try to ssh to the enabled port on the VR and you
>     >> should end up in the VM.
>     >>
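>     >> For example, with CloudMonkey it would look roughly like this (the
>     >> UUIDs are placeholders for your network, the VR's public/source-NAT
>     >> IP and the target VM):
>     >>
>     >> list publicipaddresses associatednetworkid=<network-uuid>
>     >> create portforwardingrule ipaddressid=<ip-uuid> protocol=TCP publicport=2222 privateport=22 virtualmachineid=<vm-uuid>
>     >>
>     >> and then: ssh -p 2222 <user>@<that public IP>
>     >>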
>     >> PS. In isolated networks you shouldn’t be able to ping the VM; all
>     >> the traffic goes through the VR.
>     >>
>     >> Thanks,
>     >> Boris Stoyanov
>     >>
>     >>
>     >>
>     >> > On Feb 15, 2017, at 8:37 AM, John Adams <adams.op...@gmail.com>
> wrote:
>     >> >
>     >> > Hi all,
>     >> >
>     >> > Still learning the ropes in a test environment. Hitting a little
>     >> > snag with networking. The physical network has 2 VLANs
>     >> > (192.168.10.0 and 192.168.30.0).
>     >> >
>     >> > This is my current ACS testing environment:
>     >> >
>     >> > 1 management server (Ubuntu 14.04): 192.168.30.14
>     >> > 2 KVM hosts (Ubuntu 14.04): 192.168.10.12 and 192.168.30.12
>     >> >
>     >> > With that, I created 2 different zones, each with 1 pod, 1 cluster
>     >> > and 1 host respectively.
>     >> >
>     >> > *The good:*
>     >> > I can create VMs on either of the hosts. I'm able to ping the VMs
>     >> > and even ssh into them, but only if I'm on the host or the
>     >> > management server, or from the ACS console itself (within the
>     >> > network).
>     >> >
>     >> > *The Issue:*
>     >> > I can't ssh to or even ping the VMs from machines on the same
>     >> > network but outside the host environment. What could be the problem?
>     >> >
>     >> > A. Management Server network config is as below:
>     >> > ---------------------------------------------
>     >> > auto lo
>     >> > iface lo inet loopback
>     >> >
>     >> > auto eth0
>     >> > iface eth0 inet static
>     >> >     address 192.168.30.14
>     >> >     netmask 255.255.255.0
>     >> >     gateway 192.168.30.254
>     >> >     dns-nameservers 192.168.30.254 4.2.2.2
>     >> >     #dns-domain cloudstack.et.test.local
>     >> > ---------------------------------------------
>     >> >
>     >> > B. The KVM host network configuration is as below:
>     >> >
>     >> > Host 1: .10
>     >> > -----------------------------------------
>     >> > # interfaces(5) file used by ifup(8) and ifdown(8)
>     >> > auto lo
>     >> > iface lo inet loopback
>     >> >
>     >> > # The primary network interface
>     >> > auto em1
>     >> > iface em1 inet manual
>     >> >
>     >> > # Public network
>     >> > auto cloudbr0
>     >> > iface cloudbr0 inet static
>     >> >     address 192.168.10.12
>     >> >     network 192.168.10.0
>     >> >     netmask 255.255.255.0
>     >> >     gateway 192.168.10.254
>     >> >     broadcast 192.168.10.255
>     >> >     dns-nameservers 192.168.10.254 4.2.2.2
>     >> >     #dns-domain cloudstack.et.test.local
>     >> >     bridge_ports em1
>     >> >     bridge_fd 5
>     >> >     bridge_stp off
>     >> >     bridge_maxwait 1
>     >> >
>     >> > # Private network (not in use for now. Just using 1 bridge)
>     >> > auto cloudbr1
>     >> > iface cloudbr1 inet manual
>     >> >     bridge_ports none
>     >> >     bridge_fd 5
>     >> >     bridge_stp off
>     >> >     bridge_maxwait 1
>     >> > -----------------------------------
>     >> >
>     >> >
>     >> > Host 2: .30
>     >> > -----------------------------------
>     >> > # interfaces(5) file used by ifup(8) and ifdown(8)
>     >> > auto lo
>     >> > iface lo inet loopback
>     >> >
>     >> > # The primary network interface
>     >> > auto em1
>     >> > iface em1 inet manual
>     >> >
>     >> > # Public network
>     >> > auto cloudbr0
>     >> > iface cloudbr0 inet static
>     >> >     address 192.168.30.12
>     >> >     network 192.168.30.0
>     >> >     netmask 255.255.255.0
>     >> >     gateway 192.168.30.254
>     >> >     broadcast 192.168.30.255
>     >> >     dns-nameservers 192.168.30.254 4.2.2.2
>     >> >     #dns-domain cloudstack.et.test.local
>     >> >     bridge_ports em1
>     >> >     bridge_fd 5
>     >> >     bridge_stp off
>     >> >     bridge_maxwait 1
>     >> >
>     >> > # Private network (not in use for now. Just using 1 bridge)
>     >> > auto cloudbr1
>     >> > iface cloudbr1 inet manual
>     >> >     bridge_ports none
>     >> >     bridge_fd 5
>     >> >     bridge_stp off
>     >> >     bridge_maxwait 1
>     >> > -----------------------------------
>     >> >
>     >> >
>     >> > --John O. Adams
>     >>
>     >>
>     >
>     >
>
>
>


-- 
Best Regards,
Sanjeev N
Chief Product Engineer@Accelerite
