Re: Alternative Cloudstack UI for KVM and Basic Zones (with SG)

2017-04-25 Thread John Adams
This is great!!!


--John O. Adams

On 25 April 2017 at 10:11, Ivan Kudryavtsev wrote:

> Hello, Cloudstack community.
>
> We are proud to present our latest development effort to you. Over the
> last 5 months we have been developing an alternative CloudStack UI for
> basic zones with the KVM hypervisor and security groups. This is
> basically what we use in our own clouds. While designing the software
> we tried to meet the expectations of our average cloud users and to
> simplify operations as much as possible.
>
> The project is OSS and can be found on GitHub with a bunch of
> screenshots and a deployment guide. It's under active development, so
> we will be glad if you join in and provide us with additional feedback,
> UX considerations and other interesting information.
>
> Project page at GitHub: https://bwsw.github.io/cloudstack-ui/
> Source code: https://github.com/bwsw/cloudstack-ui
>
> Have a good day. Looking forward to hearing your feedback.
>
> --
> With best regards, Ivan Kudryavtsev
> Bitworks Software, Ltd.
> Cell: +7-923-414-1515
> WWW: http://bw-sw.com/
>


Re: Basic Networking (ACS 4.9) --Allow VMs access from Local Area Network

2017-02-15 Thread John Adams
Hi Boris,

Thanks for your response. Yes, I'm building a basic zone, just for starters.


--John O. Adams

On 15 February 2017 at 16:32, Boris Stoyanov <boris.stoya...@shapeblue.com>
wrote:

> Hi John,
>
> Maybe I misunderstood: are you building an advanced or a basic zone?
>
> Thanks,
> Boris Stoyanov
>
> boris.stoya...@shapeblue.com
> www.shapeblue.com
> @shapeblue
>
>
>
>
> On Feb 15, 2017, at 12:34 PM, John Adams <adams.op...@gmail.com> wrote:
>
> Hi Boris,
>
> I think I'm actually using the Shared network offering. The VMs being
> created are in the same physical network subnet. Isolation is an
> option, but I'm not using it at this point.
>
> Thanks.
>
>
> --John O. Adams
>
> On 15 February 2017 at 11:50, Boris Stoyanov <boris.stoya...@shapeblue.com
> > wrote:
>
>> Hi John,
>>
>> In isolated networks, VMs are reachable only through the virtual
>> router IP.
>>
>> To access a VM over SSH, go to the network settings and enable a port
>> on the Virtual Router IP, then create a port forwarding rule from that
>> enabled port to port 22 on the specific VM within that network. After
>> that, SSH to the enabled port on the VR and you should end up in the
>> VM.
>>
>> PS: In isolated networks you shouldn't be able to ping the VM; all the
>> traffic goes through the VR.
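>>
>> For example, with CloudMonkey the same steps would look roughly like
>> this (the UUIDs and the public port below are only placeholders):
>>
>>   list publicipaddresses associatednetworkid=<network-uuid>
>>   create portforwardingrule ipaddressid=<public-ip-uuid> protocol=tcp publicport=2222 privateport=22 virtualmachineid=<vm-uuid>
>>
>> Depending on the network offering, openfirewall=true on that call (or
>> a separate firewall rule) may also be needed; after that,
>> "ssh -p 2222 user@<public-ip>" should land on the VM.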
>>
>> Thanks,
>> Boris Stoyanov
>>
>>
>>
>> boris.stoya...@shapeblue.com
>> www.shapeblue.com
>> 53 Chandos Place, Covent Garden, London  WC2N 4HSUK
>> @shapeblue
>>
>>
>>
>> > On Feb 15, 2017, at 8:37 AM, John Adams <adams.op...@gmail.com> wrote:
>> >
>> > Hi all,
>> >
>> > Still learning the ropes in a test environment here, and hitting a
>> > little snag with networking. The physical network has 2 VLANs
>> > (192.168.10.0 and 192.168.30.0).
>> >
>> > This is my current ACS testing environment:
>> >
>> > 1 management server (Ubuntu 14.04): 192.168.30.14
>> > 2 KVM hosts (Ubuntu 14.04): 192.168.10.12 and 192.168.30.12
>> >
>> > With that, I created 2 different zones, each with 1 pod, 1 cluster
>> > and 1 host.
>> >
>> > *The good:*
>> > I can create VMs on either of the hosts. I'm able to ping the VMs and
>> > even ssh into them, but only from the host, the management server, or
>> > the ACS console itself (within the network).
>> >
>> > *The Issue:*
>> > I can't ssh to or even ping the VMs from elsewhere on the same
>> > network, outside the host environment. What could be the problem?
>> >
>> > A. Management Server network config is as below:
>> > -
>> > *auto lo*
>> > *iface lo inet loopback*
>> >
>> > *auto eth0*
>> > *iface eth0 inet static*
>> > *   address 192.168.30.14*
>> > *   netmask 255.255.255.0*
>> > *   gateway 192.168.30.254*
>> >   *dns-nameservers 192.168.30.254 4.2.2.2*
>> >   *#dns-domain cloudstack.et.test.local*
>> > -
>> >
>> > B. The KVM host network configuration is as below:
>> >
>> > Host 1: .10
>> > -
>> >
>> > *# interfaces(5) file used by ifup(8) and ifdown(8)*
>> >
>> > *auto lo*
>> >
>> > *iface lo inet loopback*
>> >
>> > *# The primary network interface*
>> >
>> > *auto em1*
>> >
>> > *iface em1 inet manual*
>> >
>> >
>> > *# Public network*
>> >
>> > *   auto cloudbr0*
>> >
>> > *   iface cloudbr0 inet static*
>> >
>> > *address 192.168.10.12*
>> >
>> > *network 192.168.10.0*
>> >
>> > *netmask 255.255.255.0*
>> >
>> > *gateway 192.168.10.254*
>> >
>> > *broadcast 192.168.10.255*
>> >
>> > *dns-nameservers 192.168.10.254 4.2.2.2*
>> >
>> > *#dns-domain cloudstack.et.test.local*
>> >
>> > *bridge_ports em1*
>> >
>> > *bridge_fd 5*
>> >
>> > *bridge_stp off*
>> >
>> > *bridge_maxwait 1*
>> >
>> >
>> > *# Private network (not in use for now. Just using 1 bridge)*
>> >
>> > *auto cloudbr1*
>> >
>> > * 

Re: Basic Networking (ACS 4.9) --Allow VMs access from Local Area Network

2017-02-15 Thread John Adams
Hi Boris,

I think I'm actually using the Shared network offering. The VMs being
created are in the same physical network subnet. Isolation is an option,
but I'm not using it at this point.

Thanks.


--John O. Adams

On 15 February 2017 at 11:50, Boris Stoyanov <boris.stoya...@shapeblue.com>
wrote:

> Hi John,
>
> In isolated networks, VMs are reachable only through the virtual
> router IP.
>
> To access a VM over SSH, go to the network settings and enable a port
> on the Virtual Router IP, then create a port forwarding rule from that
> enabled port to port 22 on the specific VM within that network. After
> that, SSH to the enabled port on the VR and you should end up in the
> VM.
>
> PS: In isolated networks you shouldn't be able to ping the VM; all the
> traffic goes through the VR.
>
> Thanks,
> Boris Stoyanov
>
>
>
> boris.stoya...@shapeblue.com
> www.shapeblue.com
> 53 Chandos Place, Covent Garden, London  WC2N 4HSUK
> @shapeblue
>
>
>
> > On Feb 15, 2017, at 8:37 AM, John Adams <adams.op...@gmail.com> wrote:
> >
> > Hi all,
> >
> > Still learning the ropes in a test environment here, and hitting a
> > little snag with networking. The physical network has 2 VLANs
> > (192.168.10.0 and 192.168.30.0).
> >
> > This is my current ACS testing environment:
> >
> > 1 management server (Ubuntu 14.04): 192.168.30.14
> > 2 KVM hosts (Ubuntu 14.04): 192.168.10.12 and 192.168.30.12
> >
> > With that, I created 2 different zones, each with 1 pod, 1 cluster
> > and 1 host.
> >
> > *The good:*
> > I can create VMs on either of the hosts. I'm able to ping the VMs and
> > even ssh into them, but only from the host, the management server, or
> > the ACS console itself (within the network).
> >
> > *The Issue:*
> > I can't ssh to or even ping the VMs from elsewhere on the same
> > network, outside the host environment. What could be the problem?
> >
> > A. Management Server network config is as below:
> > -
> > *auto lo*
> > *iface lo inet loopback*
> >
> > *auto eth0*
> > *iface eth0 inet static*
> > *   address 192.168.30.14*
> > *   netmask 255.255.255.0*
> > *   gateway 192.168.30.254*
> >   *dns-nameservers 192.168.30.254 4.2.2.2*
> >   *#dns-domain cloudstack.et.test.local*
> > -
> >
> > B. The KVM host network configuration is as below:
> >
> > Host 1: .10
> > -
> >
> > *# interfaces(5) file used by ifup(8) and ifdown(8)*
> >
> > *auto lo*
> >
> > *iface lo inet loopback*
> >
> > *# The primary network interface*
> >
> > *auto em1*
> >
> > *iface em1 inet manual*
> >
> >
> > *# Public network*
> >
> > *   auto cloudbr0*
> >
> > *   iface cloudbr0 inet static*
> >
> > *address 192.168.10.12*
> >
> > *network 192.168.10.0*
> >
> > *netmask 255.255.255.0*
> >
> > *gateway 192.168.10.254*
> >
> > *broadcast 192.168.10.255*
> >
> > *dns-nameservers 192.168.10.254 4.2.2.2*
> >
> > *#dns-domain cloudstack.et.test.local*
> >
> > *bridge_ports em1*
> >
> > *bridge_fd 5*
> >
> > *bridge_stp off*
> >
> > *bridge_maxwait 1*
> >
> >
> > *# Private network (not in use for now. Just using 1 bridge)*
> >
> > *auto cloudbr1*
> >
> > *iface cloudbr1 inet manual*
> >
> > *bridge_ports none*
> >
> > *bridge_fd 5*
> >
> > *bridge_stp off*
> >
> > *bridge_maxwait 1*
> > ---
> >
> >
> > Host 2: .30
> > ---
> >
> > *# interfaces(5) file used by ifup(8) and ifdown(8)*
> >
> > *auto lo*
> >
> > *iface lo inet loopback*
> >
> > *# The primary network interface*
> >
> > *auto em1*
> >
> > *iface em1 inet manual*
> >
> >
> > *# Public network*
> >
> > *   auto cloudbr0*
> >
> > *   iface cloudbr0 inet static*
> >
> > *address 192.168.30.12*
> >
> > *network 192.168.30.0*
> >
> > *netmask 255.255.255.0*
> >
> > *gateway 192.168.30.254*
> >
> > *broadcast 192.168.30.255*
> >
> > *dns-nameservers 192.168.30.254 4.2.2.2*
> >
> > *#dns-domain cloudstack.et.test.local*
> >
> > *bridge_ports em1*
> >
> > *bridge_fd 5*
> >
> > *bridge_stp off*
> >
> > *bridge_maxwait 1*
> >
> >
> > *# Private network (not in use for now. Just using 1 bridge)*
> >
> > *auto cloudbr1*
> >
> > *iface cloudbr1 inet manual*
> >
> > *bridge_ports none*
> >
> > *bridge_fd 5*
> >
> > *bridge_stp off*
> >
> > *bridge_maxwait 1*
> >
> > ---
> >
> >
> > --John O. Adams
>
>


Basic Networking (ACS 4.9) --Allow VMs access from Local Area Network

2017-02-14 Thread John Adams
Hi all,

Still learning the ropes in a test environment here, and hitting a little
snag with networking. The physical network has 2 VLANs (192.168.10.0 and
192.168.30.0).

This is my current ACS testing environment:

1 management server (Ubuntu 14.04): 192.168.30.14
2 KVM hosts (Ubuntu 14.04): 192.168.10.12 and 192.168.30.12

With that, I created 2 different zones, each with 1 pod, 1 cluster and 1
host.

*The good:*
I can create VMs on either of the hosts. I'm able to ping the VMs and even
ssh into them, but only from the host, the management server, or the ACS
console itself (within the network).

*The Issue:*
I can't ssh to or even ping the VMs from elsewhere on the same network,
outside the host environment. What could be the problem?
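
Worth checking here: if the zone was created with security groups enabled,
all ingress traffic to the guests is dropped until rules are added to the
security group the VMs use. A rough CloudMonkey sketch (the group name and
CIDR below are placeholders) to open ping and ssh from the LAN:

  authorize securitygroupingress securitygroupname=default protocol=ICMP icmptype=-1 icmpcode=-1 cidrlist=192.168.0.0/16
  authorize securitygroupingress securitygroupname=default protocol=TCP startport=22 endport=22 cidrlist=192.168.0.0/16

The same rules can also be added from the UI under the Network section
(Security Groups view).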

A. Management Server network config is as below:
-
*auto lo*
*iface lo inet loopback*

*auto eth0*
*iface eth0 inet static*
*   address 192.168.30.14*
*   netmask 255.255.255.0*
*   gateway 192.168.30.254*
*   dns-nameservers 192.168.30.254 4.2.2.2*
*   #dns-domain cloudstack.et.test.local*
-

B. The KVM host network configuration is as below:

Host 1: .10
-

*# interfaces(5) file used by ifup(8) and ifdown(8)*

*auto lo*

*iface lo inet loopback*

*# The primary network interface*

*auto em1*

*iface em1 inet manual*


*# Public network*

*   auto cloudbr0*

*   iface cloudbr0 inet static*

*address 192.168.10.12*

*network 192.168.10.0*

*netmask 255.255.255.0*

*gateway 192.168.10.254*

*broadcast 192.168.10.255*

*dns-nameservers 192.168.10.254 4.2.2.2*

*#dns-domain cloudstack.et.test.local*

*bridge_ports em1*

*bridge_fd 5*

*bridge_stp off*

*bridge_maxwait 1*


*# Private network (not in use for now. Just using 1 bridge)*

*auto cloudbr1*

*iface cloudbr1 inet manual*

*bridge_ports none*

*bridge_fd 5*

*bridge_stp off*

*bridge_maxwait 1*
---


Host 2: .30
---

*# interfaces(5) file used by ifup(8) and ifdown(8)*

*auto lo*

*iface lo inet loopback*

*# The primary network interface*

*auto em1*

*iface em1 inet manual*


*# Public network*

*   auto cloudbr0*

*   iface cloudbr0 inet static*

*address 192.168.30.12*

*network 192.168.30.0*

*netmask 255.255.255.0*

*gateway 192.168.30.254*

*broadcast 192.168.30.255*

*dns-nameservers 192.168.30.254 4.2.2.2*

*#dns-domain cloudstack.et.test.local*

*bridge_ports em1*

*bridge_fd 5*

*bridge_stp off*

*bridge_maxwait 1*


*# Private network (not in use for now. Just using 1 bridge)*

*auto cloudbr1*

*iface cloudbr1 inet manual*

*bridge_ports none*

*bridge_fd 5*

*bridge_stp off*

*bridge_maxwait 1*

---
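
For what it's worth, the bridge and firewall state on each KVM host can be
checked directly with standard tools; a quick sketch (the interface and VM
names are placeholders):

  brctl show cloudbr0          # em1 and the VM's vnet interface should both be enslaved
  iptables -L -nv | grep -i <vm-instance-name>    # per-VM security group chains, if SG is enabled
  tcpdump -i cloudbr0 icmp     # see whether pings from the LAN reach the bridge at all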


--John O. Adams


Re: Giving Users SSH Access to VMs

2017-02-09 Thread John Adams
Hi there Rene,

It's under the account: there is a dropdown which contains the "SSH key
pairs" view. If you select it, you will get into a view where, on the
right-hand side, you will find the button to generate the keys.
Opips: Yes, I saw it, and I have been able to add one! It's well hidden
though, lol.

(Hmm, it seems there is no way to upload existing keys in the UI; is there
any?)
Opips: I would imagine it's the same way. From the pop-up dialog, if no
public key is pasted in, it generates a new key pair altogether. The caveat
is that the template from which the VM was provisioned must have been
configured to support SSH-key authentication. I'm working on this and will
get back with the outcome.
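
For the record, an existing public key can be registered through the API
even if the UI doesn't expose it; roughly, in the CloudMonkey shell (the key
name is a placeholder, and quoting of the key material may need adjusting):

  register sshkeypair name=jdoe-key publickey="ssh-rsa AAAAB3Nza... jdoe@laptop"

The registered key can then be referenced by name when deploying a VM or
resetting its key pair.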

Yes, but the VM has to be stopped; in the VM detail view it's the second
icon from the right, "reset SSH key pair".
Opips: Nice!! I saw it :)
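
For reference, the API equivalent appears to be resetSSHKeyForVirtualMachine;
a rough sketch in the CloudMonkey shell (the VM UUID and key name are
placeholders, and the VM does have to be stopped first):

  stop virtualmachine id=<vm-uuid>
  reset sshkeyforvirtualmachine id=<vm-uuid> keypair=jdoe-key
  start virtualmachine id=<vm-uuid>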


--John O. Adams

On 8 February 2017 at 15:37, Rene Moser <m...@renemoser.net> wrote:

> Hi John
>
> On 02/08/2017 01:14 PM, John Adams wrote:
> > Hello,
> >
> > Just managed to set up version 4.9.2.0 with various Ubuntu 14.04 KVM
> > hosts. In the release notes for v4.6 there's a mention of being able
> > to generate ssh-keys from the Web UI, but there's no mention of this
> > in the administration documentation, unless I'm not looking hard
> > enough.
>
> It's under the account: there is a dropdown which contains the "SSH key
> pairs" view. If you select it, you will get into a view where, on the
> right-hand side, you will find the button to generate the keys.
>
> (Hmm, it seems there is no way to upload existing keys in the UI; is
> there any?)
>
> >
> > Also, is it possible to add a user's public key to an already
> > provisioned virtual machine?
>
> Yes, but the VM has to be stopped; in the VM detail view it's the second
> icon from the right, "reset SSH key pair".
>
> René
>


Giving Users SSH Access to VMs

2017-02-08 Thread John Adams
Hello,

Just managed to set up version 4.9.2.0 with various Ubuntu 14.04 KVM hosts.
In the release notes for v4.6 there's a mention of being able to generate
ssh-keys from the Web UI, but there's no mention of this in the
administration documentation, unless I'm not looking hard enough.

Also, is it possible to add a user's public key to an already provisioned
virtual machine?
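
For context, the API side of this appears to be createSSHKeyPair (which
returns the private key once, at creation time) plus the keypair parameter
of deployVirtualMachine; a rough CloudMonkey sketch with placeholder IDs:

  create sshkeypair name=jdoe-key
  deploy virtualmachine serviceofferingid=<id> templateid=<id> zoneid=<id> keypair=jdoe-key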

--John O. Adams