Re: RE: XCP-NG 8.2 LTS

2021-06-26 Thread Jermaine Kendall
Thank you for the reply. It is very much appreciated. Could the documentation 
be updated for the XCP-NG hypervisor setup, and could you let me know when 
it's done?

On 2021/06/25 20:33:08, Yordan Kostov  wrote: 
> Hello Jermaine,
> 
>   Long story short:
>   1. Install XCP-NG 8.2 
>   1.1. Make a note of the name of the interface OR bond that is 
> going to be used for management traffic (this can be seen when you go to 
> CLUSTER -> Networking tab).
>   1.2. Make a note of the name of the interface OR bond that is 
> going to be used for user/public traffic, if it is different from the 
> management one.
>   For example, I use eth0 for management and have bonded eth2 and eth3, and I 
> use that bond for public and user traffic - https://imgur.com/nbV2aAu
>   So the labels here are MGMT and DATA.
>   2. If you use pre-setup storage (for example, fiber), attach it as you 
> usually do to the XCP cluster and note the name of the LUN in XenCenter.
>   3. Install CloudStack (4.15) somewhere and deploy a zone by going through 
> the zone wizard from start to end, using the interface labels and the LUN name 
> for primary storage.
>   4. Launch the zone deployment and wait for it to fail.
>   5. Cancel the wizard and go to the Infrastructure tab. You will see that the 
> zone, pod, cluster and hosts are deployed.
>   6. Select the Primary Storage tab. Add a new one with the PreSetup type and 
> fill in the storage LUN name; it will be added properly (a rough API sketch 
> follows below).
>   7. Deploy secondary storage (it was not deployed initially because the 
> zone wizard was interrupted before that step)
> 
>   Your environment is now ready!
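>
>   For illustration, here is a rough sketch of step 6 done through the API 
> instead of the UI. This is only a hypothetical example using the Python "cs" 
> client; the cluster name, pool name and SR name-label are placeholders, and 
> the presetup URL format should be double-checked against your CloudStack 
> version's documentation.
>
>   # Sketch only: add a pre-setup XenServer/XCP-ng SR as primary storage.
>   # Credentials are read from ~/.cloudstack.ini or CLOUDSTACK_* env vars.
>   from cs import CloudStack, read_config
>
>   api = CloudStack(**read_config())
>
>   # Look up the cluster the zone wizard already created (placeholder name).
>   cluster = api.listClusters(name="XCP-Cluster-1")["cluster"][0]
>
>   api.createStoragePool(
>       zoneid=cluster["zoneid"],
>       podid=cluster["podid"],
>       clusterid=cluster["id"],
>       scope="cluster",
>       name="fiber-lun-01",                      # any display name
>       url="presetup://localhost/FIBER-LUN-01",  # SR name-label from XenCenter
>   )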
> 
>   Note: the deployment guide should be edited, as it says the XenServer switch 
> needs to be in bridge mode for the system to work, which is not correct since 
> ACS fully supports OVS.
>   Maybe bridge mode is required for some specific network design that I am not 
> aware of at the moment.
> 
> Best regards,
> Jordan
> 
> 
> -Original Message-
> From: Jermaine Kendall  
> Sent: Friday, June 25, 2021 10:49 PM
> To: users@cloudstack.apache.org
> Subject: Re: XCP-NG 8.2 LTS
> 
> 
> 
> 
> On 2021/05/05 09:50:58, Andrija Panic  wrote:
> > If you use an officially unsupported hypervisor with 4.14 (XCP-ng 8.2), 
> > you will be missing the records in the guest_os_hypervisor table, and 
> > that means your VMs would, for example, be started as PV instead of HVM.
> >
> > Be warned :)
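> >
> > (As a quick check, you can list the mappings CloudStack knows about for a
> > given hypervisor version - a rough sketch with the Python "cs" client; the
> > hypervisor name and version strings below are just examples.)
> >
> >   # Sketch: see which guest OS mappings exist for a hypervisor version.
> >   from cs import CloudStack, read_config
> >
> >   api = CloudStack(**read_config())
> >   mappings = api.listGuestOsMapping(hypervisor="XenServer",
> >                                     hypervisorversion="8.2.0")
> >   print(mappings.get("count", 0), "guest OS mappings found")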
> >
> > Best,
> > Andrija
> >
> > On Wed, 5 May 2021 at 09:09, Yordan Kostov  wrote:
> >
> > > One more problem is that if you deploy a zone with pre-setup storage, 
> > > it will give an error at the step where the storage is configured.
> > > It is already fixed in 4.15.1:
> > >
> > > https://github.com/apache/cloudstack/pull/4845
> > >
> > > Best regards,
> > > Jordan
> > >
> > > -Original Message-
> > > From: Dominik Czerepiński 
> > > Sent: Tuesday, May 4, 2021 3:06 PM
> > > To: users@cloudstack.apache.org
> > > Subject: Re: XCP-NG 8.2 LTS
> > >
> > >
> > >
> > >
> > > Me. It works stably. One problem: if you don't use SSL for the 
> > > console proxy, change noVNC to an older version.
> > >
> > > Sent from my iPhone
> > >
> > > Message written by Matheus Fontes on 04.05.2021 at 03:18:
> > > >
> > > > I saw in the CloudStack 4.15 documentation that it supports XCP-ng 
> > > > 8.1. Has anyone tried the 8.2 LTS version?
> > > >
> > > > thanks
> > > > Matheus Fontes
> > >
> >
> >
> > --
> >
> > Andrija Panić
> >
> > Hi, I have a few questions with regard to setting up XCP-ng with 
> > CloudStack; I have never been successful so far. I followed the XenServer 
> > hypervisor setup and ran into some issues. Do you have any instructions 
> > specifically for setting up XCP-NG with CloudStack? Also, is the CSP package 
> > needed for XCP-NG? The download link is no longer available. And the 
> > network labels: are they necessary, and why can't I use the default 
> > networking for XCP-NG, which is OVS?
> 


Re: How to use ansible for cloudstack initialization

2021-06-26 Thread Rudraksh MK
Hi Jerry!

I’m not sure if this solution would work for you, but we find that it’s better 
to use Ansible just for setting up and deploying the management server and the 
compute nodes; when it comes to setting up zones, clusters, pods and so on, we 
typically use Python scripts and the cs library. This allows us to have more 
control over what’s being created, and it also lets people specify 
what hosts/zones/clusters they want in a spreadsheet, with the scripts just 
reading those sheets and making the relevant API calls. Ansible felt 
slightly clunky in this context.
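
As a rough illustration of that pattern, here is a minimal sketch (the zone and 
pod parameters are made-up placeholders; in practice our scripts read them from 
the spreadsheet, and credentials come from ~/.cloudstack.ini or CLOUDSTACK_* 
environment variables):

    # Minimal sketch: create a zone and a pod with the Python "cs" library.
    from cs import CloudStack, read_config

    api = CloudStack(**read_config())

    zone = api.createZone(
        name="zone-01",
        networktype="Advanced",
        dns1="8.8.8.8",
        internaldns1="10.0.0.2",
    )["zone"]

    api.createPod(
        name="pod-01",
        zoneid=zone["id"],
        gateway="10.0.1.1",
        netmask="255.255.255.0",
        startip="10.0.1.10",
        endip="10.0.1.50",
    )

    # addCluster, addHost and createStoragePool calls follow the same pattern.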


Best!

Rudraksh Mukta Kulshreshtha
Vice-President - DevOps & R
IndiQus Technologies
O +91 11 4055 1411 | M +91 99589 54879
indiqus.com

On 26 Jun 2021, 2:11 PM +0530, li jerry wrote:
> Hello everyone
>
> Does anyone use ansible to add zone/pod/cluster/host/storage to cloudstack?
> Currently I can only use ansible to complete the deployment of cloudstack, 
> nfs, mysql and other services.
> I can't use ansible to complete operations such as adding zone/pod
>
> Can someone provide relevant documents or solutions?
>
> thank you very much!
>
>
>
> -Jerry
>


Re: AW: CloudStack and Ansible

2021-06-26 Thread Rene Moser

Hi Peter

On 25.06.21 10:55, peter.murysh...@zv.fraunhofer.de wrote:

Hi Rafael,

as a follow-up to your great talk at the CSEUG session: in your email you wrote,

"The Ansible implementation for ACS is very complete and robust. It made it possible 
for us to fully automate from metal to the service."

Which Ansible implementation do you mean? The one I can find rather addresses 
API usage [1]; for full automation there is probably more scripting required to 
set up the actual cluster, possibly with some variations depending on the 
architecture.

[1] 
https://docs.ansible.com/ansible/latest/collections/ngine_io/cloudstack/index.html#plugins-in-ngine-io-cloudstack

Ansible is a perfect match for provisioning your hardware and OS, for 
installing CloudStack like any other application, and for dependent services 
like the DB, Java, storage, NFS servers, firewall and networking (e.g. Cisco 
switches), though it depends on your infra and choices.


The CloudStack integration addresses the API usage only; it is the 
missing piece that, after you have installed CloudStack (in an automated way), 
lets you fully automate the configuration of the cloud.


Hope this clarifies.

Regards
René


How to use ansible for cloudstack initialization

2021-06-26 Thread li jerry
Hello everyone

Does anyone use ansible to add zone/pod/cluster/host/storage to cloudstack?
Currently I can only use ansible to complete the deployment of cloudstack, nfs, 
mysql and other services.
I can't use ansible to complete operations such as adding zone/pod

Can someone provide relevant documents or solutions?

thank you very much!



-Jerry



Re: Management server reboot appears to cause vms on other hosts to shutdown?

2021-06-26 Thread Brian Fitzpatrick
Hi Jordan,

Thanks for your reply. Apologies, I might not have been clear.

The management server is aware of the VMs, and when I set the host that is also 
the same server that is running the management server (and mysql) into 
maintenance mode, I can see it no longer has any running VMs on it (including 
system VMs and routers). They have migrated to other hosts. CloudStack can see 
them. But when I then do an apt update and reboot the management server, the 
VMs on the other hosts seem to have shut down.

The reboot did take a while (15-20 mins), but I am surprised that it has 
affected the other KVM hosts, which I thought should just carry on running, 
unless I have missed something that was still on the management (and mysql) 
server.

Thanks

Brian

-Original Message-
From: Yordan Kostov <yord...@nsogroup.com>
Reply-To: users@cloudstack.apache.org
To: users@cloudstack.apache.org
Subject: RE: Management server reboot appears to cause vms on other hosts to 
shutdown?
Date: Fri, 25 Jun 2021 09:10:44 +

Hello Brian,


Maybe I did not understand correctly, but from what you say I gather that 
the management server + SQL and NFS are on the same physical hosts that are 
being managed by CloudStack?

If those VMs are not visible in CloudStack, the system is not aware 
that they exist, so it won't try to roll them to another host if you perform a 
hypervisor host reboot.


Best regards,

Jordan


-Original Message-
From: Brian Fitzpatrick <b.fitzpatr...@chester.ac.uk>
Sent: Friday, June 25, 2021 12:06 PM
To: users@cloudstack.apache.org


Subject: Management server reboot appears to cause vms on other hosts to 
shutdown?






Hi all,


Still relatively new to CloudStack and learning, testing etc.


I have created 1 management server with mysql on it and created 2 clusters with 
an NFS primary storage server in each and a number of hosts in each.


I have been working through the servers, putting them in maintenance mode 
(noting the VM migrations), updating and rebooting them. All worked fine.


I then wanted to update and reboot the server running the management and mysql. 
It is also a host, so I put it in maintenance mode so that no VMs were running 
on it.


I thought that if I updated it and rebooted, all I would lose for a period of 
time was access to the management server; the VMs should keep running on their 
various hosts.


The reboot took longer than usual; it seemed to hang for 15-20 minutes before 
shutting down and rebooting. To my surprise, though, I lost contact with all 
the VMs on the other hosts.


They all shut down.


Apologies if I have missed something here; I thought I understood. All virtual 
routers and system VMs appeared to be running on the other hosts.


Is it because the management server took a while to reboot, so the other hosts 
lost contact and shut down their VMs? That seems odd.


Any suggestions, help welcome. As I say, still learning!


Thanks


Brian


Re: Unable to add template to new deployment

2021-06-26 Thread Joshua Schaeffer


On 6/24/21 5:31 AM, Andrija Panic wrote:
> LXC has been essentially untested recently (for years) - the ones that DO
> work (used in production by people) are KVM, XenServer/XCP-ng, and VMware.
> That's all.
> LXC, OVM and co. are most probably doomed, to be honest.
Thanks, I'm not surprised to hear this. I will switch to KVM.

-- 
Thanks,
Joshua Schaeffer