Re: secure hosts communications

2019-01-31 Thread Ugo Vasi
Hi Rohit, this is a freshly installed infrastructure, but we had some hardware problems (a management server restart) and now the hosts are in "unsecure" state. Do you have any idea how they could have ended up in this state? I'm analyzing the logs but I can't find much about it. On 31/01/19 08
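
For anyone following this thread, a minimal sketch of where such certificate events usually show up, assuming the default log locations of a 4.11 package install:

    # on the management server
    grep -iE 'certificate|keystore' /var/log/cloudstack/management/management-server.log
    # on each KVM host
    grep -iE 'certificate|keystore' /var/log/cloudstack/agent/agent.log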

Re: secure hosts communications

2019-01-31 Thread Ugo Vasi
Hi Rohit, sorry to keep insisting with questions... when launching the procedure, does the framework rebuild and "overwrite" the certificate configuration? On 31/01/19 09:28, Ugo Vasi wrote: Hi Rohit, this is a freshly installed infrastructure, but we had some hardware problems (a management

Re: secure hosts communications

2019-01-31 Thread Rohit Yadav
Any old keystore on the KVM hosts (at /etc/cloudstack/agent/cloud.jks) will be removed. - Rohit From: Ugo Vasi Sent: Thursday, January 31, 2019 2:24:06 PM To: Rohit Yadav; [email protected] Subject: Re: secure hos
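
A sketch of how to inspect that keystore before re-running the procedure, assuming the passphrase is the keystore.passphrase value stored in /etc/cloudstack/agent/agent.properties:

    # on the KVM host; keytool prompts for the keystore passphrase
    keytool -list -keystore /etc/cloudstack/agent/cloud.jks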

Re: secure hosts communications

2019-01-31 Thread Ugo Vasi
Hi Rohit, I tried to renew the certificate but it failed! Now libvirt does not restart and the agent is disconnected. Agent log: 2019-01-31 11:17:07,530 INFO [resource.wrapper.LibvirtPostCertificateRenewalCommandWrapper] (Certificate Renewal Timer:null) (logid:fe1554cc) Restarting libvirt after certifica
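
A minimal sketch of the usual checks when libvirtd fails to come back up, assuming a systemd-based host:

    # why did libvirtd fail to restart?
    systemctl status libvirtd
    journalctl -u libvirtd --since "1 hour ago"
    # watch the agent while it tries to reconnect
    tail -f /var/log/cloudstack/agent/agent.log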

Re: secure hosts communications

2019-01-31 Thread Ugo Vasi
Update: after rebooting the host system, libvirtd restarted and the ACS agent reconnected to the management server. The host remains in "unsecure" mode. If I set "ca.plugin.root.auth.strictness" to false, can I migrate the VM? On 31/01/19 11:50, Ugo Vasi wrote: Hi Rohit, I tried re
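
ca.plugin.root.auth.strictness is a global setting, so it can be changed through the API; a sketch using CloudMonkey (whether this particular setting needs a management server restart to take effect is an assumption to verify):

    cloudmonkey update configuration name=ca.plugin.root.auth.strictness value=false
    # many global settings are only read at startup
    systemctl restart cloudstack-management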

Re: secure hosts communications

2019-01-31 Thread Rohit Yadav
Looks like some error occurred while generating the keystore. Can you check if you see any .jks and crt/key files in the /etc/cloudstack/agent/ directory? Also share the output of: netstat -nl | grep 16509 # if you get any listening libvirtd, then your libvirtd is NOT secured netstat -nl | grep 16514
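
The checks above, laid out as a sketch (16509 is libvirtd's plain TCP port, 16514 its TLS port):

    ls -l /etc/cloudstack/agent/*.jks /etc/cloudstack/agent/*.crt /etc/cloudstack/agent/*.key
    netstat -nl | grep 16509   # a listener here means libvirtd is NOT secured
    netstat -nl | grep 16514   # a listener here means libvirtd is listening on TLS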

Re: secure hosts communications

2019-01-31 Thread Ugo Vasi
Hi Rohit, the cloudstack-agent version is 4.11.2.0: # dpkg -l cloudstack-agent Desired=Unknown/Install/Remove/Purge/Hold | Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend |/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad) ||/ Name Version

Re: secure hosts communications

2019-01-31 Thread Rohit Yadav
Hi Ugo, Please make sure your KVM host's libvirtd is running in listening mode (the -l flag). Without the libvirtd daemon in listening mode, the KVM agent will have issues using libvirtd as well. Once you fix it, restart libvirtd and cloudstack-agent and you should see some output for: netstat -nl | grep 16514
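
On Ubuntu/Debian hosts the listening flag usually lives in /etc/default/libvirtd (or /etc/default/libvirt-bin on older releases). A sketch of the relevant settings, which the agent's security setup normally writes for you; values are assumptions to verify against your install:

    # /etc/default/libvirtd
    libvirtd_opts="-l"

    # /etc/libvirt/libvirtd.conf -- a secured host listens on TLS only
    listen_tls = 1
    listen_tcp = 0
    tls_port = "16514"

    # then restart both services
    systemctl restart libvirtd cloudstack-agent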

Re: how to run rhel 6.x VM as PV VM on xenserver 7.1CU1?

2019-01-31 Thread Yiping Zhang
Hi Andrija, I am willing to try this approach given that we are working in a lab environment; otherwise we would have to downgrade to XenServer 7.1 and install security patches afterwards. Since we also need to add one new entry in the hypervisor_capabilities table for XenServer 7.1.1 and the
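
A sketch of such an entry, cloning the existing XenServer 7.1.0 row for 7.1.1; the column list is an assumption to verify against your 4.11 schema:

    mysql -u cloud -p cloud <<'SQL'
    INSERT INTO hypervisor_capabilities
        (uuid, hypervisor_type, hypervisor_version, max_guests_limit,
         max_data_volumes_limit, max_hosts_per_cluster, storage_motion_supported)
    SELECT UUID(), hypervisor_type, '7.1.1', max_guests_limit,
           max_data_volumes_limit, max_hosts_per_cluster, storage_motion_supported
    FROM hypervisor_capabilities
    WHERE hypervisor_type = 'XenServer' AND hypervisor_version = '7.1.0';
    SQL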

RE: how to run rhel 6.x VM as PV VM on xenserver 7.1CU1?

2019-01-31 Thread Andrija Panic
Hi Yiping, Please check how this was done in previous releases (e.g. adding support for XenServer 7.1.0 and some additional missing guest OS mappings): https://github.com/apache/cloudstack/blob/master/engine/schema/src/main/resources/META-INF/db/schema-41000to41100.sql Just make sure that your
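
For the guest OS mappings, the same clone-the-previous-version pattern works as a sketch; again, verify the column list against your schema:

    mysql -u cloud -p cloud <<'SQL'
    INSERT INTO guest_os_hypervisor
        (uuid, hypervisor_type, hypervisor_version, guest_os_name, guest_os_id, created)
    SELECT UUID(), hypervisor_type, '7.1.1', guest_os_name, guest_os_id, now()
    FROM guest_os_hypervisor
    WHERE hypervisor_type = 'XenServer' AND hypervisor_version = '7.1.0'
      AND removed IS NULL;
    SQL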

Keeping snapshot when volume and instance is deleted

2019-01-31 Thread Cloud List
Hi, I am using CloudStack with the KVM hypervisor. I noticed that a volume snapshot is deleted when the volume (and the instance using the volume) is deleted. Is it possible to keep the snapshot saved somewhere when the volume and instance are deleted? My goal is to have a backup copy of the data bef
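
One workaround, sketched with CloudMonkey (all IDs are placeholders): materialize the snapshot as a standalone template or volume before deleting the instance, so it survives the volume's removal.

    # root disk snapshot -> template on secondary storage
    cloudmonkey create template snapshotid=<snapshot-uuid> name=backup-vm01 displaytext="pre-delete backup" ostypeid=<ostype-uuid>
    # data disk snapshot -> new detached volume
    cloudmonkey create volume snapshotid=<snapshot-uuid> name=backup-data01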

Re: Keeping snapshot when volume and instance is deleted

2019-01-31 Thread Ivan Kudryavtsev
Hi, this is a case I am also thinking about. It's like an icebox... The easiest way is to deploy cheap primary storage with specific tags and create a small compute node in a separate dedicated cluster for an icebox account. Next, upon archiving, transfer the __stopped__ VM to that account and migrate its volumes to
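
A sketch of that archiving step with CloudMonkey (the VM, volume, and storage IDs are placeholders): stop the VM, then push each of its volumes onto the tagged icebox primary storage.

    cloudmonkey stop virtualmachine id=<vm-uuid>
    cloudmonkey migrate volume volumeid=<volume-uuid> storageid=<icebox-pool-uuid>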