[DISCUSS] Management server default port conflict

2020-06-30 Thread Abhishek Kumar
Hi all,

I would like to know everyone's opinion regarding an issue seen with CloudStack 
on CentOS8 (https://github.com/apache/cloudstack/pull/4068). CentOS8 ships with 
cockpit (https://cockpit-project.org/) installed, which uses port 9090, although 
it is not active by default. The CloudStack management server also needs port 9090, 
and when the management server is started with systemd it triggers the start of 
cockpit first, so the management server fails to start:


2020-06-25 07:20:51,707 ERROR [c.c.c.ClusterManagerImpl] (main:null) (logid:) Detected that another management node with the same IP 10.10.2.167 is already running, please check your cluster configuration
2020-06-25 07:20:51,708 ERROR [o.a.c.s.l.CloudStackExtendedLifeCycle] (main:null) (logid:) Failed to configure ClusterManagerImpl
javax.naming.ConfigurationException: Detected that another management node with the same IP 10.10.2.167 is already running, please check your cluster configuration
        at com.cloud.cluster.ClusterManagerImpl.checkConflicts(ClusterManagerImpl.java:1192)
        at com.cloud.cluster.ClusterManagerImpl.configure(ClusterManagerImpl.java:1065)
        at org.apache.cloudstack.spring.lifecycle.CloudStackExtendedLifeCycle$3.with(CloudStackExtendedLifeCycle.java:114)
        at org.apache.cloudstack.spring.lifecycle.CloudStackExtendedLifeCycle.with(CloudStackExtendedLifeCycle.java:153)
        at org.apache.cloudstack.spring.lifecycle.CloudStackExtendedLifeCycle.configure(CloudStackExtendedLifeCycle.java:110)
        at org.apache.cloudstack.spring.lifecycle.CloudStackExtendedLifeCycle.start(CloudStackExtendedLifeCycle.java:55)
        at org.springframework.context.support.DefaultLifecycleProcessor.doStart(DefaultLifecycleProcessor.java:182)
        at org.springframework.context.support.DefaultLifecycleProcessor.access$200(DefaultLifecycleProcessor.java:53)
        at org.springframework.context.support.DefaultLifecycleProcessor$LifecycleGroup.start(DefaultLifecycleProcessor.java:360)
        at org.springframework.context.support.DefaultLifecycleProcessor.startBeans(DefaultLifecycleProcessor.java:158)
        at org.springframework.context.support.DefaultLifecycleProcessor.onRefresh(DefaultLifecycleProcessor.java:122)
        at org.springframework.context.support.AbstractApplicationContext.finishRefresh(AbstractApplicationContext.java:894)
        at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:553)
        at org.apache.cloudstack.spring.module.model.impl.DefaultModuleDefinitionSet.loadContext(DefaultModuleDefinitionSet.java:144)
        at org.apache.cloudstack.spring.module.model.impl.DefaultModuleDefinitionSet$2.with(DefaultModuleDefinitionSet.java:121)
        at org.apache.cloudstack.spring.module.model.impl.DefaultModuleDefinitionSet.withModule(DefaultModuleDefinitionSet.java:244)
        at org.apache.cloudstack.spring.module.model.impl.DefaultModuleDefinitionSet.withModule(DefaultModuleDefinitionSet.java:249)
        at org.apache.cloudstack.spring.module.model.impl.DefaultModuleDefinitionSet.withModule(DefaultModuleDefinitionSet.java:249)
        at org.apache.cloudstack.spring.module.model.impl.DefaultModuleDefinitionSet.withModule(DefaultModuleDefinitionSet.java:232)
        at org.apache.cloudstack.spring.module.model.impl.DefaultModuleDefinitionSet.loadContexts(DefaultModuleDefinitionSet.java:116)
        at org.apache.cloudstack.spring.module.model.impl.DefaultModuleDefinitionSet.load(DefaultModuleDefinitionSet.java:78)
        at org.apache.cloudstack.spring.module.factory.ModuleBasedContextFactory.loadModules(ModuleBasedContextFactory.java:37)
        at org.apache.cloudstack.spring.module.factory.CloudStackSpringContext.init(CloudStackSpringContext.java:70)
        at org.apache.cloudstack.spring.module.factory.CloudStackSpringContext.<init>(CloudStackSpringContext.java:57)
        at org.apache.cloudstack.spring.module.factory.CloudStackSpringContext.<init>(CloudStackSpringContext.java:61)
        at org.apache.cloudstack.spring.module.web.CloudStackContextLoaderListener.contextInitialized(CloudStackContextLoaderListener.java:51)
        at org.eclipse.jetty.server.handler.ContextHandler.callContextInitialized(ContextHandler.java:930)
        at org.eclipse.jetty.servlet.ServletContextHandler.callContextInitialized(ServletContextHandler.java:553)
        at org.eclipse.jetty.server.handler.ContextHandler.startContext(ContextHandler.java:889)
        at org.eclipse.jetty.servlet.ServletContextHandler.startContext(ServletContextHandler.java:356)
        at org.eclipse.jetty.webapp.WebAppContext.startWebapp(WebAppContext.java:1445)
        at org.eclipse.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1409)
        at org.eclipse.jetty.server.handler.ContextHandler.doStart(ContextHandler.java:822)
        at org.eclipse.jetty.servlet.ServletContextHandler.doStart(ServletContextHandler.java:275)
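
A quick way to sanity-check this on an affected host is to probe the cluster port before starting the management server. Below is a minimal sketch (not part of the PR above), assuming the default cluster port 9090 and a local probe. On CentOS 8 even an "inactive" cockpit can answer such a connection via systemd socket activation, which would explain the "another management node with the same IP" error above if the conflict check treats any live listener on that port as another node.

#!/usr/bin/env python3
# Minimal sketch: probe the CloudStack cluster port before starting the
# management server, to see whether something (e.g. cockpit on CentOS 8)
# already answers on it. Port 9090 is the default discussed in this thread;
# host and timeout are illustrative assumptions.
import socket
import sys

HOST = "127.0.0.1"   # assumption: probe the local management server IP
PORT = 9090          # cluster service port discussed in this thread
TIMEOUT = 3          # seconds

def port_in_use(host: str, port: int, timeout: float) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    if port_in_use(HOST, PORT, TIMEOUT):
        print(f"Something is already listening on {HOST}:{PORT} "
              "(on CentOS 8 this may be cockpit via socket activation).")
        sys.exit(1)
    print(f"{HOST}:{PORT} is free; the management server should be able to bind it.")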

[VOTE] Release Apache CloudStack CloudMonkey 6.1.0

2020-06-30 Thread Rohit Yadav
Hi All,

I've created a 6.1.0 release of CloudMonkey, with the following artifacts
up for a vote:

Git Branch:
https://github.com/apache/cloudstack-cloudmonkey/commits/abc31929e74a9f5b07507db203e75393fffc9f3e
Commit: abc31929e74a9f5b07507db203e75393fffc9f3e

Commits since last release 6.0.0:
https://github.com/apache/cloudstack-cloudmonkey/compare/6.0.0...abc31929e74a9f5b07507db203e75393fffc9f3e

Source release (checksums and signatures are available at the same
location):
https://dist.apache.org/repos/dist/dev/cloudstack/cloudmonkey-6.1.0

To facilitate voting and testing, the builds are uploaded in this
pre-release:
https://github.com/apache/cloudstack-cloudmonkey/releases/tag/6.1.0

List of changes:
https://github.com/apache/cloudstack-cloudmonkey/blob/master/CHANGES.md

PGP release keys (signed using 5ED1E1122DC5E8A4A45112C2484248210EE3D884):
https://dist.apache.org/repos/dist/release/cloudstack/KEYS
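
For anyone testing the artifacts, here is a minimal sketch of checking a downloaded file against its published SHA-512 checksum; the file names passed on the command line are placeholders for whatever you fetch from the dist.apache.org URL above, and PGP signatures should still be verified separately against the KEYS file.

#!/usr/bin/env python3
# Minimal sketch: compare a downloaded release artifact against its published
# SHA-512 checksum file. Usage (placeholder names):
#   python3 verify.py <artifact.tar.gz> <artifact.tar.gz.sha512>
import hashlib
import sys

def sha512_of(path: str) -> str:
    """Stream the file and return its hex SHA-512 digest."""
    h = hashlib.sha512()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

if __name__ == "__main__":
    artifact, checksum_file = sys.argv[1], sys.argv[2]
    expected = open(checksum_file).read().split()[0].lower()
    if sha512_of(artifact) == expected:
        print("Checksum OK")
    else:
        print("Checksum MISMATCH")
        sys.exit(1)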

For sanity in tallying the vote, can PMC members please be sure to indicate
"(binding)" with their vote?
[ ] +1 approve
[ ] +0 no opinion
[ ] -1 disapprove (and reason why)

The vote will be open till the end of next week (10 July 2020);
otherwise, it will be extended until we reach lazy consensus. Thanks.

Regards.


Re: Upgrading XenServer Clusters managed by ACS...

2020-06-30 Thread Andrija Panic
Hi David,

with a bit of delay...

Those steps need to be tested. I would skip the whole
"environment.properties" change and see how it behaves today. The reason
is that, although the article on shapeblue.com is old, it does mention
both the "Unmanage" cluster action and Maintenance mode, so I'm not quite sure what
the difference is today vs. how it behaved back in the days of ACS 4.3 / XS 6.2.
The explanation you found on the mailing thread may not make sense,
specifically the "There wasn't an 'unmanage' button at the time" part,
since the article clearly mentions it.

There is also no need to manually migrate VMs away from a host (i.e. the pool
master) - simply put it into Maintenance mode and CloudStack will move the VMs
away to the other hosts.
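
If you want to script that step rather than click through the UI, here is a rough sketch using the third-party "cs" Python client (pip install cs) against the standard listHosts / prepareHostForMaintenance API calls; the endpoint, keys and host name are placeholders for your own environment.

#!/usr/bin/env python3
# Rough sketch: trigger host maintenance through the CloudStack API, so ACS
# evacuates the guest VMs off the host for you. Endpoint, credentials and
# host name below are placeholders.
from cs import CloudStack

api = CloudStack(
    endpoint="https://acs.example.com/client/api",  # placeholder
    key="YOUR_API_KEY",                             # placeholder
    secret="YOUR_SECRET_KEY",                       # placeholder
)

# Look up the pool master by the name ACS knows it by (placeholder name).
host = api.listHosts(name="xs-pool-master-01", type="Routing")["host"][0]

# Ask ACS to prepare the host for maintenance; this is an async job and ACS
# will live-migrate the guest VMs to other hosts in the cluster.
job = api.prepareHostForMaintenance(id=host["id"])
print("Maintenance job submitted:", job)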

Assuming that putting the pool-master into Maintenance mode in ACS will,
these days, NOT trigger a new host to become a master, your steps look fine.

For the record, I've updated the official documentation for XenServer 6.5+:
https://github.com/apache/cloudstack-documentation/pull/140 /
https://acs-www.shapeblue.com/docs/WIP-PROOFING/pr140/installguide/hypervisor/xenserver.html#upgrading-xenserver-versions
- i.e. I removed unneeded steps and explained what each script does. This
guide assumes a much longer management-plane downtime, as the cluster stays
unmanaged while you update all the hosts in the pool. It also works fine for
XCP-ng upgrades, etc.

Either way, I prefer doing it based on the plan you laid out here.

Regards,
Andrija

On Fri, 19 Jun 2020 at 23:24, David Merrill 
wrote:

> Hi All,
>
> I have a production deployment of ACS managing three XenServer Clusters
> (XenServer pools of 6 hosts each) in two different Zones. I now find myself
> in the position of needing to do a major version upgrade of those hosts.
> Happily I have an ACS lab managing a XenServer cluster running the same
> (old) version of XenServer that I can practice on.
>
> I have plenty of practice operating ACS to “quiesce things” for XenServer
> patches (start with the pool master, move guest VMs off, put that host into
> maintenance mode, unmanage the cluster, patch the host, then reverse & move
> onto the next host with the same steps except we don’t bother
> w/un-managing/re-managing the cluster), but as I understand it, a XenServer
> version upgrade backs up the whole original XenServer installation to
> another partition and makes a clean installation of XenServer on the
> original partition (the problem there being that when the upgraded
> XenServer boots up all the ACS provided/copied scripts are not there & ACS
> can’t manage the host).
>
> So not much of an ask here (OK maybe at the end – have I missed something
> obvious or am I doing anything foolish?), I wanted to share a bit of research & lay
> out a set of steps that I think will work to get a pool of XenServers in a
> cluster upgraded and end up in a place where ACS is happy with them.
>
> Bear with me it’s a little long,
>
>
>   1.  In XenCenter – if HA is enabled for the XenServer pool, disable it
>   2.  Stop ACS management/usage services
>   3.  Do MySQL database backups
>   4.  Start ACS management/usage services
>   5.  Start with the pool master.
>   6.  In ACS – Migrate all guest VMs to other hosts in the cluster
>   7.  In ACS – Prepare to put the pool master into maintenance mode (so no
> new guest VMs)
>  *   A caveat here related to this item I found when researching –
> https://www.shapeblue.com/how-to-upgrade-an-apache-cloudstack-citrix-xenserver-cluster/
>
>      i.  A recommendation is made here to edit
> /etc/cloudstack/management/environment.properties
>
>      ii.  and set manage.xenserver.pool.master=false (an illustrative
> sketch of this edit follows after these steps)
>
>      iii.  and restart CloudStack management services
>
>      iv.  BECAUSE if one didn't, I understand CloudStack WOULD force an
> election for another host to become the pool master (which is "bad" as ACS is
> configured to speak to the currently configured pool master)
>
>  *   HOWEVER THIS MAY NOT BE NECESSARY
>
>      i.  Found a thread titled "A Story of a failed XenServer Upgrade" here –
> http://mail-archives.apache.org/mod_mbox/cloudstack-users/201601.mbox/browser
>
>      ii.  At the end of the thread Paul Angus states that Geoff's ShapeBlue
> blog article was written in the ACS 4.3 era and that ACS' behavior "used to be
> that putting a host that was the pool master into maintenance would cause
> CloudStack to force an election for another host to become pool master -
> stopping you from then upgrading the active pool master first. There wasn't
> an 'unmanage' button at the time."
>
>
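
For completeness, here is an illustrative sketch of the environment.properties edit from step 7 above. Whether it is still needed on current ACS releases is exactly the open question in this thread, so treat it as optional; the path and key are taken from the steps, and the rewrite logic is just one way to do it.

#!/usr/bin/env python3
# Illustrative sketch: set manage.xenserver.pool.master=false in the
# management server's environment.properties, then restart the
# cloudstack-management service (as per step 7.iii above). Path and key are
# from the thread; everything else is an assumption.
from pathlib import Path

PROPS = Path("/etc/cloudstack/management/environment.properties")
KEY, VALUE = "manage.xenserver.pool.master", "false"

lines = PROPS.read_text().splitlines()
found = False
for i, line in enumerate(lines):
    if line.strip().startswith(KEY + "="):
        lines[i] = f"{KEY}={VALUE}"
        found = True
if not found:
    lines.append(f"{KEY}={VALUE}")
PROPS.write_text("\n".join(lines) + "\n")
print(f"Set {KEY}={VALUE} in {PROPS}; restart cloudstack-management to apply.")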