[VOTE] 4.11.2.0 RC1

2018-09-18 Thread Paul Angus
Hi All,

I've created a 4.11.2.0 release (RC1), with the following artefacts up for 
testing and a vote:

Git Branch and Commit SHA:
https://gitbox.apache.org/repos/asf?p=cloudstack.git;a=shortlog;h=refs/heads/4.11.2.0-RC20180918T1628
Commit: 115fb482409da71b5e82bbc2190c50291c833c15

Source release (checksums and signatures are available at the same location):
https://dist.apache.org/repos/dist/dev/cloudstack/4.11.2.0/

PGP release keys (signed using 8B309F7251EE0BC8):
https://dist.apache.org/repos/dist/release/cloudstack/KEYS
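
For anyone checking the artefacts, something along these lines recomputes the
source tarball's SHA-512 and compares it with the published checksum (a minimal
Java sketch; the file names are assumed from the dist directory above):

    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.security.MessageDigest;

    // Recompute the tarball's SHA-512 and compare with the published checksum.
    public class VerifyChecksum {
        public static void main(String[] args) throws Exception {
            String tarball = "apache-cloudstack-4.11.2.0-src.tar.bz2"; // assumed name
            byte[] digest = MessageDigest.getInstance("SHA-512")
                    .digest(Files.readAllBytes(Path.of(tarball)));
            StringBuilder hex = new StringBuilder();
            for (byte b : digest) hex.append(String.format("%02x", b));
            // Normalize the .sha512 file: keep hex digits only, lower-case
            // (handles gpg --print-md style grouping and the leading file name).
            String published = Files.readString(Path.of(tarball + ".sha512"))
                    .replaceAll("[^0-9A-Fa-f]", "").toLowerCase();
            System.out.println(published.contains(hex.toString())
                    ? "checksum OK" : "checksum MISMATCH");
        }
    }

Signatures should still be verified against the KEYS file with gpg as usual.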

The vote will be open until the beginning of next week, 24th September 2018.

For sanity in tallying the vote, can PMC members please be sure to indicate
"(binding)" with their vote.

[ ] +1 approve
[ ] +0 no opinion
[ ] -1 disapprove (and reason why)

Additional information:

For users' convenience, I've built packages from
115fb482409da71b5e82bbc2190c50291c833c15 and published the RC1 repository here:
http://packages.shapeblue.com/testing/41120rc1/

4.11.2 systemvm templates are available from here:
http://packages.shapeblue.com/testing/systemvm/




VRs swapping with 256 MB RAM

2018-09-18 Thread Rene Moser
Hi

While running tests for a 4.11.1 (VMware) upgrade in our lab, we ran into
low memory and swapping on VRs that have 256 MB RAM. After 2-3 days it
became critical: management server connections to the VRs took minutes,
which resulted in many more problems all over.

Make sure your VRs have enough RAM.
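
If you need to give the VRs more memory, one option is to create a larger
system offering for the domain router and point the network offering (or the
router.service.offering global setting) at it. Below is a rough Java sketch of
creating such an offering through the signed API; the endpoint, keys, and
sizing are placeholders, not a recommendation:

    import java.net.URI;
    import java.net.URLEncoder;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.nio.charset.StandardCharsets;
    import java.util.Base64;
    import java.util.Map;
    import java.util.TreeMap;
    import javax.crypto.Mac;
    import javax.crypto.spec.SecretKeySpec;

    // Creates a 512 MB system offering for domain routers via the CloudStack API.
    public class CreateRouterOffering {

        static String encode(String s) {
            // CloudStack signs %20, not '+', for spaces
            return URLEncoder.encode(s, StandardCharsets.UTF_8).replace("+", "%20");
        }

        public static void main(String[] args) throws Exception {
            String endpoint = "https://mgmt.example.com/client/api"; // placeholder
            String apiKey = "YOUR_API_KEY", secretKey = "YOUR_SECRET_KEY";

            Map<String, String> p = new TreeMap<>();  // sorted, as signing requires
            p.put("command", "createServiceOffering");
            p.put("name", "Router-512MB");
            p.put("displaytext", "VR offering with 512 MB RAM");
            p.put("cpunumber", "1");
            p.put("cpuspeed", "500");
            p.put("memory", "512");
            p.put("issystem", "true");
            p.put("systemvmtype", "domainrouter");
            p.put("response", "json");
            p.put("apikey", apiKey);

            StringBuilder qs = new StringBuilder();
            p.forEach((k, v) -> qs.append(qs.length() == 0 ? "" : "&")
                                  .append(k).append('=').append(encode(v)));

            // Signature: Base64(HMAC-SHA1(lower-cased query string))
            Mac mac = Mac.getInstance("HmacSHA1");
            mac.init(new SecretKeySpec(secretKey.getBytes(StandardCharsets.UTF_8),
                    "HmacSHA1"));
            String sig = Base64.getEncoder().encodeToString(
                    mac.doFinal(qs.toString().toLowerCase()
                            .getBytes(StandardCharsets.UTF_8)));

            HttpRequest req = HttpRequest.newBuilder(URI.create(
                    endpoint + "?" + qs + "&signature=" + encode(sig))).GET().build();
            System.out.println(HttpClient.newHttpClient()
                    .send(req, HttpResponse.BodyHandlers.ofString()).body());
        }
    }

Existing routers should only pick up the new offering once they are recreated
or restarted.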

Regards
René


Re: [DISCUSS] deployment planner improvement

2018-09-18 Thread Rafael Weingärtner
Sorry for the late reply, I was kind of swamped.

Been there, done that. I would not change the deployment/allocation
planners, as they focus on the start phase. Their goal is to find a place to
put the VM when it is starting, and that is it. On the other hand, executing
live optimization as the system (cloud configuration) changes over
time… Well, that is something else…

I have been researching that in my Ph.D., and we basically ignore the start
(allocation/deployment of VMs) and focus on live balancing, distribution,
or consolidation of workloads (VMs). We have developed a prototype that
worked entirely within ACS. It was developed as a plugin for ACS [1]. There
you can find the whole structure we used, and the callbacks we created to
enable the development of workload management techniques
(heuristics/methods, or whatever name you use to describe something that
guides a software agent to balance/unbalance/consolidate workloads). This
plugin should already work with ACS 4.11/master. Therefore, it would be a
matter of implementing this interface [2] with your requirements (a rough
sketch of what such a heuristic can look like follows the links below).

We moved away from that, though. Adding that kind of complexity to ACS
seemed too much. Moreover, it would restrict any solution to work only with
ACS. Therefore, we created something else that uses the cloud orchestrators'
APIs (OpenStack or CloudStack) to gather data, process it, and then execute
optimization tasks.

Are you going to be in Montreal next week? We would be happy to collaborate
with you. I can help you get the plugin from [1] running with ACS 4.11 or
master, or we can share the other software with you as well.


[1] https://github.com/Autonomiccs/autonomiccs-platform
[2]
https://github.com/Autonomiccs/autonomiccs-platform/blob/master/autonomic-administration-algorithms/src/main/java/br/com/autonomiccs/autonomic/administration/algorithms/ClusterAdministrationHeuristicAlgorithm.java
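
To give a feel for what such a heuristic does, here is a minimal,
self-contained sketch of a balancing pass over a cluster. The types and method
below are illustrative stand-ins only, not the actual interface in [2]:

    import java.util.Comparator;
    import java.util.List;

    // Illustrative only: a naive pass that suggests migrating one VM from the
    // most-loaded host to the least-loaded one when the usage spread is large.
    public class NaiveBalancer {
        record Vm(String id, long ramMb) {}
        record Host(String id, long usedRamMb, long totalRamMb, List<Vm> vms) {
            double usage() { return (double) usedRamMb / totalRamMb; }
        }
        record Migration(Vm vm, Host from, Host to) {}

        // Returns one suggested migration, or null if the cluster is balanced.
        static Migration suggest(List<Host> hosts, double maxSpread) {
            Host busiest = hosts.stream()
                    .max(Comparator.comparingDouble(Host::usage)).orElseThrow();
            Host idlest = hosts.stream()
                    .min(Comparator.comparingDouble(Host::usage)).orElseThrow();
            if (busiest.usage() - idlest.usage() < maxSpread) return null;
            // Pick the smallest VM that still fits on the least-loaded host.
            return busiest.vms().stream()
                    .filter(vm -> idlest.usedRamMb() + vm.ramMb() <= idlest.totalRamMb())
                    .min(Comparator.comparingLong(Vm::ramMb))
                    .map(vm -> new Migration(vm, busiest, idlest))
                    .orElse(null);
        }
    }

A real implementation would hook such decisions into the plugin's callbacks
and let ACS execute the migrations, e.g. on a schedule or on demand.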

On Fri, Sep 7, 2018 at 3:39 PM, Pierre-Luc Dion  wrote:

> #scopecreep Paul ;)
>
> But I think the problem you identify is related to the selection of the
> deployment planner strategy, defined globally or at the compute offering.
> You can select how CloudStack chooses the host on which to deploy a new VM.
>
> But even then, as Marcus stated, if you add a node to a full cluster, all
> new VMs will be created on that node.
>
> So if nobody has WIP around post-deployment orchestration, I'll work on
> the feature spec with the university, with the objective of easing
> hypervisor maintenance and better distributing workloads.
>
> I would not expect a PR before ~6 months, but I hope to have some actions
> around it very soon.
>
>
>
> On Fri, Sep 7, 2018 at 09:05, Marc-Andre Jutras  wrote:
>
> > I agree, it is affecting all hypervisors... I basically had to migrate a
> > bunch of VMs manually to re-balance a cluster after an upgrade or even
> > after re-adding a new host to a cluster.
> >
> > Personally, I think CloudStack should be able to handle this balancing
> > of resources, for example: having a piece of code somewhere that can run
> > every hour or on demand to re-calculate and re-balance resources across
> > hosts within a cluster...
> >
> > Even the deployment planner is not really relevant here: this process
> > basically balances new VM creation across the different clusters of a
> > POD, not between hosts within a cluster, and it also becomes a nightmare
> > when you start doing cross-cluster migrations...
> >
> > Sum of it all: the deployment planner strategies should be re-worked a
> > bit...
> >
> > +1 on #scopecreep ;)
> >
> > Marcus ( mjut...@cloudops.com )
> >
> > On 2018-09-07 6:01 AM, Paul Angus wrote:
> > > I think that this affects all hypervisors, as CloudStack's deployment
> > > strategies are generally sub-optimal, to say the least.
> > > From what our devs have told me, a large part of the problem is that
> > > capacity/usage and suitability due to tags are calculated independently
> > > by multiple parts of the code; there is no central method that would
> > > give a consistent answer.
> > >
> > > In Trillian we take a micro-management approach and have a custom
> > > module which will return the least used cluster, the least used host,
> > > or the least used host in a given cluster. With that info we place VMs
> > > on specific hosts - keeping virtualised hypervisors in the same (least
> > > used) cluster so that processor types match, and all other VMs on the
> > > least used hosts.
> > >
> > > For cross-cluster migrations (VMs and/or storage) I think that most of
> > > the time people want to move from cluster A to the least used
> > > (host/storage pool) in cluster B - making them choose which host/pool
> > > is actually unhelpful.
> > >
> > > #scopecreep - sorry Pierre-Luc
> > >
> > > Kind regards,
> > >
> > > Paul Angus
> > >
> > > -----Original Message-----
> > > From: Will Stevens 
> > > Sent: 06 September 2018