Hi Marco,

Please find my answers inline:

On Tue, Sep 15, 2015 at 4:11 AM, Monaco Marco <ma.mon...@almaviva.it> wrote:
>
>
> Plus another question: is it possible to use Stratos to automatically
> assign floating IPs to spawned OpenStack VMs?
>
Yes, this is possible. Please refer to the following wiki page; if you run
into any problems, please send the configuration files, logs (Stratos,
cartridge agent) and any other relevant information so we can have a look:
https://cwiki.apache.org/confluence/display/STRATOS/4.1.x+Multiple+Network+Interfaces
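For orientation, a cartridge definition can request floating IPs roughly as below. This is only a sketch based on the multiple-network-interfaces feature described on that wiki page; the `floatingNetworks` element and the placeholder names/UUIDs should be verified against the page rather than copied as-is:

```json
{
    "iaasProvider": [
        {
            "type": "openstack",
            "networkInterfaces": [
                {
                    "networkUuid": "<private-network-uuid>",
                    "floatingNetworks": [
                        {
                            "name": "public-floating-network",
                            "networkUuid": "<external-network-uuid>"
                        }
                    ]
                }
            ]
        }
    ]
}
```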

>
> Now I have just few questions:
>>
>> 1) In Private PaaS 4.0.0 an LB was automatically spawned and configured
>> for each cartridge. This no longer happens, and I would need to pass LB_IP
>> in the cartridge definition, so I suppose it requires a manually loaded LB
>> (WSO2 ELB was great). Am I right?
>>
> Yes, your understanding is correct. We no longer automatically spin up
> load balancers; they need to be started separately. We now support Nginx,
> HAProxy, LVS, the EC2 load balancer and the GCE load balancer. By support
> we mean that the load balancer configuration can be auto-generated
> according to the topology.
>
> Clarification: does the balancer configuration update automatically when
> new instances are spawned by the autoscaler?
>
Yes, that is the responsibility of the load balancer extension; there is a
separate extension for each of the load balancers above.
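As a rough illustration of what such an extension does (this is not the actual Stratos extension code; the function and the topology shape below are invented for the example), the extension keeps a view of the topology and regenerates the proxy configuration from it whenever a member joins or leaves:

```python
# Illustrative sketch only: the real Stratos load balancer extensions
# subscribe to topology events on the message broker. Here the topology
# is simplified to a dict of cluster id -> list of (ip, port) members.

def generate_nginx_upstreams(topology):
    """Render nginx upstream blocks from a {cluster: [(ip, port), ...]} map."""
    blocks = []
    for cluster_id, members in sorted(topology.items()):
        servers = "\n".join(
            "    server %s:%d;" % (ip, port) for ip, port in members
        )
        blocks.append("upstream %s {\n%s\n}" % (cluster_id, servers))
    return "\n\n".join(blocks)

# When the autoscaler spawns a member, a member-activated event arrives,
# the topology map gains an entry, and the extension regenerates the
# configuration and reloads the load balancer.
conf = generate_nginx_upstreams(
    {"esb-cluster": [("10.0.0.5", 8280), ("10.0.0.6", 8280)]}
)
print(conf)
```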

>
>> 2) I saw that the configurator only supports WSO2 AM, IS and ESB. For
>> other WSO2 products (I need MB, IS, DES (or CEP + BAM) and EMS) can I use
>> the old PPaaS 4.0.0 cartridges and puppet definitions?
>>
> At the moment we have Configurator Template Modules for API-M 1.9.0, IS
> 5.0.0, ESB 4.8.1, AS 5.2.1, DAS 3.0.0, CEP 4.0.0 and DSS 3.2.2 [1], [2].
> MB is on the roadmap and will be available in a month or two; BAM won't be
> included because of DAS.
>
> Clarification: Is there any documentation on how to create configurator
> templates? We can try to develop some templates and make them available to
> the community.
>

I do not think we have documented this yet, but it is not that complex:

   1. Take the product distribution and copy the set of configuration
   files that need to be templated to a folder.
   2. Refer to Jinja2 for the template syntax: http://jinja.pocoo.org/docs/dev/
   and add placeholders.
   3. Add a module.ini file to the root of the above folder.
   4. Now this template module can be used with the Configurator.
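The steps above can be sketched as follows; the file layout, parameter names and snippet contents here are illustrative, not the exact schema a template module uses:

```python
# Sketch of step 2: a configuration snippet with Jinja2 placeholders,
# rendered with values that would normally come from the payload parameters.
# The folder layout for steps 1 and 3 might look like (hypothetical):
#   wso2esb-4.8.1/
#     templates/repository/conf/axis2/axis2.xml
#     module.ini
from jinja2 import Template  # see http://jinja.pocoo.org/docs/dev/

template = Template(
    '<clustering enable="{{ CONFIG_PARAM_CLUSTERING }}">\n'
    '  <parameter name="membershipScheme">'
    '{{ CONFIG_PARAM_MEMBERSHIP_SCHEME }}</parameter>\n'
    '</clustering>'
)

# The Configurator substitutes the placeholders at instance startup.
rendered = template.render(
    CONFIG_PARAM_CLUSTERING="true",
    CONFIG_PARAM_MEMBERSHIP_SCHEME="wka",
)
print(rendered)
```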

It's great to hear that you are willing to contribute to this. Appreciate
it!

> No, Private PaaS 4.0.0 cartridges and their Puppet definitions cannot be
> used with 4.1.0. We have completely re-architected the cartridge and the
> cartridge agent design.
>
> Clarification: what about new cartridges? What if we use the old Stratos
> cartridge agent on some types of services? I'm referring to some cartridges
> that we developed for version 4.0.0. Won't they be usable with 4.1.0?
>
No, they may not work properly: the protocol the cartridge agent uses to
communicate with Stratos has changed between 4.0.0 and 4.1.0.

On Tue, Sep 15, 2015 at 4:11 AM, Monaco Marco <ma.mon...@almaviva.it> wrote:

> Hi Imesh,
>
> Now it's very late in Italy, I'll try your suggestion tomorrow. Thank you.
>
> I need just a few clarifications (inline in your previous email).
>
> Plus another question: is it possible to use Stratos to automatically
> assign floating IPs to spawned OpenStack VMs?
>
> We have configured only one private network on Neutron, with a router
> connected to the external OpenStack network. A floating IP pool has been
> preallocated on the wso2 tenant, but when Stratos starts instances only a
> private IP is assigned. We configured cloud-controller.xml with
> autoAssignIp both true and false, but it does not change the behaviour.
>
> Thanks,
>
> Marco
>
>
>
> Sent from my Samsung device
>
>
> -------- Original message --------
> From: Imesh Gunaratne <im...@wso2.com>
> Date: 14/09/2015 21:07 (GMT+01:00)
> To: Monaco Marco <ma.mon...@almaviva.it>
> Cc: Anuruddha Liyanarachchi <anurudd...@wso2.com>, WSO2 Developers' List <
> dev@wso2.org>
> Subject: Re: Strange Error in WSO2 Private Paas 4.1.2
>
> Hi Marco,
>
> It's really nice to hear that you were able to deploy ESB 4.8.1 on Private
> PaaS 4.1.0 latest code base. Thanks for sharing your experience with us.
> Please see my comments inline:
>
> On Mon, Sep 14, 2015 at 11:56 PM, Monaco Marco <ma.mon...@almaviva.it>
>  wrote:
>
>>
>> 1) on /etc/puppet/manifests/nodes/base.pp I had to add $mb_url =
>> "tcp://[mb_ip]:[mb_port]" since PCA complained about the missing MB
>> information
>>
>
> We have made Puppet Master configurations optional with Private PaaS
> 4.1.0. We can now define these parameters in the network partitions and
> run the Puppet Master with zero configuration. You can find a sample here:
>
>
> https://github.com/wso2/product-private-paas/blob/master/samples/network-partitions/openstack/network-partition-openstack.json
>
>>
>>
>> 2) In the plugin plugins/wso2esb-481-startup-handler.py, on line 75 I
>> received an error on MB_IP, since the PCA looks in the values array for
>> the MB_IP key and does not find any MB_IP property. Since I didn't want
>> to recreate the full cartridge, groups and application, I just hardcoded
>> MB_IP to "tcp://[mb_ip]:[mb_port]", but I suppose I just need to create
>> the cartridge with an MB_IP property. This was not documented or present
>> in the sample cartridge.
>>
>> I think this has happened because of 1). Please try to define MB_IP in
> the relevant network partition.
>
>
>> Now I have just few questions:
>>
>> 1) In Private PaaS 4.0.0 an LB was automatically spawned and configured
>> for each cartridge. This no longer happens, and I would need to pass LB_IP
>> in the cartridge definition, so I suppose it requires a manually loaded LB
>> (WSO2 ELB was great). Am I right?
>>
> Yes, your understanding is correct. We no longer automatically spin up
> load balancers; they need to be started separately. We now support Nginx,
> HAProxy, LVS, the EC2 load balancer and the GCE load balancer. By support
> we mean that the load balancer configuration can be auto-generated
> according to the topology.
>
> Clarification: does the balancer configuration update automatically when
> new instances are spawned by the autoscaler?
>
>
> 2) I saw that the configurator only supports WSO2 AM, IS and ESB. For
>> other WSO2 products (I need MB, IS, DES (or CEP + BAM) and EMS) can I use
>> the old PPaaS 4.0.0 cartridges and puppet definitions?
>>
> At the moment we have Configurator Template Modules for API-M 1.9.0, IS
> 5.0.0, ESB 4.8.1, AS 5.2.1, DAS 3.0.0, CEP 4.0.0 and DSS 3.2.2 [1], [2].
> MB is on the roadmap and will be available in a month or two; BAM won't be
> included because of DAS.
>
> Clarification: Is there any documentation on how to create configurator
> templates? We can try to develop some templates and make them available to
> the community.
>
> No, Private PaaS 4.0.0 cartridges and their Puppet definitions cannot be
> used with 4.1.0. We have completely re-architected the cartridge and the
> cartridge agent design.
>
> Clarification: what about new cartridges? What if we use the old Stratos
> cartridge agent on some types of services? I'm referring to some cartridges
> that we developed for version 4.0.0. Won't they be usable with 4.1.0?
>
>
> [1]
> https://github.com/wso2/product-private-paas/tree/master/cartridges/templates-modules
> [2]
> https://github.com/wso2/product-private-paas/tree/ppaas-4.2.0/cartridges/templates-modules
>
> Thanks
>
> On Mon, Sep 14, 2015 at 11:56 PM, Monaco Marco <ma.mon...@almaviva.it>
> wrote:
>
>> Hi Imesh,
>>
>> I just successfully deployed the ESB 4.8.1 cartridge (manager + workers)
>> on OpenStack right now, but I had to make some changes to the cartridge
>> configuration:
>>
>> 1) on /etc/puppet/manifests/nodes/base.pp I had to add $mb_url =
>> "tcp://[mb_ip]:[mb_port]" since PCA complained about the missing MB
>> information
>>
>> 2) In the plugin plugins/wso2esb-481-startup-handler.py, on line 75 I
>> received an error on MB_IP, since the PCA looks in the values array for
>> the MB_IP key and does not find any MB_IP property. Since I didn't want
>> to recreate the full cartridge, groups and application, I just hardcoded
>> MB_IP to "tcp://[mb_ip]:[mb_port]", but I suppose I just need to create
>> the cartridge with an MB_IP property. This was not documented or present
>> in the sample cartridge.
>>
>> Now I have just few questions:
>>
>> 1) In Private PaaS 4.0.0 an LB was automatically spawned and configured
>> for each cartridge. This no longer happens, and I would need to pass LB_IP
>> in the cartridge definition, so I suppose it requires a manually loaded LB
>> (WSO2 ELB was great). Am I right?
>>
>> 2) I saw that the configurator only supports WSO2 AM, IS and ESB. For
>> other WSO2 products (I need MB, IS, DES (or CEP + BAM) and EMS) can I use
>> the old PPaaS 4.0.0 cartridges and puppet definitions?
>>
>> Thank you very much,
>>
>> Marco
>> ------------------------------
>> *From:* Imesh Gunaratne [im...@wso2.com]
>> *Sent:* Friday, September 11, 2015 14:43
>>
>> *To:* Monaco Marco
>> *Cc:* Anuruddha Liyanarachchi; WSO2 Developers' List
>> *Subject:* Re: Strange Error in WSO2 Private Paas 4.1.2
>>
>> Hi Marco,
>>
>> On Fri, Sep 11, 2015 at 12:49 PM, Monaco Marco <ma.mon...@almaviva.it>
>>  wrote:
>>
>>> Hi Imesh,
>>>
>>> thanks for the suggestion; I solved it by deleting the entire Stratos VM
>>> and recreating everything from scratch. Anyway, even though I can spawn
>>> new instances I'm still facing problems here: I forgot to modify
>>> ./repository/conf/cartridge-config.properties, and now the launch-params
>>> file contains the wrong puppet master IP. I fixed the file after the
>>> first try and also restarted Stratos, but when I try to deploy instances
>>> I still see that launch-params has the wrong configuration.
>>>
>>> I'm sorry, this is a bug. You could overcome this for the moment by
>> deleting the application and creating it again. The problem is that the
>> puppet master properties are added to the payload once an application is
>> created.
>>
>>
>>> I noticed the same with the IaaS configuration: if I want to change some
>>> IaaS properties in the cloud-controller.xml file after the first launch
>>> of Stratos, the modification does not take effect. The only way is to
>>> delete all files, the H2 DB and the MySQL DB and configure everything
>>> from scratch.
>>>
>>
>> Yes, this is similar to the above issue; redeploying the cartridge solves
>> it. The problem here is that we keep an IaaS configuration cache in the
>> cloud controller. It first takes the configuration from
>> cloud-controller.xml and then applies the values specified in the
>> cartridge definition on top of that (overwriting them). This cache is not
>> updated once the cloud-controller.xml file is changed.
>>
>> Just to clarify, except for the above issues were you able to deploy an
>> application in OpenStack successfully with Private PaaS 4.1.0? If not
>> please let us know, we can arrange a google hangout and have a look.
>>
>> Thanks
>>
>> On Fri, Sep 11, 2015 at 12:49 PM, Monaco Marco <ma.mon...@almaviva.it>
>> wrote:
>>
>>> Hi Imesh,
>>>
>>> thanks for the suggestion; I solved it by deleting the entire Stratos VM
>>> and recreating everything from scratch. Anyway, even though I can spawn
>>> new instances I'm still facing problems here: I forgot to modify
>>> ./repository/conf/cartridge-config.properties, and now the launch-params
>>> file contains the wrong puppet master IP. I fixed the file after the
>>> first try and also restarted Stratos, but when I try to deploy instances
>>> I still see that launch-params has the wrong configuration.
>>>
>>> I noticed the same with the IaaS configuration: if I want to change some
>>> IaaS properties in the cloud-controller.xml file after the first launch
>>> of Stratos, the modification does not take effect. The only way is to
>>> delete all files, the H2 DB and the MySQL DB and configure everything
>>> from scratch.
>>>
>>> Hope you guys have a method to change such configuration without
>>> requiring a fresh reinstall of the product...
>>>
>>> Thank you as usual,
>>>
>>> Marco
>>> ------------------------------
>>> *From:* Imesh Gunaratne [im...@wso2.com]
>>> *Sent:* Wednesday, September 9, 2015 18:58
>>> *To:* Monaco Marco
>>> *Cc:* Anuruddha Liyanarachchi; WSO2 Developers' List
>>>
>>> *Subject:* Re: Strange Error in WSO2 Private Paas 4.1.2
>>>
>>> Hi Marco,
>>>
>>> Which Private PaaS distribution are you using? Can you please try
>>> 4.1.0-Alpha:
>>> https://svn.wso2.org/repos/wso2/scratch/PPAAS/wso2ppaas-4.1.0-ALPHA/
>>>
>>> Thanks
>>>
>>> On Wed, Sep 9, 2015 at 11:22 AM, Monaco Marco <ma.mon...@almaviva.it>
>>> wrote:
>>>
>>>> Hi Anuruddha,
>>>>
>>>> many thanks. I did it with the GUI as well, pasting the JSON.
>>>>
>>>> When I do it with the GUI it just hangs for a while and comes back to
>>>> the previous page... nothing is logged, even with DEBUG log level.
>>>>
>>>> Still stuck at this point.
>>>>
>>>> Marco
>>>>
>>>>
>>>> Sent from my Samsung device
>>>>
>>>>
>>>> -------- Original message --------
>>>> From: Anuruddha Liyanarachchi <anurudd...@wso2.com>
>>>> Date: 09/09/2015 07:47 (GMT+01:00)
>>>> To: Monaco Marco <ma.mon...@almaviva.it>
>>>> Cc: WSO2 Developers' List <dev@wso2.org>, im...@wso2.com
>>>> Subject: Re: Strange Error in WSO2 Private Paas 4.1.2
>>>>
>>>> Hi Marco,
>>>>
>>>> This error occurs when the cartridge definition doesn't contain an
>>>> iaasProvider section. I guess the path to your cartridge JSON is
>>>> incorrect in the curl command.
>>>>
>>>> I deployed the same cartridge JSON that you are using and did not
>>>> face any issue.
>>>> Can you switch to the JSON view in the UI, paste the cartridge
>>>> definition and try to add it [1]?
>>>>
>>>> [1]
>>>> https://drive.google.com/file/d/0B0v957zZwVWrQjVWX2x6YzI3Mkk/view?usp=sharing
>>>>
>>>>
>>>> On Wed, Sep 9, 2015 at 12:39 AM, Monaco Marco <ma.mon...@almaviva.it>
>>>> wrote:
>>>>
>>>>> Anuruddha,
>>>>>
>>>>> thank you for the suggestion.
>>>>>
>>>>> I made a git pull from the repo without deleting the DBs (just
>>>>> replacing the files) and it started to spawn new instances, but it
>>>>> complained about public IP addresses and stopped.
>>>>>
>>>>> I made a fresh installation, deleting all SQL DBs, but now we are
>>>>> stuck on another error.
>>>>>
>>>>> I successfully set up the Network Partitions, Autoscale Policies and
>>>>> Deployment Policies again, but when I came to the cartridges Stratos
>>>>> started to misbehave.
>>>>>
>>>>> If I try to set up a cartridge in the GUI it just gets stuck and then
>>>>> comes back to the Add Cartridge page.
>>>>>
>>>>> I tried to use the API (https://127.0.0.1:9443/api/cartridges),
>>>>> sending this JSON:
>>>>>
>>>>> {
>>>>>     "type": "wso2esb-481-manager",
>>>>>     "category": "framework",
>>>>>     "provider": "wso2",
>>>>>     "host": "esb.alma.it",
>>>>>     "displayName": "WSO2 ESB 4.8.1 Manager",
>>>>>     "description": "WSO2 ESB 4.8.1 Manager Cartridge",
>>>>>     "version": "4.8.1",
>>>>>     "multiTenant": false,
>>>>>     "loadBalancingIPType": "private",
>>>>>     "portMapping": [
>>>>>         {
>>>>>             "name": "mgt-http",
>>>>>             "protocol": "http",
>>>>>             "port": 9763,
>>>>>             "proxyPort": 0
>>>>>         },
>>>>>         {
>>>>>             "name": "mgt-https",
>>>>>             "protocol": "https",
>>>>>             "port": 9443,
>>>>>             "proxyPort": 0
>>>>>         },
>>>>>         {
>>>>>             "name": "pt-http",
>>>>>             "protocol": "http",
>>>>>             "port": 8280,
>>>>>             "proxyPort": 0
>>>>>         },
>>>>>         {
>>>>>             "name": "pt-https",
>>>>>             "protocol": "https",
>>>>>             "port": 8243,
>>>>>             "proxyPort": 0
>>>>>         }
>>>>>     ],
>>>>>     "iaasProvider": [
>>>>>         {
>>>>>             "type": "openstack",
>>>>>             "imageId":
>>>>> "RegionOne/c2951a15-47b7-4f9c-a6e0-d3b7a50bc9aa",
>>>>>             "networkInterfaces": [
>>>>>                 {
>>>>>                     "networkUuid":
>>>>> "bd02ca5c-4a57-45c3-8478-db0624829bdb"
>>>>>                 }
>>>>>             ],
>>>>>             "property": [
>>>>>                 {
>>>>>                     "name": "instanceType",
>>>>>                     "value": "RegionOne/3"
>>>>>                 },
>>>>>                 {
>>>>>                     "name": "securityGroups",
>>>>>                     "value": "default"
>>>>>                 },
>>>>>                 {
>>>>>                     "name": "autoAssignIp",
>>>>>                     "value": "true"
>>>>>                 },
>>>>>                 {
>>>>>                     "name": "keyPair",
>>>>>                     "value": "alma-keypair"
>>>>>                 }
>>>>>             ]
>>>>>         }
>>>>>     ],
>>>>>     "property": [
>>>>>         {
>>>>>             "name": "payload_parameter.CONFIG_PARAM_CLUSTERING",
>>>>>             "value": "true"
>>>>>         },
>>>>>         {
>>>>>             "name": "payload_parameter.LB_IP",
>>>>>             "value": "<LOAD_BALANCER_IP>"
>>>>>         }
>>>>>     ]
>>>>> }
>>>>>
>>>>> and getting this response:
>>>>>
>>>>> {"status":"error","message":"IaaS providers not found in cartridge:
>>>>> null"}
>>>>>
>>>>> In the Stratos logs I can see the same error:
>>>>>
>>>>> [2015-09-08 19:00:26,600] ERROR
>>>>> {org.apache.stratos.rest.endpoint.handlers.CustomExceptionMapper} -  IaaS
>>>>> providers not found in cartridge: null
>>>>> org.apache.stratos.rest.endpoint.exception.RestAPIException: IaaS
>>>>> providers not found in cartridge: null
>>>>>         at
>>>>> org.apache.stratos.rest.endpoint.api.StratosApiV41Utils.addCartridge(StratosApiV41Utils.java:126)
>>>>>         at
>>>>> org.apache.stratos.rest.endpoint.api.StratosApiV41.addCartridge(StratosApiV41.java:292)
>>>>>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>>>>         at
>>>>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>>>>>         at
>>>>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>>>>         at java.lang.reflect.Method.invoke(Method.java:606)
>>>>>         at
>>>>> org.apache.cxf.service.invoker.AbstractInvoker.performInvocation(AbstractInvoker.java:180)
>>>>>         at
>>>>> org.apache.cxf.service.invoker.AbstractInvoker.invoke(AbstractInvoker.java:96)
>>>>>         at
>>>>> org.apache.cxf.jaxrs.JAXRSInvoker.invoke(JAXRSInvoker.java:194)
>>>>>         at
>>>>> org.apache.cxf.jaxrs.JAXRSInvoker.invoke(JAXRSInvoker.java:100)
>>>>>         at
>>>>> org.apache.cxf.interceptor.ServiceInvokerInterceptor$1.run(ServiceInvokerInterceptor.java:57)
>>>>>         at
>>>>> org.apache.cxf.interceptor.ServiceInvokerInterceptor.handleMessage(ServiceInvokerInterceptor.java:93)
>>>>>         at
>>>>> org.apache.cxf.phase.PhaseInterceptorChain.doIntercept(PhaseInterceptorChain.java:271)
>>>>>         at
>>>>> org.apache.cxf.transport.ChainInitiationObserver.onMessage(ChainInitiationObserver.java:121)
>>>>>         at
>>>>> org.apache.cxf.transport.http.AbstractHTTPDestination.invoke(AbstractHTTPDestination.java:239)
>>>>>         at
>>>>> org.apache.cxf.transport.servlet.ServletController.invokeDestination(ServletController.java:223)
>>>>>         at
>>>>> org.apache.cxf.transport.servlet.ServletController.invoke(ServletController.java:203)
>>>>>         at
>>>>> org.apache.cxf.transport.servlet.ServletController.invoke(ServletController.java:137)
>>>>>         at
>>>>> org.apache.cxf.transport.servlet.CXFNonSpringServlet.invoke(CXFNonSpringServlet.java:159)
>>>>>         at
>>>>> org.apache.cxf.transport.servlet.AbstractHTTPServlet.handleRequest(AbstractHTTPServlet.java:286)
>>>>>         at
>>>>> org.apache.cxf.transport.servlet.AbstractHTTPServlet.doPost(AbstractHTTPServlet.java:206)
>>>>>         at javax.servlet.http.HttpServlet.service(HttpServlet.java:755)
>>>>>         at
>>>>> org.apache.cxf.transport.servlet.AbstractHTTPServlet.service(AbstractHTTPServlet.java:262)
>>>>>         at
>>>>> org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:303)
>>>>>
>>>>>
>>>>> and I'm not able to add any cartridge, including the simple PHP one.
>>>>>
>>>>> I triple-checked all the IaaS settings in cloud-controller.xml and in
>>>>> the cartridge definitions and they are correct.
>>>>>
>>>>> Any thoughts?
>>>>>
>>>>> Thanks...
>>>>>
>>>>> Marco
>>>>> ------------------------------
>>>>> *From:* Anuruddha Liyanarachchi [anurudd...@wso2.com]
>>>>> *Sent:* Tuesday, September 8, 2015 15:46
>>>>> *To:* Monaco Marco; WSO2 Developers' List
>>>>> *Cc:* im...@wso2.com
>>>>> *Subject:* Re: Strange Error in WSO2 Private Paas 4.1.2
>>>>>
>>>>> [Removing stratos dev and adding wso2dev ]
>>>>>
>>>>> Hi Marco,
>>>>>
>>>>> We have fixed this in the master branch with commit [1]. Please take a
>>>>> pull from master branch or download the alpha release pack from [2].
>>>>>
>>>>> [1]
>>>>> https://github.com/wso2/product-private-paas/commit/54a77e9d85538a20ace00c57ec9e8aed410fb773
>>>>> [2]
>>>>> https://svn.wso2.org/repos/wso2/scratch/PPAAS/wso2ppaas-4.1.0-ALPHA/
>>>>>
>>>>> On Tue, Sep 8, 2015 at 7:06 PM, Monaco Marco <ma.mon...@almaviva.it>
>>>>> wrote:
>>>>>
>>>>>> Hi,
>>>>>>
>>>>>> We have successfully installed WSO2 PPaaS 4.1.2 on our Openstack IaaS
>>>>>> Environment.
>>>>>>
>>>>>> After following this procedure (
>>>>>> https://docs.wso2.com/display/PP410/Deploy+Private+PaaS+in+OpenStack)
>>>>>> we are able to open the Private PaaS console and configure Network
>>>>>> Partitions, Autoscale Policies, Deployments, Cartridges, etc.
>>>>>>
>>>>>> We have problems trying to deploy applications. We tested both PHP
>>>>>> and WSO2 ESB applications, also using the Mock IaaS, but we always
>>>>>> receive the same error:
>>>>>>
>>>>>>
>>>>>> ERROR {org.apache.stratos.autoscaler.rule.AutoscalerRuleEvaluator} -
>>>>>> Unable to Analyse Expression:
>>>>>>
>>>>>> log.debug("[scaling] Number of required instances based on stats: "
>>>>>>     + numberOfRequiredInstances + " [active instances count] "
>>>>>>     + activeInstancesCount + " [network-partition] "
>>>>>>     + clusterInstanceContext.getNetworkPartitionId()
>>>>>>     + " [cluster] " + clusterId);
>>>>>> int nonTerminatedMembers = clusterInstanceContext.getNonTerminatedMemberCount();
>>>>>> if (scaleUp) {
>>>>>>     int clusterMaxMembers = clusterInstanceContext.getMaxInstanceCount();
>>>>>>     if (nonTerminatedMembers < clusterMaxMembers) {
>>>>>>         int additionalInstances = 0;
>>>>>>         if (clusterMaxMembers < numberOfRequiredInstances) {
>>>>>>             additionalInstances = clusterMaxMembers - nonTerminatedMembers;
>>>>>>             log.info("[scale-up] Required member count based on stat based scaling is higher than max, hence"
>>>>>>                 + " notifying to parent for possible group scaling or app bursting. [cluster] " + clusterId
>>>>>>                 + " [instance id]" + clusterInstanceContext.getId() + " [max] " + clusterMaxMembers
>>>>>>                 + " [number of required instances] " + numberOfRequiredInstances
>>>>>>                 + " [additional instances to be created] " + additionalInstances);
>>>>>>             delegator.delegateScalingOverMaxNotification(clusterId,
>>>>>>                 clusterInstanceContext.getNetworkPartitionId(),
>>>>>>                 clusterInstanceContext.getId());
>>>>>>         } else {
>>>>>>             additionalInstances = numberOfRequiredInstances - nonTerminatedMembers;
>>>>>>         }
>>>>>>         clusterInstanceContext.resetScaleDownRequestsCount();
>>>>>>         log.debug("[scale-up] [has scaling dependents] "
>>>>>>             + clusterInstanceContext.hasScalingDependants()
>>>>>>             + " [cluster] " + clusterId);
>>>>>>         if (clusterInstanceContext.hasScalingDependants()) {
>>>>>>             log.debug("[scale-up] Notifying dependencies [cluster] " + clusterId);
>>>>>>             delegator.delegateScalingDependencyNotification(clusterId,
>>>>>>                 clusterInstanceContext.getNetworkPartitionId(),
>>>>>>                 clusterInstanceContext.getId(), numberOfRequiredInstances,
>>>>>>                 clusterInstanceContext.getMinInstanceCount());
>>>>>>         } else {
>>>>>>             boolean partitionsAvailable = true;
>>>>>>             int count = 0;
>>>>>>             String autoscalingReason = (numberOfRequiredInstances == numberOfInstancesReuquiredBasedOnRif)
>>>>>>                 ? "Scaling up due to RIF, [Predicted Value] " + rifPredictedValue + " [Threshold] " + rifThreshold
>>>>>>                 : (numberOfRequiredInstances == numberOfInstancesReuquiredBasedOnMemoryConsumption)
>>>>>>                     ? "Scaling up due to MC, [Predicted Value] " + mcPredictedValue + " [Threshold] " + mcThreshold
>>>>>>                     : "Scaling up due to LA, [Predicted Value] " + laPredictedValue + " [Threshold] " + laThreshold;
>>>>>>             autoscalingReason += " [Number of required instances] " + numberOfRequiredInstances
>>>>>>                 + " [Cluster Max Members] " + clusterMaxMembers
>>>>>>                 + " [Additional instances to be created] " + additionalInstances;
>>>>>>             while (count != additionalInstances && partitionsAvailable) {
>>>>>>                 ClusterLevelPartitionContext partitionContext = (ClusterLevelPartitionContext)
>>>>>>                     partitionAlgorithm.getNextScaleUpPartitionContext(
>>>>>>                         clusterInstanceContext.getPartitionCtxtsAsAnArray());
>>>>>>                 if (partitionContext != null) {
>>>>>>                     log.info("[scale-up] Partition available, hence trying to spawn an instance to scale up!"
>>>>>>                         + " [application id] " + applicationId
>>>>>>                         + " [cluster] " + clusterId + " [instance id] " + clusterInstanceContext.getId()
>>>>>>                         + " [network-partition] " + clusterInstanceContext.getNetworkPartitionId()
>>>>>>                         + " [partition] " + partitionContext.getPartitionId()
>>>>>>                         + " scaleup due to RIF: " + (rifReset && (rifPredictedValue > rifThreshold))
>>>>>>                         + " [rifPredictedValue] " + rifPredictedValue + " [rifThreshold] " + rifThreshold
>>>>>>                         + " scaleup due to MC: " + (mcReset && (mcPredictedValue > mcThreshold))
>>>>>>                         + " [mcPredictedValue] " + mcPredictedValue + " [mcThreshold] " + mcThreshold
>>>>>>                         + " scaleup due to LA: " + (laReset && (laPredictedValue > laThreshold))
>>>>>>                         + " [laPredictedValue] " + laPredictedValue + " [laThreshold] " + laThreshold);
>>>>>>                     log.debug("[scale-up] [partition] " + partitionContext.getPartitionId()
>>>>>>                         + " [cluster] " + clusterId);
>>>>>>                     long scalingTime = System.currentTimeMillis();
>>>>>>                     delegator.delegateSpawn(partitionContext, clusterId,
>>>>>>                         clusterInstanceContext.getId(), isPrimary, autoscalingReason, scalingTime);
>>>>>>                     count++;
>>>>>>                 } else {
>>>>>>                     log.warn("[scale-up] No more partition available even though "
>>>>>>                         + "cartridge-max is not reached!, [cluster] " + clusterId
>>>>>>                         + " Please update deployment-policy with new partitions or with higher "
>>>>>>                         + "partition-max");
>>>>>>                     partitionsAvailable = false;
>>>>>>                 }
>>>>>>             }
>>>>>>         }
>>>>>>     } else {
>>>>>>         log.info("[scale-up] Trying to scale up over max, hence not scaling up cluster itself and"
>>>>>>             + " notifying to parent for possible group scaling or app bursting."
>>>>>>             + " [cluster] " + clusterId + " [instance id]" + clusterInstanceContext.getId()
>>>>>>             + " [max] " + clusterMaxMembers);
>>>>>>         delegator.delegateScalingOverMaxNotification(clusterId,
>>>>>>             clusterInstanceContext.getNetworkPartitionId(),
>>>>>>             clusterInstanceContext.getId());
>>>>>>     }
>>>>>> } else if (scaleDown) {
>>>>>>     if (nonTerminatedMembers > clusterInstanceContext.getMinInstanceCount) {
>>>>>>         log.debug("[scale-down] Decided to Scale down [cluster] " + clusterId);
>>>>>>         if (clusterInstanceContext.getScaleDownRequestsCount() > 2) {
>>>>>>             log.debug("[scale-down] Reached scale down requests threshold [cluster] " + clusterId
>>>>>>                 + " Count " + clusterInstanceContext.getScaleDownRequestsCount());
>>>>>>             if (clusterInstanceContext.hasScalingDependants()) {
>>>>>>                 log.debug("[scale-up] Notifying dependencies [cluster] " + clusterId);
>>>>>>                 delegator.delegateScalingDependencyNotification(clusterId,
>>>>>>                     clusterInstanceContext.getNetworkPartitionId(),
>>>>>>                     clusterInstanceContext.getId(), numberOfRequiredInstances,
>>>>>>                     clusterInstanceContext.getMinInstanceCount());
>>>>>>             } else {
>>>>>>                 MemberStatsContext selectedMemberStatsContext = null;
>>>>>>                 double lowestOverallLoad = 0.0;
>>>>>>                 boolean foundAValue = false;
>>>>>>                 ClusterLevelPartitionContext partitionContext = (ClusterLevelPartitionContext)
>>>>>>                     partitionAlgorithm.getNextScaleDownPartitionContext(
>>>>>>                         clusterInstanceContext.getPartitionCtxtsAsAnArray());
>>>>>>                 if (partitionContext != null) {
>>>>>>                     log.info("[scale-down] Partition available to scale down"
>>>>>>                         + " [application id] " + applicationId
>>>>>>                         + " [cluster] " + clusterId + " [instance id] " + clusterInstanceContext.getId()
>>>>>>                         + " [network-partition] " + clusterInstanceContext.getNetworkPartitionId()
>>>>>>                         + " [partition] " + partitionContext.getPartitionId()
>>>>>>                         + " scaledown due to RIF: " + (rifReset && (rifPredictedValue < rifThreshold))
>>>>>>                         + " [rifPredictedValue] " + rifPredictedValue + " [rifThreshold] " + rifThreshold
>>>>>>                         + " scaledown due to MC: " + (mcReset && (mcPredictedValue < mcThreshold))
>>>>>>                         + " [mcPredictedValue] " + mcPredictedValue + " [mcThreshold] " + mcThreshold
>>>>>>                         + " scaledown due to LA: " + (laReset && (laPredictedValue < laThreshold))
>>>>>>                         + " [laPredictedValue] " + laPredictedValue + " [laThreshold] " + laThreshold);
>>>>>>                     // In partition context member stat context, all the primary members need to be
>>>>>>                     // avoided being selected as the member to terminated
>>>>>


-- 
*Imesh Gunaratne*
Senior Technical Lead
WSO2 Inc: http://wso2.com
T: +94 11 214 5345 M: +94 77 374 2057
W: http://imesh.gunaratne.org
Lean . Enterprise . Middleware
_______________________________________________
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev
