Re: [Architecture] [UUF] Extensible Session Management for UUF

2017-05-01 Thread Imesh Gunaratne
On Tue, May 2, 2017 at 10:44 AM, Dilan Udara Ariyaratne 
wrote:

>
> In the meantime, could you elaborate on the method level details of the
> Session manager interface, too?
>

​SessionManager:​
https://github.com/wso2/carbon-uuf/pull/241/files#diff-50f82419222617b7f14b6d08d45984ac

SessionHandler:
https://github.com/wso2/carbon-uuf/pull/241/files#diff-39f1b4c291c422e721e8c56d191c75e3

Thanks

>
> Cheers,
> Dilan.
>
> *Dilan U. Ariyaratne*
> Senior Software Engineer
> WSO2 Inc. <http://wso2.com/>
> Mobile: +94766405580 <%2B94766405580>
> lean . enterprise . middleware
>
>
> On Mon, May 1, 2017 at 10:39 PM, Shazni Nazeer  wrote:
>
>> It is beneficial to have this in the UUF.
>>
>> Many frameworks (in particular web frameworks such as Django, CakePHP and
>> Ruby on Rails) support this kind of pluggable session management
>> capability.
>>
>> On Fri, Apr 28, 2017 at 3:46 PM, Vidura Nanayakkara 
>> wrote:
>>
>>> Hi All,
>>>
>>> We are in the process of introducing an extensible session management
>>> mechanism for Carbon UUF.
>>>
>>> Previously, session management in Carbon UUF was not extensible and was
>>> tightly coupled to the framework. The purpose of introducing an
>>> extensible session management mechanism is to give web app developers
>>> the ability to plug in any session management implementation of their
>>> choice, for instance a JDBC-backed persistent session manager or a
>>> token-based session manager.
>>>
>>> In order to plug in a custom session manager, one needs to implement the
>>> given `SessionManager` interface. That implementation then needs to be
>>> specified in the `app.yaml` configuration of the particular UUF app.
>>>
>>> Example app.yaml configuration:
>>>
>>> ...
>>>
>>> # Session manager for this app
>>> sessionManager: "org.wso2.carbon.uuf.api.auth.InMemorySessionManager"
>>>
>>> ...
>>>
>>> WDYT?
>>>
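A rough illustration of what plugging in a custom manager could look like
follows. This is only a sketch: the class and method names below are
hypothetical and not the actual SessionManager contract, which is defined
in the PR linked at the top of this thread; the point is simply that
whichever implementation is chosen is referenced by its fully qualified
class name in the sessionManager entry of app.yaml.

// Hypothetical sketch only -- the real SessionManager interface lives in
// the carbon-uuf PR linked above; the method names here are illustrative.
import java.util.Map;
import java.util.Optional;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

public class SimpleInMemorySessionManager {

    // sessionId -> session attributes
    private final Map<String, Map<String, Object>> sessions = new ConcurrentHashMap<>();

    /** Creates a new session and returns its identifier. */
    public String createSession() {
        String sessionId = UUID.randomUUID().toString();
        sessions.put(sessionId, new ConcurrentHashMap<>());
        return sessionId;
    }

    /** Returns the attributes of a session, if the session exists. */
    public Optional<Map<String, Object>> getSession(String sessionId) {
        return Optional.ofNullable(sessions.get(sessionId));
    }

    /** Destroys a session, e.g. on logout or expiry. */
    public void destroySession(String sessionId) {
        sessions.remove(sessionId);
    }
}

A JDBC-backed or token-based variant would keep a similar surface but
persist or derive the session state externally.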
>>>
>>> Best Regards,
>>>
>>> *Vidura Nanayakkara*
>>> Software Engineer
>>>
>>> Email : vidu...@wso2.com
>>> Mobile : +94 (0) 717 919277 <+94%2071%20791%209277>
>>> Web : http://wso2.com
>>> Blog : https://medium.com/@viduran
>>> LinkedIn : https://lk.linkedin.com/in/vidura-nanayakkara
>>>
>>> ___
>>> Architecture mailing list
>>> Architecture@wso2.org
>>> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>>>
>>>
>>
>>
>> --
>> Shazni Nazeer
>>
>> Mob : +94 37331
>> LinkedIn : http://lk.linkedin.com/in/shazninazeer
>> Blog : http://shazninazeer.blogspot.com
>>
>> <http://wso2.com/signature>
>>
>> ___
>> Architecture mailing list
>> Architecture@wso2.org
>> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>>
>>
>
> ___
> Architecture mailing list
> Architecture@wso2.org
> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>
>


-- 
*Imesh Gunaratne*
WSO2 Inc: http://wso2.com
T: +94 11 214 5345 M: +94 77 374 2057 <+94%2077%20374%202057>
W: https://medium.com/@imesh TW: @imesh
lean. enterprise. middleware
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] [UUF] Extensible Authorization for UUF

2017-05-16 Thread Imesh Gunaratne
As we discussed offline I think it would be better to provide a default
implementation for $subject while providing the extension point.

Thanks

On Wed, May 3, 2017 at 10:47 AM, SajithAR Ariyarathna 
wrote:

> Hi All,
>
> We are in the process of introducing an extensible authorizer for Carbon
> UUF.
>
> At the moment authorization is done via the
> org.wso2.carbon.uuf.spi.auth.User interface [1]. When creating a user
> session, an implementation of the User interface (e.g. CaasUser [2]) has
> to be passed. The main drawback of this approach is that the logic in the
> hasPermission() method has to be serializable. This is usually difficult
> to achieve because, in order to evaluate permissions, one might need to
> access user management services (e.g. the Realm Service) which cannot be
> serialized. Hence, moving the hasPermission() method out of the User
> class and allowing a custom authorizer to be plugged in would be a better
> approach.
>
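To make the proposed split concrete, here is a minimal, hypothetical sketch
of the idea: permission evaluation moves to a pluggable authorizer, which
can safely hold non-serializable dependencies because it is never stored in
the session. The interface shape below is an assumption for illustration
only; the actual Authorizer SPI is the one referenced later in this thread.

// Hypothetical sketch only -- not the actual carbon-uuf Authorizer SPI.
public interface Authorizer {
    boolean hasPermission(String userId, String resource, String action);
}

// Example implementation: it may depend on non-serializable services
// (e.g. a realm/user-store service) since the authorizer itself is never
// serialized into the user session.
class UserStoreBackedAuthorizer implements Authorizer {

    private final Object userStoreService; // stand-in for a real service reference

    UserStoreBackedAuthorizer(Object userStoreService) {
        this.userStoreService = userStoreService;
    }

    @Override
    public boolean hasPermission(String userId, String resource, String action) {
        // Evaluate the permission against the user store here;
        // always-deny placeholder for the sketch.
        return false;
    }
}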
> WDYT?
>
> [1] https://github.com/wso2/carbon-uuf/blob/v1.0.0-m14/components/uuf-core/src/main/java/org/wso2/carbon/uuf/spi/auth/User.java#L28
> [2] https://github.com/wso2/carbon-uuf/blob/v1.0.0-m14/samples/osgi-bundles/org.wso2.carbon.uuf.sample.simple-auth.bundle/src/main/java/org/wso2/carbon/uuf/sample/simpleauth/bundle/CaasUser.java
>
> Thanks.
> --
> Sajith Janaprasad Ariyarathna
> Senior Software Engineer; WSO2, Inc.;  http://wso2.com/
> <https://wso2.com/signature>
>



-- 
*Imesh Gunaratne*
WSO2 Inc: http://wso2.com
T: +94 11 214 5345 M: +94 77 374 2057 <+94%2077%20374%202057>
W: https://medium.com/@imesh TW: @imesh
lean. enterprise. middleware
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] [UUF] Extensible Authorization for UUF

2017-05-17 Thread Imesh Gunaratne
On Wed, May 17, 2017 at 11:57 AM, Vidura Nanayakkara 
wrote:


> Since we are not aware of the 'Authorizer' implementations that can be in
> a product (the logic for persisting and retrieving permissions), we cannot
> provide a default implementation of the 'Authorizer'.
>

Thanks Vidura! Would you mind explaining why each product has to implement
its own authorizer?

Thanks
Imesh
​

> This has been documented in the 'Authorizer' interface [1].
>
> [1] https://github.com/wso2/carbon-uuf/blob/3fbf10907747806d6311acef2095e5a8b623e339/components/uuf-core/src/main/java/org/wso2/carbon/uuf/spi/auth/Authorizer.java
>
> Best Regards,
> Vidura Nanayakkara
>
> On Wed, May 17, 2017 at 10:27 AM, Chandana Napagoda 
> wrote:
>
>> Hi Imesh,
>>
>> I think during the offline meeting, we have already discussed about the
>> default implementation.
>>
>> @ViduraN, Can you please elaborate it in here?
>>
>> Regards,
>> Chandana
>>
>> On Wed, May 17, 2017 at 10:08 AM, Imesh Gunaratne  wrote:
>>
>>> As we discussed offline I think it would be better to provide a default
>>> implementation for $subject while providing the extension point.
>>>
>>> Thanks
>>>
>>> On Wed, May 3, 2017 at 10:47 AM, SajithAR Ariyarathna >> > wrote:
>>>
>>>> Hi All,
>>>>
>>>> We are in the process of introducing an extensible authorizer for
>>>> Carbon UUF.
>>>>
>>>> At the moment authorization is done via the org.wso2.carbon.uuf.spi.au
>>>> th.User interface [1]. When creating an user session, implementation
>>>> of the User interface (e.g. CaasUser [2]) should be passed. The main
>>>> drawback of this approach is, the logic in the hasPermission() method
>>>> has to be serializable. Usually this is difficult to achieve because in
>>>> order to evaluate permissions one might need to access some user management
>>>> services (e.g. Realm Service) which cannot be serialized. Hence moving the
>>>> hasPermission() method out of the User class and allowing to plug-in a
>>>> custom authorizer would be a better approach.
>>>>
>>>> WDYT?
>>>>
>>>> [1] https://github.com/wso2/carbon-uuf/blob/v1.0.0-m14/compo
>>>> nents/uuf-core/src/main/java/org/wso2/carbon/uuf/spi/auth/User.java#L28
>>>> [2] https://github.com/wso2/carbon-uuf/blob/v1.0.0-m14/sampl
>>>> es/osgi-bundles/org.wso2.carbon.uuf.sample.simple-auth.bundl
>>>> e/src/main/java/org/wso2/carbon/uuf/sample/simpleauth/bundle
>>>> /CaasUser.java
>>>>
>>>> Thanks.
>>>> --
>>>> Sajith Janaprasad Ariyarathna
>>>> Senior Software Engineer; WSO2, Inc.;  http://wso2.com/
>>>> <https://wso2.com/signature>
>>>>
>>>
>>>
>>>
>>> --
>>> *Imesh Gunaratne*
>>> WSO2 Inc: http://wso2.com
>>> T: +94 11 214 5345 M: +94 77 374 2057 <+94%2077%20374%202057>
>>> W: https://medium.com/@imesh TW: @imesh
>>> lean. enterprise. middleware
>>>
>>>
>>> ___
>>> Architecture mailing list
>>> Architecture@wso2.org
>>> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>>>
>>>
>>
>>
>> --
>> *Chandana Napagoda*
>> Associate Technical Lead
>> WSO2 Inc. - http://wso2.org
>>
>> *Email  :  chand...@wso2.com **Mobile : +94718169299
>> <+94%2071%20816%209299>*
>>
>> *Blog  :http://cnapagoda.blogspot.com <http://cnapagoda.blogspot.com>
>> | http://chandana.napagoda.com <http://chandana.napagoda.com>*
>>
>> *Linkedin : http://www.linkedin.com/in/chandananapagoda
>> <http://www.linkedin.com/in/chandananapagoda>*
>>
>>
>
>
> --
> Best Regards,
>
> *Vidura Nanayakkara*
> Software Engineer
>
> Email : vidu...@wso2.com
> Mobile : +94 (0) 717 919277 <+94%2071%20791%209277>
> Web : http://wso2.com
> Blog : https://medium.com/@viduran
> LinkedIn : https://lk.linkedin.com/in/vidura-nanayakkara
>



-- 
*Imesh Gunaratne*
WSO2 Inc: http://wso2.com
T: +94 11 214 5345 M: +94 77 374 2057
W: https://medium.com/@imesh TW: @imesh
lean. enterprise. middleware
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] [UUF] Extensible Authorization for UUF

2017-05-17 Thread Imesh Gunaratne
Thanks for the clarifications Sajith and Vidura!!

On Wed, May 17, 2017 at 5:46 PM, Vidura Nanayakkara 
wrote:

> Hi Imesh,
>
> Thanks Vidura! Would you mind explaining why each product has to implement
>> it's own authorizer?
>
>
> At the moment, AFAIK there is no common permission model for WSO2
> products. For WSO2 Identity Server, we have [1] and there is currently a
> discussion going on regarding the permission model for WSO2 Message Broker
> 4 in [2]. If we are to decide on a common permission model for WSO2
> products then we can provide a default Authorizer that would be packaged
> with Carbon UUF. Even in that case, we should not use the default
> Authorizer unless it is explicitly specified in the 'app.yaml'
> configuration. The reason is that Carbon UUF is a UI framework and should
> be reusable by any other product (i.e. it should stay loosely coupled).
>
> WDYT?
>
> Also, should we have a common permission model across the platform?
>
> [1] https://github.com/wso2/carbon-identity-mgt
> [2] Architecture mail thread "C5 based permission model for MB-4"
>
> Best Regards,
> Vidura Nanayakkara
>
> On Wed, May 17, 2017 at 4:30 PM, Imesh Gunaratne  wrote:
>
>>
>>
>> On Wed, May 17, 2017 at 11:57 AM, Vidura Nanayakkara 
>> wrote:
>>
>>
>>> Since we are not aware of the 'Authorizer' implementations that can be
>>> in a product (persisting and retrieving permissions logic) we cannot
>>> provide a default implementation to the 'Authorizer'.
>>>
>>
>> Thanks Vidura! Would you mind explaining why each product has to
>> implement it's own authorizer?
>>
>> Thanks
>> Imesh
>> ​
>>
>>> This has been documented in the 'Authorizer' interface [1].
>>>
>>> [1] https://github.com/wso2/carbon-uuf/blob/3fbf10907747806d
>>> 6311acef2095e5a8b623e339/components/uuf-core/src/main/java/
>>> org/wso2/carbon/uuf/spi/auth/Authorizer.java
>>>
>>> Best Regards,
>>> Vidura Nanayakkara
>>>
>>> On Wed, May 17, 2017 at 10:27 AM, Chandana Napagoda 
>>> wrote:
>>>
>>>> Hi Imesh,
>>>>
>>>> I think during the offline meeting, we have already discussed about the
>>>> default implementation.
>>>>
>>>> @ViduraN, Can you please elaborate it in here?
>>>>
>>>> Regards,
>>>> Chandana
>>>>
>>>> On Wed, May 17, 2017 at 10:08 AM, Imesh Gunaratne 
>>>> wrote:
>>>>
>>>>> As we discussed offline I think it would be better to provide a
>>>>> default implementation for $subject while providing the extension point.
>>>>>
>>>>> Thanks
>>>>>
>>>>> On Wed, May 3, 2017 at 10:47 AM, SajithAR Ariyarathna <
>>>>> sajit...@wso2.com> wrote:
>>>>>
>>>>>> Hi All,
>>>>>>
>>>>>> We are in the process of introducing an extensible authorizer for
>>>>>> Carbon UUF.
>>>>>>
>>>>>> At the moment authorization is done via the
>>>>>> org.wso2.carbon.uuf.spi.auth.User interface [1]. When creating an
>>>>>> user session, implementation of the User interface (e.g. CaasUser [2]) 
>>>>>> should
>>>>>> be passed. The main drawback of this approach is, the logic in the
>>>>>> hasPermission() method has to be serializable. Usually this is
>>>>>> difficult to achieve because in order to evaluate permissions one might
>>>>>> need to access some user management services (e.g. Realm Service) which
>>>>>> cannot be serialized. Hence moving the hasPermission() method out of
>>>>>> the User class and allowing to plug-in a custom authorizer would be
>>>>>> a better approach.
>>>>>>
>>>>>> WDYT?
>>>>>>
>>>>>> [1] https://github.com/wso2/carbon-uuf/blob/v1.0.0-m14/compo
>>>>>> nents/uuf-core/src/main/java/org/wso2/carbon/uuf/spi/auth/Us
>>>>>> er.java#L28
>>>>>> [2] https://github.com/wso2/carbon-uuf/blob/v1.0.0-m14/sampl
>>>>>> es/osgi-bundles/org.wso2.carbon.uuf.sample.simple-auth.bundl
>>>>>> e/src/main/java/org/wso2/carbon/uuf/sample/simpleauth/bundle
>>>>>> /CaasUser.java
>>>>>>
>>>>>> Thanks.

Re: [Architecture] [Kubernetes] Improving Kubernetes Deployment Support for WSO2 Products

2017-07-20 Thread Imesh Gunaratne
On Thu, Jul 20, 2017 at 6:06 AM, Isuru Haththotuwa  wrote:

> [ += Architecture ]
>
> On Thu, Jul 20, 2017 at 2:57 PM, Isuru Haththotuwa 
> wrote:
>
>> Hi Dilan,
>>
>> Apologies for the delayed response.
>>
>> A couple of changes that I can think of:
>>
>>    - Rather than using Puppet to build the Docker images, use a simple
>>    copy-based approach.
>>       - IMHO using Puppet to build Docker images is overkill. We can have
>>       one base image per product, and build the images relevant to
>>       product-specific deployment patterns extending from the base image.
>>
>>
​+1 Isuru! We also had an offline discussion on this with Lakmal. Better to
make the usage of Puppet optional for K8s-based deployments.

Thanks
Imesh​

>
>>    - Without using an intermediate set of scripts to build Docker images
>>    (currently we have our own Docker build and run scripts), let the user
>>    directly use the Docker API for building images, running them, etc.
>>       - For a Docker user it's more natural to use the Docker API.
>>       Additionally, there would be no need to maintain our own build
>>       scripts.
>>
>> Please share your thoughts.
>> I have started this effort for APIM with the new deployment patterns
>> discussed in thread [1] in the APIM group, and the WIP artifacts can be
>> found at [2] and [3]. Note that these artifacts are not finalized.
>>
>> [1]. API-M perf results to share with a customer
>> [2]. https://github.com/isurulucky/docker-apim/tree/new-deployment-patterns
>> [3]. https://github.com/isurulucky/kubernetes-apim/tree/new-deployment-patterns
>>
>> On Thu, Jul 13, 2017 at 7:56 PM, Dilan Udara Ariyaratne 
>> wrote:
>>
>>>
>>> -- Forwarded message --
>>> From: Dilan Udara Ariyaratne 
>>> Date: Thu, Jul 13, 2017 at 7:48 PM
>>> Subject: [Architecture] [Kubernetes] Improving Kubernetes Deployment
>>> Support for WSO2 Products
>>> To: architecture 
>>>
>>>
>>> Hi All,
>>>
>>> I am currently working on $subject. The initial idea is to deliver a
>>> fully automated, stable Kubernetes deployment experience for the end
>>> users of both the WSO2 EI and APIM products, so that during the process
>>> we can understand how to improve kubernetes-common, the base Kubernetes
>>> deployment enabling layer, as a platform.
>>>
>>> Earlier, with our old product strategy, we maintained a repository called
>>> kubernetes-artifacts <https://github.com/wso2/kubernetes-artifacts>, where
>>> all the product-specific artifacts were kept.
>>>
>>> Now, we have split this out into two levels, namely:
>>> [1] kubernetes-common <https://github.com/wso2/kubernetes-common> - the
>>> base Kubernetes deployment enabling layer
>>> [2] kubernetes-<product>, e.g. kubernetes-ei - product-specific Kubernetes
>>> artifacts that integrate with kubernetes-common
>>>
>>> However, with our new product strategy and architectural changes, most of
>>> the product-specific Kubernetes artifacts are not yet done, and WSO2 EI
>>> and WSO2 APIM are two such products.
>>>
>>> While there is an on-going effort by Isuru (Isuruh) on finalizing
>>> kubernetes-apim related deployment artifacts, I will be working on
>>> kubernetes-ei related artifacts
>>> as an effort to improve Kubernetes Deployment Support for WSO2 Products.
>>>
>>> As mentioned above, during this process we will also be assessing ways of
>>> improving the kubernetes-common layer towards a more developer-friendly
>>> framework for building customer-centric, production-ready Kubernetes
>>> deployment artifacts with minimal effort.
>>>
>>> Thanks,
>>> Dilan.
>>>
>>> *Dilan U. Ariyaratne*
>>> Senior Software Engineer
>>> WSO2 Inc. <http://wso2.com/>
>>> Mobile: +94766405580 <%2B94766405580>
>>> lean . enterprise . middleware
>>>
>>>
>>>
>>
>>
>> --
>> Thanks and Regards,
>>
>> Isuru H.
>> +94 716 358 048 <071%20635%208048>* <http://wso2.com/>*
>>
>>
>>
>
>
> --
> Thanks and Regards,
>
> Isuru H.
> +94 716 358 048 <+94%2071%20635%208048>* <http://wso2.com/>*
>
>
>


-- 
*Imesh Gunaratne*
WSO2 Inc: http://wso2.com
T: +94 11 214 5345 M: +94 77 374 2057
W: https://medium.com/@imesh TW: @imesh
lean. enterprise. middleware
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


[Architecture] Using Kubernetes ConfigMaps for Managing Product Configurations

2017-09-04 Thread Imesh Gunaratne
Hi All,

It seems that Kubernetes ConfigMaps can be used for managing product
configurations without having to build Docker images for each pattern with
pattern-specific configurations. This was recently tested with Enterprise
Integrator, and the K8s resources used can be found below:

https://github.com/wso2/kubernetes-ei

Approach:

   - Create a ConfigMap for each configuration folder
   - Use a volume mount to map each ConfigMap to a folder in the Pod
   - Copy the configuration files from the above folders into the product
   config folders at container startup. The reason for this is that
   Kubernetes ConfigMaps currently do not support nested folder structures.
   (A minimal sketch of this flow follows below.)

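To make the flow above concrete, a minimal sketch is shown below. The
resource names, image tag and mount path are examples only (the mount path
mirrors the EI layout discussed later in this thread), not the final
kubernetes-ei artifacts.

# Illustrative sketch only -- names, image tag and paths are examples.
# One ConfigMap is created per configuration folder, e.g.:
#   kubectl create configmap integrator-conf --from-file=conf/
apiVersion: v1
kind: Pod
metadata:
  name: wso2ei-integrator-example
spec:
  containers:
    - name: integrator
      image: wso2ei-integrator:6.1.1            # example image tag
      volumeMounts:
        # Mounted to a staging folder; an entrypoint script copies the files
        # into the actual product config folder at container startup.
        - name: integrator-conf
          mountPath: /home/wso2user/wso2ei-6.1.1-conf/integrator/conf
  volumes:
    - name: integrator-conf
      configMap:
        name: integrator-conf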
Please share your thoughts on this.

Thanks
Imesh

-- 
*Imesh Gunaratne*
Associate Director/Architect
WSO2 Inc: http://wso2.com
T: +94 11 214 5345 M: +94 77 374 2057
W: https://medium.com/@imesh TW: @imesh
lean. enterprise. middleware
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] Using Kubernetes ConfigMaps for Managing Product Configurations

2017-09-06 Thread Imesh Gunaratne
On Tue, Sep 5, 2017 at 10:28 AM, Youcef HILEM 
 wrote:

> Hi Imesh,
>
> That's what I was looking for.
>

Thanks for your feedback Youcef!


> Before this solution, to avoid creating as many docker images as
> environments and components, and taking into account the current limit
> (https://github.com/wso2/kubernetes-apim/issues/15), I planned to use the
> solution (https://github.com/eleks/wso2-dockers)
>

Yes, I went through the above solution and it looks neat! We also
implemented something similar sometime back called Configurator in Python
and later deprecated it:

https://github.com/wso2/private-paas-cartridges/tree/master/common/configurator/modules/distribution

Please try out the ConfigMaps approach and let us know how it works for
you. Thanks again!

On Wed, Sep 6, 2017 at 8:19 PM, Pubudu Gunatilaka  wrote:

> Hi,
>
> I tried ConfigMaps for APIM and encountered the following issues.
>
> 1. When we provide a file as a config, at runtime the file's ownership is
> set to root:root. As we use wso2user, which does not have root privileges,
> we cannot execute the file. This issue came up for the wso2server.sh file.
> For ConfigMaps, we have only read and write permissions. I was able to set
> the required permission on the file using the init script at server
> startup. [1]
>

​I did not try to map the bin folder, let me try it out and get back.​

>
> 2. We lose all the files except the added configuration files.
>
> I mounted the bin directory, and the following is the output of the
> 'ls -al' command for the bin directory. I could see only the wso2server.sh
> file in the bin directory.
>

​Yes, that's the expected behaviour.

Since configmaps use volumes to map files to pods, we would need to use
separate mount paths instead of the actual product folders. If we need to
use the actual product folders as the mount paths, we would need to include
all the files in the configmaps.

In EI we have used following folders for mounting config files:

/home/wso2user/wso2ei-6.1.1-conf/integrator/conf
/home/wso2user/wso2ei-6.1.1-conf/integrator/conf-axis2
/home/wso2user/wso2ei-6.1.1-conf/integrator/conf-datasources

https://github.com/wso2/kubernetes-ei/blob/master/pattern-1/integrator-deployment.yaml#L39

At the container startup we copy them to the relevant product folders:

https://github.com/wso2/kubernetes-ei/blob/master/dockerfiles/integrator/Dockerfile#L24

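The startup copy itself is only a few commands. The outline below is
illustrative; the authoritative version is in the Dockerfile linked above,
and the product home path and startup script name are assumptions.

#!/bin/sh
# Outline only -- see the linked Dockerfile for the actual commands.
# Copy the staged ConfigMap mounts into the product configuration folders,
# then start the server.
CONF_STAGING=/home/wso2user/wso2ei-6.1.1-conf/integrator
PRODUCT_HOME=/home/wso2user/wso2ei-6.1.1     # assumed product home

cp -f ${CONF_STAGING}/conf/*             ${PRODUCT_HOME}/conf/
cp -f ${CONF_STAGING}/conf-axis2/*       ${PRODUCT_HOME}/conf/axis2/
cp -f ${CONF_STAGING}/conf-datasources/* ${PRODUCT_HOME}/conf/datasources/

exec ${PRODUCT_HOME}/bin/integrator.sh       # assumed startup script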
Thanks
Imesh

> As you can see, this has created a symlink for the wso2server.sh file from
> ..data/wso2server.sh location.
>
> drwxrwxrwx.  2 wso2user root  26 Sep  6 14:12
> ..9989_06_09_14_12_05.054012650
> lrwxrwxrwx.  1 wso2user root  31 Sep  6 14:12 ..data ->
> ..9989_06_09_14_12_05.054012650
> lrwxrwxrwx.  1 wso2user root  20 Sep  6 14:12 wso2server.sh ->
> ..data/wso2server.sh
>
> In the sample, I noticed mount paths are not correct [2][3].
>
> It is good if we can limit the number of docker images to 1 and use
> configmaps. But due to the above limitations, I think we need to reconsider
> this approach.
>
> [1] - https://github.com/wso2/kubernetes-apim/blob/2.1.0/base/apim/change_ownership.sh
> [2] - https://github.com/wso2/kubernetes-ei/blob/master/pattern-1/integrator-deployment.yaml#L43
> [3] - https://github.com/wso2/kubernetes-ei/blob/master/pattern-1/integrator-deployment.yaml#L45
>
> Thank you!
>
> On Tue, Sep 5, 2017 at 10:28 AM, Youcef HILEM 
> wrote:
>
>> Hi Imesh,
>>
>> That's what I was looking for.
>> Before this solution, to avoid creating as many docker images as
>> environments and components, and taking into account the current limit
>> (https://github.com/wso2/kubernetes-apim/issues/15), I planned to use the
>> solution (https://github.com/eleks/wso2-dockers)
>> I will start with APIM 2.1.0
>> (https://github.com/wso2/kubernetes-apim/tree/2.1.0).
>>
>> Thanks,
>> Youcef HILEM
>>
>>
>>
>> --
>> Sent from: http://wso2-oxygen-tank.10903.n7.nabble.com/WSO2-Architectur
>> e-f62919.html
>> ___
>> Architecture mailing list
>> Architecture@wso2.org
>> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>>
>
>
>
> --
> *Pubudu Gunatilaka*
> Committer and PMC Member - Apache Stratos
> Senior Software Engineer
> WSO2, Inc.: http://wso2.com
> mobile : +94774078049 <%2B94772207163>
>
>


-- 
*Imesh Gunaratne*
Associate Director/Architect
WSO2 Inc: http://wso2.com
T: +94 11 214 5345 M: +94 77 374 2057 <+94%2077%20374%202057>
W: https://medium.com/@imesh TW: @imesh
lean. enterprise. middleware
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] Using Kubernetes ConfigMaps for Managing Product Configurations

2017-09-06 Thread Imesh Gunaratne
Hi Pubudu,

On Wed, Sep 6, 2017 at 10:25 PM, Pubudu Gunatilaka  wrote:

> Hi Imesh,
>
> I actually used the product paths as the mount paths. If we use a separate
> folder and copy the files, then it will work. I will check the bin folder
> mounting scenario again. If we are going in this direction, we may have to
> do the following for the Docker image.
>

​I just checked mounting the bin folder with EI and it seems to be working
fine. I will share the changes soon.


> 1. Include all the artifacts such as mysql connector, Kubernetes
> membership scheme, etc
>

​Yes, I think this would be helpful for the users as they would not need to
re-create the vanilla product images for different requirements.
​

> 2. Specify file copying from configuration mounts to product folders - I
> think we can parameterize this using an env value.
>

​Do we really need to parameterize these? Wouldn't it be better to add all
the required folders? Maybe what we need is the bin, conf and deployment
folders (including sub-folders)?

Thanks
Imesh


> Thank you!
>
> On Wed, Sep 6, 2017 at 9:44 PM, Imesh Gunaratne  wrote:
>
>> On Tue, Sep 5, 2017 at 10:28 AM, Youcef HILEM 
>>  wrote:
>>
>>> Hi Imesh,
>>>
>>> That's what I was looking for.
>>>
>>
>> Thanks for your feedback Youcef!
>>
>>
>>> Before this solution, to avoid creating as many docker images as
>>> environments and components, and taking into account the current limit
>>> (https://github.com/wso2/kubernetes-apim/issues/15), I planned to use
>>> the
>>> solution (https://github.com/eleks/wso2-dockers)
>>>
>>
>> Yes, I went through the above solution and it looks neat! We also
>> implemented something similar sometime back called Configurator in Python
>> and later deprecated it:
>>
>> https://github.com/wso2/private-paas-cartridges/tree/master/
>> common/configurator/modules/distribution
>>
>> Please try out the ConfigMaps approach and let us know how it works for
>> you. Thanks again!
>>
>> On Wed, Sep 6, 2017 at 8:19 PM, Pubudu Gunatilaka 
>> wrote:
>>
>>> Hi,
>>>
>>> I tried ConfigMaps for APIM and encountered the following issues.
>>>
>>> 1. When we provide a file as a config, at runtime this file permission
>>> is set to root:root. As we use wso2user which does not have root
>>> permissions, we cannot execute the file. This issue came for wso2server.sh
>>> file. For ConfigMaps, we have only read and write permissions. I was able
>>> to set the required permission to the file using the init script at server
>>> startup. [1]
>>>
>>
>> ​I did not try to map the bin folder, let me try it out and get back.​
>>
>>>
>>> 2. Lose all the files except the added configuration files.
>>>
>>> I mounted the bin directory and following is the output of the 'ls-al'
>>> command for bin directory. I could see only the wso2server.sh file in the
>>> bin directory.
>>>
>>
>> ​Yes, that's the expected behaviour.
>>
>> Since configmaps use volumes to map files to pods, we would need to use
>> separate mount paths instead of the actual product folders. If we need to
>> use the actual product folders as the mount paths, we would need to include
>> all the files in the configmaps.
>>
>> In EI we have used following folders for mounting config files:
>>
>> /home/wso2user/wso2ei-6.1.1-conf/integrator/conf
>> /home/wso2user/wso2ei-6.1.1-conf/integrator/conf-axis2
>> /home/wso2user/wso2ei-6.1.1-conf/integrator/conf-datasources
>>
>> https://github.com/wso2/kubernetes-ei/blob/master/pattern-1/
>> integrator-deployment.yaml#L39
>>
>> At the container startup we copy them to the relevant product folders:
>>
>> https://github.com/wso2/kubernetes-ei/blob/master/dockerfile
>> s/integrator/Dockerfile#L24
>>
>> Thanks
>> Imesh
>>
>> As you can see, this has created a symlink for wso2server.sh file from
>>> ..data/wso2server.sh location.
>>>
>>> drwxrwxrwx.  2 wso2user root  26 Sep  6 14:12
>>> ..9989_06_09_14_12_05.054012650
>>> lrwxrwxrwx.  1 wso2user root  31 Sep  6 14:12 ..data ->
>>> ..9989_06_09_14_12_05.054012650
>>> lrwxrwxrwx.  1 wso2user root  20 Sep  6 14:12 wso2server.sh ->
>>> ..data/wso2server.sh
>>>
>>> In the sample, I noticed mount paths are not correct [2][3].
>>>
>>> It is good if we can limit the number of docker images to 1 and use
>>> configmaps. But due to the above limitations, I think we need to
>>> reconsider this approach.

Re: [Architecture] Using Kubernetes ConfigMaps for Managing Product Configurations

2017-09-06 Thread Imesh Gunaratne
Hi Pubudu,

On Thu, Sep 7, 2017 at 12:54 AM, Pubudu Gunatilaka  wrote:

>
> On Wed, Sep 6, 2017 at 11:06 PM, Imesh Gunaratne  wrote:
>
>>
>> ​Do we really need to parameterize these? Wouldn't it be better to add
>> all required folders?​ May be what we need is the bin, conf and deployment
>> folders (including sub folders)?
>>
> Without mentioning all the configuration files in the Dockerfile [1], we
> can copy all the content of wso2ei-integrator-conf to the wso2ei product
> folder. In this way, users can dynamically add any configuration file
> without changing the base Docker image. They only need to add a ConfigMap
> and mount it to the wso2ei-integrator-conf folder.
>

Are you suggesting that we use a single ConfigMap, mount it to a single
conf folder in the pod, and then copy the files one by one to the relevant
folders?

Thanks
​

> In APIM we are using an init script to start the server, configure
> localMemberHost and copy artifacts. This script can be used to copy files.
>
> [1] - https://github.com/wso2/kubernetes-ei/blob/master/dockerfiles/integrator/Dockerfile#L25
> [2] - https://github.com/wso2/kubernetes-apim/blob/2.1.0/base/apim/init_carbon.sh
>
> Thank you!
> --
> *Pubudu Gunatilaka*
> Committer and PMC Member - Apache Stratos
> Senior Software Engineer
> WSO2, Inc.: http://wso2.com
> mobile : +94774078049 <%2B94772207163>
>
>


-- 
*Imesh Gunaratne*
Associate Director/Architect
WSO2 Inc: http://wso2.com
T: +94 11 214 5345 M: +94 77 374 2057 <+94%2077%20374%202057>
W: https://medium.com/@imesh TW: @imesh
lean. enterprise. middleware
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] Using Kubernetes ConfigMaps for Managing Product Configurations

2017-09-07 Thread Imesh Gunaratne
On Thu, Sep 7, 2017 at 10:57 AM, Pubudu Gunatilaka  wrote:

>
> On Thu, Sep 7, 2017 at 10:44 AM, Imesh Gunaratne  wrote:
>
>>
>> Are you suggesting to use a single configmap and mount that to a single
>> conf folder in the pod and then copy file by file to relevant folders?
>>
> As ConfigMaps do not support nested folders, we have to use multiple
> ConfigMaps. Rather than hard-coding the folder names in the Dockerfile
> [1], we can use a script to copy all the files within wso2ei-integrator-conf
> to the wso2ei product folder. As I mentioned before, users will be able to
> add any configuration file which resides within the product folder without
> adding it to the Dockerfile.
>
I discussed this offline with Pubudu and we decided to create separate
folders for each config folder, including sub-folders. The plan is to add
all config folders as ConfigMaps and update the Dockerfile to include
commands to copy those if available. As a result, users will not need to
re-build Docker images to add any of the configurations.

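A minimal sketch of that idea follows; the staging path is the one used
earlier in this thread for EI, while the product home and the folder-name
mapping are assumptions. Each staged folder is copied only if a ConfigMap
has actually been mounted for it, so the same image works with or without
user-supplied configurations.

#!/bin/sh
# Illustrative only -- copy each staged config folder over the product
# distribution if (and only if) a ConfigMap has been mounted for it.
CONF_STAGING=/home/wso2user/wso2ei-6.1.1-conf/integrator   # example path
PRODUCT_HOME=/home/wso2user/wso2ei-6.1.1                   # assumed product home

for staged in "${CONF_STAGING}"/*/ ; do
  [ -d "${staged}" ] || continue
  # Assumed naming convention: a staged folder "conf-axis2" maps to the
  # nested product folder "conf/axis2".
  target="${PRODUCT_HOME}/$(basename "${staged}" | tr '-' '/')"
  mkdir -p "${target}"
  cp -rf "${staged}." "${target}/"
done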
Thanks
Imesh
​


> [1] - https://github.com/wso2/kubernetes-ei/blob/master/
> dockerfiles/integrator/Dockerfile#L25
>
> Thank you!
> --
> *Pubudu Gunatilaka*
> Committer and PMC Member - Apache Stratos
> Senior Software Engineer
> WSO2, Inc.: http://wso2.com
> mobile : +94774078049 <%2B94772207163>
>
>


-- 
*Imesh Gunaratne*
Associate Director/Architect
WSO2 Inc: http://wso2.com
T: +94 11 214 5345 M: +94 77 374 2057
W: https://medium.com/@imesh TW: @imesh
lean. enterprise. middleware
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] Using Kubernetes ConfigMaps for Managing Product Configurations

2017-09-11 Thread Imesh Gunaratne
On Sun, Sep 10, 2017 at 9:54 AM, Youcef HILEM 
wrote:

> Hi All,
>
> Thank you all.
> A PR has just been submitted
> (https://github.com/wso2/kubernetes-apim/pull/27).
> I will be able to start testing on openshift 3.4.
> With this flexibility I can adapt easily and efficiently to our different
> constraints without the cumbersome process of creating as many Docker
> images as before.
>

​Great! Nice to hear that Youcef!

Thanks
Imesh
​

>
> Thanks again.
>
> Youcef HILEM
>
>
>
> --
> Sent from: http://wso2-oxygen-tank.10903.n7.nabble.com/WSO2-Architectur
> e-f62919.html
> ___
> Architecture mailing list
> Architecture@wso2.org
> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>



-- 
*Imesh Gunaratne*
Associate Director/Architect
WSO2 Inc: http://wso2.com
T: +94 11 214 5345 M: +94 77 374 2057 <+94%2077%20374%202057>
W: https://medium.com/@imesh TW: @imesh
lean. enterprise. middleware
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] [APIM] Supporting Thrift protocol for GW-KM communication with Load Balancing

2017-09-12 Thread Imesh Gunaratne
I had an offline discussion with Suho on supporting TCP load balancing for
Thrift.

As we see it, we can achieve this simply by updating the DataBridge
component to initiate a new session when the load balancer switches a TCP
connection from one backend Thrift node to another. We might not need to
replicate the sessions.

Thanks
Imesh

On Fri, Sep 1, 2017 at 12:25 AM, Asela Pathberiya  wrote:

> Hi APIM team,
>
> According to the docs, we are not recommending the Thrift protocol for
> communication between the GW and KM, even when a TCP load balancer is used.
>
> The problem is that the Thrift connection must be authenticated, and the
> Thrift session is not replicated among Key Manager nodes.
>
> IMO, we have three solutions for this:
>
> 1.  Replicate thrift session in KM nodes
>
> 2.  Client side load balancing
>
> 3. Sending authentication credentials from GW to KM in every request.
> This has been implemented in WSO2IS for XACML PDP.  You can find the
> details [1] & sample thrift client [2]
>
> We can easily implement approach 3. Shall we consider this for the next
> APIM release?
>
> [1] http://xacmlinfo.org/2014/04/11/thrift-load-balancing/
> [2] https://svn.wso2.org/repos/wso2/people/asela/xacml/pep/thrift-LB
>
> Thanks,
> Asela.
>
>
> --
> Thanks & Regards,
> Asela
>
> ATL
> Mobile : +94 777 625 933 <+94%2077%20762%205933>
>  +358 449 228 979
>
> http://soasecurity.org/
> http://xacmlinfo.org/
>



-- 
*Imesh Gunaratne*
Associate Director/Architect
WSO2 Inc: http://wso2.com
T: +94 11 214 5345 M: +94 77 374 2057 <+94%2077%20374%202057>
W: https://medium.com/@imesh TW: @imesh
lean. enterprise. middleware
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] [C5] Collect config-docs in all featues when packaging product distribution

2017-09-12 Thread Imesh Gunaratne
On Wed, Aug 30, 2017 at 11:07 PM, Danesh Kuruppu  wrote:

> ​...​
>
> ├── distribution
>     └── target
>         ├── config-docs
>         │   ├── secure-vault.yaml
>         │   └── wso2.carbon.yaml
>         └── wso2carbon-kernel-5.2.0-SNAPSHOT.zip
>
> So when generating the product distribution, we automatically get all the
> configuration files used in the product. This will also help when creating
> the product documentation.
>

​+1 Wouldn't we need to ship these files?​ Are we planning to add them to
the documentation?

Thanks
Imesh


>
> Appreciate your input on this.
>
> 1. http://wso2-oxygen-tank.10903.n7.nabble.com/Carbon-C5-
> Server-Configuration-Model-td144549.html
>
> Thanks
> --
>
> *Danesh Kuruppu*
> Senior Software Engineer | WSO2
>
> Email: dan...@wso2.com
> Mobile: +94 (77) 1690552 <+94%2077%20169%200552>
> Web: WSO2 Inc <https://wso2.com/signature>
>
>
> ___
> Architecture mailing list
> Architecture@wso2.org
> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>
>


-- 
*Imesh Gunaratne*
Associate Director/Architect
WSO2 Inc: http://wso2.com
T: +94 11 214 5345 M: +94 77 374 2057
W: https://medium.com/@imesh TW: @imesh
lean. enterprise. middleware
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] [C5] Collect config-docs in all featues when packaging product distribution

2017-09-12 Thread Imesh Gunaratne
On Tue, Sep 12, 2017 at 9:10 PM, Thusitha Thilina Dayaratne <
thusit...@wso2.com> wrote:

> Hi Imesh,
>
> ​+1 Wouldn't we need to ship these files?​ Are we planning to add them to
>> the documentation?
>
> AFAIK the idea is for the product team to check all the relevant configs
> for a product, create a deployment.yaml manually, and ship that with the
> product.
>

​Thanks for the clarification Thusitha!​

>
> Doc team is working on a project to document all the relevant configs per
> product based on the generated config files.
>
> Thanks
> Thusitha
>
> On Wed, Sep 13, 2017 at 8:00 AM, Imesh Gunaratne  wrote:
>
>> On Wed, Aug 30, 2017 at 11:07 PM, Danesh Kuruppu  wrote:
>>
>>> ​...​
>>>
>>> ├── distribution
>>>     └── target
>>>         ├── config-docs
>>>         │   ├── secure-vault.yaml
>>>         │   └── wso2.carbon.yaml
>>>         └── wso2carbon-kernel-5.2.0-SNAPSHOT.zip
>>>
>>> So when generating product distribution, we automatically get all
>>> configuration files used in the product. This will also help when creating
>>> product document.
>>>
>>
>> ​+1 Wouldn't we need to ship these files?​ Are we planning to add them to
>> the documentation?
>>
>> Thanks
>> Imesh
>>
>>
>>>
>>> Appreciate your input on this.
>>>
>>> 1. http://wso2-oxygen-tank.10903.n7.nabble.com/Carbon-C5-Server
>>> -Configuration-Model-td144549.html
>>>
>>> Thanks
>>> --
>>>
>>> *Danesh Kuruppu*
>>> Senior Software Engineer | WSO2
>>>
>>> Email: dan...@wso2.com
>>> Mobile: +94 (77) 1690552 <+94%2077%20169%200552>
>>> Web: WSO2 Inc <https://wso2.com/signature>
>>>
>>>
>>> ___
>>> Architecture mailing list
>>> Architecture@wso2.org
>>> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>>>
>>>
>>
>>
>> --
>> *Imesh Gunaratne*
>> Associate Director/Architect
>> WSO2 Inc: http://wso2.com
>> T: +94 11 214 5345 M: +94 77 374 2057 <+94%2077%20374%202057>
>> W: https://medium.com/@imesh TW: @imesh
>> lean. enterprise. middleware
>>
>>
>> ___
>> Architecture mailing list
>> Architecture@wso2.org
>> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>>
>>
>
>
> --
> Thusitha Dayaratne
> WSO2 Inc. - lean . enterprise . middleware |  wso2.com
>
> Mobile  +94712756809 <+94%2071%20275%206809>
> Blog  alokayasoya.blogspot.com
> Abouthttp://about.me/thusithathilina
> <http://wso2.com/signature>
>
>
> ___
> Architecture mailing list
> Architecture@wso2.org
> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>
>


-- 
*Imesh Gunaratne*
WSO2 Inc: http://wso2.com
T: +94 11 214 5345 M: +94 77 374 2057 <+94%2077%20374%202057>
W: https://medium.com/@imesh TW: @imesh
lean. enterprise. middleware
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] Using Kubernetes ConfigMaps for Managing Product Configurations

2017-09-12 Thread Imesh Gunaratne
On Tue, Sep 12, 2017 at 10:43 PM, Pubudu Gunatilaka 
wrote:

> Hi Lakmal,
>
> Yes, it is the base image. As we discussed offline, I will add the jks
> files to the base image. We will update the documentation with this
> information.
>

​+1​

>
> Thank you!
>
> On Wed, Sep 13, 2017 at 11:10 AM, Lakmal Warusawithana 
> wrote:
>
>> You mean [2] base right? +1 for option one.
>>
>> [2] https://github.com/wso2/kubernetes-apim/tree/2.1.0/base/apim
>>
>> On Wed, Sep 13, 2017 at 10:53 AM, Pubudu Gunatilaka 
>> wrote:
>>
>>> Hi,
>>>
>>> Config maps do not support binary files [1] at the moment. How do we
>>> handle binary files such as jks?
>>>
>>> *Option 1*
>>>
>>> Add those files to the base image. Then we can pass an ARG and based on
>>> that we can replace the product vanilla files.
>>>
>>> *Option 2*
>>>
>>> Create another docker image for all the patterns based on the base
>>> image. Maybe we can include any pattern specific artifacts and binary files
>>> in this.
>>>
>>> Appreciate your thoughts on this.
>>>
>>> [1] - https://github.com/kubernetes/kubernetes/issues/32432
>>>
>>> Thank you!
>>>
>>> On Tue, Sep 12, 2017 at 12:05 PM, Imesh Gunaratne 
>>> wrote:
>>>
>>>>
>>>>
>>>> On Sun, Sep 10, 2017 at 9:54 AM, Youcef HILEM 
>>>> wrote:
>>>>
>>>>> Hi All,
>>>>>
>>>>> Thank you all.
>>>>> A PR has just been submitted
>>>>> (https://github.com/wso2/kubernetes-apim/pull/27).
>>>>> I will be able to start testing on openshift 3.4.
>>>>> With this flexibility I can really adapt easily and efficiently to our
>>>>> different constraints without the cumbersome to create as many docker
>>>>> images
>>>>> as it was before.
>>>>>
>>>>
>>>> ​Great! Nice to hear that Youcef!
>>>>
>>>> Thanks
>>>> Imesh
>>>> ​
>>>>
>>>>>
>>>>> Thanks again.
>>>>>
>>>>> Youcef HILEM
>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> Sent from: http://wso2-oxygen-tank.10903.
>>>>> n7.nabble.com/WSO2-Architecture-f62919.html
>>>>> ___
>>>>> Architecture mailing list
>>>>> Architecture@wso2.org
>>>>> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>>>>>
>>>>
>>>>
>>>>
>>>> --
>>>> *Imesh Gunaratne*
>>>> Associate Director/Architect
>>>> WSO2 Inc: http://wso2.com
>>>> T: +94 11 214 5345 M: +94 77 374 2057 <+94%2077%20374%202057>
>>>> W: https://medium.com/@imesh TW: @imesh
>>>> lean. enterprise. middleware
>>>>
>>>>
>>>> ___
>>>> Architecture mailing list
>>>> Architecture@wso2.org
>>>> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>>>>
>>>>
>>>
>>>
>>> --
>>> *Pubudu Gunatilaka*
>>> Committer and PMC Member - Apache Stratos
>>> Senior Software Engineer
>>> WSO2, Inc.: http://wso2.com
>>> mobile : +94774078049 <%2B94772207163>
>>>
>>>
>>
>>
>> --
>> Lakmal Warusawithana
>> Senior Director - Cloud Architecture; WSO2 Inc.
>> Mobile : +94714289692 <+94%2071%20428%209692>
>> Blogs : https://medium.com/@lakwarus/
>> http://lakmalsview.blogspot.com/
>>
>>
>>
>
>
> --
> *Pubudu Gunatilaka*
> Committer and PMC Member - Apache Stratos
> Senior Software Engineer
> WSO2, Inc.: http://wso2.com
> mobile : +94774078049 <%2B94772207163>
>
>


-- 
*Imesh Gunaratne*
WSO2 Inc: http://wso2.com
T: +94 11 214 5345 M: +94 77 374 2057
W: https://medium.com/@imesh TW: @imesh
lean. enterprise. middleware
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] [Deployment] [Puppet] Upgrading WSO2 Puppet Modules to Puppet 5.x from 3.x

2017-11-19 Thread Imesh Gunaratne
On Fri, Nov 17, 2017 at 5:59 PM, Dilan Udara Ariyaratne 
wrote:

>
> It was possible to fix this remaining issue by accessing the manifest
> variables in .erb files using the scope['variable'] notation [1] instead of
> the existing @variable local scope notation [2].
> With this fix [3], we should now be able to successfully run WSO2 puppet
> modules on top of puppet 5.x with minimal changes [4] that would be
> compatible until the next puppet major release 6.0.
>
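In other words, a minimal illustration of the notation change (the
variable name below is only an example):

<%# Puppet 3.x local-scope notation: %>
carbon_home = <%= @carbon_home %>

<%# Notation that also works on Puppet 5.x: %>
carbon_home = <%= scope['carbon_home'] %>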
Great work Dilan! As we discussed offline, maybe we can now move these
changes to the WSO2 Puppet repositories, including the Puppet Docker
Compose script in the puppet-common repository, and replace the existing
Vagrant script with that.

Thanks
Imesh
​

> [1] - https://puppet.com/docs/puppet/5.3/lang_template_erb.html#scopevariable-or-scopelookupvarvariable
> [2] - https://puppet.com/docs/puppet/5.3/lang_template_erb.html#variable
> [3] - https://github.com/DilanUA/wso2-puppet-modules-5x-upgrade/commit/a9b27970d2501d2ceb5cb424a11aa9f6a15966a3
> [4] - https://github.com/DilanUA/wso2-puppet-modules-5x-upgrade
>
> Thanks,
> Dilan.
>
>


-- 
*Imesh Gunaratne*
WSO2 Inc: http://wso2.com
T: +94 11 214 5345 M: +94 77 374 2057 <+94%2077%20374%202057>
W: https://medium.com/@imesh TW: @imesh
lean. enterprise. middleware
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


[Architecture] Proposal to Use a Single Set of WSO2 Docker Images for All Container Platforms

2018-01-22 Thread Imesh Gunaratne
Hi All,

Currently, we build Docker images for each platform (Docker, Kubernetes,
DC/OS, etc.) for each WSO2 product profile (EI: Integrator, MB, BPS; API-M:
Gateway, Key Manager, Pub/Store, etc.). AFAIU, the main reasons for doing
this were bundling platform-specific JAR files (the membership scheme JAR
file for clustering) and platform-specific filesystem security permission
management (mainly for OpenShift).

With the recent refinements we made to the Dockerfiles and Docker Compose
templates, we found that the same set of Docker images can be used on all
container platforms if we follow the approach below:

   - Create the product profile Docker images by including the product
   distribution, and the JDK.
   - Provide configurations using volume mounts (on Kubernetes use
   ConfigMaps)
   - Provide JAR files and other binary files using volume mounts
   - Use a standard permission model for accessing volume mounts at runtime
   (see the sketch after this list):
      - Use a non-root user to start the container: wso2carbon (uid: 200)
      - Use a non-root user group: wso2 (gid: 200) and add the wso2carbon
      user to the wso2 group
      - Grant the required filesystem access on the product home directory
      to the wso2 user group
      - Use the wso2 user group (gid: 200) to provide access to the volume
      mounts at runtime:
         - On Kubernetes we can use Pod Security Policies to manage these
         permissions
         - On OpenShift this can be managed using Security Context
         Constraints
         - On DC/OS volumes can be directly granted to user group gid: 200.

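As one illustration of how that user/group contract could be expressed on
Kubernetes, a sketch is shown below. The uid/gid values mirror the proposal
above; the resource names, image and mount path are examples, not final
artifacts.

# Sketch only -- resource names, image and mount path are examples.
apiVersion: v1
kind: Pod
metadata:
  name: wso2-product-example
spec:
  securityContext:
    runAsUser: 200        # wso2carbon user
    fsGroup: 200          # wso2 group; applied to the mounted volumes
  containers:
    - name: wso2-product
      image: wso2-product-profile:1.0.0        # example image
      volumeMounts:
        - name: product-conf
          mountPath: /home/wso2carbon/wso2-config-volume   # example path
  volumes:
    - name: product-conf
      configMap:
        name: product-conf
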
Really appreciate your thoughts on this proposal.

Thanks
Imesh

-- 
*Imesh Gunaratne*
WSO2 Inc: http://wso2.com
T: +94 11 214 5345 M: +94 77 374 2057
W: https://medium.com/@imesh TW: @imesh
lean. enterprise. middleware
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] Proposal to Use a Single Set of WSO2 Docker Images for All Container Platforms

2018-01-22 Thread Imesh Gunaratne
On Mon, Jan 22, 2018 at 2:46 PM, Pubudu Gunatilaka  wrote:

> Hi Imesh,
>
> It is very convenient if we can reuse the docker image. AFAIU if we follow
> the above approach we can use a single docker image in all the container
> platforms.
>
> One of the drawbacks I see with this approach is that the user has to
> update the volume mounts with the necessary jar files, JKS files, etc. If
> any user tries this approach in Kubernetes, he has to add those jar files
> and binary files to the NFS server (To the volume which holds NFS server
> data). This affects the installation experience.
>
> IMHO, we should minimize the effort in trying out the WSO2 products in
> Kubernetes or any container platform. Based on the user need, he can switch
> to their own deployment approach.
>

Thanks for the quick response Pubudu! Yes, that's a valid concern. With the
proposed approach the user would need to execute an extra step to copy the
required files to a set of volume mounts before executing the deployment.
In a production deployment I think that would be acceptable, as there will
be more manual steps involved such as creating databases, setting up CI/CD,
deployment automation, etc. However, in an evaluation scenario, when
someone is executing a demo, it might become an overhead.

I also noticed that the kubectl cp command can be used to copy files from a
local machine to a container. Let's check whether we can use that approach
to overcome this issue:
https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#cp

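For example, copying a JDBC driver into a running container during an
evaluation could look roughly like this; the pod name, namespace and target
path are placeholders.

# Placeholder pod name, namespace and target path -- adjust for the
# actual deployment.
kubectl cp ./mysql-connector-java.jar \
    wso2/wso2apim-0:/home/wso2carbon/wso2am/repository/components/lib/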
On Mon, Jan 22, 2018 at 3:03 PM, Isuru Haththotuwa  wrote:

> In the API Manager K8s artifacts, what we have followed is not an
> image-per-profile method. With the introduction of ConfigMaps, it has come
> down to only two base images - for APIM and Analytics. It's extremely
> helpful from the maintenance PoV that we have a single set of Dockerfiles,
> but AFAIU it has a tradeoff with the automation level, since the user
> might have manual steps to perform.
>

​Thanks Isuru for the quick response! What I meant by image per profile is
that, in products like EI and SP, we would need a Docker image per profile
due to their design.​

>
> It would still be possible to write a wrapper script for a single set of
> Dockerfiles so that we can copy the artifacts, etc. using a single Docker
> image, but that script would still need to be maintained.
>

​A good point! I think it would be better to have a one to one mapping
between the Docker images and the Dockerfiles to make it easier for users
to understand how Docker images are built.​

>
> What if we go for a hybrid mode - not using a Dockerfile per product
> profile or a single set of Dockerfiles for all, but a specific set of
> Dockerfiles per platform (Kubernetes, DC/OS, etc.)? Also, we need to be
> open to any other platform that we would need to support in the future.
>

​Yes, I think that's what we have at the moment and with that approach
for every other container platform we need to support, we need to create a new
set of Docker images.

Thanks
Imesh

>
> On Mon, Jan 22, 2018 at 1:36 PM, Imesh Gunaratne  wrote:
>
>> Hi All,
>>
>> Currently, we build Docker images for each platform (Docker, Kubernetes,
>> DC/OS, etc) for each WSO2 product profile (EI: Integrator, MB, BPS; API-M:
>> Gateway, Key Manager, Pub/Store, etc). AFAIU, the main reason to do this
>> was bundling platform specific JAR files (membership scheme JAR file for
>> clustering) and platform specific filesystem security permission management
>> (mainly for OpenShift).
>>
>> With the recent refinements we did in Dockerfiles, Docker Compose
>> templates we found that the same set of Docker images can be used in all
>> container platforms if we follow below approach:
>>
>>- Create the product profile Docker images by including the product
>>distribution, and the JDK.
>>- Provide configurations using volume mounts (on Kubernetes use
>>ConfigMaps)
>>- Provide JAR files and other binary files using volume mounts
>>- Use a standard permission model for accessing volume mounts in
>>runtime:
>>   - Use a none root user to start the container: wos2carbon (uid:
>>   200)
>>   - Use a none root user group: wso2 (gid: 200) and add wso2carbon
>>   user to wso2 group
>>   - Grant required filesystem access to wso2 user group to the
>>   product home directory
>>   - Use wso2 user group (using gid: 200) to provide access to the
>>   volume mounts in runtime:
>>  - On Kubernetes we can use Pod Security Policies to manage
>>      these permissions
>>  - On OpenShift this can be managed using Security Context
>>  Constraints
>

[Architecture] Resources for Deploying WSO2 API-M on DC/OS v1.10

2018-01-24 Thread Imesh Gunaratne
Hi All,

I created resources for deploying WSO2 API-M on DC/OS v1.10:
https://github.com/wso2/dcos-apim/pull/2

This includes three Marathon applications for orchestrating one MySQL
instance, one API-M instance and one API-M Analytics instance. The MySQL
Marathon application is only intended for evaluation purposes, for
enterprise deployments a production ready RDBMS would need to be used.

Moving forward, we need to improve this and support API-M deployment
pattern 1 by adding one more API-M instance and one additional API-M
Analytics instance. Afterwards similar resources need to be created for all
five API-M deployment patterns.

Compared to DC/OS v1.7, I noticed the following improvements in DC/OS v1.10:

   - Container to container communication with overlay networking
   - Service discovery with Minuteman
   - Built-in Marathon runtime

Please try this out and let me know your thoughts.

[1] https://docs.mesosphere.com/1.10/
[2] https://github.com/dcos/minuteman

Thanks
Imesh

-- 
*Imesh Gunaratne*
WSO2 Inc: http://wso2.com
T: +94 11 214 5345 M: +94 77 374 2057
W: https://medium.com/@imesh TW: @imesh
lean. enterprise. middleware
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] Proposal to Use a Single Set of WSO2 Docker Images for All Container Platforms

2018-01-28 Thread Imesh Gunaratne
On Mon, Jan 29, 2018 at 6:58 AM, Muhammed Shariq  wrote:

> Hi Imesh / all,
>
> Personally, I think the best option going forward is to maintain a single
> set of Docker images across all platforms. It's true that there is a
> concern about users having to do more work, but in reality, users will
> have to do quite a lot of config changes such as copying JDBC drivers,
> creating key stores and updating hostnames, etc., right? As long as we
> provide a clean option, which is using volume mounts (or, in the K8s case,
> ConfigMaps), we should be good.
>
> For the evaluation / demo case, users can use the docker-compose artifacts
> that's already preconfigured and ready to go.
>
> While it might seem attractive to maintain images per platform, I think it
> would be very costly and hard to maintain in the long run. In the future,
> we would have to do things like running some scans etc on the built images
> before making it available to pull. Having to identify issues across many
> different platforms and fixing them one by one would be cumbersome.
>
> I would suggest we go with a single set of images for all platforms and
> then create per platform images if the need arises.
>

​I completely agree with Shariq!

The reason for starting this thread was that when we started creating DC/OS
Docker images, we found that we could simply use the Docker images created
in the product Docker Git repositories (github.com/wso2/docker-)
without making any changes, except for adding a user group ID for managing
volume permissions (which would be common to any container platform).

This model will allow us to efficiently manage WSO2 Docker images by
creating only one image per product version (in EI and SP there will be one
per profile) and using those on all container platforms by following the
above best practices. Most importantly, it will work well when releasing
updates/patches.

Regarding the concern of having additional steps for copying files to
volumes, let's do a quick POC and see whether we can find a better way to
overcome that problem in each platform for evaluation scenarios.

@Lakmal: Would you mind sharing your thoughts on this?

Thanks
Imesh

>
>
>
> On Mon, Jan 22, 2018 at 5:30 PM, Imesh Gunaratne  wrote:
>
>> On Mon, Jan 22, 2018 at 2:46 PM, Pubudu Gunatilaka 
>> wrote:
>>
>>> Hi Imesh,
>>>
>>> It is very convenient if we can reuse the docker image. AFAIU if we
>>> follow the above approach we can use a single docker image in all the
>>> container platforms.
>>>
>>> One of the drawbacks I see with this approach is that the user has to
>>> update the volume mounts with the necessary jar files, JKS files, etc. If
>>> any user tries this approach in Kubernetes, he has to add those jar files
>>> and binary files to the NFS server (To the volume which holds NFS server
>>> data). This affects the installation experience.
>>>
>>> IMHO, we should minimize the effort in trying out the WSO2 products in
>>> Kubernetes or any container platform. Based on the user need, he can switch
>>> to their own deployment approach.
>>>
>>
>> Thanks for the quick response Pubudu! Yes, that's a valid concern. With
>> the proposed approach user would need to execute an extra step to copy
>> required files to a set of volume mounts before executing the deployment.
>> In a production deployment I think that would be acceptable as there will
>> be more manual steps involved such as creating databases, setting up CI/CD,
>> deployment automation, etc. However, in an evaluation scenario when someone
>> is executing a demo it might become an overhead.
>>
>> I also noticed that kubectl cp command can be used to copy files from a
>> local machine to a container. Let's check whether we can use that approach
>> to overcome this issue:
>> https://kubernetes.io/docs/reference/generated/kubectl/kubec
>> tl-commands#cp
>>
>> On Mon, Jan 22, 2018 at 3:03 PM, Isuru Haththotuwa 
>> wrote:
>>
>>> In API Manager K8s artifacts, what we have followed is not having an
>>> image-per-profile method. With the introduction of Config Maps, it has
>>> become only two base images - for APIM and Analytics. Its extremely helpful
>>> from the maintenance PoV that we have a single set of Dockerfiles, but has
>>> a tradeoff with automation level AFAIU, since the user might have manual
>>> steps to perform.
>>>
>>
>> ​Thanks Isuru for the quick response! What I meant by image per profile
>> is that, in products like EI and SP, we would need a Docker image per
>> profile due to their design.​
>>
>>>
>>> It would still be possible to write a wrapper script for a single set of
>>> Dockerfiles so that we can copy the artifacts, etc. using a single Docker
>>> image, but that script would still need to be maintained.

Re: [Architecture] Resources for Deploying WSO2 API-M on DC/OS v1.10

2018-01-28 Thread Imesh Gunaratne
In line with $subject, we have now moved the previously implemented Mesos
Membership Scheme to the dcos-common repository, preserving its commit
history: https://github.com/wso2/dcos-common

We have also renamed the term Mesos to DC/OS in the following pull request:
https://github.com/wso2/dcos-common/pull/2

Really appreciate your thoughts on this.

Thanks
Imesh

On Wed, Jan 24, 2018 at 9:06 PM, Imesh Gunaratne  wrote:

> Hi All,
>
> I created resources for deploying WSO2 API-M on DC/OS v1.10:
> https://github.com/wso2/dcos-apim/pull/2
>
> This includes three Marathon applications for orchestrating one MySQL
> instance, one API-M instance and one API-M Analytics instance. The MySQL
> Marathon application is only intended for evaluation purposes, for
> enterprise deployments a production ready RDBMS would need to be used.
>
> Moving forward, we need to improve this and support API-M deployment
> pattern 1 by adding one more API-M instance and one additional API-M
> Analytics instance. Afterwards similar resources need to be created for all
> five API-M deployment patterns.
>
> Compared to DC/OS v1.7, I noticed following improvements in DC/OS v1.10:
>
>- Container to container communication with overlay networking
>- Service discovery with Minuteman
>- Built-in Marathon runtime
>
> Please try this out and let me know your thoughts.
>
> [1] https://docs.mesosphere.com/1.10/
> [2] https://github.com/dcos/minuteman
>
> Thanks
> Imesh
>
> --
> *Imesh Gunaratne*
> WSO2 Inc: http://wso2.com
> T: +94 11 214 5345 M: +94 77 374 2057 <+94%2077%20374%202057>
> W: https://medium.com/@imesh TW: @imesh
> lean. enterprise. middleware
>
>


-- 
*Imesh Gunaratne*
WSO2 Inc: http://wso2.com
T: +94 11 214 5345 M: +94 77 374 2057
W: https://medium.com/@imesh TW: @imesh
lean. enterprise. middleware
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] [MB] MQTT : support around 100K mqtt connections using WSO2 MB

2018-01-29 Thread Imesh Gunaratne
[+ Hasitha, Sumedha]

On Sun, Jan 28, 2018 at 9:20 PM, Youcef HILEM 
wrote:

> Hi,
> We have a fleet of over 10 android smartphones.
> We evaluate MQTT bokers that can manage more than 100k connections with a
> large number of topics (notification, referential data, operational data,
> ...).
> Could you give me some tips to properly size a cluster in HA and scale with
> a load of over 100K connections?
>
> Thanks
> Youcef HILEM
>
>
>
> --
> Sent from: http://wso2-oxygen-tank.10903.n7.nabble.com/WSO2-
> Architecture-f62919.html
> ___
> Architecture mailing list
> Architecture@wso2.org
> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>



-- 
*Imesh Gunaratne*
WSO2 Inc: http://wso2.com
T: +94 11 214 5345 M: +94 77 374 2057
W: https://medium.com/@imesh TW: @imesh
lean. enterprise. middleware
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] Proposal to Use a Single Set of WSO2 Docker Images for All Container Platforms

2018-01-31 Thread Imesh Gunaratne
On Tue, Jan 30, 2018 at 6:25 AM, Dilan Udara Ariyaratne 
 wrote:

> Hi Imesh,
>
> +1 to this suggestion.
>
> My personal experience is that even our users find it confusing to see
> dockerfile definitions for a product or product profile in multiple
> repositories.
> Thus, it would be quite intuitive to have a single source of truth per
> each product or product profile in the relevant docker- repository
> from this point onward.
> And with the level of generalization that we have reached now in
> dockerfile definitions, my gut feeling is that we can use the same
> definition for any container platform without any specializations even in
> future.
>

Thanks Dilan for your thoughts!

On Tue, Jan 30, 2018 at 11:08 AM, Lakmal Warusawithana 
wrote:
>
>
> On Mon, Jan 29, 2018 at 7:36 AM, Imesh Gunaratne  wrote:
>
>> ​...
>> Regarding the concern of having additional steps for copying files to
>> volumes, let's do a quick POC and see whether we can find a better way to
>> overcome that problem in each platform for evaluation scenarios.
>>
>> @Lakmal: Would you mind sharing your thoughts on this?
>>
> Let's do a POC and then decide. IMO we should not kill optimization by
> doing generalization. From the users' point of view, they need an optimized
> Docker image for the orchestration platform.
>

​Thanks Lakmal! Yes, let's do that! Will keep the platform specific
Dockerfiles for the time being. As we progress with the POC scenarios we
should be able to get a better picture.

Thanks
Imesh
​

>
>
>> Thanks
>> Imesh
>>
>>>
>>>
>>>
>>> On Mon, Jan 22, 2018 at 5:30 PM, Imesh Gunaratne  wrote:
>>>
>>>> On Mon, Jan 22, 2018 at 2:46 PM, Pubudu Gunatilaka 
>>>> wrote:
>>>>
>>>>> Hi Imesh,
>>>>>
>>>>> It is very convenient if we can reuse the docker image. AFAIU if we
>>>>> follow the above approach we can use a single docker image in all the
>>>>> container platforms.
>>>>>
>>>>> One of the drawbacks I see with this approach is that the user has to
>>>>> update the volume mounts with the necessary jar files, JKS files, etc. If
>>>>> any user tries this approach in Kubernetes, he has to add those jar files
>>>>> and binary files to the NFS server (To the volume which holds NFS server
>>>>> data). This affects the installation experience.
>>>>>
>>>>> IMHO, we should minimize the effort in trying out the WSO2 products in
>>>>> Kubernetes or any container platform. Based on the user need, he can 
>>>>> switch
>>>>> to their own deployment approach.
>>>>>
>>>>
>>>> Thanks for the quick response Pubudu! Yes, that's a valid concern. With
>>>> the proposed approach user would need to execute an extra step to copy
>>>> required files to a set of volume mounts before executing the deployment.
>>>> In a production deployment I think that would be acceptable as there will
>>>> be more manual steps involved such as creating databases, setting up CI/CD,
>>>> deployment automation, etc. However, in an evaluation scenario when someone
>>>> is executing a demo it might become an overhead.
>>>>
>>>> I also noticed that kubectl cp command can be used to copy files from a
>>>> local machine to a container. Let's check whether we can use that approach
>>>> to overcome this issue:
>>>> https://kubernetes.io/docs/reference/generated/kubectl/kubec
>>>> tl-commands#cp
>>>>
>>>> On Mon, Jan 22, 2018 at 3:03 PM, Isuru Haththotuwa 
>>>> wrote:
>>>>
>>>>> In API Manager K8s artifacts, what we have followed is not having an
>>>>> image-per-profile method. With the introduction of Config Maps, it has
>>>>> become only two base images - for APIM and Analytics. Its extremely 
>>>>> helpful
>>>>> from the maintenance PoV that we have a single set of Dockerfiles, but has
>>>>> a tradeoff with automation level AFAIU, since the user might have manual
>>>>> steps to perform.
>>>>>
>>>>
>>>> ​Thanks Isuru for the quick response! What I meant by image per profile
>>>> is that, in products like EI and SP, we would need a Docker image per
>>>> profile due to their design.​
>>>>
>>>>>
>>>>> Its would be still possible to to write a wrapper script for a single
>>>

Re: [Architecture] [Deployment] [Containers] An update to WSO2 product Dockerfile generalization

2018-03-08 Thread Imesh Gunaratne
Thanks Chiranga for initiating this discussion!

On Thu, Mar 8, 2018 at 6:41 PM, Chiranga Alwis  wrote:

> ​...
> *Approach*
>
>- Allow users to mount the following files if any configuration
>changes and/or addition of any external libraries [6] are required. The
>following file mounts are expected to be allowed when deploying WSO2
>product Docker images (in general).
>
>
>1. Configuration files (*/repository/conf*)
>2. External libraries (see [6])
>3. Deployment synchronization artifacts (e.g. [7])
>
>
>- An init bash script is introduced as the Dockerfile entrypoint
>instead of the WSO2 product server startup script (*wso2server.sh*).
>The following tasks are performed during the execution of this script.
>
>
>1. Copy the changed configuration files and added external libraries
>mounted (defined in the previous point) to the appropriate directories
>within the product home.
>2. Set any configurations which need to be set up dynamically (e.g. if
>clustering is enabled, dynamically set the Docker container IP as the
>*localMemberHost* under clustering configurations in axis2.xml).
>3. Execute the product server startup script.
>

I think with C4-based products we may not be able to eliminate this step,
because there is a default set of files in the deployment and lib folders,
and we need to be able to mount only the configuration files that have
changes.

> Further, this approach has been adopted by taking into consideration the
> effort to use a single set of WSO2 Docker images across all container
> platforms (see [8]). So far, this approach has enabled us to use the same
> set of WSO2 Docker images (mentioned in [3]) in DC/OS platform, as well.
>

As we discussed, let's first apply this concept to the WSO2 DC/OS resources.
The WSO2 Kubernetes repositories already have a similar model. Later, once we
verify that this model works for all container cluster managers, we can make
it generalized.

Thanks
Imesh

-- 
*Imesh Gunaratne*
WSO2 Inc: http://wso2.com
T: +94 11 214 5345 M: +94 77 374 2057 <+94%2077%20374%202057>
W: https://medium.com/@imesh TW: @imesh
lean. enterprise. middleware
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] APIM setup in k8s

2016-03-25 Thread Imesh Gunaratne
On Fri, Mar 25, 2016 at 1:50 PM, Lakmal Warusawithana 
wrote:

> We should test this against at-least 20 GW nodes before come to a
> conclusion. If this not scale, we have to come up with a generic way to
> update API endpoint in the GW.
>
> For C5 we MUST have to fix this issue because there is no dep-sync in C5.
>
> +1 This is on our road map. Will be doing it soon.

Thanks

>
> On Fri, Mar 25, 2016 at 9:21 AM, Imesh Gunaratne  wrote:
>
>> Hi Lakmal,
>>
>> On Fri, Mar 25, 2016 at 8:33 AM, Lakmal Warusawithana 
>> wrote:
>>
>>>
>>> Do we tested $subject? How we are going to update gateways when new apis
>>> created?
>>>
>>
>> We have $subject here [1]. However, we still have not done any tests on
>> synchronizing the APIs generated by the publisher across the gateway
>> cluster. Our plan is to research using a volume mount between the gateway
>> manager and worker nodes.
>>
>> [1] https://github.com/wso2/kubernetes-artifacts/tree/master/wso2am
>>
>> Thanks
>>
>>>
>>> thanks
>>>
>>> --
>>> Lakmal Warusawithana
>>> Director - Cloud Architecture; WSO2 Inc.
>>> Mobile : +94714289692
>>> Blog : http://lakmalsview.blogspot.com/
>>>
>>>
>>
>>
>> --
>> *Imesh Gunaratne*
>> Senior Technical Lead
>> WSO2 Inc: http://wso2.com
>> T: +94 11 214 5345 M: +94 77 374 2057
>> W: http://imesh.io
>> Lean . Enterprise . Middleware
>>
>>
>
>
> --
> Lakmal Warusawithana
> Director - Cloud Architecture; WSO2 Inc.
> Mobile : +94714289692
> Blog : http://lakmalsview.blogspot.com/
>
>


-- 
*Imesh Gunaratne*
Senior Technical Lead
WSO2 Inc: http://wso2.com
T: +94 11 214 5345 M: +94 77 374 2057
W: http://imesh.io
Lean . Enterprise . Middleware
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


[Architecture] Implementing a Carbon Membership Scheme for Mesos

2016-03-25 Thread Imesh Gunaratne
Hi All,

Please refer to [1] for the initial discussion on deploying WSO2 middleware on
Apache Mesos. It needs a Carbon Membership Scheme for automatically
discovering the Hazelcast-based cluster and allowing the instances to be
scaled in any order without breaking the cluster at any point.

At a high level, such a Carbon Membership Scheme [2] would work as follows:

   - When a Carbon server is bootstrapped, the clustering agent will
   initialize the membership scheme given in the clustering configuration in
   the axis2.xml (relates to Carbon 4.2.x, 4.4.x).
   - Then it will try to talk to an API and get the list of members
   available in the given cluster.
   - The above API response should include the hostname/ip address and the
   Hazelcast port (might be a proxy port) of each member.
   - Thereafter the membership scheme will initialize the Hazelcast
   configuration by using the above member information and let Carbon create
   the Hazelcast instance.
   - According to this model, each Carbon server in the cluster will get
   connected to every other instance. As a result, the cluster can be
   autoscaled in any order.

In Mesos, the Marathon REST API [3] can be used for the above purpose. Once
a Carbon server is deployed in Mesos via Marathon, Marathon schedules tasks
for creating the containers. Each task would represent a container, and each
container would get a dynamic host port allocated for each port it exposes.

Those tasks can be listed using the API method below [4]:

*GET /v2/apps/{appId}/tasks*

HTTP/1.1 200 OK
Content-Type: application/json
Server: Jetty(8.y.z-SNAPSHOT)
Transfer-Encoding: chunked

{
    "tasks": [
        {
            "host": "agouti.local",
            "id": "my-app_1-1396592790353",
            "ports": [
                31336,
                31337
            ],
            "stagedAt": "2014-04-04T06:26:30.355Z",
            "startedAt": "2014-04-04T06:26:30.860Z",
            "version": "2014-04-04T06:26:23.051Z"
        },
        {
            "host": "agouti.local",
            "id": "my-app_0-1396592784349",
            "ports": [
                31382,
                31383
            ],
            "stagedAt": "2014-04-04T06:26:24.351Z",
            "startedAt": "2014-04-04T06:26:24.919Z",
            "version": "2014-04-04T06:26:23.051Z"
        }
    ]
}
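
To make this a bit more concrete, the following is a rough Java sketch of how
such a membership scheme could consume the above response and seed the
Hazelcast TCP/IP join configuration. The class name, the Marathon endpoint URL
and the use of the org.json library are illustrative assumptions only, not the
actual implementation:

import org.json.JSONArray;
import org.json.JSONObject;

import com.hazelcast.config.Config;
import com.hazelcast.config.JoinConfig;
import com.hazelcast.config.TcpIpConfig;

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class MarathonMembershipSchemeSketch {

    // Hypothetical Marathon endpoint for the Carbon application.
    private static final String MARATHON_TASKS_ENDPOINT =
            "http://marathon.example.com:8080/v2/apps/wso2am/tasks";

    // Seeds the Hazelcast TCP/IP join configuration from the Marathon task list.
    public void configure(Config config) throws Exception {
        JoinConfig join = config.getNetworkConfig().getJoin();
        join.getMulticastConfig().setEnabled(false);
        TcpIpConfig tcpIpConfig = join.getTcpIpConfig();
        tcpIpConfig.setEnabled(true);

        JSONObject response = new JSONObject(httpGet(MARATHON_TASKS_ENDPOINT));
        JSONArray tasks = response.getJSONArray("tasks");
        for (int i = 0; i < tasks.length(); i++) {
            JSONObject task = tasks.getJSONObject(i);
            String host = task.getString("host");
            // Assumption: the first dynamic host port maps to the Hazelcast port.
            int hazelcastPort = task.getJSONArray("ports").getInt(0);
            tcpIpConfig.addMember(host + ":" + hazelcastPort);
        }
    }

    private String httpGet(String endpoint) throws Exception {
        HttpURLConnection connection = (HttpURLConnection) new URL(endpoint).openConnection();
        connection.setRequestMethod("GET");
        StringBuilder body = new StringBuilder();
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(connection.getInputStream(), StandardCharsets.UTF_8))) {
            String line;
            while ((line = reader.readLine()) != null) {
                body.append(line);
            }
        }
        return body.toString();
    }
}

A real membership scheme would additionally need to handle retries, the port
mapping convention used for the Hazelcast port, and members re-joining when
tasks are rescheduled.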


[1] [Architecture] Deploying WSO2 Middleware on Mesos
[2]
https://github.com/wso2/kubernetes-artifacts/tree/master/common/kubernetes-membership-scheme
[3] https://mesosphere.github.io/marathon/docs/rest-api.html
[4]
https://mesosphere.github.io/marathon/docs/rest-api.html#get-v2-apps-appid-tasks


Thanks

-- 
*Imesh Gunaratne*
Senior Technical Lead
WSO2 Inc: http://wso2.com
T: +94 11 214 5345 M: +94 77 374 2057
W: http://imesh.io
Lean . Enterprise . Middleware
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] [Docker] Creating a Single Docker Image for a Product

2016-03-31 Thread Imesh Gunaratne
On Thu, Mar 31, 2016 at 2:05 PM, Isuru Haththotuwa  wrote:

>
> [2].
> FROM wso2am:1.10.0
> MAINTAINER isu...@wso2.com
>
> COPY artifacts/ /mnt/wso2-artifacts/carbon-home
>

Shouldn't it be better to use a simple folder structure like
"/usr/local/wso2/wso2/" instead of above? Apache Httpd [3],
Tomcat [4], JBoss [5] Dockerfiles use something similar.

[3]
https://github.com/docker-library/httpd/blob/bc72e42914f671e725d85a01ff037ce87c827f46/2.4/Dockerfile#L6
[4]
https://github.com/docker-library/tomcat/blob/ed98c30c1cd42c53831f64dffa78a0abf7db8e9a/8-jre8/Dockerfile#L3
[5] https://github.com/jboss-dockerfiles/wildfly/blob/master/Dockerfile#L7

Thanks


>
>
> --
> Thanks and Regards,
>
> Isuru H.
> +94 716 358 048* <http://wso2.com/>*
>
>
>


-- 
*Imesh Gunaratne*
Senior Technical Lead
WSO2 Inc: http://wso2.com
T: +94 11 214 5345 M: +94 77 374 2057
W: http://imesh.io
Lean . Enterprise . Middleware
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] [Docker] Creating a Single Docker Image for a Product

2016-03-31 Thread Imesh Gunaratne
On Thu, Mar 31, 2016 at 7:56 PM, Isuru Haththotuwa  wrote:

>
> On Thu, Mar 31, 2016 at 4:47 PM, Imesh Gunaratne  wrote:
>
>>
>> On Thu, Mar 31, 2016 at 2:05 PM, Isuru Haththotuwa 
>> wrote:
>>
>>>
>>> [2].
>>> FROM wso2am:1.10.0
>>> MAINTAINER isu...@wso2.com
>>>
>>> COPY artifacts/ /mnt/wso2-artifacts/carbon-home
>>>
>> We are not using root user, and the relevant user (wso2user) has
> permission to /mnt. Technically we can give permission to /opt as well, but
> IMHO we can have this directory in /mnt. Will change the name to
> /mnt/wso2.
>

+1 Maybe /mnt/wso2/wso2 would be more meaningful.

Thanks

>
>> Shouldn't it better to use a simple folder structure like
>> "/usr/local/wso2/wso2/" instead of above? Apache Httpd [3],
>> Tomcat [4], JBoss [5] Dockerfiles use something similar.
>>
>> [3]
>> https://github.com/docker-library/httpd/blob/bc72e42914f671e725d85a01ff037ce87c827f46/2.4/Dockerfile#L6
>> [3]
>> https://github.com/docker-library/tomcat/blob/ed98c30c1cd42c53831f64dffa78a0abf7db8e9a/8-jre8/Dockerfile#L3
>> [4]
>> https://github.com/jboss-dockerfiles/wildfly/blob/master/Dockerfile#L7
>>
>> Thanks
>>
>>
>>>
>>>
>>> --
>>> Thanks and Regards,
>>>
>>> Isuru H.
>>> +94 716 358 048* <http://wso2.com/>*
>>>
>>>
>>>
>>
>>
>> --
>> *Imesh Gunaratne*
>> Senior Technical Lead
>> WSO2 Inc: http://wso2.com
>> T: +94 11 214 5345 M: +94 77 374 2057
>> W: http://imesh.io
>> Lean . Enterprise . Middleware
>>
>>
>
>
> --
> Thanks and Regards,
>
> Isuru H.
> +94 716 358 048* <http://wso2.com/>*
>
>
>


-- 
*Imesh Gunaratne*
Senior Technical Lead
WSO2 Inc: http://wso2.com
T: +94 11 214 5345 M: +94 77 374 2057
W: http://imesh.io
Lean . Enterprise . Middleware
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] [Docker] Creating a Single Docker Image for a Product

2016-03-31 Thread Imesh Gunaratne
On Fri, Apr 1, 2016 at 10:18 AM, Isuru Haththotuwa  wrote:

>
>
> On Fri, Apr 1, 2016 at 12:45 AM, Imesh Gunaratne  wrote:
>
>>
>>
>> On Thu, Mar 31, 2016 at 7:56 PM, Isuru Haththotuwa 
>> wrote:
>>
>>>
>>> On Thu, Mar 31, 2016 at 4:47 PM, Imesh Gunaratne  wrote:
>>>
>>>>
>>>> On Thu, Mar 31, 2016 at 2:05 PM, Isuru Haththotuwa 
>>>> wrote:
>>>>
>>>>>
>>>>> [2].
>>>>> FROM wso2am:1.10.0
>>>>> MAINTAINER isu...@wso2.com
>>>>>
>>>>> COPY artifacts/ /mnt/wso2-artifacts/carbon-home
>>>>>
>>>> We are not using root user, and the relevant user (wso2user) has
>>> permission to /mnt. Technically we can give permission to /opt as well, but
>>> IMHO we can have this directory in /mnt. Will change the name to
>>> /mnt/wso2.
>>>
>>
>> +1 May be /mnt/wso2/wso2 would be more meaningful.
>>
> IMHO since we run a single product in a container, using only 'wso2' is
> enough.
>

+1


>
>> Thanks
>>
>>>
>>>> Shouldn't it better to use a simple folder structure like
>>>> "/usr/local/wso2/wso2/" instead of above? Apache Httpd [3],
>>>> Tomcat [4], JBoss [5] Dockerfiles use something similar.
>>>>
>>>> [3]
>>>> https://github.com/docker-library/httpd/blob/bc72e42914f671e725d85a01ff037ce87c827f46/2.4/Dockerfile#L6
>>>> [3]
>>>> https://github.com/docker-library/tomcat/blob/ed98c30c1cd42c53831f64dffa78a0abf7db8e9a/8-jre8/Dockerfile#L3
>>>> [4]
>>>> https://github.com/jboss-dockerfiles/wildfly/blob/master/Dockerfile#L7
>>>>
>>>> Thanks
>>>>
>>>>
>>>>>
>>>>>
>>>>> --
>>>>> Thanks and Regards,
>>>>>
>>>>> Isuru H.
>>>>> +94 716 358 048* <http://wso2.com/>*
>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>>> --
>>>> *Imesh Gunaratne*
>>>> Senior Technical Lead
>>>> WSO2 Inc: http://wso2.com
>>>> T: +94 11 214 5345 M: +94 77 374 2057
>>>> W: http://imesh.io
>>>> Lean . Enterprise . Middleware
>>>>
>>>>
>>>
>>>
>>> --
>>> Thanks and Regards,
>>>
>>> Isuru H.
>>> +94 716 358 048* <http://wso2.com/>*
>>>
>>>
>>>
>>
>>
>> --
>> *Imesh Gunaratne*
>> Senior Technical Lead
>> WSO2 Inc: http://wso2.com
>> T: +94 11 214 5345 M: +94 77 374 2057
>> W: http://imesh.io
>> Lean . Enterprise . Middleware
>>
>>
>
>
> --
> Thanks and Regards,
>
> Isuru H.
> +94 716 358 048* <http://wso2.com/>*
>
>
>


-- 
*Imesh Gunaratne*
Senior Technical Lead
WSO2 Inc: http://wso2.com
T: +94 11 214 5345 M: +94 77 374 2057
W: http://imesh.io
Lean . Enterprise . Middleware
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] [C5] Artifact deployment coordination support for C5

2016-04-07 Thread Imesh Gunaratne
Hi Ruwan,

On Thu, Mar 31, 2016 at 3:07 PM, Ruwan Abeykoon  wrote:

> Hi All,
> Do we really want artifact deployment coordination in C5?
> What is preventing us to build the new image with the new version of
> artifacts and let the k8s take care of deployment?
>

You are absolutely correct! We may not do artifact synchronization in C5;
rather, artifacts will get packaged into the containers. This feature is for
monitoring the deployment status of the artifacts. If an existing artifact
needs to be updated or new artifacts need to be added, a new container
image needs to be created. Then a rollout should be triggered (depending
on the container cluster management system used).

Thanks

>
> Cheers,
> Ruwan
>
> On Wed, Mar 30, 2016 at 2:54 PM, Isuru Haththotuwa 
> wrote:
>
>> Hi Kasun,
>>
>> On Wed, Mar 23, 2016 at 10:45 AM, KasunG Gajasinghe 
>> wrote:
>>
>>> Hi,
>>>
>>> Given several issues we discovered with automatic artifact
>>> synchronization with DepSync in C4, we have discussed how to approach this
>>> problem in C5.
>>>
>>> We are thinking of not doing the automated artifact synchronization in
>>> C5. Rather, users should use their own mechanism to synchronize the
>>> artifacts across a cluster. Common approaches are RSync as a cron job and
>>> shell scripts.
>>>
>>> But, it is vital to know the artifact deployment status of the nodes in
>>> the entire cluster from a central place. For that, we are providing this
>>> deployment coordination feature. There will be two ways to use this.
>>>
>>> 1. JMS based publishing - the deployment status will be published by
>>> each node to a jms topic/queue
>>>
>>> 2. Log based publishing - publish the logs by using a syslog appender
>>> [1] or our own custom appender to a central location.
>>>
>> Both are push mechanisms, IMHO we would need an API to check the status
>> of a deployed artifacts on demand, WDYT?
>>
>>>
>>> The log publishing may not be limited to just the deployment
>>> coordination. In a containerized deployment, the carbon products will run
>>> in disposable containers. But sometimes, the logs need to be backed up for
>>> later reference. This will help with that.
>>>
>>> Any thoughts on this matter?
>>>
>>> [1]
>>> https://logging.apache.org/log4j/2.x/manual/appenders.html#SyslogAppender
>>>
>>> Thanks,
>>> KasunG
>>>
>>>
>>>
>>> --
>>> ~~--~~
>>> Sending this mail via my phone. Do excuse any typo or short replies
>>>
>>> ___
>>> Architecture mailing list
>>> Architecture@wso2.org
>>> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>>>
>>>
>>
>>
>> --
>> Thanks and Regards,
>>
>> Isuru H.
>> +94 716 358 048* <http://wso2.com/>*
>>
>>
>>
>> ___
>> Architecture mailing list
>> Architecture@wso2.org
>> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>>
>>
>
>
> --
>
> *Ruwan Abeykoon*
> *Architect,*
> *WSO2, Inc. http://wso2.com <http://wso2.com/> *
> *lean.enterprise.middleware.*
>
> email: ruw...@wso2.com
>
> ___
> Architecture mailing list
> Architecture@wso2.org
> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>
>


-- 
*Imesh Gunaratne*
Senior Technical Lead
WSO2 Inc: http://wso2.com
T: +94 11 214 5345 M: +94 77 374 2057
W: http://imesh.io
Lean . Enterprise . Middleware
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] [C5] Must Inject CarbonServerInfo when writing OSGi Test Cases

2016-04-07 Thread Imesh Gunaratne
Hi Aruna,

On Thu, Apr 7, 2016 at 6:21 PM, Aruna Karunarathna  wrote:

> Hi Devs,
>
> When writing OSGi Test Cases, Please Inject the CarbonServerInfo service
> [1]. Otherwise the container wont start properly and the test cases will
> fail.
>
> After injecting the service it will guarantee that the server will fully
> start, before running the test cases. e.g. [2]
>

Would you mind mentioning what CarbonServerInfo does and why containers
would not start without it?

Thanks

-- 
*Imesh Gunaratne*
Senior Technical Lead
WSO2 Inc: http://wso2.com
T: +94 11 214 5345 M: +94 77 374 2057
W: http://imesh.io
Lean . Enterprise . Middleware
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] [C5] Artifact deployment coordination support for C5

2016-04-07 Thread Imesh Gunaratne
On Thu, Apr 7, 2016 at 11:53 PM, Imesh Gunaratne  wrote:

>
> Hi Ruwan,
>
> On Thu, Mar 31, 2016 at 3:07 PM, Ruwan Abeykoon  wrote:
>
>> Hi All,
>> Do we really want artifact deployment coordination in C5?
>> What is preventing us to build the new image with the new version of
>> artifacts and let the k8s take care of deployment?
>>
>
> You are absolutely correct! We may not do artifact synchronization in C5
> rather artifacts will get packaged into the containers.
>

I'm sorry, C5 will also support non-containerized deployments (VMs, physical
machines); still, artifact synchronization will not be handled by Carbon.

On Wed, Apr 6, 2016 at 8:03 PM, Akila Ravihansa Perera 
 wrote:
>
>
> I've few concerns regarding artifact deployment coordination
>  - Artifact versioning support. This is important to ensure consistency
> across a cluster
>

Indeed, but it may not relate to this feature, I guess.


>  - REST API to query the status. I'd rather go ahead with a REST API
> before a JMS based implementation. IMO it's much simpler and easy to use.
>

A REST API might be needed in a different context, maybe in a central
monitoring server. In this context the design is to let servers publish
their status to a central server. Otherwise it might not be feasible for a
client to talk to each and every server and prepare the aggregated view.
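
For illustration only, a minimal sketch of what publishing such a status to a
topic could look like is given below; the topic name, the JSON message format
and the use of ActiveMQ as the broker are assumptions made purely for this
example and not part of the actual design:

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.jms.Topic;

import org.apache.activemq.ActiveMQConnectionFactory;

public class ArtifactStatusPublisherSketch {

    public void publishStatus(String nodeId, String artifactName, String status) throws Exception {
        // ActiveMQ is used here only as an example broker; any JMS broker would do.
        ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        try {
            connection.start();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Topic topic = session.createTopic("artifact.deployment.status"); // hypothetical topic
            MessageProducer producer = session.createProducer(topic);

            // The central server would subscribe to this topic and aggregate per node.
            TextMessage message = session.createTextMessage(
                    "{\"node\":\"" + nodeId + "\",\"artifact\":\"" + artifactName
                            + "\",\"status\":\"" + status + "\"}");
            producer.send(message);
        } finally {
            connection.close();
        }
    }
}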


>  - Why don't we provide a REST API to deploy artifacts rather than copying
> files (whenever applicable)? We can immediately notify the client (via HTTP
> response status) whether artifact deployment was successful.
>

Might not be needed for container based deployments.

Thanks


> This feature is for monitoring the deployment status of the artifacts. If
> an existing artifact needs to be updated or new artifacts needs to be added
> a new container image needs to be created. Then a rollout should be
> triggerred (depending on the container cluster management system used).
>
> Thanks
>
>>
>> Cheers,
>> Ruwan
>>
>> On Wed, Mar 30, 2016 at 2:54 PM, Isuru Haththotuwa 
>> wrote:
>>
>>> Hi Kasun,
>>>
>>> On Wed, Mar 23, 2016 at 10:45 AM, KasunG Gajasinghe 
>>> wrote:
>>>
>>>> Hi,
>>>>
>>>> Given several issues we discovered with automatic artifact
>>>> synchronization with DepSync in C4, we have discussed how to approach this
>>>> problem in C5.
>>>>
>>>> We are thinking of not doing the automated artifact synchronization in
>>>> C5. Rather, users should use their own mechanism to synchronize the
>>>> artifacts across a cluster. Common approaches are RSync as a cron job and
>>>> shell scripts.
>>>>
>>>> But, it is vital to know the artifact deployment status of the nodes in
>>>> the entire cluster from a central place. For that, we are providing this
>>>> deployment coordination feature. There will be two ways to use this.
>>>>
>>>> 1. JMS based publishing - the deployment status will be published by
>>>> each node to a jms topic/queue
>>>>
>>>> 2. Log based publishing - publish the logs by using a syslog appender
>>>> [1] or our own custom appender to a central location.
>>>>
>>> Both are push mechanisms, IMHO we would need an API to check the status
>>> of a deployed artifacts on demand, WDYT?
>>>
>>>>
>>>> The log publishing may not be limited to just the deployment
>>>> coordination. In a containerized deployment, the carbon products will run
>>>> in disposable containers. But sometimes, the logs need to be backed up for
>>>> later reference. This will help with that.
>>>>
>>>> Any thoughts on this matter?
>>>>
>>>> [1]
>>>> https://logging.apache.org/log4j/2.x/manual/appenders.html#SyslogAppender
>>>>
>>>> Thanks,
>>>> KasunG
>>>>
>>>>
>>>>
>>>> --
>>>> ~~--~~
>>>> Sending this mail via my phone. Do excuse any typo or short replies
>>>>
>>>> ___
>>>> Architecture mailing list
>>>> Architecture@wso2.org
>>>> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>>>>
>>>>
>>>
>>>
>>> --
>>> Thanks and Regards,
>>>
>>> Isuru H.
>>> +94 716 358 048* <http://wso2.com/>*
>>>
>>>
>>>
>>> _

Re: [Architecture] [C5] Must Inject CarbonServerInfo when writing OSGi Test Cases

2016-04-08 Thread Imesh Gunaratne
Thanks Sameera & Aruna!!

On Fri, Apr 8, 2016 at 10:20 AM, Aruna Karunarathna  wrote:

> On Fri, Apr 8, 2016 at 7:07 AM, Sameera Jayasoma  wrote:
>
>>
>>
>> On Fri, Apr 8, 2016 at 12:13 AM, Imesh Gunaratne  wrote:
>>
>>> Hi Aruna,
>>>
>>> On Thu, Apr 7, 2016 at 6:21 PM, Aruna Karunarathna 
>>> wrote:
>>>
>>>> Hi Devs,
>>>>
>>>> When writing OSGi Test Cases, Please Inject the CarbonServerInfo
>>>> service [1]. Otherwise the container wont start properly and the test cases
>>>> will fail.
>>>>
>>>> After injecting the service it will guarantee that the server will
>>>> fully start, before running the test cases. e.g. [2]
>>>>
>>>
>>> Would you mind mentioning what CarbonServerInfo does and why containers
>>> would not start without it?
>>>
>>
> Hi Imesh,
>
> Sorry If I have confused you, as Sameera mentioned I was talking about the
> PAX Exam test cases for OSGi runtimes. :)
>
> @Kishanthan
> Sure will do the necessary to get it added to the documentation.
>
>>
>> Aruna is talking about Pax Exam test containers (OSGi runtimes) :)
>>
>>>
>>> Thanks
>>>
>>> --
>>> *Imesh Gunaratne*
>>> Senior Technical Lead
>>> WSO2 Inc: http://wso2.com
>>> T: +94 11 214 5345 M: +94 77 374 2057
>>> W: http://imesh.io
>>> Lean . Enterprise . Middleware
>>>
>>>
>>
>>
>> --
>> Sameera Jayasoma,
>> Software Architect,
>>
>> WSO2, Inc. (http://wso2.com)
>> email: same...@wso2.com
>> blog: http://blog.sameera.org
>> twitter: https://twitter.com/sameerajayasoma
>> flickr: http://www.flickr.com/photos/sameera-jayasoma/collections
>> Mobile: 0094776364456
>>
>> Lean . Enterprise . Middleware
>>
>>
>> ___
>> Architecture mailing list
>> Architecture@wso2.org
>> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>>
>>
>
>
> --
>
> *Aruna Sujith Karunarathna *
> WSO2, Inc | lean. enterprise. middleware.
> #20, Palm Grove, Colombo 03, Sri Lanka
> Mobile: +94 71 9040362 | Work: +94 112145345
> Email: ar...@wso2.com | Web: www.wso2.com
>
>



-- 
*Imesh Gunaratne*
Senior Technical Lead
WSO2 Inc: http://wso2.com
T: +94 11 214 5345 M: +94 77 374 2057
W: http://imesh.io
Lean . Enterprise . Middleware
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] [C5] Artifact deployment coordination support for C5

2016-04-14 Thread Imesh Gunaratne
On Thu, Apr 14, 2016 at 1:35 AM, Manuranga Perera  wrote:

> If an existing artifact needs to be updated or new artifacts needs to be
>> added a new container image needs to be created.
>
> In this case, why can't we ask from Kub how many pods with new artifact
> has been spun up? Why does this have to be updated at carbon kernel level
> via JMS?
>

Carbon may not handle the rollout, but it will need to inform an external
entity of the status of the deployed artifacts. K8S will only know about the
container image that was used for the deployment; it will have no
information on the artifacts deployed in the Carbon server.

>
>
> On Thu, Apr 7, 2016 at 2:38 PM, Imesh Gunaratne  wrote:
>
>>
>>
>> On Thu, Apr 7, 2016 at 11:53 PM, Imesh Gunaratne  wrote:
>>
>>>
>>> Hi Ruwan,
>>>
>>> On Thu, Mar 31, 2016 at 3:07 PM, Ruwan Abeykoon  wrote:
>>>
>>>> Hi All,
>>>> Do we really want artifact deployment coordination in C5?
>>>> What is preventing us to build the new image with the new version of
>>>> artifacts and let the k8s take care of deployment?
>>>>
>>>
>>> You are absolutely correct! We may not do artifact synchronization in C5
>>> rather artifacts will get packaged into the containers.
>>>
>>
>> I'm sorry C5 will also support none containerized deployments (VM,
>> physical machines), still artifact synchronization will not be handled by
>> Carbon.
>>
>> On Wed, Apr 6, 2016 at 8:03 PM, Akila Ravihansa Perera <
>> raviha...@wso2.com> wrote:
>>>
>>>
>>> I've few concerns regarding artifact deployment coordination
>>>  - Artifact versioning support. This is important to ensure consistency
>>> across a cluster
>>>
>>
>> Indded, but it may not relate to this feature I guess.
>>
>>
>>>  - REST API to query the status. I'd rather go ahead with a REST API
>>> before a JMS based implementation. IMO it's much simpler and easy to use.
>>>
>>
>> A REST API might be needed in a different context, may be in a central
>> monitoring server. In this context the design is to let servers publish
>> their status to a central server. Otherwise it might not be feasible for a
>> client to talk to each and every server and prepare the aggregated view.
>>
>>
>>>  - Why don't we provide a REST API to deploy artifacts rather than
>>> copying files (whenever applicable)? We can immediately notify the client
>>> (via HTTP response status) whether artifact deployment was successful.
>>>
>>
>> Might not be needed for container based deployments.
>>
>> Thanks
>>
>>
>>> This feature is for monitoring the deployment status of the artifacts.
>>> If an existing artifact needs to be updated or new artifacts needs to be
>>> added a new container image needs to be created. Then a rollout should be
>>> triggerred (depending on the container cluster management system used).
>>>
>>> Thanks
>>>
>>>>
>>>> Cheers,
>>>> Ruwan
>>>>
>>>> On Wed, Mar 30, 2016 at 2:54 PM, Isuru Haththotuwa 
>>>> wrote:
>>>>
>>>>> Hi Kasun,
>>>>>
>>>>> On Wed, Mar 23, 2016 at 10:45 AM, KasunG Gajasinghe 
>>>>> wrote:
>>>>>
>>>>>> Hi,
>>>>>>
>>>>>> Given several issues we discovered with automatic artifact
>>>>>> synchronization with DepSync in C4, we have discussed how to approach 
>>>>>> this
>>>>>> problem in C5.
>>>>>>
>>>>>> We are thinking of not doing the automated artifact synchronization
>>>>>> in C5. Rather, users should use their own mechanism to synchronize the
>>>>>> artifacts across a cluster. Common approaches are RSync as a cron job and
>>>>>> shell scripts.
>>>>>>
>>>>>> But, it is vital to know the artifact deployment status of the nodes
>>>>>> in the entire cluster from a central place. For that, we are providing 
>>>>>> this
>>>>>> deployment coordination feature. There will be two ways to use this.
>>>>>>
>>>>>> 1. JMS based publishing - the deployment status will be published by
>>>>>> each node to a jms topic/queue
>>>>>>
>>>>>> 2. Log based publishing - publish the logs by using

Re: [Architecture] [C5] Artifact deployment coordination support for C5

2016-04-17 Thread Imesh Gunaratne
On Thu, Apr 14, 2016 at 10:54 PM, Manuranga Perera  wrote:

>  K8S will only know about the container image that was used for the
>> deployment
>
> Ok, But form the image, don't we know what are the artifacts (since
> immutable servers)?
>

We know that, but I don't think we can assume that all the artifacts found
in the image will get deployed properly. We may need to expose the actual
status of the artifacts via an API.
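
For example, such a status API could be sketched roughly as follows (using
JAX-RS purely for illustration; the resource path and payload format are
assumptions, not an agreed design):

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;

// Purely illustrative; a real implementation would read from the Carbon deployment engine.
@Path("/artifacts")
public class ArtifactStatusResourceSketch {

    @GET
    @Path("/status")
    @Produces(MediaType.APPLICATION_JSON)
    public Response getArtifactStatus() {
        // Hard-coded payload shown only to illustrate the response shape.
        String payload = "{\"artifacts\":[{\"name\":\"HelloAPI\",\"status\":\"DEPLOYED\"}]}";
        return Response.ok(payload).build();
    }
}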

On Fri, Apr 15, 2016 at 1:44 AM, Susankha Nirmala  wrote:

> Why we can't copy new artifacts (or updated  artifacts) to the deployment
> directory of the carbon servers, running on the containers?
>
> That's exactly what we do.


> On Thu, Apr 14, 2016 at 1:18 PM, Frank Leymann  wrote:
>
>> Sorry for jumping in so late in the thread:  is technology like HEAT/HOT
>> (OpenStack) or TOSCA (OASIS) too encompassing? I am happy to provide on
>> overview of their features...
>>
>> I am not suggesting to use the corresponding implementations (they have
>> their pros/cons) but we may learn from the concepts behind them.
>>
>>
>> Best regards,
>> Frank
>>
>> 2016-04-14 12:06 GMT+02:00 Imesh Gunaratne :
>>
>>>
>>>
>>> On Thu, Apr 14, 2016 at 1:35 AM, Manuranga Perera  wrote:
>>>
>>>> If an existing artifact needs to be updated or new artifacts needs to
>>>>> be added a new container image needs to be created.
>>>>
>>>> In this case, why can't we ask from Kub how many pods with new artifact
>>>> has been spun up? Why does this have to be updated at carbon kernel level
>>>> via JMS?
>>>>
>>>
>>> Carbon may not handle the rollout but it will need to inform an external
>>> entity the status of the deployed artifacts. K8S will only know about the
>>> container image that was used for the deployment, it will have no
>>> information on the artifacts deployed in the Carbon server.
>>>
>>>>
>>>>
>>>> On Thu, Apr 7, 2016 at 2:38 PM, Imesh Gunaratne  wrote:
>>>>
>>>>>
>>>>>
>>>>> On Thu, Apr 7, 2016 at 11:53 PM, Imesh Gunaratne 
>>>>> wrote:
>>>>>
>>>>>>
>>>>>> Hi Ruwan,
>>>>>>
>>>>>> On Thu, Mar 31, 2016 at 3:07 PM, Ruwan Abeykoon 
>>>>>> wrote:
>>>>>>
>>>>>>> Hi All,
>>>>>>> Do we really want artifact deployment coordination in C5?
>>>>>>> What is preventing us to build the new image with the new version of
>>>>>>> artifacts and let the k8s take care of deployment?
>>>>>>>
>>>>>>
>>>>>> You are absolutely correct! We may not do artifact synchronization in
>>>>>> C5 rather artifacts will get packaged into the containers.
>>>>>>
>>>>>
>>>>> I'm sorry C5 will also support none containerized deployments (VM,
>>>>> physical machines), still artifact synchronization will not be handled by
>>>>> Carbon.
>>>>>
>>>>> On Wed, Apr 6, 2016 at 8:03 PM, Akila Ravihansa Perera <
>>>>> raviha...@wso2.com> wrote:
>>>>>>
>>>>>>
>>>>>> I've few concerns regarding artifact deployment coordination
>>>>>>  - Artifact versioning support. This is important to ensure
>>>>>> consistency across a cluster
>>>>>>
>>>>>
>>>>> Indded, but it may not relate to this feature I guess.
>>>>>
>>>>>
>>>>>>  - REST API to query the status. I'd rather go ahead with a REST API
>>>>>> before a JMS based implementation. IMO it's much simpler and easy to use.
>>>>>>
>>>>>
>>>>> A REST API might be needed in a different context, may be in a central
>>>>> monitoring server. In this context the design is to let servers publish
>>>>> their status to a central server. Otherwise it might not be feasible for a
>>>>> client to talk to each and every server and prepare the aggregated view.
>>>>>
>>>>>
>>>>>>  - Why don't we provide a REST API to deploy artifacts rather than
>>>>>> copying files (whenever applicable)? We can immediately notify the client
>>>>>> (via HTTP response status) whether artifact deployment was suc

Re: [Architecture] [C5] Artifact deployment coordination support for C5

2016-04-18 Thread Imesh Gunaratne
On Mon, Apr 18, 2016 at 9:50 AM, Susankha Nirmala  wrote:

>
> On Sun, Apr 17, 2016 at 9:14 PM, Imesh Gunaratne  wrote:
>>
>>
>> On Fri, Apr 15, 2016 at 1:44 AM, Susankha Nirmala 
>>  wrote:
>>
>>> Why we can't copy new artifacts (or updated  artifacts) to the
>>> deployment directory of the carbon servers, running on the containers?
>>>
>>> That's exactly what we do.
>>
>
> Without recreating the docker image with new or updated artifacts
> (just copy the artifacts to the deployment directory of the running server)?
>

I did not get your question, can you please rephrase?

Thanks

>
>
>>
>>
>>> On Thu, Apr 14, 2016 at 1:18 PM, Frank Leymann  wrote:
>>>
>>>> Sorry for jumping in so late in the thread:  is technology like
>>>> HEAT/HOT (OpenStack) or TOSCA (OASIS) too encompassing? I am happy to
>>>> provide on overview of their features...
>>>>
>>>> I am not suggesting to use the corresponding implementations (they have
>>>> their pros/cons) but we may learn from the concepts behind them.
>>>>
>>>>
>>>> Best regards,
>>>> Frank
>>>>
>>>> 2016-04-14 12:06 GMT+02:00 Imesh Gunaratne :
>>>>
>>>>>
>>>>>
>>>>> On Thu, Apr 14, 2016 at 1:35 AM, Manuranga Perera 
>>>>> wrote:
>>>>>
>>>>>> If an existing artifact needs to be updated or new artifacts needs
>>>>>>> to be added a new container image needs to be created.
>>>>>>
>>>>>> In this case, why can't we ask from Kub how many pods with new
>>>>>> artifact has been spun up? Why does this have to be updated at carbon
>>>>>> kernel level via JMS?
>>>>>>
>>>>>
>>>>> Carbon may not handle the rollout but it will need to inform an
>>>>> external entity the status of the deployed artifacts. K8S will only know
>>>>> about the container image that was used for the deployment, it will have 
>>>>> no
>>>>> information on the artifacts deployed in the Carbon server.
>>>>>
>>>>>>
>>>>>>
>>>>>> On Thu, Apr 7, 2016 at 2:38 PM, Imesh Gunaratne 
>>>>>> wrote:
>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> On Thu, Apr 7, 2016 at 11:53 PM, Imesh Gunaratne 
>>>>>>> wrote:
>>>>>>>
>>>>>>>>
>>>>>>>> Hi Ruwan,
>>>>>>>>
>>>>>>>> On Thu, Mar 31, 2016 at 3:07 PM, Ruwan Abeykoon 
>>>>>>>> wrote:
>>>>>>>>
>>>>>>>>> Hi All,
>>>>>>>>> Do we really want artifact deployment coordination in C5?
>>>>>>>>> What is preventing us to build the new image with the new version
>>>>>>>>> of artifacts and let the k8s take care of deployment?
>>>>>>>>>
>>>>>>>>
>>>>>>>> You are absolutely correct! We may not do artifact synchronization
>>>>>>>> in C5 rather artifacts will get packaged into the containers.
>>>>>>>>
>>>>>>>
>>>>>>> I'm sorry C5 will also support none containerized deployments (VM,
>>>>>>> physical machines), still artifact synchronization will not be handled 
>>>>>>> by
>>>>>>> Carbon.
>>>>>>>
>>>>>>> On Wed, Apr 6, 2016 at 8:03 PM, Akila Ravihansa Perera <
>>>>>>> raviha...@wso2.com> wrote:
>>>>>>>>
>>>>>>>>
>>>>>>>> I've few concerns regarding artifact deployment coordination
>>>>>>>>  - Artifact versioning support. This is important to ensure
>>>>>>>> consistency across a cluster
>>>>>>>>
>>>>>>>
>>>>>>> Indded, but it may not relate to this feature I guess.
>>>>>>>
>>>>>>>
>>>>>>>>  - REST API to query the status. I'd rather go ahead with a REST
>>>>>>>> API before a JMS based implementation. IMO it's much simpler and easy 
>>>>>>>> to
>>>>>>

Re: [Architecture] Adding RNN to WSO2 Machine Learner

2016-04-21 Thread Imesh Gunaratne
>>>>>>>>> We are basically trying to implement a program to make sure that
>>>>>>>>> the deeplearning4j library we are using is compatible with apache 
>>>>>>>>> spark
>>>>>>>>> pipeline. And also we are trying to demonstrate all the machine 
>>>>>>>>> learning
>>>>>>>>> steps with that program.
>>>>>>>>>
>>>>>>>>> We are now using aclImdb sentiment analysis data set to verify the
>>>>>>>>> accuracy of the RNN model we create.
>>>>>>>>>
>>>>>>>>> Thanks
>>>>>>>>> Thamali
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On Wed, Mar 2, 2016 at 10:38 AM, Srinath Perera 
>>>>>>>>> wrote:
>>>>>>>>>
>>>>>>>>>> Hi Thamali,
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>1. RNN can do both classification and predict next value. Are
>>>>>>>>>>we trying to do both?
>>>>>>>>>>2. When Upul played with it, he had trouble getting
>>>>>>>>>>deeplearning4j implementation work with predict next value 
>>>>>>>>>> scenario. Is it
>>>>>>>>>>fixed?
>>>>>>>>>>3. What are the data sets we will use to verify the accuracy
>>>>>>>>>>    of RNN after integration?
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> --Srinath
>>>>>>>>>>
>>>>>>>>>> On Tue, Mar 1, 2016 at 3:44 PM, Thamali Wijewardhana <
>>>>>>>>>> tham...@wso2.com> wrote:
>>>>>>>>>>
>>>>>>>>>>> Hi,
>>>>>>>>>>>
>>>>>>>>>>> Currently we are working on a project to add Recurrent Neural
>>>>>>>>>>> Network(RNN) algorithm to machine learner. RNN is one of deep 
>>>>>>>>>>> learning
>>>>>>>>>>> algorithms with record breaking accuracy. For more information on 
>>>>>>>>>>> RNN
>>>>>>>>>>> please refer link[1].
>>>>>>>>>>>
>>>>>>>>>>> We have decided to use deeplearning4j which is an open source
>>>>>>>>>>> deep learning library scalable on spark and Hadoop.
>>>>>>>>>>>
>>>>>>>>>>> Since there is a plan to add spark pipeline to machine Learner,
>>>>>>>>>>> we have decided to use spark pipeline concept to our project.
>>>>>>>>>>>
>>>>>>>>>>> I have designed an architecture for the RNN implementation.
>>>>>>>>>>>
>>>>>>>>>>> This architecture is developed to be compatible with spark
>>>>>>>>>>> pipeline.
>>>>>>>>>>>
>>>>>>>>>>> Data set is taken in csv format and then it is converted to
>>>>>>>>>>> spark data frame since apache spark works mostly with data frames.
>>>>>>>>>>>
>>>>>>>>>>> Next step is a transformer which is needed to tokenize the
>>>>>>>>>>> sequential data. A tokenizer is basically used for take a sequence 
>>>>>>>>>>> of data
>>>>>>>>>>> and break it into individual units. For example, it can be used to 
>>>>>>>>>>> break
>>>>>>>>>>> the words in a sentence to words.
>>>>>>>>>>>
>>>>>>>>>>> Next step is again a transformer used to converts tokens to
>>>>>>>>>>> vectors. This must be done because the features should be added to 
>>>>>>>>>>> spark
>>>>>>>>>>> pipeline in org.apache.spark.mllib.linlag.VectorUDT format.
>>>>>>>>>>>
>>>>>>>>>>> Next, the transformed data are fed to the data set iterator.
>>>>>>>>>>> This is an object of a class which implement
>>>>>>>>>>> org.deeplearning4j.datasets.iterator.DataSetIterator. The dataset 
>>>>>>>>>>> iterator
>>>>>>>>>>> traverses through a data set and prepares data for neural networks.
>>>>>>>>>>>
>>>>>>>>>>> Next component is the RNN algorithm model which is an estimator.
>>>>>>>>>>> The iterated data from data set iterator is fed to RNN and a model 
>>>>>>>>>>> is
>>>>>>>>>>> generated. Then this model can be used for predictions.
>>>>>>>>>>>
>>>>>>>>>>> We have decided to complete this project in two steps :
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>-
>>>>>>>>>>>
>>>>>>>>>>>First create a spark pipeline program containing the steps
>>>>>>>>>>>in machine learner(uploading dataset, generate model, 
>>>>>>>>>>> calculating accuracy
>>>>>>>>>>>and prediction) and check whether the project is feasible.
>>>>>>>>>>>-
>>>>>>>>>>>
>>>>>>>>>>>Next add the algorithm to ML
>>>>>>>>>>>
>>>>>>>>>>> Currently we have almost completed the first step and now we are
>>>>>>>>>>> collecting more data and tuning for hyper parameters.
>>>>>>>>>>>
>>>>>>>>>>> [1]
>>>>>>>>>>> https://docs.google.com/document/d/1edg1fdKCYR7-B1oOLy2kon179GSs6x2Zx9oSRDn_NEU/edit
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> ​
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> --
>>>>>>>>>> 
>>>>>>>>>> Srinath Perera, Ph.D.
>>>>>>>>>>http://people.apache.org/~hemapani/
>>>>>>>>>>http://srinathsview.blogspot.com/
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>
>>>>>>
>>>>>> --
>>>>>> 
>>>>>> Blog: http://srinathsview.blogspot.com twitter:@srinath_perera
>>>>>> Site: http://home.apache.org/~hemapani/
>>>>>> Photos: http://www.flickr.com/photos/hemapani/
>>>>>> Phone: 0772360902
>>>>>>
>>>>>
>>>>>
>>>>
>>>
>>>
>>> --
>>> 
>>> Blog: http://srinathsview.blogspot.com twitter:@srinath_perera
>>> Site: http://home.apache.org/~hemapani/
>>> Photos: http://www.flickr.com/photos/hemapani/
>>> Phone: 0772360902
>>>
>>
>>
>
> ___
> Architecture mailing list
> Architecture@wso2.org
> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>
>


-- 
*Imesh Gunaratne*
Senior Technical Lead
WSO2 Inc: http://wso2.com
T: +94 11 214 5345 M: +94 77 374 2057
W: http://imesh.io
Lean . Enterprise . Middleware
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] Adding RNN to WSO2 Machine Learner

2016-04-21 Thread Imesh Gunaratne
Hi Thamali,

One other point, people outside WSO2 might not be able to access the Google
Docs you have shared in this thread. You might need to export them as PDF
and share.

Thanks

On Thu, Apr 21, 2016 at 11:53 PM, Imesh Gunaratne  wrote:

> Hi Thamali,
>
> It might be better if you can share the artifacts you used to execute
> these tests in a public location. May be including a README.md file with
> the steps to be followed.
>
> Thanks
>
> On Thu, Apr 21, 2016 at 6:03 PM, Thamali Wijewardhana 
> wrote:
>
>> Hi,
>>
>> I have completed writing the article[1] containing the comparison between
>> the deeplearning4j library and Keras library considering Recurrent Neural
>> network(RNN) algorithm.
>> I also have found out the reasons for low performance of Deeplearning4j
>> library using Java Flight Recorder(JFR) and Flame Graphs and included in
>> the article.
>>
>> [1]
>> https://docs.google.com/a/wso2.com/document/d/1CGq1y5QBzW6EaHyf-UqAiatxLumb6lo_mRLjYZWD18o/edit?usp=sharing
>>
>> Thanks
>>
>>
>> On Fri, Apr 8, 2016 at 7:20 PM, Thamali Wijewardhana 
>> wrote:
>>
>>> Hi,
>>>
>>> I have used a dataset with 25000 rows and the size is 80 MB.
>>>
>>> The link to the dataset is:
>>>
>>> http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz
>>>
>>>
>>>
>>>
>>> On Fri, Apr 8, 2016 at 3:07 PM, Srinath Perera  wrote:
>>>
>>>> Thamali, how big is the data set you are using?  ( give me a link to
>>>> the data set as well).
>>>>
>>>> Nirmal, shall we compare the accuracy of RNN vs. Upul's rolling window
>>>> method?
>>>>
>>>> --Srinath
>>>>
>>>> On Fri, Apr 8, 2016 at 9:23 AM, Thamali Wijewardhana 
>>>> wrote:
>>>>
>>>>> Hi,
>>>>>
>>>>> I run the RNN algorithm using deeplearning4j library and the Keras
>>>>> python library. The dataset, hyper parameters, network architecture and 
>>>>> the
>>>>> hardware platform are the same. Given below is the time comparison
>>>>>
>>>>> Deeplearning4j library-40 minutes per 1 epoch
>>>>> Keras library- 4 minutes per 1 epoch
>>>>>
>>>>> I also compared the accuracies[1]. The deeplearning4j library gives a
>>>>> low accuracy compared to Keras library.
>>>>>
>>>>> [1]
>>>>> https://docs.google.com/spreadsheets/d/1-EvC1P7N90k1S_Ly6xVcFlEEKprh7r41Yk8aI6DiSaw/edit#gid=1050346562
>>>>>
>>>>> Thanks
>>>>>
>>>>>
>>>>>
>>>>> On Fri, Apr 1, 2016 at 10:12 AM, Thamali Wijewardhana <
>>>>> tham...@wso2.com> wrote:
>>>>>
>>>>>> Hi,
>>>>>> I have organized a review on Monday (4th  of April).
>>>>>>
>>>>>> Thanks
>>>>>>
>>>>>> On Thu, Mar 31, 2016 at 3:21 PM, Srinath Perera 
>>>>>> wrote:
>>>>>>
>>>>>>> Please setup a review. Shall we do it monday?
>>>>>>>
>>>>>>> On Thu, Mar 31, 2016 at 2:15 PM, Thamali Wijewardhana <
>>>>>>> tham...@wso2.com> wrote:
>>>>>>>
>>>>>>>> Hi,
>>>>>>>>
>>>>>>>> we have created a spark program to prove the feasibility of adding
>>>>>>>> the RNN algorithm to machine learner.
>>>>>>>> This program demonstrates all the steps in machine learner:
>>>>>>>>
>>>>>>>> Uploading a dataset
>>>>>>>>
>>>>>>>> Selecting the hyper parameters for the model
>>>>>>>>
>>>>>>>> Creating a RNN model using data and training the model
>>>>>>>>
>>>>>>>> Calculating the accuracy of the model
>>>>>>>>
>>>>>>>> Saving the model(As a serialization object)
>>>>>>>>
>>>>>>>> predicting using the model
>>>>>>>>
>>>>>>>> This program is based on deeplearning4j and apache spark pipeline.
>>>>>>>> Deeplearning4j was used as the deep learning library for recurrent 
>>>>>>>> neural
>>>>>>>> ne

Re: [Architecture] [Kubernetes] Using MySQL as the Default DB for WSO2 K8s Artifacts

2016-04-26 Thread Imesh Gunaratne
[Moving to Architecture]

On Tue, Apr 26, 2016 at 1:20 PM, Isuru Haththotuwa  wrote:

> Hi Devs,
>
> We can do $subject in the Kubernetes Artifacts. Its possible to define the
> DB url with the relevant Kubernetes service name, hence there is no need to
> know the exact IP of the database pod.
>

The main advantage of this model is that we can completely automate WSO2
deployments on K8S without requiring the user to provide any configurations.
User may only need to do the following:

- Build Docker images, either using Puppet provisioning or any other method.
- Import Docker images to a central Docker registry or to the K8S nodes
- Start MySQL server Pods and Services
- Deploy WSO2 product(s)
- The deployment architecture may look as follows:



>
> Currently in mysql docker image [1], its only possible to create one DB
> per container. Therefore to support the usual deployment clustered wso2
> products, we would need at least 3 pods:
>
>1. gov_db - shared across all products as necessary
>2. user_db - shared across all products as necessary
>3. product_db - used for config db (shared with all members of the
>same cluster) and any other product specific db tables
>
>

+1 for the approach Isuru! This may even comply with the Microdata
architecture [1].

[1]
https://medium.com/@asankama/microdata-architecture-5449596a3f6f#.locqrpdsa

Thanks

> This will allow the users to deploy the wso2 products using defined real
> distributed deployment patterns in a kubernetes environment with minimal
> effort, without doing any configuration changes.
>
> Note that the purpose of this effort is to make it easier to try out WSO2
> products in a K8s environment, and to try out K8s features like
> autoscaling, etc. without the hassle of configuring databases, etc. WDYT?
>
> [1]. https://hub.docker.com/_/mysql/
>
> --
> Thanks and Regards,
>
> Isuru H.
> +94 716 358 048* <http://wso2.com/>*
>
>
>


-- 
*Imesh Gunaratne*
Senior Technical Lead
WSO2 Inc: http://wso2.com
T: +94 11 214 5345 M: +94 77 374 2057
W: http://imesh.io TW: @imesh
Lean . Enterprise . Middleware
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


[Architecture] API Manager Artifact Synchronization on Containerized Platforms

2016-04-26 Thread Imesh Gunaratne
​Hi All,

This is to propose a new model for synchronizing the Synapse APIs generated
by the API Publisher to the API Gateway worker nodes in containerized
environments. Here we are trying to avoid using the SVN-based deployment
synchronizer. This proposal may apply to Kubernetes, Apache Mesos or any
other container cluster management system. Please refer to the diagram below:

The idea is to use one of the standard API-M deployment patterns and deploy
two new components, in the Gateway Manager node and the Gateway Worker nodes
respectively, for synchronizing APIs: an API Sender WAR and an API Receiver
Bundle. This is how it would work:

   - User publishes an API via API Publisher
   - API Publisher makes a service call to API Gateway Manager to generate
   the Synapse API
   - API Gateway writes the Synapse API definition to the disk
   - API Sender WAR exposes the Synapse API definition on the filesystem
   via an API endpoint (X)
   - API Receiver Bundle will poll the above API endpoint (X) at a given
   time interval; if any new APIs are available, they will be pulled and
   deployed in each API Gateway Worker node (see the sketch further below).

*Important*

   - We might not be able to scale API Gateway Manager node more than one
   instance because API Publisher talks to it via a service call. If we do,
   all the instances of the API Gateway Manager cluster would not have all the
   Synapse API definitions.
   - API Gateway Manager pod needs to mount a volume to persist the Synapse
   APIs. This is vital for allowing the Gateway Manager pod to auto-heal
   without losing the Synapse APIs on the filesystem.
   - This design does not depend on any native features of the container
   cluster management system.
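
To illustrate the receiver side, a minimal Java sketch of the polling logic is
given below. The sender endpoint, its response format and the error handling
are assumptions made only for this sketch; the actual API Sender and API
Receiver components would need to agree on a concrete contract for listing new
and updated APIs:

import org.json.JSONArray;
import org.json.JSONObject;

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class ApiReceiverSketch {

    // Hypothetical endpoint exposed by the API Sender WAR (TLS trust setup omitted).
    private static final String SENDER_ENDPOINT = "https://gateway-manager:9443/api-sender/apis";
    // Synapse API hot-deployment directory of the Gateway Worker (relative to CARBON_HOME).
    private static final Path API_DIR =
            Paths.get("repository", "deployment", "server", "synapse-configs", "default", "api");

    public void start() {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleAtFixedRate(this::syncApis, 0, 30, TimeUnit.SECONDS);
    }

    private void syncApis() {
        try {
            Files.createDirectories(API_DIR);
            // Assumed response shape: {"apis":[{"name":"HelloAPI","content":"<api .../>"}]}
            JSONObject response = new JSONObject(httpGet(SENDER_ENDPOINT));
            JSONArray apis = response.getJSONArray("apis");
            for (int i = 0; i < apis.length(); i++) {
                JSONObject api = apis.getJSONObject(i);
                // Writing into the hot-deployment directory lets the Synapse
                // deployer pick up the new or updated API definition.
                Path target = API_DIR.resolve(api.getString("name") + ".xml");
                Files.write(target, api.getString("content").getBytes(StandardCharsets.UTF_8));
            }
        } catch (Exception e) {
            e.printStackTrace(); // a real component would log and keep polling
        }
    }

    private String httpGet(String endpoint) throws Exception {
        HttpURLConnection connection = (HttpURLConnection) new URL(endpoint).openConnection();
        connection.setRequestMethod("GET");
        StringBuilder body = new StringBuilder();
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(connection.getInputStream(), StandardCharsets.UTF_8))) {
            String line;
            while ((line = reader.readLine()) != null) {
                body.append(line);
            }
        }
        return body.toString();
    }
}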


Thanks

-- 
*Imesh Gunaratne*
Senior Technical Lead
WSO2 Inc: http://wso2.com
T: +94 11 214 5345 M: +94 77 374 2057
W: http://imesh.io TW: @imesh
Lean . Enterprise . Middleware
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] API Manager Artifact Synchronization on Containerized Platforms

2016-04-26 Thread Imesh Gunaratne
On Tue, Apr 26, 2016 at 5:37 PM, Nuwan Dias  wrote:

> Hi Imesh,
>
> Your intention I believe is to make API Manager artifact synchronisation
> possible in containerized environments without modifying the existing
> product and without using deployment synchronizer. Correct?
>

Yes, that's correct; that is the first attempt. However, if it's not
straightforward, it might be better to rethink and introduce an extension
point in the product to handle it properly.

>
> Looking at the suggested approach from a high level, this is quite similar
> to using dep-sync with a periodic svn checkout on the Gateway worker nodes
> instead of relying on the cluster message to arrive to check of updates. If
> we opt for the dep-sync option without relying on the cluster message, will
> it still be a problem for containers?
>

No, we do not see any problems with using the existing dep-sync. The only
concern is having to maintain a version control system just for synchronizing
internal artifacts of a product.

>
> On Tue, Apr 26, 2016 at 3:15 PM, Imesh Gunaratne  wrote:
>
>> ​Hi All,
>>
>> This is to propose a new model for synchronizing Synapse APIs generated
>> by API Publisher in API Gateway worker nodes in containerized environments.
>> Here we are trying to avoid using SVN based deployment synchronizer. This
>> proposal may apply to Kubernetes, Apache Mesos or any other container
>> cluster management systems. Please refer the below diagram:
>>
>> You will have to check for new APIs and modifications to existing APIs as
> well. Which means you will have to write something quite similar to what
> the svn diff tool does. We will have to be a bit careful here because if
> the receiver bundle consumes a lot of resources, it'll affect the API
> traffic too. And the load on the Manager will increase as the number of
> workers grow.
>

Indeed, that's the idea. Regarding the resource concern, we can handle it
using container groups. Still that's only available in K8S.

>
> *Important*
>>
>>- We might not be able to scale API Gateway Manager node more than
>>one instance because API Publisher talks to it via a service call. If we
>>do, all the instances of the API Gateway Manager cluster would not have 
>> all
>>the Synapse API definitions.
>>
>> If there are more than 1 manager nodes, we can configure the publisher to
> publish APIs to all of them.
>

That's interesting. If so we may not need a gateway manager at all. Can you
please describe how that works? Do we need to specify IP addresses of each
gateway node in publisher?

Thanks

>
>>- API Gateway Manager pod needs to mount a volume to persist the
>>Synapse APIs. This is vital for allowing the Gateway Manager pod to auto
>>heal without loosing the Synapse APIs on the filesystem.
>>
>>
>>- This design does not depend on any native features of the container
>>cluster management system.
>>
>>
>> Thanks
>>
>> --
>> *Imesh Gunaratne*
>> Senior Technical Lead
>> WSO2 Inc: http://wso2.com
>> T: +94 11 214 5345 M: +94 77 374 2057
>> W: http://imesh.io TW: @imesh
>> Lean . Enterprise . Middleware
>>
>>
>
>
> --
> Nuwan Dias
>
> Technical Lead - WSO2, Inc. http://wso2.com
> email : nuw...@wso2.com
> Phone : +94 777 775 729
>



-- 
*Imesh Gunaratne*
Senior Technical Lead
WSO2 Inc: http://wso2.com
T: +94 11 214 5345 M: +94 77 374 2057
W: http://imesh.io TW: @imesh
Lean . Enterprise . Middleware
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] API Manager Artifact Synchronization on Containerized Platforms

2016-04-26 Thread Imesh Gunaratne
Hi Nuwan,

On Wed, Apr 27, 2016 at 9:18 AM, Nuwan Dias  wrote:

>
> On Wed, Apr 27, 2016 at 8:59 AM, Imesh Gunaratne  wrote:
>
>>
>> On Tue, Apr 26, 2016 at 5:37 PM, Nuwan Dias  wrote:
>>
>>>
>>> If there are more than 1 manager nodes, we can configure the publisher
>>> to publish APIs to all of them.
>>>
>>
>> That's interesting. If so we may not need a gateway manager at all. Can
>> you please describe how that works? Do we need to specify IP addresses of
>> each gateway node in publisher?
>>
>
> This capability is intended to handle having multiple gateway clusters.
> For example, if you have an internal Gateway cluster and an external
> Gateway cluster, you can specify the url of the manager node of each
> cluster on the api-manager.xml of the Publisher. Then from the Publisher
> UI, you can publish an API to a selected Gateway manager or both (by
> default it publishes to all).
>

Can you please point me to the code which handles this?

We should be able to introduce an extension point here and add an
implementation for each container cluster manager, similar to the
clustering membership schemes we implemented. This would let us dynamically
list the available gateway nodes in the Publisher and let the Publisher
send the APIs to all the available gateway nodes. Then we would not need a
gateway manager, and things would be much simpler and more straightforward.
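
As a rough sketch of the kind of extension point I have in mind (the
interface and class names are hypothetical, not an existing API):

import java.util.Arrays;
import java.util.List;

/**
 * Hypothetical SPI the Publisher could use to discover gateway nodes at
 * publish time, with one implementation per container cluster manager
 * (Kubernetes, Mesos, static/VM list), similar to the clustering
 * membership schemes.
 */
public interface GatewayNodeDiscovery {

    /** e.g. "kubernetes", "mesos", "static"; selected via configuration. */
    String getType();

    /** Management URLs of all gateway nodes currently available for publishing. */
    List<String> getGatewayEndpoints();
}

/** Trivial static implementation for VM deployments, just to show the shape. */
class StaticGatewayNodeDiscovery implements GatewayNodeDiscovery {

    private final List<String> endpoints;

    StaticGatewayNodeDiscovery(String... endpoints) {
        this.endpoints = Arrays.asList(endpoints);
    }

    @Override
    public String getType() {
        return "static";
    }

    @Override
    public List<String> getGatewayEndpoints() {
        return endpoints;
    }
}

The Publisher would simply iterate over getGatewayEndpoints() and push each
Synapse artifact to every node returned, instead of going through a single
Gateway Manager.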

Thanks


>
>> Thanks
>>
>>>
>>>>- API Gateway Manager pod needs to mount a volume to persist the
>>>>Synapse APIs. This is vital for allowing the Gateway Manager pod to auto
>>>>heal without loosing the Synapse APIs on the filesystem.
>>>>
>>>>
>>>>- This design does not depend on any native features of the
>>>>container cluster management system.
>>>>
>>>>
>>>> Thanks
>>>>
>>>> --
>>>> *Imesh Gunaratne*
>>>> Senior Technical Lead
>>>> WSO2 Inc: http://wso2.com
>>>> T: +94 11 214 5345 M: +94 77 374 2057
>>>> W: http://imesh.io TW: @imesh
>>>> Lean . Enterprise . Middleware
>>>>
>>>>
>>>
>>>
>>> --
>>> Nuwan Dias
>>>
>>> Technical Lead - WSO2, Inc. http://wso2.com
>>> email : nuw...@wso2.com
>>> Phone : +94 777 775 729
>>>
>>
>>
>>
>> --
>> *Imesh Gunaratne*
>> Senior Technical Lead
>> WSO2 Inc: http://wso2.com
>> T: +94 11 214 5345 M: +94 77 374 2057
>> W: http://imesh.io TW: @imesh
>> Lean . Enterprise . Middleware
>>
>>
>
>
> --
> Nuwan Dias
>
> Technical Lead - WSO2, Inc. http://wso2.com
> email : nuw...@wso2.com
> Phone : +94 777 775 729
>



-- 
*Imesh Gunaratne*
Senior Technical Lead
WSO2 Inc: http://wso2.com
T: +94 11 214 5345 M: +94 77 374 2057
W: http://imesh.io TW: @imesh
Lean . Enterprise . Middleware
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] API Manager Artifact Synchronization on Containerized Platforms

2016-04-26 Thread Imesh Gunaratne
Hi Srinath,

On Wed, Apr 27, 2016 at 11:52 AM, Srinath Perera  wrote:

> Hi Imesh,
>
> Publishing to both gateways is not good because things fall apart when a
> one publication failed. Then a one node will be out of sync and will
> continue to be out of sync.
>

Indeed, that's true!

>
> Also in C5, we are dropping depsync, and asking users to use rsync (or some
> other method like NFS).
>
> Can't we use rsync? (e.g. write a script that will get the list of docker
> instances, and sync a given repo folder against all docker instances).
>
Yes, rsync is a good option. To use it we might need to consider the
following:

   - We would need to run rsync in a separate container; this can be done
   on K8S using pods but not on Mesos at the moment.
   - Rsync would need a user account or an SSH key to be able to talk to the
   gateway manager (assuming that rsync runs on each gateway worker).

Thanks


> --Srinath
>
>
> On Wed, Apr 27, 2016 at 11:11 AM, Imesh Gunaratne  wrote:
>
>> Hi Nuwan,
>>
>> On Wed, Apr 27, 2016 at 9:18 AM, Nuwan Dias  wrote:
>>
>>>
>>> On Wed, Apr 27, 2016 at 8:59 AM, Imesh Gunaratne  wrote:
>>>
>>>>
>>>> On Tue, Apr 26, 2016 at 5:37 PM, Nuwan Dias  wrote:
>>>>
>>>>>
>>>>> If there are more than 1 manager nodes, we can configure the publisher
>>>>> to publish APIs to all of them.
>>>>>
>>>>
>>>> That's interesting. If so we may not need a gateway manager at all. Can
>>>> you please describe how that works? Do we need to specify IP addresses of
>>>> each gateway node in publisher?
>>>>
>>>
>>> This capability is intended to handle having multiple gateway clusters.
>>> For example, if you have an internal Gateway cluster and an external
>>> Gateway cluster, you can specify the url of the manager node of each
>>> cluster on the api-manager.xml of the Publisher. Then from the Publisher
>>> UI, you can publish an API to a selected Gateway manager or both (by
>>> default it publishes to all).
>>>
>>
>> Can you please point me to the code which handles this?
>>
>> We should be able to introduce an extension point here and add an
>> implementation for each container cluster manager, similar to the
>> clustering membership schemes we implemented. This would let us dynamically
>> list the available gateway nodes in the Publisher and let the Publisher
>> sends the APIs to all the available gateway nodes. Then we would not need a
>> gateway manager and things would be much simple and straightforward.
>>
>> Thanks
>>
>>
>>>
>>>> Thanks
>>>>
>>>>>
>>>>>>- API Gateway Manager pod needs to mount a volume to persist the
>>>>>>Synapse APIs. This is vital for allowing the Gateway Manager pod to 
>>>>>> auto
>>>>>>heal without loosing the Synapse APIs on the filesystem.
>>>>>>
>>>>>>
>>>>>>- This design does not depend on any native features of the
>>>>>>container cluster management system.
>>>>>>
>>>>>>
>>>>>> Thanks
>>>>>>
>>>>>> --
>>>>>> *Imesh Gunaratne*
>>>>>> Senior Technical Lead
>>>>>> WSO2 Inc: http://wso2.com
>>>>>> T: +94 11 214 5345 M: +94 77 374 2057
>>>>>> W: http://imesh.io TW: @imesh
>>>>>> Lean . Enterprise . Middleware
>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> Nuwan Dias
>>>>>
>>>>> Technical Lead - WSO2, Inc. http://wso2.com
>>>>> email : nuw...@wso2.com
>>>>> Phone : +94 777 775 729
>>>>>
>>>>
>>>>
>>>>
>>>> --
>>>> *Imesh Gunaratne*
>>>> Senior Technical Lead
>>>> WSO2 Inc: http://wso2.com
>>>> T: +94 11 214 5345 M: +94 77 374 2057
>>>> W: http://imesh.io TW: @imesh
>>>> Lean . Enterprise . Middleware
>>>>
>>>>
>>>
>>>
>>> --
>>> Nuwan Dias
>>>
>>> Technical Lead - WSO2, Inc. http://wso2.com
>>> email : nuw...@wso2.com
>>> Phone : +94 777 775 729
>>>
>>
>>
>>
>> --
>> *Imesh Gunaratne*
>> Senior Technical Lead
>> WSO2 Inc: http://wso2.com
>> T: +94 11 214 5345 M: +94 77 374 2057
>> W: http://imesh.io TW: @imesh
>> Lean . Enterprise . Middleware
>>
>>
>> ___
>> Architecture mailing list
>> Architecture@wso2.org
>> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>>
>>
>
>
> --
> 
> Srinath Perera, Ph.D.
>http://people.apache.org/~hemapani/
>http://srinathsview.blogspot.com/
>
> ___
> Architecture mailing list
> Architecture@wso2.org
> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>
>


-- 
*Imesh Gunaratne*
Senior Technical Lead
WSO2 Inc: http://wso2.com
T: +94 11 214 5345 M: +94 77 374 2057
W: http://imesh.io TW: @imesh
Lean . Enterprise . Middleware
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] API Manager Artifact Synchronization on Containerized Platforms

2016-04-26 Thread Imesh Gunaratne
Just to add another option we evaluated:

*Rolling Updates on Kubernetes*
- Again, this is only available on Kubernetes at the moment and not
supported on Mesos.
- The idea here is to build a new gateway Docker image on each API
change and execute a rollout via Kubernetes.
- This process requires a Docker host to do the Docker image build. As a
result it would be difficult to set up API-M on a container cluster manager
OOB, unless we completely automate a distributed API-M deployment
on K8S or Mesos.

Thanks


On Wed, Apr 27, 2016 at 12:09 PM, Imesh Gunaratne  wrote:

> Hi Srinath,
>
> On Wed, Apr 27, 2016 at 11:52 AM, Srinath Perera  wrote:
>
>> Hi Imesh,
>>
>> Publishing to both gateways is not good because things fall apart when a
>> one publication failed. Then a one node will be out of sync and will
>> continue to be out of sync.
>>
>
> Indeed, that's true!
>
>>
>> Also in C5, we are dropping depsync, and ask users to use  rsync ( or
>> some other method like NSF).
>>
>> Can't we use rsync? (e.g. write a script that will get the list of docker
>> instances, and sync a give repo folder against all docker instances).
>>
>> Yes rsync is a good option, to use that we might need to consider
> following:
>
>- We would need to run rsync in a separate container, this can be done
>on K8S using pods but not on Mesos at the moment.
>- Rsync would need a user account or a SSH key to be able to talk to
>the gateway manager (assuming that rsync runs on each gateway worker).
>
> Thanks
>
>
>> --Srinath
>>
>>
>> On Wed, Apr 27, 2016 at 11:11 AM, Imesh Gunaratne  wrote:
>>
>>> Hi Nuwan,
>>>
>>> On Wed, Apr 27, 2016 at 9:18 AM, Nuwan Dias  wrote:
>>>
>>>>
>>>> On Wed, Apr 27, 2016 at 8:59 AM, Imesh Gunaratne 
>>>> wrote:
>>>>
>>>>>
>>>>> On Tue, Apr 26, 2016 at 5:37 PM, Nuwan Dias  wrote:
>>>>>
>>>>>>
>>>>>> If there are more than 1 manager nodes, we can configure the
>>>>>> publisher to publish APIs to all of them.
>>>>>>
>>>>>
>>>>> That's interesting. If so we may not need a gateway manager at all.
>>>>> Can you please describe how that works? Do we need to specify IP addresses
>>>>> of each gateway node in publisher?
>>>>>
>>>>
>>>> This capability is intended to handle having multiple gateway clusters.
>>>> For example, if you have an internal Gateway cluster and an external
>>>> Gateway cluster, you can specify the url of the manager node of each
>>>> cluster on the api-manager.xml of the Publisher. Then from the Publisher
>>>> UI, you can publish an API to a selected Gateway manager or both (by
>>>> default it publishes to all).
>>>>
>>>
>>> Can you please point me to the code which handles this?
>>>
>>> We should be able to introduce an extension point here and add an
>>> implementation for each container cluster manager, similar to the
>>> clustering membership schemes we implemented. This would let us dynamically
>>> list the available gateway nodes in the Publisher and let the Publisher
>>> sends the APIs to all the available gateway nodes. Then we would not need a
>>> gateway manager and things would be much simple and straightforward.
>>>
>>> Thanks
>>>
>>>
>>>>
>>>>> Thanks
>>>>>
>>>>>>
>>>>>>>- API Gateway Manager pod needs to mount a volume to persist the
>>>>>>>Synapse APIs. This is vital for allowing the Gateway Manager pod to 
>>>>>>> auto
>>>>>>>heal without loosing the Synapse APIs on the filesystem.
>>>>>>>
>>>>>>>
>>>>>>>- This design does not depend on any native features of the
>>>>>>>container cluster management system.
>>>>>>>
>>>>>>>
>>>>>>> Thanks
>>>>>>>
>>>>>>> --
>>>>>>> *Imesh Gunaratne*
>>>>>>> Senior Technical Lead
>>>>>>> WSO2 Inc: http://wso2.com
>>>>>>> T: +94 11 214 5345 M: +94 77 374 2057
>>>>>>> W: http://imesh.io TW: @imesh
>>>>>>> Lean . Enterprise . Middleware
>>>>>>>
>>>>>>>

Re: [Architecture] [PET] CEP Extension for Unique window - Length, Time

2016-05-30 Thread Imesh Gunaratne
Hi Yashothara,

On Mon, May 30, 2016 at 2:08 PM, Yashothara Shanmugarajah <
yashoth...@wso2.com> wrote:
>
>
> But now my task is to add two parameters, time and length: unique
> (attribute, length) and unique (attribute, time). So the output should be
> the unique events within the given time window or length window according
> to the unique attribute value. So there should be two classes extending
> WindowProcessor.
>

Can't we add these attributes to the existing Unique window processor [1]
(as optional) without implementing a new window processor?
[1] https://docs.wso2.com/display/CEP310/Windows#Windows-UniqueWindow

Thanks

-- 
*Imesh Gunaratne*
Software Architect
WSO2 Inc: http://wso2.com
T: +94 11 214 5345 M: +94 77 374 2057
W: https://medium.com/@imesh TW: @imesh
Lean . Enterprise . Middleware
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] Dep sync strategies for Carbon 4 products in Kubernetes

2016-06-07 Thread Imesh Gunaratne
Hi Chamila,

Thanks for looking into this. Please find my comments inline:

On Tue, Jun 7, 2016 at 6:03 AM, Chamila De Alwis  wrote:
>
>
> In the push method, it's the GW manager that initiates the process.
>
>1. Add a folder watcher (inotifywatch[1]) to
>repository/deployment/server/synapse-configs
>2. When triggered
>   1. Contact Kubernetes API and get list of WORKER_SVC container IPs
>   2. for each container IP, Rsync with --delete
>
It would be better if we could implement this feature without tightly
coupling it to the K8S API. Therefore I prefer the pull-based model over
this one.


> The pull method works the other way, i.e. initiated by the GW worker nodes
> and has to be run continuously on a loop.
>

This approach can be applied to API-M on any container cluster manager
(and also on VMs) with very few changes. AFAIU it's a matter of changing how
the SSH server and rsync processes are run on each GW node. K8S can run
these in separate containers using pods, and Mesos can use supervisord [4].

[4] https://docs.docker.com/engine/admin/using_supervisord/
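
As a minimal sketch of such a pull agent (assuming rsync is installed in the
GW worker container and key-based SSH access to the manager is already set
up; the host names and paths below are placeholders, not the actual layout):

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

/**
 * Minimal sketch of a pull-based artifact sync agent for a GW worker.
 * Periodically rsyncs the manager's synapse-configs directory into the
 * local repository. Assumes rsync is installed and key-based SSH access
 * to the manager is configured (e.g. via a mounted secret/volume).
 */
public class ArtifactPullAgent {

    private static final String SOURCE =
            "wso2user@gw-manager:/mnt/wso2am/repository/deployment/server/synapse-configs/";
    private static final String TARGET =
            "/mnt/wso2am/repository/deployment/server/synapse-configs/";

    public static void main(String[] args) {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleWithFixedDelay(ArtifactPullAgent::sync, 0, 30, TimeUnit.SECONDS);
    }

    private static void sync() {
        try {
            Process rsync = new ProcessBuilder(
                    "rsync", "-az", "--delete", SOURCE, TARGET)
                    .inheritIO()
                    .start();
            rsync.waitFor();
        } catch (Exception e) {
            // Log and retry on the next tick; a failed pull must not kill the agent.
            e.printStackTrace();
        }
    }
}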

Thanks


> Additionally, Kubernetes supports a volume plugin named Git Volume [2],
> which is basically an emptyDir volume with an initial "git clone" command
> done on the provided remote repository.
>
> The issue with this is that there is no push functionality when the
> contents are updated. This might be solved by extending the Git Repo volume
> plugin and writing a Carbon Volume Plugin for Kubernetes, however IMO it
> would come up with the same set of problems we have in the current SVN
> based deployment synchronization, only with an additional code base.
>
> NFS volume based approach was also considered, however because of the
> limitations in moving the mount between the nodes (solutions like Flocker
> works on Block Level storage [3]), and managing read-write capability of
> multiple containers it also seems to be a complex path.
>
> IMO out of these approaches, Rsync is the possible candidate (specifically
> the push method), although it takes a few workarounds to achieve
> functionality. GW Managers would need to synchronize artifacts between
> themselves, as well as towards the worker nodes, and the push job should
> only run from the active manager node.
>
> I highly appreciate any input on this.
>
> [1] - http://linux.die.net/man/1/inotifywatch
> [2] - http://kubernetes.io/docs/user-guide/volumes/#gitrepo
> [3] -
> https://docs.clusterhq.com/en/latest/faq/#can-i-attach-a-single-volume-to-multiple-hosts
>
> Regards,
> Chamila de Alwis
> Committer and PMC Member - Apache Stratos
> Software Engineer | WSO2 | +94772207163
> Blog: code.chamiladealwis.com
>
>
>


-- 
*Imesh Gunaratne*
Software Architect
WSO2 Inc: http://wso2.com
T: +94 11 214 5345 M: +94 77 374 2057
W: https://medium.com/@imesh TW: @imesh
Lean . Enterprise . Middleware
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] WSO2 Siddhi Visual Editor for CEP

2016-06-10 Thread Imesh Gunaratne
Hi Nayantara,

On Wed, Jun 8, 2016 at 6:31 AM, Nayantara Jeyaraj 
wrote:

>
>
Is this a prototype or the actual implementation? A few comments:


   - It would be better to make the two pink and blue boxes in the toolbox
   more meaningful, rather than leaving them blank.
   - Is there a reason to use pink for the streams?
   - Does that align with the other UI components which already represent
   streams in CEP?
   - Regarding the title "Properties Panel": IMO we do not need the word
   "Panel" here; saying "Properties" would be sufficient.

   - Can you please share a complete screenshot of the management console?

​Thanks
​

-- 
*Imesh Gunaratne*
Software Architect
WSO2 Inc: http://wso2.com
T: +94 11 214 5345 M: +94 77 374 2057
W: https://medium.com/@imesh TW: @imesh
Lean . Enterprise . Middleware
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] [CEP] Improvement to External Time Batch Window to Allow Specifying Timeout with First Event's Time as Start Time

2016-07-11 Thread Imesh Gunaratne
Hi Charini,

A great thought!

Would it be possible for you to explain this requirement with an example
written in Siddhi? Specifically how to generate a custom event on the
timeout.

Thanks

On Monday, July 11, 2016, Charini Nanayakkara  wrote:

> Hi All,
>
> I have planned to improve the current implementation of external time
> batch window, to allow accepting first event's time as start time, when
> specifying a timeout.
>
> In the current implementation, the 3rd parameter allows user to provide a
> user defined start time (whereas the default is to use first event's time
> as start time). This value is required to be a constant. The 4th parameter
> is reserved for specifying a timeout, which is valuable in an instance
> where output needs to be given if events don't arrive for some time.
> However, this implementation disallows a user to use the default start time
> (first event's start time) and timeout together.
>
> Therefore, I intend to change the implementation such that user can either
> provide a variable or a constant as 3rd parameter. This enables the
> external time field to be given as 3rd parameter, from which Siddhi can
> retrieve 1st event's time to be used as start time. Alternatively, a
> constant value could be given if user defined start time is required.
>
> Suggestions and comments are most welcome.
>
> Thank you
> Charini
>
> --
> Charini Vimansha Nanayakkara
> Software Engineer at WSO2
> Mobile: 0714126293
>
>

-- 
*Imesh Gunaratne*
Software Architect
WSO2 Inc: http://wso2.com
T: +94 11 214 5345 M: +94 77 374 2057
W: https://medium.com/@imesh TW: @imesh
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] Dry Run WSO2 Servers to Test Configuration

2016-07-12 Thread Imesh Gunaratne
Hi Chamila,

A very good proposal! I think this would be really useful for reducing the
time it takes to verify functionality after a configuration change. More
importantly, the automation tests should be able to reuse this feature for
executing the basic tests.

On Wed, Jul 13, 2016 at 9:19 AM, Chamila De Alwis  wrote:

> Hi,
>
> It would be a good feature to allow a --test flag in wso2server.sh which
> doesn't start the server but accomplishes the following tasks.
>

Maybe "--verify" would be more meaningful?


>
>1. Validate all configuration files with respect to component
>functionality
>
Maybe each component can introduce a new set of classes to verify its
configuration (this might not be mandatory for all components).
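
Just to make that concrete, a possible shape for such verifier classes (all
names below are hypothetical; this is only a sketch, not an existing API):

import java.util.List;

/**
 * Hypothetical interface a Carbon component could implement to take part
 * in the dry-run / --verify mode. Implementations must be side-effect
 * free: read configs, report problems, and nothing else.
 */
public interface ConfigurationVerifier {

    /** Human-readable name, e.g. "user-mgt.xml verifier". */
    String getName();

    /** Returns an empty list if the configuration is valid. */
    List<String> verify();
}

/** Runner invoked by wso2server.sh --verify; exits 0 on success, 1 otherwise. */
class VerificationRunner {
    static int runAll(List<ConfigurationVerifier> verifiers) {
        int failures = 0;
        for (ConfigurationVerifier verifier : verifiers) {
            List<String> errors = verifier.verify();
            for (String error : errors) {
                System.err.println("[" + verifier.getName() + "] " + error);
            }
            failures += errors.size();
        }
        return failures == 0 ? 0 : 1;
    }
}

Each component could register its verifier (e.g. as an OSGi service), and
the dependency-library check below could simply be another verifier that
inspects the installed bundles.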


>
>1. Validate if dependency libs are present
>
> Do you have any thoughts on how to implement this? I think at runtime we
may need to read OSGi bundle information to get the dependency information.​


>
>1. Provide a valid return code for success (0) or failure (more than 0)
>2. Does not cause any side effects (DB creation/modification, log
>generation etc)
>
Maybe a new log file can be introduced for this?

Thanks

-- 
*Imesh Gunaratne*
Software Architect
WSO2 Inc: http://wso2.com
T: +94 11 214 5345 M: +94 77 374 2057
W: https://medium.com/@imesh TW: @imesh
lean. enterprise. middleware
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] Issues in Running DAS in HA / Spark Client Mode in Mesos in BRIDGE Networking Mode

2016-07-21 Thread Imesh Gunaratne
On Thu, Jul 21, 2016 at 6:18 PM, Isuru Haththotuwa  wrote:

>
> After patching to get over the connection issue [2], another issue is seen
> where after the connection is established between spark client and
> mesos(spark) master, where the spark client just hang executing after the
> log lines [3]. This can be noticed in both DAS and a standalone spark
> sample. Doing some research, I suspect this *might* be because of a network
> issue again. As per the discussions where the same issue has been reported
> earlier [4, 5], a possible cause might be since its needed communicate from
> spark client to master and master to client (bi-directional). I did not
> check this in full detail. But, this can cause issues in the Mesos
> environment again, since the DAS spark client will be binding to the local
> IP.
>
> ​Great work on analyzing this Isuru!
​


> Considering all these facts, shall we go for the docker HOST networking
> mode for DAS as the default option in mesos? This might introduce some
> limitations such as not being able to have more than one DAS container in a
> single mesos worker node, etc. But IMHO it might not be practical to try
> and  fix/workaround all limitations that we get in mesos. However, if an
> SDN/overlay network support is available, we can switch back to the BRIDGE
> networking mode. Please share your thoughts.
>
> ​+1 Will proceed with HOST networking in this Mesos Artifacts release. The
overlay network and service load balancing issues may get resolved in a
later Mesos DC/OS release [1].

[1] https://github.com/dcos/minuteman

Thanks
​


> [1]. [Mesos-Marathon] Issues in Creating Point to Point TCP Connections in
> Mesos-Marathon based Deployments
>
> [2]. Association with remote system [akka.tcp://sparkMaster@master.mesos:5050]
> has failed, address is now gated for [5000] ms
>
> [3].
> I0721 10:49:05.939699 19943 sched.cpp:222] Version: 0.28.1
> I0721 10:49:05.947686 19939 sched.cpp:326] New master detected at
> master@192.168.65.90:5050
> I0721 10:49:05.950932 19939 sched.cpp:336] No credentials provided.
> Attempting to register without authentication
>
> [4].
> http://stackoverflow.com/questions/31867136/spark-shell-connect-to-mesos-hangs-no-credentials-provided-attempting-to-regis
>
> [5].
> http://stackoverflow.com/questions/33727154/spark-shell-connecting-to-mesos-stuck-at-sched-cpp
>
>
> --
> Thanks and Regards,
>
> Isuru H.
> +94 716 358 048
>
>
>


-- 
*Imesh Gunaratne*
Software Architect
WSO2 Inc: http://wso2.com
T: +94 11 214 5345 M: +94 77 374 2057
W: https://medium.com/@imesh TW: @imesh
lean. enterprise. middleware
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


[Architecture] A JavaScript based Tooling Platform for WSO2 Middleware

2016-07-22 Thread Imesh Gunaratne
Hi All,

According to an internal discussion we had, we thought of introducing
$subject for improving the overall tooling experience of WSO2 middleware.
The main goal of this effort is to build a lightweight, cross-platform,
attractive, user-oriented tooling platform with reusable visualization
components.

This has several sub goals:

   - Implementing reusable tooling components which can be used for
   building a unified IDE:
  - This would be similar to WSO2 Carbon architecture and analytics
  platform where we implement reusable components and build products by
  aggregating them.
   - Reusing visualization components in web based UIs
   - Making the tooling platform available on the web/cloud

To achieve this, we thought of implementing tooling components in HTML5,
CSS and JavaScript. This would allow us to make the tooling platform
platform-independent, reusable and web-enabled.


*WSO2 JS Tooling Platform High Level Architecture*

[image: Inline image 1]
On high level, the WSO2 JS tooling platform would have above components.
Out of these we would first start with the visualization component and try
to come up with a JS library which can provide features needed for
implementing product specific tooling components.
​

*WSO2 JS Tooling Platform Component Architecture*​
[image: Inline image 3]

According to the above concept, we would use existing JS frameworks such
as D3.js, Backbone and Lodash for implementing the core tooling framework.
In this model, D3.js will provide the basic features needed for drawing
shapes, Backbone will be used for JavaScript extensibility features (only
using the Model and View from its MVC architecture), and Lodash will
provide the utility functions.

On top of the core tooling framework, a collection of tooling components
will be implemented according to WSO2 product requirements. Initially we
will start with the NextGen ESB (Integration Server) by implementing
sequence diagramming and data mapper modules.
​
The initial source code of this effort can be found in WSO2 Incubator [1].
Please feel free to try this out and share your thoughts.​

​[1] ​
https://github.com/wso2-incubator/js-tooling-framework

​Thanks​

-- 
*Imesh Gunaratne*
Software Architect
WSO2 Inc: http://wso2.com
T: +94 11 214 5345 M: +94 77 374 2057
W: https://medium.com/@imesh TW: @imesh
lean. enterprise. middleware
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] A JavaScript based Tooling Platform for WSO2 Middleware

2016-07-22 Thread Imesh Gunaratne
Hi Chathura,

On Fri, Jul 22, 2016 at 1:51 PM, Chathura Ekanayake 
wrote:

>
> Are we integrating this with VSC?
>

We are currently investigating Electron based solutions for the IDE. VSC is
an option.

Thanks​

>
> - Chathura
>
> On Fri, Jul 22, 2016 at 12:40 PM, Imesh Gunaratne  wrote:
>
>> Hi All,
>>
>> According to an internal discussion we had, we thought of introducing
>> $subject for improving the overall tooling experience of WSO2 middleware.
>> The main goal of this effort is to build a lightweight, cross-platform,
>> attractive, user-oriented tooling platform with reusable visualization
>> components.
>>
>> This has several sub goals:
>>
>>- Implementing reusable tooling components which can be used for
>>building an unified IDE:
>>   - This would be similar to WSO2 Carbon architecture and analytics
>>   platform where we implement reusable components and build products by
>>   aggregating them.
>>- Reusing visualization components in web based UIs
>>- Making the tooling platform available on the web/cloud
>>
>> To achieve this, we thought of implementing tooling components in HTML5,
>> CSS and JavaScript. This would allow us to make the tooling platform;
>> platform independent, reusable and web enabled.
>>
>>
>> *WSO2 JS Tooling Platform High Level Architecture*
>>
>> [image: Inline image 1]
>> On high level, the WSO2 JS tooling platform would have above components.
>> Out of these we would first start with the visualization component and try
>> to come up with a JS library which can provide features needed for
>> implementing product specific tooling components.
>> ​
>>
>> *WSO2 JS Tooling Platform Component Architecture*​
>> [image: Inline image 3]
>>
>> ​According to the above concept, we would use existing JS frameworks such
>> as D3.js, Backbone and Lodash for implementing the core tooling framework.
>> In this model, D3.js will be used for utilizing basic features needed for
>> drawing shapes, Backbone will be used for implementing JavaScript
>> extendibility features (only using Model and View from its MVC
>> architecture) and finally Lodash will be used for utilizing utility
>> functions.​
>>
>> On top of the core tooling framework a collection of tooling components
>> will be implemented according to WSO2 product requirements. Initially we
>> will be starting with the NextGen ESB (Integration Server) by implementing
>> a sequence diagramming and data mapper modules.
>> ​
>> The initial source code of this effort can be found in WSO2 Incubator
>> [1]. Please feel free to try this out and share your thoughts.​
>>
>> ​[1] ​
>> https://github.com/wso2-incubator/js-tooling-framework
>>
>> ​Thanks​
>>
>> --
>> *Imesh Gunaratne*
>> Software Architect
>> WSO2 Inc: http://wso2.com
>> T: +94 11 214 5345 M: +94 77 374 2057
>> W: https://medium.com/@imesh TW: @imesh
>> lean. enterprise. middleware
>>
>>
>> ___
>> Architecture mailing list
>> Architecture@wso2.org
>> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>>
>>
>
> ___
> Architecture mailing list
> Architecture@wso2.org
> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>
>


-- 
*Imesh Gunaratne*
Software Architect
WSO2 Inc: http://wso2.com
T: +94 11 214 5345 M: +94 77 374 2057
W: https://medium.com/@imesh TW: @imesh
lean. enterprise. middleware
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] A JavaScript based Tooling Platform for WSO2 Middleware

2016-07-24 Thread Imesh Gunaratne
Hi Manu,

On Fri, Jul 22, 2016 at 11:55 PM, Manuranga Perera  wrote:

> Hi Imesh,
>
> Will it run from a browser or will it be an app (embedded browser)?
>

It would need to run in a browser to be accessible in the cloud, while the
desktop version will run as an app. Those are the two high-level
requirements we have at the moment.

We are still at a very early stage of its design. At the moment we are
evaluating how we can integrate the web app with an Electron-based IDE,
starting with VSC.

Thanks

>
> On Fri, Jul 22, 2016 at 3:10 AM, Imesh Gunaratne  wrote:
>
>> Hi All,
>>
>> According to an internal discussion we had, we thought of introducing
>> $subject for improving the overall tooling experience of WSO2 middleware.
>> The main goal of this effort is to build a lightweight, cross-platform,
>> attractive, user-oriented tooling platform with reusable visualization
>> components.
>>
>> This has several sub goals:
>>
>>- Implementing reusable tooling components which can be used for
>>building an unified IDE:
>>   - This would be similar to WSO2 Carbon architecture and analytics
>>   platform where we implement reusable components and build products by
>>   aggregating them.
>>- Reusing visualization components in web based UIs
>>- Making the tooling platform available on the web/cloud
>>
>> To achieve this, we thought of implementing tooling components in HTML5,
>> CSS and JavaScript. This would allow us to make the tooling platform;
>> platform independent, reusable and web enabled.
>>
>>
>> *WSO2 JS Tooling Platform High Level Architecture*
>>
>> [image: Inline image 1]
>> On high level, the WSO2 JS tooling platform would have above components.
>> Out of these we would first start with the visualization component and try
>> to come up with a JS library which can provide features needed for
>> implementing product specific tooling components.
>> ​
>>
>> *WSO2 JS Tooling Platform Component Architecture*​
>> [image: Inline image 3]
>>
>> ​According to the above concept, we would use existing JS frameworks such
>> as D3.js, Backbone and Lodash for implementing the core tooling framework.
>> In this model, D3.js will be used for utilizing basic features needed for
>> drawing shapes, Backbone will be used for implementing JavaScript
>> extendibility features (only using Model and View from its MVC
>> architecture) and finally Lodash will be used for utilizing utility
>> functions.​
>>
>> On top of the core tooling framework a collection of tooling components
>> will be implemented according to WSO2 product requirements. Initially we
>> will be starting with the NextGen ESB (Integration Server) by implementing
>> a sequence diagramming and data mapper modules.
>> ​
>> The initial source code of this effort can be found in WSO2 Incubator
>> [1]. Please feel free to try this out and share your thoughts.​
>>
>> ​[1] ​
>> https://github.com/wso2-incubator/js-tooling-framework
>>
>> ​Thanks​
>>
>> --
>> *Imesh Gunaratne*
>> Software Architect
>> WSO2 Inc: http://wso2.com
>> T: +94 11 214 5345 M: +94 77 374 2057
>> W: https://medium.com/@imesh TW: @imesh
>> lean. enterprise. middleware
>>
>>
>> ___
>> Architecture mailing list
>> Architecture@wso2.org
>> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>>
>>
>
>
> --
> With regards,
> *Manu*ranga Perera.
>
> phone : 071 7 70 20 50
> mail : m...@wso2.com
>
> ___
> Architecture mailing list
> Architecture@wso2.org
> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>
>


-- 
*Imesh Gunaratne*
Software Architect
WSO2 Inc: http://wso2.com
T: +94 11 214 5345 M: +94 77 374 2057
W: https://medium.com/@imesh TW: @imesh
lean. enterprise. middleware
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] RDBMS based coordinator election algorithm for MB

2016-08-04 Thread Imesh Gunaratne
>>>> ...which have already proven results.
>>>>
>>>> [1] https://cwiki.apache.org/confluence/display/ZOOKEEPER/Zab1.0
>>>> [2] http://libraft.io/
>>>>
>>>> Thanks.
>>>>
>>>> On Thu, Jul 28, 2016 at 1:42 PM, Nandika Jayawardana 
>>>> wrote:
>>>>
>>>>> +1 to make it a common component . We have the clustering
>>>>> implementation for BPEL component based on hazelcast.  If the coordination
>>>>> is available at RDBMS level, we can remove hazelcast dependancy.
>>>>>
>>>>> Regards
>>>>> Nandika
>>>>>
>>>>> On Thu, Jul 28, 2016 at 1:28 PM, Hasitha Aravinda 
>>>>> wrote:
>>>>>
>>>>>> Can we make it a common component, which is not hard coupled with MB.
>>>>>> BPS has the same requirement.
>>>>>>
>>>>>> Thanks,
>>>>>> Hasitha.
>>>>>>
>>>>>> On Thu, Jul 28, 2016 at 9:47 AM, Asanka Abeyweera 
>>>>>> wrote:
>>>>>>
>>>>>>> Hi All,
>>>>>>>
>>>>>>> In MB, we have used a coordinator based approach to manage
>>>>>>> distributed messaging algorithm in the cluster. Currently Hazelcast is 
>>>>>>> used
>>>>>>> to elect the coordinator. But one issue we faced with Hazelcast is, 
>>>>>>> during
>>>>>>> a network segmentation (split brain), Hazelcast can elect two or more
>>>>>>> coordinators in the cluster. This affects the correctness of the
>>>>>>> distributed messaging algorithm since there are some tables in the 
>>>>>>> database
>>>>>>> that should only be edited by a single node (i.e. coordinator).
>>>>>>>
>>>>>>> As a solution to this problem we have implemented minimum node count
>>>>>>> based approach [1] to deactivate set of partitioned nodes to stop 
>>>>>>> multiple
>>>>>>> nodes becoming coordinators until the network segmentation issue is 
>>>>>>> fixed.
>>>>>>>
>>>>>>> As an alternative solution, we are thinking of implementing an RDBMS
>>>>>>> based approach to elect the coordinator node in the cluster. By doing 
>>>>>>> this
>>>>>>> we can make sure that even during a network segmentation only one node 
>>>>>>> will
>>>>>>> be elected as the coordinator node since the election is happening 
>>>>>>> through
>>>>>>> the database.
>>>>>>>
>>>>>>> The algorithm will use a polling mechanism to check the validity of
>>>>>>> the nodes. To make the election algorithm scalable, only the coordinator
>>>>>>> node will be checking status of all the nodes in the cluster and it will
>>>>>>> inform other nodes through database when a member is added/left. The 
>>>>>>> nodes
>>>>>>> will be only checking for the status of the coordinator node. When a 
>>>>>>> node
>>>>>>> detect that coordinator is invalid it will go for a election to elect a 
>>>>>>> new
>>>>>>> coordinator.
>>>>>>>
>>>>>>> We are currently working on a POC to test how this works with MB's
>>>>>>> slot based messaging algorithm.
>>>>>>>
>>>>>>> thoughts?
>>>>>>>
>>>>>>> [1] https://wso2.org/jira/browse/MB-1664
>>>>>>>
>>>>>>> --
>>>>>>> Asanka Abeyweera
>>>>>>> Senior Software Engineer
>>>>>>> WSO2 Inc.
>>>>>>>
>>>>>>> Phone: +94 712228648
>>>>>>> Blog: a5anka.github.io
>>>>>>>
>>>>>>> <https://wso2.com/signature>
>>>>>>>
>>>>>>> ___
>>>>>>> Architecture mailing list
>>>>>>> Architecture@wso2.org
>>>>>>> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>>>>>>>
>>>>>>>
>>>>>>
>>>>>>
>>>>>> --
>>>>>> --
>>>>>> Hasitha Aravinda,
>>>>>> Associate Technical Lead,
>>>>>> WSO2 Inc.
>>>>>> Email: hasi...@wso2.com
>>>>>> Mobile : +94 718 210 200
>>>>>>
>>>>>> ___
>>>>>> Architecture mailing list
>>>>>> Architecture@wso2.org
>>>>>> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> Nandika Jayawardana
>>>>> WSO2 Inc ; http://wso2.com
>>>>> lean.enterprise.middleware
>>>>>
>>>>> ___
>>>>> Architecture mailing list
>>>>> Architecture@wso2.org
>>>>> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>>>>>
>>>>>
>>>>
>>>>
>>>> --
>>>> Akila Ravihansa Perera
>>>> WSO2 Inc.;  http://wso2.com/
>>>>
>>>> Blog: http://ravihansa3000.blogspot.com
>>>>
>>>> ___
>>>> Architecture mailing list
>>>> Architecture@wso2.org
>>>> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>>>>
>>>>
>>>
>>>
>>> --
>>> Asanka Abeyweera
>>> Senior Software Engineer
>>> WSO2 Inc.
>>>
>>> Phone: +94 712228648
>>> Blog: a5anka.github.io
>>>
>>> <https://wso2.com/signature>
>>>
>>> ___
>>> Architecture mailing list
>>> Architecture@wso2.org
>>> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>>>
>>>
>>
>
>
> --
> Asanka Abeyweera
> Senior Software Engineer
> WSO2 Inc.
>
> Phone: +94 712228648
> Blog: a5anka.github.io
>
> <https://wso2.com/signature>
>
> ___
> Architecture mailing list
> Architecture@wso2.org
> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>
>


-- 
*Imesh Gunaratne*
Software Architect
WSO2 Inc: http://wso2.com
T: +94 11 214 5345 M: +94 77 374 2057
W: https://medium.com/@imesh TW: @imesh
lean. enterprise. middleware
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] RDBMS based coordinator election algorithm for MB

2016-08-04 Thread Imesh Gunaratne
On Fri, Aug 5, 2016 at 7:31 AM, Imesh Gunaratne  wrote:
>
>
> You can see here [3] how K8S has implemented leader election feature for
> the products deployed on top of that to utilize.
>

​Correction: Please refer [4].​


>
>
>> On Thu, Aug 4, 2016 at 7:27 PM, Asanka Abeyweera 
>> wrote:
>>
>>> Hi Imesh,
>>>
>>> We are not implementing this to overcome a limitation in the
>>> coordination algorithm available in the Hazlecast. We are implementing this
>>> since we need an RDBMS based coordination algorithm (not a network based
>>> algorithm).
>>>
>>
> ​Are you saying that database connections do not use the same network used
> by Hazelcast?
> ​
>
>
>> The reason is, a network based election algorithm will always elect
>>> multiple leaders when the network is partitioned. But if we use a RDBMS
>>> based algorithm this will not happen.
>>>
>>
> ​I do not think your argument is correct. If there is a problem with the
> network, i​t may apply to both Hazelcast based solution and database based
> solution.
>
> [4] http://blog.kubernetes.io/2016/01/simple-leader-election
> -with-Kubernetes.html
>
> ​Thanks​
>
>>
>>>
>>> On Thu, Aug 4, 2016 at 7:16 PM, Imesh Gunaratne  wrote:
>>>
>>>> Hi Asanka,
>>>>
>>>> Do we really need to implement a leader election algorithm on our own?
>>>> AFAIU this is a complex problem which has been already solved by several
>>>> algorithms [1]. IMO it would be better to go ahead with an existing well
>>>> established implementation on etcd [1] or Consul [2].
>>>>
>>>> Those provide HTTP APIs for clients to make leader election calls. [3]
>>>> is a client library written in Node.js for etcd based leader election.
>>>>
>>>> [1] https://www.projectcalico.org/using-etcd-for-elections
>>>> [2] https://www.consul.io/docs/guides/leader-election.html
>>>> [3] https://www.npmjs.com/package/etcd-leader
>>>>
>>>> Thanks
>>>>
>>>> On Wed, Aug 3, 2016 at 5:12 PM, Asanka Abeyweera 
>>>> wrote:
>>>>
>>>>> Hi Maninda,
>>>>>
>>>>> Since we are using RDBMS to poll the node status, the cluster will not
>>>>> end up in situation 1,2 or 3. With this approach we consider a node
>>>>> unreachable when it cannot access the database. Therefore an unreachable
>>>>> node can never be the leader.
>>>>>
>>>>> As you have mentioned, we are currently using the RDBMS as an atomic
>>>>> global variable to create the coordinator entry.
>>>>>
>>>>> On Tue, Aug 2, 2016 at 5:22 PM, Maninda Edirisooriya >>>> > wrote:
>>>>>
>>>>>> Hi Asanka,
>>>>>>
>>>>>> As I understand the accuracy of electing the leader correctly is
>>>>>> dependent on the election mechanism with RDBMS because there can be edge
>>>>>> cases like,
>>>>>>
>>>>>> 1. Unreachable leader activates during the election process: Then who
>>>>>> becomes the leader?
>>>>>> 2. The elected leader becomes unreachable before the election is
>>>>>> completed: Then will there be a situation where there is no leader?
>>>>>> 3. A leader and a set of nodes are disconnected from the other part
>>>>>> of the cluster and while the leader is trying to remove unreachable 
>>>>>> members
>>>>>> other part is calling an election to make a leader: Who will win?
>>>>>>
>>>>>> RDBMS based election algorithm should handle such cases without
>>>>>> bringing the cluster to an inconsistent state or dead lock in all
>>>>>> concurrent cases. If all these kind of cases cannot be handled isn't it
>>>>>> better to keep the current hazelcast clustering and use the RDBMS only to
>>>>>> handle the split brain scenario? In other words when a new hazelcast 
>>>>>> leader
>>>>>> is elected it should be updated in the RDBMS. If another split party has
>>>>>> already elected a leader, the node who is going to write it to RDBMS 
>>>>>> should
>>>>>> avoid updating it. Simply, the RDBMS can be used as an atomic global
>>>>>> variable to keep the leader name by modifying the hazelcast clustering.

Re: [Architecture] RDBMS based coordinator election algorithm for MB

2016-08-04 Thread Imesh Gunaratne
Hi Asitha/Asanka,

I think it is clear that the issue we have here is mostly related to
Hazelcast.

Now, to solve that problem, I think it would be better to go ahead with a
generic leader election system for the entire platform rather than writing
one specific to MB. This requirement exists in several other products, and
for some of them a database-driven approach might not work.

Therefore it would be better if we could decouple this from the product and
use an interface to talk to a leader election module. This module can
either be implemented as a separate component or utilize an existing system
such as etcd.

To start with, I think it would be better to evaluate what etcd and Consul
have to offer and check whether they fit our requirements.
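
To make the decoupling idea a bit more concrete, below is a rough sketch of
such an interface together with a naive etcd-backed attempt (using the etcd
v2 keys API with prevExist and a TTL). Everything here is hypothetical and
only meant to show the shape of the module, not a finished implementation:

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

/** Hypothetical SPI products would code against, regardless of the backing store. */
interface LeaderElector {
    /** Attempts to acquire leadership; returns true if this node became the leader. */
    boolean tryAcquireLeadership(String nodeId);
}

/**
 * Naive etcd v2 sketch: create the election key only if it does not already
 * exist (prevExist=false) with a TTL, so leadership expires unless the leader
 * keeps refreshing it. A real implementation would also refresh the TTL with a
 * compare-and-swap on prevValue and watch the key for changes. Consul sessions
 * and locks could back the same interface.
 */
class EtcdLeaderElector implements LeaderElector {

    private static final String ELECTION_KEY =
            "http://127.0.0.1:2379/v2/keys/wso2/mb/coordinator?prevExist=false";

    @Override
    public boolean tryAcquireLeadership(String nodeId) {
        try {
            HttpURLConnection conn =
                    (HttpURLConnection) new URL(ELECTION_KEY).openConnection();
            conn.setRequestMethod("PUT");
            conn.setDoOutput(true);
            conn.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");
            byte[] body = ("value=" + nodeId + "&ttl=30").getBytes(StandardCharsets.UTF_8);
            try (OutputStream out = conn.getOutputStream()) {
                out.write(body);
            }
            // 2xx -> key created, we are the leader; 412 -> someone else holds it.
            return conn.getResponseCode() / 100 == 2;
        } catch (Exception e) {
            return false; // cannot reach etcd, so never assume leadership
        }
    }
}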

Thanks

On Fri, Aug 5, 2016 at 10:12 AM, Asanka Abeyweera  wrote:

> Hi Imesh,
>
> On Fri, Aug 5, 2016 at 7:33 AM, Imesh Gunaratne  wrote:
>
>>
>>
>> On Fri, Aug 5, 2016 at 7:31 AM, Imesh Gunaratne  wrote:
>>>
>>>
>>> You can see here [3] how K8S has implemented leader election feature for
>>> the products deployed on top of that to utilize.
>>>
>>
>> ​Correction: Please refer [4].​
>>
>>
>>>
>>>
>>>> On Thu, Aug 4, 2016 at 7:27 PM, Asanka Abeyweera 
>>>> wrote:
>>>>
>>>>> Hi Imesh,
>>>>>
>>>>> We are not implementing this to overcome a limitation in the
>>>>> coordination algorithm available in the Hazlecast. We are implementing 
>>>>> this
>>>>> since we need an RDBMS based coordination algorithm (not a network based
>>>>> algorithm).
>>>>>
>>>>
>>> ​Are you saying that database connections do not use the same network
>>> used by Hazelcast?
>>>
>>
> Yes, This is most problematic when two interfaces are used for Hazelcast
> communication and RDBMS communication. Additionally there is an edge case
> even when a single interface is used for both Hazelcast and RDBMS
> communication. When a cluster merge after a network segmentation, there can
> be a delay in Hazelcast detecting the cluster merge. If a database is
> accessed by multiple coordinators during this time, there can be message
> delivery issues like message duplication. Therefore we cannot ignore this
> issue even when the same network is used for Hazelcast and database
> connections.
> ​
>
>
>> The reason is, a network based election algorithm will always elect
>>>>> multiple leaders when the network is partitioned. But if we use a RDBMS
>>>>> based algorithm this will not happen.
>>>>>
>>>>
>>> ​I do not think your argument is correct. If there is a problem with the
>>> network, i​t may apply to both Hazelcast based solution and database based
>>> solution.
>>>
>>> [4] http://blog.kubernetes.io/2016/01/simple-leader-election
>>> -with-Kubernetes.html
>>>
>>> ​Thanks​
>>>
>>>>
>>>>>
>>>>> On Thu, Aug 4, 2016 at 7:16 PM, Imesh Gunaratne 
>>>>> wrote:
>>>>>
>>>>>> Hi Asanka,
>>>>>>
>>>>>> Do we really need to implement a leader election algorithm on our
>>>>>> own? AFAIU this is a complex problem which has been already solved by
>>>>>> several algorithms [1]. IMO it would be better to go ahead with an 
>>>>>> existing
>>>>>> well established implementation on etcd [1] or Consul [2].
>>>>>>
>>>>>> Those provide HTTP APIs for clients to make leader election calls.
>>>>>> [3] is a client library written in Node.js for etcd based leader 
>>>>>> election.
>>>>>>
>>>>>> [1] https://www.projectcalico.org/using-etcd-for-elections
>>>>>> [2] https://www.consul.io/docs/guides/leader-election.html
>>>>>> [3] https://www.npmjs.com/package/etcd-leader
>>>>>>
>>>>>> Thanks
>>>>>>
>>>>>> On Wed, Aug 3, 2016 at 5:12 PM, Asanka Abeyweera 
>>>>>> wrote:
>>>>>>
>>>>>>> Hi Maninda,
>>>>>>>
>>>>>>> Since we are using RDBMS to poll the node status, the cluster will
>>>>>>> not end up in situation 1,2 or 3. With this approach we consider a node
>>>>>>> unreachable when it cannot access the database. Therefore an unreachable
>>>>>>> node can never be the leader.

Re: [Architecture] Redesigning Deployment Artifacts Repos Structure - Puppet, Dockerfiles, Mesos, K8S

2016-08-05 Thread Imesh Gunaratne
Hi Anuruddha,

On Fri, Aug 5, 2016 at 7:30 PM, Anuruddha Liyanarachchi  wrote:
>
>
> - Submodule will always be cloned into an uneditable directory :
> By default, this directory name will be same as the repo name of submodule
> [3]. This can be changed by specifying a relative path, but the submodule
> will always be cloned into a separate directory.
>
> This directory cannot be modified and partial cloning is also not possible
> [4].
>

​Yes, that's by design.​


>
> In order for puppet apply to work we need to add wso2esb modules folder
> inside  /modules folder. Similarly, hieradata
> should be merged.
>

​Hieradata can be kept inside the puppet- repository for the time
being. Will move them to the paas-artifacts repositories later on once we
decouple hieradata from the puppet module.

>
> AFAIU it is not straight forward to create correct puppet structure due to
> these limitations in sub-modules.
> Appreciate your thoughts on this.
>

Please see [5] and [6] below for how I created the puppet-base and
puppet-esb repositories without any problems:

[5] https://github.com/imesh/puppet-base
​[6] https://github.com/imesh/puppet-esb

Thanks

>
> On Fri, Aug 5, 2016 at 1:25 PM, Akila Ravihansa Perera  > wrote:
>
>> Hi,
>>
>> We have come across several issues in current repository structure and
>> release model of Puppet, Dockerfiles, Mesos artifacts, Kubernetes artifacts
>> etc. (deployment artifacts). To name a few;
>>  - Publishing Puppet modules to PuppetForge is problematic
>>  - Releasing planning is bit complicated since all the Puppet modules
>> should be released
>>  - Not possible to release a specific Puppet module for a product since
>> all the modules resides in a single repo
>>
>> To overcome these issues we can split each Puppet module, Dockerfile,
>> Mesos artifacts, K8S artifacts into its own repo. For eg:
>>
>>
>>- wso2/puppet-
>>- wso2/docker-
>>- wso2/aws-artifacts-
>>- wso2/mesos-artifacts-
>>- wso2/kubernetes-artifacts-
>>
>>
>> Now there are common Puppet resources being used by product modules, and
>> these can be hosted in wso2/puppet-common repo. Similarly we can host
>> common artifacts in wso2/mesos-artifacts-common,
>> wso2/kubernetes-artifacts-common
>>
>> Also we can host Hieradata in the same repo as platform specific repo.
>> For eg:
>>
>>
>>- mesos-artifacts-/hieradata/
>>- kubernetes-artifacts-/hieradata/
>>
>>
>> Common Hiera data for each platform can be hosted in wso2/
>> -artifacts-common repo. We can ship default Hiera data with a
>> Vagrantfile in the wso2- repo.
>>
>> Using this approach it would be much easier to do frequent releases of
>> Puppet modules, especially when a new product is released. By having common
>> repos (puppet-common, docker-common etc.) as Git sub-modules of product
>> specific repos (puppet-wso2esb, docker-wso2esb), transition will be
>> seamless for the users and no additional maintenance cost to developers.
>>
>> Another concern is release versioning for Puppet modules. As per some
>> offline discussions, having product version number + puppet version suffix
>> seems to be appropriate since it would be easier for users find the
>> compatible and latest Puppet module for a specific product.
>>
>> *Another option* is to make Puppet module for specific product
>> compatible across all the versions released under the same platform
>> version. For eg;
>> wso2esb-4.9.0 and wso2esb-5.0.0 which is released under platform version
>> 4.4.0 should be supported by puppet-wso2esb 4.4.0 family. Older versions of
>> puppet-wso2esb may not support products released after, but it should be
>> backward compatible with all the products released under the same platform
>> version.
>>
>> Please note that repo names are not finalized yet and are still open to
>> suggestions. Please do share your thoughts.
>>
>> Thanks.
>>
>> --
>> Akila Ravihansa Perera
>> WSO2 Inc.;  http://wso2.com/
>>
>> Blog: http://ravihansa3000.blogspot.com
>>
>
>
>
> --
> *Thanks and Regards,*
> Anuruddha Lanka Liyanarachchi
> Software Engineer - WSO2
> Mobile : +94 (0) 712762611
> Tel  : +94 112 145 345
> a nurudd...@wso2.com
>



-- 
*Imesh Gunaratne*
Software Architect
WSO2 Inc: http://wso2.com
T: +94 11 214 5345 M: +94 77 374 2057
W: https://medium.com/@imesh TW: @imesh
lean. enterprise. middleware
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] Redesigning Deployment Artifacts Repos Structure - Puppet, Dockerfiles, Mesos, K8S

2016-08-08 Thread Imesh Gunaratne
Hi Pubudu,

Wouldn't it be more meaningful to use the "dockerfile-" prefix instead of "docker-"?

Thanks

On Mon, Aug 8, 2016 at 12:02 PM, Pubudu Gunatilaka  wrote:

> Hi,
>
> Following are the proposed repo names for the existing puppet modules.
>
> Product            Puppet Modules Repo   Dockerfiles Repo   Kubernetes Artifacts Repo      Mesos Artifacts Repo
> Common Artifacts   puppet-base           docker-common      kubernetes-artifacts-common    mesos-artifacts-common
> WSO2 APIM          puppet-apim           docker-apim        kubernetes-artifacts-apim      mesos-artifacts-apim
> WSO2 AS            puppet-as             docker-as          kubernetes-artifacts-as        mesos-artifacts-as
> WSO2 BPS           puppet-bps            docker-bps         kubernetes-artifacts-bps       mesos-artifacts-bps
> WSO2 BRS           puppet-brs            docker-brs         kubernetes-artifacts-brs       mesos-artifacts-brs
> WSO2 CEP           puppet-cep            docker-cep         kubernetes-artifacts-cep       mesos-artifacts-cep
> WSO2 DAS           puppet-das            docker-das         kubernetes-artifacts-das       mesos-artifacts-das
> WSO2 DSS           puppet-dss            docker-dss         kubernetes-artifacts-dss       mesos-artifacts-dss
> WSO2 ES            puppet-es             docker-es          kubernetes-artifacts-es        mesos-artifacts-es
> WSO2 ESB           puppet-esb            docker-esb         kubernetes-artifacts-esb       mesos-artifacts-esb
> WSO2 GREG          puppet-greg           docker-greg        kubernetes-artifacts-greg      mesos-artifacts-greg
> WSO2 IS            puppet-is             docker-is          kubernetes-artifacts-is        mesos-artifacts-is
> WSO2 MB            puppet-mb             docker-mb          kubernetes-artifacts-mb        mesos-artifacts-mb
>
> We will include wso2greg and wso2greg_pubstore puppet modules in greg
> puppet repo. Same is applied for IS as a key manager. This is until we
> introduce patterns concept for puppet modules.
>
> Thank you!
>
> On Mon, Aug 8, 2016 at 11:54 AM, Anuruddha Liyanarachchi <
> anurudd...@wso2.com> wrote:
>
>> Hi Imesh,
>>
>> Hieradata can be kept inside the puppet- repository for the time
>>> being. Will move them to the paas-artifacts repositories later on once we
>>> decouple hieradata from the puppet module.
>>
>>
>> +1 for this until we decouple hieradata.
>>
>>
>>
>> On Sat, Aug 6, 2016 at 10:05 AM, Imesh Gunaratne  wrote:
>>
>>> Hi Anuruddha,
>>>
>>> On Fri, Aug 5, 2016 at 7:30 PM, Anuruddha Liyanarachchi <
>>> anurudd...@wso2.com> wrote:
>>>>
>>>>
>>>> - Submodule will always be cloned into an uneditable directory :
>>>> By default, this directory name will be same as the repo name of
>>>> submodule [3]. This can be changed by specifying a relative path, but the
>>>> submodule will always be cloned into a separate directory.
>>>>
>>>> This directory cannot be modified and partial cloning is also not
>>>> possible [4].
>>>>
>>>
>>> ​Yes, that's by design.​
>>>
>>>
>>>>
>>>> In order for puppet apply to work we need to add wso2esb modules folder
>>>> inside  /moduels folder. Similarly, hieradata
>>>> should be merged.
>>>>
>>>
>>> ​Hieradata can be kept inside the puppet- repository for the
>>> time being. Will move them to the paas-artifacts repositories later on once
>>> we decouple hieradata from the puppet module.
>>>
>>>>
>>>> AFAIU it is not straight forward to create correct puppet structure due
>>>> to these limitations in sub-modules.
>>>> Appreciate your thoughts on this.
>>>>
>>>
>>> ​Please see [5] to see how I created puppet-base and puppet-esb
>>> repositories without any problem:
>>>
>>> [5] https://github.com/imesh/puppet-base
>>> ​[6] https://github.com/imesh/puppet-esb
>>>
>>> Thanks
>>>
>>>>
>>>> On Fri, Aug 5, 2016 at 1:25 PM, Akila Ravihansa Perera <
>>>> raviha...@wso2.com> wrote:
>>>>
>>>>> Hi,
>>>>>
>>>>> We have come across several issues in current repository structure and
>>>>> release model of Puppet, Dockerfiles, Mesos artifacts, Kubernetes 
>>>>> artifacts
>>>>> etc. (deployment artifacts). To name a few;
>>>>>  - Publishing Puppet modules to PuppetForge is problematic
>>>>>  - Releasing planning is bit complicated since all the Puppet modules
>>>>> should be released
>>>>>  - Not possible to release a specific Puppet module for a product
>>>>> since all the modules resides in a single repo
>>>>>
>>>>> To overcome these issues we can split each Puppet module, Dockerfile,
>>>>> Mesos artifacts, K8S arti

Re: [Architecture] Siddhi Visual Editor (Updated)

2016-08-08 Thread Imesh Gunaratne
On Mon, Aug 8, 2016 at 12:34 AM, Sriskandarajah Suhothayan 
wrote:

>
> We have not decided, it will be part of analysis tooling and it should be
> used within notebooks for realtime analytics.
>>
>>
>> We have not decided where the notebooks will be. Whether it will be a
>> separate tooling app or to use the cloud tooling framework that we are
>> evaluating.
>>
>> Please give your suggestions.
>>
At the moment we are at a very early stage of the next-generation tooling
platform. Therefore it might not be realistic to move the Siddhi visual
editor onto it at this point.

Technically I do not see any problems with implementing this in jsPlumb and
running it on the next-gen tooling platform as a separate module. To do that
we would need to adhere to web and JavaScript standards during the
implementation. Once the tooling platform's initial revision is ready, we
will do the move. WDYT?

Thanks


> Regards
>> Suho
>>
>> On Sunday, August 7, 2016, Sanjiva Weerawarana  wrote:
>>
>>> Will this be a plugin for the new tooling platform?
>>>
>>> On Aug 4, 2016 11:33 AM, "Nayantara Jeyaraj"  wrote:
>>>
>>>> Hi all,
>>>> I'm currently working on developing the Siddhi visual editor and it
>>>> has been modified as required from the previous post. I've used the
>>>> jsPlumb library and the interact.js to implement the functionalities
>>>> specified. I've attached the new specs and functionality herewith.
>>>> Regards
>>>> Tara
>>>>
>>>> ___
>>>> Architecture mailing list
>>>> Architecture@wso2.org
>>>> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>>>>
>>>>
>>
>> --
>>
>> *S. Suhothayan*
>> Associate Director / Architect & Team Lead of WSO2 Complex Event
>> Processor
>> *WSO2 Inc. *http://wso2.com
>> * <http://wso2.com/>*
>> lean . enterprise . middleware
>>
>>
>> *cell: (+94) 779 756 757 <%28%2B94%29%20779%20756%20757> | blog:
>> http://suhothayan.blogspot.com/ <http://suhothayan.blogspot.com/>twitter:
>> http://twitter.com/suhothayan <http://twitter.com/suhothayan> | linked-in:
>> http://lk.linkedin.com/in/suhothayan <http://lk.linkedin.com/in/suhothayan>*
>>
>>
>
> --
>
> *S. Suhothayan*
> Associate Director / Architect & Team Lead of WSO2 Complex Event Processor
> *WSO2 Inc. *http://wso2.com
> * <http://wso2.com/>*
> lean . enterprise . middleware
>
>
> *cell: (+94) 779 756 757 <%28%2B94%29%20779%20756%20757> | blog:
> http://suhothayan.blogspot.com/ <http://suhothayan.blogspot.com/>twitter:
> http://twitter.com/suhothayan <http://twitter.com/suhothayan> | linked-in:
> http://lk.linkedin.com/in/suhothayan <http://lk.linkedin.com/in/suhothayan>*
>
>
> ___
> Architecture mailing list
> Architecture@wso2.org
> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>
>


-- 
*Imesh Gunaratne*
Software Architect
WSO2 Inc: http://wso2.com
T: +94 11 214 5345 M: +94 77 374 2057
W: https://medium.com/@imesh TW: @imesh
lean. enterprise. middleware
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] Redesigning Deployment Artifacts Repos Structure - Puppet, Dockerfiles, Mesos, K8S

2016-08-08 Thread Imesh Gunaratne
On Tue, Aug 9, 2016 at 12:35 AM, Chamila De Alwis  wrote:

> "docker-" would also imply that other potential artifacts such as swarm,
> compose scripts are also there. If that's the case "docker-" makes sense.
> Otherwise "dockerfile-" is more precise IMO, since the util scripts are
> about the dockerfile itself.
>

Thanks Chamila! Shall we go ahead with the dockerfile- prefix then?

Thanks​


>
>
> Regards,
> Chamila de Alwis
> Committer and PMC Member - Apache Stratos
> Senior Software Engineer | WSO2
> Blog: https://medium.com/@chamilad
>
>
>
> On Mon, Aug 8, 2016 at 3:03 AM, Pubudu Gunatilaka 
> wrote:
>
>> Hi Imesh,
>>
>> In those docker repositories we have the dockerfile and util scripts. IMO
>> it would be more meaningful to name the repo docker-<product>.
>>
>> Thank you!
>>
>> On Mon, Aug 8, 2016 at 12:46 PM, Imesh Gunaratne  wrote:
>>
>>> Hi Pubudu,
>>>
>>> Wouldn't it be more meaningful to call it "dockerfile-<product>" instead of
>>> "docker-<product>"?
>>>
>>> Thanks
>>>
>>> On Mon, Aug 8, 2016 at 12:02 PM, Pubudu Gunatilaka 
>>> wrote:
>>>
>>>> Hi,
>>>>
>>>> Following are the proposed repo names for the existing puppet modules.
>>>>
>>>> Product | Puppet Modules Repo | Dockerfiles Repo | Kubernetes Artifacts Repo | Mesos Artifacts Repo
>>>> Common Artifacts | puppet-base | docker-common | kubernetes-artifacts-common | mesos-artifacts-common
>>>> WSO2 APIM | puppet-apim | docker-apim | kubernetes-artifacts-apim | mesos-artifacts-apim
>>>> WSO2 AS | puppet-as | docker-as | kubernetes-artifacts-as | mesos-artifacts-as
>>>> WSO2 BPS | puppet-bps | docker-bps | kubernetes-artifacts-bps | mesos-artifacts-bps
>>>> WSO2 BRS | puppet-brs | docker-brs | kubernetes-artifacts-brs | mesos-artifacts-brs
>>>> WSO2 CEP | puppet-cep | docker-cep | kubernetes-artifacts-cep | mesos-artifacts-cep
>>>> WSO2 DAS | puppet-das | docker-das | kubernetes-artifacts-das | mesos-artifacts-das
>>>> WSO2 DSS | puppet-dss | docker-dss | kubernetes-artifacts-dss | mesos-artifacts-dss
>>>> WSO2 ES | puppet-es | docker-es | kubernetes-artifacts-es | mesos-artifacts-es
>>>> WSO2 ESB | puppet-esb | docker-esb | kubernetes-artifacts-esb | mesos-artifacts-esb
>>>> WSO2 GREG | puppet-greg | docker-greg | kubernetes-artifacts-greg | mesos-artifacts-greg
>>>> WSO2 IS | puppet-is | docker-is | kubernetes-artifacts-is | mesos-artifacts-is
>>>> WSO2 MB | puppet-mb | docker-mb | kubernetes-artifacts-mb | mesos-artifacts-mb
>>>>
>>>>
>>>> We will include the wso2greg and wso2greg_pubstore puppet modules in the greg
>>>> puppet repo. The same applies to IS as a key manager. This is until we
>>>> introduce the patterns concept for puppet modules.
>>>>
>>>> Thank you!
>>>>
>>>> On Mon, Aug 8, 2016 at 11:54 AM, Anuruddha Liyanarachchi <
>>>> anurudd...@wso2.com> wrote:
>>>>
>>>>> Hi Imesh,
>>>>>
>>>>> Hieradata can be kept inside the puppet- repository for the
>>>>>> time being. Will move them to the paas-artifacts repositories later on 
>>>>>> once
>>>>>> we decouple hieradata from the puppet module.
>>>>>
>>>>>
>>>>> +1 for this until we decouple hieradata.
>>>>>
>>>>>
>>>>>
>>>>> On Sat, Aug 6, 2016 at 10:05 AM, Imesh Gunaratne 
>>>>> wrote:
>>>>>
>>>>>> Hi Anuruddha,
>>>>>>
>>>>>> On Fri, Aug 5, 2016 at 7:30 PM, Anuruddha Liyanarachchi <
>>>>>> anurudd...@wso2.com> wrote:
>>>>>>>
>>>>>>>
>>>>>>> - Submodule will always be cloned into an uneditable directory :
>>>>>>> By default, this directory name will be same as the repo name of
>>>>>>> submodule [3]. This can be changed by specifying a relative path, but 
>>>>>>> the
>>>>>>> submodule will always be cloned into a separate directory.
>>>>>>>
>>>>>>> This directory cannot be modified and partial cloning is also not
>>>>>>> possible [4].
>>>>>>>
>>>>>>
>>>>>> ​Yes, that's by design.​
>>>>>

Re: [Architecture] APIM Analytics - Performance Results

2016-08-16 Thread Imesh Gunaratne
Hi Supun,

On Tue, Aug 16, 2016 at 5:29 PM, Supun Sethunga  wrote:

> Hi all,
>
> We carried out some performance tests for APIM Analytics server in a
> Minimum HA setup.
>

It would be great if you could share the resource configurations of the
machines and the JVM parameters of the server instances used. Have we automated
the deployment of this performance test?

Thanks
​

-- 
*Imesh Gunaratne*
Software Architect
WSO2 Inc: http://wso2.com
T: +94 11 214 5345 M: +94 77 374 2057
W: https://medium.com/@imesh TW: @imesh
lean. enterprise. middleware
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] Visual Editor for NEL : Current work and path forward

2016-08-16 Thread Imesh Gunaratne
, we may
>>>> need to revisit some of these when going forward with the implementation.
>>>>
>>>> *NEL serialization/deserialization module*
>>>>
>>>> Using the ANTLR grammar file
>>>> <https://github.com/wso2/carbon-gateway-framework/blob/master/gateway-core/components/org.wso2.carbon.gateway.core/src/main/antlr4/org/wso2/carbon/gateway/core/config/dsl/external/wuml/generated/WUML.g4>
>>>>  [6]
>>>> for NEL, a Javascript parser is generated [7] to parse NEL. Using this
>>>> parser, it is possible to build an AST for NEL with a Javascript data model
>>>> defined similar to the run-time data model for NEL
>>>> <https://github.com/wso2/carbon-gateway-framework/tree/master/gateway-core/components/org.wso2.carbon.gateway.core/src/main/java/org/wso2/carbon/gateway/core/flow>
>>>> [8] defined in Java.
>>>> Diagramming module and this module will work together to generate NEL
>>>> from diagram and vise versa.
>>>>
>>>> *Remaining things to initiate*
>>>>
>>>> Testing, package management, build, etc. are a few aspects which are
>>>> next in line to be started.
>>>>
>>>> To get an idea about overall effort on next gen tooling, please refer
>>>> to [1].
>>>>
>>>>
>>>> [1] [Architecture] A JavaScript based Tooling Platform for WSO2
>>>> Middleware
>>>> [2] https://github.com/wso2-incubator/js-tooling-framework/t
>>>> ree/master/sequence-editor
>>>> [3] https://wso2-incubator.github.io/js-tooling-framework/se
>>>> quence-editor/
>>>> [4] http://backbonejs.org/#Collection
>>>> [5] https://github.com/wso2-incubator/js-tooling-framework/b
>>>> lob/gh-pages/sequence-editor/js/app.js
>>>> [6] https://github.com/wso2/carbon-gateway-framework/blob/ma
>>>> ster/gateway-core/components/org.wso2.carbon.gateway.core/sr
>>>> c/main/antlr4/org/wso2/carbon/gateway/core/config/dsl/extern
>>>> al/wuml/generated/WUML.g4
>>>> [7] https://github.com/wso2-incubator/js-tooling-framework/t
>>>> ree/master/sequence-editor/lib/nel
>>>> [8] https://github.com/wso2/carbon-gateway-framework/tree/ma
>>>> ster/gateway-core/components/org.wso2.carbon.gateway.core/sr
>>>> c/main/java/org/wso2/carbon/gateway/core/flow
>>>>
>>>>
>>>>
>>>> Thanks,
>>>>
>>>> *Kavith Lokuhewage*
>>>> Senior Software Engineer
>>>> WSO2 Inc. - http://wso2.com
>>>> lean . enterprise . middleware
>>>> Mobile - +94779145123
>>>> Linkedin <http://www.linkedin.com/pub/kavith-lokuhewage/49/473/419>
>>>> Twitter <https://twitter.com/KavithThiranga>
>>>>
>>>
>>>
>>> ___
>>> Architecture mailing list
>>> Architecture@wso2.org
>>> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>>>
>>>
>>
>>
>> --
>> Viraj Rajaguru
>> Associate Technical Lead
>> WSO2 Inc. : http://wso2.com
>>
>> Mobile: +94 77 3683068
>>
>>
>>
>>
>> ___
>> Architecture mailing list
>> Architecture@wso2.org
>> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>>
>>
>
>
> --
> *Susinda Perera*
> Software Engineer
> B.Sc.(Eng), M.Sc(Computer Science), AMIE(SL)
> Mobile:(+94)716049075
> Blog: susinda.blogspot.com
> WSO2 Inc. http://wso2.com/
> Tel : 94 11 214 5345 Fax :94 11 2145300
>
>
> ___
> Architecture mailing list
> Architecture@wso2.org
> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>
>


-- 
*Imesh Gunaratne*
Software Architect
WSO2 Inc: http://wso2.com
T: +94 11 214 5345 M: +94 77 374 2057
W: https://medium.com/@imesh TW: @imesh
lean. enterprise. middleware
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] APIM Analytics - Performance Results

2016-08-20 Thread Imesh Gunaratne
Regards,
>>>> Supun
>>>>
>>>> --
>>>> *Supun Sethunga*
>>>> Senior Software Engineer
>>>> WSO2, Inc.
>>>> http://wso2.com/
>>>> lean | enterprise | middleware
>>>> Mobile : +94 716546324
>>>> Blog: http://supunsetunga.blogspot.com
>>>>
>>>
>>>
>>>
>>> --
>>> Harsha Kumara
>>> Software Engineer, WSO2 Inc.
>>> Mobile: +94775505618
>>> Blog:harshcreationz.blogspot.com
>>>
>>> ___
>>> Architecture mailing list
>>> Architecture@wso2.org
>>> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>>>
>>>
>>
>>
>> --
>>
>> *Sanjeewa Malalgoda*
>> WSO2 Inc.
>> Mobile : +94713068779
>>
>> <http://sanjeewamalalgoda.blogspot.com/>blog
>> :http://sanjeewamalalgoda.blogspot.com/
>> <http://sanjeewamalalgoda.blogspot.com/>
>>
>>
>>
>
>
> --
> *Supun Sethunga*
> Senior Software Engineer
> WSO2, Inc.
> http://wso2.com/
> lean | enterprise | middleware
> Mobile : +94 716546324
> Blog: http://supunsetunga.blogspot.com
>



-- 
*Imesh Gunaratne*
Software Architect
WSO2 Inc: http://wso2.com
T: +94 11 214 5345 M: +94 77 374 2057
W: https://medium.com/@imesh TW: @imesh
lean. enterprise. middleware
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] [Dev] My GSoC experience - GSoC students please read!

2016-08-22 Thread Imesh Gunaratne
es, Bash scripting, Go language, WSO2
> codebase and many other things. It is evident from the code that I have
> written so far [1]. It is very easy to judge someone without being in their
> shoes, and I feel like my mentors have been pushing work and standards
> without caring about my experience level, which in my opinion is completely
> unfair.
>
> I am sure there are so many other great mentors in the organization and my
> experience might be just one off. However, if any other student has felt
> similar situations, it should be investigated.
>
> Thanks
> Abhishek
>
> [1]. https://github.com/abhishek0198/wso2dockerfiles-test-fr
> amework/commits/master
> [2]. https://github.com/abhishek0198/wso2dockerfiles-test-
> framework/issues/22
>
> ___
> Dev mailing list
> d...@wso2.org
> http://wso2.org/cgi-bin/mailman/listinfo/dev
>
>


-- 
*Imesh Gunaratne*
Software Architect
WSO2 Inc: http://wso2.com
T: +94 11 214 5345 M: +94 77 374 2057
W: https://medium.com/@imesh TW: @imesh
lean. enterprise. middleware
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] [AppM] Best practice to re-use domain objects in the gateway handlers

2016-08-28 Thread Imesh Gunaratne
On Tuesday, August 23, 2016, Chathura Ekanayake  wrote:

> Hi Rushmin,
>
> Can't we maintain a cache of domain objects, may be in a data holder
> instance?
>

Yes, using a cache with a data holder instance would be appropriate for
this problem. Do we know how frequently these data sets get updated in the
database?

Thanks


> BTW, what are the attributes of an example domain object (e.g. webapp
> object)?
>
> - Chathura
>
> On Mon, Aug 22, 2016 at 11:54 AM, Rushmin Fernando  > wrote:
>
>> Hi Isuru, any comment on this ? :-)
>>
>> On Thu, Aug 18, 2016 at 10:54 AM, Rushmin Fernando > > wrote:
>>
>>> It depends on the applications users invoke, Isuru.
>>>
>>> If users invoke all the apps, then we end up fetching all the apps through the
>>> handler instances of those apps (synapse APIs). (A handler instance only
>>> fetches and stores one app instance.)
>>>
>>> In the current implementation the fetch happens on demand.
>>>
>>>
>>> On Thu, Aug 18, 2016 at 9:18 AM, Isuru Udana >> > wrote:
>>>
>>>> Hi Rushmin,
>>>>
>>>> Do we need to fetch domain objects from the database for all the
>>>> applications or is it only for a set of applications ?
>>>>
>>>>
>>>>
>>>> On Tue, Aug 16, 2016 at 1:35 PM, Rushmin Fernando >>> > wrote:
>>>>
>>>>>
>>>>> In App Manager we use the carbon mediation engine as the gateway. Thus the
>>>>> business logic is implemented in a few handlers.
>>>>>
>>>>> In order to code the business logic, we need to fetch domain objects from
>>>>> the database via a service. The primary domain object is the "webapp" object.
>>>>>
>>>>> (Please see the attached image)
>>>>>
>>>>> In the current implementation, we have made the webapp object an
>>>>> instance variable of the handlers. Since the handlers are instantiated per
>>>>> API (which represents an app in our case), this works fine.
>>>>>
>>>>> Upon the first call to the app, we fetch the relevant webapp object
>>>>> and store it as the aforementioned instance variable.
>>>>>
>>>>> Since there is more than one handler which needs to deal with the
>>>>> webapp object, we need to do the above step in each of those handlers.
>>>>>
>>>>> We are thinking of having an init handler which does the service call,
>>>>> fetches the domain object and shares it with the other handlers in the
>>>>> chain. The purpose of evaluating this is to increase the code
>>>>> maintainability and improve the performance to some extent.
>>>>>
>>>>> In order to do that we need to have the domain object in the message
>>>>> context. But then it would again consume a lot of memory when there is a high load.
>>>>>
>>>>> One solution to the above issue is to create a new thin domain object with
>>>>> only the necessary fields for the gateway.
>>>>>
>>>>> Thoughts please ?
>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> *Best Regards*
>>>>>
>>>>> *Rushmin Fernando*
>>>>> *Technical Lead*
>>>>>
>>>>> WSO2 Inc. <http://wso2.com/> - Lean . Enterprise . Middleware
>>>>>
>>>>> mobile : +94772891266
>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>>> --
>>>> *Isuru Udana*
>>>> Technical Lead
>>>> WSO2 Inc.; http://wso2.com
>>>> email: isu...@wso2.com
>>>>  cell: +94 77 3791887
>>>> blog: http://mytecheye.blogspot.com/
>>>>
>>>
>>>
>>>
>>> --
>>> *Best Regards*
>>>
>>> *Rushmin Fernando*
>>> *Technical Lead*
>>>
>>> WSO2 Inc. <http://wso2.com/> - Lean . Enterprise . Middleware
>>>
>>> mobile : +94772891266
>>>
>>>
>>>
>>
>>
>> --
>> *Best Regards*
>>
>> *Rushmin Fernando*
>> *Technical Lead*
>>
>> WSO2 Inc. <http://wso2.com/> - Lean . Enterprise . Middleware
>>
>> mobile : +94772891266
>>
>>
>>
>> ___
>> Architecture mailing list
>> Architecture@wso2.org
>> 
>> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>>
>>
>

-- 
*Imesh Gunaratne*
Software Architect
WSO2 Inc: http://wso2.com
T: +94 11 214 5345 M: +94 77 374 2057
W: https://medium.com/@imesh TW: @imesh
lean. enterprise. middleware
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] Decoupling Hiera from WSO2 Puppet Modules

2016-08-29 Thread Imesh Gunaratne
Hi Isuru,

Thanks for the offline explanation on the proposed architecture!
Technically this looks impressive! Please find my thoughts below:

I think it would be better if we can avoid introducing the puppet profile
concept and instead add a separate .pp file for hiera lookups within the same
puppet module. This file can either be shipped with the hiera-data
distribution or kept inside the puppet module with a switch which can be
changed using a facter variable.

If so, users might only need to do the following to set up a puppet environment
(a rough command sketch is given after the list):

   1. Install puppet server
   2. Install wso2 puppet module (example: puppet module install wso2esb
   --version 5.0.0)
   3. If hiera-data is needed
  1. Extract wso2 product hiera-data distribution to puppet-home
  2. Set facter variable (use_hieradata=true) on client/puppet agent
  side
  3. Update hiera-data files with required configuration values
   4. If not, set configuration values in params.pp file
   5. Trigger puppet agent
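
As a rough sketch, steps 2, 3 and 5 could look like this on the command line
(the module name and version come from the example above; the hiera-data
distribution file name and the paths are only assumptions):

# on the puppet server
puppet module install wso2esb --version 5.0.0
unzip wso2esb-5.0.0-hieradata.zip -d /etc/puppetlabs/code/environments/production/
# edit the extracted hieradata files with the required configuration values

# on the puppet agent node: one way to set the fact for this run, then trigger it
FACTER_use_hieradata=true puppet agent -t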

Thanks

On Tue, Aug 23, 2016 at 10:24 AM, Akila Ravihansa Perera  wrote:

> Hi Isuru,
>
> This is great!
>
> Currently, WSO2 Puppet Modules are tightly coupled to hiera to manage the
>> data component. This brings a few limitations, and the main problem is not
>> being able to push the modules to puppet forge [1]. Ideally it should be
>> possible to push the released modules to puppet forge and a user should be
>> able to install those and refer them from their manifests.
>>
>> One possible way to perform this decoupling is to use the roles and
>> profiles pattern [2, 3]. In a nutshell, it adds abstractions on top of
>> component modules, named as a 'profile' layer and a 'role' layer. If we
>> apply the same concept to WSO2 puppet modules, we get a hierarchy similar
>> to the following, using API Manager as an example:
>>
>
> +1 for decoupling Hiera from Puppet modules. Data backend should be
> pluggable to a Puppet module.
>
>
>> As a possible solution, I suggest the following:
>>
>>- aggregate the many fine grained defined types to a few types by
>>introducing wrappers; typically these should be for cleaning +
>>installation, configuring and starting the server. Basically these are 
>> like
>>three stages in starting a carbon server from puppet [6].
>>- Add extension points to manage any type of resource which puppet
>>supports at these stages (at installing, configuring and starting the
>>server) [7]
>>- Call the relevant module from the profile layer, passing all the
>>data obtained via lookups/explicit data passing
>>
>> This will help to keep the puppet modules easy to use; a user needs to
>> just include them in a manifest and run. Also it allows a certain amount of
>> extendability as well.
>>
> +1. This approach gives the users a much balanced control + flexibility.
> But will there be any use of invoking a resource type after wso2 server is
> started? If there is such case, users can add that logic directly to the
> relevant module right?
> I was earlier under the opinion that we should make use of Puppet
> containment [1] to handle dependencies at the run time. But thinking again,
> it makes me wonder whether we need that kind of complexity at this point.
> Better to start with the simplest approach and let it evolve.
>
>
> --
> Akila Ravihansa Perera
> WSO2 Inc.;  http://wso2.com/
>
> Blog: http://ravihansa3000.blogspot.com
>



-- 
*Imesh Gunaratne*
Software Architect
WSO2 Inc: http://wso2.com
T: +94 11 214 5345 M: +94 77 374 2057
W: https://medium.com/@imesh TW: @imesh
lean. enterprise. middleware
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] Introduce API Product concept for WSO2 API Manager

2016-08-29 Thread Imesh Gunaratne
Hi Sanjeewa,

On Tue, Aug 23, 2016 at 3:07 PM, Sanjeewa Malalgoda 
wrote:

>
> *Requirement*
>
> As an API provider(creator or publisher), you need to create an API
> product. The API product is the mechanism through which your APIs are
> bundled and published so that developers can consume them. An API product
> is a collection of APIs combined with a predefined policy set presented to
> developers as a bundle(in a way they can subscribe to product and use it).
>

What would be the reason for choosing the term "product" for grouping a set
of APIs? Is that something already used in the industry in the API-M context?

If not, I think it would be more meaningful to call it an API group, because
the term "product" may conflict with Carbon products and would be difficult
to understand at first sight.
Thanks

>
> *Proposed Solution*
>
> The API product can also include some information specific to your
> business/product for monitoring or analytics. You can create different
> products to provide features for different use cases. So instead of just
> giving developers a list of APIs, you can bundle specific resources
> together to create a product that solves a specific user need. As example
> we can consider following use case.
>
> Example:
> Let say we have user information API, credit service API, leasing API. And
> let say we need to have 2 mobile applications for credit and leasing. Then
> we can create 2 API products named credit API product and leasing API
> product(both will share user information API). API products are also a good
> way to control access to a specific bundle of APIs. For example, you can
> bundle APIs that can only be accessed by internal developers, or bundle
> APIs that can only be accessed by paying customers. Please see following
> diagram to understand this scenario.
>
>
>
> ​
>
>
>
>
> *Implementation Details.**From Publisher's side.*
> From publisher side we need to let users to create API products like we do
> for APIs. To do that we may need to provide user interface similar to API
> create. In this API product creation process we collect following
> information from product creator.
>
>- Product Name and product specific meta-data.
>- List of APIs belong to that product and their tiers used for product.
>- Visibility and subscription availability.
>- Tiers and access control related information.
>- Life-cycle management for API product.
>
> *From store side.*
> List products same way we list APIs and then let users to subscribe for
> API Products. Once we go to subscription users should be able to see apis
> and api products belong to application. Also when we go to specific API
> product then we should be able to see all APIs belong to that API and
> selectively go through them. We will not be able to have single swagger
> file or wadl for complete API product as it shares multiple APIs.
>
> *Gateway Side.*
> For the throttling we need to do some improvements to throttle API product
> level requests. While doing throttling we need to consider number of
> requests allocated for API product as well and then consider that for
> throttling.
>
> *Key Manager side.*
> While validating subscription we can check API level subscription and API
> product level subscription both.
>
> Please share your thoughts on this.
>
> Thanks,
> sanjeewa.
>
> --
>
> *Sanjeewa Malalgoda*
> WSO2 Inc.
> Mobile : +94713068779
>
> <http://sanjeewamalalgoda.blogspot.com/>blog :http://sanjeewamalalgoda.
> blogspot.com/ <http://sanjeewamalalgoda.blogspot.com/>
>
>
>
> ___
> Architecture mailing list
> Architecture@wso2.org
> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>
>


-- 
*Imesh Gunaratne*
Software Architect
WSO2 Inc: http://wso2.com
T: +94 11 214 5345 M: +94 77 374 2057
W: https://medium.com/@imesh TW: @imesh
lean. enterprise. middleware
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] Introduce API Product concept for WSO2 API Manager

2016-08-30 Thread Imesh Gunaratne
Thanks, Uvindra! I can only see this concept being used at [1]. Do you have
any other references?
Still, I don't think the product term is well suited for grouping APIs.

[1] http://docs.apigee.com/developer-services/content/what-api-product


On Tue, Aug 30, 2016 at 11:52 AM, Uvindra Dias Jayasinha 
wrote:

> Hi Imesh,
>
> Yes the term API Product has already been coined by the industry. It makes
> sense to use this term because we allow different API products to have
> different policies associated with them. So it's not just a grouping of
> APIs; you can have the same set of APIs grouped together with different
> policies associated with them as different API product instances.
>
> On 30 August 2016 at 07:19, Imesh Gunaratne  wrote:
>
>> Hi Sanjeewa,
>>
>> On Tue, Aug 23, 2016 at 3:07 PM, Sanjeewa Malalgoda 
>> wrote:
>>
>>>
>>> *Requirement*
>>>
>>> As an API provider(creator or publisher), you need to create an API
>>> product. The API product is the mechanism through which your APIs are
>>> bundled and published so that developers can consume them. An API product
>>> is a collection of APIs combined with a predefined policy set presented to
>>> developers as a bundle(in a way they can subscribe to product and use it).
>>>
>>
>> ​What would be the reason for choosing​
>>
>> ​the term "product"​ for grouping a set of APIs? Is that something
>> already used in the industry in the API-M context?
>>
>> If not, ​I think it would it be more
>>
>> ​meaningful
>>  to call it
>> ​ an API group because the term "product" may conflict with Carbon
>> products and would be difficult to understand at first sight.
>> ​
>> Thanks
>>
>>>
>>> *Proposed Solution*
>>>
>>> The API product can also include some information specific to your
>>> business/product for monitoring or analytics. You can create different
>>> products to provide features for different use cases. So instead of just
>>> giving developers a list of APIs, you can bundle specific resources
>>> together to create a product that solves a specific user need. As example
>>> we can consider following use case.
>>>
>>> Example:
>>> Let say we have user information API, credit service API, leasing API.
>>> And let say we need to have 2 mobile applications for credit and leasing.
>>> Then we can create 2 API products named credit API product and leasing API
>>> product(both will share user information API). API products are also a good
>>> way to control access to a specific bundle of APIs. For example, you can
>>> bundle APIs that can only be accessed by internal developers, or bundle
>>> APIs that can only be accessed by paying customers. Please see following
>>> diagram to understand this scenario.
>>>
>>>
>>>
>>> ​
>>>
>>>
>>>
>>>
>>> *Implementation Details.**From Publisher's side.*
>>> From publisher side we need to let users to create API products like we
>>> do for APIs. To do that we may need to provide user interface similar to
>>> API create. In this API product creation process we collect following
>>> information from product creator.
>>>
>>>- Product Name and product specific meta-data.
>>>- List of APIs belong to that product and their tiers used for
>>>product.
>>>- Visibility and subscription availability.
>>>- Tiers and access control related information.
>>>- Life-cycle management for API product.
>>>
>>> *From store side.*
>>> List products same way we list APIs and then let users to subscribe for
>>> API Products. Once we go to subscription users should be able to see apis
>>> and api products belong to application. Also when we go to specific API
>>> product then we should be able to see all APIs belong to that API and
>>> selectively go through them. We will not be able to have single swagger
>>> file or wadl for complete API product as it shares multiple APIs.
>>>
>>> *Gateway Side.*
>>> For the throttling we need to do some improvements to throttle API
>>> product level requests. While doing throttling we need to consider number
>>> of requests allocated for API product as well and then consider that for
>>> throttling.
>>>
>>> *Key Manager side.*
>>> While validating subscription we can check API level subscription and
>>> API product level subsc

Re: [Architecture] NextGen Tooling - Tool Palette

2016-08-30 Thread Imesh Gunaratne
Great work Susinda! Few comments below:

   - I think initially we may not need multiple groups inside the tool
   palette for the sequence diagramming module.
   - Maybe we can directly use the exact tool palette elements we need:
   Lifelines, Mediators, and Arrows.
   - The term Tool, used in Tools.Models.Tool, might not suit well. Shall we
   call it an Element/Item (a tool palette element/item)?
   - The images used inside the tool palette may need to have transparent
   backgrounds.
   - Overall, we may also need to consider the following:
      - As we do not store positioning data, we may need to do the
      positioning for the user. If so, we may not need to allow the user to
      change the positioning of elements.
      - We might need to update the colours of the elements and the tool
      palette according to a colour scheme.

Thanks


On Tue, Aug 30, 2016 at 1:04 PM, Susinda Perera  wrote:

> The same webpage can be seen at here[1]
> [1] - https://wso2-incubator.github.io/js-tooling-framework/
> sequence-editor/index.html
>
> Thanks
> Susinda
>
>
> On Tue, Aug 30, 2016 at 11:34 AM, Susinda Perera  wrote:
>
>> Hi All
>>
>> I have started implementing $subject. Figure[1] below is the screenshot
>> of the current implementation.
>> The implementation of the Tool palette is divided into the following models and
>> views (considering extensibility), and the js code [2] describes the
>> connectivity of the modules.
>>
>> ToolPalette / ToolPaletteView
>> --ToolGroupWraper / ToolGroupWraperView
>> --ToolGroup / ToolgroupView
>>--Tool / ToolView
>>
>> TODO
>> - Implement the drag and drop support
>> - Implement the collapse of tool palette
>> - Externalize the templates for toolView, toolgroupWrapperView and
>> toolPaletteView
>>
>> Please give your inputs
>>
>>
>>
>> [2] -js code
>> //create tools
>> var calcTool = new Tools.Models.Tool({
>> toolId: "tool1",
>> toolImage:"images/icon1.png"
>> });
>>
>> var calcTool2 = new Tools.Models.Tool({
>> toolId: "tool2",
>> toolImage:"images/icon1.png"
>> });
>>
>> //create tool group
>> var group = new Tools.Models.ToolGroup();
>> group.add(calcTool);
>> group.add(calcTool2);
>>
>> //create tool palette
>> var toolPalette = new Tools.Models.ToolPalatte();
>> var toolGroupWrapper = new Tools.Models.ToolGroupWrapper({toolGroupName:
>> "LifeLines", toolGroup:group});
>> toolPalette.add(toolGroupWrapper);
>>
>> //render the palate View
>> var paletteView = new Tools.Views.ToolPalatteView({collection: toolPalette});
>> paletteView.render();
>>
>> [1] - screenshot
>>
>>
>>
>> ​
>>
>> --
>> *Susinda Perera*
>> Software Engineer
>> B.Sc.(Eng), M.Sc(Computer Science), AMIE(SL)
>> Mobile:(+94)716049075
>> Blog: susinda.blogspot.com
>> WSO2 Inc. http://wso2.com/
>> Tel : 94 11 214 5345 Fax :94 11 2145300
>> ​
>>
>
>
>
> --
> *Susinda Perera*
> Software Engineer
> B.Sc.(Eng), M.Sc(Computer Science), AMIE(SL)
> Mobile:(+94)716049075
> Blog: susinda.blogspot.com
> WSO2 Inc. http://wso2.com/
> Tel : 94 11 214 5345 Fax :94 11 2145300
>
>


-- 
*Imesh Gunaratne*
Software Architect
WSO2 Inc: http://wso2.com
T: +94 11 214 5345 M: +94 77 374 2057
W: https://medium.com/@imesh TW: @imesh
lean. enterprise. middleware
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] NextGen Tooling - Tool Palette

2016-08-30 Thread Imesh Gunaratne
On Wed, Aug 31, 2016 at 9:31 AM, Chathura Ekanayake 
wrote:

> Hi Susinda / Imesh,
>
> If positioning information is not stored with the diagram, does it auto
> adjust size and positioning of elements based on the length of text
> provided as lifeline labels and text on lines connecting lifelines?
> Further, are we allowing users to change the order of lifelines?
>

Yes, what we thought was to handle the positioning of elements, size (width &
height), text wrapping, etc. by the diagramming tool. The main reason for this
is to avoid saving metadata files for the visual representations and to render
diagrams in the same way for given language definitions.

Nevertheless, the user will be able to change the order of the elements as
needed. That information will be directly reflected in the language
definition.

>
> I think, although we might not need for sequence diagrams, in general it
> may be better to store positioning information with diagrams as auto
> layouting can be a complex problem. In addition, users may feel lack of
> control over the diagram, if they cannot position elements. For example,
> BPMN stores positioning information for each task and sequence in a
> separate xml section, which is referred by the corresponding logical
> construct when drawing the diagram in a canvas.
>

​Yes this requirement is understandable. At the initial phase of the
project we thought of following a simpler approach. Let's have a detailed
discussion on this and analyze how it would affect BPS.

Thanks​


>
> Regards,
> Chathura
>
> On Tue, Aug 30, 2016 at 4:24 PM, Susinda Perera  wrote:
>
>> Hi Imesh
>>
>> +1 for all other suggestions and comments
>> Multiple groups added only to demonstrate the tool-palette features, not
>> needed for this editor but may be useful for datamapper editor.
>> We need to finalize the icons for the various tools/actions - Hope ESb
>> team can give some input
>> We will have UX review fix the other UI issues, +1 for go with a theme
>> approach (like dark/white)
>> I'll do the code refactoring for the name changes and positioning issues.
>>
>> Thanks
>> Susinda
>>
>>
>> On Tue, Aug 30, 2016 at 3:35 PM, Imesh Gunaratne  wrote:
>>
>>> Great work Susinda! Few comments below:
>>>
>>>- I think initially we may not need multiple groups inside the tool
>>>palette for sequence diagramming module.
>>>- Maybe we can directly use the exact tool palette elements we need;
>>>Lifecycles, Mediators, and Arrows.
>>>- The term Tool used at Tools.Models.Tool, might not suit well.
>>>Shall we call it an Element/Item (a tool palette element/item)?
>>>- The images used inside the tool palette may need to have
>>>transparent backgrounds.
>>>- Overall, we may also need to consider following:
>>>   - As we do not store positioning data, we may need to do the
>>>   positioning for the user. If so we may not need to provide user to 
>>> change
>>>   the positioning of elements as needed.
>>>   - We might need to update the colours of the elements and the
>>>   tool palette according to a colour scheme.
>>>
>>> Thanks
>>>
>>>
>>> On Tue, Aug 30, 2016 at 1:04 PM, Susinda Perera 
>>> wrote:
>>>
>>>> The same webpage can be seen at here[1]
>>>> [1] - https://wso2-incubator.github.io/js-tooling-framework/sequen
>>>> ce-editor/index.html
>>>>
>>>> Thanks
>>>> Susinda
>>>>
>>>>
>>>> On Tue, Aug 30, 2016 at 11:34 AM, Susinda Perera 
>>>> wrote:
>>>>
>>>>> Hi All
>>>>>
>>>>> I have started implementing $subject. Figure[1] below is the
>>>>> screenshot of the current implementation.
>>>>> The implementation pf Tool palette is devided to following models and
>>>>> views (considering extendability) and the js code[2] describes the
>>>>> connectivity of the modules.
>>>>>
>>>>> ToolPalette / ToolPaletteView
>>>>> --ToolGroupWraper / ToolGroupWraperView
>>>>> --ToolGroup / ToolgroupView
>>>>>--Tool / ToolView
>>>>>
>>>>> TODO
>>>>> - Implement the drag and drop support
>>>>> - Implement the collapse of tool palette
>>>>> - Externalize the templates for toolView, toolgroupWrapperView and
>>>>> toolPaletteView
>>>>>
>>>>> Please give your in

Re: [Architecture] NextGen Tooling - Tool Palette

2016-08-31 Thread Imesh Gunaratne
Hi Susinda,

It looks like we have used the toolGroupName property for generating CSS ids
[1] and also for the titles [2]. As a result, the title cannot contain any
spaces. I think we may need to introduce a separate property for the group
title.

[1]
https://github.com/wso2-incubator/js-tooling-framework/blob/master/sequence-editor/js/tool_palette/toolgroupwrapper-view.js#L36
[2]
https://github.com/wso2-incubator/js-tooling-framework/blob/master/sequence-editor/index.html#L18

Thanks


On Wed, Aug 31, 2016 at 10:18 AM, Imesh Gunaratne  wrote:

> On Wed, Aug 31, 2016 at 9:31 AM, Chathura Ekanayake 
> wrote:
>
>> Hi Susinda / Imesh,
>>
>> If positioning information is not stored with the diagram, does it auto
>> adjust size and positioning of elements based on the length of text
>> provided as lifeline labels and text on lines connecting lifelines?
>> Further, are we allowing users to change the order of lifelines?
>>
>
> ​Yes, what we thought was to handle positioning​
>
> ​of elements, size (widh & height), text warapping, etc by the diagramming
> tool​. The main reason for this is to avoid saving metadata files for the
> visual representations and render diagrams in the same way for given
> langugage definitions.
>
> Nevertheless the user will be able to change the order of the elements as
> needed. That information will be directly reflected in the langugage
> definition.
>
>>
>> I think, although we might not need for sequence diagrams, in general it
>> may be better to store positioning information with diagrams as auto
>> layouting can be a complex problem. In addition, users may feel lack of
>> control over the diagram, if they cannot position elements. For example,
>> BPMN stores positioning information for each task and sequence in a
>> separate xml section, which is referred by the corresponding logical
>> construct when drawing the diagram in a canvas.
>>
>
> ​Yes this requirement is understandable. At the initial phase of the
> project we thought of following a simpler approach. Let's have a detailed
> discussion on this and analyze how it would affect BPS.
>
> Thanks​
>
>
>>
>> Regards,
>> Chathura
>>
>> On Tue, Aug 30, 2016 at 4:24 PM, Susinda Perera  wrote:
>>
>>> Hi Imesh
>>>
>>> +1 for all other suggestions and comments
>>> Multiple groups added only to demonstrate the tool-palette features, not
>>> needed for this editor but may be useful for datamapper editor.
>>> We need to finalize the icons for the various tools/actions - Hope ESb
>>> team can give some input
>>> We will have UX review fix the other UI issues, +1 for go with a theme
>>> approach (like dark/white)
>>> I'll do the code refactoring for the name changes and positioning issues.
>>>
>>> Thanks
>>> Susinda
>>>
>>>
>>> On Tue, Aug 30, 2016 at 3:35 PM, Imesh Gunaratne  wrote:
>>>
>>>> Great work Susinda! Few comments below:
>>>>
>>>>- I think initially we may not need multiple groups inside the tool
>>>>palette for sequence diagramming module.
>>>>- Maybe we can directly use the exact tool palette elements we
>>>>need; Lifecycles, Mediators, and Arrows.
>>>>- The term Tool used at Tools.Models.Tool, might not suit well.
>>>>Shall we call it an Element/Item (a tool palette element/item)?
>>>>- The images used inside the tool palette may need to have
>>>>transparent backgrounds.
>>>>- Overall, we may also need to consider following:
>>>>   - As we do not store positioning data, we may need to do the
>>>>   positioning for the user. If so we may not need to provide user to 
>>>> change
>>>>   the positioning of elements as needed.
>>>>   - We might need to update the colours of the elements and the
>>>>   tool palette according to a colour scheme.
>>>>
>>>> Thanks
>>>>
>>>>
>>>> On Tue, Aug 30, 2016 at 1:04 PM, Susinda Perera 
>>>> wrote:
>>>>
>>>>> The same webpage can be seen at here[1]
>>>>> [1] - https://wso2-incubator.github.io/js-tooling-framework/sequen
>>>>> ce-editor/index.html
>>>>>
>>>>> Thanks
>>>>> Susinda
>>>>>
>>>>>
>>>>> On Tue, Aug 30, 2016 at 11:34 AM, Susinda Perera 
>>>>> wrote:
>>>>>
>>>>>> Hi All
>>>>>>
>>>

Re: [Architecture] Decoupling Hiera from WSO2 Puppet Modules

2016-08-31 Thread Imesh Gunaratne
On Wed, Aug 31, 2016 at 12:52 PM, Pubudu Gunatilaka 
wrote:

>
> We have done the following changes to puppet modules.
>
> 1. Users need to install java separately before installing any wso2
> product. But they can use the wso2base::java if needed [3].
> 2. We have moved creating java system preferences directories to
> wso2base::system [4]. Earlier this was done under the wso2base::java class.
> 3. We are calling wso2base resources from the product init.pp file as in
> [5]. These resources are same for all the products and we can move all
> these resources to a single resource in wso2base. Then it would be
> difficult to do any changes as all these changes should be done in
> wso2base, which is the common module.
>

Just to clarify, how is the overall user experience? Will users be able to
do the following (a rough command sketch is given after the list)?

1. Set up a new puppet server
2. puppet module install <module-name> --version <version>
3. Update the site.pp file
4. Copy the hieradata files to the proper location and update the configurations
as needed
5. Run the puppet agent
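
A rough sketch of steps 2 and 3 (the module name, class name and node name are
placeholders, and the path assumes the default Puppet 4 code directory):

puppet module install <module-name> --version <version>
# declare the node in site.pp and include the module's top-level class
cat >> /etc/puppetlabs/code/environments/production/manifests/site.pp <<'EOF'
node '<node-name>' {
  include <module-class>
}
EOF

Steps 4 and 5 would then be copying the hieradata files into place and running
'puppet agent -t' on the node.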

Thanks

>
> [1] - https://github.com/wso2/puppet-base/blob/master/
> manifests/params.pp#L19
> [2] - https://github.com/wso2/puppet-as/blob/master/manifests/init.pp#L21
> [3] - https://github.com/isurulucky/puppet-modules/
> blob/master/modules/profiles/manifests/wso2am.pp#L44
> [4] - https://github.com/wso2/puppet-base/blob/master/
> manifests/system.pp#L63
> [5] - https://github.com/wso2/puppet-as/blob/master/manifests/init.pp#L76
>
> Thank you!
>
> On Mon, Aug 29, 2016 at 6:53 PM, Imesh Gunaratne  wrote:
>
>> Hi Isuru,
>>
>> Thanks for the offline explanation on the proposed architecture!
>> Technically this looks impressive! Please find my thoughts below:
>>
>> I think it would be better if we can avoid introducing puppet profile
>> concept and try to add a separate pp file for hiera lookups within the same
>> puppet module. This file can be either shipped with the hiera-data
>> distribution or kept inside the puppet module with a switch which can be
>> changed using a facter variable.
>>
>> If so, users might only need to do following to set up a puppet
>> environment:
>>
>>1. Install puppet server
>>2. Install wso2 puppet module (example: puppet module install wso2esb
>>--version 5.0.0)
>>3. If hiera-data is needed
>>   1. Extract wso2 product hiera-data distribution to puppet-home
>>   2. Set facter variable (use_hieradata=true) on client/puppet agent
>>   side
>>   3. Update hiera-data files with required configuration values
>>4. If not, set configuration values in params.pp file
>>5. Trigger puppet agent
>>
>> Thanks
>>
>> On Tue, Aug 23, 2016 at 10:24 AM, Akila Ravihansa Perera <
>> raviha...@wso2.com> wrote:
>>
>>> Hi Isuru,
>>>
>>> This is great!
>>>
>>> Currently, WSO2 Puppet Modules are tightly coupled to hiera to manage
>>>> the data component. This brings a few limitations, and the main problem is
>>>> not being able to the modules to puppet forge [1]. Ideally it should be
>>>> possible to push the released modules to puppet forge and a user should be
>>>> able to install those and refer them from their manifests.
>>>>
>>>> One possible way to perform this decoupling is to use the roles and
>>>> profiles pattern [2, 3]. In a nutshell, it adds abstractions on top of
>>>> component modules, named as a 'profile' layer and a 'role' layer. If we
>>>> apply the same concept to WSO2 puppet modules, we get a hierarchy similar
>>>> to the following, using API Manager as an example:
>>>>
>>>
>>> +1 for decoupling Hiera from Puppet modules. Data backend should be
>>> pluggable to a Puppet module.
>>>
>>>
>>>> As a possible solution, I suggest the following:
>>>>
>>>>- aggregate the many fine grained defined types to a few types by
>>>>introducing wrappers; typically these should be for cleaning +
>>>>installation, configuring and starting the server. Basically these are 
>>>> like
>>>>three stages in starting a carbon server from puppet [6].
>>>>- Add extension points to manage any type of resource which puppet
>>>>supports at these stages (at installing, configuring and starting the
>>>>server) [7]
>>>>- Call the relevant module from the profile layer, passing all the
>>>>data obtained via lookups/explicit data passing
>>>>
>>>> This will help to keep the puppet

[Architecture] Managing Multiple Deployment Patterns in PaaS Artifacts

2016-09-02 Thread Imesh Gunaratne
Hi All,

Currently, in the K8S and Mesos artifacts, we only have support for two
deployment patterns per product, that is, for single-JVM and fully
distributed deployments. We have also included the files required for both
patterns inside the same directory, and they are managed by the same
deploy.sh bash script. In this approach, it would be difficult to manage
multiple patterns per product as deploy.sh needs to be parameterized
accordingly.

Shankar proposed to split this into separate folders and have a folder per
deployment pattern. I'm +1 for that approach. Please find a sample folder
structure designed according to this approach below, followed by a rough
sketch of a per-pattern deploy.sh:

*/<paas>-artifacts-<product>/*
 /<pattern-1>
  /README.md - Include pattern information
  /<deployment-artifacts> - Artifacts needed for deployment automation
  /deploy.sh
 /<pattern-2>
  /README.md - Include pattern information
  /<deployment-artifacts> - Artifacts needed for deployment automation
  /deploy.sh
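
To make the difference concrete, each pattern directory would carry its own
small deploy.sh; a rough sketch for a Kubernetes pattern (the artifact file
names below are only examples):

#!/bin/bash
# deploy.sh for <pattern-1>: create this pattern's artifacts in order
set -e
kubectl create -f wso2esb-manager-controller.yaml
kubectl create -f wso2esb-manager-service.yaml
kubectl create -f wso2esb-worker-controller.yaml
kubectl create -f wso2esb-worker-service.yaml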

Please share your thoughts on this.

Thanks

-- 
*Imesh Gunaratne*
Software Architect
WSO2 Inc: http://wso2.com
T: +94 11 214 5345 M: +94 77 374 2057
W: https://medium.com/@imesh TW: @imesh
lean. enterprise. middleware
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] Managing Multiple Deployment Patterns in PaaS Artifacts

2016-09-02 Thread Imesh Gunaratne
On Fri, Sep 2, 2016 at 6:17 AM, Vishanth Balasubramaniam  wrote:

> Hi,
>
> +1 to have different directories to manage deployment patterns. We can
> also have single deploy script and pass the pattern name as a parameter.
> WDYT?
>
> @Vishanth: A good thought! Our intention is to avoid parameterizing
deploy.sh and provide one for each pattern.

On Fri, Sep 2, 2016 at 6:43 PM, Chamila De Alwis  wrote:

> Hi Imesh,
>
> Wouldn't we be duplicating (say for Kubernetes) replication controller and
> service definitions across deployment patterns? If it's relatively easier
> to just have a configurable description of the deployment pattern can we
> follow something like the following?
>
> */<paas>-artifacts-<product>/*
>  /artifacts
>   /wso2<product>-<profile>-controller.yaml
>   /wso2<product>-<profile>-service.yaml
>  /<pattern-1>
>   /README.md - Include pattern information
>   /pattern.yaml - The ordered list of profiles to deploy
>   /deploy.sh
>  /<pattern-2>
>   /README.md - Include pattern information
>   /pattern.yaml - The ordered list of profiles to deploy
>   /deploy.sh
>
>

@Chamila: In each pattern, a product profile might contain different
configuration values. Therefore, for each pattern/profile combination we may
need to have a container image. The profile yaml files may then contain
pattern- and profile-specific values. If so, we would need to parameterize the
profile yaml files with this suggestion.

By considering the number of yaml files each pattern would have, I think
duplicating them might not add much overhead. Don't you agree?

Thanks


>
>> On Fri, Sep 2, 2016 at 3:46 PM, Imesh Gunaratne  wrote:
>>
>>> Hi All,
>>>
>>> Currently, in K8S and Mesos Artifacts, we only have support for two
>>> deployment patterns in each product. That is for single-JVM and fully
>>> distributed deployments. We also have included files required for both
>>> patterns inside the same directly and they are managed by the same
>>> deploy.sh bash script. In this approach, it would be difficult to manage
>>> multiple patterns of products as deploy.sh needs to be parameterized
>>> accordingly.
>>>
>>> Shankar proposed to split this into separate folders and have a folder
>>> per deployment pattern. I'm +1 for that approach. Please find a sample
>>> folder structure designed according to this approach below:
>>>
>>> */-artifacts-/*
>>>  /
>>>   /README.md - Include pattern information
>>>   / - Artifacts needed for deployment
>>> automation
>>>   /deploy.sh
>>>  /
>>>   /README.md - Include pattern information
>>>   / - Artifacts needed for deployment
>>> automation
>>>   /deploy.sh
>>>
>>> Please share your thoughts on this.
>>>
>>> Thanks
>>>
>>> --
>>> *Imesh Gunaratne*
>>> Software Architect
>>> WSO2 Inc: http://wso2.com
>>> T: +94 11 214 5345 M: +94 77 374 2057
>>> W: https://medium.com/@imesh TW: @imesh
>>> lean. enterprise. middleware
>>>
>>>
>>
>>
>> --
>> *Vishanth Balasubramaniam*
>> Committer & PMC Member, Apache Stratos,
>> Software Engineer, WSO2 Inc.; http://wso2.com
>>
>> mobile: *+94 77 17 377 18*
>> about me: *http://about.me/vishanth <http://about.me/vishanth>*
>>
>
>


-- 
*Imesh Gunaratne*
Software Architect
WSO2 Inc: http://wso2.com
T: +94 11 214 5345 M: +94 77 374 2057
W: https://medium.com/@imesh TW: @imesh
lean. enterprise. middleware
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] NextGen Tooling - Tool Palette

2016-09-03 Thread Imesh Gunaratne
Hi Frank,

On Sat, Sep 3, 2016 at 8:35 PM, Frank Leymann  wrote:

> Let me even emphasize what Chathura wrote: users will be really annoyed if
> we don't preserve the layout/rendering of a diagram. It will be a
> show-stopper for using the tool
>

Thanks for your feedback! According to the current approach, the layout and
rendering of the diagram will be preserved. The tool will render diagrams
the same way the author drew them, for any other user. The only limitation
users will experience is that re-positioning and re-sizing of the elements
will not be possible; those will be controlled by the tool.

If we were to persist positioning and sizing information, we might need to
introduce a diagram file in addition to the language file. Do you think we
need to follow that model?

Thanks

>
>
> Best regards,
> Frank
>
> 2016-08-31 6:01 GMT+02:00 Chathura Ekanayake :
>
>> Hi Susinda / Imesh,
>>
>> If positioning information is not stored with the diagram, does it auto
>> adjust size and positioning of elements based on the length of text
>> provided as lifeline labels and text on lines connecting lifelines?
>> Further, are we allowing users to change the order of lifelines?
>>
>> I think, although we might not need for sequence diagrams, in general it
>> may be better to store positioning information with diagrams as auto
>> layouting can be a complex problem. In addition, users may feel lack of
>> control over the diagram, if they cannot position elements. For example,
>> BPMN stores positioning information for each task and sequence in a
>> separate xml section, which is referred by the corresponding logical
>> construct when drawing the diagram in a canvas.
>>
>> Regards,
>> Chathura
>>
>> On Tue, Aug 30, 2016 at 4:24 PM, Susinda Perera  wrote:
>>
>>> Hi Imesh
>>>
>>> +1 for all other suggestions and comments
>>> Multiple groups added only to demonstrate the tool-palette features, not
>>> needed for this editor but may be useful for datamapper editor.
>>> We need to finalize the icons for the various tools/actions - Hope ESb
>>> team can give some input
>>> We will have UX review fix the other UI issues, +1 for go with a theme
>>> approach (like dark/white)
>>> I'll do the code refactoring for the name changes and positioning issues.
>>>
>>> Thanks
>>> Susinda
>>>
>>>
>>> On Tue, Aug 30, 2016 at 3:35 PM, Imesh Gunaratne  wrote:
>>>
>>>> Great work Susinda! Few comments below:
>>>>
>>>>- I think initially we may not need multiple groups inside the tool
>>>>palette for sequence diagramming module.
>>>>- Maybe we can directly use the exact tool palette elements we
>>>>need; Lifecycles, Mediators, and Arrows.
>>>>- The term Tool used at Tools.Models.Tool, might not suit well.
>>>>Shall we call it an Element/Item (a tool palette element/item)?
>>>>- The images used inside the tool palette may need to have
>>>>transparent backgrounds.
>>>>- Overall, we may also need to consider following:
>>>>   - As we do not store positioning data, we may need to do the
>>>>   positioning for the user. If so we may not need to provide user to 
>>>> change
>>>>   the positioning of elements as needed.
>>>>   - We might need to update the colours of the elements and the
>>>>   tool palette according to a colour scheme.
>>>>
>>>> Thanks
>>>>
>>>>
>>>> On Tue, Aug 30, 2016 at 1:04 PM, Susinda Perera 
>>>> wrote:
>>>>
>>>>> The same webpage can be seen at here[1]
>>>>> [1] - https://wso2-incubator.github.io/js-tooling-framework/sequen
>>>>> ce-editor/index.html
>>>>>
>>>>> Thanks
>>>>> Susinda
>>>>>
>>>>>
>>>>> On Tue, Aug 30, 2016 at 11:34 AM, Susinda Perera 
>>>>> wrote:
>>>>>
>>>>>> Hi All
>>>>>>
>>>>>> I have started implementing $subject. Figure[1] below is the
>>>>>> screenshot of the current implementation.
>>>>>> The implementation pf Tool palette is devided to following models and
>>>>>> views (considering extendability) and the js code[2] describes the
>>>>>> connectivity of the modules.
>>>>>>
>>>>>> ToolPalette / ToolPaletteVi

Re: [Architecture] NextGen Tooling - Tool Palette

2016-09-04 Thread Imesh Gunaratne
Hi Frank,

On Sun, Sep 4, 2016 at 4:06 PM, Frank Leymann  wrote:

> Hi Imesh,
>
> please allow for a question:  if I draw a diagram, I control the layout of
> it and as long as I don't store it I can change the size of the elements
> etc - is that correct?  But once I filed it, I can no longer change the
> sizes and I can't change the position of the elements?  Why not, i.e. what
> is the technical problem behind it?
>

What we thought at the initial design meeting was not to allow the user
to position and re-size elements. Anyway, I agree that it is a vital
requirement of a diagramming tool. Let's discuss and see how we can support
this.

>
> I just tried in Camunda editor:  I can change everything and save, open
> and change everything and save, open.  and the changes are all
> preserved as I made them.   Same is true for Signavio editor (tested just a
> minute ago too).
>
> What the tools are doing is, that the corresponding .bpmn filel has two
> subdocuments: one subdocument contains all the model elements to represent
> the process MODEL (in the  element), and then the rendering
> information for each model element including x-y-coordinates, height, width
> etc (in the  element).
>
> Thus, it state of the art to preserve the layouting information. I.e. we
> shouldn't be weaker than the state of the art.  Finally, you don't need to
> introduce a second file but can maintain the layouting info in the same
> file - but cleanly separate between the logic of the process model and its
> layout info.
>

Thanks for pointing this out! I think this would work very well for
diagrams where users would always use the tool to define the workflow.
However, for diagrams where users would prefer to write the language
syntax directly, storing metadata in the same file may cause readability
issues.

How do you like the approach MySQL Workbench has taken for storing EER
diagrams? Those diagram files store both the language definition and the
diagram metadata. When needed, the language definition can be generated from the
diagram file. If the database structure is created first, the diagram file
can be auto-generated.

Thanks

>
>
> Best regards,
> Frank
>
> 2016-09-04 4:31 GMT+02:00 Imesh Gunaratne :
>
>> Hi Frank,
>>
>> On Sat, Sep 3, 2016 at 8:35 PM, Frank Leymann  wrote:
>>
>>> Let me even emphasize what Chathura wrote: users will be really annoyed
>>> if we don't preserve the layout/rendering of a diagram. It will be a
>>> show-stopper for using the tool
>>>
>>
>> Thanks for your feedback! According to the current approach the layout
>> and rendering of the diagram will be preserved. The tool will render
>> diagrams the same way as the author drew it, for any other user. The only
>> limitation user will experience is that, re-positioning and re-sizing of
>> the elements will not be possible, those will be controlled by the tool.
>>
>> If we were to persist positioning and sizing information, we might need
>> to introduce a diagram file inaddition to the language file. Do you think
>> we need to follow that model?
>>
>> Thanks
>>
>>>
>>>
>>> Best regards,
>>> Frank
>>>
>>> 2016-08-31 6:01 GMT+02:00 Chathura Ekanayake :
>>>
>>>> Hi Susinda / Imesh,
>>>>
>>>> If positioning information is not stored with the diagram, does it auto
>>>> adjust size and positioning of elements based on the length of text
>>>> provided as lifeline labels and text on lines connecting lifelines?
>>>> Further, are we allowing users to change the order of lifelines?
>>>>
>>>> I think, although we might not need for sequence diagrams, in general
>>>> it may be better to store positioning information with diagrams as auto
>>>> layouting can be a complex problem. In addition, users may feel lack of
>>>> control over the diagram, if they cannot position elements. For example,
>>>> BPMN stores positioning information for each task and sequence in a
>>>> separate xml section, which is referred by the corresponding logical
>>>> construct when drawing the diagram in a canvas.
>>>>
>>>> Regards,
>>>> Chathura
>>>>
>>>> On Tue, Aug 30, 2016 at 4:24 PM, Susinda Perera 
>>>> wrote:
>>>>
>>>>> Hi Imesh
>>>>>
>>>>> +1 for all other suggestions and comments
>>>>> Multiple groups added only to demonstrate the tool-palette features,
>>>>> not needed for this editor but may be useful for datamapper editor.

Re: [Architecture] Managing Multiple Deployment Patterns in PaaS Artifacts

2016-09-05 Thread Imesh Gunaratne
On Sat, Sep 3, 2016 at 12:39 PM, Pubudu Gunatilaka  wrote:

> Hi,
>
> +1 for the approach and it is simpler this way. We need to
> consider the puppet side as well, as we use puppet provisioning for docker
> images. At the moment we only support the single JVM and fully distributed
> patterns. IMO we need to introduce patterns for puppet as well to go forward
> with this effort. Otherwise, users will have to write their own patterns by
> changing hiera data.
>

+1 Yes Pubudu! We will introduce support for managing patterns in Puppet first.

Thanks


>
> Thank you!
>
> On Fri, Sep 2, 2016 at 8:33 PM, Chamila De Alwis 
> wrote:
>
>>
>> On Fri, Sep 2, 2016 at 9:28 AM, Imesh Gunaratne  wrote:
>>
>>> In each pattern, a product profile might contain different configuration
>>> values. Therefore for each pattern/profile we may need to have a container
>>> image. The profile yaml files may then contain pattern/profile specific
>>> values. If so, we may need to parameterize the profile yaml files with this
>>> suggestion.
>>>
>>> By considering the number of yaml files each pattern would have, I think
>>> duplicating them might not bring much overhead. Don't you agree?
>>>
>>
>> Yes, I didn't think of that. In that case, artifact duplication is a
>> better, simpler approach. +1
>>
>>
>> Regards,
>> Chamila de Alwis
>> Committer and PMC Member - Apache Stratos
>> Senior Software Engineer | WSO2
>> Blog: https://medium.com/@chamilad
>>
>>
>>
>
>
> --
> *Pubudu Gunatilaka*
> Committer and PMC Member - Apache Stratos
> Software Engineer
> WSO2, Inc.: http://wso2.com
> mobile : +94774078049 <%2B94772207163>
>
>


-- 
*Imesh Gunaratne*
Software Architect
WSO2 Inc: http://wso2.com
T: +94 11 214 5345 M: +94 77 374 2057
W: https://medium.com/@imesh TW: @imesh
lean. enterprise. middleware
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] NextGen Tooling - Mediator Definition Format

2016-09-08 Thread Imesh Gunaratne
 required: true,
>
>   value: {
>
>   type: "String"
>
>   }
>
>   },
>
>  ...
>
>   ]
>
>   }
>
>
>
>
> I have attached the sample definition done for log mediator.
>
>
> Please share any ideas on other information required to mediator
> definition or better ways to define rather than this specification.
>
>
> Thanks,
>
> Nuwan
>
>
> --
> --
>
> *Nuwan Chamara Pallewela*
>
>
> *Software Engineer*
>
> *WSO2, Inc. *http://wso2.com
> *lean . enterprise . middleware*
>
> Email   *nuw...@wso2.com *
> Mobile  *+94719079739 <%2B94719079739>@*
>
>
>


-- 
*Imesh Gunaratne*
Software Architect
WSO2 Inc: http://wso2.com
T: +94 11 214 5345 M: +94 77 374 2057
W: https://medium.com/@imesh TW: @imesh
lean. enterprise. middleware
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] Speedup traffic serving process in scalable/ containerized deployment

2016-10-11 Thread Imesh Gunaratne
On Mon, Oct 10, 2016 at 8:01 PM, Sanjeewa Malalgoda 
wrote:

>
> When we do container based deployment standard approach we discussed so
> far was,
>
>- At the first request check the tenant and service from URL and do
>lookup for running instances.
>- If matching instance available route traffic to that.
>- Else spawn new instance using template(or image).  When we spawn
>this new instance we need to let it know what is the current tenant and
>data sources, configurations it should use.
>- Then route requests to new node.
>- After some idle time this instance may terminate.
>
> ​If we were to do this with a container cluster manager, I think we would
need to implement a custom scheduler (an entity similar to HPA in K8S) to
handle the orchestration process properly. Otherwise it would be difficult
to use the built-in orchestration features such as auto-healing and
autoscaling with this feature.

In other words, this might be a feature which should be implemented at the
container cluster manager level.
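
For clarity, the lookup-or-spawn flow quoted above could be sketched roughly
as below (a hypothetical ContainerClusterClient abstraction is assumed here,
not an existing API; as noted, the real logic would ideally live in the
container cluster manager or a custom scheduler):

import java.util.Optional;

// Hypothetical abstraction over the container cluster manager; not an existing API.
interface ContainerClusterClient {
    Optional<String> findInstance(String tenant, String service); // endpoint if already running
    String spawnInstance(String tenant, String service);          // start from template/image
}

class TenantAwareRouter {

    private final ContainerClusterClient cluster;

    TenantAwareRouter(ContainerClusterClient cluster) {
        this.cluster = cluster;
    }

    // First-request flow: reuse a running instance for the tenant, otherwise spawn one.
    String resolveEndpoint(String tenant, String service) {
        return cluster.findInstance(tenant, service)
                .orElseGet(() -> cluster.spawnInstance(tenant, service));
    }
}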

> *Suggestion*
> If we maintain hot pool(started and ready to serve requests) of servers
> for each server type(API Gateway, Identity Server etc) then we can cutoff
> server startup time + IaaS level spawn time from above process. Then when
> requests comes to wso2.com tenants API Gateway we can pick instance from
> gateway instance pool and set wso2.com tenant context and data source
> using service call(assuming setting context and configurations is much
> faster).
>

I think with this approach tenant isolation will become a problem. It
would be ideal to use tenancy features at the container cluster manager
level, for example namespaces in K8S.

Thanks

>
> *Implementation*
> For this we need to implement some plug-in to instance spawn process.
> Then instead of spawning new instance it will pick one instance from the
> pool and configure it to behave as specific tenant.
> For this each instance running in pool can open up port, so load balancer
> or scaling component can call it and tell what is the tenant and
> configurations.
> Once it configured server close that configuration port and start traffic
> serving.
> After some idle time this instance may terminate.
>
> This approach will help us if we met following condition.
> (Instance loading time + Server startup time + Server Lookup) *>* (Server
> Lookup + Loading configuration and tenant of running server from external
> call)
>
> Any thoughts on this?
>
> Thanks,
> sanjeewa.
> --
>
> *Sanjeewa Malalgoda*
> WSO2 Inc.
> Mobile : +94713068779
>
> <http://sanjeewamalalgoda.blogspot.com/>blog :http://sanjeewamalalgoda.
> blogspot.com/ <http://sanjeewamalalgoda.blogspot.com/>
>
>
>


-- 
*Imesh Gunaratne*
Software Architect
WSO2 Inc: http://wso2.com
T: +94 11 214 5345 M: +94 77 374 2057
W: https://medium.com/@imesh TW: @imesh
lean. enterprise. middleware
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] Multiple profile support for C5 based products.

2016-10-12 Thread Imesh Gunaratne
rs.
>>
>> We have started implementing the tool, please share your thoughts /
>> suggestions.
>>
>> [1] - [Architecture] How can we improve our profiles story?
>>
>> --
>> Thanks,
>> Shariq
>> Associate Technical Lead
>>
>> ___
>> Architecture mailing list
>> Architecture@wso2.org
>> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>>
>>
>
>
> --
> Nuwan Dias
>
> Software Architect - WSO2, Inc. http://wso2.com
> email : nuw...@wso2.com
> Phone : +94 777 775 729
>



-- 
*Imesh Gunaratne*
Software Architect
WSO2 Inc: http://wso2.com
T: +94 11 214 5345 M: +94 77 374 2057
W: https://medium.com/@imesh TW: @imesh
lean. enterprise. middleware
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] Configuration files in C5

2016-10-13 Thread Imesh Gunaratne
I would like to propose using a single YAML file for each distribution
(product/profile) to make the configuration process easier.

I understand that we are trying to do something similar using a properties
file (by overriding configurations in separate files), however IMO a
properties file might not suit this purpose well. A YAML file, or any
other type of file which is more readable and designed for managing
hierarchical data structures, would work well. More importantly, having a
single configuration file would make the configuration process simpler
and cleaner. WDYT?
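
As a small illustration of the hierarchy argument (this sketch assumes
SnakeYAML purely for demonstration; the actual C5 configuration
implementation may use a different parser), a nested YAML document maps
naturally onto a hierarchical structure instead of long dotted property keys:

import java.util.Map;
import org.yaml.snakeyaml.Yaml;

public class SingleYamlConfigSketch {

    public static void main(String[] args) {
        // Hypothetical fragment of a single per-distribution configuration file.
        String yaml =
                "transports:\n" +
                "  http:\n" +
                "    port: 9763\n" +
                "  https:\n" +
                "    port: 9443\n";

        // SnakeYAML loads the document into nested maps, preserving the hierarchy
        // that a flat .properties file would have to encode in dotted keys.
        Map<String, Object> config = (Map<String, Object>) new Yaml().load(yaml);
        Map<String, Object> transports = (Map<String, Object>) config.get("transports");
        Map<String, Object> https = (Map<String, Object>) transports.get("https");
        System.out.println("https port: " + https.get("port"));
    }
}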

Thanks

On Thursday, October 13, 2016, Sidath Weerasinghe  wrote:

> Hi Jayanga,
>
> What are the most frequently changing configurations in C5 which are going
> to store in the deployment.properties" file ?
>
> On Thu, Oct 13, 2016 at 5:07 PM, Jayanga Dissanayake  > wrote:
>
>> Hi All,
>>
>> With C5, we introduced "ConfigResolver" which enhances the user
>> experience in changing configuration values. With the previous C4x
>> approach, users had to know where the configuration files are and to,
>> change several configuration files to get the product working in some
>> scenarios.
>>
>> With "ConfigResolver" it allows us to have more frequently changing
>> configurations in one location "deployment.properties" file.
>>
>> A product has set of configurations that are needed to be changed in the
>> deployments and there are some other configurations that we don't change
>> unless there is a complex situation. Hence, ideally, deployment.properties
>> file should contain only the configurations that are frequently used and
>> can add more entries if a requirement arise.
>>
>> But with the requirements coming in with the "profile" support [1]. we
>> have to rethink the way config resolver handle the configuration files.
>>
>> eg:
>> 1. We need to enable indexing in API store and publisher, not in other
>> profiles.
>> 2. Enabling certain handlers in particular profiles.
>>
>> At present, there is no configuration to enable/disable these features.
>> We have to rethink the way we define configurations in features in future.
>> We have to have a way to enable/disable certain features so that those
>> could be disabled in certain profiles.
>>
>> Any idea/questions/clarifications are highly appreciated as it will help
>> to model the new configurations story in C5.
>>
>> [1] "Multiple profile support for C5 based products."
>>
>> Thanks,
>> *Jayanga Dissanayake*
>> Associate Technical Lead
>> WSO2 Inc. - http://wso2.com/
>> lean . enterprise . middleware
>> email: jaya...@wso2.com
>> 
>> mobile: +94772207259
>> <http://wso2.com/signature>
>>
>> ___
>> Architecture mailing list
>> Architecture@wso2.org
>> 
>> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>>
>>
>
>
> --
> Thank You,
> Best Regards,
>
> Sidath Weerasinghe
>
>
> *Intern*
>
> *WSO2, Inc. *
>
> *lean . enterprise . middleware *
>
>
> *Mobile: +94719802550*
>
> *Email: *sid...@wso2.com 
>
> Blog: https://medium.com/@sidath
>
> Linkedin: https://lk.linkedin.com/in/sidathweerasinghe
>


-- 
*Imesh Gunaratne*
Software Architect
WSO2 Inc: http://wso2.com
T: +94 11 214 5345 M: +94 77 374 2057
W: https://medium.com/@imesh TW: @imesh
lean. enterprise. middleware
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] Configuration files in C5

2016-10-14 Thread Imesh Gunaratne
On Fri, Oct 14, 2016 at 11:14 AM, Jayanga Dissanayake 
wrote:

>
> @Imesh/@Azeez: I also believe that merging all the configurations into one
> file would complicate the configuration process.
>

Yes, it might be complicated if we were to add the configurations of 30
files into one. However, the reality is a bit different, please see below:

https://github.com/wso2/puppet-modules/tree/master/modules/wso2am/templates/1.10.0/repository/conf

> eg: in APIM we have around 30 different xml files in the conf directory
> (excluding tomcat and axis2). So, combining all these into one file would
> complicate the user experience IMO.
>

Currently in API-M 1.10.0 there are only 14 config files in use (templated
in Puppet):
https://github.com/wso2/puppet-modules/tree/master/modules/wso2am/templates/1.10.0/repository/conf


​@Nuwan​: Would you like to share your thoughts on this?

​Thanks​


> Thanks,
> Jayanga.
>
> *Jayanga Dissanayake*
> Associate Technical Lead
> WSO2 Inc. - http://wso2.com/
> lean . enterprise . middleware
> email: jaya...@wso2.com
> mobile: +94772207259
> <http://wso2.com/signature>
>
> On Fri, Oct 14, 2016 at 10:50 AM, Afkham Azeez  wrote:
>
>> I think Imesh's suggestion merges all the config files and complicates
>> stuff a lot. With the deployment.properties file we are including only the
>> bits that most users will be concerned about and will provide a simple way
>> to configure such stuff.
>>
>> On Fri, Oct 14, 2016 at 9:50 AM, Isuru Perera  wrote:
>>
>>> +1 for using a YAML file instead of a properties file.
>>>
>>> On Fri, Oct 14, 2016 at 8:45 AM, Imesh Gunaratne  wrote:
>>>
>>>> I would like to propose to use a single YAML file for each distribution
>>>> (product/profile) to make the configuration process easier.
>>>>
>>>> I understand that we are trying to do something similar using a
>>>> properties file (by overriding configurations in separate files), however
>>>> IMO a properties file might not suite well for this purpose. A YAML file or
>>>> any other type of a file which is more readable and designed for managing
>>>> hierarchical data structures would work well. More importantly having a
>>>> single configuration file would make the configuration process more simpler
>>>> and clean. WDYT?
>>>>
>>>> Thanks
>>>>
>>>>
>>>> On Thursday, October 13, 2016, Sidath Weerasinghe 
>>>> wrote:
>>>>
>>>>> Hi Jayanga,
>>>>>
>>>>> What are the most frequently changing configurations in C5 which are
>>>>> going to store in the deployment.properties" file ?
>>>>>
>>>>> On Thu, Oct 13, 2016 at 5:07 PM, Jayanga Dissanayake >>>> > wrote:
>>>>>
>>>>>> Hi All,
>>>>>>
>>>>>> With C5, we introduced "ConfigResolver" which enhances the user
>>>>>> experience in changing configuration values. With the previous C4x
>>>>>> approach, users had to know where the configuration files are and to,
>>>>>> change several configuration files to get the product working in some
>>>>>> scenarios.
>>>>>>
>>>>>> With "ConfigResolver" it allows us to have more frequently changing
>>>>>> configurations in one location "deployment.properties" file.
>>>>>>
>>>>>> A product has set of configurations that are needed to be changed in
>>>>>> the deployments and there are some other configurations that we don't
>>>>>> change unless there is a complex situation. Hence, ideally,
>>>>>> deployment.properties file should contain only the configurations that 
>>>>>> are
>>>>>> frequently used and can add more entries if a requirement arise.
>>>>>>
>>>>>> But with the requirements coming in with the "profile" support [1].
>>>>>> we have to rethink the way config resolver handle the configuration 
>>>>>> files.
>>>>>>
>>>>>> eg:
>>>>>> 1. We need to enable indexing in API store and publisher, not in
>>>>>> other profiles.
>>>>>> 2. Enabling certain handlers in particular profiles.
>>>>>>
>>>>>> At present, there is no configuration to enable/disable these
>>>>>> features. We have to rethink the way we define c

Re: [Architecture] Configuration files in C5

2016-10-14 Thread Imesh Gunaratne
On Fri, Oct 14, 2016 at 1:44 PM, Imesh Gunaratne  wrote:

> On Fri, Oct 14, 2016 at 11:14 AM, Jayanga Dissanayake 
> wrote:
>
>>
>> @Imesh/@Azeez: I also believe that merging all the configurations into
>> one file would complicate the configuration process.
>>
>
> ​Yes, it might be complicated if we were to add configurations of 30 files
> into one. However the reality is bit different, please see below:
>
[Correction]

https://github.com/wso2/puppet-modules/tree/master/hieradata/dev/wso2/wso2am/1.10.0/kubernetes
Thanks

>
>
>> Thanks,
>> Jayanga.
>>
>> *Jayanga Dissanayake*
>> Associate Technical Lead
>> WSO2 Inc. - http://wso2.com/
>> lean . enterprise . middleware
>> email: jaya...@wso2.com
>> mobile: +94772207259
>> <http://wso2.com/signature>
>>
>> On Fri, Oct 14, 2016 at 10:50 AM, Afkham Azeez  wrote:
>>
>>> I think Imesh's suggestion merges all the config files and complicates
>>> stuff a lot. With the deployment.properties file we are including only the
>>> bits that most users will be concerned about and will provide a simple way
>>> to configure such stuff.
>>>
>>> On Fri, Oct 14, 2016 at 9:50 AM, Isuru Perera  wrote:
>>>
>>>> +1 for using a YAML file instead of a properties file.
>>>>
>>>> On Fri, Oct 14, 2016 at 8:45 AM, Imesh Gunaratne 
>>>> wrote:
>>>>
>>>>> I would like to propose to use a single YAML file for each
>>>>> distribution (product/profile) to make the configuration process easier.
>>>>>
>>>>> I understand that we are trying to do something similar using a
>>>>> properties file (by overriding configurations in separate files), however
>>>>> IMO a properties file might not suite well for this purpose. A YAML file 
>>>>> or
>>>>> any other type of a file which is more readable and designed for managing
>>>>> hierarchical data structures would work well. More importantly having a
>>>>> single configuration file would make the configuration process more 
>>>>> simpler
>>>>> and clean. WDYT?
>>>>>
>>>>> Thanks
>>>>>
>>>>>
>>>>> On Thursday, October 13, 2016, Sidath Weerasinghe 
>>>>> wrote:
>>>>>
>>>>>> Hi Jayanga,
>>>>>>
>>>>>> What are the most frequently changing configurations in C5 which are
>>>>>> going to store in the deployment.properties" file ?
>>>>>>
>>>>>> On Thu, Oct 13, 2016 at 5:07 PM, Jayanga Dissanayake <
>>>>>> jaya...@wso2.com> wrote:
>>>>>>
>>>>>>> Hi All,
>>>>>>>
>>>>>>> With C5, we introduced "ConfigResolver" which enhances the user
>>>>>>> experience in changing configuration values. With the previous C4x
>>>>>>> approach, users had to know where the configuration files are and to,
>>>>>>> change several configuration files to get the product working in some
>>>>>>> scenarios.
>>>>>>>
>>>>>>> With "ConfigResolver" it allows us to have more frequently changing
>>>>>>> configurations in one location "deployment.properties" file.
>>>>>>>
>>>>>>> A product has set of configurations that are needed to be changed in
>>>>>>> the deployments and there are some other configurations that we don't
>>>>>>> change unless there is a complex situation. Hence, ideally,
>>>>>>> deployment.properties file should contain only the configurations that 
>>>>>>> are
>>>>>>> frequently used and can add more entries if a requirement arise.
>>>>>>>
>>>>>>> But with the requirements coming in with the "profile" support [1].
>>>>>>> we have to rethink the way config resolver handle the configuration 
>>>>>>> files.
>>>>>>>
>>>>>>> eg:
>>>>>>> 1. We need to enable indexing in API store and publisher, not in
>>>>>>> other profiles.
>>>>>>> 2. Enabling certain handlers in particular profiles.
>>>>>>>
>>>>>>> At present, there is no configuration to enable/disable these

Re: [Architecture] Implementation of c5 multitenancy

2016-11-15 Thread Imesh Gunaratne
On Tue, Nov 15, 2016 at 5:00 PM, Lasantha Samarakoon 
wrote:

> Hi all,
>
> We are currently working on implementing multitenancy for Carbon-5 based
> products. In order to implement this we are creating Kubernetes namespaces
> for each tenant (namespaces provide isolation between tenants and the same
> approach has been used by WSO2 Cloud as well).
>
> In most of the customer use cases, the tenants can be defined at
> deployment time, but in order to cater to SaaS requirements the tenants have
> to be created dynamically. To achieve this we have built a REST API using
> Microservices[1] (please find the attached Swagger definition of the API).
> This API provides endpoints for basic CRUD operations on tenants on a
> Kubernetes cluster.
>

Great work Lasantha! Can you please share the API resource/method list in
text format?

>
> So in order to proceed with this what are the options to integrate this
> with the platform? Do we need to implement a UUF component and/or a CLI as
> well?
>

Maybe we can write a bash script first and later move to a CLI/UI.

I think we would also need to expose methods for automating the deployment
process once tenants/namespaces are created. Each WSO2 product would
release the required K8S definitions together with the product releases.
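
To make that concrete, below is a rough sketch (not Lasantha's actual
implementation; the resource paths are simplified, e.g. the tenant name is
taken as a path parameter instead of a JSON payload) of how an MSF4J
microservice could back such an API with Kubernetes namespaces through the
Fabric8 client:

import io.fabric8.kubernetes.api.model.Namespace;
import io.fabric8.kubernetes.api.model.NamespaceBuilder;
import io.fabric8.kubernetes.client.DefaultKubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClient;
import org.wso2.msf4j.MicroservicesRunner;

import javax.ws.rs.DELETE;
import javax.ws.rs.GET;
import javax.ws.rs.POST;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.core.Response;
import java.util.List;
import java.util.stream.Collectors;

@Path("/tenants")
public class TenantService {

    private final KubernetesClient client = new DefaultKubernetesClient();

    @GET
    public Response listTenants() {
        // Each tenant is represented by a Kubernetes namespace.
        List<String> tenants = client.namespaces().list().getItems().stream()
                .map(ns -> ns.getMetadata().getName())
                .collect(Collectors.toList());
        return Response.ok(tenants).build();
    }

    @POST
    @Path("/{name}")
    public Response addTenant(@PathParam("name") String name) {
        Namespace ns = new NamespaceBuilder()
                .withNewMetadata().withName(name).endMetadata()
                .build();
        client.namespaces().create(ns);
        return Response.status(Response.Status.CREATED).build();
    }

    @DELETE
    @Path("/{name}")
    public Response deleteTenant(@PathParam("name") String name) {
        client.namespaces().withName(name).delete();
        return Response.ok().build();
    }

    public static void main(String[] args) {
        new MicroservicesRunner().deploy(new TenantService()).start();
    }
}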

​Thanks​

>
> [1] https://github.com/lasanthaS/wso2-carbon5-multitenancy-api
>
>
> Regards,
>
> *Lasantha Samarakoon* | Software Engineer
> WSO2, Inc.
> #20, Palm Grove, Colombo 03, Sri Lanka
> Mobile: +94 (71) 214 1576
> Email:  lasant...@wso2.com
> Web:www.wso2.com
>
> lean . enterprise . middleware
>



-- 
*Imesh Gunaratne*
Software Architect
WSO2 Inc: http://wso2.com
T: +94 11 214 5345 M: +94 77 374 2057
W: https://medium.com/@imesh TW: @imesh
lean. enterprise. middleware
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] Implementation of c5 multitenancy

2016-11-22 Thread Imesh Gunaratne
Hi Lasantha,

Shall we move what you have implemented so far to wso2 multitenancy
repository [1]? Maybe we can use a new branch called 5.0.0.

[1] https://github.com/wso2/carbon-multitenancy

Thanks

On Mon, Nov 21, 2016 at 12:53 PM, Lasantha Samarakoon 
wrote:

> Hi all,
>
> Here is the summary of REST resources available in the above Swagger
> definition (For the readability).
>
> GET /tenants - Get all tenants
> POST /tenants - Add a new tenant
> GET /tenants/{name} - Get a tenant
> DELETE /tenants/{name} - Delete a tenant
>
> Regards,
>
>
> *Lasantha Samarakoon* | Software Engineer
> WSO2, Inc.
> #20, Palm Grove, Colombo 03, Sri Lanka
> Mobile: +94 (71) 214 1576
> Email:  lasant...@wso2.com
> Web:www.wso2.com
>
> lean . enterprise . middleware
>
> On Mon, Nov 21, 2016 at 9:58 AM, Lahiru Cooray  wrote:
>
>> Few more suggestions to consider..
>>
>>- Get all tenants : Don't we need to add limit/offset to support
>>pagination?
>>- Get a tenant by name : Response code 400 can be introduced if the
>>name is invalid
>>- Create new tenant: Response code 400 needed to notify the errors in
>>payload.
>>- Delete tenant: Response code 400 can be introduced if the name is
>>invalid/ Can't we introduced 412 if the preconditions are failed to delete
>>a tenant?
>>
>>
>> On Mon, Nov 21, 2016 at 8:44 AM, Joseph Fonseka  wrote:
>>
>>> Hi Lashantha
>>>
>>> Few corrections according to WSO2 REST API guidelines [1].
>>>
>>> 1. The POST should return 201 Created response.
>>> 2. And as a practice we do not use 500 error codes in API interface.
>>> 3. If the tenant is already exist you can send a 400 Bad Rest with error
>>> json explaining what went wrong.
>>>
>>> If you want an example please refer [2] and [3].
>>>
>>> Best Regards
>>> Jo
>>>
>>>
>>>
>>> [1] http://wso2.com/whitepapers/wso2-rest-apis-design-guidelines/
>>> [2] https://raw.githubusercontent.com/wso2/carbon-apimgt/v6.
>>> 0.4/components/apimgt/org.wso2.carbon.apimgt.rest.api.store/
>>> src/main/resources/store-api.yaml
>>> [3] https://docs.wso2.com/display/AM200/apidocs/store/
>>>
>>> On Fri, Nov 18, 2016 at 1:27 PM, Dilan Udara Ariyaratne >> > wrote:
>>>
>>>> Hi Lasantha,
>>>>
>>>> I did go through the list of REST APIs that you have defined in the
>>>> swagger doc.
>>>> But I have not found any API for doing an update to an existing tenant
>>>> as well as deactivation.
>>>>
>>>> Are we skipping those capabilities found in C4 based multi-tenancy,
>>>> here ?
>>>>
>>>> Regards,
>>>> Dilan.
>>>>
>>>>
>>>> *Dilan U. Ariyaratne*
>>>> Senior Software Engineer
>>>> WSO2 Inc. <http://wso2.com/>
>>>> Mobile: +94766405580 <%2B94766405580>
>>>> lean . enterprise . middleware
>>>>
>>>>
>>>> On Wed, Nov 16, 2016 at 11:12 AM, Imesh Gunaratne 
>>>> wrote:
>>>>
>>>>> On Tue, Nov 15, 2016 at 5:00 PM, Lasantha Samarakoon <
>>>>> lasant...@wso2.com> wrote:
>>>>>
>>>>>> Hi all,
>>>>>>
>>>>>> We are currently working on implementing multitenancy for Carbon-5
>>>>>> based products. In order to implement this we are creating Kubenetes
>>>>>> namespaces for each tenant (namespaces provides isolation between tenants
>>>>>> and the same approach has been used by WSO2 cloud as well).
>>>>>>
>>>>>> In most of the customer use cases, the tenants can be defined at the
>>>>>> deployment time, but in order to cater SaaS requirements the tenants has 
>>>>>> to
>>>>>> be created dynamically. To achieve this we have built a REST API using
>>>>>> Microservices[1] (please find the attached Swaggger definition of the 
>>>>>> API).
>>>>>> This API provides a endpoints for basic CRUD operations on tenants on
>>>>>> Kubenetes cluster.
>>>>>>
>>>>>
>>>>> ​Great work Lasantha! Can you please share the API resource/method
>>>>> list in text​
>>>>>
>>>>> ​format?​
>>>>>
>>>&

Re: [Architecture] Deployment automation for Carbon 5 based products

2016-12-05 Thread Imesh Gunaratne
Hi Lasantha,

Great work! Please find few comments inline:

On Mon, Dec 5, 2016 at 5:31 PM, Lasantha Samarakoon 
wrote:
>
>
> Following endpoints are available in this API (Please see the attached
> Swagger definition for detailed description).
>
> *POST /deployments *
> - Payload: Product model
>
> *DELETE /deployments*
> - Payload: Product model
>
> Product model:
> {
> "product":"esb",
> "version":"4.9.0",
> "pattern":1,
> "platform":"kubernetes"
> }
>

I think we might need to use the same term for the object model as the one
given for the API resource. In this scenario maybe we can call it deployment.
WDYT?


We would also need to expose two API resources for querying deployments:

GET /deployments - Returns all deployments
GET /deployments/{id} - Returns the deployment that matches the {id}

Note the {id} parameter in the second API resource. I think we would need
to add an id property to the deployment definition and use the identified
generated by the container cluster manager.

*How the API works?*
>
> Kubernetes artifacts which is used to deploy the product in a container
> environment needs to be hosted in the host environment. 'KUBERNETES_HOME'
> environment variable contains the path to this Kubernetes artifacts
> directory. Directory structure of the KUBERNETES_HOME is as follows.
>
> [KUBERNETES_HOME]/[PRODUCT_NAME]/[PRODUCT_VERSION]/[
> PATTERN]/[PRODUCT_PROFILE].yaml
>

Shall we change this to read K8S artifacts from a folder inside the
repository/deployment folder (we need to check the exact folder path from the
C5 product structure)?

Thanks


-- 
*Imesh Gunaratne*
Software Architect
WSO2 Inc: http://wso2.com
T: +94 11 214 5345 M: +94 77 374 2057
W: https://medium.com/@imesh TW: @imesh
lean. enterprise. middleware
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] Deployment automation for Carbon 5 based products

2016-12-06 Thread Imesh Gunaratne
Correction: s/identified generated/identifier generated/g

Thanks

On Tue, Dec 6, 2016 at 12:02 PM, Imesh Gunaratne  wrote:

> Hi Lasantha,
>
> Great work! Please find few comments inline:
>
> On Mon, Dec 5, 2016 at 5:31 PM, Lasantha Samarakoon 
> wrote:
>>
>>
>> Following endpoints are available in this API (Please see the attached
>> Swagger definition for detailed description).
>>
>> *POST /deployments *
>> - Payload: Product model
>>
>> *DELETE /deployments*
>> - Payload: Product model
>>
>> Product model:
>> {
>> "product":"esb",
>> "version":"4.9.0",
>> "pattern":1,
>> "platform":"kubernetes"
>> }
>>
>
> ​I think we might need to use the same term given for the API resource for
> the object model. In this scenario maybe we can call it deployment. WDYT?​
>
>
> ​We would also need to expose two API resources for queriing deployments:
>
> GET /deployments - Returns all deployments
> GET /deployments/{id} - Returns the deployment that matches the {id}
>
> Note the {id} parameter in the second API resource. I think we would need
> to add an id property to the deployment definition and use the identified
> generated by the container cluster manager.
>
> *How the API works?*
>>
>> Kubernetes artifacts which is used to deploy the product in a container
>> environment needs to be hosted in the host environment. 'KUBERNETES_HOME'
>> environment variable contains the path to this Kubernetes artifacts
>> directory. Directory structure of the KUBERNETES_HOME is as follows.
>>
>>     [KUBERNETES_HOME]/[PRODUCT_NAME]/[PRODUCT_VERSION]/[PATTERN]
>> /[PRODUCT_PROFILE].yaml
>>
>
> ​Shall we change this to read K8S artifacts from a folder inside
> repository/deployment folder (need to check the exact folder path from C5
> product structure)?
>
> Thanks​
>
>
> --
> *Imesh Gunaratne*
> Software Architect
> WSO2 Inc: http://wso2.com
> T: +94 11 214 5345 M: +94 77 374 2057 <+94%2077%20374%202057>
> W: https://medium.com/@imesh TW: @imesh
> lean. enterprise. middleware
>
>


-- 
*Imesh Gunaratne*
Software Architect
WSO2 Inc: http://wso2.com
T: +94 11 214 5345 M: +94 77 374 2057 <+94%2077%20374%202057>
W: https://medium.com/@imesh TW: @imesh
lean. enterprise. middleware
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] Deployment automation for Carbon 5 based products

2016-12-07 Thread Imesh Gunaratne
Hi Lasantha,

On Thu, Dec 8, 2016 at 10:17 AM, Lasantha Samarakoon 
wrote:

> Hi Imesh,
>
> Thanks for the feedback. I modified the API so that now the following two
> GET endpoints are also available.
>
> *GET /deployments*
>
> Returns all of the current deployments.
>
>
> *GET /deployments/{id}*
>
> - Path parameter: {id} - ID generated by Kubernetes for a deployment
>
> Returns a single deployment identified by the ID
>
> WRT to the {id}, ATM we are  using the ID auto generated by the container
> cluster manager and there is no ID property introduced in the deployment
> definition. I have added the ID property into the Deployment model as well.
>

+1 Great work! I also reviewed the PR and added some comments,
please have a look.

>
> In addition to that I removed the platform property from the Object model
> and moved that to environment variable as we have discussed offline. The
> reason for that is basically the container cluster manager won't get
> changed for a particular environment dynamically so that it will be a
> unnecessary property in API payload. The new Deployment model is as follows.
>
> {
> "id": ""
> "product":"esb",
> "version":"4.9.0",
> "pattern":1,
> }
>
>
​A very good point, yes we might not need to expose the platform property
from this API as it would be fixed for a given deployment. +1 for providing
it via an environment variable.

Thanks

>
>
> Thanks,
>
> *Lasantha Samarakoon* | Software Engineer
> WSO2, Inc.
> #20, Palm Grove, Colombo 03, Sri Lanka
> Mobile: +94 (71) 214 1576 <+94%2071%20214%201576>
> Email:  lasant...@wso2.com
> Web:www.wso2.com
>
> lean . enterprise . middleware
>
> On Thu, Dec 8, 2016 at 10:00 AM, Lasantha Samarakoon 
> wrote:
>
>> Hi Pubudu,
>>
>> Agree on your thoughts. But since the infrastructure is basically fixed
>> and there won't be multiple versions of the Kubernetes running within the
>> same environment I don't think we will have such a requirement. But
>> definitely when deploying products using this API, we will need to go with
>> the compatible Kubernetes platform.
>>
>>
>> Regards,
>>
>> *Lasantha Samarakoon* | Software Engineer
>> WSO2, Inc.
>> #20, Palm Grove, Colombo 03, Sri Lanka
>> Mobile: +94 (71) 214 1576 <+94%2071%20214%201576>
>> Email:  lasant...@wso2.com
>> Web:www.wso2.com
>>
>> lean . enterprise . middleware
>>
>> On Tue, Dec 6, 2016 at 2:18 PM, Pubudu Gunatilaka 
>> wrote:
>>
>>> Hi Lasantha,
>>>
>>> How do we handle multiple versions in K8s? There could be API changes in
>>> K8 major versions. I think we need to consider the platform version as well
>>> when deploying the products.
>>>
>>> Thank you!
>>>
>>> On Tue, Dec 6, 2016 at 12:03 PM, Imesh Gunaratne  wrote:
>>>
>>>> Correction: s/identified generated/identifier generated/g
>>>>
>>>> Thanks
>>>>
>>>> On Tue, Dec 6, 2016 at 12:02 PM, Imesh Gunaratne 
>>>> wrote:
>>>>
>>>>> Hi Lasantha,
>>>>>
>>>>> Great work! Please find few comments inline:
>>>>>
>>>>> On Mon, Dec 5, 2016 at 5:31 PM, Lasantha Samarakoon <
>>>>> lasant...@wso2.com> wrote:
>>>>>>
>>>>>>
>>>>>> Following endpoints are available in this API (Please see the
>>>>>> attached Swagger definition for detailed description).
>>>>>>
>>>>>> *POST /deployments *
>>>>>> - Payload: Product model
>>>>>>
>>>>>> *DELETE /deployments*
>>>>>> - Payload: Product model
>>>>>>
>>>>>> Product model:
>>>>>> {
>>>>>> "product":"esb",
>>>>>> "version":"4.9.0",
>>>>>> "pattern":1,
>>>>>> "platform":"kubernetes"
>>>>>> }
>>>>>>
>>>>>
>>>>> ​I think we might need to use the same term given for the API resource
>>>>> for the object model. In this scenario maybe we can call it deployment.
>>>>> WDYT?​
>>>>>
>>>>>
>>>>> ​We would also need to expose two API resources for queriing
>>>>> deployments

Re: [Architecture] Deployment automation for Carbon 5 based products

2016-12-07 Thread Imesh Gunaratne
Hi Pubudu,

On Tue, Dec 6, 2016 at 2:18 PM, Pubudu Gunatilaka  wrote:

> Hi Lasantha,
>
> How do we handle multiple versions in K8s? There could be API changes in
> K8 major versions. I think we need to consider the platform version as well
> when deploying the products.
>

A very good point! In this scenario the underlying K8S platform that hosts
the SaaS application would be fixed. Therefore we might not need to send
the K8S API version via this API.
We have used the Fabric8 SDK in this API for talking to the K8S API, so K8S
API version compatibility will be handled by that.
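
As a reference point, the artifact application step could look roughly like
the sketch below (assuming the Fabric8 Kubernetes client mentioned above and
the KUBERNETES_HOME directory structure described earlier in the thread;
error handling is omitted):

import io.fabric8.kubernetes.client.DefaultKubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClient;

import java.io.FileInputStream;
import java.io.InputStream;
import java.nio.file.Paths;

public class DeploymentApplier {

    // Applies [KUBERNETES_HOME]/[product]/[version]/[pattern]/[profile].yaml
    public static void apply(String product, String version, String pattern, String profile)
            throws Exception {
        String home = System.getenv("KUBERNETES_HOME");
        String artifact = Paths.get(home, product, version, pattern,
                profile + ".yaml").toString();

        try (KubernetesClient client = new DefaultKubernetesClient();
             InputStream in = new FileInputStream(artifact)) {
            // Fabric8 parses the YAML and creates (or updates) the resources it defines,
            // keeping this API decoupled from the underlying K8S API version.
            client.load(in).createOrReplace();
        }
    }
}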

​Thanks​


> Thank you!
>
> On Tue, Dec 6, 2016 at 12:03 PM, Imesh Gunaratne  wrote:
>
>> Correction: s/identified generated/identifier generated/g
>>
>> Thanks
>>
>> On Tue, Dec 6, 2016 at 12:02 PM, Imesh Gunaratne  wrote:
>>
>>> Hi Lasantha,
>>>
>>> Great work! Please find few comments inline:
>>>
>>> On Mon, Dec 5, 2016 at 5:31 PM, Lasantha Samarakoon 
>>> wrote:
>>>>
>>>>
>>>> Following endpoints are available in this API (Please see the attached
>>>> Swagger definition for detailed description).
>>>>
>>>> *POST /deployments *
>>>> - Payload: Product model
>>>>
>>>> *DELETE /deployments*
>>>> - Payload: Product model
>>>>
>>>> Product model:
>>>> {
>>>> "product":"esb",
>>>> "version":"4.9.0",
>>>> "pattern":1,
>>>> "platform":"kubernetes"
>>>> }
>>>>
>>>
>>> ​I think we might need to use the same term given for the API resource
>>> for the object model. In this scenario maybe we can call it deployment.
>>> WDYT?​
>>>
>>>
>>> ​We would also need to expose two API resources for queriing deployments:
>>>
>>> GET /deployments - Returns all deployments
>>> GET /deployments/{id} - Returns the deployment that matches the {id}
>>>
>>> Note the {id} parameter in the second API resource. I think we would
>>> need to add an id property to the deployment definition and use the
>>> identified generated by the container cluster manager.
>>>
>>> *How the API works?*
>>>>
>>>> Kubernetes artifacts which is used to deploy the product in a container
>>>> environment needs to be hosted in the host environment. '
>>>> KUBERNETES_HOME' environment variable contains the path to this
>>>> Kubernetes artifacts directory. Directory structure of the KUBERNETES_HOME
>>>> is as follows.
>>>>
>>>> [KUBERNETES_HOME]/[PRODUCT_NAME]/[PRODUCT_VERSION]/[PATTERN]
>>>> /[PRODUCT_PROFILE].yaml
>>>>
>>>
>>> ​Shall we change this to read K8S artifacts from a folder inside
>>> repository/deployment folder (need to check the exact folder path from C5
>>> product structure)?
>>>
>>> Thanks​
>>>
>>>
>>> --
>>> *Imesh Gunaratne*
>>> Software Architect
>>> WSO2 Inc: http://wso2.com
>>> T: +94 11 214 5345 M: +94 77 374 2057 <+94%2077%20374%202057>
>>> W: https://medium.com/@imesh TW: @imesh
>>> lean. enterprise. middleware
>>>
>>>
>>
>>
>> --
>> *Imesh Gunaratne*
>> Software Architect
>> WSO2 Inc: http://wso2.com
>> T: +94 11 214 5345 M: +94 77 374 2057 <+94%2077%20374%202057>
>> W: https://medium.com/@imesh TW: @imesh
>> lean. enterprise. middleware
>>
>>
>> ___
>> Architecture mailing list
>> Architecture@wso2.org
>> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>>
>>
>
>
> --
> *Pubudu Gunatilaka*
> Committer and PMC Member - Apache Stratos
> Software Engineer
> WSO2, Inc.: http://wso2.com
> mobile : +94774078049 <%2B94772207163>
>
>
> ___
> Architecture mailing list
> Architecture@wso2.org
> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>
>


-- 
*Imesh Gunaratne*
Software Architect
WSO2 Inc: http://wso2.com
T: +94 11 214 5345 M: +94 77 374 2057 <+94%2077%20374%202057>
W: https://medium.com/@imesh TW: @imesh
lean. enterprise. middleware
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] [APIM] [C5] Rest API Support for Importing and Exporting APIs between Multiple Environments

2017-01-15 Thread Imesh Gunaratne
Hi Isuru,

The proposed design looks good! One question: will we also be able to
export and import API subscriptions in a similar way from one environment
to another, assuming that both environments are connected to the same user
store?

Thanks
Imesh

On Tue, Jan 10, 2017 at 11:22 AM, Isuru Haththotuwa  wrote:

> Hi Devs,
>
> This is to discuss subject.
>
> *Requirement:*
>
> Once an API is exported, its possible to be directly imported in to
> another APIM deployment in a separate environment. For an admin user, it
> should be possible to export all APIs in one deployment to another one.
>
> The following information will be available in exported data, related to a
> single API:
>
>- Docs
>- API definition (JSON formatted)
>- Swagger file (JSON formatted)
>- Gateway configuration
>- API thumbnails (image)
>
> Several new resources will be added to the publisher rest API to cater
> this, as follows:
>
> *GET **/apis/{apiId}**/export-config*
>
>- Produces a form/multipart output as a zip archive, which will have
>the following structure and which will comprise of the above mentioned
>items:
>
> <api-name>-<api-version>.zip
>    |
>    | --- Docs
>    |      |
>    |      | --- <doc-name>
>    |             |
>    |             | --- documentation metadata (json)
>    |             | --- documentation content (optional)
>    |
>    | --- Gateway-Config
>    |      |
>    |      | --- gateway config file
>    |
>    | --- thumbnail file
>    |
>    | --- api definition (json)
>    |
>    | --- swagger definition (json)
>
> Note that there can be multiple docs for a single API.
>
> *GET **/apis/export-config*
>
>- Produces a zip archive comprising of the above structure for each
>API in the system. This operation will be permitted for admin users only.
>
>
> *POST **/apis**/{apiId}**/import-config*
>
>- Consumes the same zip archive produced by the /{apiId/}export-config
>resource as a form/multipart input, extracts and inserts the relevant data.
>
>
> *POST *
> */apis/import-config*
>
>- Consumes the same zip archive produced by the /export-config
>resource as a form/multipart input, extracts and inserts the relevant data
>for all APIs. Should be permitted only for admin users.
>
>
> This does not consider the endpoint information [1] yet. Would need to
> incorporate that here in a suitable way.
>
> Please share your feedback.
>
> [1]. [Architecture] [APIM][C5] - Definining Endpoint for Resource from
> Rest API
>
> --
> Thanks and Regards,
>
> Isuru H.
> +94 716 358 048 <071%20635%208048>* <http://wso2.com/>*
>
>
>
> ___
> Architecture mailing list
> Architecture@wso2.org
> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>
>


-- 
*Imesh Gunaratne*
Software Architect
WSO2 Inc: http://wso2.com
T: +94 11 214 5345 M: +94 77 374 2057
W: https://medium.com/@imesh TW: @imesh
lean. enterprise. middleware
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] [C5, MSF4J] Carbon C5 - Server Configuration Model

2017-02-21 Thread Imesh Gunaratne
On Tue, Feb 21, 2017 at 6:33 PM, Vidura Nanayakkara 
wrote:

>
> In order to create the above configuration, I may write any of the
> below-mentioned code segments.
>
> 1)
>
> @Element(description = "Listener configurations")
> private List listenerConfigurations =
> Arrays.asList(new HttpListenerConfiguration(), new 
> HttpsListenerConfiguration());
>
>
> Since the plugin only looks at the argument type I will not be able to
> have the above-mentioned configuration (Plugin will check for annotations
> inside the "Configuration" class and therefore only the elements in the
> "Configuration" class will be written to the configuration file)
>
> Shouldn't that be the intended behaviour?

In the above sample, if *listenerConfigurations* is a repeatable element in
a configuration file, it might need to have the same set of properties in
all the rows. This means that when the *@Element* annotation is added to a
private variable, the mapping between the configuration file and the domain
model might need to use the declared data type of that variable instead of
the data type of the assigned value(s). WDYT?
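
As a small sketch of that point (stand-in annotation and classes below, not
the real carbon-config types), declaring the element type on the field gives
the plugin a single type to read the repeatable element's properties from,
independent of the runtime types of the assigned default values:

import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.util.Arrays;
import java.util.List;

public class TypedElementSketch {

    // Stand-in for the real configuration annotation (name and package assumed).
    @Retention(RetentionPolicy.RUNTIME)
    @interface Element {
        String description();
    }

    // Concrete element type: every row of the repeatable element shares these properties.
    static class ListenerConfiguration {
        int port;
        boolean secure;

        ListenerConfiguration(int port, boolean secure) {
            this.port = port;
            this.secure = secure;
        }
    }

    // The declared type <ListenerConfiguration> - not the runtime types of the
    // assigned defaults - is what the plugin could use to derive the repeatable element.
    @Element(description = "Listener configurations")
    private List<ListenerConfiguration> listenerConfigurations =
            Arrays.asList(new ListenerConfiguration(9763, false),
                          new ListenerConfiguration(9443, true));
}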

​Thanks​


> 2)
>
> @Element(description = "Listener configurations")
> private List listenerConfigurations =
> Arrays.asList(new HttpListenerConfiguration(), new 
> HttpsListenerConfiguration());
>
> Since the plugin only looks at the argument type I will not be able to
> have the above-mentioned configuration (Plugin will check for annotations
> inside the "Object" class). Furthermore since "Object" class doesn't have
> any annotation, I will not have any of the configuration elements at all.
>
> 3)
>
> @Element(description = "Listener configurations")
> private List listenerConfigurations =
> Arrays.asList(new HttpListenerConfiguration(), new 
> HttpsListenerConfiguration());
>
>
> This will throw an exception since the argument type is not stated within
> the angle brackets (should take as Object by default right?)
>
> *Suggestion*
>
>- Consider the instance rather than the reference type when creating
>the configuration file to solve the above problems.
>- Iterate through inherited classes when creating the configuration
>file. This way we can avoid duplicating the same code in multiple places
>and help solve the problem stated in (1)
>
>
> WDYT?
>
> [1] carbon-kernel issue 1285
> <https://github.com/wso2/carbon-kernel/issues/1285>
>
> Best Regards,
>
> *Vidura Nanayakkara*
> Software Engineer
>
> Email : vidu...@wso2.com
> Mobile : +94 (0) 717 919277 <+94%2071%20791%209277>
> Web : http://wso2.com
> Blog : https://medium.com/@viduran <http://wso2.com/>
> Twitter : http://twitter.com/viduranana
> LinkedIn : https://lk.linkedin.com/in/vidura-nanayakkara
> <http://wso2.com/>
>



-- 
*Imesh Gunaratne*
Software Architect
WSO2 Inc: http://wso2.com
T: +94 11 214 5345 M: +94 77 374 2057
W: https://medium.com/@imesh TW: @imesh
lean. enterprise. middleware
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] DAS Support for Mesos

2017-02-22 Thread Imesh Gunaratne
Hi Sachith,

On Wed, Feb 22, 2017 at 2:33 PM, Sachith Withana  wrote:
>
>
> My question is, are we going to support DCOS[1] or Apache Mesos for the
> Docker environment?
> DCOS is a commercialized version of Mesos/Mesosphere and seems to be
> widely used.
>

​I think it would be better to use DC/OS. Apache Mesos is the core of the
DC/OS container cluster manager and using it standalone might not be
meaningful.

BTW DC/OS is not a commercial offering; it's open source and available for
free.

Thanks
Imesh


> What are we planning to use in our docker deployments? This would dominate
> which one we choose as well.
>
> [1] https://dcos.io/
>
> Thanks,
> Sachith
> --
> Sachith Withana
> Software Engineer; WSO2 Inc.; http://wso2.com
> E-mail: sachith AT wso2.com
> M: +94715518127 <+94%2071%20551%208127>
> Linked-In: <http://goog_416592669>https://lk.linkedin.com/in/
> sachithwithana
>



-- 
*Imesh Gunaratne*
Software Architect
WSO2 Inc: http://wso2.com
T: +94 11 214 5345 M: +94 77 374 2057
W: https://medium.com/@imesh TW: @imesh
lean. enterprise. middleware
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] DAS Support for Mesos

2017-02-23 Thread Imesh Gunaratne
On Thu, Feb 23, 2017 at 11:54 AM, Sachith Withana  wrote:

>
> @Imesh
> But it for DC/OS, the minimum requirement for deployment is 16GB memory
> [1].
> Is there any other way we can test this locally? How did you use it?
>

We used DC/OS Vagrant [1] on local machines and also set up a multi-node
cluster on OpenStack using [2].

Regarding the memory requirement, it would depend on the deployment pattern
that we use and what we deploy on top of it. The 1m-1a-1p pattern [3] would
only need 4 GB of memory for running DC/OS; the remainder would need to be
calculated based on the containers that we run.

[1] https://github.com/dcos/dcos-vagrant
[2] https://dcos.io/docs/1.8/administration/installing/custom/gui/
[3]
https://github.com/dcos/dcos-vagrant/blob/master/VagrantConfig-1m-1a-1p.yaml

​Thanks
Imesh


> [1] https://dcos.io/install/
>
> Thanks,
> Sachith
>
> On Wed, Feb 22, 2017 at 5:14 PM, Imesh Gunaratne  wrote:
>
>> Hi Sachith,
>>
>> On Wed, Feb 22, 2017 at 2:33 PM, Sachith Withana 
>> wrote:
>>>
>>>
>>> My question is, are we going to support DCOS[1] or Apache Mesos for the
>>> Docker environment?
>>> DCOS is a commercialized version of Mesos/Mesosphere and seems to be
>>> widely used.
>>>
>>
>> ​I think it would be better to use DC/OS. Apache Mesos is the core of the
>> DC/OS container cluster manager and using it standalone might not be
>> meaningful.
>>
>> BTW DC/OS is not a commericial offering, it's open source and available
>> for free.
>>
>> Thanks
>> Imesh
>>
>>
>>> What are we planning to use in our docker deployments? This would
>>> dominate which one we choose as well.
>>>
>>> [1] https://dcos.io/
>>>
>>> Thanks,
>>> Sachith
>>> --
>>> Sachith Withana
>>> Software Engineer; WSO2 Inc.; http://wso2.com
>>> E-mail: sachith AT wso2.com
>>> M: +94715518127 <+94%2071%20551%208127>
>>> Linked-In: <http://goog_416592669>https://lk.linkedin.com/in/sac
>>> hithwithana
>>>
>>
>>
>>
>> --
>> *Imesh Gunaratne*
>> Software Architect
>> WSO2 Inc: http://wso2.com
>> T: +94 11 214 5345 M: +94 77 374 2057 <+94%2077%20374%202057>
>> W: https://medium.com/@imesh TW: @imesh
>> lean. enterprise. middleware
>>
>>
>
>
> --
> Sachith Withana
> Software Engineer; WSO2 Inc.; http://wso2.com
> E-mail: sachith AT wso2.com
> M: +94715518127 <+94%2071%20551%208127>
> Linked-In: <http://goog_416592669>https://lk.linkedin.com/in/
> sachithwithana
>



-- 
*Imesh Gunaratne*
Software Architect
WSO2 Inc: http://wso2.com
T: +94 11 214 5345 M: +94 77 374 2057
W: https://medium.com/@imesh TW: @imesh
lean. enterprise. middleware
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] [C5] Moving Carbon Configuration and Carbon Sec-Vault to 2 Separate Repositories (Removing from Kernel)

2017-03-06 Thread Imesh Gunaratne
On Fri, Mar 3, 2017 at 12:00 PM, Thusitha Thilina Dayaratne <
thusit...@wso2.com> wrote:

> Rather than having a separate repo for utils I'll look into the
> possibility of moving that to a separate component (same level as core)
> without having cyclic dependencies. If that is possible then we can pack
> that as a new feature or core feature itself. Otherwise lets move that to a
> separate repo.
>
Thusitha has done this change in the following PR:
https://github.com/wso2/carbon-kernel/pull/1318

Thanks

-- 
*Imesh Gunaratne*
Software Architect
WSO2 Inc: http://wso2.com
T: +94 11 214 5345 M: +94 77 374 2057
W: https://medium.com/@imesh TW: @imesh
lean. enterprise. middleware
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] Carbon C5 - Server Configuration Model

2017-03-07 Thread Imesh Gunaratne
use that values directly in the specific product and 
>>>>>> if
>>>>>> some other product is using that component, they have to override it in 
>>>>>> the
>>>>>> deployment.yaml. For example product-is is using component identity-mgt. 
>>>>>> So
>>>>>> what should be the default values for the config files coming from
>>>>>> identity-mgt component ? Are those should be defaulted to the product-is
>>>>>> related values or to the component related values and product-is should
>>>>>> always override them from deployment.yaml.
>>>>>>
>>>>>> Thanks!
>>>>>>
>>>>>> *Jayanga Kaushalya*
>>>>>> Software Engineer
>>>>>> Mobile: +94777860160 <+94%2077%20786%200160>
>>>>>> WSO2 Inc. | http://wso2.com
>>>>>> lean.enterprise.middleware
>>>>>>
>>>>>> On Wed, Nov 30, 2016 at 10:57 AM, Danesh Kuruppu 
>>>>>> wrote:
>>>>>>
>>>>>>> Hi Dilan,
>>>>>>>
>>>>>>> If all user-configurable properties are not readily available in the
>>>>>>>> .yaml file by default, how would a user know which
>>>>>>>> properties are configurable and which are not ?
>>>>>>>>
>>>>>>>
>>>>>>> All the configurable properties and their default values will be
>>>>>>> documented. We are going to create this config document automatically by
>>>>>>> reading the config bean class (using maven plugin).
>>>>>>> We need to decide whether we pack those config documents in the
>>>>>>> product or add to central location (doc page etc)
>>>>>>>
>>>>>>> Thanks
>>>>>>> --
>>>>>>>
>>>>>>> *Danesh Kuruppu*
>>>>>>> Senior Software Engineer | WSO2
>>>>>>>
>>>>>>> Email: dan...@wso2.com
>>>>>>> Mobile: +94 (77) 1690552 <+94%2077%20169%200552>
>>>>>>> Web: WSO2 Inc <https://wso2.com/signature>
>>>>>>>
>>>>>>>
>>>>>>> ___
>>>>>>> Architecture mailing list
>>>>>>> Architecture@wso2.org
>>>>>>> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>> --
>>>>>
>>>>> *Danesh Kuruppu*
>>>>> Senior Software Engineer | WSO2
>>>>>
>>>>> Email: dan...@wso2.com
>>>>> Mobile: +94 (77) 1690552 <+94%2077%20169%200552>
>>>>> Web: WSO2 Inc <https://wso2.com/signature>
>>>>>
>>>>>
>>>>
>>>>
>>>> --
>>>> Sagara Gunathunga
>>>>
>>>> Associate Director / Architect; WSO2, Inc.;  http://wso2.com
>>>> V.P Apache Web Services;http://ws.apache.org/
>>>> Linkedin; http://www.linkedin.com/in/ssagara
>>>> Blog ;  http://ssagara.blogspot.com
>>>>
>>>>
>>>
>>>
>>> --
>>> *Afkham Azeez*
>>> Senior Director, Platform Architecture; WSO2, Inc.; http://wso2.com
>>> Member; Apache Software Foundation; http://www.apache.org/
>>> * <http://www.apache.org/>*
>>> *email: **az...@wso2.com* 
>>> * cell: +94 77 3320919 <+94%2077%20332%200919>blog: *
>>> *http://blog.afkham.org* <http://blog.afkham.org>
>>> *twitter: **http://twitter.com/afkham_azeez*
>>> <http://twitter.com/afkham_azeez>
>>> *linked-in: **http://lk.linkedin.com/in/afkhamazeez
>>> <http://lk.linkedin.com/in/afkhamazeez>*
>>>
>>> *Lean . Enterprise . Middleware*
>>>
>>
>>
>>
>> --
>> Sagara Gunathunga
>>
>> Associate Director / Architect; WSO2, Inc.;  http://wso2.com
>> V.P Apache Web Services;http://ws.apache.org/
>> Linkedin; http://www.linkedin.com/in/ssagara
>> Blog ;  http://ssagara.blogspot.com
>>
>>
>
>
> --
>
> *Danesh Kuruppu*
> Senior Software Engineer | WSO2
>
> Email: dan...@wso2.com
> Mobile: +94 (77) 1690552 <+94%2077%20169%200552>
> Web: WSO2 Inc <https://wso2.com/signature>
>
>
> ___
> Architecture mailing list
> Architecture@wso2.org
> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>
>


-- 
*Imesh Gunaratne*
Software Architect
WSO2 Inc: http://wso2.com
T: +94 11 214 5345 M: +94 77 374 2057
W: https://medium.com/@imesh TW: @imesh
lean. enterprise. middleware
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] Carbon C5 - Server Configuration Model

2017-03-07 Thread Imesh Gunaratne
Thanks for the clarification Danesh! In that situation, we might need to
maintain a default value configuration file per feature or component.

On Wed, Mar 8, 2017 at 10:37 AM, Danesh Kuruppu  wrote:

> Hi Imesh,
>
> Shall we use the same default.yaml to define datasources with default
>>> configuration of the product. because in carbon-datasources, we don't have
>>> default database configurations and there are coming from different
>>> components. but we read datasources configuration from carbon-datasources.
>>> So we need a place to get the default values, if it is not specified in
>>> deployment.yaml.
>>>
>>
>> ​According to the initial discussion we had, may be we can have the
>> default values in the code using annotations. Do we see any problems with
>> that?
>>
>
> The Problem here is, bean classes related to datasources are defined in
> carbon-datasources, but the component doesn't contain any default values.
> It creates databsource objects based on the config files in the datasources
> directory(in C4, it is based on master-datasources.xml, etc) and
> configuration files are created or modified at product level.
>
> e.g.: If APIM needs separate datasource, it adds related configuration to
> the datasource config files. So at runtime, carbon-datasources component
> reads configuration and creates related datasource objects.
>
> With the new config model, it is not mandatory to have those configuration
> in deployment.yaml. So we need to have a place where we can get the default
> values if it is not specified in the deployment.yaml.
>
> Thanks
> Danesh
> --
>
> *Danesh Kuruppu*
> Senior Software Engineer | WSO2
>
> Email: dan...@wso2.com
> Mobile: +94 (77) 1690552 <+94%2077%20169%200552>
> Web: WSO2 Inc <https://wso2.com/signature>
>
>


-- 
*Imesh Gunaratne*
Software Architect
WSO2 Inc: http://wso2.com
T: +94 11 214 5345 M: +94 77 374 2057
W: https://medium.com/@imesh TW: @imesh
lean. enterprise. middleware
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] [C5] [Carbon-Feature-Plugin] Dynamic Creation of carbon.product via a Template

2017-03-08 Thread Imesh Gunaratne
Overall great effort Dilan! The new PR [1] was merged to the master branch
after review.
Maybe we can try this out with IS 6.0.x [2]. WDYT?

[1] https://github.com/wso2/carbon-maven-plugins/pull/61
[2] https://github.com/wso2/product-is

Thanks
Imesh

-- 
*Imesh Gunaratne*
Software Architect
WSO2 Inc: http://wso2.com
T: +94 11 214 5345 M: +94 77 374 2057 <+94%2077%20374%202057>
W: https://medium.com/@imesh TW: @imesh
lean. enterprise. middleware
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


[Architecture] C5 Distribution Model and its Impact to Deployment Process

2017-03-09 Thread Imesh Gunaratne
Hi All,

According to the new C5 packaging structure, we are now planning to ship a
single distribution for each product/solution containing a number of
runtimes:

[image: Inline image 3]

As I understand it, the main goals of this approach are reducing the size of
the downloadable file (without duplicating common binaries in separate
distributions) and making the tryout process much easier.

Nevertheless, if we consider production deployments of such
products/solutions, a typical deployment with HA may look as follows:

[image: Inline image 4]

As illustrated above, at deployment time each runtime cluster would
need a dedicated distribution (by removing unnecessary files), a set of
configurations (maybe using a configuration management module), a
VM/container image, a VM/container orchestrator configuration (a K8S
replica-set, a Marathon application, etc.), and so on. In this model, each
runtime would map 1:1 to these entities.

Most importantly, vendor signed runtime distributions might be needed for
deployment verifications.

Therefore, wouldn't it be better to ship runtime distributions together
with the all-in-one distribution for each product/solution? If so, people
who wish to try out products/solutions can use the all-in-one distribution
and production deployments can use the other set. WDYT?

   - product/solution.zip
   - product-solution-runtime-1.zip
   - product-solution-runtime-2.zip
   - product-solution-runtime-n.zip

Thanks

Imesh

-- 
*Imesh Gunaratne*
Software Architect
WSO2 Inc: http://wso2.com
T: +94 11 214 5345 M: +94 77 374 2057
W: https://medium.com/@imesh TW: @imesh
lean. enterprise. middleware
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] [C5] Moving Carbon Configuration and Carbon Sec-Vault to 2 Separate Repositories (Removing from Kernel)

2017-03-10 Thread Imesh Gunaratne
Hi Vidura,

I think it would be better if we can first move the secure vault code from
the carbon-kernel repository to the new repository with commit history and
then apply the changes you have done. Otherwise, we will lose all history.

I had a chat with Lakshman on this and it seems like he has extracted all
secure-vault related code into a new component and sent a PR [1] but it has
not been merged.

IMO we would need to do the following:

   - First, fix the conflicts and merge [1]. This would bring all
   secure-vault-related code into a new component/folder.
   - Then move the above folder to [2] via a new PR (PR-X).
   - Once PR-X is merged, apply your changes on top of it.

[1] https://github.com/wso2/carbon-kernel/pull/1266
[2] https://github.com/wso2/carbon-secvault

Thanks

On Mon, Mar 6, 2017 at 12:15 PM, Niranjan Karunanandham 
wrote:

> Hi Vidura,
>
> On Mon, Mar 6, 2017 at 11:52 AM, Imesh Gunaratne  wrote:
>
>> On Fri, Mar 3, 2017 at 12:00 PM, Thusitha Thilina Dayaratne <
>> thusit...@wso2.com> wrote:
>>
>>> Rather than having a separate repo for utils I'll look into the
>>> possibility of moving that to a separate component (same level as core)
>>> without having cyclic dependencies. If that is possible then we can pack
>>> that as a new feature or core feature itself. Otherwise, let's move that to a
>>> separate repo.
>>>
>>> Thusitha has done this change in the following PR:
>> https://github.com/wso2/carbon-kernel/pull/1318
>>
> Once this PR is merged, we need to add support for an API that can return
> the current config folder for a particular runtime.
>
>
>>
>>
>> Thanks
>>
>> --
>> *Imesh Gunaratne*
>> Software Architect
>> WSO2 Inc: http://wso2.com
>> T: +94 11 214 5345 M: +94 77 374 2057 <+94%2077%20374%202057>
>> W: https://medium.com/@imesh TW: @imesh
>> lean. enterprise. middleware
>>
>>
> Regards,
> Nira
>
> --
>
>
> *Niranjan Karunanandham*
> Associate Technical Lead - WSO2 Inc.
> WSO2 Inc.: http://www.wso2.com
>
>


-- 
*Imesh Gunaratne*
Software Architect
WSO2 Inc: http://wso2.com
T: +94 11 214 5345 M: +94 77 374 2057
W: https://medium.com/@imesh TW: @imesh
lean. enterprise. middleware
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] [C5] Moving Carbon Configuration and Carbon Sec-Vault to 2 Separate Repositories (Removing from Kernel)

2017-03-12 Thread Imesh Gunaratne
On Fri, Mar 10, 2017 at 7:39 PM, Niranjan Karunanandham 
wrote:

> Hi Vidura,
>
> On Fri, Mar 10, 2017 at 7:27 PM, Vidura Nanayakkara 
> wrote:
>
>> Hi All,
>>
>> We can create a temporary branch from the master branch in Carbon Kernel [1]
>> <https://github.com/wso2/carbon-kernel>, merge Lakshman's PR to that
>> branch, and then move it to Carbon SecVault [2]
>> <https://github.com/wso2/carbon-secvault> (not the master branch - we
>> need to create a new branch there). This way we can preserve
>> the commit history. I will create my pull request to the new branch in
>> Carbon SecVault.
>>
>
> Noted. I have created a separate branch [1] in the kernel and merged Lakshman's
> PR so that it can be moved to carbon-secvault. I have created another branch [2]
> in carbon-secvault, so that you can move this component from the kernel.
>

​+1 Great! Thanks Niranjan!​

>
>
>>
>> [1] Carbon Kernel <https://github.com/wso2/carbon-kernel>
>> [2] Carbon Secure Vault <https://github.com/wso2/carbon-secvault>
>>
>>
>> On Fri, Mar 10, 2017 at 7:11 PM, Niranjan Karunanandham <
>> niran...@wso2.com> wrote:
>>
>>> Hi all,
>>>
>>> On Fri, Mar 10, 2017 at 6:55 PM, Lakshman Udayakantha <
>>> lakshm...@wso2.com> wrote:
>>>
>>>> Hi Imesh,
>>>>
>>>> On Fri, Mar 10, 2017 at 3:54 PM, Imesh Gunaratne 
>>>> wrote:
>>>>
>>>>> Hi Vidura,
>>>>>
>>>>> I think it would be better if we can first move the secure vault code
>>>>> from the carbon-kernel repository to the new repository with commit history
>>>>> and then apply the changes you have done. Otherwise, we will lose all
>>>>> history.
>>>>>
>>>> +1 to preserve history.
>>>>
>>>>>
>>>>> I had a chat with Lakshman on this and it seems like he has extracted
>>>>> all secure-vault related code into a new component and sent a PR [1] but 
>>>>> it
>>>>> has not been merged.
>>>>>
>>>>> IMO we would need to do following:
>>>>>
>>>>>- First, fix conflicts and merge [1]. This would bring all secure
>>>>>vault related code to a new component/folder.
>>>>>
>>>>> I have solved the conflicts and updated the PR.
>>>>
>>> I am -1 for merging this in the kernel master branch because this PR is not
>>> complete and there were a couple of changes requested for it. Lakshman had
>>> moved the PR to the new repo as suggested during the code review, and it has
>>> been merged in a separate branch [1]. AFAIR we discussed that Vidura could
>>> continue making the fixes suggested in the review and send the PR to that
>>> branch. Once it is in a done-done state, we can move it to the master
>>> branch (this is because once a PR is merged to the master branch it will be
>>> released, since CI/CD is configured).
>>>
>>> @Imesh: +1 if we can preserve the commits from the kernel and move them to
>>> Carbon-secvault. (If we need to merge the PR, we can merge it in a
>>> separate branch. Currently the PR to the kernel is sent to the master branch.)
>>>
>>> @Vidura: If the commits can be preserved then please coordinate with
>>> Lakshman.
>>>
>>>
>>>> Thanks,
>>>> Lakshman.
>>>>
>>>>>
>>>>>- Then move above folder to [2] using a PR-X
>>>>>- Once the PR-X is merged, apply your changes on top of it.
>>>>>
>>>>> [1] https://github.com/wso2/carbon-kernel/pull/1266
>>>>> [2] https://github.com/wso2/carbon-secvault
>>>>>
>>>>> Thanks
>>>>>
>>>>> On Mon, Mar 6, 2017 at 12:15 PM, Niranjan Karunanandham <
>>>>> niran...@wso2.com> wrote:
>>>>>
>>>>>> Hi Vidura,
>>>>>>
>>>>>> On Mon, Mar 6, 2017 at 11:52 AM, Imesh Gunaratne 
>>>>>> wrote:
>>>>>>
>>>>>>> On Fri, Mar 3, 2017 at 12:00 PM, Thusitha Thilina Dayaratne <
>>>>>>> thusit...@wso2.com> wrote:
>>>>>>>
>>>>>>>> Rather than having a separate repo for utils I'll look into the
>>>>>>>> possibility of moving that to a separate component (same level as core)
>>>&
