Re: [Architecture] Circuit Breaker Pattern for MSF4J

2016-03-31 Thread Sanjiva Weerawarana
Agreed. However, I had understood that the circuit breaker pattern was
advocated primarily for service clients in MSA (and of course it has
nothing to do with being micro).

The general story of better failure handling applies to all code and is of
course not MSA specific.

Anyway .. Sample is fine.
On Mar 31, 2016 9:19 AM, "Afkham Azeez"  wrote:

>
>
> On Thu, Mar 31, 2016 at 9:04 AM, Sanjiva Weerawarana 
> wrote:
>
>> That's why I said "fancy try catch" :-).
>>
>> However, are you SERIOUSLY saying that we, for example, should be wrapping
>> all our DB access code in this stuff? If not, who exactly should be doing
>> this? What are the perf implications?
>>
>
> No, I am not saying that. However, there will be use cases where people
> want to use this pattern, and this is a simplified sample that demonstrates
> how to use it. Nygard's book describes how a single SQL statement
> execution failure brought down an airline's entire check-in system because
> the failure propagated; it is a good example of uncontrolled failure
> propagation (Release It, Chapter 2: Case study: The exception that
> grounded an airline, for those of you who have the book). So my example was
> somewhat inspired by that case study and is highly simplified.
>
> If a sample is too complicated, people get lost in the implementation
> details rather than seeing how the core concept or pattern is implemented.
> I certainly can implement another sample which demonstrates client->service
> or service->service calls; it would add more code, but the core concept
> demonstrated would be the same.
>
>
>
>>
>> Of course, wrapping remote service calls in this stuff makes sense - a
>> great way to adjust to transient issues. In that case the overhead is
>> heavily masked by the latency - I'm not so convinced that is the case for
>> transactional JDBC calls, but maybe it is. In that case WE must use it
>> internally.
>>
>> Sanjiva.
>>
>> On Thu, Mar 31, 2016 at 8:53 AM, Afkham Azeez  wrote:
>>
>>> Equating these fault tolerance patterns to Java 8 Optional or try-catch
>>> is a highly oversimplified view. What Hystrix and these patterns provide
>>> is a framework for building fault-tolerant systems, something that is
>>> useful in the toolkit of an architect & developer.
>>>
>>> On Thu, Mar 31, 2016 at 8:36 AM, Sanjiva Weerawarana 
>>> wrote:
>>>
 This is almost kinda like that stupid new Java8 thing of "we removed
 null by wrapping it in a fancy object" ;-).

 On Thu, Mar 31, 2016 at 8:32 AM, Sanjiva Weerawarana 
 wrote:

> So this is not what I expected the real use case to be ... this is
> basically a fancy try catch.
>
> Don't we want to show a client side example?
>
> On Thu, Mar 31, 2016 at 6:28 AM, Afkham Azeez  wrote:
>
>> Timeout is related to the actual operation taking more time than
>> anticipated. In such a case, without waiting indefinitely, the operation
>> times out and the fallback of the Hystrix command will be invoked. The
>> circuit will be open for a fixed period of time configured by
>> https://github.com/Netflix/Hystrix/wiki/Configuration#circuitBreaker.sleepWindowInMilliseconds
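To make that concrete, here is a minimal sketch of such a Hystrix command
(assuming Hystrix 1.4 or later; BackendClient.call() is a hypothetical
operation standing in for whatever is being guarded):

import com.netflix.hystrix.HystrixCommand;
import com.netflix.hystrix.HystrixCommandGroupKey;
import com.netflix.hystrix.HystrixCommandProperties;

public class BackendCommand extends HystrixCommand<String> {

    public BackendCommand() {
        super(Setter.withGroupKey(HystrixCommandGroupKey.Factory.asKey("Backend"))
                .andCommandPropertiesDefaults(HystrixCommandProperties.Setter()
                        // fail the command if the operation takes longer than 1 second
                        .withExecutionTimeoutInMilliseconds(1000)
                        // keep the circuit open for 5 seconds before a trial request
                        .withCircuitBreakerSleepWindowInMilliseconds(5000)));
    }

    @Override
    protected String run() throws Exception {
        // the guarded operation, e.g. a remote service or JDBC call
        return BackendClient.call();
    }

    @Override
    protected String getFallback() {
        // invoked on timeout, failure, or while the circuit is open
        return "fallback-response";
    }
}

Calling new BackendCommand().execute() then returns either the real result
or the fallback, and repeated failures trip the circuit as described above.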
>>
>> On Thu, Mar 31, 2016 at 2:53 AM, Harshan Liyanage 
>> wrote:
>>
>>> Hi Azeez,
>>>
>>> Does the timeout in the open state grow exponentially (first timeout in
>>> 10 secs, next in 20 secs, etc.) or linearly when transitioning back to
>>> the half-open state? For example, say the state is "Open" and the
>>> timeout (let's say 10 secs) elapses, so the state moves to "half-open".
>>> But the next request is also a failure and the breaker state moves back
>>> to "open". On this occasion, what will the timeout value be? Is it 10
>>> secs or 20 secs?
>>>
>>> Having an exponential timeout might be beneficial here, as it might
>>> save a lot of resources if the service is continuously failing. But I
>>> think it would be better if we can provide both options in a
>>> configurable manner, so that it is up to the developer to decide which
>>> method to use.
>>>
>>> Thanks,
>>>
>>> Harshan Liyanage
>>> Software Engineer
>>> Mobile: +94724423048
>>> Email: hars...@wso2.com
>>> Blog : http://harshanliyanage.blogspot.com/
>>> WSO2, Inc. : wso2.com
>>> lean.enterprise.middleware.
>>>
>>> On Wed, Mar 30, 2016 at 5:05 AM, Afkham Azeez 
>>> wrote:
>>>
 I have written a sample which demonstrates circuit breaker in
 action;
 http://blog.afkham.org/2016/03/microservices-circuit-breaker.html

 On Sat, Mar 12, 2016 at 6:09 PM, Afkham Azeez 
 wrote:

> This is a feature supported by some microservices frameworks. On
> the server side, in this case MSF4J runtime, failure counts are kept 
> track
> of and then if

Re: [Architecture] Data Isolation level for Data from APIM and IoT? Tenant vs. User

2016-03-31 Thread Srinath Perera
We had a meeting. Participants: Sanjiva, Sumedha, NuwanD, Prabath, Srinath
(Prabath, please list others who joined from Trace).

Problem: Bob writes an API and publishes it to API Manager. Then Alice
subscribes to the API and writes a mobile app. Charlie uses the mobile app,
which results in an API call. We need to track the API calls via DAS. And
when Bob, Alice, and possibly Charlie come to the Dashboard Server, each
should see the transaction from their own view.

The challenge is that there is no single clear user in the above
transaction; rather, there are three. So we cannot handle this generically
at the DAS level via a user concept. Hence, the API Manager needs to attach
the right information when it publishes data to DAS, and show data only to
the relevant parties when exposing it.


Solution

[image: SecuirtyLayers.png]

   1. We will keep DAS in its current state, with support for tenants but
   without support for users. It is aware of tenants and provides full
   isolation between tenants. However, it is not aware of users.
   2. Each product will write an extended receiver and DAL layer that
   builds an API catered to its use cases. This API will support login via
   OAuth tokens. Since each product knows which fields in its tables hold
   user data, it can filter the data based on the user (see the sketch
   after this list).
   3. We will run the extended DAL layers and receivers in DAS, and they
   will talk to the DAL via an OSGi call.
   4. The above layers will assume that users have access to an OAuth
   token. In APIM use cases, APIM can issue tokens, and in IoT use cases,
   the APIM that runs in the IoT server can issue tokens.

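As a rough sketch of the extended layer in item 2 (all names here, such as
Dal, Record, and validateToken, are illustrative placeholders rather than
actual DAS APIs):

import java.util.List;
import java.util.stream.Collectors;

// Illustrative sketch only: Dal, Record and validateToken() are
// placeholders, not actual DAS interfaces.
public class UserScopedDataApi {

    interface Record {
        Object getValue(String field);
    }

    interface Dal {
        List<Record> search(int tenantId, String table);
    }

    private final Dal dal;

    public UserScopedDataApi(Dal dal) {
        this.dal = dal;
    }

    public List<Record> getRecordsForUser(String oauthToken, int tenantId, String table) {
        // resolve the calling user from the OAuth token (e.g. via token introspection)
        String username = validateToken(oauthToken);
        // fetch the tenant's rows via the DAL (an OSGi call inside DAS), then keep
        // only the rows whose user field matches the caller
        return dal.search(tenantId, table).stream()
                .filter(r -> username.equals(r.getValue("username")))
                .collect(Collectors.toList());
    }

    private String validateToken(String token) {
        // placeholder: validate the token against the OAuth provider and
        // return the username it belongs to
        throw new UnsupportedOperationException("wire to a token introspection service");
    }
}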

This also means we will not support users providing their own analytics
queries; only tenant admins can provide their own queries.
As decided in the earlier meeting, we need APIM and IoT Server to be able
to publish events as a "system user", but ask DAS to place the data under
Ann's (the related user's) account.

Please add anything I missed.

--Srinath




On Tue, Mar 29, 2016 at 11:53 AM, Srinath Perera  wrote:
>
> I have scheduled a meeting tomorrow to discuss this.
>
> --Srinath
>
> On Tue, Mar 29, 2016 at 11:44 AM, Sachith Withana 
wrote:
>>
>> Hi all,
>>
>> I do believe it would be of great value to incorporate user level data
isolation for DAS.
>>
>> Having said that though, it wouldn't be practical to provide a complete
permission platform in DAS that would satisfy all the requirements of APIM
and IoT.
>>
>> IMO, we should provide some features that would help individual products
build their own permission platform that caters to their requirements.
>>
>> Thanks,
>> Sachith
>>
>> On Tue, Mar 29, 2016 at 10:38 AM, Nuwan Dias  wrote:
>>>
>>> Please ignore my reply. It was intended for another thread :)
>>>
>>> On Mon, Mar 28, 2016 at 4:26 PM, Nuwan Dias  wrote:

Having to publish a single event after collecting all possible data
records from the server would be good in terms of the scalability of the
DAS/Analytics platform. However, I see that it introduces new challenges
for which we would need solutions.

1. How do we guarantee that an event is always published to DAS? In the
case of API Manager, a request has multiple exit points, such as auth
failures, throttling out, back-end failures, message processing failures,
etc. So we need a way to guarantee that an event is always sent out,
whatever the state.

2. With this model, I'm assuming we only have one stream definition. Is
this correct? If so, would this not make the analytics part complicated?
For example, say I have a Spark query to summarize the throttled-out
events from an app. Since I can only see a single stream, the query would
have to deal with null fields and with the whole bulk of data, even if in
reality it might only need to deal with a few records. The same complexity
would arise for the CEP-based throttling engine and the new alerts we're
building as well.

 Thanks,
 NuwanD.

 On Mon, Mar 28, 2016 at 2:43 PM, Srinath Perera 
wrote:
>
> Hi Ayyoob, Ruwan, Suho,
>
> I think where to handle this (within DAS vs. at a higher-level API in APIM
or the IoT server) is decided by what level of user customization is needed
for analytics queries.
>
> If we need individual users to write their own queries as well, then
we need to build user support into DAS. However, if queries can be changed
by tenant admins only, doing this via a high-level API is OK.
>
> Where do APIM and IoT Server stand on this?
>
> --Srinath
>
>
>
> On Sat, Mar 26, 2016 at 9:28 AM, Ayyoob Hamza  wrote:
> >
> > Hi,
> > Yes, we require user-level separation, but I wondered whether we need
this separation at the DAS level or whether we can enforce it at the device
type API level. This is because, IMO, DAS provides a low-level API which we
cannot expose directly, so we need a proxy that maps it to a high-level
API to expose the data. So I wondered whether we can do the restriction in
the high

Re: [Architecture] RFC:Security Challenges in Analytics Story

2016-03-31 Thread Srinath Perera
Please see "Data Isolation level for Data from APIM and IoT? Tenant vs.
User" for decisions

--Srinath

On Fri, Mar 25, 2016 at 10:06 AM, Srinath Perera  wrote:

> As per the meeting (Sanjiva, Shankar, Sumedha, Anjana, Miyuru, Seshika,
> Suho, Nirmal, Nuwan):
>
> We need APIM and IoT Server to be able to publish events as a "system
> user", but ask DAS to place the data under Ann's (the related user's)
> account.
>
> We need devices to be able to *directly* send an event to DAS with an
> OAuth token.
>
> Following is the picture describing full scenario
>
> [image: DASSecuirtyScenarios.png]
> --Srinath
>
> On Thu, Mar 24, 2016 at 9:38 AM, Srinath Perera  wrote:
>
>> This thread described the authorization issue when reading data for
>> gadgets (as I mentioned in the Dashboard Server product council).
>>
>> When the IoT server / API Manager publishes events, it needs to tell DAS
>> whose data it is. (However, the server cannot log in as that user, as it
>> would then need to keep passwords and would also end up keeping too many
>> connections.)
>>
>> A gadget, when requesting data, has to tell DAS on whose behalf it is
>> requesting the data. DAS has to verify this and show only the visible
>> data. (The DAS data API also needs to be secured so that random users
>> cannot call it and look at other people's data.)
>>
>> --Srinath
>>
>>
>>
>>
>>
>>
>>
>>
>>
>> On Sat, Mar 19, 2016 at 9:13 PM, Srinath Perera  wrote:
>>
>>> Yes, and Ann can also generate a token and share it with Smith, to send
>>> with his requests.
>>>
>>> Also, IMO most dashboard requests would come from a browser (on a
>>> phone or PC), not from a simple device. So storing or locating the token
>>> should not be a problem.
>>>
>>> On Fri, Mar 18, 2016 at 3:21 PM, Chathura Ekanayake 
>>> wrote:
>>>



> I think we should go for a token-based approach (e.g. OAuth) to handle
> these scenarios. Following are a few ideas:
>
>    1. Using a token (Ann attesting that the system user can publish to /
>    access this stream on her behalf), Ann lets the "system user" publish
>    data into Ann's account.
>
>
If a device can store a token, Ann can generate a token with the necessary
scope (to access Ann's event store) and store the token in the device
itself. In that case, the device can send the token with each event, so
that the IoT platform can decide permissions based on the token.


>
>    1. When we give user Smith access to a gadget, we generate a token,
>    which he will send when accessing the gadget, and which the gadget
>    will send to the DAS backend to get access to the correct tables.
>    2. The same token can be used for API access as well.
>    3. We need to manage the tokens issued to each user so that this
>    happens as transparently to the end user as possible.
>

>>>
>>>
>>> --
>>> 
>>> Blog: http://srinathsview.blogspot.com twitter:@srinath_perera
>>> Site: http://people.apache.org/~hemapani/
>>> Photos: http://www.flickr.com/photos/hemapani/
>>> Phone: 0772360902
>>>
>>
>>
>>
>> --
>> 
>> Blog: http://srinathsview.blogspot.com twitter:@srinath_perera
>> Site: http://home.apache.org/~hemapani/
>> Photos: http://www.flickr.com/photos/hemapani/
>> Phone: 0772360902
>>
>
>
>
> --
> 
> Blog: http://srinathsview.blogspot.com twitter:@srinath_perera
> Site: http://home.apache.org/~hemapani/
> Photos: http://www.flickr.com/photos/hemapani/
> Phone: 0772360902
>



-- 

Blog: http://srinathsview.blogspot.com twitter:@srinath_perera
Site: http://home.apache.org/~hemapani/
Photos: http://www.flickr.com/photos/hemapani/
Phone: 0772360902


[Architecture] [Docker] Creating a Single Docker Image for a Product

2016-03-31 Thread Isuru Haththotuwa
Earlier the PaaS team had a discussion in the architecture list thread [1]
about creating a base Docker image and a profile Docker image extending the
base Docker image, per WSO2 product. This was done for the ease of the
users of wso2 Dockerfiles. More details can be found in the mentioned
thread. However, we found this approach to have a few drawbacks:

   - The second image would be comparatively large even if only a simple
   config change is done. This is because Docker adds an additional layer
   on top of the existing layer whenever the existing layer is changed.
   - The main rationale for having two images was ease of use; but a
   user/developer using a single Dockerfile can still do this manually, by
   extending from the existing image, for testing purposes.

Therefore, the PaaS team had another internal discussion and decided to
scrap the two-Dockerfile approach and use a single Dockerfile per WSO2
product.
In the development phase, a user/developer can create a simple Dockerfile
extending a product Dockerfile, and add the config/artifact changes using
ADD/COPY statements [2]. When the container is starting up, a script will
copy the relevant artifacts into the directory structure under the carbon
server before actually starting the server.
Alternatively, a host machine directory (shared volume) can be provided
when starting a container from the provided wso2 product Dockerfile
(without creating a separate Dockerfile). This shared location can have a
directory structure similar to a carbon server, which will again be copied
to the carbon server before starting up.

Prior to moving into production, the recommended way would be to re-build
the image with all configurations in place, using the latest Ubuntu base
image. This final Dockerfile should have a minimal number of ADD/COPY/RUN
commands, to reduce the image size.

Please share your thoughts on this. PaaS team will be updating the WSO2
Dockerfiles repository with this structure.

[1]. [WSO2 Docker Images] Creating Base Docker Images for WSO2 Products

[2].
FROM wso2am:1.10.0
MAINTAINER isu...@wso2.com

COPY artifacts/ /mnt/wso2-artifacts/carbon-home

-- 
Thanks and Regards,

Isuru H.
+94 716 358 048


Re: [Architecture] Adding RNN to WSO2 Machine Learner

2016-03-31 Thread Thamali Wijewardhana
Hi,

We have created a Spark program to prove the feasibility of adding the RNN
algorithm to Machine Learner.
This program demonstrates all the steps in Machine Learner:

Uploading a dataset

Selecting the hyperparameters for the model

Creating an RNN model using the data and training the model

Calculating the accuracy of the model

Saving the model (as a serialized object)

Predicting using the model

This program is based on deeplearning4j and the Apache Spark pipeline.
Deeplearning4j was used as the deep learning library for the recurrent
neural network algorithm. As the program should be based on the Spark
pipeline, the main challenge was using the deeplearning4j library with the
Spark pipeline: every component used in the pipeline must be compatible
with it. Components which are not compatible with the Spark pipeline have
to be wrapped with an org.apache.spark.predictionModel object.

We have designed a pipeline with a sequence of stages (transformers and
estimators):

1. Tokenizer: Transformer - splits each sequential data record into tokens
(for example, in sentiment analysis, splits text into words).

2. Vectorizer: Transformer - transforms features into vectors.

3. RNN algorithm: Estimator - the RNN algorithm, which trains on a data
frame and produces an RNN model.

4. RNN model: Transformer - transforms a data frame with features into a
data frame with predictions.

The diagrams below explain the stages of the pipeline. The first diagram
illustrates the training usage of the pipeline, and the next diagram
illustrates the testing and prediction usage of the pipeline.

[image: training usage of the pipeline]

[image: testing and prediction usage of the pipeline]

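Roughly, wiring these stages with the Spark ML pipeline API could look like
the following sketch (Spark 1.x style; Word2Vec stands in as one possible
vectorizer, and RnnEstimator is a hypothetical wrapper around the
deeplearning4j network, not an existing class):

import org.apache.spark.ml.Pipeline;
import org.apache.spark.ml.PipelineModel;
import org.apache.spark.ml.PipelineStage;
import org.apache.spark.ml.feature.Tokenizer;
import org.apache.spark.ml.feature.Word2Vec;
import org.apache.spark.sql.DataFrame;

public class RnnPipelineSketch {

    public static PipelineModel train(DataFrame trainingData) {
        // stage 1: split each piece of sequential data (e.g. a review) into tokens
        Tokenizer tokenizer = new Tokenizer()
                .setInputCol("text")
                .setOutputCol("words");

        // stage 2: transform the tokens into feature vectors
        Word2Vec vectorizer = new Word2Vec()
                .setInputCol("words")
                .setOutputCol("features");

        // stage 3: hypothetical estimator wrapping the deeplearning4j RNN;
        // fit() trains the network and returns the RNN model (a Transformer)
        RnnEstimator rnn = new RnnEstimator()
                .setFeaturesCol("features")
                .setLabelCol("label");

        Pipeline pipeline = new Pipeline()
                .setStages(new PipelineStage[]{tokenizer, vectorizer, rnn});
        return pipeline.fit(trainingData);
    }
}

Prediction is then model.transform(testData), which appends a prediction
column to the data frame.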

I have also tuned the RNN model's hyperparameters [1] and found the values
which optimize the accuracy of the model.
Given below is the set of hyperparameters relevant to the RNN algorithm
and their tuned values:


Number of epochs: 10

Number of iterations: 1

Learning rate: 0.02

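For reference, a sketch of how these values would be plugged into
deeplearning4j's network configuration (assuming a 2016-era deeplearning4j
release; builder signatures have changed across versions, and the layer
sizes here are illustrative only):

import org.deeplearning4j.nn.conf.MultiLayerConfiguration;
import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
import org.deeplearning4j.nn.conf.layers.GravesLSTM;
import org.deeplearning4j.nn.conf.layers.RnnOutputLayer;
import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;
import org.nd4j.linalg.dataset.api.iterator.DataSetIterator;
import org.nd4j.linalg.lossfunctions.LossFunctions;

public class RnnTrainingSketch {

    public static MultiLayerNetwork train(DataSetIterator trainData, int vectorSize) {
        MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
                .iterations(1)        // tuned: number of iterations
                .learningRate(0.02)   // tuned: learning rate
                .list()               // older releases take the layer count here
                .layer(0, new GravesLSTM.Builder()
                        .nIn(vectorSize).nOut(200)   // 200 hidden units: illustrative
                        .activation("tanh")
                        .build())
                .layer(1, new RnnOutputLayer.Builder(LossFunctions.LossFunction.MCXENT)
                        .nIn(200).nOut(2)            // two classes: positive/negative
                        .activation("softmax")
                        .build())
                .build();

        MultiLayerNetwork net = new MultiLayerNetwork(conf);
        net.init();
        for (int epoch = 0; epoch < 10; epoch++) {   // tuned: number of epochs
            net.fit(trainData);
            trainData.reset();
        }
        return net;
    }
}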
We used the aclImdb sentiment analysis dataset for this program, and with
the above hyperparameters we could achieve 60% accuracy. We are now trying
to improve the accuracy and efficiency of our algorithm.

[1]
https://docs.google.com/spreadsheets/d/1Wcta6i2k4Je_5l16wCVlH6zBMNGIb-d7USaWdbrkrSw/edit?ts=56fcdc9b#gid=2118685173


Thanks



On Fri, Mar 25, 2016 at 10:18 AM, Thamali Wijewardhana 
wrote:

> Hi all,
>
> One of the most important obstacles in machine learning and deep learning
> is getting data into a format that neural nets can understand. Neural nets
> understand vectors. Therefore, vectorization is an important part in
> building neural network algorithms.
>
> Canova is a vectorization library for machine learning which is associated
> with the deeplearning4j library. It is designed to support all major types
> of input data, such as text, CSV, image, audio, and video.
>
> In our project to add RNN to Machine Learner, we have to use a
> vectorizing component to convert input data to vectors. I think that
> Canova is a good option for building a generic vectorizing component. I am
> researching using Canova for the vectorizing purpose.
>
> Any suggestions on this are highly appreciated.
>
>
> Thanks
>
>
>
> On Wed, Mar 2, 2016 at 2:25 PM, Thamali Wijewardhana 
> wrote:
>
>> Hi Srinath,
>>
>> We have decided to implement only classification first. Once we complete
>> the classification, we hope to do next-value prediction too.
>> We are basically trying to implement a program to make sure that the
>> deeplearning4j library we are using is compatible with the Apache Spark
>> pipeline. And we are also trying to demonstrate all the machine learning
>> steps with that program.
>>
>> We are now using aclImdb sentiment analysis data set to verify the
>> accuracy of the RNN model we create.
>>
>> Thanks
>> Thamali
>>
>>
>> On Wed, Mar 2, 2016 at 10:38 AM, Srinath Perera  wrote:
>>
>>> Hi Thamali,
>>>
>>>
>>>1. RNN can do both classification and next-value prediction. Are we
>>>trying to do both?
>>>2. When Upul played with it, he had trouble getting the deeplearning4j
>>>implementation to work with the next-value prediction scenario. Is that
>>>fixed?
>>>3. What are the data sets we will use to verify the accuracy of RNN
>>>after integration?
>>>
>>>
>>> --Srinath
>>>
>>> On Tue, Mar 1, 2016 at 3:44 PM, Thamali Wijewardhana 
>>> wrote:
>>>
 Hi,

 Currently we are working on a project to add the Recurrent Neural
 Network (RNN) algorithm to Machine Learner. RNN is a deep learning
 algorithm with record-breaking accuracy. For more information on RNNs,
 please refer to link [1].

 We have decided to use deeplearning4j, which is an open-source deep
 learning library scalable on Spark and Hadoop.

 Since there is a plan to add the Spark pipeline to Machine Learner, we
 have decided to use the Spark pipeline concept in our project.

 I have designed an architecture for the RNN implementation.

 This architecture is developed to be compatible with the Spark pipeline.

 The data set is taken in CSV format and then it is con

Re: [Architecture] [C5] Artifact deployment coordination support for C5

2016-03-31 Thread Ruwan Abeykoon
Hi All,
Do we really want artifact deployment coordination in C5?

What is preventing us from building a new image with the new version of
the artifacts and letting k8s take care of the deployment?

Cheers,
Ruwan

On Wed, Mar 30, 2016 at 2:54 PM, Isuru Haththotuwa  wrote:

> Hi Kasun,
>
> On Wed, Mar 23, 2016 at 10:45 AM, KasunG Gajasinghe 
> wrote:
>
>> Hi,
>>
>> Given several issues we discovered with automatic artifact
>> synchronization with DepSync in C4, we have discussed how to approach this
>> problem in C5.
>>
>> We are thinking of not doing the automated artifact synchronization in
>> C5. Rather, users should use their own mechanism to synchronize the
>> artifacts across a cluster. Common approaches are RSync as a cron job and
>> shell scripts.
>>
>> But it is vital to know the artifact deployment status of the nodes in
>> the entire cluster from a central place. For that, we are providing this
>> deployment coordination feature. There will be two ways to use it:
>>
>> 1. JMS-based publishing - the deployment status will be published by
>> each node to a JMS topic/queue.
>>
>> 2. Log-based publishing - publish the logs by using a syslog appender [1]
>> or our own custom appender to a central location.
>>
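As a sketch of option 1 above, each node could publish its status with
plain JMS along these lines (the topic name and JSON payload are made up
for illustration; any JMS provider's ConnectionFactory would do):

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.jms.Topic;

public class DeploymentStatusPublisher {

    private final ConnectionFactory factory; // supplied by the JMS provider in use

    public DeploymentStatusPublisher(ConnectionFactory factory) {
        this.factory = factory;
    }

    public void publish(String nodeId, String artifact, String status) throws Exception {
        Connection connection = factory.createConnection();
        try {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Topic topic = session.createTopic("deployment.status"); // illustrative name
            MessageProducer producer = session.createProducer(topic);
            // illustrative payload; the real feature would define a proper schema
            TextMessage message = session.createTextMessage(
                    "{\"node\":\"" + nodeId + "\",\"artifact\":\"" + artifact
                            + "\",\"status\":\"" + status + "\"}");
            producer.send(message);
        } finally {
            connection.close();
        }
    }
}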
> Both are push mechanisms; IMHO we would also need an API to check the
> status of deployed artifacts on demand. WDYT?
>
>>
>> The log publishing may not be limited to just the deployment
>> coordination. In a containerized deployment, the carbon products will run
>> in disposable containers. But sometimes, the logs need to be backed up for
>> later reference. This will help with that.
>>
>> Any thoughts on this matter?
>>
>> [1]
>> https://logging.apache.org/log4j/2.x/manual/appenders.html#SyslogAppender
>>
>> Thanks,
>> KasunG
>>
>>
>>
>> --
>> ~~--~~
>> Sending this mail via my phone. Do excuse any typo or short replies
>>
>>
>>
>
>
> --
> Thanks and Regards,
>
> Isuru H.
> +94 716 358 048* *
>
>
>
>
>


-- 

Ruwan Abeykoon
Architect,
WSO2, Inc. http://wso2.com
lean.enterprise.middleware.

email: ruw...@wso2.com


[Architecture] [Docker] Extensible approach to building Docker images

2016-03-31 Thread Chamila De Alwis
Hi,

As described in Isuru's email [1], the wso2/dockerfiles [2] structure was
simplified to enable a single Docker image per WSO2 product. At the same
time, the approach used to build the Docker images was changed from Puppet
to an extensible approach.

Earlier, only a "puppet apply" method was used to build the Docker images.
The Puppet modules from wso2/puppet-modules [3] were copied at build time
into the build container, and a puppet apply was executed to configure the
product. This approach created an unnecessary dependency on Puppet.
Dockerfiles should not be dependent on a single build approach.

Therefore, this step was changed in a way that allowed user defined
provisioning methods to be used when configuring the WSO2 product in the
Docker image. wso2/dockerfiles will ship "default" and "puppet"
provisioning methods.

[image: Inline image 1]

The default provisioning method simply copies the JDK and the product pack
into the Docker image. The Puppet provisioning method offers the previous
Puppet-related configuration option.

Users can introduce their own provisioning methods. For example, if a user
already has a set of Chef recipes to configure a WSO2 product, they can
include the relevant validations (image-prep.sh) and config commands
(image-config.sh) in the /common/provision/chef folder and
pass "-r chef" to the build.sh helper script.

The Dockerfiles only enforce a single-RUN-layer approach, to keep the
resulting image size to a minimum. When multiple RUN layers are used,
especially on the same files, the existing layers get duplicated each time
the changes are applied, increasing the image size uncontrollably.
Furthermore, file copying, modification, and cleaning have to be done in a
single layer to minimize unnecessary persistence [4]. Therefore, any
provisioning method has to prepare and validate a folder to serve the
necessary files from, be it a folder with the JDK and the product pack, or
the PUPPET_HOME folder. The needed files can then be downloaded in the
config script when building the image and cleaned afterwards in the same
build container.

[1] - [Docker] Creating a Single Docker Image for a Product - @architecture
[2] - https://github.com/wso2/dockerfiles
[3] - https://github.com/wso2/puppet-modules
[4] - [DEV][Dockerfiles] Reducing the image size - @dev

Regards,
Chamila de Alwis
Committer and PMC Member - Apache Stratos
Software Engineer | WSO2 | +94772207163
Blog: code.chamiladealwis.com


Re: [Architecture] [Docker] Creating a Single Docker Image for a Product

2016-03-31 Thread Sajith Kariyawasam
It seems the image name format we have now is wso2as:5.3.0. Shouldn't it
be like wso2/as:5.3.0?
Otherwise, once we push to Docker Hub in the future, we would need
separate user accounts for each product, which is not feasible.

On Thu, Mar 31, 2016 at 2:05 PM, Isuru Haththotuwa  wrote:

> Earlier the PaaS team had a discussion in architecture list thread [1] to
> create a base Docker image and a profile Docker image extending the base
> Docker image per WSO2 product. This was done for the ease of the users of
> wso2 Dockerfiles. More details can be found in the mentioned thread.
> However, we found this approach to have a few drawbacks:
>
>- The second image size would be comparatively large even if a simple
>config change is done. This is because docker adds an additional layer on
>top of existing layer, if we need to do a change to the existing layer.
>- The main rationale for having two images was the ease of using it;
>but a user/developer using a single Dockerfile can still do this manually,
>by extending from the existing image, for testing purposes.
>
> Therefore, the PaaS team had another internal discussion and decided to
> scrap the two Dockerfile approach and use a single Dockerfile per WSO2
> product.
> In development phase, user/developer can either create a simple Dockerfile
> extending a product Dockerfile, and add the config/artifact changes using
> ADD/COPY statements [2]. When the container is starting up, a script will
> copy the relevant artifacts to the directory structure under the carbon
> server before actually starting the server.
> Else, another option would be to provide a host machine directory (shared
> volume) when starting up a container from the provided wso2 product
> Dockerfile (without creating a separate Dockerfile). This shared location
> can have a directory structure similar to a carbon server, which will be
> again copied to the carbon server before starting up.
>
> Prior to moving in to production, the recommended way would be to re-build
> the image with all configurations in place, using the latest ubuntu base
> image. This final Dockerfile should have a minimum number of ADD/COPY/RUN
> commands to reduce the image size.
>
> Please share your thoughts on this. PaaS team will be updating the WSO2
> Dockerfiles repository with this structure.
>
> [1]. [WSO2 Docker Images] Creating Base Docker Images for WSO2 Products
>
> [2].
> FROM wso2am:1.10.0
> MAINTAINER isu...@wso2.com
>
> COPY artifacts/ /mnt/wso2-artifacts/carbon-home
>
> --
> Thanks and Regards,
>
> Isuru H.
> +94 716 358 048* *
>
>
>


-- 
Sajith Kariyawasam
Committer and PMC member, Apache Stratos,
WSO2 Inc.; http://wso2.com
Mobile: 0772269575


Re: [Architecture] [Docker] Creating a Single Docker Image for a Product

2016-03-31 Thread Vishanth Balasubramaniam
Hi Sajith,

By default, it creates the image as wso2as:5.3.0. And we have an option to
add the organization name: by executing the script as "./build.sh -v 5.3.0
-o wso2", it will create the image name as wso2/wso2as:5.3.0.

Regards,
Vishanth

On Thu, Mar 31, 2016 at 3:25 PM, Sajith Kariyawasam  wrote:

> It seems the image name format we have now is wso2as:5.3.0. Shouldn't it
> be like wso2/as:5.3.0 ?
> Otherwise, once we push to dockerhub in future, we need to have separate
> user accounts for each product which is not feasible.
>
> On Thu, Mar 31, 2016 at 2:05 PM, Isuru Haththotuwa 
> wrote:
>
>> Earlier the PaaS team had a discussion in architecture list thread [1] to
>> create a base Docker image and a profile Docker image extending the base
>> Docker image per WSO2 product. This was done for the ease of the users of
>> wso2 Dockerfiles. More details can be found in the mentioned thread.
>> However, we found this approach to have a few drawbacks:
>>
>>- The second image size would be comparatively large even if a simple
>>config change is done. This is because docker adds an additional layer on
>>top of existing layer, if we need to do a change to the existing layer.
>>- The main rationale for having two images was the ease of using it;
>>but a user/developer using a single Dockerfile can still do this manually,
>>by extending from the existing image, for testing purposes.
>>
>> Therefore, the PaaS team had another internal discussion and decided to
>> scrap the two Dockerfile approach and use a single Dockerfile per WSO2
>> product.
>> In development phase, user/developer can either create a simple
>> Dockerfile extending a product Dockerfile, and add the config/artifact
>> changes using ADD/COPY statements [2]. When the container is starting up, a
>> script will copy the relevant artifacts to the directory structure under
>> the carbon server before actually starting the server.
>> Else, another option would be to provide a host machine directory (shared
>> volume) when starting up a container from the provided wso2 product
>> Dockerfile (without creating a separate Dockerfile). This shared location
>> can have a directory structure similar to a carbon server, which will be
>> again copied to the carbon server before starting up.
>>
>> Prior to moving in to production, the recommended way would be to
>> re-build the image with all configurations in place, using the latest
>> ubuntu base image. This final Dockerfile should have a minimum number of
>> ADD/COPY/RUN commands to reduce the image size.
>>
>> Please share your thoughts on this. PaaS team will be updating the WSO2
>> Dockerfiles repository with this structure.
>>
>> [1]. [WSO2 Docker Images] Creating Base Docker Images for WSO2 Products
>>
>> [2].
>> FROM wso2am:1.10.0
>> MAINTAINER isu...@wso2.com
>>
>> COPY artifacts/ /mnt/wso2-artifacts/carbon-home
>>
>> --
>> Thanks and Regards,
>>
>> Isuru H.
>> +94 716 358 048* *
>>
>>
>>
>
>
> --
> Sajith Kariyawasam
> *Committer and PMC member, Apache Stratos, *
> *WSO2 Inc.; http://wso2.com *
> *Mobile: 0772269575*
>



-- 
Vishanth Balasubramaniam
Committer & PMC Member, Apache Stratos,
Software Engineer, WSO2 Inc.; http://wso2.com

mobile: +94 77 17 377 18
about me: http://about.me/vishanth


Re: [Architecture] [Docker] Creating a Single Docker Image for a Product

2016-03-31 Thread Chamila De Alwis
Hi Sajith,

There are repositories on Docker Hub without organizations, such as mysql
[1] and tomcat [2], which are official repositories. When the images are
pushed to Docker Hub, the organization name can be used as "wso2".
Furthermore, the products are WSO2-prefixed (WSO2MB vs. MB), so the name
should be org_name/wso2{product_code}.

[1] - https://hub.docker.com/_/mysql/
[2] - https://hub.docker.com/_/tomcat/


Regards,
Chamila de Alwis
Committer and PMC Member - Apache Stratos
Software Engineer | WSO2 | +94772207163
Blog: code.chamiladealwis.com



On Thu, Mar 31, 2016 at 3:25 PM, Sajith Kariyawasam  wrote:

> It seems the image name format we have now is wso2as:5.3.0. Shouldn't it
> be like wso2/as:5.3.0 ?
> Otherwise, once we push to dockerhub in future, we need to have separate
> user accounts for each product which is not feasible.
>
> On Thu, Mar 31, 2016 at 2:05 PM, Isuru Haththotuwa 
> wrote:
>
>> Earlier the PaaS team had a discussion in architecture list thread [1] to
>> create a base Docker image and a profile Docker image extending the base
>> Docker image per WSO2 product. This was done for the ease of the users of
>> wso2 Dockerfiles. More details can be found in the mentioned thread.
>> However, we found this approach to have a few drawbacks:
>>
>>- The second image size would be comparatively large even if a simple
>>config change is done. This is because docker adds an additional layer on
>>top of existing layer, if we need to do a change to the existing layer.
>>- The main rationale for having two images was the ease of using it;
>>but a user/developer using a single Dockerfile can still do this manually,
>>by extending from the existing image, for testing purposes.
>>
>> Therefore, the PaaS team had another internal discussion and decided to
>> scrap the two Dockerfile approach and use a single Dockerfile per WSO2
>> product.
>> In development phase, user/developer can either create a simple
>> Dockerfile extending a product Dockerfile, and add the config/artifact
>> changes using ADD/COPY statements [2]. When the container is starting up, a
>> script will copy the relevant artifacts to the directory structure under
>> the carbon server before actually starting the server.
>> Else, another option would be to provide a host machine directory (shared
>> volume) when starting up a container from the provided wso2 product
>> Dockerfile (without creating a separate Dockerfile). This shared location
>> can have a directory structure similar to a carbon server, which will be
>> again copied to the carbon server before starting up.
>>
>> Prior to moving in to production, the recommended way would be to
>> re-build the image with all configurations in place, using the latest
>> ubuntu base image. This final Dockerfile should have a minimum number of
>> ADD/COPY/RUN commands to reduce the image size.
>>
>> Please share your thoughts on this. PaaS team will be updating the WSO2
>> Dockerfiles repository with this structure.
>>
>> [1]. [WSO2 Docker Images] Creating Base Docker Images for WSO2 Products
>>
>> [2].
>> FROM wso2am:1.10.0
>> MAINTAINER isu...@wso2.com
>>
>> COPY artifacts/ /mnt/wso2-artifacts/carbon-home
>>
>> --
>> Thanks and Regards,
>>
>> Isuru H.
>> +94 716 358 048* *
>>
>>
>>
>
>
> --
> Sajith Kariyawasam
> *Committer and PMC member, Apache Stratos, *
> *WSO2 Inc.; http://wso2.com *
> *Mobile: 0772269575*
>


Re: [Architecture] Support Efficient Cross Tenant Analytics in DAS

2016-03-31 Thread Anjana Fernando
Hi Srinath,

I'm not sure this is something we would have to "fix". It was a clear
design decision we took in order to isolate tenant data, so that others
cannot access another tenant's data. Likewise, Spark virtual tables map
directly to each tenant's own analytics tables. If we allow, say, the
super tenant to access other tenants' data, it can be seen as a security
threat. The idea should be that no single tenant has any special access to
another tenant's data.

So, setting aside the physical representation (which has other
complications, like adding another index for tenantId and so on, which
would have to be supported by all data sources), if we are to do this, we
need a special view for super-tenant tables in Spark virtual tables, so
that they have access to the "tenantId" property of the table. And in
other tenants' tables, we need to hide this property and not let them use
it, of course. This looks like a bit of a hack to implement a specific
scenario we have.

As far as I know, this requirement mainly came from APIM analytics, where
the in-built analytics publishes all tenants' data to the super tenant's
tables and the data is processed from there. So if we are doing this, the
data is only used internally, and cannot be shown to each respective
tenant for their own analytics. If each tenant needs to do their own
analytics, they should configure it to get data for their tenant space,
and write their own analytics scripts. In the end this may mean some data
duplication, but that should happen, because two different users are doing
their own different processing. And IMO, we should not try to share
whatever common data they may have and hack the system.

In the end, the point is that we should not take lightly what we are
trying to achieve with multi-tenancy, and compromise its fundamentals. At
the moment, the idea should be that each tenant has their own data and
their own analytics scripts, and if we need to scale accordingly, we
allocate separate hardware for those tenants. Running separate queries for
different tenants does not necessarily make things very slow, since the
data load will be divided between the tenants; the only extra cost would
be possible ramp-up times for query executions.

Cheers,
Anjana.

On Thu, Mar 31, 2016 at 11:45 AM, Srinath Perera  wrote:

> Hi Anjana,
>
> Currently we keep a different HBase/RDBMS table per tenant. In a
> multi-tenant environment, this is very expensive, as we will have to run
> a query per tenant.
>
> How can we fix this? E.g., if we keep the tenant as a field in the
> table, that lets us do a "group by".
>
> --Srinath
>
> --
> 
> Blog: http://srinathsview.blogspot.com twitter:@srinath_perera
> Site: http://home.apache.org/~hemapani/
> Photos: http://www.flickr.com/photos/hemapani/
> Phone: 0772360902
>

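For illustration, with the tenant kept as a field in a shared table, a
single Spark query could cover all tenants at once (Spark 1.x API; the
table and column names are made up):

import org.apache.spark.sql.DataFrame;
import org.apache.spark.sql.SQLContext;

public class CrossTenantSummary {

    // one query over all tenants, instead of one query per tenant;
    // "apiRequestSummary" and its columns are illustrative names only
    public static DataFrame summarize(SQLContext sqlContext) {
        return sqlContext.sql(
                "SELECT tenantId, api, COUNT(*) AS calls "
                        + "FROM apiRequestSummary GROUP BY tenantId, api");
    }
}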


-- 
*Anjana Fernando*
Senior Technical Lead
WSO2 Inc. | http://wso2.com
lean . enterprise . middleware


Re: [Architecture] [Docker] Creating a Single Docker Image for a Product

2016-03-31 Thread Sajith Kariyawasam
It seems that is the recommended notation, not only for some products, but
for all the 'official' repositories. If we want to make our images
official images, then according to [1] and [2] we should not provide any
organization name.
For ESB it will be like https://hub.docker.com/_/wso2esb
For App Server, https://hub.docker.com/_/wso2as

Therefore, once we publish to Docker Hub (official), we should not use any
organization name.

[1] https://docs.docker.com/engine/userguide/containers/dockerrepos/
[2] https://docs.docker.com/docker-hub/official_repos/

On Thu, Mar 31, 2016 at 3:35 PM, Chamila De Alwis  wrote:

> Hi Sajith,
>
> There are repositories on Dockerhub without organizations, such as mysql
> [1] and tomcat [2] which are official repositories. When the images are
> pushed to DockerHub, the organization name can be used as "wso2".
> Furthermore, the products are WSO2 prefixed (WSO2MB vs MB) so the name
> should be *org_name/wso2{product_code}*.
>
> [1] - https://hub.docker.com/_/mysql/
> [2] - https://hub.docker.com/_/tomcat/
>
>
> Regards,
> Chamila de Alwis
> Committer and PMC Member - Apache Stratos
> Software Engineer | WSO2 | +94772207163
> Blog: code.chamiladealwis.com
>
>
>
> On Thu, Mar 31, 2016 at 3:25 PM, Sajith Kariyawasam 
> wrote:
>
>> It seems the image name format we have now is wso2as:5.3.0. Shouldn't it
>> be like wso2/as:5.3.0 ?
>> Otherwise, once we push to dockerhub in future, we need to have separate
>> user accounts for each product which is not feasible.
>>
>> On Thu, Mar 31, 2016 at 2:05 PM, Isuru Haththotuwa 
>> wrote:
>>
>>> Earlier the PaaS team had a discussion in architecture list thread [1]
>>> to create a base Docker image and a profile Docker image extending the base
>>> Docker image per WSO2 product. This was done for the ease of the users of
>>> wso2 Dockerfiles. More details can be found in the mentioned thread.
>>> However, we found this approach to have a few drawbacks:
>>>
>>>- The second image size would be comparatively large even if a
>>>simple config change is done. This is because docker adds an additional
>>>layer on top of existing layer, if we need to do a change to the existing
>>>layer.
>>>- The main rationale for having two images was the ease of using it;
>>>but a user/developer using a single Dockerfile can still do this 
>>> manually,
>>>by extending from the existing image, for testing purposes.
>>>
>>> Therefore, the PaaS team had another internal discussion and decided to
>>> scrap the two Dockerfile approach and use a single Dockerfile per WSO2
>>> product.
>>> In development phase, user/developer can either create a simple
>>> Dockerfile extending a product Dockerfile, and add the config/artifact
>>> changes using ADD/COPY statements [2]. When the container is starting up, a
>>> script will copy the relevant artifacts to the directory structure under
>>> the carbon server before actually starting the server.
>>> Else, another option would be to provide a host machine directory
>>> (shared volume) when starting up a container from the provided wso2 product
>>> Dockerfile (without creating a separate Dockerfile). This shared location
>>> can have a directory structure similar to a carbon server, which will be
>>> again copied to the carbon server before starting up.
>>>
>>> Prior to moving in to production, the recommended way would be to
>>> re-build the image with all configurations in place, using the latest
>>> ubuntu base image. This final Dockerfile should have a minimum number of
>>> ADD/COPY/RUN commands to reduce the image size.
>>>
>>> Please share your thoughts on this. PaaS team will be updating the WSO2
>>> Dockerfiles repository with this structure.
>>>
>>> [1]. [WSO2 Docker Images] Creating Base Docker Images for WSO2 Products
>>>
>>> [2].
>>> FROM wso2am:1.10.0
>>> MAINTAINER isu...@wso2.com
>>>
>>> COPY artifacts/ /mnt/wso2-artifacts/carbon-home
>>>
>>> --
>>> Thanks and Regards,
>>>
>>> Isuru H.
>>> +94 716 358 048* *
>>>
>>>
>>>
>>
>>
>> --
>> Sajith Kariyawasam
>> *Committer and PMC member, Apache Stratos, *
>> *WSO2 Inc.; http://wso2.com *
>> *Mobile: 0772269575*
>>
>
>


-- 
Sajith Kariyawasam
Committer and PMC member, Apache Stratos,
WSO2 Inc.; http://wso2.com
Mobile: 0772269575


Re: [Architecture] Data Isolation level for Data from APIM and IoT? Tenant vs. User

2016-03-31 Thread Frank Leymann
Sorry for jumping into the discussion late (or even too late). I try to
understand the discussion by drawing analogies to DBMSs - maybe that's
wrong and I am missing the point... If I am right, what you decided in the
meeting is fully in line with what DBMSs do :-)

In Srinath's summary, list item (2): The API then will actually be in
charge of implementing access control, because the API needs to decide who
can see which data. Writing an API accessing data is like writing a SQL
program accessing a database; we are then asking the SQL program to
implement access control by itself, aren't we? This is cumbersome, and
likely each product will have to implement very similar mechanisms.

In Srinath's summary, list item (1): This sounds like established practice
in DBMS. When a SQL programmer writes a program, she must have all the
access control rights to access the DBMS. The final program is then
subject to an access control mechanism w.r.t. the users of the program:
whoever is allowed to use the program somehow inherits the access rights
of the programmer (but only in the context of this program). When
identifying the SQL programmer with the tenant (or tenant admin), this is
what (1) of the summary decided, correct?

From Srinath's summary: "*Also this means, we will not support users
providing their own analytics queries. Only tenant admins can provide
their own queries.*" Again, identifying tenant admins with SQL
programmers, that's exactly the paradigm.


Best regards,
Frank

2016-03-31 10:32 GMT+02:00 Srinath Perera :

> We had a meeting. Participants: Sanjiva, Sumedha, NuwanD, Prabath, Srinath
> (Prabath please list others joined from Trace)
>
> Problem: Bob writes a API and publish it API manager. Then Alice
> subscribes to the API and write an mobile APP. Charlie uses the mobile App,
> which results in an API call. We need to track the API calls via DAS. And
> when Bob, Alice, and possibly Charlie come to the dashboard server, they
> should possibly see the transaction from their view.
>
> Challenge is that now there is no clear user in the above transaction.
> Rather there is three. So we cannot handle this generically at the DAS
> level via a user concept. Hence, the API manager needs to put the right
> information when it publish data to DAS and show data only to relevant
> parties when it showing and exposing data.
>
>
> Solution
>
> [image: SecuirtyLayers.png]
>
>1. We will keep DAS in the current state with support for tenants
>without support for users. It is aware about tenant and provide full
>isolation between tenant. However, it does not aware about users.
>2. Each product will write an extended receiver and DAL layer as, that
>will build an API catered for their use cases. This API will support login
>via OAuth tokens. Since they know the fields in the  tables that has user
>data init, then can filter the data based on the user.
>3. We will run the extended DAL layers and receivers in DAS, and they
>will talk to DAL as an OSGI call.
>4. Above layers will assume that users have access to OAuth token. In
>APIM use cases, APIM can issue tokens, and in IoT use cases, APIM that runs
>in the IoT server can issue tokens.
>
>
> Also this means, we will not support users providing their own analytics
> queries. Only tenant admins  can provide their own queries.
> As decided in the earlier meeting,  We need APIM and IOT Server to be able
> to publish events as "system user", but ask DAS to place data under Ann's (
> related user) account.
>
> Please add anything I missed.
>
> --Srinath
>
>
>
>
> On Tue, Mar 29, 2016 at 11:53 AM, Srinath Perera  wrote:
> >
> > I have scheduled a meeting tomorrow to discuss this.
> >
> > --Srinath
> >
> > On Tue, Mar 29, 2016 at 11:44 AM, Sachith Withana 
> wrote:
> >>
> >> Hi all,
> >>
> >> I do believe it would be of great value to incorporate user level data
> isolation for DAS.
> >>
> >> Having said that though, it wouldn't be practical to provide a complete
> permission platform to DAS that would suffice all the requirements of APIM
> and IOT.
> >>
> >> IMO, we should provide some features that would help individual
> products build their own permission platform that caters to their
> requirements.
> >>
> >> Thanks,
> >> Sachith
> >>
> >> On Tue, Mar 29, 2016 at 10:38 AM, Nuwan Dias  wrote:
> >>>
> >>> Please ignore my reply. It was intended for another thread :)
> >>>
> >>> On Mon, Mar 28, 2016 at 4:26 PM, Nuwan Dias  wrote:
> 
>  Having to publish a single event after collecting all possible data
> records from the server would be good in terms of scalability aspects of
> the DAS/Analytics platform. However I see that it introduces new challenges
> for which we would need solutions.
> 
>  1. How to guarantee a event is always published to DAS? In the case
> of API Manager, a request has multiple exit points. Such as auth failures,
> throttling out, back-end failures, message processing failures, etc. So we

Re: [Architecture] [Docker] Creating a Single Docker Image for a Product

2016-03-31 Thread Imesh Gunaratne
On Thu, Mar 31, 2016 at 2:05 PM, Isuru Haththotuwa  wrote:

>
> [2].
> FROM wso2am:1.10.0
> MAINTAINER isu...@wso2.com
>
> COPY artifacts/ /mnt/wso2-artifacts/carbon-home
>

Shouldn't it better to use a simple folder structure like
"/usr/local/wso2/wso2/" instead of above? Apache Httpd [3],
Tomcat [4], JBoss [5] Dockerfiles use something similar.

[3]
https://github.com/docker-library/httpd/blob/bc72e42914f671e725d85a01ff037ce87c827f46/2.4/Dockerfile#L6
[3]
https://github.com/docker-library/tomcat/blob/ed98c30c1cd42c53831f64dffa78a0abf7db8e9a/8-jre8/Dockerfile#L3
[4] https://github.com/jboss-dockerfiles/wildfly/blob/master/Dockerfile#L7

Thanks


>
>
> --
> Thanks and Regards,
>
> Isuru H.
> +94 716 358 048* *
>
>
>


-- 
*Imesh Gunaratne*
Senior Technical Lead
WSO2 Inc: http://wso2.com
T: +94 11 214 5345 M: +94 77 374 2057
W: http://imesh.io
Lean . Enterprise . Middleware


Re: [Architecture] [Docker] Creating a Single Docker Image for a Product

2016-03-31 Thread Chamila De Alwis
@Sajith,

Publishing WSO2 products on the Docker official repository will be a
process which involves pushing our Dockerfiles to
docker-library/official-images [1], and it will involve PRs for them to be
updated. IMO this would interfere with our ability to have control over the
Dockerfile strategy for WSO2 products. Therefore, I think we should publish
(when we do) Docker images under wso2 organization[2].

[1] - https://github.com/docker-library/official-images
[2] - https://hub.docker.com/r/wso2/


Regards,
Chamila de Alwis
Committer and PMC Member - Apache Stratos
Software Engineer | WSO2 | +94772207163
Blog: code.chamiladealwis.com



On Thu, Mar 31, 2016 at 4:47 PM, Imesh Gunaratne  wrote:

>
>
> On Thu, Mar 31, 2016 at 2:05 PM, Isuru Haththotuwa 
> wrote:
>
>>
>> [2].
>> FROM wso2am:1.10.0
>> MAINTAINER isu...@wso2.com
>>
>> COPY artifacts/ /mnt/wso2-artifacts/carbon-home
>>
>
> Shouldn't it better to use a simple folder structure like
> "/usr/local/wso2/wso2/" instead of above? Apache Httpd [3],
> Tomcat [4], JBoss [5] Dockerfiles use something similar.
>
> [3]
> https://github.com/docker-library/httpd/blob/bc72e42914f671e725d85a01ff037ce87c827f46/2.4/Dockerfile#L6
> [3]
> https://github.com/docker-library/tomcat/blob/ed98c30c1cd42c53831f64dffa78a0abf7db8e9a/8-jre8/Dockerfile#L3
> [4] https://github.com/jboss-dockerfiles/wildfly/blob/master/Dockerfile#L7
>
> Thanks
>
>
>>
>>
>> --
>> Thanks and Regards,
>>
>> Isuru H.
>> +94 716 358 048* *
>>
>>
>>
>
>
> --
> *Imesh Gunaratne*
> Senior Technical Lead
> WSO2 Inc: http://wso2.com
> T: +94 11 214 5345 M: +94 77 374 2057
> W: http://imesh.io
> Lean . Enterprise . Middleware
>
>


Re: [Architecture] Circuit Breaker Pattern for MSF4J

2016-03-31 Thread Frank Leymann
Yes, all the stability patterns (that Nygard describes, the circuit
breaker being just one of them) are not associated with microservices, but
apply to all distributed applications. In fact, Nygard's book was
published in 2007, long before the microservice discussion came up ;-)

Applying these patterns to each and every invocation would be a complete
misuse, and it would very likely result in performance hits... The circuit
breaker pattern, for example, is recommended to be applied to "out-of-spec
errors", i.e. errors that you don't cover in the spec of the invoked
function (because the errors are too unlikely ;-)) or in the spec of the
program making the call (aka the client). Often, these are errors that
never happen during testing unless you really set up a badly behaving test
environment. And it has an impact on the design/implementation of the
circuit breaker itself (or clients); for example, "critical work" not
accepted by the circuit breaker has to be queued (by the client? by the
circuit breaker?) for later use (automatic replay?).

Thus, using one of the stability patterns is an (architecture/design)
decision with implications for other components' architecture/design.

Documenting a sample use of the circuit breaker pattern should also
discuss these ramifications.


Best regards,
Frank

2016-03-31 9:12 GMT+02:00 Sanjiva Weerawarana :

> Agreed. However I had understood that the circuit breaker pattern was
> advocated primarily for service clients in MSA (and of course it has
> nothing do with being micro).
>
> The general story of better failure handling applies to all code and is of
> course not MSA specific.
>
> Anyway .. Sample is fine.
> On Mar 31, 2016 9:19 AM, "Afkham Azeez"  wrote:
>
>>
>>
>> On Thu, Mar 31, 2016 at 9:04 AM, Sanjiva Weerawarana 
>> wrote:
>>
>>> That's why I said "fancy try catch" :-).
>>>
>>> However, are you SERIOUSLY saying that we for example should be wrapping
>>> all our DB access code in this stuff? If not who exactly should be doing
>>> this? What are the perf implications?
>>>
>>
>> No I am not saying that. However, there will be use cases where people
>> want to use this pattern and this is a simplified sample that demonstrates
>> how to use this pattern. In Nygards book about how an SQL statement
>> execution failure resulted in an entire checking in system in an airline
>> failing because the failure propagated is a good example of uncontrolled
>> failure propagation (Release It, Chapter 2: Case study: The exception that
>> grounded an airline, for those of you who have the book). So my example was
>> somewhat inspired by that case study and is highly simplified.
>>
>> If a sample is too complicated, people get lost in the implementation
>> details rather than seeing how the core concept or pattern is implemented.
>> I certainly can implement another sample which demonstrates client->service
>> or service->service calls, it certainly would add more code but the core
>> concept demonstrated would be the same.
>>
>>
>>
>>>
>>> Of course wrapping remote service calls in this stuff makes sense -
>>> great way to adjust to transient issues. In that case the overhead is
>>> heavily masked by the latency - I'm not so convinced that is the case for
>>> transactional JDBC calls but maybe it is. In that case WE must use it
>>> internally.
>>>
>>> Sanjiva.
>>>
>>> On Thu, Mar 31, 2016 at 8:53 AM, Afkham Azeez  wrote:
>>>
 Equating these fault tolerance patterns to Java 8 Optional or
 try-catch, is a highly oversimplified view. What Hystrix and these patterns
 provides is a framework for building fault tolerant systems. Something that
 is useful in the toolkit of an architect & developer.

 On Thu, Mar 31, 2016 at 8:36 AM, Sanjiva Weerawarana 
 wrote:

> This is almost kinda like that stupid new Java8 thing of "we removed
> null by wrapping it in a fancy object" ;-).
>
> On Thu, Mar 31, 2016 at 8:32 AM, Sanjiva Weerawarana  > wrote:
>
>> So this is not what I expected the real use case to be ... this is
>> basically a fancy try catch.
>>
>> Don't we want to show a client side example?
>>
>> On Thu, Mar 31, 2016 at 6:28 AM, Afkham Azeez  wrote:
>>
>>> Timeout is related to the actual operation taking more time than
>>> anticipated. In such a case, without waiting indefinitely, the operation
>>> times out and the fallback of the Hystrix command will be invoked. The
>>> circuit will be open for a fixed period of time configured by
>>> https://github.com/Netflix/Hystrix/wiki/Configuration#circuitBreaker.sleepWindowInMilliseconds
>>>
>>> On Thu, Mar 31, 2016 at 2:53 AM, Harshan Liyanage 
>>> wrote:
>>>
 Hi Azeez,

 Does this timeout in open state occurs in exponentially (first
 timeout in 10 secs, next in 20 secs etc) or linearly when transitioning
 back to half-open state? For example if the state is in "Open" and now 
>>

Re: [Architecture] [Docker] Creating a Single Docker Image for a Product

2016-03-31 Thread Gayan Gunarathne
On Thu, Mar 31, 2016 at 2:05 PM, Isuru Haththotuwa  wrote:

> Earlier the PaaS team had a discussion in architecture list thread [1] to
> create a base Docker image and a profile Docker image extending the base
> Docker image per WSO2 product. This was done for the ease of the users of
> wso2 Dockerfiles. More details can be found in the mentioned thread.
> However, we found this approach to have a few drawbacks:
>
>- The second image size would be comparatively large even if only a simple
>config change is done. This is because Docker adds an additional layer on
>top of the existing layer if we need to change the existing layer.
>- The main rationale for having two images was ease of use;
>but a user/developer using a single Dockerfile can still do this manually,
>by extending from the existing image, for testing purposes.
>
> Therefore, the PaaS team had another internal discussion and decided to
> scrap the two Dockerfile approach and use a single Dockerfile per WSO2
> product.
> In the development phase, a user/developer can either create a simple Dockerfile
> extending a product Dockerfile and add the config/artifact changes using
> ADD/COPY statements [2]. When the container is starting up, a script will
> copy the relevant artifacts to the directory structure under the carbon
> server before actually starting the server.
>

So the user needs to have their own set of artifacts, and those artifacts will
be copied at Docker image creation. So what happens in the case of dynamic
deployment, like a PaaS solution? As some configurations may be dynamic, the
user may not be aware of them at startup. In that case, are we generating the
artifacts on the fly and copying them to the Docker image?


> Else, another option would be to provide a host machine directory (shared
> volume) when starting up a container from the provided wso2 product
> Dockerfile (without creating a separate Dockerfile). This shared location
> can have a directory structure similar to a carbon server, which will be
> again copied to the carbon server before starting up.
>
> Prior to moving into production, the recommended way would be to re-build
> the image with all configurations in place, using the latest ubuntu base
> image. This final Dockerfile should have a minimum number of ADD/COPY/RUN
> commands to reduce the image size.
>
> Please share your thoughts on this. PaaS team will be updating the WSO2
> Dockerfiles repository with this structure.
>
> [1]. [WSO2 Docker Images] Creating Base Docker Images for WSO2 Products
>
> [2].
> FROM wso2am:1.10.0
> MAINTAINER isu...@wso2.com
>
> COPY artifacts/ /mnt/wso2-artifacts/carbon-home
>
> --
> Thanks and Regards,
>
> Isuru H.
> +94 716 358 048
>
>
>
> ___
> Architecture mailing list
> Architecture@wso2.org
> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>
>


-- 

Gayan Gunarathne
Technical Lead, WSO2 Inc. (http://wso2.com)
Committer & PMC Member, Apache Stratos
email : gay...@wso2.com  | mobile : +94 775030545
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
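To illustrate the shared-volume option mentioned in this thread, starting a container with a host directory mounted over the artifact location would look roughly like the following (the host path is illustrative; the image and target path are from the sample Dockerfile above):

docker run -v /home/user/wso2-artifacts:/mnt/wso2-artifacts/carbon-home wso2am:1.10.0

The startup script would then copy the mounted artifacts into the carbon server directory structure before starting the server, as described above.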


Re: [Architecture] [Docker] Creating a Single Docker Image for a Product

2016-03-31 Thread Sajith Kariyawasam
On Thu, Mar 31, 2016 at 4:57 PM, Chamila De Alwis  wrote:

> @Sajith,
>
> Publishing WSO2 products on the Docker official repository will be a
> process which involves pushing our Dockerfiles to
> docker-library/official-images [1], and it will involve PRs for them to be
> updated. IMO this would interfere with our ability to have control over the
> Dockerfile strategy for WSO2 products. Therefore, I think we should publish
> (when we do) Docker images under the wso2 organization [2].
>

If we publish under the wso2 organization, will it be published as "Official"?


>
> [1] - https://github.com/docker-library/official-images
> [2] - https://hub.docker.com/r/wso2/
>
>
> Regards,
> Chamila de Alwis
> Committer and PMC Member - Apache Stratos
> Software Engineer | WSO2 | +94772207163
> Blog: code.chamiladealwis.com
>
>
>
> On Thu, Mar 31, 2016 at 4:47 PM, Imesh Gunaratne  wrote:
>
>>
>>
>> On Thu, Mar 31, 2016 at 2:05 PM, Isuru Haththotuwa 
>> wrote:
>>
>>>
>>> [2].
>>> FROM wso2am:1.10.0
>>> MAINTAINER isu...@wso2.com
>>>
>>> COPY artifacts/ /mnt/wso2-artifacts/carbon-home
>>>
>>
>> Wouldn't it be better to use a simple folder structure like
>> "/usr/local/wso2/wso2/" instead of above? Apache Httpd [3],
>> Tomcat [4], JBoss [5] Dockerfiles use something similar.
>>
>> [3]
>> https://github.com/docker-library/httpd/blob/bc72e42914f671e725d85a01ff037ce87c827f46/2.4/Dockerfile#L6
>> [4]
>> https://github.com/docker-library/tomcat/blob/ed98c30c1cd42c53831f64dffa78a0abf7db8e9a/8-jre8/Dockerfile#L3
>> [5]
>> https://github.com/jboss-dockerfiles/wildfly/blob/master/Dockerfile#L7
>>
>> Thanks
>>
>>
>>>
>>>
>>> --
>>> Thanks and Regards,
>>>
>>> Isuru H.
>>> +94 716 358 048
>>>
>>>
>>>
>>
>>
>> --
>> *Imesh Gunaratne*
>> Senior Technical Lead
>> WSO2 Inc: http://wso2.com
>> T: +94 11 214 5345 M: +94 77 374 2057
>> W: http://imesh.io
>> Lean . Enterprise . Middleware
>>
>>
>


-- 
Sajith Kariyawasam
*Committer and PMC member, Apache Stratos*
*WSO2 Inc.; http://wso2.com*
*Mobile: 0772269575*
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] Circuit Breaker Pattern for MSF4J

2016-03-31 Thread Afkham Azeez
On Thu, Mar 31, 2016 at 5:03 PM, Frank Leymann  wrote:

> Yes, all the stability patterns (that Nygard describes, the circuit
> breaker being just one of them)
> are not associated with microservices, but apply to all distributed
> applications. In fact, Nygard's
> book was published in 2007, long before the microservice
> discussion came up ;-)
>
> Yes Frank, agreed. With the hype about microservices, people have started
talking about these a lot and during the evaluation phase, people look at
features available in frameworks. I don't understand the excitement here.
We are not saying CircuitBreaker etc. have to be used. That is as stupid as
saying every object instantiation has to be done via a Factory.


> Applying these patterns to each and every invocation would be a complete
> misuse.
>

Yes, it would be very stupid for someone to design a system like that or to
suggest something like that; like I said, it would be like asking to
instantiate all objects using the Factory pattern!

Patterns are just part of the toolkit of architects & developers. Knowing
how to use the appropriate one at the appropriate place requires proper
judgment. Neither this sample nor this mail thread is suggesting to use these
everywhere, and I don't understand what gave the impression that we are
suggesting such a thing.


> And it will very likely
> result in performance hits...  The circuit breaker pattern, for example,
> is recommended to be applied
> to "out-of-spec errors", i.e. errors that you don't cover in the spec
> (because the errors are too unlikely ;-))
> of the invoked function or in the spec of the program making the call (aka
> client). Often, these are errors
> that never happen during testing unless you really set up a badly behaving
> test environment. And it has
> impact on the design/implementation of the circuit breaker itself (or
> clients), for example "critical work"
> not accepted by the circuit breaker has to be queued (by the client? by the
> circuit breaker?) for later use
> (automatic replay?).
>
> Thus, using one of the stability patterns is an (architecture/design)
> decision with implications on other
> components architecture/design.
>
> Documenting a sample use of the circuit breaker pattern should also
> discuss these ramifications.
>
>
Thanks. We will include these recommendations in our documentation.


>
> Best regards,
> Frank
>
> 2016-03-31 9:12 GMT+02:00 Sanjiva Weerawarana :
>
>> Agreed. However I had understood that the circuit breaker pattern was
>> advocated primarily for service clients in MSA (and of course it has
>> nothing to do with being micro).
>>
>> The general story of better failure handling applies to all code and is
>> of course not MSA specific.
>>
>> Anyway .. Sample is fine.
>> On Mar 31, 2016 9:19 AM, "Afkham Azeez"  wrote:
>>
>>>
>>>
>>> On Thu, Mar 31, 2016 at 9:04 AM, Sanjiva Weerawarana 
>>> wrote:
>>>
 That's why I said "fancy try catch" :-).

 However, are you SERIOUSLY saying that we for example should be
 wrapping all our DB access code in this stuff? If not who exactly should be
 doing this? What are the perf implications?

>>>
>>> No, I am not saying that. However, there will be use cases where people
>>> want to use this pattern, and this is a simplified sample that demonstrates
>>> how to use it. Nygard's account of how an SQL statement execution failure
>>> resulted in an airline's entire check-in system failing because the failure
>>> propagated is a good example of uncontrolled failure propagation (Release
>>> It, Chapter 2: Case study: The exception that grounded an airline, for
>>> those of you who have the book). So my example was somewhat inspired by
>>> that case study and is highly simplified.
>>>
>>> If a sample is too complicated, people get lost in the implementation
>>> details rather than seeing how the core concept or pattern is implemented.
>>> I certainly can implement another sample which demonstrates client->service
>>> or service->service calls; it would add more code, but the core
>>> concept demonstrated would be the same.
>>>
>>>
>>>

 Of course wrapping remote service calls in this stuff makes sense -
 great way to adjust to transient issues. In that case the overhead is
 heavily masked by the latency - I'm not so convinced that is the case for
 transactional JDBC calls but maybe it is. In that case WE must use it
 internally.

 Sanjiva.

 On Thu, Mar 31, 2016 at 8:53 AM, Afkham Azeez  wrote:

> Equating these fault tolerance patterns to Java 8 Optional or
> try-catch is a highly oversimplified view. What Hystrix and these
> patterns provide is a framework for building fault tolerant systems,
> something that is useful in the toolkit of an architect & developer.
>
> On Thu, Mar 31, 2016 at 8:36 AM, Sanjiva Weerawarana  > wrote:
>
>> This is almost kinda like that stupid new Java 8 thing of "we removed

Re: [Architecture] [Docker] Creating a Single Docker Image for a Product

2016-03-31 Thread Isuru Haththotuwa
On Thu, Mar 31, 2016 at 5:23 PM, Gayan Gunarathne  wrote:

>
>
> On Thu, Mar 31, 2016 at 2:05 PM, Isuru Haththotuwa 
> wrote:
>
>> Earlier the PaaS team had a discussion in architecture list thread [1] to
>> create a base Docker image and a profile Docker image extending the base
>> Docker image per WSO2 product. This was done for the ease of the users of
>> wso2 Dockerfiles. More details can be found in the mentioned thread.
>> However, we found this approach to have a few drawbacks:
>>
>>- The second image size would be comparatively large even if only a simple
>>config change is done. This is because Docker adds an additional layer on
>>top of the existing layer if we need to change the existing layer.
>>- The main rationale for having two images was ease of use;
>>but a user/developer using a single Dockerfile can still do this manually,
>>by extending from the existing image, for testing purposes.
>>
>> Therefore, the PaaS team had another internal discussion and decided to
>> scrap the two Dockerfile approach and use a single Dockerfile per WSO2
>> product.
>> In the development phase, a user/developer can either create a simple
>> Dockerfile extending a product Dockerfile and add the config/artifact
>> changes using ADD/COPY statements [2]. When the container is starting up, a
>> script will copy the relevant artifacts to the directory structure under
>> the carbon server before actually starting the server.
>>
>
> So the user needs to have their own set of artifacts, and those artifacts
> will be copied at Docker image creation. So what happens in the case of
> dynamic deployment, like a PaaS solution? As some configurations may be
> dynamic, the user may not be aware of them at startup. In that case, are we
> generating the artifacts on the fly and copying them to the Docker image?
>
An artifact that needs to be deployed by the product should be bundled in
the image itself; IMHO apart from API Manager we can do this for all other
products, using car files, etc. For APIM, we need to figure out a proper
way to synchronize the runtime artifacts (APIs).

If there is a need to update an artifact, a new image should be built and
rolled out. We are using the immutable server concept [1] here.

[1]. http://martinfowler.com/bliki/ImmutableServer.html

>
>
>> Else, another option would be to provide a host machine directory (shared
>> volume) when starting up a container from the provided wso2 product
>> Dockerfile (without creating a separate Dockerfile). This shared location
>> can have a directory structure similar to a carbon server, which will be
>> again copied to the carbon server before starting up.
>>
>> Prior to moving into production, the recommended way would be to
>> re-build the image with all configurations in place, using the latest
>> ubuntu base image. This final Dockerfile should have a minimum number of
>> ADD/COPY/RUN commands to reduce the image size.
>>
>> Please share your thoughts on this. PaaS team will be updating the WSO2
>> Dockerfiles repository with this structure.
>>
>> [1]. [WSO2 Docker Images] Creating Base Docker Images for WSO2 Products
>>
>> [2].
>> FROM wso2am:1.10.0
>> MAINTAINER isu...@wso2.com
>>
>> COPY artifacts/ /mnt/wso2-artifacts/carbon-home
>>
>> --
>> Thanks and Regards,
>>
>> Isuru H.
>> +94 716 358 048
>>
>>
>>
>> ___
>> Architecture mailing list
>> Architecture@wso2.org
>> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>>
>>
>
>
> --
>
> Gayan Gunarathne
> Technical Lead, WSO2 Inc. (http://wso2.com)
> Committer & PMC Member, Apache Stratos
> email : gay...@wso2.com  | mobile : +94 775030545
>
>
>
> ___
> Architecture mailing list
> Architecture@wso2.org
> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>
>


-- 
Thanks and Regards,

Isuru H.
+94 716 358 048
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
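The startup-time copy step described in this thread might look roughly like the following in the container's entrypoint script (paths and script names are illustrative, not the actual wso2 Dockerfiles implementation):

#!/bin/bash
# Copy any user-provided artifacts into the carbon server before start-up.
cp -r /mnt/wso2-artifacts/carbon-home/* ${CARBON_HOME}/
# Hand over to the actual server process.
exec ${CARBON_HOME}/bin/wso2server.sh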


Re: [Architecture] [Docker] Creating a Single Docker Image for a Product

2016-03-31 Thread Isuru Haththotuwa
On Thu, Mar 31, 2016 at 4:47 PM, Imesh Gunaratne  wrote:

>
>
> On Thu, Mar 31, 2016 at 2:05 PM, Isuru Haththotuwa 
> wrote:
>
>>
>> [2].
>> FROM wso2am:1.10.0
>> MAINTAINER isu...@wso2.com
>>
>> COPY artifacts/ /mnt/wso2-artifacts/carbon-home
>>
> We are not using root user, and the relevant user (wso2user) has
permission to /mnt. Technically we can give permission to /opt as well, but
IMHO we can have this directory in /mnt. Will change the name to
/mnt/wso2.

>
> Wouldn't it be better to use a simple folder structure like
> "/usr/local/wso2/wso2/" instead of above? Apache Httpd [3],
> Tomcat [4], JBoss [5] Dockerfiles use something similar.
>
> [3]
> https://github.com/docker-library/httpd/blob/bc72e42914f671e725d85a01ff037ce87c827f46/2.4/Dockerfile#L6
> [4]
> https://github.com/docker-library/tomcat/blob/ed98c30c1cd42c53831f64dffa78a0abf7db8e9a/8-jre8/Dockerfile#L3
> [5] https://github.com/jboss-dockerfiles/wildfly/blob/master/Dockerfile#L7
>
> Thanks
>
>
>>
>>
>> --
>> Thanks and Regards,
>>
>> Isuru H.
>> +94 716 358 048
>>
>>
>>
>
>
> --
> *Imesh Gunaratne*
> Senior Technical Lead
> WSO2 Inc: http://wso2.com
> T: +94 11 214 5345 M: +94 77 374 2057
> W: http://imesh.io
> Lean . Enterprise . Middleware
>
>


-- 
Thanks and Regards,

Isuru H.
+94 716 358 048
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] Circuit Breaker Pattern for MSF4J

2016-03-31 Thread Afkham Azeez
The blog post has been removed. Sorry for all the confusion. This was only
done as part of the agreement we had during last week's meeting to
demonstrate certain features such as Spring support, JPA support, support
for patterns etc. in order to help developers understand how to implement
certain stuff with MSF4J. Our target community of MSF4J is primarily
developers, and developers like to refer to sample code segments in order
to proceed with implementing their solutions. We lazy developers love to
Google search for code segments, e.g. JDBC connection example, and then
copy, paste and modify those segments. What I have been trying to do with
the series of blog posts is to make available such code segments developers
could readily use. Since this post and mail thread have generated a lot of
negative feeling and confusion, I think it is better to get rid of this
controversial blog post.

Thanks
Azeez

On Thu, Mar 31, 2016 at 6:54 PM, Afkham Azeez  wrote:

>
>
> On Thu, Mar 31, 2016 at 5:03 PM, Frank Leymann  wrote:
>
>> Yes, all the stability patterns (that Nygard describes, the circuit
>> breaker being just one of them)
>> are not associated with microservices, but apply to all distributed
>> applications. In fact, Nygard's
>> book was published in 2007, long before the microservice
>> discussion came up ;-)
>>
>> Yes Frank, agreed. With the hype about microservices, people have started
> talking about these a lot and during the evaluation phase, people look at
> features available in frameworks. I don't understand the excitement here.
> We are not saying CircuitBreaker etc. have to be used. That is as stupid as
> saying every object instantiation has to be done via a Factory.
>
>
>> Applying these patterns to each and every invocation would be a complete
>> misuse.
>>
>
> Yes, it would be very stupid for someone to design a system like that or to
> suggest something like that; like I said, it would be like asking to
> instantiate all objects using the Factory pattern!
>
> Patterns are just part of the toolkit of architects & developers. Knowing
> how to use the appropriate one at the appropriate place requires proper
> judgment. Neither this sample nor this mail thread is suggesting to use
> these everywhere, and I don't understand what gave the impression that we
> suggesting such a thing.
>
>
>> And it will very likely
>> result in performance hits...  The circuit breaker pattern, for example,
>> is recommended to be applied
>> to "out-of-spec errors", i.e. errors that you don't cover in the spec
>> (because the errors are too unlikely ;-))
>> of the invoked function or in the spec of the program making the call
>> (aka client). Often, these are errors
>> that never happen during testing unless you really set up a badly
>> behaving test environment. And it has
>> impact on the design/implementation of the circuit breaker itself (or
>> clients), for example "critical work"
>> not accepted by the circuit breaker has to be queued (by the client? by the
>> circuit breaker?) for later use
>> (automatic replay?).
>>
>> Thus, using one of the stability patterns is an (architecture/design)
>> decision with implications on other
>> components architecture/design.
>>
>> Documenting a sample use of the circuit breaker pattern should also
>> discuss these ramifications.
>>
>>
> Thanks. We will include these recommendations in our documentation.
>
>
>>
>> Best regards,
>> Frank
>>
>> 2016-03-31 9:12 GMT+02:00 Sanjiva Weerawarana :
>>
>>> Agreed. However I had understood that the circuit breaker pattern was
>>> advocated primarily for service clients in MSA (and of course it has
>>> nothing to do with being micro).
>>>
>>> The general story of better failure handling applies to all code and is
>>> of course not MSA specific.
>>>
>>> Anyway .. Sample is fine.
>>> On Mar 31, 2016 9:19 AM, "Afkham Azeez"  wrote:
>>>


 On Thu, Mar 31, 2016 at 9:04 AM, Sanjiva Weerawarana 
 wrote:

> That's why I said "fancy try catch" :-).
>
> However, are you SERIOUSLY saying that we for example should be
> wrapping all our DB access code in this stuff? If not who exactly should
> be doing this? What are the perf implications?
>

 No, I am not saying that. However, there will be use cases where people
 want to use this pattern, and this is a simplified sample that demonstrates
 how to use it. Nygard's account of how an SQL statement execution failure
 resulted in an airline's entire check-in system failing because the failure
 propagated is a good example of uncontrolled failure propagation (Release
 It, Chapter 2: Case study: The exception that grounded an airline, for
 those of you who have the book). So my example was somewhat inspired by
 that case study and is highly simplified.

 If a sample is too complicated, people get lost in the implementation
 details rather than seeing how the core concept or pattern is implemented.
>>

Re: [Architecture] [Docker] Creating a Single Docker Image for a Product

2016-03-31 Thread Imesh Gunaratne
On Thu, Mar 31, 2016 at 7:56 PM, Isuru Haththotuwa  wrote:

>
> On Thu, Mar 31, 2016 at 4:47 PM, Imesh Gunaratne  wrote:
>
>>
>> On Thu, Mar 31, 2016 at 2:05 PM, Isuru Haththotuwa 
>> wrote:
>>
>>>
>>> [2].
>>> FROM wso2am:1.10.0
>>> MAINTAINER isu...@wso2.com
>>>
>>> COPY artifacts/ /mnt/wso2-artifacts/carbon-home
>>>
>> We are not using root user, and the relevant user (wso2user) has
> permission to /mnt. Technically we can give permission to /opt as well, but
> IMHO we can have this directory in /mnt. Will change the name to
> /mnt/wso2.
>

+1 Maybe /mnt/wso2/wso2 would be more meaningful.

Thanks

>
>> Wouldn't it be better to use a simple folder structure like
>> "/usr/local/wso2/wso2/" instead of above? Apache Httpd [3],
>> Tomcat [4], JBoss [5] Dockerfiles use something similar.
>>
>> [3]
>> https://github.com/docker-library/httpd/blob/bc72e42914f671e725d85a01ff037ce87c827f46/2.4/Dockerfile#L6
>> [4]
>> https://github.com/docker-library/tomcat/blob/ed98c30c1cd42c53831f64dffa78a0abf7db8e9a/8-jre8/Dockerfile#L3
>> [5]
>> https://github.com/jboss-dockerfiles/wildfly/blob/master/Dockerfile#L7
>>
>> Thanks
>>
>>
>>>
>>>
>>> --
>>> Thanks and Regards,
>>>
>>> Isuru H.
>>> +94 716 358 048
>>>
>>>
>>>
>>
>>
>> --
>> *Imesh Gunaratne*
>> Senior Technical Lead
>> WSO2 Inc: http://wso2.com
>> T: +94 11 214 5345 M: +94 77 374 2057
>> W: http://imesh.io
>> Lean . Enterprise . Middleware
>>
>>
>
>
> --
> Thanks and Regards,
>
> Isuru H.
> +94 716 358 048
>
>
>


-- 
*Imesh Gunaratne*
Senior Technical Lead
WSO2 Inc: http://wso2.com
T: +94 11 214 5345 M: +94 77 374 2057
W: http://imesh.io
Lean . Enterprise . Middleware
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] Support Efficient Cross Tenant Analytics in DAS

2016-03-31 Thread Inosh Goonewardena
Hi,

Having to run Spark queries for each tenant can be very expensive with a
large number of tenants, but in terms of data isolation the current design is
better, I believe. If we can come up with a good design for supporting
tenant-level data isolation, this is something we can indeed do.

However, on the other hand, let's say we keep data in a single table and
process all data with a single query. Since that table contains the entire
dataset, the query could still be somewhat expensive and will take a longer
time to complete. In that case, tenants that have a small amount of data
will be affected and will have to wait longer to see their
results.

On Thu, Mar 31, 2016 at 5:25 AM, Anjana Fernando  wrote:

> Hi Srinath,
>
> I'm not sure if this is something we would have to "fix". It was a clear
> design decision we took in order to isolate the tenant data, in order for
> others not to access other tenant's data. So also in Spark virtual tables,
> it will directly map to their own analytics tables. If we allow, say, the
> super tenant to access other tenants' data, it can be seen as a security
> threat. The idea should be that no single tenant should have any special
> access to other tenants' data.
>
> So setting aside the physical representation (which has other
> complications, like adding another index for tenantId and so on, which
> should be supported by all data sources), if we are to do this, we need a
> special view for super tenant tables in Spark virtual tables, in order for
> them to have access to the "tenantId" property of that table. And in other
> tenant's tables, we need to hide this, and not let them use it of course.
> This looks like a bit of a hack to implement a specific scenario we have.
>
> So this requirement as I know mainly came from APIM analytics, where its
> in-built analytics publishes all tenant's data to super tenant's tables and
> the data is processed from there. So if we are doing this, this data is
> only used internally, and cannot be shown to each respective tenants for
> their own analytics. If each tenant needs to do their own analytics, they
> should configure to get data for their tenant space, and write their own
> analytics scripts. This may at the end mean, some type of data duplication,
> but it should happen, because two different users are doing their different
> processing. And IMO, we should not try to share any possible common data
> they may have and hack the system.
>
> At the end, the point is, we should not take lightly what we try to
> achieve in having multi-tenancy, and compromise its fundamentals. At the
> moment, the idea should be, each tenant would have their own data, its own
> analytics scripts, and if you need to scale accordingly, have separate
> hardware for those tenants. And running separate queries for different
> tenants does not necessarily make it very slow, since the data load will be
> divided between the tenants, and the only extra processing would be the
> possible ramp-up times for query executions.
>
> Cheers,
> Anjana.
>
> On Thu, Mar 31, 2016 at 11:45 AM, Srinath Perera  wrote:
>
>> Hi Anjana,
>>
>> Currently we keep a different HBase/RDBMS table per tenant. In a
>> multi-tenant environment, this is very expensive as we will have to run a
>> query per tenant.
>>
>> How can we fix this? e.g. if we keep the tenant as a field in the table,
>> that lets us do a "group by".
>>
>> --Srinath
>>
>> --
>> 
>> Blog: http://srinathsview.blogspot.com twitter:@srinath_perera
>> Site: http://home.apache.org/~hemapani/
>> Photos: http://www.flickr.com/photos/hemapani/
>> Phone: 0772360902
>>
>
>
>
> --
> *Anjana Fernando*
> Senior Technical Lead
> WSO2 Inc. | http://wso2.com
> lean . enterprise . middleware
>
> ___
> Architecture mailing list
> Architecture@wso2.org
> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>
>


-- 
Thanks & Regards,

Inosh Goonewardena
Associate Technical Lead- WSO2 Inc.
Mobile: +94779966317
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
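To illustrate Srinath's suggestion in this thread: with the tenant kept as an ordinary column in one shared table, a single query can cover all tenants. A sketch using the Spark SQL Java API; the table and column names are illustrative, not the actual DAS schema:

import org.apache.spark.sql.DataFrame;
import org.apache.spark.sql.SQLContext;

public class CrossTenantQuery {

    // One query over a shared table with an explicit tenantId column,
    // instead of one query per tenant-specific table.
    public static DataFrame perTenantCounts(SQLContext sqlContext) {
        return sqlContext.sql(
                "SELECT tenantId, COUNT(*) AS events FROM sharedEvents GROUP BY tenantId");
    }
}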


Re: [Architecture] Data Isolation level for Data from APIM and IoT? Tenant vs. User

2016-03-31 Thread Srinath Perera
Hi Frank,

I agree on your concerns.

However, the reason we thought of doing this is that the relationship between
a data item and a user is not 1-1, but rather complicated.

e.g. Problem: Bob writes an API and publishes it to API Manager. Then Alice
subscribes to the API and writes a mobile app. Charlie uses the mobile app,
which results in an API call. We need to track the API calls via DAS. And
when Bob, Alice, and possibly Charlie come to the dashboard server, they
should possibly see the transaction from their view.

The above transaction is owned by all three users: Bob, Alice, and Charlie. It
is complicated to match this to a permission model. We felt you need to
understand the context of the data generated and used (e.g. the above API
Manager scenario) to decide how to do access control. The scenario will be
very different with IoT, as the roles involved will be different, and so on.

If there is an efficient way to generalize and map permissions, we can go
with that. We felt it is complicated to solve.

--Srinath
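A hypothetical shape for the product-specific API described in the meeting summary quoted below; all names are illustrative, not an actual DAS or APIM interface:

import java.util.List;
import java.util.Map;

// Hypothetical sketch: the extended DAL layer resolves the user from the
// OAuth token and filters on the table's user-related fields before
// returning records from DAS.
public interface UserScopedAnalyticsApi {
    List<Map<String, Object>> search(String oauthToken,
                                     String tableName,
                                     String query);
}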

On Thu, Mar 31, 2016 at 4:38 PM, Frank Leymann  wrote:

> Sorry for jumping into the discussion late (or even too late).  I try to
> understand the discussion by drawing analogies to DBMS - maybe that's wrong
> and I miss the point... If I am right, what you decided in the meeting is
> fully in line with what DBMS are doing :-)
>
> In Srinath's summary, list item (2): The API then actually will be in
> charge of implementing access control. Because the API needs to decide who
> can see which data. Writing an API accessing data is like writing a SQL
> program accessing a database. Then, we are asking an SQL program to
> implement access control by itself, isn't it? This is cumbersome, and
> likely each product has to implement very similar mechanisms.
>
> In Srinath's summary, list item (1): This sounds like established practice
> in DBMS. When a SQL programmer writes a program, she must have all the
> access control rights to access the DBMS. The final program is then subject
> to access control mechanisms w.r.t. the users of the program: whoever is
> allowed to use the program somehow inherits the access rights of the
> programmer (but only in the context of this program). When identifying the SQL
> programmer with the tenant (or tenant admin), this is what (1) of the
> summary decided, correct?
>
> From Srinath's summary: "*Also this means, we will not support users
> providing their own analytics queries. Only tenant admins  can provide
> their own queries.*"  Again, identifying tenant admins with SQL
> programmers, that's exactly the paradigm.
>
>
> Best regards,
> Frank
>
> 2016-03-31 10:32 GMT+02:00 Srinath Perera :
>
>> We had a meeting. Participants: Sanjiva, Sumedha, NuwanD, Prabath,
>> Srinath (Prabath, please list others who joined from Trace)
>>
>> Problem: Bob writes an API and publishes it to API Manager. Then Alice
>> subscribes to the API and writes a mobile app. Charlie uses the mobile app,
>> which results in an API call. We need to track the API calls via DAS. And
>> when Bob, Alice, and possibly Charlie come to the dashboard server, they
>> should possibly see the transaction from their view.
>>
>> The challenge is that now there is no clear user in the above transaction.
>> Rather, there are three. So we cannot handle this generically at the DAS
>> level via a user concept. Hence, the API manager needs to put in the right
>> information when it publishes data to DAS, and show data only to the
>> relevant parties when showing and exposing data.
>>
>>
>> Solution
>>
>> [image: SecuirtyLayers.png]
>>
>>1. We will keep DAS in the current state, with support for tenants
>>but without support for users. It is aware of tenants and provides full
>>isolation between tenants. However, it is not aware of users.
>>2. Each product will write an extended receiver and DAL layer
>>that will build an API catered to its use cases. This API will support
>>login via OAuth tokens. Since they know the fields in the tables that hold
>>user data, they can filter the data based on the user.
>>3. We will run the extended DAL layers and receivers in DAS, and they
>>will talk to the DAL as an OSGi call.
>>4. The above layers will assume that users have access to an OAuth
>>token. In APIM use cases, APIM can issue tokens, and in IoT use cases,
>>the APIM that runs in the IoT server can issue tokens.
>>
>>
>> Also, this means we will not support users providing their own analytics
>> queries. Only tenant admins can provide their own queries.
>> As decided in the earlier meeting, we need APIM and IoT Server to be
>> able to publish events as the "system user", but ask DAS to place data under
>> Ann's (the related user's) account.
>>
>> Please add anything I missed.
>>
>> --Srinath
>>
>>
>>
>>
>> On Tue, Mar 29, 2016 at 11:53 AM, Srinath Perera 
>> wrote:
>> >
>> > I have scheduled a meeting tomorrow to discuss this.
>> >
>> > --Srinath
>> >
>> > On Tue, Mar 29, 2016 at 11:44 AM, Sachith Withana 
>> wrote:

Re: [Architecture] Support Efficient Cross Tenant Analytics in DAS

2016-03-31 Thread Srinath Perera
The use cases come from IoT and APIM, and maybe others. And it will be a
common use case due to our cloud.

On Thu, Mar 31, 2016 at 3:55 PM, Anjana Fernando  wrote:

> Hi Srinath,
>
> I'm not sure if this is something we would have to "fix". It was a clear
> design decision we took in order to isolate the tenant data, in order for
> others not to access other tenant's data. So also in Spark virtual tables,
> it will directly map to their own analytics tables. If we allow, say, the
> super tenant to access other tenants' data, it can be seen as a security
> threat. The idea should be that no single tenant should have any special
> access to other tenants' data.
>
> So setting aside the physical representation (which has other
> complications, like adding another index for tenantId and so on, which
> should be supported by all data sources), if we are to do this, we need a
> special view for super tenant tables in Spark virtual tables, in order for
> them to have access to the "tenantId" property of that table. And in other
> tenant's tables, we need to hide this, and not let them use it of course.
> This looks like a bit of a hack to implement a specific scenario we have.
>
> So this requirement as I know mainly came from APIM analytics, where its
> in-built analytics publishes all tenant's data to super tenant's tables and
> the data is processed from there. So if we are doing this, this data is
> only used internally, and cannot be shown to each respective tenants for
> their own analytics. If each tenant needs to do their own analytics, they
> should configure to get data for their tenant space, and write their own
> analytics scripts. This may at the end mean, some type of data duplication,
> but it should happen, because two different users are doing their different
> processing. And IMO, we should not try to share any possible common data
> they may have and hack the system.
>

Yes, results need to go to the super tenant space.


>
> At the end, the point is, we should not take lightly what we try to
> achieve in having multi-tenancy, and compromise its fundamentals. At the
> moment, the idea should be, each tenant would have their own data, its own
> analytics scripts, and if you need to scale accordingly, have separate
> hardware for those tenants. And running separate queries for different
> tenants does not necessarily make it very slow, since the data load will be
> divided between the tenants, and only extra processing would be possible
> ramp up times for query executions.
>

Multi-tenancy always has to trade off isolation against efficiency.
However, we need to find a way to support the APIM and IoT cloud use cases.


>
> Cheers,
> Anjana.
>
> On Thu, Mar 31, 2016 at 11:45 AM, Srinath Perera  wrote:
>
>> Hi Anjana,
>>
>> Currently we keep a different HBase/RDBMS table per tenant. In a
>> multi-tenant environment, this is very expensive as we will have to run a
>> query per tenant.
>>
>> How can we fix this? e.g. if we keep the tenant as a field in the table,
>> that lets us do a "group by".
>>
>> --Srinath
>>
>> --
>> 
>> Blog: http://srinathsview.blogspot.com twitter:@srinath_perera
>> Site: http://home.apache.org/~hemapani/
>> Photos: http://www.flickr.com/photos/hemapani/
>> Phone: 0772360902
>>
>
>
>
> --
> *Anjana Fernando*
> Senior Technical Lead
> WSO2 Inc. | http://wso2.com
> lean . enterprise . middleware
>



-- 

Blog: http://srinathsview.blogspot.com twitter:@srinath_perera
Site: http://home.apache.org/~hemapani/
Photos: http://www.flickr.com/photos/hemapani/
Phone: 0772360902
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] Application Server 6.0.0 - Architecture

2016-03-31 Thread Manuri Amaya Perera
Hi Kalpa,

In the request flow diagram, why are tomcat valves orthogonal to the other
valves?

Thanks.
Manuri

On Thu, Mar 31, 2016 at 3:53 PM, Kalpa Welivitigoda  wrote:

> Hi all,
>
> WSO2 Application Server 6.0.0 is based on Apache Tomcat 8.0. To
> add/enhance functionality, we have developed WSO2 modules, which are packaged
> as libraries in the application server distribution. I have listed the
> modules we have in place with a brief description.
>
>
>- HTTP statistics monitoring
>
> This feature is to monitor HTTP traffic to the server. We have
> HttpStatValve, a Tomcat valve that collects and publishes data to DAS. The
> monitoring aspect of the feature, the dashboard, is being developed with
> WSO2 Dashboard Server (the earlier dashboard was a Jaggery app).
>
>- Webapp loader
>
> This feature allows the users to configure different classloading
> environments for webapps. This can be configured globally (for all the
> webapps) or per webapp. We had this feature in carbon based AS as well. It
> is ported to AS 6.0 with some improvements. By default we have enabled CXF
> runtime in the server, meaning a user can deploy a JAX-RS webapp without
> any additional configuration in the server.
>
>- appserver-utils
>
> This module contains utils and configuration context that are to be used
> by other modules and for future extensions.
>
> We have implemented a test framework based on testng for integration
> tests. We also have introduced new descriptor files, a server descriptor
> named wso2as.xml and a webapp deployment descriptor named wso2as-web.xml.
> These descriptors have the configuration related to the above features.
> wso2as-web.xml can be used inside webapps as well, in case the
> configuration for that particular webapp (for example class loading) needs
> to differ from the server-wide configuration.
>
> With the above in mind we have come up with the component diagram and
> request flow diagram attached herewith. Any comments/suggestions?
>
>
>
>
> --
> Best Regards,
>
> Kalpa Welivitigoda
> Software Engineer, WSO2 Inc. http://wso2.com
> Email: kal...@wso2.com
> Mobile: +94776509215
>
> ___
> Architecture mailing list
> Architecture@wso2.org
> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>
>


-- 

*Manuri Amaya Perera*

*Software Engineer*

*WSO2 Inc.*

*Blog: http://manuriamayaperera.blogspot.com*
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
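For readers unfamiliar with Tomcat valves, a minimal sketch of the shape of a statistics-collecting valve such as the HttpStatValve described in this thread, using the standard Tomcat 8 Valve API. The publishing method is a placeholder, not the actual implementation:

import java.io.IOException;
import javax.servlet.ServletException;
import org.apache.catalina.connector.Request;
import org.apache.catalina.connector.Response;
import org.apache.catalina.valves.ValveBase;

public class HttpStatValveSketch extends ValveBase {

    @Override
    public void invoke(Request request, Response response)
            throws IOException, ServletException {
        long start = System.currentTimeMillis();
        getNext().invoke(request, response); // continue down the valve pipeline
        long elapsed = System.currentTimeMillis() - start;
        publish(request.getRequestURI(), response.getStatus(), elapsed);
    }

    private void publish(String uri, int status, long elapsedMillis) {
        // Placeholder: the real valve publishes these events to WSO2 DAS.
    }
}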


Re: [Architecture] ESB Debugger - Representing wire message debugging at the visual editor

2016-03-31 Thread Rajith Vitharana
Hi,

I was able to simply show the wire log (what is just written to the wire)
as a property when a debug point gets hit. But in that case there are some
drawbacks: we can't see back-end call wire logs, because we can't
set a debug point just after the back-end call happens (we can see the back-end
response but not the back-end request), and we also can't see the final response
to the client (for the same reason).

But if we can just print the wire logs (as is already done when wire logs are
enabled in the ESB) on the Developer Studio side when something gets written to
the wire, IMO that would be more usable. If we are going to do that, we will
have to figure out a way to filter the wire logs for the specific service
only (the one being debugged); otherwise it will show everything that gets
written to the wire.
WDYT?

Thanks,

On Thu, Mar 31, 2016 at 10:54 AM, Kasun Indrasiri  wrote:

> Hi,
>
> We came across a sort of mandatory requirement for the debugger: debugging
> wire-level messages from the visual editor. We need to figure out how to
> represent this visually + implement this at the core engine level (probably
> propagating this transport-level information to the mediation level).
>
> With this feature, the debugger can be used to design/debug complete end
> to end message flows in ESB.
>
> Thanks,
> --
> Kasun Indrasiri
> Software Architect
> WSO2, Inc.; http://wso2.com
> lean.enterprise.middleware
>
> cell: +94 77 556 5206
> Blog : http://kasunpanorama.blogspot.com/
>



-- 
Rajith Vitharana

Software Engineer,
WSO2 Inc. : wso2.com
Mobile : +94715883223
Blog : http://lankavitharana.blogspot.com/
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] Adding RNN to WSO2 Machine Learner

2016-03-31 Thread Thamali Wijewardhana
Hi,
I have organized a review on Monday (4th of April).

Thanks

On Thu, Mar 31, 2016 at 3:21 PM, Srinath Perera  wrote:

> Please setup a review. Shall we do it monday?
>
> On Thu, Mar 31, 2016 at 2:15 PM, Thamali Wijewardhana 
> wrote:
>
>> Hi,
>>
>> We have created a Spark program to prove the feasibility of adding the
>> RNN algorithm to the machine learner.
>> This program demonstrates all the steps in the machine learner:
>>
>> Uploading a dataset
>>
>> Selecting the hyper parameters for the model
>>
>> Creating an RNN model using data and training the model
>>
>> Calculating the accuracy of the model
>>
>> Saving the model (as a serialized object)
>>
>> Predicting using the model
>>
>> This program is based on deeplearning4j and the Apache Spark pipeline.
>> Deeplearning4j was used as the deep learning library for the recurrent neural
>> network algorithm. As the program should be based on the Spark pipeline,
>> the main challenge was to use the deeplearning4j library with the Spark
>> pipeline. The components used in the Spark pipeline should be compatible
>> with it. Components which are not compatible with the Spark
>> pipeline, we have to wrap with an org.apache.spark.predictionModel
>> object.
>>
>> We have designed a pipeline with a sequence of stages (transformers and
>> estimators):
>>
>> 1. Tokenizer (Transformer): splits each sequential datum into tokens (for
>> example, in sentiment analysis, splits text into words).
>>
>> 2. Vectorizer (Transformer): transforms features into vectors.
>>
>> 3. RNN algorithm (Estimator): the RNN algorithm, which trains on a data frame
>> and produces an RNN model.
>>
>> 4. RNN model (Transformer): transforms a data frame with features into a data
>> frame with predictions.
>>
>> The diagrams below explain the stages of the pipeline. The first diagram
>> illustrates the training usage of the pipeline and the next diagram
>> illustrates the testing and predicting usage of a pipeline.
>>
>>
>> [diagram: training usage of the pipeline]
>>
>>
>> [diagram: testing and prediction usage of the pipeline]
>>
>>
>> I have also tuned the RNN model's hyper parameters [1] and found the
>> values of the hyper parameters which optimize the accuracy of the model.
>> Given below is the set of hyper parameters relevant to the RNN algorithm and
>> the tuned values.
>>
>>
>> Number of epochs-10
>>
>> Number of iterations- 1
>>
>> Learning rate-0.02
>>
>> We used the aclImdb sentiment analysis data set for this program and with
>> the above hyper parameters, we could achieve 60% accuracy. And we are
>> trying to improve the accuracy and efficiency of our algorithm.
>>
>> [1]
>> https://docs.google.com/spreadsheets/d/1Wcta6i2k4Je_5l16wCVlH6zBMNGIb-d7USaWdbrkrSw/edit?ts=56fcdc9b#gid=2118685173
>>
>>
>> Thanks
>>
>>
>>
>> On Fri, Mar 25, 2016 at 10:18 AM, Thamali Wijewardhana 
>> wrote:
>>
>>> Hi all,
>>>
>>> One of the most important obstacles in machine learning and deep
>>> learning is getting data into a format that neural nets can understand.
>>> Neural nets understand vectors. Therefore, vectorization is an important
>>> part of building neural network algorithms.
>>>
>>> Canova is a vectorization library for machine learning which is
>>> associated with the deeplearning4j library. It is designed to support all
>>> major types of input data, such as text, CSV, image, audio, and video.
>>>
>>> In our project to add RNN to the Machine Learner, we have to use a
>>> vectorizing component to convert input data to vectors. I think that Canova
>>> is a good fit for building a generic vectorizing component. I am researching
>>> using Canova for the vectorizing purpose.
>>>
>>> Any suggestions on this are highly appreciated.
>>>
>>>
>>> Thanks
>>>
>>>
>>>
>>> On Wed, Mar 2, 2016 at 2:25 PM, Thamali Wijewardhana 
>>> wrote:
>>>
 Hi Srinath,

 We have decided to implement only classification first. Once we
 complete the classification, we hope to do next value prediction too.
 We are basically trying to implement a program to make sure that the
 deeplearning4j library we are using is compatible with the Apache Spark
 pipeline. We are also trying to demonstrate all the machine learning
 steps with that program.

 We are now using the aclImdb sentiment analysis data set to verify the
 accuracy of the RNN model we create.

 Thanks
 Thamali


 On Wed, Mar 2, 2016 at 10:38 AM, Srinath Perera 
 wrote:

> Hi Thamali,
>
>
>1. RNN can do both classification and predict next value. Are we
>trying to do both?
>2. When Upul played with it, he had trouble getting deeplearning4j
>implementation work with predict next value scenario. Is it fixed?
>3. What are the data sets we will use to verify the accuracy of
>RNN after integration?
>
>
> --Srinath
>
> On Tue, Mar 1, 2016 at 3:44 PM, Thamali Wijewardhana  > wrote:
>
>> Hi,
>>
>> Currently we are working on a project to add Recurrent Neural
>> Network(RNN) algorithm to machine learner. RNN is one of deep learning
>>
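A minimal sketch of the pipeline wiring described earlier in this thread, using the Spark ML Java API. Word2Vec as the vectorizer is an illustrative choice, and the RNN estimator (the deeplearning4j-backed stage) is passed in rather than shown, since its wrapper is the project-specific part:

import org.apache.spark.ml.Pipeline;
import org.apache.spark.ml.PipelineModel;
import org.apache.spark.ml.PipelineStage;
import org.apache.spark.ml.feature.Tokenizer;
import org.apache.spark.ml.feature.Word2Vec;
import org.apache.spark.sql.DataFrame;

public class RnnPipelineSketch {

    public static PipelineModel train(DataFrame trainingData,
                                      PipelineStage rnnEstimator) {
        // Stage 1: split each text into tokens (words).
        Tokenizer tokenizer = new Tokenizer()
                .setInputCol("text").setOutputCol("words");
        // Stage 2: turn the tokens into fixed-size feature vectors.
        Word2Vec vectorizer = new Word2Vec()
                .setInputCol("words").setOutputCol("features")
                .setVectorSize(100);
        // Stage 3: the RNN estimator; fitting it yields the RNN model
        // transformer that maps features to predictions.
        Pipeline pipeline = new Pipeline().setStages(
                new PipelineStage[]{tokenizer, vectorizer, rnnEstimator});
        return pipeline.fit(trainingData);
    }
}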

Re: [Architecture] [Docker] Creating a Single Docker Image for a Product

2016-03-31 Thread Isuru Haththotuwa
On Fri, Apr 1, 2016 at 12:45 AM, Imesh Gunaratne  wrote:

>
>
> On Thu, Mar 31, 2016 at 7:56 PM, Isuru Haththotuwa 
> wrote:
>
>>
>> On Thu, Mar 31, 2016 at 4:47 PM, Imesh Gunaratne  wrote:
>>
>>>
>>> On Thu, Mar 31, 2016 at 2:05 PM, Isuru Haththotuwa 
>>> wrote:
>>>

 [2].
 FROM wso2am:1.10.0
 MAINTAINER isu...@wso2.com

 COPY artifacts/ /mnt/wso2-artifacts/carbon-home

>>> We are not using root user, and the relevant user (wso2user) has
>> permission to /mnt. Technically we can give permission to /opt as well, but
>> IMHO we can have this directory in /mnt. Will change the name to
>> /mnt/wso2.
>>
>
> +1 Maybe /mnt/wso2/wso2 would be more meaningful.
>
IMHO since we run a single product in a container, using only 'wso2' is
enough.

>
> Thanks
>
>>
>>> Wouldn't it be better to use a simple folder structure like
>>> "/usr/local/wso2/wso2/" instead of above? Apache Httpd [3],
>>> Tomcat [4], JBoss [5] Dockerfiles use something similar.
>>>
>>> [3]
>>> https://github.com/docker-library/httpd/blob/bc72e42914f671e725d85a01ff037ce87c827f46/2.4/Dockerfile#L6
>>> [4]
>>> https://github.com/docker-library/tomcat/blob/ed98c30c1cd42c53831f64dffa78a0abf7db8e9a/8-jre8/Dockerfile#L3
>>> [5]
>>> https://github.com/jboss-dockerfiles/wildfly/blob/master/Dockerfile#L7
>>>
>>> Thanks
>>>
>>>


 --
 Thanks and Regards,

 Isuru H.
 +94 716 358 048



>>>
>>>
>>> --
>>> *Imesh Gunaratne*
>>> Senior Technical Lead
>>> WSO2 Inc: http://wso2.com
>>> T: +94 11 214 5345 M: +94 77 374 2057
>>> W: http://imesh.io
>>> Lean . Enterprise . Middleware
>>>
>>>
>>
>>
>> --
>> Thanks and Regards,
>>
>> Isuru H.
>> +94 716 358 048
>>
>>
>>
>
>
> --
> *Imesh Gunaratne*
> Senior Technical Lead
> WSO2 Inc: http://wso2.com
> T: +94 11 214 5345 M: +94 77 374 2057
> W: http://imesh.io
> Lean . Enterprise . Middleware
>
>


-- 
Thanks and Regards,

Isuru H.
+94 716 358 048
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] Application Server 6.0.0 - Architecture

2016-03-31 Thread Kalpa Welivitigoda
Hi Manuri,



On Fri, Apr 1, 2016 at 9:23 AM, Manuri Amaya Perera 
wrote:

> Hi Kalpa,
>
> In the request flow diagram, why are tomcat valves orthogonal to the other
> valves?
>
>
"Tomcat Valves" represents that the above are Tomcat valves, the boarder of
"Tomcat valves" makes it confusing, I agree. Thanks for the feedback.

Also "Apache Tomcat" needs to be renamed to something like "Apache Tomcat
engine" or "servlet/JSP engine", because Tomcat valves are also a part of
Tomcat Server.


> Thanks.
> Manuri
>
> On Thu, Mar 31, 2016 at 3:53 PM, Kalpa Welivitigoda 
> wrote:
>
>> Hi all,
>>
>> WSO2 Application Server 6.0.0 is based on Apache Tomcat 8.0. To
>> add/enhance functionality, we have developed WSO2 modules, which are packaged
>> as libraries in the application server distribution. I have listed the
>> modules we have in place with a brief description.
>>
>>
>>- HTTP statistics monitoring
>>
>> This feature is to monitor HTTP traffic to the server. We have
>> HttpStatValve, a Tomcat valve that collects and publishes data to DAS. The
>> monitoring aspect of the feature, the dashboard, is being developed with
>> WSO2 Dashboard Server (the earlier dashboard was a Jaggery app).
>>
>>- Webapp loader
>>
>> This feature allows the users to configure different classloading
>> environments for webapps. This can be configured globally (for all the
>> webapps) or per webapp. We had this feature in carbon based AS as well. It
>> is ported to AS 6.0 with some improvements. By default we have enabled CXF
>> runtime in the server, meaning a user can deploy a JAX-RS webapp without
>> any additional configuration in the server.
>>
>>- appserver-utils
>>
>> This module contains utils and configuration context that are to be used
>> by other modules and for future extensions.
>>
>> We have implemented a test framework based on testng for integration
>> tests. We also have introduced new descriptor files, a server descriptor
>> named wso2as.xml and a webapp deployment descriptor named wso2as-web.xml.
>> These descriptors have the configuration related to the above features.
>> wso2as-web.xml can be used inside webapps as well, in case the
>> configuration for that particular webapp (for example class loading) needs
>> to differ from the server-wide configuration.
>>
>> With the above in mind we have come up with the component diagram and
>> request flow diagram attached herewith. Any comments/suggestions?
>>
>>
>>
>>
>> --
>> Best Regards,
>>
>> Kalpa Welivitigoda
>> Software Engineer, WSO2 Inc. http://wso2.com
>> Email: kal...@wso2.com
>> Mobile: +94776509215
>>
>> ___
>> Architecture mailing list
>> Architecture@wso2.org
>> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>>
>>
>
>
> --
>
> *Manuri Amaya Perera*
>
> *Software Engineer*
>
> *WSO2 Inc.*
>
> *Blog: http://manuriamayaperera.blogspot.com*
>



-- 
Best Regards,

Kalpa Welivitigoda
Software Engineer, WSO2 Inc. http://wso2.com
Email: kal...@wso2.com
Mobile: +94776509215
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] Application Server 6.0.0 - Architecture

2016-03-31 Thread Kishanthan Thangarajah
All the valves and extensions are part of Tomcat. We can show the runtime
engine separately, and we do not need to show the request flow on the same
diagram but focus only on the core components and where they fit in the
architecture.

The valves should be correctly named. What we have is a SAML-based SSO
valve, whereas Tomcat itself also provides an SSO valve.

On Fri, Apr 1, 2016 at 10:21 AM, Kalpa Welivitigoda  wrote:

> Hi Manuri,
>
>
>
> On Fri, Apr 1, 2016 at 9:23 AM, Manuri Amaya Perera 
> wrote:
>
>> Hi Kalpa,
>>
>> In the request flow diagram, why are tomcat valves orthogonal to the
>> other valves?
>>
>>
> "Tomcat Valves" represents that the above are Tomcat valves, the boarder
> of "Tomcat valves" makes it confusing, I agree. Thanks for the feedback.
>
> Also "Apache Tomcat" needs to be renamed to something like "Apache Tomcat
> engine" or "servlet/JSP engine", because Tomcat valves are also a part of
> Tomcat Server.
>
>
>> Thanks.
>> Manuri
>>
>> On Thu, Mar 31, 2016 at 3:53 PM, Kalpa Welivitigoda 
>> wrote:
>>
>>> Hi all,
>>>
>>> WSO2 Application Server 6.0.0 is based on Apache Tomcat 8.0. To
>>> add/enhance functionality, we have developed WSO2 modules, which are packaged
>>> as libraries in the application server distribution. I have listed the
>>> modules we have in place with a brief description.
>>>
>>>
>>>- HTTP statistics monitoring
>>>
>>> This feature is to monitor HTTP traffic to the server. We have
>>> HttpStatValve, a Tomcat valve that collects and publishes data to DAS. The
>>> monitoring aspect of the feature, the dashboard, is being developed with
>>> WSO2 Dashboard Server (the earlier dashboard was a Jaggery app).
>>>
>>>- Webapp loader
>>>
>>> This feature allows the users to configure different classloading
>>> environments for webapps. This can be configured globally (for all the
>>> webapps) or per webapp. We had this feature in carbon based AS as well. It
>>> is ported to AS 6.0 with some improvements. By default we have enabled CXF
>>> runtime in the server, meaning a user can deploy a JAX-RS webapp without
>>> any additional configuration in the server.
>>>
>>>- appserver-utils
>>>
>>> This module contains utils and configuration context that are to be used
>>> by other modules and for future extensions.
>>>
>>> We have implemented a test framework based on testng for integration
>>> tests. We also have introduced new descriptor files, a server descriptor
>>> named wso2as.xml and a webapp deployment descriptor named wso2as-web.xml.
>>> These descriptors have the configuration related to the above features.
>>> wso2as-web.xml can be used inside webapps as well, in case the
>>> configuration for that particular webapp (for example class loading) needs
>>> to differ from the server-wide configuration.
>>>
>>> With the above in mind we have come up with the component diagram and
>>> request flow diagram attached herewith. Any comments/suggestions?
>>>
>>>
>>>
>>>
>>> --
>>> Best Regards,
>>>
>>> Kalpa Welivitigoda
>>> Software Engineer, WSO2 Inc. http://wso2.com
>>> Email: kal...@wso2.com
>>> Mobile: +94776509215
>>>
>>> ___
>>> Architecture mailing list
>>> Architecture@wso2.org
>>> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>>>
>>>
>>
>>
>> --
>>
>> *Manuri Amaya Perera*
>>
>> *Software Engineer*
>>
>> *WSO2 Inc.*
>>
>> *Blog: http://manuriamayaperera.blogspot.com*
>>
>
>
>
> --
> Best Regards,
>
> Kalpa Welivitigoda
> Software Engineer, WSO2 Inc. http://wso2.com
> Email: kal...@wso2.com
> Mobile: +94776509215
>
> ___
> Architecture mailing list
> Architecture@wso2.org
> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>
>


-- 
*Kishanthan Thangarajah*
Associate Technical Lead,
Platform Technologies Team,
WSO2, Inc.
lean.enterprise.middleware

Mobile - +94773426635
Blog - *http://kishanthan.wordpress.com*
Twitter - *http://twitter.com/kishanthan*
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] [Docker] Creating a Single Docker Image for a Product

2016-03-31 Thread Imesh Gunaratne
On Fri, Apr 1, 2016 at 10:18 AM, Isuru Haththotuwa  wrote:

>
>
> On Fri, Apr 1, 2016 at 12:45 AM, Imesh Gunaratne  wrote:
>
>>
>>
>> On Thu, Mar 31, 2016 at 7:56 PM, Isuru Haththotuwa 
>> wrote:
>>
>>>
>>> On Thu, Mar 31, 2016 at 4:47 PM, Imesh Gunaratne  wrote:
>>>

 On Thu, Mar 31, 2016 at 2:05 PM, Isuru Haththotuwa 
 wrote:

>
> [2].
> FROM wso2am:1.10.0
> MAINTAINER isu...@wso2.com
>
> COPY artifacts/ /mnt/wso2-artifacts/carbon-home
>
 We are not using root user, and the relevant user (wso2user) has
>>> permission to /mnt. Technically we can give permission to /opt as well, but
>>> IMHO we can have this directory in /mnt. Will change the name to
>>> /mnt/wso2.
>>>
>>
>> +1 Maybe /mnt/wso2/wso2 would be more meaningful.
>>
> IMHO since we run a single product in a container, using only 'wso2' is
> enough.
>

+1


>
>> Thanks
>>
>>>
 Wouldn't it be better to use a simple folder structure like
 "/usr/local/wso2/wso2/" instead of above? Apache Httpd [3],
 Tomcat [4], JBoss [5] Dockerfiles use something similar.

 [3]
 https://github.com/docker-library/httpd/blob/bc72e42914f671e725d85a01ff037ce87c827f46/2.4/Dockerfile#L6
 [4]
 https://github.com/docker-library/tomcat/blob/ed98c30c1cd42c53831f64dffa78a0abf7db8e9a/8-jre8/Dockerfile#L3
 [5]
 https://github.com/jboss-dockerfiles/wildfly/blob/master/Dockerfile#L7

 Thanks


>
>
> --
> Thanks and Regards,
>
> Isuru H.
> +94 716 358 048
>
>
>


 --
 *Imesh Gunaratne*
 Senior Technical Lead
 WSO2 Inc: http://wso2.com
 T: +94 11 214 5345 M: +94 77 374 2057
 W: http://imesh.io
 Lean . Enterprise . Middleware


>>>
>>>
>>> --
>>> Thanks and Regards,
>>>
>>> Isuru H.
>>> +94 716 358 048
>>>
>>>
>>>
>>
>>
>> --
>> *Imesh Gunaratne*
>> Senior Technical Lead
>> WSO2 Inc: http://wso2.com
>> T: +94 11 214 5345 M: +94 77 374 2057
>> W: http://imesh.io
>> Lean . Enterprise . Middleware
>>
>>
>
>
> --
> Thanks and Regards,
>
> Isuru H.
> +94 716 358 048
>
>
>


-- 
*Imesh Gunaratne*
Senior Technical Lead
WSO2 Inc: http://wso2.com
T: +94 11 214 5345 M: +94 77 374 2057
W: http://imesh.io
Lean . Enterprise . Middleware
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] Deployable Artifact Model

2016-03-31 Thread Supun Sethunga
[+ arch@]

Hi,

IMO, we have to go with separate artifacts, for ease of
maintainability. For example, if we have separate artifacts (say, for projects
and analyses):

   - One can easily add and remove analyses at any time from a project,
   just by adding/deleting the file that corresponds to that artifact.
   - Since it doesn't require updating any existing files, that also
   eliminates the possibility of affecting (rather, harming) existing
   projects/analyses.
   - This also means one corrupted analysis (or project, or any other
   'module') would not affect other analyses.

Regards,
Supun

On Fri, Apr 1, 2016 at 10:51 AM, Nethaji Chandrasiri 
wrote:

> Hi,
>
> Since I'm working on the deployable artifact model scenario, I made a list [1]
> of the pros and cons of both the approaches I found so far.
>
>
> https://docs.google.com/a/wso2.com/spreadsheets/d/1Lm5xSmXOG1dDEXGPOthI7nnsjwjKt-apwj77SjxB3Fg/edit?usp=sharing
>
> --
> *Nethaji Chandrasiri*
> *Software Engineering Intern*; WSO2, Inc.; http://wso2.com
> Mobile : +94 (0) 779171059
> Email  : neth...@wso2.com
>



-- 
*Supun Sethunga*
Software Engineer
WSO2, Inc.
http://wso2.com/
lean | enterprise | middleware
Mobile : +94 716546324
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture