[Architecture] WSO2 Data Analytics Server 4.0.0-M2 Released !

2017-04-27 Thread Sriskandarajah Suhothayan
Hi All,

The WSO2 Smart Analytics team is pleased to announce the release of WSO2
Data Analytics Server version 4.0.0 Milestone 2.

WSO2 Smart Analytics lets digital businesses create real-time, intelligent,
actionable business insights and data products, powered by WSO2 Data
Analytics Server's real-time, incremental & intelligent data processing
capabilities.

WSO2 DAS can:

   - Receive events from various data sources
   - Process & correlate them in real time with the state-of-the-art,
   high-performance Siddhi Complex Event Processing engine and its
   easy-to-learn SQL-like query language
   - Run analyses that span longer time durations with its incremental
   processing capability, achieving high performance at low
   infrastructure cost
   - Use machine learning and other models to derive intelligent insights
   from the data
   - Notify of interesting event occurrences as alerts via multiple
   transports & let users visualize the results via customizable dashboards

WSO2 DAS is released under the Apache Software License Version 2.0, one of
the most business-friendly licenses available today.

You can find the product at
https://github.com/wso2/product-das/releases/download/v4.0.0-M2/wso2das-4.0.0-M2.zip
Documentation at https://docs.wso2.com/display/DAS400/
Source code at https://github.com/wso2/product-das/releases/tag/v4.0.0-M2

WSO2 DAS 4.0.0-M2 includes the following new features.

New Features

   - XML input and output mapping support
   - Siddhi syntax highlighting and query autocompletion for Data Analytics
   Editor
   - Better support for in-memory table indexing with @PrimaryKey and
   @Index annotations
   - Event simulator to send a single event and to simulate .csv files

Reporting *Issues*
Issues can be reported using the public JIRA available at
https://wso2.org/jira/browse/DAS
Contact us
WSO2 Data Analytics Server developers can be contacted via the mailing
lists:

   Developer List : d...@wso2.org | Subscribe | Mail Archive


Alternatively, questions can also be raised on StackOverflow:
*Forum* http://stackoverflow.com/questions/tagged/wso2/

Support

We are committed to ensuring that your enterprise middleware deployment is
completely supported from evaluation to production. Our unique approach
ensures that all support leverages our open development methodology and is
provided by the very same engineers who build the technology.

For more details and to take advantage of this unique opportunity please
visit http://wso2.com/support/.

For more information on WSO2 Smart Analytics and Smart Analytics Solutions,
visit the WSO2 Smart Analytics Page .
*- The WSO2 Smart Analytics Team -*
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] [CEP] Behavior of AND in Siddhi pattern

2017-05-22 Thread Sriskandarajah Suhothayan
+1 for the change.

On Tue, May 23, 2017 at 1:28 AM, Gobinath  wrote:

> Hi all,
>
> I have encountered an issue with existing Siddhi pattern unit test
> case [1]. Consider the following Siddhi query and the inputs:
>
> *Query:*
>
> from e1=Stream1[price > 20] -> e2=Stream2[price > e1.price] and
> e3=Stream2['IBM' == symbol]
> select e1.symbol as symbol1, e2.price as price2, e3.price as price3
> insert into OutputStream;
>
> *Given input:*
> stream1.send(new Object[]{"WSO2", 55.6f, 100});
> stream2.send(new Object[]{"IBM", 72.7f, 100});
> stream2.send(new Object[]{"IBM", 75.7f, 100});
>
> *Actual Output:* "WSO2", 72.7f, 72.7f
>
> *What I Expect:* "WSO2", 72.7f, *75.7f*
>
> As you can see, I expect two different events to satisfy the 'and'
> operation in the pattern, but in Siddhi the 'and' condition is satisfied
> by a single event. I have tested a similar query in another CEP, which
> produces the output I expect (i.e. "WSO2", 72.7f, 75.7f).
>
> Could you please clarify whether the current design decision makes sense
> or not?
>
>
> [1] https://github.com/wso2/siddhi/blob/master/modules/
> siddhi-core/src/test/java/org/wso2/siddhi/core/query/pattern/
> LogicalPatternTestCase.java#L277
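The two interpretations can be contrasted with a small standalone sketch (plain JavaScript, not Siddhi; the event shape and the `matchAnd` helper are assumptions used only to illustrate the semantics under discussion):

```javascript
// Contrast the two possible semantics of 'and' in the pattern above.
function matchAnd(events, e1Price, { distinct }) {
  let e2 = null, e3 = null;
  for (const e of events) {
    const consumedByE2 = e2 === null && e.price > e1Price;
    if (consumedByE2) e2 = e.price;
    // Under "distinct" semantics an event consumed by one branch cannot
    // also satisfy the other branch in the same step.
    if (e3 === null && e.symbol === 'IBM' && !(distinct && consumedByE2)) {
      e3 = e.price;
    }
    if (e2 !== null && e3 !== null) break;
  }
  return [e2, e3];
}

const stream2 = [
  { symbol: 'IBM', price: 72.7 },
  { symbol: 'IBM', price: 75.7 },
];

// Current Siddhi behaviour: one event may satisfy both branches.
console.log(matchAnd(stream2, 55.6, { distinct: false })); // [ 72.7, 72.7 ]
// Behaviour expected in this mail: each branch consumes its own event.
console.log(matchAnd(stream2, 55.6, { distinct: true }));  // [ 72.7, 75.7 ]
```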
>
> Thanks & Regards,
> Gobinath
>
> --
> *Gobinath** Loganathan*
> Graduate Student,
> Electrical and Computer Engineering,
> Western University.
> Email  : slgobin...@gmail.com
> Blog: javahelps.com 
>
>



-- 

*S. Suhothayan*
Associate Director / Architect
*WSO2 Inc. *http://wso2.com
* *
lean . enterprise . middleware


*cell: (+94) 779 756 757 | blog: http://suhothayan.blogspot.com/
twitter: http://twitter.com/suhothayan
 | linked-in:
http://lk.linkedin.com/in/suhothayan *
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] [IS 6.0.0] Email Management Component Implementation

2017-06-01 Thread Sriskandarajah Suhothayan
Hi All

The DAS team is about to start the C5-based email transport implementation
for DAS.
Have we already implemented a common component for email?
What's the status of this?
If so, is it a carbon transport or a common component?

Regards
Suho


On Tue, Jan 24, 2017 at 10:50 AM, Ayesha Dissanayaka 
wrote:

>
> On Tue, Jan 24, 2017 at 10:39 AM, Ishara Karunarathna 
> wrote:
>
>> Shall we create a template folder inside configs and put relevant configs
>> there
>>
>> config/
>> └── templates/
>>   └── email/
>>
>
> +1
>
>
> --
> *Ayesha Dissanayaka*
> Software Engineer,
> WSO2, Inc : http://wso2.com
> 
> 20, Palmgrove Avenue, Colombo 3
> E-Mail: aye...@wso2.com 
>
> ___
> Architecture mailing list
> Architecture@wso2.org
> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>
>


-- 

*S. Suhothayan*
Associate Director / Architect
*WSO2 Inc. *http://wso2.com
* *
lean . enterprise . middleware


*cell: (+94) 779 756 757 | blog: http://suhothayan.blogspot.com/
twitter: http://twitter.com/suhothayan
 | linked-in:
http://lk.linkedin.com/in/suhothayan *
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] [IS 6.0.0] Email Management Component Implementation

2017-06-01 Thread Sriskandarajah Suhothayan
Thanks, we will also use the same and use Javax Mail for sending mails.

Suho

On Thu, Jun 1, 2017 at 5:26 PM Lahiru Manohara  wrote:

> Hi Suho,
>
> We have not implemented the email transport for IS 6.0.0 M3. We have
> implemented custom template adding and used Javax Mail to send the
> email [1].
>
> 1.
> https://github.com/wso2-extensions/identity-event-handler-notification/tree/C5/components/email-mgt/org.wso2.carbon.email.mgt
>
> Best Regards,
>
> On Thu, Jun 1, 2017 at 4:49 PM, Sriskandarajah Suhothayan 
> wrote:
>
>> Hi All
>>
>> The DAS team is about to start the C5-based email transport
>> implementation for DAS.
>> Have we already implemented a common component for email?
>> What's the status of this?
>> If so, is it a carbon transport or a common component?
>>
>> Regards
>> Suho
>>
>>
>> On Tue, Jan 24, 2017 at 10:50 AM, Ayesha Dissanayaka 
>> wrote:
>>
>>>
>>> On Tue, Jan 24, 2017 at 10:39 AM, Ishara Karunarathna 
>>> wrote:
>>>
>>>> Shall we create a template folder inside configs and put relevant
>>>> configs there
>>>>
>>>> config/
>>>> └── templates/
>>>>   └── email/
>>>>
>>>
>>> +1
>>>
>>>
>>> --
>>> *Ayesha Dissanayaka*
>>> Software Engineer,
>>> WSO2, Inc : http://wso2.com
>>> <http://www.google.com/url?q=http%3A%2F%2Fwso2.com&sa=D&sntz=1&usg=AFQjCNEZvyc0uMD1HhBaEGCBxs6e9fBObg>
>>> 20, Palmgrove Avenue, Colombo 3
>>> E-Mail: aye...@wso2.com 
>>>
>>> ___
>>> Architecture mailing list
>>> Architecture@wso2.org
>>> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>>>
>>>
>>
>>
>> --
>>
>> *S. Suhothayan*
>> Associate Director / Architect
>> *WSO2 Inc. *http://wso2.com
>> * <http://wso2.com/>*
>> lean . enterprise . middleware
>>
>>
>> *cell: (+94) 779 756 757 <+94%2077%20975%206757> | blog:
>> http://suhothayan.blogspot.com/ <http://suhothayan.blogspot.com/>twitter:
>> http://twitter.com/suhothayan <http://twitter.com/suhothayan> | linked-in:
>> http://lk.linkedin.com/in/suhothayan <http://lk.linkedin.com/in/suhothayan>*
>>
>
>
>
> --
> *Lahiru Manohara*
> *Software Engineer*
> Mobile: +94716561576
> WSO2 Inc. | http://wso2.com
> lean.enterprise.middleware
>
> --

*S. Suhothayan*
Associate Director / Architect
*WSO2 Inc. *http://wso2.com
* <http://wso2.com/>*
lean . enterprise . middleware


*cell: (+94) 779 756 757 | blog: http://suhothayan.blogspot.com/
<http://suhothayan.blogspot.com/>twitter: http://twitter.com/suhothayan
<http://twitter.com/suhothayan> | linked-in:
http://lk.linkedin.com/in/suhothayan <http://lk.linkedin.com/in/suhothayan>*
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] [Dashboard] Introducing a base widget component

2017-08-31 Thread Sriskandarajah Suhothayan
+1, I think this will also help us to manage user-prefs in the future.

On Thu, Aug 31, 2017 at 1:44 PM, Lasantha Samarakoon 
wrote:

> Hi all,
>
> In the new React-based dashboard, a widget is defined as another React
> component. ATM these widgets are basic React components which extend the
> React.Component class.
>
> But within these widgets we need to provide some extra capabilities for
> widget developers, such as a mechanism for widgets to inter-communicate
> via pub/sub, APIs to save widget states, etc. There is also another issue
> with the current implementation: the CSS styles embedded within a widget
> may conflict with styles applied to the dashboard and other widgets (no
> CSS isolation).
>
> *Solution:*
>
> As a common solution for these requirements, we thought of introducing a
> base widget class so that all other widgets can extend this base class
> instead of React.Component. By introducing this base class we can provide
> additional capabilities to widget developers.
>
> Ex. 1) Pub/sub
>
> For the pub/sub implementation, the base widget component can expose
> methods and events for widgets to use (methods for publishing/subscribing
> to topics).
>
> Ex. 2) CSS isolation
>
> For CSS isolation we can isolate the widget content using a React shadow
> DOM. For that we can introduce a new method called renderWidget() in the
> widget base class, and all widgets can define renderWidget() instead of
> React's render() method. Within the render() method of the base class we
> can invoke the renderWidget() method and wrap the resultant content using
> the React shadow DOM. A high-level implementation of those components is
> as follows.
>
> *Widget base component:*
>
> class Widget extends React.Component {
>     /* ... */
>     render() {
>         return (
>             <ShadowDOM>
>                 {this.renderWidget()}
>             </ShadowDOM>
>         );
>     }
>     /* ... */
> }
>
>
> *Widget component:*
>
> class MyWidget extends Widget {
>     /* ... */
>     renderWidget() {
>         return <div>{/* widget content */}</div>;
>     }
>     /* ... */
> }
>
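The proposed pub/sub capability can be sketched as a standalone example (the `BaseWidget` name, `topicSubscribers` map, and method signatures are assumptions; the React wiring is omitted):

```javascript
// Shared registry backing the base class; in the real component this would
// live inside the dashboard runtime rather than a module-level map.
const topicSubscribers = new Map();

class BaseWidget {
  // Register a callback for a topic.
  subscribe(topic, callback) {
    if (!topicSubscribers.has(topic)) topicSubscribers.set(topic, new Set());
    topicSubscribers.get(topic).add(callback);
  }

  // Deliver a message to every subscriber of the topic.
  publish(topic, message) {
    for (const callback of topicSubscribers.get(topic) || []) {
      callback(message);
    }
  }
}

// Usage: one widget publishes a filter change, another reacts to it.
const received = [];
const publisher = new BaseWidget();
const subscriber = new BaseWidget();
subscriber.subscribe('filters', (msg) => received.push(msg));
publisher.publish('filters', { country: 'LK' });
console.log(received); // [ { country: 'LK' } ]
```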
>
> Any feedback on this?
>
> Regards,
>
> *Lasantha Samarakoon* | Software Engineer
> WSO2, Inc.
> #20, Palm Grove, Colombo 03, Sri Lanka
> Mobile: +94 (71) 214 1576 <071%20214%201576>
> Email:  lasant...@wso2.com
> Web:www.wso2.com
>
> lean . enterprise . middleware
>



-- 

*S. Suhothayan*
Associate Director / Architect
*WSO2 Inc. *http://wso2.com
* *
lean . enterprise . middleware


*cell: (+94) 779 756 757 <077%20975%206757> | blog:
http://suhothayan.blogspot.com/ twitter:
http://twitter.com/suhothayan  | linked-in:
http://lk.linkedin.com/in/suhothayan *
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] [Dev] WSO2 Stream Processor 4.0.0-M11 Released !

2017-09-07 Thread Sriskandarajah Suhothayan
On Fri, Sep 8, 2017 at 4:55 AM Anusha Jayasundara  wrote:

> Hi All,
>
>
> The WSO2 Analytics team is pleased to announce the release of *WSO2
> Stream Processor Version 4.0.0 Milestone 11*.
>
> WSO2 Smart Analytics lets digital businesses create real-time,
> intelligent, actionable business insights and data products, powered by
> WSO2 Stream Processor's real-time, incremental & intelligent data
> processing capabilities.
>
> WSO2 Stream Processor can:
>
>    - Receive events from various data sources
>    - Process & correlate them in real time with the state-of-the-art,
>    high-performance Siddhi Complex Event Processing engine and its
>    easy-to-learn SQL-like query language
>    - Run analyses that span longer time durations with its incremental
>    processing capability, achieving high performance at low
>    infrastructure cost
>    - Use machine learning and other models to derive intelligent insights
>    from the data
>    - Notify of interesting event occurrences as alerts via multiple types
>    of transport & let users visualize the results via customizable
>    dashboards
>    - WSO2 SP is released under the Apache Software License Version 2.0,
>    one of the most business-friendly licenses available today
>
>
> You can find the product at 
> *https://github.com/wso2/product-sp/releases/download/v4.0.0-M11/wso2sp-4.0.0-M11.zip
> *
> Documentation at *https://docs.wso2.com/display/SP400/Introduction*/
> 
> Source code at *https://github.com/wso2/product-sp
> /*
>
> *WSO2 SP 4.0.0-M11 includes the following*
>
> *New Features*
>
>    - Reliable message processing with Kafka
>    - Non-occurrence of events in Siddhi patterns
>    - Multiple primary-key support for in-memory tables
>    - Streaming ML
>       - Hoeffding classifier
>       - K-means clustering
>    - Editor improvements
>    - Static dashboard
>
> *Reporting Issues*
>
> Issues can be reported using the github issue tracker available at
> https://github.com/wso2/product-sp
> 
> *Contact us*
>
> WSO2 Stream Processor developers can be contacted via the mailing lists:
>
> Developer List : d...@wso2.org | Subscribe | Mail Archive
>
>
> Alternatively, questions can also be raised on StackOverflow:
>
> Forum http://stackoverflow.com/questions/tagged/wso2/
>
>
> *Support *
>
> We are committed to ensuring that your enterprise middleware deployment is
> completely supported from evaluation to production. Our unique approach
> ensures that all support leverages our open development methodology and is
> provided by the very same engineers who build the technology.
>
> For more details and to take advantage of this unique opportunity please
> visit http://wso2.com/support/.  
>
> For more information on WSO2 Smart Analytics and
> Smart Analytics Solutions, visit the WSO2 Smart Analytics Page
> .
>
>
>
> *~ The WSO2 Analytics Team ~*
> --
>
> *Anusha Jayasundara*
> Software Engineer | WSO2
>
> Email : anus...@wso2.com
> Mobile : +94772601160
> Web : http://wso2.com
> Lean.Enterprise.Middleware
> 
> ___
> Dev mailing list
> d...@wso2.org
> http://wso2.org/cgi-bin/mailman/listinfo/dev
>
-- 

*S. Suhothayan*
Associate Director / Architect
*WSO2 Inc. *http://wso2.com
* *
lean . enterprise . middleware


*cell: (+94) 779 756 757 | blog: http://suhothayan.blogspot.com/
twitter: http://twitter.com/suhothayan
 | linked-in:
http://lk.linkedin.com/in/suhothayan *
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] Archetype for Siddhi extension.

2017-09-21 Thread Sriskandarajah Suhothayan
Why are we asking -DtypeOfIO=?
I thought it was asked during interactive mode?
What are some possible values for this?

Regards
Suho



On Thu, Sep 21, 2017 at 12:55 PM, Kalaiyarasi Ganeshalingam <
kalaiyar...@wso2.com> wrote:

> Hi all,
>
> I have implemented the archetype for Siddhi extensions. The archetype
> automatically sets up all directories and project files for a new Maven
> project for a Siddhi extension. This is a multi-module archetype which
> contains the siddhi-io, siddhi-map, siddhi-execution and siddhi-store
> archetypes. It can be used for creating Siddhi extension templates in
> Maven projects. Users are able to generate templates by passing
> parameters like group id, artifact id, version and package names.
>
> For instance, to create a siddhi-io template, a user can execute the
> following command:
>
> mvn archetype:generate -DarchetypeGroupId=<archetype group_id>
> -DarchetypeArtifactId=<archetype artifact_id> -DarchetypeVersion=<archetype
> version> -DgroupId=<group_id> -Dversion=<version> -DtypeOfIO=<IO type>
>
> The given example creates the siddhi-extension-io template by passing the
> mentioned parameters.
>
> The structure of a newly generated project template is given below.
>
> └── siddhi-io-file
> ├── component
> │   ├── pom.xml
> │   └── src
> │   ├── main
> │   │   ├── java
> │   │   │   └── org
> │   │   │   └── wso2
> │   │   │   └── extension
> │   │   │   └── siddhi
> │   │   │   └── io
> │   │   │   └── file
> │   │   │   ├── sink
> │   │   │   │   └── FileSink.java
> │   │   │   └── source
> │   │   │   └── FileSource.java
> │   │   └── resources
> │   │   └── log4j.properties
> │   └── test
> │   └── java
> │   └── org
> │   └── wso2
> │   └── extension
> │   └── siddhi
> │   └── io
> │   └── file
> │   ├── sink
> │   │   └── TestCaseOfFileSink.java
> │ └── source
> │   └──
> TestCaseOfFileSource.java
> └── pom.xml
>
>
> Regards,
> Kalaiyarasi Ganeshalingam
> Associate Software Engineer| WSO2
> WSO2 Inc : http://wso2.org
> 
> Tel:+94 076 6792895 <+94%2076%20679%202895>
> LinkedIn :www.linkedin.com/in/kalaiyarasiganeshalingam
> Blogs : http://kalai4.blogspot.com/
>



-- 

*S. Suhothayan*
Associate Director / Architect
*WSO2 Inc. *http://wso2.com
* *
lean . enterprise . middleware


*cell: (+94) 779 756 757 | blog: http://suhothayan.blogspot.com/
twitter: http://twitter.com/suhothayan
 | linked-in:
http://lk.linkedin.com/in/suhothayan *
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] Archetype for Siddhi extension.

2017-09-21 Thread Sriskandarajah Suhothayan
OK, noted.


On Thu, Sep 21, 2017 at 1:58 PM, Kalaiyarasi Ganeshalingam <
kalaiyar...@wso2.com> wrote:

> Hi ,
>
> Yes, it is asked during interactive mode. The reason I mentioned it here
> is that this parameter is mandatory to create the template.
> Possible values are file, mqtt, http, tcp, jms, email, etc.
>
> Regards,
>
> Kalaiyarasi Ganeshalingam
> Associate Software Engineer| WSO2
> WSO2 Inc : http://wso2.org
> <http://www.google.com/url?q=http%3A%2F%2Fwso2.org&sa=D&sntz=1&usg=AFQjCNE_eTDfyl2ibPcq0hcXvRDNVuQmMg>
> Tel:+94 076 6792895 <076%20679%202895>
> LinkedIn :www.linkedin.com/in/kalaiyarasiganeshalingam
> Blogs : http://kalai4.blogspot.com/
>
> On Thu, Sep 21, 2017 at 1:20 PM, Sriskandarajah Suhothayan 
> wrote:
>
>> Why are we asking -DtypeOfIO=?
>> I thought it was asked during interactive mode?
>> What are some possible values for this?
>>
>> Regards
>> Suho
>>
>>
>>
>> On Thu, Sep 21, 2017 at 12:55 PM, Kalaiyarasi Ganeshalingam <
>> kalaiyar...@wso2.com> wrote:
>>
>>> Hi all,
>>>
>>> I have implemented the archetype for Siddhi extensions. The archetype
>>> automatically sets up all directories and project files for a new Maven
>>> project for a Siddhi extension. This is a multi-module archetype which
>>> contains the siddhi-io, siddhi-map, siddhi-execution and siddhi-store
>>> archetypes. It can be used for creating Siddhi extension templates in
>>> Maven projects. Users are able to generate templates by passing
>>> parameters like group id, artifact id, version and package names.
>>>
>>> For instance, to create a siddhi-io template, a user can execute the
>>> following command:
>>>
>>> mvn archetype:generate -DarchetypeGroupId=<archetype group_id>
>>> -DarchetypeArtifactId=<archetype artifact_id> -DarchetypeVersion=<archetype
>>> version> -DgroupId=<group_id> -Dversion=<version> -DtypeOfIO=<IO type>
>>>
>>> The given example creates the siddhi-extension-io template by passing
>>> the mentioned parameters.
>>>
>>> The structure of a newly generated project template is given below.
>>>
>>> └── siddhi-io-file
>>> ├── component
>>> │   ├── pom.xml
>>> │   └── src
>>> │   ├── main
>>> │   │   ├── java
>>> │   │   │   └── org
>>> │   │   │   └── wso2
>>> │   │   │   └── extension
>>> │   │   │   └── siddhi
>>> │   │   │   └── io
>>> │   │   │   └── file
>>> │   │   │   ├── sink
>>> │   │   │   │   └── FileSink.java
>>> │   │   │   └── source
>>> │   │   │   └── FileSource.java
>>> │   │   └── resources
>>> │   │   └── log4j.properties
>>> │   └── test
>>> │   └── java
>>> │   └── org
>>> │   └── wso2
>>> │   └── extension
>>> │   └── siddhi
>>> │   └── io
>>> │   └── file
>>> │   ├── sink
>>> │   │   └──
>>> TestCaseOfFileSink.java
>>> │ └── source
>>> │   └──
>>> TestCaseOfFileSource.java
>>> └── pom.xml
>>>
>>>
>>> Regards,
>>> Kalaiyarasi Ganeshalingam
>>> Associate Software Engineer| WSO2
>>> WSO2 Inc : http://wso2.org
>>> <http://www.google.com/url?q=http%3A%2F%2Fwso2.org&sa=D&sntz=1&usg=AFQjCNE_eTDfyl2ibPcq0hcXvRDNVuQmMg>
>>> Tel:+94 076 6792895 <+94%2076%20679%202895>
>>> LinkedIn :www.linkedin.com/in/kalaiyarasiganeshalingam
>>> Blogs : http://kalai4.blogspot.com/
>>>
>>
>>
>>
>> --
>>
>> *S. Suhothayan*
>> Associate Director / Architect
>> *WSO2 Inc. *http://wso2.com
>> * <http://wso2.com/>*
>> lean . enterprise . middleware
>>
>>
>> *cell: (+94) 779 756 757 <+94%2077%20975%206757> | blog:
>> http://suhothayan.blogspot.com/ <http://suhothayan.blogspot.com/>twitter:
>> http://twitter.com/suhothayan <http://twitter.com/suhothayan> | linked-in:
>> http://lk.linkedin.com/in/suhothayan <http://lk.linkedin.com/in/suhothayan>*
>>
>
>


-- 

*S. Suhothayan*
Associate Director / Architect
*WSO2 Inc. *http://wso2.com
* <http://wso2.com/>*
lean . enterprise . middleware


*cell: (+94) 779 756 757 | blog: http://suhothayan.blogspot.com/
<http://suhothayan.blogspot.com/>twitter: http://twitter.com/suhothayan
<http://twitter.com/suhothayan> | linked-in:
http://lk.linkedin.com/in/suhothayan <http://lk.linkedin.com/in/suhothayan>*
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] Securing Product Apis and Product artifacts in Stream Processor

2017-10-20 Thread Sriskandarajah Suhothayan
On Thu, Oct 19, 2017 at 5:21 PM, Niveathika Rajendran 
wrote:

> Hi all,
>
> The following points outline the integration of Stream Processor with an
> Identity Provider. Identity Provider Client interface will act as the
> mediator between SP components and Identity Provider. The points are
> divided into to two main parts based on the authentication mechanism.
>
> *Basic Authentication (Only for evaluation of the product)*
>
Why is this only for evaluation of the product?

We have to support Basic Authentication in production too.

The idea is to support this by converting Basic Authentication into the
password grant type and validating the client.
I think you have missed that part. Can you elaborate on that too?
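A rough sketch of that conversion, assuming standard RFC 6749 parameter names (the actual SP interceptor wiring and the token endpoint call are not shown):

```javascript
// Decode a Basic Authentication header and re-express the credentials as
// an OAuth2 password-grant token request body.
function basicToPasswordGrantBody(authorizationHeader) {
  const b64 = authorizationHeader.replace(/^Basic\s+/, '');
  const decoded = Buffer.from(b64, 'base64').toString('utf8');
  const idx = decoded.indexOf(':'); // the password itself may contain ':'
  const username = decoded.slice(0, idx);
  const password = decoded.slice(idx + 1);
  return new URLSearchParams({ grant_type: 'password', username, password }).toString();
}

const header = 'Basic ' + Buffer.from('admin:admin').toString('base64');
console.log(basicToPasswordGrantBody(header));
// grant_type=password&username=admin&password=admin
```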

> 1. User store is maintained in the file system.
> 2. Session management is maintained by the Identity Provider Client
> interface by maintaining the user's login along with a randomly generated
> session id and expiry time.
>
We have to have a file-based user store by default (that's for the
evaluation of the product) and that should work for both Basic
Authentication and OAuth2 Authentication.

Can you update the mail with the correct information?


>
> *OAuth2 Authentication*
>
> 1. Use the Dynamic Client Registration endpoint in the IdP to create the
> service provider dynamically.
> 2. Through the SP dashboard UI, users can request access tokens through
> either the password grant type or the authorization code grant type.
> 3. Session management is maintained through the tokens returned by the
> IdP.
> 4. Users can also access the back-end APIs with either username &
> password or an access token. If the user presents the username &
> password, the interceptor will redirect to the Identity Provider Client's
> token-requesting function; thus, essentially, the user requests a token
> from the IdP. If the user accesses with a token, it will be validated
> through the introspection endpoint of the IdP.
>
>
>
>
> More information on the solution can be found at [1]
>
>
> [1] https://docs.google.com/a/wso2.com/document/d/1vFP_GZcuLzJrk
> RDV3mCfuSDkwC8eKClmp4zt-lUs1Ro/edit?usp=sharing
>
> --
> Best Regards,
> *Niveathika Rajendran,*
> *Software Engineer.*
> *Mobile : +94 077 903 7536 <+94%2077%20903%207536>*
>
>


-- 

*S. Suhothayan*
Associate Director / Architect
*WSO2 Inc. *http://wso2.com
* *
lean . enterprise . middleware


*cell: (+94) 779 756 757 | blog: http://suhothayan.blogspot.com/
twitter: http://twitter.com/suhothayan
 | linked-in:
http://lk.linkedin.com/in/suhothayan *
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] [Analytics] Introducing a common permission model.

2017-10-20 Thread Sriskandarajah Suhothayan
As per the offline discussion, we decided to go ahead with the following
database schema:


*PERMISSIONS*

APP_NAME VARCHAR(3) NOT NULL
PERMISSION_STRING VARCHAR(50) NOT NULL
PRIMARY KEY (APP_NAME, PERMISSION_STRING)


*ROLE_PERMISSIONS*

APP_NAME VARCHAR(3) NOT NULL
PERMISSION_STRING VARCHAR(50) NOT NULL
ROLE_ID VARCHAR(100) NOT NULL

We decided to use a composite key to uniquely identify the permissions, and
to store the role_id instead of the name.


This component will also have a method, hasPermission(username, app name,
permission string), that will get all the roles assigned to the given user
from the IdP client OSGi service (described in mail [1]) and check the
role-permission mapping in the database.
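The check described above can be sketched as follows (the IdP client lookup and the ROLE_PERMISSIONS table are stubbed as in-memory maps; all names here are illustrative, not the actual component API):

```javascript
// rolePermissions mirrors ROLE_PERMISSIONS (role_id -> set of
// "APP_NAME/PERMISSION_STRING"); userRolesFromIdP mirrors what the IdP
// client OSGi service would return for a user.
const rolePermissions = new Map();
const userRolesFromIdP = new Map();

function grantPermission(roleId, appName, permissionString) {
  if (!rolePermissions.has(roleId)) rolePermissions.set(roleId, new Set());
  rolePermissions.get(roleId).add(`${appName}/${permissionString}`);
}

function hasPermission(username, appName, permissionString) {
  const key = `${appName}/${permissionString}`;
  const roles = userRolesFromIdP.get(username) || [];
  // True if any of the user's roles holds the composite grant.
  return roles.some((roleId) => (rolePermissions.get(roleId) || new Set()).has(key));
}

// Usage
userRolesFromIdP.set('alice', ['analytics-admin-id']);
grantPermission('analytics-admin-id', 'DSH', 'dashboard.edit');
console.log(hasPermission('alice', 'DSH', 'dashboard.edit'));   // true
console.log(hasPermission('alice', 'DSH', 'dashboard.delete')); // false
```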

Please update if I have missed anything.

[1] [Architecture] Securing Product Apis and Product artifacts in Stream
Processor

Regards
Suho




On Wed, Oct 18, 2017 at 2:47 PM, Tanya Madurapperuma  wrote:

>
>
> On Wed, Oct 18, 2017 at 2:20 PM, Lasantha Samarakoon 
> wrote:
>
>> ​Where do we maintain the resource to permission mapping? Is it at the
>> common component level or each app has to maintain its own mapping?
>>
>> Resource to permission mapping needs to be maintained at each app level.
>> The common component doesn't need to know about the resources, but only
>> roles and permissions. At the app level we can implement a
>> hasPermission() method which checks whether any of the roles of the
>> current user has the respective permission.
>>
> IMO we should bring that also to the common component. If not, every app
> developer will have to have their own hasPermission method.
> Instead, I think it would be better if we can provide a common API for
> adding resource-permission mappings and a common API to check
> hasPermission. WDYT?
>
> Thanks,
> Tanya
>
>>
>> *Lasantha Samarakoon* | Software Engineer
>> WSO2, Inc.
>> #20, Palm Grove, Colombo 03, Sri Lanka
>> 
>> Mobile: +94 (71) 214 1576 <071%20214%201576>
>> Email:  lasant...@wso2.com
>> Web:www.wso2.com
>>
>> lean . enterprise . middleware
>>
>> On Wed, Oct 18, 2017 at 2:04 PM, Tanya Madurapperuma 
>> wrote:
>>
>>> Hi Lasantha,
>>>
>>> Where do we maintain the resource to permission mapping? Is it at the
>>> common component level or each app has to maintain its own mapping?
>>>
>>> Thanks,
>>> Tanya
>>>
>>> On Wed, Oct 18, 2017 at 1:34 PM, Lasantha Samarakoon >> > wrote:
>>>
 Hi all,

 In the new React-based dashboard component we need to implement a
 permission model based on user roles to limit access to dashboard
 resources. Since this can be a common requirement among all the
 React-based apps under Analytics, we thought of introducing a common
 component to serve the purpose. Therefore we are thinking of adding this
 component to the carbon-analytics repository.

 Implementation:

 As we discussed internally this component will expose an OSGi service
 which provides all the necessary APIs. This includes the following.

- CRUD operations on permissions (i.e. add/edit/delete/get/list
permissions)
- Grant and revoke permissions from particular roles.

 In order to persist permissions following database will be implemented.

 *PERMISSIONS*

 ID INT AUTO_INCREMENT PRIMARY KEY
 APP_NAME VARCHAR(3) NOT NULL
 PERMISSION_STRING VARCHAR(50) NOT NULL



 *ROLE_PERMISSIONS*

 ID INT AUTO_INCREMENT PRIMARY KEY
 PERMISSION_ID INT NOT NULL
 ROLE_NAME VARCHAR(100) NOT NULL


 Since we are not maintaining the roles within this database schema, we
 propose to retrieve them via the SCIM API.

 Appreciate your feedback.


 Regards,

 *Lasantha Samarakoon* | Software Engineer
 WSO2, Inc.
 #20, Palm Grove, Colombo 03, Sri Lanka
 
 Mobile: +94 (71) 214 1576 <071%20214%201576>
 Email:  lasant...@wso2.com
 Web:www.wso2.com

 lean . enterprise . middleware

>>>
>>>
>>>
>>> --
>>> Tanya Madurapperuma
>>>
>>> Associate Technical Lead,
>>> WSO2 Inc. : wso2.com
>>> Mobile : +94718184439 <+94%2071%20818%204439>
>>> Blog : http://tanyamadurapperuma.blogspot.com
>>>
>>
>>
>
>
> --
> Tanya Madurapperuma
>
> Associate Technical Lead,
> WSO2 Inc. : wso2.com
> Mobile : +94718184439 <071%20818%204439>
> Blog : http://tanyamadurapperuma.blogspot.com
>



-- 

*S. Suhothayan*
Associate Director / Architect
*WSO2 Inc. *http://wso2.com
* *
lean . enterprise . middleware


*cell: (+94) 779 756 757 <077%20975%206757> | blog:
http://suhothayan.blogspot.com/ twitter:
http://twitter.com/suhothayan  | linked-in:
http://lk.linkedin.com/in/suhothayan *

Re: [Architecture] 2 Node Minimum HA Pattern for Stream Processor

2017-10-24 Thread Sriskandarajah Suhothayan
The image is missing.

On Sun, Oct 22, 2017 at 3:48 PM, Anoukh Jayawardena  wrote:

> Hi All
>
> This is an update on the WSO2 Stream Processor's 2-node minimum HA
> deployment. While some of the design elements changed during the
> implementation, this email will cover the overall design.
>
> Two SP nodes work in an active-passive configuration where both nodes
> receive and process the same events. But only the active node will publish
> the events. If the active node goes down, the passive node would change its
> role and start publishing events as the active node. When the node that
> went down is restarted, it would act as the passive node ready to change
> roles when needed. For this to work it should be ensured that both nodes
> receive the same events continuously.
>
>
> To ensure at-least-once publishing, the following techniques were adopted:
>
>
> *1. Double State Syncing of Passive Node*
>
> The base of the HA implementation is that both nodes have the same state
> at a given time. For this, when the Passive node starts up it should sync
> up with the Active node. This syncing is done in two user configurable
> ways. i.e. Live Sync enabled or disabled.
>
> When live sync is enabled and a Siddhi application is deployed in the
> passive node, a REST call is made to the Active node to get the snapshot of
> the deployed Siddhi application. Once the snapshot is received and restored
> on the passive node the Siddhi application will be deployed. If such a
> snapshot is not found, the passive node would defer the deployment of the
> Siddhi application for a user configurable time period after which another
> state sync occurs. If yet a snapshot is not received, the Siddhi
> application will be in an inactive state. When live sync is disabled, the
> snapshot of the Siddhi application will be taken from the Active nodes last
> persisted state which is either from the database or the file system.
>
> After the initial syncing of snapshots, the passive node’s sources may not
> connect on time to process events. This means that passive node is not 100%
> in sync with the active node. Hence a second syncing of states happens
> after a user-configured time period from the server start time. The
> snapshot of all Siddhi applications is taken and restored in the passive
> node. Since
> the size of the state may be large, an event queue is implemented in the
> passive node as a solution for the time taken for the snapshot to reach the
> passive node and restore. During the syncing, passive node would queue all
> events and start processing events from where the active node stopped
> processing to take the snapshot. This guarantees that the active and
> passive node process the same amount of events.
> In transferring snapshots using live sync, the snapshots are compressed
> using gzip to reduce the size and time taken for the snapshot to reach the
> passive node.
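Conceptually, the gzip compression step of the live-sync snapshot transfer can be sketched as follows. This is illustrative Python only, not the actual Stream Processor code, and the snapshot payload is made up:

```python
import gzip

# Hypothetical sketch: compress a serialized Siddhi-app snapshot with gzip
# before sending it to the passive node, and decompress it on arrival.
def compress_snapshot(snapshot: bytes) -> bytes:
    # Persisted state tends to be repetitive, so gzip shrinks it well.
    return gzip.compress(snapshot)

def restore_snapshot(payload: bytes) -> bytes:
    return gzip.decompress(payload)

# Illustrative snapshot content (repeated to mimic large, repetitive state).
snapshot = b'{"app": "TemperatureApp", "state": "window-contents"}' * 200
payload = compress_snapshot(snapshot)

assert restore_snapshot(payload) == snapshot   # lossless round trip
assert len(payload) < len(snapshot)            # transfer is smaller
```

The trade-off is CPU time for compression versus network transfer time; for large, repetitive state the reduced payload usually wins.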
>
>
> *2. Periodic output syncing of Passive Node*
>
> Although both active and passive nodes process the same events in a fully
> synced manner, once the active node goes down, the passive node may take
> some time to identify this and start publishing events. This means some
> events may not be published. As a solution, the passive node queues
> all the events that are processed (per event sink). Periodically, the passive
> node trims this queue according to the last processed event of the
> active node. When the passive node identifies that the active node is down,
> it first publishes the events in the queue. This guarantees that
> no events are dropped.
>
> Enabling live sync allows the passive node to periodically ping the
> active node directly to get the timestamp of the last published event
> and use it to trim the queue. When live sync is disabled, the active node
> periodically saves the same information in the database, and the
> passive node periodically reads this value.
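The per-sink queue-and-trim behaviour described above can be sketched as follows. This is illustrative Python, not the actual SP implementation; all names are hypothetical:

```python
from collections import deque

# Hypothetical sketch: the passive node keeps every processed event in a
# per-sink queue, periodically trims it using the timestamp of the last
# event the active node actually published, and flushes the remainder on
# failover.
class PassiveSinkQueue:
    def __init__(self):
        self.queue = deque()  # (timestamp, event) pairs, oldest first

    def enqueue(self, timestamp, event):
        self.queue.append((timestamp, event))

    def trim(self, last_published_ts):
        # Drop events the active node has already published.
        while self.queue and self.queue[0][0] <= last_published_ts:
            self.queue.popleft()

    def flush(self):
        # On failover, publish everything still queued, oldest first.
        return [event for _, event in self.queue]

q = PassiveSinkQueue()
for ts in range(1, 6):
    q.enqueue(ts, f"event-{ts}")

q.trim(3)  # active node last published the event at ts=3
assert q.flush() == ["event-4", "event-5"]  # only unpublished events remain
```

On failover the flush may re-send a few events the active node published after the last trim, which is exactly the at-least-once guarantee the design aims for.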
>
> Find the user story of the implementation at [1], which has details about
> the configuration of the two-node minimum HA.
>
> [1] https://redmine.wso2.com/issues/6724
>
> Thanks
> Anoukh
>
>
> On Fri, Aug 18, 2017 at 2:01 PM, Anoukh Jayawardena 
> wrote:
>
>> +architecture
>>
>>
>> On Thu, Aug 17, 2017 at 3:24 PM, Anoukh Jayawardena 
>> wrote:
>>
>>> Hi All,
>>>
>>> This is a high-level overview of the two-node minimum high availability
>>> (HA) deployment feature for the Stream Processor (SP). The implementation
>>> adopts an active-passive approach with periodic state persistence. The
>>> process flow of this feature is as follows.
>>>
>>> *Prerequisites*
>>>
>>>- 2 SP workers: one is the active worker while the other is
>>>the passive. Both nodes should have the same Siddhi applications
>>>deployed.
>>>- A specified RDBMS or file location for periodic state persistence
>>>of Siddhi App states.
>>>- A running zookeeper service or RDBMS instance for coordination
>>>among the two nodes (Will 

Re: [Architecture] [APIM 3.0.0] & [SP 4.0.0] siddhi-store-cassandra implementation

2017-10-28 Thread Sriskandarajah Suhothayan
With Stream Processor 4.0, we have added support for RDBMS, HBase, MongoDB,
and Solr.
Cassandra support is still a work in progress.

As of now, I don't see a reason why you can't use Cassandra with APIM 3.0.

When APIM analytics related scripts and Cassandra Store implementation are
ready we will be able to validate them
against Cassandra and give recommendations.

Regards
Suho


On Sun, Oct 29, 2017 at 9:30 AM, Lakmal Warusawithana 
wrote:

> Adding Suho
>
> On Sat, Oct 28, 2017 at 12:58 PM, Youcef HILEM 
> wrote:
>
>> Hi,
>>
>> I am studying the architecture of APIM 3.0.0 and I am preparing the
>> qualification environment for this next release.
>>
>> Among the APIM 3.0.0 components, there is WSO2 APIM Data Analytics Server
>> 3.0.0 that relies on SP.
>>
>> My question: can we expect an implementation of siddhi-store-cassandra
>> (https://github.com/wso2-extensions/siddhi-store-cassandra) ?
>>
>> Our future directions consist of using the NoSQL Cassandra database for
>> these use cases.
>> Our infrastructure is Cassandra ready.
>>
>> Thanks,
>> Youcef HILEM
>>
>>
>>
>> --
>> Sent from: http://wso2-oxygen-tank.10903.n7.nabble.com/WSO2-Architectur
>> e-f62919.html
>> ___
>> Architecture mailing list
>> Architecture@wso2.org
>> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>>
>
>
>
> --
> Lakmal Warusawithana
> Senior Director - Cloud Architecture; WSO2 Inc.
> Mobile : +94714289692 <071%20428%209692>
> Blogs : https://medium.com/@lakwarus/
> http://lakmalsview.blogspot.com/
>
>
>


-- 

*S. Suhothayan*
Associate Director / Architect
*WSO2 Inc. *http://wso2.com
* *
lean . enterprise . middleware


*cell: (+94) 779 756 757 | blog: http://suhothayan.blogspot.com/
twitter: http://twitter.com/suhothayan
 | linked-in:
http://lk.linkedin.com/in/suhothayan *


Re: [Architecture] [Dev] [VOTE] Release WSO2 Stream Processor 4.0.0 RC2

2017-12-21 Thread Sriskandarajah Suhothayan
Hi Chandana,

We couldn't reproduce this on Windows 10 or 7. Can you please give us more
information about the environment, and also please make sure the pack you
have downloaded is not corrupted.

Thanks

On Fri, Dec 22, 2017 at 9:49 AM Chandana Napagoda 
wrote:

> Hi Eranga,
>
> I tested above without having the snappy jar.
>
> Regards,
> Chandana
>
> On 22 December 2017 at 14:40, Eranga Liyanage  wrote:
>
>> Hi Chandana,
>>
>> Could you please test without snappy.
>>
>> Best regards,
>> Eranga.
>>
>> On 22 Dec 2017 9:20 am, "Rukshani Weerasinha"  wrote:
>>
>>> Hello Chandana,
>>>
>>> The team mentioned that we do not need Snappy Java for SP. Therefore, I
>>> removed the section to install and add it.
>>>
>>> Best Regards,
>>> Rukshani.
>>>
>>> On Fri, Dec 22, 2017 at 9:09 AM, Rukshani Weerasinha 
>>> wrote:
>>>
 Hello Chandana,

 I will check with the team where this jar should be copied and update
 the instructions accordingly. Thank you for pointing it out.

 Best Regards,
 Rukshani.

 On Fri, Dec 22, 2017 at 8:44 AM, Chandana Napagoda >>> > wrote:

>
>
> On 22 December 2017 at 13:30, Rukshani Weerasinha 
> wrote:
>
>> Hi Chandana,
>>
>> Instructions to install and set up Snappy Java are on the page
>> [1] you shared, under the sub-heading *Installing and setting up
>> snappy-java*.
>>
>
> I can't find any folder called "repository" under the (*Copy
> the jar to \repository\components\lib*)
>
>
>> Best Regards,
>> Rukshani.
>>
>> On Fri, Dec 22, 2017 at 8:14 AM, Chandana Napagoda <
>> cnapag...@gmail.com> wrote:
>>
>>> -1, Unable to start Stream Processor Studio on the Windows
>>> machine [1][2]. It was hanging on the below step for more than 20
>>> minutes.
>>>
>>> Also, it seems the "Installing on Windows" doc [1] is outdated; I can't
>>> find any place to copy the snappy-java jar file.
>>>
>>>
>>> [screenshot attachment]
>>>
>>> [1]. https://docs.wso2.com/display/SP400/Installing+on+Windows
>>> [2]. https://docs.wso2.com/display/SP400/Running+the+Product
>>>
>>> Regards,
>>> Chandana
>>>
>>> On 22 December 2017 at 09:59, SajithAR Ariyarathna <
>>> sajit...@wso2.com> wrote:
>>>
 Hi Devs,

 We are pleased to announce the release candidate of WSO2 Stream
 Processor 4.0.0.

 This is the Release Candidate version 2 of the WSO2 Stream
 Processor 4.0.0

 Please download, test the product and vote. Vote will be open for
 72 hours or as needed.

 Known issues: https://github.com/wso2/product-sp/issues

 Source and binary distribution files:
 https://github.com/wso2/product-sp/releases/tag/v4.0.0-RC2

 The tag to be voted upon:
 https://github.com/wso2/product-sp/tree/v4.0.0-RC2

 Please vote as follows.
 [+] Stable - go ahead and release
 [-] Broken - do not release (explain why)

 ~ The WSO2 Analytics Team ~
 Thanks.

 --
 Sajith Janaprasad Ariyarathna
 Senior Software Engineer; WSO2, Inc.;  http://wso2.com/
 

 ___
 Dev mailing list
 d...@wso2.org
 http://wso2.org/cgi-bin/mailman/listinfo/dev


>>>
>>>
>>> --
>>>
>>> Blog: http://blog.napagoda.com
>>> Linkedin: https://www.linkedin.com/in/chandananapagoda/
>>>
>>>
>>>
>>>
>>
>>
>> --
>> Rukshani Weerasinha
>>
>> WSO2 Inc.
>> Web:http://wso2.com
>> Mobile: 0777 683 738
>>
>>
>
>
> --
>
> Blog: http://blog.napagoda.com
> Linkedin: https://www.linkedin.com/in/chandananapagoda/
>
>


 --
 Rukshani Weerasinha

 WSO2 Inc.
 Web:http://wso2.com
 Mobile: 0777 683 738


>>>
>>>
>>> --
>>> Rukshani Weerasinha
>>>
>>> WSO2 Inc.
>>> Web:http://wso2.com
>>> Mobile: 0777 683 738
>>>
>>>
>>>
>>>
>
>
> --
>
> Blog: http://blog.napagoda.com
> Linkedin: https://www.linkedin.com/in/chandananapagoda/
>
>

Re: [Architecture] [DAS][Feature] Disable all scheduled spark scripts temporarily

2018-01-05 Thread Sriskandarajah Suhothayan
How are we handling this currently? Are we deleting all and adding back?

Regards
Suho

On Fri, Jan 5, 2018 at 2:12 PM, Nirmal Fernando  wrote:

> Hi All,
>
> I believe providing a feature to disable all scheduled spark scripts
> temporarily in one click via the admin console of the WSO2 Data Analytics
> Server will be a handy feature to have especially when it comes to
> troubleshooting a DAS deployment. Of course, we need to be able to
> re-enable them with one click. Wdyt?
>
> --
>
> Thanks & regards,
> Nirmal
>
> Technical Lead, WSO2 Inc.
> Mobile: +94715779733 <071%20577%209733>
> Blog: http://nirmalfdo.blogspot.com/
>
>
>


-- 

*S. Suhothayan*
Director
*WSO2 Inc. *
http://wso2.com  


*cell: (+94) 779 756 757 | blog: http://suhothayan.blogspot.com/
twitter: http://twitter.com/suhothayan
 | linked-in:
http://lk.linkedin.com/in/suhothayan *


Re: [Architecture] [DAS][Feature] Disable all scheduled spark scripts temporarily

2018-01-05 Thread Sriskandarajah Suhothayan
Then +1 for the feature.


On Fri, Jan 5, 2018 at 2:53 PM, Nirmal Fernando  wrote:

> Currently, there's no way to disable all the scripts, temporarily. There's
> delete option and that's also not practical when the scripts are deployed
> via a CApp.
>
> On Fri, Jan 5, 2018 at 2:47 PM, Sriskandarajah Suhothayan 
> wrote:
>
>> How are we handling this currently? Are we deleting all and adding back?
>>
>> Regards
>> Suho
>>
>> On Fri, Jan 5, 2018 at 2:12 PM, Nirmal Fernando  wrote:
>>
>>> Hi All,
>>>
>>> I believe providing a feature to disable all scheduled spark scripts
>>> temporarily in one click via the admin console of the WSO2 Data Analytics
>>> Server will be a handy feature to have especially when it comes to
>>> troubleshooting a DAS deployment. Of course, we need to be able to
>>> re-enable them with one click. Wdyt?
>>>
>>> --
>>>
>>> Thanks & regards,
>>> Nirmal
>>>
>>> Technical Lead, WSO2 Inc.
>>> Mobile: +94715779733 <071%20577%209733>
>>> Blog: http://nirmalfdo.blogspot.com/
>>>
>>>
>>>
>>
>>
>> --
>>
>> *S. Suhothayan*
>> Director
>> *WSO2 Inc. *
>> http://wso2.com  <http://wso2.com/>
>>
>>
>> *cell: (+94) 779 756 757 <+94%2077%20975%206757> | blog:
>> http://suhothayan.blogspot.com/ <http://suhothayan.blogspot.com/>twitter:
>> http://twitter.com/suhothayan <http://twitter.com/suhothayan> | linked-in:
>> http://lk.linkedin.com/in/suhothayan <http://lk.linkedin.com/in/suhothayan>*
>>
>
>
>
> --
>
> Thanks & regards,
> Nirmal
>
> Technical Lead, WSO2 Inc.
> Mobile: +94715779733 <071%20577%209733>
> Blog: http://nirmalfdo.blogspot.com/
>
>
>




Re: [Architecture] [Dev] [VOTE] Release WSO2 Stream Processor 4.1.0 RC1

2018-03-14 Thread Sriskandarajah Suhothayan
-1
On Mac, the graphics of the editor event flow are not rendering correctly.

Please check the attached image.

On Thu, Mar 15, 2018 at 12:46 AM, Damith Wickramasinghe 
wrote:

> - sup dev
> + dev
>
> On Thu, Mar 15, 2018 at 12:05 AM, Damith Wickramasinghe 
> wrote:
>
>> Hi Devs,
>>
>> We are pleased to announce the release candidate of WSO2 Stream Processor
>> 4.1.0.
>>
>> This is the Release Candidate version 1 of the WSO2 Stream Processor 4.1.
>> 0
>>
>> Please download, test the product and vote. Vote will be open for 72
>> hours or as needed.
>>
>> Known issues: https://github.com/wso2/product-sp/issues
>>
>> Source and binary distribution files: https://github.com/wso2/produc
>> t-sp/releases/tag/v4.1.0-RC1
>>
>> The tag to be voted upon: https://github.com/wso2/product-sp/tree/v4.1.0-
>> RC1
>>
>> Please vote as follows.
>> [+] Stable - go ahead and release
>> [-] Broken - do not release (explain why)
>>
>> ~ The WSO2 Analytics Team ~
>> Thanks.
>>
>>
>> --
>> Senior Software Engineer
>> WSO2 Inc.; http://wso2.com
>> 
>> lean.enterprise.middleware
>>
>> mobile: *+94728671315 <+94%2072%20867%201315>*
>>
>>
>
>
> --
> Senior Software Engineer
> WSO2 Inc.; http://wso2.com
> 
> lean.enterprise.middleware
>
> mobile: *+94728671315 <072%20867%201315>*
>
>
>
>




Re: [Architecture] [APIM][Micro-Gateway][Analytics] Analytics for Micro-gateway

2018-07-03 Thread Sriskandarajah Suhothayan
IMO, uploading the files to any of the DAS nodes and letting it share them
among its HA cluster seems to be the correct approach.
Here we can set the "events duplicated in cluster" property to false for all
the DAS receivers.
Doing so, events will be shared and processed on both DAS HA nodes.

On Tue, Jul 3, 2018 at 10:18 AM Fazlan Nazeem  wrote:

> Hi Sajith,
>
> The Gateway will not have access to any of the databases. Therefore it
> will use the filesystem to temporarily store the zip file until it is
> uploaded to the Analytics server.
>
> On Tue, Jul 3, 2018 at 8:24 AM Sajith Perera  wrote:
>
>>
>> Are we buffering the zip file in the micro GW if the analytics server is
>> not available? In such cases, what if we give an option for the micro GW to
>> persist the zip file directly to the database, as the analytics server is
>> anyway reading the data from the database?
>>
>> Regards,
>> SajithD
>>
>> On Tue, Jul 3, 2018 at 2:47 AM Tishan Dahanayakage 
>> wrote:
>>
>>> Sorry for the delayed reply Nuwan. I was traveling.
>>>
>>> On Fri, Jun 29, 2018 at 9:19 AM, Nuwan Dias  wrote:
>>>


 On Fri, Jun 29, 2018 at 4:43 PM Tishan Dahanayakage 
 wrote:

> Hi Dinusha,
>
> On Fri, Jun 29, 2018, 4:37 PM Dinusha Dissanayake 
> wrote:
>
>> Hi Tishan,
>>
>>>
> One more thing. Can't we just save these zip files to file system
> rather than stressing STATS_DB. We use STATS_DB mainly to store end
> analytics data which is used by presentation layer(Dashboards). WDYT?
>
 This would be problematic in HA deployment. If we keep them in the
 file system and if a node goes down, we won't be able to retrieve  the
 event data in files in that node.

>>> ​That we can solve by publishing to both DAS nodes from GW. Even
>>> earlier I was discussing with Fazlan to avoid adding file to DB by using
>>> file tail adaptor but later reverted due to zip files. But given that we
>>> are now using custom adaptor we can use files :)
>>>
>> If we publish to both DAS nodes, then the files would be available in
>> both nodes. When event publishing is happening by reading those files, 
>> the
>> same file will be processed from both the nodes right? :)
>> Then the same events will be accumulated twice as I see.
>>
> No that is handled by HA implementation.
>

 Didn't get that part. What do you mean by "handled by HA
 implementation"?

>>>
>>> ​When we configure DAS in HA mode, both receivers can receive same event
>>> and yet active node will do the presentation part.​
>>>

 Another question is, how does the gateways know the number of DAS nodes
 to upload to? In a HA scenario, the gateway will only see the LB URL
 (because DAS will be proxied via an LB). In that case the gateway only
 uploads to the LB url, it has no idea how many DAS nodes are behind that LB
 and it doesn't need to know as well.

>>> ​Yeah if DAS is proxied via LB then publishing to both nodes is an
>>> issue. Then file based solution becomes obsolete as we can't share files
>>> in-between nodes
>>>
>>> /Tishan
>>>

 To me it sounds like the problems we may have to solve by persisting to
 local file systems in each DAS node are much more severe than the overhead
 that gets added to the DB. Because in reality, each gateway will only
 upload these files like once every 15 minutes. So in a system with 1
 gateway, we're introducing just 1 additional DB read/write per every 15
 minutes. Yes, it increases with the number of gateways in the system, in
 which case we may have to reduce the upload frequency.



> /Tishan
>
>>
>>> /Tishan
>>>
 ​
 ​

>>>
>
 /Tishan
>
> On Fri, Jun 29, 2018 at 2:42 PM, Tishan Dahanayakage <
> tis...@wso2.com> wrote:
>
>> Fazlan,
>>
>> On Fri, Jun 29, 2018 at 2:17 PM, Fazlan Nazeem 
>> wrote:
>>
>>> Hi all,
>>>
>>> At the moment, analytics for microgateway is supported via a
>>> JAX-RS web app and a custom component which are deployed in APIM 
>>> publisher
>>> node. The component was responsible for publishing the analytics 
>>> data
>>> persisted in a DB table to the Analytics server via thrift. As an
>>> improvement for this, we have planned to move the web app to 
>>> Analytics
>>> server and process the events within itself which will remove the 
>>> overhead
>>> of publishing data via thrift. The micro-gateways will then upload 
>>> the zip
>>> files with analytics data directly to the analytics server so that 
>>> we can
>>> eliminate an unnecessary network hop.
>>>
>>> For this, we have developed a working prototype which follows
>>> the following de

Re: [Architecture] [Siddhi] [Extension] Running select queries with join capabilities on dataource

2018-07-04 Thread Sriskandarajah Suhothayan
That should be possible if you are using the RDBMS datastore.
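As an illustration, a Siddhi app can join a stream against an RDBMS-backed table using the standard siddhi-store-rdbms `@store` annotation. The datasource, stream, and attribute names below are made up for the example:

```sql
@store(type = 'rdbms', datasource = 'SAMPLE_DB')
define table CustomerTable (customerId string, name string);

define stream OrderStream (orderId string, customerId string, amount double);

-- Enrich each incoming order with the customer name from the RDBMS table.
@info(name = 'EnrichOrders')
from OrderStream as o
join CustomerTable as c
    on o.customerId == c.customerId
select o.orderId, c.name, o.amount
insert into EnrichedOrderStream;
```

Here the join condition is pushed down to the RDBMS by the store implementation, so the lookup runs as an SQL query rather than an in-memory scan.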

On Wed, Jul 4, 2018 at 2:04 PM Silmy Hasan  wrote:

> Hi Niveathika,
>
> Can we write this type of join queries when creating custom widgets(in
> widgetConf.json)?
>
> Shilmy Hasan
> Associate Software Engineer | WSO2
>
> E-mail :si...@wso2.com
> Phone :0779188653
> web : http://www.wso2.com
>
> [image: https://wso2.com/signature] 
>
> On Wed, Jul 4, 2018 at 1:06 PM, Niveathika Rajendran 
> wrote:
>
>> Hi all,
>>
>> This extension is added to siddhi-store-rdbms repo with the PR[1]
>>
>> [1] https://github.com/wso2-extensions/siddhi-store-rdbms/pull/91
>>
>> Best Regards,
>> *Niveathika Rajendran,*
>> *Software Engineer.*
>> *Mobile : +94 077 903 7536*
>>
>>
>>
>>
>>
>> On Tue, Jun 26, 2018 at 3:11 PM Niveathika Rajendran 
>> wrote:
>>
>>> Hi all,
>>>
>>> Please find the code review notes here[1].
>>>
>>> [1] Updated invitation: [Code Review] Siddhi-execution-rdbms @ Tue Jun
>>> 26, 2018 1pm - 2pm (IST) (Analytics Group)
>>>
>>> Best Regards,
>>> *Niveathika Rajendran,*
>>> *Software Engineer.*
>>> *Mobile : +94 077 903 7536*
>>>
>>>
>>>
>>>
>>>
>>> On Tue, Jun 26, 2018 at 3:03 PM Niveathika Rajendran <
>>> niveath...@wso2.com> wrote:
>>>
 Hi all,

 As per the offline discussion, the functionality of the extension will
 be changed as follows,
 1. Running retrieval queries

> #rdbms:query('<datasource>', <query>, <attribute definitions>)


 2. Running CUD queries (INSERT, UPDATE, DELETE)

> #rdbms:cud('<datasource>', <query>)

 *This extension function should be enabled through configuration
> ('perform.cud.operation': true); it is disabled by default.*


 Moreover, in both functions, queries will be validated to prevent users from
 performing DROP/CREATE/ALTER operations.
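A minimal sketch of this kind of validation (rejecting DROP/CREATE/ALTER, and restricting cud() queries to INSERT/UPDATE/DELETE) could look like the following. This is illustrative Python only, not the extension's actual code:

```python
import re

# Hypothetical validation sketch: block schema-changing statements, and
# only allow INSERT/UPDATE/DELETE through the cud() function.
FORBIDDEN = re.compile(r"\b(DROP|CREATE|ALTER)\b", re.IGNORECASE)
CUD_ALLOWED = re.compile(r"^\s*(INSERT|UPDATE|DELETE)\b", re.IGNORECASE)

def validate_query(sql: str) -> bool:
    # Reject any query containing DROP/CREATE/ALTER keywords.
    return not FORBIDDEN.search(sql)

def validate_cud(sql: str) -> bool:
    # CUD queries must additionally start with INSERT, UPDATE, or DELETE.
    return validate_query(sql) and bool(CUD_ALLOWED.match(sql))

assert validate_query("SELECT name FROM customers")
assert not validate_query("DROP TABLE customers")
assert validate_cud("UPDATE customers SET age = 30 WHERE id = 1")
assert not validate_cud("CREATE TABLE t (id INT)")
```

A keyword check like this is only a first line of defence; database-level permissions on the datasource user remain the reliable way to prevent schema changes.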

 Best Regards,
 *Niveathika Rajendran,*
 *Software Engineer.*
 *Mobile : +94 077 903 7536*





 On Mon, Jun 25, 2018 at 11:30 AM Niveathika Rajendran <
 niveath...@wso2.com> wrote:

> Hi Dilini,
>
> For now, we are supporting the following siddhi data types,
>
>
> Siddhi Datatype   JDBC Datatype   SQL Datatype
> STRING            STRING          CHAR, VARCHAR, LONGVARCHAR
> INT               INT             INTEGER
> LONG              LONG            BIGINT
> DOUBLE            DOUBLE          DOUBLE
> FLOAT             FLOAT           REAL
> BOOLEAN           BOOLEAN         BIT
>
> Best Regards,
> *Niveathika Rajendran,*
> *Software Engineer.*
> *Mobile : +94 077 903 7536*
>
>
>
>
>
> On Fri, Jun 22, 2018 at 1:10 PM Dilini Muthumala 
> wrote:
>
>> Hi Niveathika,
>>
>> How does a user supposed to map the data types from RDBMS types to
>> Siddhi types? Any guidance we could provide?
>>
>> Thanks,
>> Dilini
>>
>> On Fri, Jun 22, 2018 at 1:02 PM Niveathika Rajendran <
>> niveath...@wso2.com> wrote:
>>
>>> Hi Damith,
>>>
>>> We are only verifying that the query performs only a select
>>> operation; other than that, we are not doing any validation on the SQL
>>> query defined in the function. In essence, the user can define any joins
>>> he/she wants.
>>>
>>> The query I have added in the user story is a sample, and contains
>>> 'INNER JOIN'.
>>>
>>> Best Regards,
>>> *Niveathika Rajendran,*
>>> *Software Engineer.*
>>> *Mobile : +94 077 903 7536*
>>>
>>>
>>>
>>>
>>>
>>> On Fri, Jun 22, 2018 at 12:55 PM Damith Wickramasinghe <
>>> dami...@wso2.com> wrote:
>>>
 + architecture

 Hi Niveathika ,

 What types of joins are we supporting? Is it only INNER JOIN
 for now?

 Thanks,
 Damith


 On Fri, Jun 22, 2018 at 7:17 AM, Niveathika Rajendran <
 niveath...@wso2.com> wrote:

> Hi all,
>
> I am currently working on $subject for Siddhi which will enable a
> Siddhi developer to run select queries with table joins on the 
> database.
>
> The syntax of the execution extension will be as follows,
>
> *rdbms:query(<datasource>, <query>, <stream definition (attributeName attributeType)>)*
>
> Please find the user story here [1].
>
> [1]
> https://docs.google.com/document/d/1mPySJKVy8-Wq3L8o6PJmtKhyY-jMjN72aLuyVivW_lc/edit?usp=sharing
>
> Best Regards,
> *Niveathika Rajendran,*
> *Software Engineer.*
> *Mobile : +94 077 903 7536*
>
>
>
> --
> You received this message because you are subscribed to the Google
> Groups "WSO2 Engineering Group" group.
> To unsubscribe from this group and stop receiving emails from it,
> send an email to engineering-group+unsubscr...@wso2.com.
> For more options, visit
> https://groups.google.com/a/wso2.com/d/optout.
>



 --
 Senior

Re: [Architecture] [SP] [Editor] Export Siddhi App to Worker/Manager

2018-11-27 Thread Sriskandarajah Suhothayan
IMO, Deploy should be a top-level menu item, as it is a different lifecycle
stage of the project.
Under that we should have "Deploy to Server", and the "Deploy to Server"
dialog box should have a list of server URLs; the user should be able to
select one or more and deploy. This can show all the workers or
managers specified in the deployment.yaml file.
If needed, we can also give an option to add servers using an "Add Server"
button or something like that.
If a server is added in this way, it will also be shown along with the servers
coming from the deployment.yaml file, and these can be deleted by clicking
the delete button next to them when they are listed. But the ones coming
from the deployment.yaml file cannot be deleted.

Regards
Suho

On Tue, Nov 27, 2018 at 3:18 PM Lasith Manaram  wrote:

> Hi all,
>
> This is regarding the project I'm currently working on. The idea is to
> export a siddhi app to worker/manager through the editor.
>
> Up to now I have developed the UI. There is a "Deploy File" option under
> the file menu [1]. When you select that option, the "Deploy to Worker"
> dialog box pops up [2].
>
> If the editor and the worker are on different instances, you have to select
> the "Advanced" options [3], and there you have to enter the host name, port,
> user name and password respectively.
>
> Apart from that, we can support providing worker/manager details in the
> deployment.yaml and populating them under the advanced options in the editor.
>
> Furthermore, there is a suggestion to use a URL instead of host and port.
>
> Your valuable suggestions are highly appreciated.
>
> [1] Deploy File
> 
> [2] Dialog Box
> 
> [3] Advanced Options
> 
>
> Thank you.
>
>
>
> --
>
> *Lasith Jayalath *
> *Software engineering Intern*
> WSO2  (University of Moratuwa)
> *mobile *: *+94 716331505* |   *email *:  lasi...@wso2.com
>


-- 
*S. Suhothayan* | Director | WSO2 Inc. 
(m) (+94) 779 756 757 | (e) s...@wso2.com | (t) @suhothayan

GET INTEGRATION AGILE
Integration Agility for Digitally Driven Business


Re: [Architecture] Data Isolation level for Data from APIM and IoT? Tenant vs. User

2016-03-25 Thread Sriskandarajah Suhothayan
Hi

User-level isolation is needed for the IoT server, as in the IoT server
context a user registers a device and uses it; hence he/she should only be
able to see his/her own devices, and not any other user's devices or data.
@Pabath & Sumedha correct me if I'm wrong.

Regards
Suho

On Fri, Mar 25, 2016 at 9:02 AM, Srinath Perera  wrote:

> For the data published from APIM and IoT servers, what kind of isolation
> do we need?
>
> Option 1: Tenant level - DAS already has this. However, this means that
> multiple users (e.g. publishers, subscribers, or IoT users) can see other
> people's stats of they are in the same tenant
>
> Option 2: User level - DAS does not have this concept yet.
>
> Also a related question is that if user add their own queries, at what
> level they are isolated.
>
> --Srinath
>
> --
> 
> Blog: http://srinathsview.blogspot.com twitter:@srinath_perera
> Site: http://home.apache.org/~hemapani/
> Photos: http://www.flickr.com/photos/hemapani/
> Phone: 0772360902
>



-- 

*S. Suhothayan*
Technical Lead & Team Lead of WSO2 Complex Event Processor
*WSO2 Inc. *http://wso2.com
* *
lean . enterprise . middleware


*cell: (+94) 779 756 757 | blog: http://suhothayan.blogspot.com/
twitter: http://twitter.com/suhothayan
 | linked-in:
http://lk.linkedin.com/in/suhothayan *


Re: [Architecture] [analytics-esb] Summary Stat Generation Mechanism

2016-04-19 Thread Sriskandarajah Suhothayan
I think it will make more sense to run the seconds and minutes summarization
in Siddhi, and run the Spark script every hour; when there is a lot of data in
the system this will be much more scalable.
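This split can be sketched conceptually as follows (plain Python, not WSO2 code; all names are made up): a streaming layer maintains per-second aggregates, so the hourly batch pass touches at most 3600 rows per key instead of every raw event:

```python
from collections import defaultdict

# Hypothetical sketch of two-tier summarization: the streaming (Siddhi-like)
# layer pre-aggregates raw events into per-second buckets, and the batch
# (Spark-like) layer rolls those buckets up to hourly stats.
def per_second_rollup(events):
    # events: (epoch_seconds, latency_ms) tuples from the stream
    buckets = defaultdict(lambda: [0, 0.0])  # second -> [count, total]
    for ts, latency in events:
        b = buckets[ts]
        b[0] += 1
        b[1] += latency
    return dict(buckets)

def hourly_rollup(second_buckets):
    # Batch pass over the much smaller per-second table.
    hours = defaultdict(lambda: [0, 0.0])
    for ts, (count, total) in second_buckets.items():
        h = hours[ts // 3600]
        h[0] += count
        h[1] += total
    return {h: (c, t / c) for h, (c, t) in hours.items()}  # hour -> (count, avg)

events = [(10, 5.0), (10, 7.0), (11, 9.0), (3700, 4.0)]
seconds = per_second_rollup(events)
assert seconds[10] == [2, 12.0]

hours = hourly_rollup(seconds)
assert hours[0] == (3, 7.0)   # hour 0: three events averaging (5+7+9)/3
assert hours[1] == (1, 4.0)
```

The scalability gain comes from bounding the batch job's input: however high the raw TPS, the hourly pass only ever sees the pre-aggregated per-second rows.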

WDYT?

Regards
Suho

On Wed, Apr 20, 2016 at 11:50 AM, Supun Sethunga  wrote:

> Hi,
>
> This is a follow-up mail of [1], to give an update on the status with the
> performance issue [2] . So as mentioned in the previous mail, with
> Spark-script doing the summary stat generation as a batch process, creates
> a bottleneck at a higher TPS. More precisely, with our findings, it cannot
> handle a throughput of more than 30 TPS as a batch process. (i.e: events
> published to DAS within 10 mins with a TPS of 30, take more than 10 mins to
> process. Means, if we schedule a script every 10 mins, the events to be
> processed grows over time).
>
> To overcome this, thought of doing the summarizing up to a certain extent
> (upto second-wise summary) using siddhi, and to generate remaining
> stats (per-minute/hour/day/month), using spark. With this enhancement, ran
> some load tests locally to evaluate this approach, and the results are as
> follows.
>
> Backend DB : MySQL
> ESB analytics nodes: 1
>
>  With InnoDB
>
>- With *80 TPS*: (script scheduled every 1 min) : Avg time taken for
>completion of  the script  = ~ *20 sec*.
>- With* 500 TPS* (script scheduled every 2 min) : Avg time taken for
>completion of  the script  = ~ *45 sec*.
>
>
> With MyISAM
>
>- With *80 TPS* (script scheduled every 1 min) : Avg time taken for
>completion of  the script  = ~ *24 sec*.
>- With *80 TPS *(script scheduled every 2 min) : Avg time taken for
>completion of  the script  = ~ *20 sec*.
>- With *500 TPS* (script scheduled every 2 min) : Avg time taken for
>completion of  the script  = ~ *35 sec*.
>
> As a further improvement, we would be trying out to do summarizing upto
> minute/hour level (eventually do all the summarizing using siddhi).
>
> [1] [Dev] ESB Analytics - Verifying the common production use cases
> [2] https://wso2.org/jira/browse/ANLYESB-15
>
> Thanks,
> Supun
>
> --
> *Supun Sethunga*
> Software Engineer
> WSO2, Inc.
> http://wso2.com/
> lean | enterprise | middleware
> Mobile : +94 716546324
>
>
>




Re: [Architecture] [analytics-esb] Summary Stat Generation Mechanism

2016-04-20 Thread Sriskandarajah Suhothayan
+1 for the approach. Lets test it and see.

Regards
Suho

On Wed, Apr 20, 2016 at 12:30 PM, Anjana Fernando  wrote:

> Hi,
>
> Good progress Supun! .. do keep pushing the parameters to find the limits
> we can go to.
>
> @Suho, the idea was to altogether eliminate the batch script and just
> store/index the data for later lookup, and do the computation purely in
> Siddhi. I don't think we will get a big scaling problem, since the data that
> needs to be stored in memory when we go to upper layers of summarization is
> smaller, and stops at yearly granularity. At that point we would be holding
> in memory the last year's worth of data, in the form of the last 12 records
> of summary data for 12 months for a specific artifact, the last day's worth,
> which is 30 entries, etc. So the growth of data slows immensely, and it also
> has an upper limit, which I guess should fit comfortably within usual memory
> capacity.
>
> So if we can get a proper checkpoint and replay mechanism figured out for
> the data processed, we can do everything in CEP; then we just don't have the
> complexity of maintaining two mechanisms of doing the processing.
>
> Cheers,
> Anjana.
>
> On Wed, Apr 20, 2016 at 12:11 PM, Sriskandarajah Suhothayan  > wrote:
>
>> I think it will make more sense to run seconds and minutes from siddhi,
>> and run the spark every hour, when there are lots of date on the system
>> this will be much more scalable.
>>
>> WDYT?
>>
>> Regards
>> Suho
>>
>> On Wed, Apr 20, 2016 at 11:50 AM, Supun Sethunga  wrote:
>>
>>> Hi,
>>>
>>> This is a follow-up mail of [1], to give an update on the status with
>>> the performance issue [2] . So as mentioned in the previous mail, with
>>> Spark-script doing the summary stat generation as a batch process, creates
>>> a bottleneck at a higher TPS. More precisely, with our findings, it cannot
>>> handle a throughput of more than 30 TPS as a batch process. (i.e: events
>>> published to DAS within 10 mins with a TPS of 30, take more than 10 mins to
>>> process. Means, if we schedule a script every 10 mins, the events to be
>>> processed grows over time).
>>>
>>> To overcome this, thought of doing the summarizing up to a certain
>>> extent (upto second-wise summary) using siddhi, and to generate remaining
>>> stats (per-minute/hour/day/month), using spark. With this enhancement, ran
>>> some load tests locally to evaluate this approach, and the results are as
>>> follows.
>>>
>>> Backend DB : MySQL
>>> ESB analytics nodes: 1
>>>
>>>  With InnoDB
>>>
>>>- With *80 TPS*: (script scheduled every 1 min) : Avg time taken for
>>>completion of  the script  = ~ *20 sec*.
>>>- With* 500 TPS* (script scheduled every 2 min) : Avg time taken for
>>>completion of  the script  = ~ *45 sec*.
>>>
>>>
>>> With MyISAM
>>>
>>>- With *80 TPS* (script scheduled every 1 min) : Avg time taken for
>>>completion of  the script  = ~ *24 sec*.
>>>- With *80 TPS *(script scheduled every 2 min) : Avg time taken for
>>>completion of  the script  = ~ *20 sec*.
>>>- With *500 TPS* (script scheduled every 2 min) : Avg time taken for
>>>completion of  the script  = ~ *35 sec*.
>>>
>>> As a further improvement, we will try doing the summarization up to the
>>> minute/hour level (and eventually do all the summarization using Siddhi).
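The two-tier idea, fine-grained buckets maintained incrementally and rolled up into coarser ones by the batch stage, can be sketched as follows (a plain-code simplification, not the actual Siddhi/Spark implementation; class and method names are mine):

```javascript
// Maintain per-second counts incrementally, then roll them up per minute.
class IncrementalSummarizer {
  constructor() { this.perSecond = new Map(); }        // epochSecond -> count
  record(epochMs) {
    const sec = Math.floor(epochMs / 1000);
    this.perSecond.set(sec, (this.perSecond.get(sec) || 0) + 1);
  }
  // Batch stage: collapse second-level buckets into minute-level ones.
  rollupToMinutes() {
    const perMinute = new Map();
    for (const [sec, count] of this.perSecond) {
      const min = Math.floor(sec / 60);
      perMinute.set(min, (perMinute.get(min) || 0) + count);
    }
    return perMinute;
  }
}

const s = new IncrementalSummarizer();
s.record(0); s.record(500); s.record(61000); // two events in minute 0, one in minute 1
console.log(s.rollupToMinutes().get(0));     // 2
```

The key property is that the batch job only ever touches the small set of pre-aggregated buckets, not the raw event stream, which is why it scales with time granularity rather than input TPS.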
>>>
>>> [1] [Dev] ESB Analytics - Verifying the common production use cases
>>> [2] https://wso2.org/jira/browse/ANLYESB-15
>>>
>>> Thanks,
>>> Supun
>>>
>>> --
>>> *Supun Sethunga*
>>> Software Engineer
>>> WSO2, Inc.
>>> http://wso2.com/
>>> lean | enterprise | middleware
>>> Mobile : +94 716546324
>>>
>>> ___
>>> Architecture mailing list
>>> Architecture@wso2.org
>>> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>>>
>>>
>>
>>
>> --
>>
>> *S. Suhothayan*
>> Technical Lead & Team Lead of WSO2 Complex Event Processor
>> *WSO2 Inc. *http://wso2.com
>> * <http://wso2.com/>*
>> lean . enterprise . middleware
>>
>>
>> *cell: (+94) 779 756 757 <%28%2B94%29%20779%20756%20757> | blog:
>> http://suhothayan.blogspot.com/ <http://suhothayan.blogspot.com/>twitter:
>> http://twitter.com/suhothayan <http://twitter.com/suhothayan> | linked-in:
>> http://lk.linkedin.com/in/suhothayan <http://lk.linkedin.com/in/suhothayan>*
>>
>
>
>
> --
> *Anjana Fernando*
> Senior Technical Lead
> WSO2 Inc. | http://wso2.com
> lean . enterprise . middleware
>



-- 

*S. Suhothayan*
Technical Lead & Team Lead of WSO2 Complex Event Processor
*WSO2 Inc. *http://wso2.com
* <http://wso2.com/>*
lean . enterprise . middleware


*cell: (+94) 779 756 757 | blog: http://suhothayan.blogspot.com/
<http://suhothayan.blogspot.com/>twitter: http://twitter.com/suhothayan
<http://twitter.com/suhothayan> | linked-in:
http://lk.linkedin.com/in/suhothayan <http://lk.linkedin.com/in/suhothayan>*
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] Product Integration Server thread model and implementation

2016-04-22 Thread Sriskandarajah Suhothayan
What's the wait strategy being used in the Disruptor? And is the maximum
number of Disruptors in all configurations 2?

Suho


On Fri, Apr 22, 2016 at 8:01 PM, Chanaka Fernando  wrote:

> Hi Isuru,
>
> Have we tested on the pure passthrough scenarios? According to the
> results, we cannot see much performance difference in using disruptor over
> thread pool. Shall we do some testing around passthrough scenarios and
> verify?
>
> Cheers,
> Chanaka
>
> On Fri, Apr 22, 2016 at 12:54 PM, Isuru Ranawaka  wrote:
>
>> Hi all,
>>
>> We have been working on decoupling the engine thread model from the
>> transport thread model. Previously carbon-transport included the Disruptor,
>> and given the behaviour of MSF4J it is very hard to map custom logic
>> written using MSF4J onto the Disruptor thread model in order to gain better
>> performance. So we have moved the Disruptor thread model to carbon-gateway,
>> and carbon-transport is kept with the Netty thread pool.
>>
>> In the current implementation carbon-transport dispatches events to the
>> registered message processor via Netty worker threads, and thereafter
>> processing operates under the engine-level thread model.
>>
>> Following is the Thread model diagram for Integration Server.
>>
>> [image: gw_thread_model.png]
>>
>> Basically it works as follows
>>
>>-
>>
>>CPU bound mediators are working on CPU bound disruptor threads.
>>-
>>
>>IO bound mediators are working on IO bound disruptor threads.
>>-
>>
>>    We assume custom mediators may be written poorly and may contain
>>    blocking calls, so those are executed on the IO-bound threads.
>>-
>>
>>    We have included a ThreadPool implementation as well and can switch
>>    between the ThreadPool-based model and the Disruptor-based model.
>>
>> This is because we haven’t yet finalized the exact thread model, and we
>> keep testing with different mediators to see how both behave with respect
>> to parameters such as TPS, memory, startup time, latency, etc.
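The essential difference between the two switchable hand-off models is the buffer: a lock-based queue for the ThreadPool versus a pre-allocated, power-of-two-sized ring buffer with sequence counters for the Disruptor. A single-threaded sketch of the claim/publish/consume cycle (illustrative only; not the LMAX implementation, and without its memory barriers):

```javascript
// Minimal ring buffer: fixed capacity, no per-event allocation after startup.
class RingBuffer {
  constructor(size) {                 // size should be a power of two
    this.slots = new Array(size).fill(null);
    this.mask = size - 1;             // cheap modulo via bit-mask
    this.head = 0;                    // next sequence to publish
    this.tail = 0;                    // next sequence to consume
  }
  publish(event) {
    if (this.head - this.tail === this.slots.length) return false; // full
    this.slots[this.head & this.mask] = event;
    this.head++;
    return true;
  }
  consume() {
    if (this.tail === this.head) return null;                      // empty
    return this.slots[this.tail++ & this.mask];
  }
}

const rb = new RingBuffer(4);
rb.publish('a'); rb.publish('b');
console.log(rb.consume()); // 'a'
```

The sequence-counter design is what gives the Disruptor its lock-free hand-off and cache friendliness; the trade-off, discussed below in this thread, is how a consumer waits when the buffer is empty.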
>>
>>
>> Following are some of the results from the tests we conducted with both
>> thread-model implementations for two main scenarios.
>>
>> Machine Details
>>
>> Server :-  32 core machine with 64 GB memory
>>
>> Back End Service :-  Netty based Echo Service which has TPS around 10
>>
>> Tested message size :- 4kb
>>
>> Server startup Time with Disruptor :- 1.34 s
>>
>> Server startup time without Disruptor :- 1.38 s
>>
>> Use case:-
>>
>> (CPU + IO)
>>
>> Header-based routing with file writing. One message path writes the
>> message to a file, and the other sends messages to the Echo service and
>> responds back to the client.
>>
>> TPS
>>
>> [image: image1.png]
>>
>> Latency
>>
>> [image: image2.png]
>>
>> Memory
>>
>> Disruptor
>>
>> [image: disruptor.png]
>>
>>
>>
>>
>>
>>
>>
>> Thread Pool
>>
>> [image: threadpoolMemory.png]
>>
>> Use case:-
>>
>> (CPU )
>>
>> Header-based routing: send messages to the Echo service and respond back
>> to the client.
>>
>> TPS
>>
>> [image: image3.png]
>>
>> Latency
>>
>> [image: image4.png]
>>
>>
>> For the test results please look into [1]
>>
>> [1]
>> https://docs.google.com/spreadsheets/d/1A2dxknP1xEJKBpl4ymbQD2Mt9kWywYCI-60j16JVLx0/edit#gid=0
>>
>>
>>
>> Thanks
>> IsuruR
>>
>> --
>> Best Regards
>> Isuru Ranawaka
>> M: +94714629880
>> Blog : http://isurur.blogspot.com/
>>
>
>
>
> --
> Thank you and Best Regards,
> Chanaka Fernando
> Senior Technical Lead
> WSO2, Inc.; http://wso2.com
> lean.enterprise.middleware
>
> mobile: +94 773337238
> Blog : http://soatutorials.blogspot.com
> LinkedIn:http://www.linkedin.com/pub/chanaka-fernando/19/a20/5b0
> Twitter:https://twitter.com/chanakaudaya
>
>
>
>
>
> ___
> Architecture mailing list
> Architecture@wso2.org
> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>
>




Re: [Architecture] Product Integration Server thread model and implementation

2016-04-23 Thread Sriskandarajah Suhothayan
In CEP we have experienced latency issues when chaining Disruptors. Don't go
for the PhasedBackOff strategy, as there is a cost in calculating time; and if
you use spinning or yielding there, you also have to correlate them with the
number of cores on the system, which becomes a problem when it comes to
deployment.

Suho
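The trade-off described above, spinning pins a core, sleeping adds latency, and a phased strategy pays for reading the clock, can be sketched as a strategy that escalates from spin to yield to sleep (a conceptual sketch of the idea, not the LMAX `PhasedBackOffWaitStrategy`; the limits are hypothetical tuning knobs):

```javascript
// Pick a wait action based on how many times the consumer has already retried.
// spinLimit/yieldLimit must be tuned with the available cores in mind, which
// is exactly the deployment-time problem noted above.
function waitAction(retries, spinLimit = 100, yieldLimit = 200) {
  if (retries < spinLimit) return 'spin';   // busy-wait: lowest latency, one core pinned
  if (retries < yieldLimit) return 'yield'; // give the scheduler a chance
  return 'sleep';                           // back off: cheap on CPU, adds latency
}

console.log(waitAction(10));   // 'spin'
console.log(waitAction(150));  // 'yield'
console.log(waitAction(1000)); // 'sleep'
```

On a box with fewer cores than spinning consumers, the 'spin' phase degrades into contention, which is why the thread below settled on sleeping wait (CPU-bound) and blocking wait (IO-bound) instead.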

On Sat, Apr 23, 2016 at 7:18 PM, Isuru Ranawaka  wrote:

> Hi Chanaka,
>
> I have run the test for the header-based routing scenario (pure CPU) and
> there is a small improvement with the Disruptor. We can get a clear
> performance difference if we send a huge load such that requests get queued
> at the ThreadPool and there is heavy contention for its internal queues,
> whereas the Disruptor is lock-free. When profiling this test I didn't see
> contention at the ThreadPool level either; actually, the advantage we got
> through the Disruptor is due to cache optimization at the Disruptor level.
> Another problem is that we have an active Netty thread pool as well, so
> there is no guarantee that Disruptor threads always get high priority; due
> to that, too, we cannot get the maximum out of the Disruptor thread model.
> I will run a long-running test with a significant amount of CPU-bound work
> and see how it behaves.
>
> @Suho,
>
> The CPU-bound Disruptor's wait strategy is sleeping wait and the IO-bound
> Disruptor's is blocking wait. I tested with PhasedBackOff with LiteLocking
> as well: not much difference in TPS, but the observed latency varied by a
> large amount for some requests.
>
> The number of Disruptors is configurable; I used five CPU-bound Disruptors
> and one IO-bound Disruptor for testing, which gives the maximum performance
> according to the tests we carried out for the Gateway.
>
> thanks
>
>
> On Fri, Apr 22, 2016 at 8:20 PM, Sriskandarajah Suhothayan 
> wrote:
>
>> Whats the waiting strategy that is being used in Disrupter? And is the
>> maximum of Disrupter in all configuration is 2 ?
>>
>> Suho
>>
>>
>> On Fri, Apr 22, 2016 at 8:01 PM, Chanaka Fernando 
>> wrote:
>>
>>> Hi Isuru,
>>>
>>> Have we tested on the pure passthrough scenarios? According to the
>>> results, we cannot see much performance difference in using disruptor over
>>> thread pool. Shall we do some testing around passthrough scenarios and
>>> verify?
>>>
>>> Cheers,
>>> Chanaka
>>>
>>> On Fri, Apr 22, 2016 at 12:54 PM, Isuru Ranawaka 
>>> wrote:
>>>
>>>> Hi all,
>>>>
>>>> We have working on decoupling engine thread model from transport thread
>>>> model.Previously carbon transport was included with the Disruptor and
>>>> according to the behaviour of MSF4J it is very hard to mapped custom logics
>>>> written using MSF4J to Disruptor thread model in order to gain better
>>>> performance. So we have moved Disruptor thread model to carbon-gateway and
>>>> carbon-transport is kept with Netty thread pool.
>>>>
>>>> According to current implementation carbon-transport will dispatch
>>>> events to registered message processor via Netty worker threads and there
>>>> after it operates under engine level thread model.
>>>>
>>>> Following is the Thread model diagram for Integration Server.
>>>>
>>>> [image: gw_thread_model.png]
>>>>
>>>> Basically it works as follows
>>>>
>>>>-
>>>>
>>>>CPU bound mediators are working on CPU bound disruptor threads.
>>>>-
>>>>
>>>>IO bound mediators are working on IO bound disruptor threads.
>>>>-
>>>>
>>>>We assume custom mediators are written in bad way and may contain
>>>>blocking calls so those are executed using IO bound mediator.
>>>>-
>>>>
>>>>We have included ThreadPool implementation as well and can switch
>>>>between ThreadPool based model or Disruptor based model.
>>>>
>>>> This is for we haven’t yet finalized exact thread model and we  keep
>>>> testing with different mediators how both are behaving according to
>>>> different parameters like TPS, memory, startup time ,latency , .etc
>>>>
>>>>
>>>> Following are some of the tests results we have conducted with  both
>>>> thread model  implementations for two main scenarios.
>>>>
>>>> Machine Details
>>>>
>>>> Server :-  32 core machine with 64 GB memory
>>>>
>>>> Back End Service :-  Netty based Echo Service which has TPS a

Re: [Architecture] [DS]Embeddable gadgets feature for Dashboard Server

2016-04-26 Thread Sriskandarajah Suhothayan
Hi Megala

I think it's not necessary to communicate with DS if we are doing
gadget-to-gadget communication, but we have to provide a JS library that the
web page developer needs to embed in order to get the pub-sub working.
Via this library we can do the pub-sub locally within that page, and the
information about who can communicate with whom should also be passed to the
lib so that it knows how to pass messages across.
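The page-local pub-sub the library would provide can be sketched as a topic registry plus a wiring table saying which gadget may subscribe to which topic (all names here are illustrative, not the actual DS library API):

```javascript
// Page-local pub-sub for embedded gadgets: no round-trip to the Dashboard Server.
class GadgetHub {
  constructor(wiring) {           // wiring: { topic: [allowedSubscriberIds] }
    this.wiring = wiring;
    this.subscribers = new Map(); // topic -> [{ id, callback }]
  }
  subscribe(topic, gadgetId, callback) {
    // Enforce the "who can communicate with whom" information.
    if (!(this.wiring[topic] || []).includes(gadgetId)) return false;
    if (!this.subscribers.has(topic)) this.subscribers.set(topic, []);
    this.subscribers.get(topic).push({ id: gadgetId, callback });
    return true;
  }
  publish(topic, message) {
    for (const sub of this.subscribers.get(topic) || []) sub.callback(message);
  }
}

const hub = new GadgetHub({ 'stock-select': ['chartGadget'] });
let received = null;
hub.subscribe('stock-select', 'chartGadget', (m) => { received = m; });
hub.publish('stock-select', { symbol: 'WSO2' });
console.log(received.symbol); // 'WSO2'
```

Keeping the wiring table inside the library is what lets messages flow without any call back to DS once the page is rendered.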

Regards
Suho

On Wed, Apr 27, 2016 at 10:10 AM, Megala Uthayakumar 
wrote:

> Hi,
>
> The high-level view of gadget rendering is as follows:
>
> Sorry for the inconvenience caused.
> ​
> Thanks.
>
> --
> Megala Uthayakumar
>
> Software Engineer
> Mobile : 0779967122
>





Re: [Architecture] Implementing proper security model for dashboard server

2016-04-28 Thread Sriskandarajah Suhothayan
Thanks Sumedha for the points. To make life easy for the gadget developer,
we decided to add OAuth token retrieval to DS.

Based on the offline discussion with Johann, Sinthuja and Geesara we
decided to support only the following scenarios for DS

1. If SSO is enabled, obtaining a token (OAuth2) using SAML Token and
passing to the backend

2. If SSO is disabled, obtaining a token (OAuth2) using client credential
grant type and passing to the backend,
Here the username and password will be obtained at server login and the
token is generated at the same time.

In both cases we will be using DCR for client registration at the server
level, and same token will be used by all gadgets to access the secured
backend APIs.

To access a secured backend from the gadgets, an oAuth2Client JS service
(Shindig features) will be implemented in DS, so that gadgets can talk to the
backend using the oAuth2Client, which will embed the appropriate
Authorization header when sending requests.
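The client-side half of this design can be sketched as a helper that, given a token the server already obtained (via SAML bearer when SSO is on, or password grant otherwise), attaches it to every backend call. This is a stand-in for the proposed oAuth2Client, not its real API; the token shape follows the standard OAuth2 token response:

```javascript
// Attach the server-obtained OAuth2 token to a backend request's options.
function withAuthHeader(token, requestOptions = {}) {
  return {
    ...requestOptions,
    headers: {
      ...(requestOptions.headers || {}),
      Authorization: `Bearer ${token.access_token}`, // standard bearer scheme
    },
  };
}

// Token as returned to the DCR-registered client at login (illustrative value).
const token = { access_token: 'abc123', token_type: 'Bearer' };
const opts = withAuthHeader(token, { method: 'GET' });
console.log(opts.headers.Authorization); // 'Bearer abc123'
```

Because the token is shared at the server level, every gadget on the dashboard can call the helper without doing its own grant flow.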

Regards
Suho

On Thu, Apr 28, 2016 at 2:37 PM, Sinthuja Ragendran 
wrote:

> Hi Sumedha,
>
> On Thu, Apr 28, 2016 at 1:58 PM, Sumedha Rubasinghe 
> wrote:
>
>> Geesara,
>> This is a model that should be coming out of Dashboard perspective.
>>
>> If we take a look @ basic building blocks of DS, its (similar to what you
>> have mentioned)
>> - Gadget
>> - Dashboard
>> - Wizards
>> - Dashboard Admin panel
>> - etc
>>
>> Each of these elements should have a permission model associated per
>> instance.
>>
>
> Yeah +1, as per now the permission checks are not implemented for these
> operations, but we need to incorporate that as well.
>
>
>> For example, defining "permission to view any gadget" is not enough.  But
>> rather it should be "permission to view Gadget X".
>> Same should be extended for all other building blocks. (AFAIK, this is
>> not there for gadgets as of now)
>>
>> These need to be stored @ gadget server level and evaluated b4 rendering
>> any gadget.
>>
>
> Yeah, actually we are planning to implement the role based access control
> for gadgets and then again different views of the dashboard page based on
> roles.
>
>
>>
>> Permissions to BE
>> 
>> Once presentation layer permissions are sorted, it becomes responsibility
>> of Gadget / Dashboard author to figure out mapping
>> those permissions to a backend API.
>>
>> There are multiple ways to do this based on how backend is secured.
>> - Passing session cookie obtained @ login to backend
>> - Obtaining a token (OAuth2) using the session cookie (via an OAuth2 grant
>> type)
>> - If SSO is enabled, obtaining a token (OAuth2) using SAML Token
>> - IdP enabled deployment
>>
>> Ensuring backend API's permission requirements match front end user's
>> privileges is part of author's
>> responsibility. This is not something DS product needs to worry about.
>>
>
> Exactly, but I think there should be some API provided by DS server
> (shindig features), so that the users can just call the necessary methods
> with respective parameters to get oauth token. WDYT?
>
> Thanks,
> Sinthuja.
>
>
>> If by any chance backend is written using WSO2 technologies, we can
>> leverage concepts like
>> - Sharing same identity provider for both DS and BE server
>> - passing authorisation details from FE to BE using JWT/SAML Response /
>> User profile
>>
>>
>> Permissions when gadgets being embedded into other products without
>> dashboard
>> 
>> This is yet another perspective of the same problem. This also can be
>> solved if we follow same principles
>> mentioned above.
>> - Having gadget instance level permission definition
>> - Way to obtain a gadget by passing in authorisation details (using one
>> of the methods mentions above)
>>
>
>> Same applies for dashboards.
>>
>>
>> On Thu, Apr 28, 2016 at 1:00 AM, Geesara Prathap 
>> wrote:
>>
>>> *Requirement:*
>>> *When a dashboard retrieves data from REST APIs which are secured, we
>>> require a proper security model in place to identify who can access this
>>> dashboard and at which level it should be enforced. In addition, how is
>>> the dashboard going to communicate with the respective REST API securely?*
>>>
>>>
>>>
>>>  Figure 01:
>>> Dashboard Server
>>>
>>>
>>> Data providers for gadgets need to communicate with DS securely. In most
>>> cases the data providers are REST APIs. There might also be situations in
>>> which a dashboard gets data from several different data providers. From
>>> the DS perspective, there must be an effective way to tackle these
>>> security-related issues to some extent. Referring to Figure 1, there are
>>> three places where we can address them.
>>>
>>>- gadget level
>>>- per-dashboard level
>>>- dashboard server level
>>>
>>> What would be the proper place in which to address these security
>>> concerns? If we try to address this at the gadget level, it will
Re: [Architecture] Implementing proper security model for dashboard server

2016-04-28 Thread Sriskandarajah Suhothayan
On Fri, Apr 29, 2016 at 6:47 AM, Farasath Ahamed  wrote:

> Hi Suho,
>
> Just to be clear, Are we going to use the Password Grant Type in the case
> where SSO is disabled or is it the Client Credentials grant type using the
> client_id and client_secret of the app created?
>
Sorry, it should be the Password Grant Type in the case where SSO is disabled.

>
>
> Thanks,
> Farasath Ahamed
> Software Engineer,
> WSO2 Inc.; http://wso2.com
> lean.enterprise.middleware
>
>
> Email: farasa...@wso2.com
> Mobile: +94777603866
> Blog: blog.farazath.com
> Twitter: @farazath619 <https://twitter.com/farazath619>
>
> On Thu, Apr 28, 2016 at 10:28 PM, Sriskandarajah Suhothayan  > wrote:
>
>> Thanks Sumedha for the points, to make life easy for the gadget developer
>> we decided to add oAuth token retrieval to DS.
>>
>> Based on the offline discussion with Johann, Sinthuja and Geesara we
>> decided to support only the following scenarios for DS
>>
>> 1. If SSO is enabled, obtaining a token (OAuth2) using SAML Token and
>> passing to the backend
>>
>> 2. If SSO is disabled, obtaining a token (OAuth2) using client credential
>> grant type and passing to the backend,
>> Here the username and password will be obtained at server login and the
>> token is generated at the same time.
>>
>> Sorry, my bad: it should be the Password Grant Type!


In both cases we will be using DCR for client registration at the server
>> level, and same token will be used by all gadgets to access the secured
>> backend APIs.
>>
>> To access secured backend from the gadgets a oAuth2Client js service (shindig
>> features) will be implemented at DS, such that gadgets can talk to
>> backend using the oAuth2Client which will embed appropriate authorisation
>> header when sending.
>>
>> Regards
>> Suho
>>
>> On Thu, Apr 28, 2016 at 2:37 PM, Sinthuja Ragendran 
>> wrote:
>>
>>> Hi Sumedha,
>>>
>>> On Thu, Apr 28, 2016 at 1:58 PM, Sumedha Rubasinghe 
>>> wrote:
>>>
>>>> Geesara,
>>>> This is a model that should be coming out of Dashboard perspective.
>>>>
>>>> If we take a look @ basic building blocks of DS, its (similar to what
>>>> you have mentioned)
>>>> - Gadget
>>>> - Dashboard
>>>> - Wizards
>>>> - Dashboard Admin panel
>>>> - etc
>>>>
>>>> Each of these elements should have a permission model associated per
>>>> instance.
>>>>
>>>
>>> Yeah +1, as per now the permission checks are not implemented for these
>>> operations, but we need to incorporate that as well.
>>>
>>>
>>>> For example, defining "permission to view any gadget" is not enough.
>>>> But rather it should be "permission to view Gadget X".
>>>> Same should be extended for all other building blocks. (AFAIK, this is
>>>> not there for gadgets as of now)
>>>>
>>>> These need to be stored @ gadget server level and evaluated b4
>>>> rendering any gadget.
>>>>
>>>
>>> Yeah, actually we are planning to implement the role based access
>>> control for gadgets and then again different views of the dashboard page
>>> based on roles.
>>>
>>>
>>>>
>>>> Permissions to BE
>>>> 
>>>> Once presentation layer permissions are sorted, it becomes
>>>> responsibility of Gadget / Dashboard author to figure out mapping
>>>> those permissions to a backend API.
>>>>
>>>> There are multiple ways to do this based on how backend is secured.
>>>> - Passing session cookie obtained @ login to backend
>>>> - Obtaining a token (OAuth2) using session cooking (using an OAuth2
>>>> grant type)
>>>> - If SSO is enabled, obtaining a token (OAuth2) using SAML Token
>>>> - IdP enabled deployment
>>>>
>>>> Ensuring backend API's permission requirements match front end user's
>>>> privileges is part of author's
>>>> responsibility. This is not something DS product needs to worry about.
>>>>
>>>
>>> Exactly, but I think there should be some API provided by DS server
>>> (shindig features), so that the users can just call the necessary methods
>>> with respective parameters to get oauth token. WDYT?
>>>
>>> Thanks,
>>> Sinthuja.
>>>
>>>
>>>>

Re: [Architecture] [DS] Gadget Generation Framework

2016-04-29 Thread Sriskandarajah Suhothayan
Please see the comments inline.

On Fri, Apr 29, 2016 at 7:52 PM, Tanya Madurapperuma  wrote:

> Hi all,
>
> *Introduction*
>
> The purpose of this feature is to provide a framework to generate gadgets
> where you can plug datasource providers and chart templates.
>
> For example, you will be able to plug in your RDBMS datasource, REST API,
> CSV file, realtime datasources, etc. as pluggable providers.
>
> *Flow*
>
> Select datasource provider (stage 1) --> Configure datasource parameters
> (stage 2) --> Configure chart parameters (stage 3) --> Preview gadget
> (stage 4) --> Generate gadget (stage 5)
>
> *Proposed Architecuture*
>
> Provider developers can plug their providers into DS by adding the
> respective provider entry to designer.json
>
> *{*
> **
> *"gadgetGeneration" :{*
> *"isCreateGadgetEnable": false,*
> *"providers": ["batch", "rdbms", "rest", "rt"]*
> *},*
> *   *
> *}*
>
> Can't we load this dynamically by scanning the files at server start?

A provider implementation should be placed under /portal/extensions/providers
> and should mainly contain 2 files.
>
>- config.json - contains the expected configuration in the "Configure
>datasource parameters" stage 2
>
> *example config*
> *   {*
>
> *"id":"rdbms",*
> *"name" : "Relational Database Source",*
> *"image" : "",*
> *"config" : [*
> *{*
> *"fieldLabel" : "Database URL",*
> *"fieldName" :"db_url",*
> *"fieldType" : "text",*
> *"defaultValue" : "",*
> *"isRequired" : true*
> *},*
> *{*
> *"fieldLabel" : "Username",*
> *"fieldName" :"username",*
> *"fieldType" : "text",*
> *"defaultValue" : "",*
> *"isRequired" : true*
> *},*
> *{*
> *"fieldLabel" : "Password",*
> *"fieldName" :"password",*
> *"fieldType" : "password",*
> *"defaultValue" : "",*
> *"isRequired" : true*
> *},*
> *{*
> *"fieldLabel" : " Database Driver",*
> *"fieldName" :"driver",*
> *"fieldType" : "dropDown",*
> *"valueSet" : [ ],*
> *"isRequired" : true*
> *},*
> *{*
> *"fieldLabel" : " Check Box",*
> *"fieldName" :"checkbox",*
> *"fieldType" : "checkbox",*
> *"isRequired" : true*
> *},*
> *{*
> *"fieldType" : "advanced",*
> *"partialName" : "myPartial"*
> *}*
> *]*
> *} *
>
> The configuration UI will be dynamically generated for the data types
> text box, checkbox, password field, and drop-down. A provider can also plug
> in its own UI blocks as partials when there are advanced fields.
>
>
>- index.js - implements the provider api for
>   - validateData (providerConfig)
>  - To validate the inputs provided at the stage 2
>  - providerConfig will be key value pairs of provided field names
>  and values
>   - getSchema (providerConfig)
>  - Returns the list of column names and their data types of the
>  configured data provider
>   - getData (providerConfig,schemaPropertyList)
>  - schemaPropertyList  will be list of column names selected at
>  stage 3
>
>
> IMO the index.js should have a getConfigInfo() which will return the
config.json against which the UI will be plotted; this is because for batch
and realtime we have to pre-populate the tables/streams.
I think the validation of data can be part of getSchema()/generateSchema().
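The provider contract discussed above might look like the following `index.js` skeleton. The field names follow the example config earlier in the thread; the bodies are stubs of my own, not a real RDBMS provider:

```javascript
// Skeleton of a gadget-generation provider implementing the proposed API.
const provider = {
  // Stage 2: check the user-supplied field values before moving on.
  validateData(providerConfig) {
    return Boolean(providerConfig.db_url && providerConfig.username);
  },
  // Stage 3: report available columns so the chart config can list them
  // (this is what would back the ["$COLUMN_NAMES"] substitution).
  getSchema(providerConfig) {
    return [{ name: 'timestamp', type: 'time' },
            { name: 'count', type: 'int' }]; // stub schema
  },
  // Stage 4/5: fetch rows for the columns selected at stage 3.
  getData(providerConfig, schemaPropertyList) {
    return schemaPropertyList.map(() => []); // stub: one empty series per column
  },
};

const config = { db_url: 'jdbc:mysql://localhost/test', username: 'root' };
console.log(provider.validateData(config)); // true
```

Folding validation into `getSchema()`, as suggested above, would simply mean having `getSchema` throw (or return an error object) when the config is unusable.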

Similarly, chart templates can also be plugged in by configuring
> designer.json
> Chart template implementations also go under
> /portal/extensions/chart-templates and have 2 main files.
>
>
>- config.json - contains the fields that needs to be populated at
>stage 3
>
> *example config*
>
> *{*
> *"id":"lineChart",*
> *"name" : "Line Chart",*
> *"config": [*
> *{*
> *"fieldLabel": "X axis",*
> *"fieldName": "x",*
> *"fieldType": "String",*
> *"fieldValue" : ["$COLUMN_NAMES"]*
> *},*
> *{*
> *"fieldLabel": "Y axis",*
> *"fieldName": "y",*
> *"fieldType": "Int",*
> *"fieldValue" : ["$COLUMN_NAMES"]*
> *},*
> *{*
> *"fieldLabel": "Chart Colour",*
> *"fieldName": "colour",*
> *"fieldType": "String"*
> *}*
> *]*
>
> *}*
>
> If the first element of the field value is *["$COLUMN_NAMES"]*, it will be
> populated with the list of column names retrieved by the getSchema method
> of the provider API.
>
>
>- index.js - implements the chart template api for
>- draw (chartConfig, data)
>  - responsible for plotting the chart/gadget
>  - chartConfig will be key value pairs of chart configu

[Architecture] Fixing auto batching in Siddhi

2016-05-12 Thread Sriskandarajah Suhothayan
Hi

There are several complaints that when Siddhi aggregates its outputs it
automatically batches the events, and hence the number of output events is
not consistently reproducible.

E.g. when using

from FooStream#window.time(100)
select bar, count() as totalReq
insert into FooBarStream;

In order to fix this issue we thought of improving the internal event chunk
"ComplexEventChunk" to have an isBatch() property and only batch events when
batching is enabled.

Through this, when we write batching extensions such as the TimeBatch and
LengthBatch windows, we can explicitly batch the outputs, and the behavior
will be more predictable.

Regards
Suho
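The proposed fix can be pictured as a chunk that only groups events when a batching window explicitly marked it as a batch (a sketch of the idea in plain code, not Siddhi's actual `ComplexEventChunk`):

```javascript
// Event chunk that emits per-event unless a window marked it as a batch.
class EventChunk {
  constructor(isBatch) { this.batch = isBatch; this.events = []; }
  isBatch() { return this.batch; }
  add(e) { this.events.push(e); }
  // Non-batching chunks yield one output per event; batching chunks yield one
  // output containing all events.
  outputs() { return this.batch ? [this.events] : this.events.map((e) => [e]); }
}

const plain = new EventChunk(false);      // e.g. the time window in the example
plain.add('e1'); plain.add('e2');
console.log(plain.outputs().length);      // 2: predictable one output per event

const windowed = new EventChunk(true);    // e.g. from a timeBatch/lengthBatch window
windowed.add('e1'); windowed.add('e2');
console.log(windowed.outputs().length);   // 1: an explicitly requested batch
```

With the flag defaulting to false, the count() example above always emits one updated result per input event, which is the reproducibility users were asking for.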


[Architecture] [Siddhi] Making Disruptor configurable

2016-05-15 Thread Sriskandarajah Suhothayan
Hi

We have made the Disruptor optional for Siddhi [1]. Currently it is always
enabled and uses PhasedBackoffWaitStrategy; even though the Disruptor was
giving high throughput, several issues were identified.

1. It adds latency, and tuning latency is use-case specific, hence the
deployment becomes complicated.
2. PhasedBackoffWaitStrategy does not show good results when there are
lots of Disruptors.

Hence we have disabled the Disruptor by default and made it an option to
enable via configuration.

By using @plan:async(bufferSize='<size>') you can enable the Disruptor on all
streams in an execution plan with a queue size of <size>. Here
(bufferSize='<size>') is optional.

e.g

*@plan:async(bufferSize='2')*

define stream cseEventStream (symbol string, price float, volume int);

@info(name = 'query1')
from cseEventStream[70 > price]
select *
insert into innerStream ;

@info(name = 'query2')
from innerStream[volume > 90]
select *
insert into outputStream ;

In this case cseEventStream, innerStream and outputStream will have
async behaviors using Disruptor

Alternatively we can also enable Disruptor for specific streams by
annotating them as below.

e.g

*@async(bufferSize='2')*
define stream cseEventStream (symbol string, price float, volume int);

@info(name = 'query1')
from cseEventStream[70 > price]
select *
insert into innerStream ;

@info(name = 'query2')
from innerStream[volume > 90]
select *
insert into outputStream ;


Here only cseEventStream will have async behavior using Disruptor

Performance stats after the improvements.

Filter Query *without Disruptor*
Throughput : 3.5M Events/sec
Time spent :  *2.29E-4 ms*

Filter Query *with Disruptor*
Throughput : *6.1M Events/sec*
Time spent :  0.028464 ms

Multiple Filter Query without Disruptor
Throughput : 3.0M Events/sec
Time spent :  2.91E-4 ms

Multiple Filter Query with Disruptor
Throughput : 5.5M Events/sec
Time spent :  0.089888 ms

[1]https://github.com/wso2/siddhi/tree/latency

Regards
Suho



Re: [Architecture] [Siddhi] Making Disruptor configurable

2016-05-15 Thread Sriskandarajah Suhothayan
On Mon, May 16, 2016 at 11:07 AM, Seshika Fernando  wrote:

> Hi Suho,
>
> Looks good.
>
> If (bufferSize='') is optional, what is the default bufferSize that
> will be taken if I just add @plan:async ?
>
> Yes, the default one will be taken, and the default is 1024.

Suho


> seshi
>
> On Sun, May 15, 2016 at 3:58 PM, Sriskandarajah Suhothayan 
> wrote:
>
>>
>> Hi
>>
>> We have made Disruptor as optional for Siddhi[1], currently its always
>> enabled and it uses PhasedBackoffWaitStrategy, event though Disruptor was
>> giving high throughput there are several issues identified.
>>
>> 1. It is adding latency, and tuning latency is subject to use-cases hence
>> the deployment is becoming complicated.
>> 2. PhasedBackoffWaitStrategy is not showing good results when there are
>> lots of Disruptors.
>>
>> Hence we have removed disruptor by default and made is as an option to
>> add via configurations.
>>
>> By using @plan:async(bufferSize='') you can enable Disruptor at all
>> streams in an execution-plan with the queue size of .  Here
>> (bufferSize='') is optional.
>>
>> e.g
>>
>> *@plan:async(bufferSize='2')*
>>
>> define stream cseEventStream (symbol string, price float, volume int);
>>
>> @info(name = 'query1')
>> from cseEventStream[70 > price]
>> select *
>> insert into innerStream ;
>>
>> @info(name = 'query2')
>> from innerStream[volume > 90]
>> select *
>> insert into outputStream ;
>>
>> In this case cseEventStream, innerStream and outputStream will have
>> async behaviors using Disruptor
>>
>> Alternatively we can also enable Disruptor for specific streams by
>> annotating them as below.
>>
>> e.g
>>
>> *@async(bufferSize='2')*
>> define stream cseEventStream (symbol string, price float, volume int);
>>
>> @info(name = 'query1')
>> from cseEventStream[70 > price]
>> select *
>> insert into innerStream ;
>>
>> @info(name = 'query2')
>> from innerStream[volume > 90]
>> select *
>> insert into outputStream ;
>>
>>
>> Here only cseEventStream will have async behavior using Disruptor
>>
>> Performance stats after the improvements.
>>
>> Filter Query *without* *Disruptor*
>> Throughput : 3.5M Events/sec
>> Time spend :  *2.29E-4 ms*
>>
>> Filter Query *with Disruptor *
>> Throughput : *6.1M Events/sec *
>> Time spend :  0.028464 ms
>>
>> Multiple Filter Query without Disruptor
>> Throughput : 3.0M Events/sec
>> Time spend :  2.91E-4 ms
>>
>> Multiple Filter Query with Disruptor
>> Throughput : 5.5M Events/sec
>> Time spend :  0.089888 ms
>>
>> [1]https://github.com/wso2/siddhi/tree/latency
>>
>> Regards
>> Suho
>>
>> --
>>
>> *S. Suhothayan*
>> Technical Lead & Team Lead of WSO2 Complex Event Processor
>> *WSO2 Inc. *http://wso2.com
>> * <http://wso2.com/>*
>> lean . enterprise . middleware
>>
>>
>> *cell: (+94) 779 756 757 <%28%2B94%29%20779%20756%20757> | blog:
>> http://suhothayan.blogspot.com/ <http://suhothayan.blogspot.com/>twitter:
>> http://twitter.com/suhothayan <http://twitter.com/suhothayan> | linked-in:
>> http://lk.linkedin.com/in/suhothayan <http://lk.linkedin.com/in/suhothayan>*
>>
>
>


-- 

*S. Suhothayan*
Technical Lead & Team Lead of WSO2 Complex Event Processor
*WSO2 Inc. *http://wso2.com
* <http://wso2.com/>*
lean . enterprise . middleware


*cell: (+94) 779 756 757 | blog: http://suhothayan.blogspot.com/
<http://suhothayan.blogspot.com/>twitter: http://twitter.com/suhothayan
<http://twitter.com/suhothayan> | linked-in:
http://lk.linkedin.com/in/suhothayan <http://lk.linkedin.com/in/suhothayan>*
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] [Siddhi] Making Disruptor configurable

2016-05-16 Thread Sriskandarajah Suhothayan
Hi Malith,

Yes, there is a drop in throughput if we are not using the Disruptor. That's
why we have not removed the Disruptor entirely but rather made it
configurable, so that based on the use case we can pick latency or
throughput. And yes, getting both is not trivial and needs heavy
use-case-specific tuning.

Regards
Suho

On Mon, May 16, 2016 at 2:06 PM, Malith Jayasinghe  wrote:

> Although there is an improvement in the latency, I notice that we get a
> significant reduction in the throughput (in both scenarios) when not using
> the disruptor. Is there a way to address this?  I guess it will be
> difficult to optimise both performance metrics at the same time?
>
> On Mon, May 16, 2016 at 11:10 AM, Sriskandarajah Suhothayan  > wrote:
>
>>
>>
>> On Mon, May 16, 2016 at 11:07 AM, Seshika Fernando 
>> wrote:
>>
>>> Hi Suho,
>>>
>>> Looks good.
>>>
>>> If (bufferSize='') is optional, what is the default bufferSize that
>>> will be taken if I just add @plan:async ?
>>>
>> Yes, the default will be taken, and the default bufferSize is 1024.
>>
>> Suho
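
A side note on bufferSize values: the underlying LMAX Disruptor requires its
ring-buffer size to be a power of two (the 1024 default satisfies this).
Whether Siddhi rounds other values up or rejects them is not stated in this
thread, so the helper below is only a hypothetical sketch of such rounding,
written in Python for illustration.

```python
def next_buffer_size(requested: int) -> int:
    """Round a requested bufferSize up to the next power of two.

    Hypothetical helper: illustrates the power-of-two constraint that
    LMAX Disruptor ring buffers impose; Siddhi's own handling of
    non-power-of-two values is not confirmed here.
    """
    if requested < 1:
        raise ValueError("bufferSize must be positive")
    size = 1
    while size < requested:
        size *= 2
    return size

print(next_buffer_size(2))     # 2
print(next_buffer_size(1000))  # 1024 (the documented default)
```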
>>
>>
>>> seshi
>>>
>>> On Sun, May 15, 2016 at 3:58 PM, Sriskandarajah Suhothayan <
>>> s...@wso2.com> wrote:
>>>
>>>>
>>>> Hi
>>>>
>>>> We have made the Disruptor optional in Siddhi [1]. Currently it is always
>>>> enabled and uses PhasedBackoffWaitStrategy; even though the Disruptor was
>>>> giving high throughput, several issues were identified:
>>>>
>>>> 1. It adds latency, and tuning that latency is use-case specific,
>>>> hence the deployment becomes complicated.
>>>> 2. PhasedBackoffWaitStrategy does not show good results when there are
>>>> lots of Disruptors.
>>>>
>>>> Hence we have disabled the Disruptor by default and made it an option
>>>> that can be enabled via configuration.
>>>>
>>>> By using @plan:async(bufferSize='') you can enable the Disruptor on all
>>>> streams in an execution plan with the given queue size. Here
>>>> (bufferSize='') is optional.
>>>>
>>>> e.g
>>>>
>>>> *@plan:async(bufferSize='2')*
>>>>
>>>> define stream cseEventStream (symbol string, price float, volume int);
>>>>
>>>> @info(name = 'query1')
>>>> from cseEventStream[70 > price]
>>>> select *
>>>> insert into innerStream ;
>>>>
>>>> @info(name = 'query2')
>>>> from innerStream[volume > 90]
>>>> select *
>>>> insert into outputStream ;
>>>>
>>>> In this case cseEventStream, innerStream and outputStream will have
>>>> async behaviors using Disruptor
>>>>
>>>> Alternatively we can also enable Disruptor for specific streams by
>>>> annotating them as below.
>>>>
>>>> e.g
>>>>
>>>> *@async(bufferSize='2')*
>>>> define stream cseEventStream (symbol string, price float, volume int);
>>>>
>>>> @info(name = 'query1')
>>>> from cseEventStream[70 > price]
>>>> select *
>>>> insert into innerStream ;
>>>>
>>>> @info(name = 'query2')
>>>> from innerStream[volume > 90]
>>>> select *
>>>> insert into outputStream ;
>>>>
>>>>
>>>> Here only cseEventStream will have async behavior using Disruptor
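
Siddhi implements the async behavior described above with the LMAX
Disruptor; the Python sketch below is not the Siddhi API and only
illustrates the decoupling and back-pressure that a bounded buffer gives an
@async stream (query1's [70 > price] filter runs on the consumer side).

```python
import queue
import threading

def run_async_stream(events, buffer_size=2):
    """Illustrate @async(bufferSize=...): the producer enqueues into a
    bounded buffer and blocks when it is full, while a consumer thread
    drains it -- roughly the back-pressure behavior of the annotation.
    Not the Siddhi/Disruptor implementation, just the queueing idea."""
    buf = queue.Queue(maxsize=buffer_size)  # bounded like bufferSize='2'
    out = []

    def consumer():
        while True:
            ev = buf.get()
            if ev is None:            # sentinel: stream closed
                break
            if ev["price"] < 70:      # query1's filter [70 > price]
                out.append(ev)

    t = threading.Thread(target=consumer)
    t.start()
    for ev in events:
        buf.put(ev)                   # blocks when the buffer is full
    buf.put(None)
    t.join()
    return out

events = [{"symbol": "WSO2", "price": 55.0}, {"symbol": "IBM", "price": 80.0}]
print(run_async_stream(events))       # only the price<70 event survives
```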
>>>>
>>>> Performance stats after the improvements.
>>>>
>>>> Filter Query *without* *Disruptor*
>>>> Throughput : 3.5M Events/sec
>>>> Time spent :  *2.29E-4 ms*
>>>>
>>>> Filter Query *with Disruptor *
>>>> Throughput : *6.1M Events/sec *
>>>> Time spent :  0.028464 ms
>>>>
>>>> Multiple Filter Query without Disruptor
>>>> Throughput : 3.0M Events/sec
>>>> Time spent :  2.91E-4 ms
>>>>
>>>> Multiple Filter Query with Disruptor
>>>> Throughput : 5.5M Events/sec
>>>> Time spent :  0.089888 ms
>>>>
>>>> [1]https://github.com/wso2/siddhi/tree/latency
>>>>
>>>> Regards
>>>> Suho
>>>>
>>>> --
>>>>
>>>> *S. Suhothayan*
>>>> Technical Lead & Team Lead of WSO2 Complex Event Processor
>>>>

Re: [Architecture] Comments about IoTS Docs

2016-05-17 Thread Sriskandarajah Suhothayan
It's in this release: users can write queries by themselves using the IoT
Analytics Pack.
Currently we are working on building a framework for templating common
scenarios; we might need 2 more weeks to finish the work.
I believe we can add this to the next IoT release.

Suho

On Tue, May 17, 2016 at 12:50 PM, Srinath Perera  wrote:

> Thanks Sumedha.
>
> Suho, are we going to support users to write queries CEP, Spark on top of
> this data IoTS captured? ( or is it for future releases?) IMO that will add
> lot of values to the story.
>
> --Srinath
>
> On Tue, May 17, 2016 at 10:59 AM, Sumedha Rubasinghe 
> wrote:
>
>>
>>
>> On Tue, May 17, 2016 at 10:45 AM, Srinath Perera 
>> wrote:
>>
>>>
>>>1. When you land on the doc, it is not clear where you should go
>>>
>>>  Planning to fix this Srinath.
>>
>>>
>>>1. There is a quick start guide and "Getting started with IoTS"
>>>server
>>>2. in #2 "Start the Virtual Fire-Alarm", it does not tell you have
>>>to go to device section to find "virtual fire alarm"
>>>3. When I said srinathFirealarm, it does not like the name but did
>>>not tell me why
>>>
>>> I used the same name (srinathFirealarm) and it worked. Did you try on a
>> latest nightly build?
>>
>>>
>>>1. Download instructions and "Example: Navigate to the device agent
>>>file that is in the /device_agents/virtual_firealarm 
>>> directory."
>>>does not match
>>>
>>> This is just giving an example directory to which you may have extracted
>> the device agent.
>>
>>
>>>1. When I click on buzzer, why should I pick a protocol and state?
>>>Cannot we pick protocol automatically ( based on how device connected) 
>>> and
>>>state via a drop down box?
>>>
>>>
>> We will fix this to be a drop down with default selected. This is because
>> we have included both MQTT and XMPP support in this example.
>>
>>
>>>
>>>1. [image: Inline image 1]
>>>2. Add end of "Device onwer" tutorial, can we point to how to write
>>>a new device. Also can we have a mobile app that will make your phone a
>>>device?
>>>
>>> Android Sense is this one.
>> It's basically using mobile phone as a gateway with bunch of sensors
>> connected.
>>
>>
>>
>>
>>> BTW, demo is nice!!
>>>
>>> --Srinath
>>>
>>> --
>>> 
>>> Blog: http://srinathsview.blogspot.com twitter:@srinath_perera
>>> Site: http://home.apache.org/~hemapani/
>>> Photos: http://www.flickr.com/photos/hemapani/
>>> Phone: 0772360902
>>>
>>
>>
>>
>> --
>> /sumedha
>> m: +94 773017743
>> b :  bit.ly/sumedha
>>
>
>
>
> --
> 
> Blog: http://srinathsview.blogspot.com twitter:@srinath_perera
> Site: http://home.apache.org/~hemapani/
> Photos: http://www.flickr.com/photos/hemapani/
> Phone: 0772360902
>



-- 

*S. Suhothayan*
Technical Lead & Team Lead of WSO2 Complex Event Processor
*WSO2 Inc. *http://wso2.com
* *
lean . enterprise . middleware


*cell: (+94) 779 756 757 | blog: http://suhothayan.blogspot.com/
twitter: http://twitter.com/suhothayan
 | linked-in:
http://lk.linkedin.com/in/suhothayan *
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] [Siddhi] Making Disruptor configurable

2016-05-18 Thread Sriskandarajah Suhothayan
Hi Kasun

In Siddhi, the Disruptor is used between queries (at streams); hence, by
enabling the Disruptor at certain streams we can enable parallel processing
when there is a large number of queries.

Regards
Suho

On Tue, May 17, 2016 at 10:46 PM, Kasun Indrasiri  wrote:

> Hi Suho,
>
> We have observed similar behavior with recent Gateway framework
> development in which the heavy resource consumption overrules the
> performance gain that the Disruptor brings in. Also for IO bound scenarios
> we tried using dedicated Disruptors for CPU and IO bound scenarios but that
> again didn't give a significant gain. In CEP's case, have we identified
> specific use cases where we must use the Disruptor configuration to gain
> maximum throughput?
>
> Thanks,
> Kasun
>
>
> On Sun, May 15, 2016 at 3:28 AM, Sriskandarajah Suhothayan 
> wrote:
>
>>
>> Hi
>>
>> We have made the Disruptor optional in Siddhi [1]. Currently it is always
>> enabled and uses PhasedBackoffWaitStrategy; even though the Disruptor was
>> giving high throughput, several issues were identified:
>>
>> 1. It adds latency, and tuning that latency is use-case specific, hence
>> the deployment becomes complicated.
>> 2. PhasedBackoffWaitStrategy does not show good results when there are
>> lots of Disruptors.
>>
>> Hence we have disabled the Disruptor by default and made it an option
>> that can be enabled via configuration.
>>
>> By using @plan:async(bufferSize='') you can enable the Disruptor on all
>> streams in an execution plan with the given queue size. Here
>> (bufferSize='') is optional.
>>
>> e.g
>>
>> *@plan:async(bufferSize='2')*
>>
>> define stream cseEventStream (symbol string, price float, volume int);
>>
>> @info(name = 'query1')
>> from cseEventStream[70 > price]
>> select *
>> insert into innerStream ;
>>
>> @info(name = 'query2')
>> from innerStream[volume > 90]
>> select *
>> insert into outputStream ;
>>
>> In this case cseEventStream, innerStream and outputStream will have
>> async behaviors using Disruptor
>>
>> Alternatively we can also enable Disruptor for specific streams by
>> annotating them as below.
>>
>> e.g
>>
>> *@async(bufferSize='2')*
>> define stream cseEventStream (symbol string, price float, volume int);
>>
>> @info(name = 'query1')
>> from cseEventStream[70 > price]
>> select *
>> insert into innerStream ;
>>
>> @info(name = 'query2')
>> from innerStream[volume > 90]
>> select *
>> insert into outputStream ;
>>
>>
>> Here only cseEventStream will have async behavior using Disruptor
>>
>> Performance stats after the improvements.
>>
>> Filter Query *without* *Disruptor*
>> Throughput : 3.5M Events/sec
>> Time spent :  *2.29E-4 ms*
>>
>> Filter Query *with Disruptor *
>> Throughput : *6.1M Events/sec *
>> Time spent :  0.028464 ms
>>
>> Multiple Filter Query without Disruptor
>> Throughput : 3.0M Events/sec
>> Time spent :  2.91E-4 ms
>>
>> Multiple Filter Query with Disruptor
>> Throughput : 5.5M Events/sec
>> Time spent :  0.089888 ms
>>
>> [1]https://github.com/wso2/siddhi/tree/latency
>>
>> Regards
>> Suho
>>
>> --
>>
>> *S. Suhothayan*
>> Technical Lead & Team Lead of WSO2 Complex Event Processor
>> *WSO2 Inc. *http://wso2.com
>> * <http://wso2.com/>*
>> lean . enterprise . middleware
>>
>>
>> *cell: (+94) 779 756 757 <%28%2B94%29%20779%20756%20757> | blog:
>> http://suhothayan.blogspot.com/ <http://suhothayan.blogspot.com/>twitter:
>> http://twitter.com/suhothayan <http://twitter.com/suhothayan> | linked-in:
>> http://lk.linkedin.com/in/suhothayan <http://lk.linkedin.com/in/suhothayan>*
>>
>
>
> --
> Kasun Indrasiri
> Software Architect
> WSO2, Inc.; http://wso2.com
> lean.enterprise.middleware
>
> cell: +94 77 556 5206
> Blog : http://kasunpanorama.blogspot.com/
>



-- 

*S. Suhothayan*
Technical Lead & Team Lead of WSO2 Complex Event Processor
*WSO2 Inc. *http://wso2.com
* <http://wso2.com/>*
lean . enterprise . middleware


*cell: (+94) 779 756 757 | blog: http://suhothayan.blogspot.com/
<http://suhothayan.blogspot.com/>twitter: http://twitter.com/suhothayan
<http://twitter.com/suhothayan> | linked-in:
http://lk.linkedin.com/in/suhothayan <http://lk.linkedin.com/in/suhothayan>*
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


[Architecture] WSO2 IoTS Analytics v1.0.0-ALPHA Released

2016-05-30 Thread Sriskandarajah Suhothayan
*WSO2 IoTS Analytics v1.0.0-ALPHA Released*

We are pleased to announce the alpha release of WSO2 IoTS Analytics v1.0.0
[1], which is powered by WSO2 Data Analytics Server. IoTS Analytics can be
used to monitor devices and analyze their sensor readings. Your feedback is
highly appreciated; any bugs or issues can be reported at [2].

This release contains the following capabilities:

   1. Device Overview Dashboard - shows an overview of device status.
   2. Geo Dashboard - with predefined geographical analysis.
   3. Gadget generation wizard.
   4. Analytics Execution Manager - with predefined common analytics
   solutions such as current-value (number) charts and grouped analysis (via
   bar charts) with Siddhi & Spark queries & stream mapping.

[1]
http://svn.wso2.org/repos/wso2/people/suho/packs/analytics-iots/wso2analytics-iots-1.0.0-SNAPSHOT.zip
[2] *https://wso2.org/jira/browse/ANLYIOTS
*

Analytics IoTS Team


-- 

*S. Suhothayan*
Technical Lead & Team Lead of WSO2 Complex Event Processor
*WSO2 Inc. *http://wso2.com
* *
lean . enterprise . middleware


*cell: (+94) 779 756 757 | blog: http://suhothayan.blogspot.com/
twitter: http://twitter.com/suhothayan
 | linked-in:
http://lk.linkedin.com/in/suhothayan *
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] [CEP] Extending the Regression Function to support time window

2016-06-02 Thread Sriskandarajah Suhothayan
I think having both batchSize & duration will be good, as this will limit
the number of events considered; this can help to improve performance as
well.

Suho

On Thu, Jun 2, 2016 at 1:59 PM, Charini Nanayakkara 
wrote:

> Hi Tishan,
>
> For my requirement, having time window alone is adequate. So your point
> might be valid. However I'm concerned of the re-usability of the extension.
>
> @Srinath, WDYT? Which would be the better option? Having a single
> implementation or two different ones?
>
> Thanks
>
> On Thu, Jun 2, 2016 at 1:48 PM, Tishan Dahanayakage 
> wrote:
>
>> Charini,
>>
>> My knowledge on the on this domain is sparse. Hence I do not know whether
>> a scenario where time AND length is a valid business case. If it is a valid
>> business case +1 for the design including both parameters in same
>> implementation.
>>
>> Thanks
>> /Tishan
>>
>> On Thu, Jun 2, 2016 at 12:54 PM, Charini Nanayakkara 
>> wrote:
>>
>>> Hi Tishan,
>>>
>>> Yes. Assuming batch size is 5 and time window is 20 mins, only 5 out of
>>> 10 events which arrive within last 5 mins would be processed due to batch
>>> size constraint (even though all events must be processed if time alone was
>>> considered). Having separate implementations would work on the majority of
>>> the scenarios, since only time OR length is usually applicable but not
>>> both. However, having two implementations would cause trouble in the
>>> situations where both the time factor and length are important (equivalent
>>> to AND operation on the two constraints). If our requirement is to have
>>> only one of the two constraints, we can use a very large value for the
>>> other parameter (i.e. if we only need to limit number of events based on
>>> time = 1 sec constraint, we can specify 1,000,000 for batch size assuming
>>> we have prior knowledge that 1,000,000 events would never arrive within 1
>>> sec). IMHO neither of the two options (separate or single implementation)
>>> are perfect for every scenario. However having a single implementation
>>> would help address more cases as I understand. What's your opinion on this?
>>>
>>> Thanks
>>>
>>> On Thu, Jun 2, 2016 at 10:14 AM, Charini Nanayakkara 
>>> wrote:
>>>
 Hi All,

 I have planned to extend the existent Regression Function by adding
 time parameter. Regression is a functionality available for the Siddhi
 stream processor extension known as timeseries. In the current
 implementation, the regression function consumes two or more parameters and
 performs regression as follows.

 The mandatory parameters to be given are the dependent attribute Y and
 the independent attribute(s) X1, X2,Xn. For performing simple linear
 regression, merely one independent attribute would be given. Two or more
 independent attributes are consumed for executing multiple linear
 regression.

 timeseries:regress(Y, X1, X2..,Xn)

 The other three optional parameters to be specified are calculation
 interval, batch size and confidence interval (ci). In the case where those
 are not specified, the default values would be assumed.

 timeseries:regress(calcInterval, batchSize, ci, Y, X1, X2..,Xn)

 Batch size works as a length window in this implementation, which
 allows one to restrict the number of events considered when executing
 regression in real time. For example, if length is 5, only the latest 5
 events (current event and the 4 events prior to it) would be used for
 performing regression.
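
As an illustration of what happens over such a batch, here is a minimal
ordinary-least-squares sketch for the simple (one independent attribute)
case, in Python. The actual estimator used by timeseries:regress is not
specified in this thread, so treat the exact math as an assumption.

```python
def simple_regress(ys, xs, batch_size=5):
    """Least-squares fit y = b0 + b1*x over the latest batch_size events,
    mirroring how the batch size acts as a length window.
    Illustrative only; not the extension's actual implementation."""
    ys, xs = ys[-batch_size:], xs[-batch_size:]   # keep only latest events
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    b1 = sxy / sxx            # slope
    b0 = my - b1 * mx         # intercept
    return b0, b1

b0, b1 = simple_regress([2, 4, 6, 8, 10], [1, 2, 3, 4, 5])
print(b0, b1)  # 0.0 2.0
```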

 *This suggested extension would allow the user to restrict the number
 of events based on a time window as well, apart from constraining based on
 length only. Therefore regression function would consume duration as an
 additional parameter, subsequent to the completion of my task. *

 *timeseries:regress(calcInterval, duration, batchSize, ci, Y, X1,
 X2..,Xn).*

 Here the parameter 'duration' would comprise of two parts, where the
 first part specifies the number and the second part specifies the unit
 (e.g. 2 sec, 5 mins, 7 days). On arrival of each event, the past events to
 be considered for performing regression would be based on this 'duration'
 (i.e. If a new event arrives at 10.00 a.m and the duration is 5  mins, only
 the events which arrived within the time period of 9.55 a.m to 10.00 a.m
 are considered for regression).
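
The combined constraint described above (time window AND batch size) can be
sketched as follows; the function name and event shape are hypothetical,
only the selection semantics follow the description in this thread.

```python
def window_events(events, now, duration_sec=None, batch_size=None):
    """Select the events eligible for regression: first apply the time
    window (events newer than now - duration), then keep at most the
    latest batch_size of them -- AND semantics of the two constraints."""
    selected = events
    if duration_sec is not None:
        cutoff = now - duration_sec
        selected = [e for e in selected if e["ts"] >= cutoff]
    if batch_size is not None:
        selected = selected[-batch_size:]   # latest events only
    return selected

evts = [{"ts": t, "v": t} for t in range(10)]        # ts = 0..9
print(window_events(evts, now=9, duration_sec=5, batch_size=3))
# events with ts 7, 8, 9: six events pass the time window, batch keeps 3
```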

 Suggestions and comments are most welcome.

 Thank you.

 --
 Charini Vimansha Nanayakkara
 Software Engineer at WSO2
 Mobile: 0714126293


>>>
>>>
>>> --
>>> Charini Vimansha Nanayakkara
>>> Software Engineer at WSO2
>>> Mobile: 0714126293
>>>
>>>
>>
>>
>> --
>> Tishan Dahanayakage
>> Software Engineer
>> WSO2, Inc.
>> Mobile:+94 716481328
>>
>> Disclaimer: This communication may contain privileged or other
>> confidential information and is intended exclusively for the addressee/s.
>

Re: [Architecture] Data Bridge Agent Publisher for C5 products

2016-06-02 Thread Sriskandarajah Suhothayan
Are we going to use data bridge in C5?
C5 has Netty-based transports; can't we use one of them to publish events
to DAS? Since DAS has the capability to receive events over any transport
protocol via extensions, this will not be a problem.

In my opinion we should not depend on the data publisher, as it has
several issues: events can get out of order or be dropped, and it's not
reliable. We should have a publishing framework that is independent of the
transport, so users can pick Thrift, HTTP or AMQP based on their use cases.

WDYT?
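
A transport-independent publishing framework of the kind proposed here
might expose a contract like the following sketch (the names are
hypothetical, not an existing WSO2 API); each transport -- Thrift, HTTP,
AMQP -- would then be one implementation behind the same interface.

```python
from abc import ABC, abstractmethod

class EventPublisher(ABC):
    """Transport-agnostic publisher contract: callers publish events
    without knowing whether Thrift, HTTP or AMQP carries them.
    Hypothetical sketch of the proposal, not a real WSO2 interface."""

    @abstractmethod
    def publish(self, stream: str, payload: dict) -> None:
        """Send one event on the named stream."""

class InMemoryPublisher(EventPublisher):
    """Trivial 'transport' used here only to exercise the contract."""
    def __init__(self):
        self.sent = []

    def publish(self, stream, payload):
        self.sent.append((stream, payload))

p = InMemoryPublisher()
p.publish("org.wso2.metrics.stream", {"gauge": 42})   # stream name invented
print(p.sent)  # [('org.wso2.metrics.stream', {'gauge': 42})]
```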

Suho

On Thu, Jun 2, 2016 at 2:35 PM, Kishanthan Thangarajah 
wrote:

> A separate point to note.
>
> Thinking along the AS 6.0 and C5 aspect, what we need is a library, where
> we could use that in both OSGi env and non-OSGi environments. It should not
> have any direct dependency on carbon API's. Currently, with AS 6.0, we are
> using the data publisher from C4 without any code changes and removing
> unwanted dependencies. If we planning to write this again, we should come
> up with the minimum library that could be used in a OSGi and non-OSGi env.
>
>
> On Thu, Jun 2, 2016 at 1:40 PM, Isuru Perera  wrote:
>
>> Hi,
>>
>> On Thu, Jun 2, 2016 at 1:10 PM, Sinthuja Ragendran 
>> wrote:
>>
>>> Hi IsuruP,
>>>
>>> Please find the comments inline.
>>>
>>> On Thu, Jun 2, 2016 at 12:18 PM, Isuru Perera  wrote:
>>>
 Hi,

 This is regarding $subject and the main problem we have is that there
 is no Carbon 5 compatible feature for data bridge agent.

>>>
>>> Data bridge agent publisher is not depends on carbon, and it has some
>>> dependencies for carbon utils, and carbon base, which we can eliminate by
>>> passing proper configurations in data-agent-config.xml.
>>>
>> Yes. This is what should be done.
>>
>> I think we should avoid carbon dependencies in data publisher. We can
>> have some Carbon specific component to initialize data publisher in Carbon
>> (OSGi) environment.
>>
>>> In that case, what do you mean by it's not compatible by Carbon 5 as
>>> it's anyhow doesn't depend on carbon features?
>>>
>> As I mentioned earlier, the publisher has Carbon 4.x dependencies. So, we
>> need to workaround problems like NoClassDefFoundError for CarbonUtils
>> etc.
>>
>>>
>>>

 Since Data Bridge Agent is already an OSGi bundle, we can use it within
 C5 products. But we have to include it with some feature.

 For example, Carbon Metrics needs to publish events to DAS. So, is it
 okay if I keep data bridge agent in Metrics feature?

>>>
>>> No, I don't think that is a good option. Because the publisher is a
>>> generic feature, and it doesn't have any relation ship to metrics feature
>>> other then metrics feature is using data publisher feature. In that case,
>>> you need to have just importFeature defn for the datapublisher feature form
>>> the metrics feature.
>>>
>> Yes. The correct way is to import data publisher feature. However there
>> is no such feature available now. Since Metrics needs the publisher
>> dependencies, I thought we can include those dependencies until we have a
>> data publisher designed to work with C5. When someone wants to install
>> metrics feature, it should work without any issue. Right now, I cannot do a
>> release of Carbon Metrics till I have answers to the questions raised in
>> this mail thread.
>>
>>>
>>>

 Other problem is that the current Data Bridge Agent is written for
 Carbon 4.x based products. For example it uses CarbonUtils to find the
 location of data-agent-config.xml. The CarbonUtils class used by the agent
 is only available in C4.

 We can avoid this by giving a configuration file to the agent. Then
 there will be no NoClassDefFoundError. However as the next step, the agent
 requires the client-trustore.jks and the password to the truststore.

 How should we give the trust store configuration in Carbon 5? For now,
 we use the system properties: javax.net.ssl.trustStore and
 javax.net.ssl.trustStorePassword.

>>>
>>> I think in C5 also the above mentioned system properties will be
>>> exposed, anyhow I'm not sure though. Anyhow you can set above configuration
>>> from data-agent-config.xml as well, so you may have it in the static
>>> location now.
>>>
>> Yes. We can do that as well.
>>
>> Thanks!
>>
>>>
>>> Thanks,
>>> Sinthuja.
>>>
>>>
 The Message Broker is trying to use Carbon Metrics for analytics and we
 need to know what are the recommendations for using Data Bridge Agent in 
 C5.

>>>
 Thanks,

 Best Regards,

 --
 Isuru Perera
 Associate Technical Lead | WSO2, Inc. | http://wso2.com/
 Lean . Enterprise . Middleware

 about.me/chrishantha
 Contact: +IsuruPereraWSO2
 


Re: [Architecture] [DS] Gadget Generation from REST API

2016-06-02 Thread Sriskandarajah Suhothayan
Looks good, please proceed. For your point #2, don't store the token in
files but rather keep it in memory.

Regards
Suho

On Wed, Jun 1, 2016 at 10:19 PM, Geesara Prathap  wrote:

> Hi All,
>
> I am going to implement a new data provider that gets data from a
> third-party REST API for the gadget generation wizard in DS. These are
> the authorization methods to be supported in this implementation.
>
>
> Authorization Method
>
> Required Fields
>
> No Auth
>
> Basic Auth
>
> Username, password
>
> OAuth2 with password grant type
>
> Auth URL
>
> Client id and client secret
>
> Scope
>
> Username
>
> Password
>
> OAuth2 with client credentials grant type
>
> Auth URL
>
> Client id and client secret
>
> Scope
>
> 1. Do we need to support any additional authorization mechanisms?
> 2. The access token is temporarily stored in a configuration file since
> this is only required for gadget generation. Is there any concern on this?
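
For the OAuth2-with-password-grant row in the table above, the token
request would combine the listed fields roughly as sketched below. The
endpoint URL and credential values are placeholders, and this only builds
the request (per RFC 6749's password grant) rather than sending it.

```python
from urllib.parse import urlencode
import base64

def password_grant_request(auth_url, client_id, client_secret,
                           username, password, scope):
    """Build an OAuth2 resource-owner password-grant token request:
    Basic auth header from the client credentials, form body with
    grant_type=password. Sketch only -- no HTTP call is made."""
    creds = f"{client_id}:{client_secret}".encode()
    headers = {
        "Authorization": "Basic " + base64.b64encode(creds).decode(),
        "Content-Type": "application/x-www-form-urlencoded",
    }
    body = urlencode({
        "grant_type": "password",
        "username": username,
        "password": password,
        "scope": scope,
    })
    return auth_url, headers, body

url, headers, body = password_grant_request(
    "https://localhost:9443/oauth2/token",   # hypothetical endpoint
    "client_id_value", "client_secret_value",
    "admin", "admin", "default")
print(body)
```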
>
> Thanks,
> Geesara
>
> --
> Geesara Prathap Kulathunga
> Software Engineer
> WSO2 Inc; http://wso2.com
> Mobile : +940772684174
>
>


-- 

*S. Suhothayan*
Technical Lead & Team Lead of WSO2 Complex Event Processor
*WSO2 Inc. *http://wso2.com
* *
lean . enterprise . middleware


*cell: (+94) 779 756 757 | blog: http://suhothayan.blogspot.com/
twitter: http://twitter.com/suhothayan
 | linked-in:
http://lk.linkedin.com/in/suhothayan *
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] Access Level Model For WSO2 Dashboard Server

2016-06-07 Thread Sriskandarajah Suhothayan
Why are we not using different permissions for each dashboard rather than
using roles? I believe using permissions will be more scalable than using
roles. WDYT?

Regards
Suho

On Tue, Jun 7, 2016 at 2:38 PM, Nisala Nanayakkara  wrote:

> Hi Udara,
>
> Since these are internal roles, they are not stored in LDAP. So it will
> work fine.
>
> Thanks,
> Nisala
>
> On Tue, Jun 7, 2016 at 10:57 AM, Udara Rathnayake  wrote:
>
>> Another question, ​Is this going to work if we have to connect to a
>> read-only LDAP/A
>> ​D​
>> userstore?
>>
>> On Tue, Jun 7, 2016 at 9:43 AM, Tanya Madurapperuma 
>> wrote:
>>
>>> Is this model scalable? Because per dashboard we will have to create 4
>>> internal roles. So if we have N number of dashboards we will end up having
>>> 4 * N number of internal roles.
>>>
>>> @ IS team : is this approach fine? Or is there any better approach?
>>>
>>> Thanks,
>>> Tanya
>>>
>>> On Mon, Jun 6, 2016 at 3:44 PM, Nisala Nanayakkara 
>>> wrote:
>>>
 adding Johan and Manuranga

 Thanks,
 Nisala

 On Mon, Jun 6, 2016 at 3:41 PM, Nisala Nanayakkara 
 wrote:

> Hi all,
>
> I am working on implementing an access-level model for WSO2 Dashboard
> Server. Currently a global permission model for create/delete/login is
> implemented by Megala, but it does not support per-dashboard access
> levels for users. I am going to extend it and implement a permission
> model that can be used to provide per-dashboard access for the users.
>
> In order to implement this feature, I am going to add four roles at
> dashboard creation time as follows,
>
>- internal/dashboard/{dashboardID}/editor
>- internal/dashboard/{dashboardID}/viewer
>- internal/dashboard/{dashboardID}/settings
>- internal/dashboard/{dashboardID}/delete
>
> At dashboard creation time, the user who creates the dashboard will get
> all four roles, but other users have to be granted the above roles to
> perform the corresponding actions on the dashboard. We can assign these
> four roles to users, and they will be given different access levels
> according to their roles.
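
The four-roles-per-dashboard scheme described above can be sketched as
follows; the dashboard ID and helper names are hypothetical, only the
internal/dashboard/{dashboardID}/{action} role-name pattern follows the
proposal.

```python
ACTIONS = ("editor", "viewer", "settings", "delete")

def roles_for_dashboard(dashboard_id):
    """The four internal roles created alongside a dashboard."""
    return [f"internal/dashboard/{dashboard_id}/{a}" for a in ACTIONS]

def can(user_roles, dashboard_id, action):
    """True if the user holds the role guarding this action."""
    return f"internal/dashboard/{dashboard_id}/{action}" in user_roles

# The creator receives all four roles; "sales-kpi" is a made-up ID.
creator_roles = set(roles_for_dashboard("sales-kpi"))
print(can(creator_roles, "sales-kpi", "editor"))   # True
print(can(creator_roles, "other-db", "delete"))    # False
```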
>
> Please feel free to give any feedback.
>
> Thanks,
> Nisala
> --
> *Nisala Niroshana Nanayakkara,*
> Software Engineer
> Mobile:(+94)717600022
> WSO2 Inc., http://wso2.com/
>



 --
 *Nisala Niroshana Nanayakkara,*
 Software Engineer
 Mobile:(+94)717600022
 WSO2 Inc., http://wso2.com/

>>>
>>>
>>>
>>> --
>>> Tanya Madurapperuma
>>>
>>> Senior Software Engineer,
>>> WSO2 Inc. : wso2.com
>>> Mobile : +94718184439
>>> Blog : http://tanyamadurapperuma.blogspot.com
>>>
>>
>>
>>
>> --
>> Regards,
>> UdaraR
>>
>
>
>
> --
> *Nisala Niroshana Nanayakkara,*
> Software Engineer
> Mobile:(+94)717600022
> WSO2 Inc., http://wso2.com/
>
>
>


-- 

*S. Suhothayan*
Technical Lead & Team Lead of WSO2 Complex Event Processor
*WSO2 Inc. *http://wso2.com
* *
lean . enterprise . middleware


*cell: (+94) 779 756 757 | blog: http://suhothayan.blogspot.com/
twitter: http://twitter.com/suhothayan
 | linked-in:
http://lk.linkedin.com/in/suhothayan *
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] Modification of the gadget generation wizard's extension structure

2016-06-07 Thread Sriskandarajah Suhothayan
The gadget generation API is orthogonal to the usage API, and we are not
changing the usage API; hence gadgets created via the old generation tools
will still work.

Regards
Suho

On Tue, Jun 7, 2016 at 3:34 PM, Rajith Vitharana  wrote:

> Hi Tanya,
>
> On Tue, Jun 7, 2016 at 3:21 PM, Tanya Madurapperuma 
> wrote:
>
>> Hi Rajith,
>>
>> We have not done any GA release, not even an Alpha release of the product
>> with this, but a component repo release.
>>
> Still if this is a API change to a component which we already released to
> public, I feel we have to think of those aspects as well. (Conflicts
> happens only if some one is going to use it, hence kind of taking a chance
> doing such release, if we can't 100% sure no one will going to use that) or
> else we may need to provide migration (support when/if) needed. Just my two
> cents :)
>
> Thanks,
>
> --
> Rajith Vitharana
>
> Software Engineer,
> WSO2 Inc. : wso2.com
> Mobile : +94715883223
> Blog : http://lankavitharana.blogspot.com/
>
>
>


-- 

*S. Suhothayan*
Technical Lead & Team Lead of WSO2 Complex Event Processor
*WSO2 Inc. *http://wso2.com
* *
lean . enterprise . middleware


*cell: (+94) 779 756 757 | blog: http://suhothayan.blogspot.com/
twitter: http://twitter.com/suhothayan
 | linked-in:
http://lk.linkedin.com/in/suhothayan *
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] Modification of the gadget generation wizard's extension structure

2016-06-08 Thread Sriskandarajah Suhothayan
On Wed, Jun 8, 2016 at 3:31 PM, Manuranga Perera  wrote:

> Hi Tanya, Sinthuja,
>
> 1) We had a chat about how we can use gadget parameters instead of
> generation, have you guys considered that approach?
>
> We have cases like database credentials which should not be shown to the
user and should not be editable from the gadget properties. Further, we also
need a model where a gadget is created by some privileged user and then used
by others, so gadget generation is the appropriate model to handle
this.


> Edit/ Re-generate function is not supported yet.
>
> 2) The issues is, with this model it will be harder to support that even
> in the future. At least we should serialize all the parameters with the
> generated gadget.
>
Editing is something we can achieve with very little effort: if we store
all the properties that were passed in the UI in a config file within
the gadget, then we can simply load that into the gadget generation wizard
when we need to modify the gadget.

Regards
Suho

> --
> With regards,
> *Manu*ranga Perera.
>
> phone : 071 7 70 20 50
> mail : m...@wso2.com
>
>
>


-- 

*S. Suhothayan*
Technical Lead & Team Lead of WSO2 Complex Event Processor
*WSO2 Inc. *http://wso2.com
* *
lean . enterprise . middleware


*cell: (+94) 779 756 757 | blog: http://suhothayan.blogspot.com/
twitter: http://twitter.com/suhothayan
 | linked-in:
http://lk.linkedin.com/in/suhothayan *
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] [CEP] Siddhi Extension for calculate percentile values

2016-06-08 Thread Sriskandarajah Suhothayan
Since p can't change during execution, make sure to force p to be a
constant value. Am I correct here?

Regards
Suho

On Wed, Jun 8, 2016 at 6:38 AM, Ashen Weerathunga  wrote:

> Hi All,
>
> I'm writing a Siddhi extension for calculating percentile values. This
> will be implemented as an Aggregate Function Extension under the math
> extensions. Two input parameters will be required for this function, as below:
>
> * percentile(** arg, ** p)*
>
>- *arg* : values that need to be considered when calculating the
>percentile value
>- *p* : percentile
>- This will return an estimate for pth percentile of arg values.
>
> eg : *percentile(temperature, 95.0)*
>
>- returns the 95th percentile value of all the temperature events
>based on their arrival and expiry.
>
> Please let me know if you have any suggestions on this.
>
> Thanks,
> Ashen
>
> --
> *Ashen Weerathunga*
> Software Engineer
> WSO2 Inc.: http://wso2.com
> lean.enterprise.middleware
>
> Email: as...@wso2.com
> Mobile: +94 716042995 <94716042995>
> LinkedIn: *http://lk.linkedin.com/in/ashenweerathunga
> *
>



-- 

*S. Suhothayan*
Technical Lead & Team Lead of WSO2 Complex Event Processor
*WSO2 Inc. *http://wso2.com
* *
lean . enterprise . middleware


*cell: (+94) 779 756 757 | blog: http://suhothayan.blogspot.com/
twitter: http://twitter.com/suhothayan
 | linked-in:
http://lk.linkedin.com/in/suhothayan *


Re: [Architecture] Caching Support for Analytics Event Tables

2016-06-20 Thread Sriskandarajah Suhothayan
Is this in line with the RDBMS implementation? Otherwise it will be
confusing to users.
Shall we have a chat and merge the caching code?

@Mohan can you work with Anjana

Regards
Suho
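The cache described in the quoted mail below (most-recently-used entries, bounded by a byte size and a lifetime) behaves roughly like the following sketch. Class and method names are illustrative, not the actual carbon-analytics code:

```python
import time
from collections import OrderedDict

class RecordCache:
    """Sketch of a per-event-table cache: MRU entries bounded by
    cache.size.bytes, each expiring after cache.timeout.seconds."""

    def __init__(self, timeout_seconds=60, max_bytes=10 * 1024 * 1024):
        self.timeout = timeout_seconds
        self.max_bytes = max_bytes
        self.size = 0
        self.entries = OrderedDict()  # key -> (record, byte_size, stored_at)

    def get(self, key, now=None):
        now = time.time() if now is None else now
        entry = self.entries.get(key)
        if entry is None:
            return None                  # cache miss
        record, nbytes, stored_at = entry
        if now - stored_at > self.timeout:
            del self.entries[key]        # expired: drop and report a miss
            self.size -= nbytes
            return None
        self.entries.move_to_end(key)    # mark as recently used
        return record

    def put(self, key, record, nbytes, now=None):
        now = time.time() if now is None else now
        old = self.entries.pop(key, None)
        if old:
            self.size -= old[1]
        self.entries[key] = (record, nbytes, now)
        self.size += nbytes
        # Evict least-recently-used entries once over the size cap.
        while self.size > self.max_bytes and len(self.entries) > 1:
            _, (_, evicted_bytes, _) = self.entries.popitem(last=False)
            self.size -= evicted_bytes
```

With the defaults quoted below (60 s, 10 MB), repeated lookups in a CEP execution plan hit the local cache instead of the backing store.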

On Mon, Jun 20, 2016 at 12:49 PM, Anjana Fernando  wrote:

> Hi,
>
> With a chat we had with Srinath, we've decided to set the default cache
> timeout to 10 seconds, so from this moment, it is set to 10 seconds by
> default in the code.
>
> Cheers,
> Anjana.
>
> On Wed, Jun 15, 2016 at 1:57 PM, Nirmal Fernando  wrote:
>
>> Great! Thanks Anjana!
>>
>> On Wed, Jun 15, 2016 at 11:26 AM, Anjana Fernando 
>> wrote:
>>
>>> Hi,
>>>
>>> We've added the $subject. Basically, a local cache is now maintained in
>>> each event table, where it will store the most recently used data items in
>>> the cache, up to a certain given cache size, for a maximum given lifetime.
>>> The format is as follows:-
>>>
>>>  @from(eventtable = 'analytics.table' , table.name = 'name', *caching*
>>> = 'true', *cache.timeout.seconds* = '10', *cache.size.bytes* = '10')
>>>
>>> The cache.timeout.seconds and cache.size.bytes values are optional, with
>>> default values of 60 (1 minute) and 1024 * 1024 * 10 (10 MB) respectively.
>>>
>>> Also, there are some debug logs available in the component, if you want
>>> to check for explicit cache hit/miss situations and record lookup timing,
>>> basically enable debug logs for the class
>>> "org.wso2.carbon.analytics.eventtable.AnalyticsEventTable".
>>>
>>> So basically, if you use analytics event tables in performance sensitive
>>> areas in your CEP execution plans, do consider using caching if it is
>>> possible to do so.
>>>
>>> The unit tests are updated with caching, and the updated docs can be
>>> found here [1].
>>>
>>> [1]
>>> https://docs.wso2.com/display/DAS310/Understanding+Event+Streams+and+Event+Tables#UnderstandingEventStreamsandEventTables-AnalyticseventtableAnalyticseventtable
>>>
>>> Cheers,
>>> Anjana.
>>> --
>>> *Anjana Fernando*
>>> Senior Technical Lead
>>> WSO2 Inc. | http://wso2.com
>>> lean . enterprise . middleware
>>>
>>
>>
>>
>> --
>>
>> Thanks & regards,
>> Nirmal
>>
>> Team Lead - WSO2 Machine Learner
>> Associate Technical Lead - Data Technologies Team, WSO2 Inc.
>> Mobile: +94715779733
>> Blog: http://nirmalfdo.blogspot.com/
>>
>>
>>
>
>
> --
> *Anjana Fernando*
> Senior Technical Lead
> WSO2 Inc. | http://wso2.com
> lean . enterprise . middleware
>



-- 

*S. Suhothayan*
Technical Lead & Team Lead of WSO2 Complex Event Processor
*WSO2 Inc. *http://wso2.com
* *
lean . enterprise . middleware


*cell: (+94) 779 756 757 | blog: http://suhothayan.blogspot.com/
twitter: http://twitter.com/suhothayan
 | linked-in:
http://lk.linkedin.com/in/suhothayan *


Re: [Architecture] Caching Real time analytics data

2016-07-06 Thread Sriskandarajah Suhothayan
This is a valid case; I believe this can be added to the UI publisher.
@Thilini can you implement caching at the UI publisher such that when a
gadget connects, it will first get all the cached events and then update
itself as it does now?

The proper fix is that events should be pushed to a persistent store via
DAS, and the gadgets should be written to fetch the data from the store
during startup and then update themselves via real-time events.

We have plans to implement this in the next release on top of DAS.

Regards
Suho
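The interim fix suggested above amounts to a bounded replay buffer in the UI publisher: a late-connecting gadget is replayed the cached history before receiving live updates. A minimal sketch with illustrative names:

```python
from collections import deque

class CachingUIPublisher:
    """Sketch of a UI publisher that keeps the last N events so a
    gadget connecting late sees recent history, then live updates."""

    def __init__(self, cache_size=100):
        self.cache = deque(maxlen=cache_size)  # oldest events drop off
        self.subscribers = []

    def publish(self, event):
        self.cache.append(event)
        for callback in self.subscribers:
            callback(event)

    def subscribe(self, callback):
        # Replay cached events first, then deliver live ones.
        for event in self.cache:
            callback(event)
        self.subscribers.append(callback)
```

As noted below, this only papers over a restart: a server crash still loses the buffer, which is why the persistent-store approach is the proper fix.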



On Wed, Jul 6, 2016 at 11:19 PM, Sachith Withana  wrote:

> Agreed Sinthuja.
>
> But what about for a smaller window size (5/10/15 mins)?
>
> The reason why I brought up this issue is, in my case, I use several real
> time gadgets.
> And at the startup, they are all empty for that user until the data gets
> pushed in.
>
> As an end user, I would like to see the last status of the real time
> analytics when I log in.
>
> Thanks,
> Sachith
>
> On Wed, Jul 6, 2016 at 12:09 PM, Sinthuja Ragendran 
> wrote:
>
>> Hi Sachith,
>>
>> If the use-case is to display the 1-hour analytics data from CEP, then
>> IMO he/she simply needs to store the CEP results in a persistence
>> store (DAS, or RDBMS via the RDBMS event publisher), and then let the gadget
>> read from the persistence store. I don't think caching is a good option in
>> such cases because anyhow if the server crashes due to some reason the data
>> is not going to be shown.
>>
>> Thanks,
>> Sinthuja.
>>
>> On Wed, Jul 6, 2016 at 10:12 PM, Sachith Withana 
>> wrote:
>>
>>> Hi all,
>>>
>>> In the dashboard, the real time data is only shown if the user is logged
>>> into the dashboard at the time of the data is being pushed.
>>>
>>> If the data is being pushed every hour, a new user who logs in would
>>> potentially have to wait up to one hour to see the real-time data; if
>>> the user refreshes, they have to wait another hour to see the data and
>>> would lose the current data completely.
>>>
>>> I understand from the CEP perspective it's similar to fire and forget,
>>> but can't we add some level of caching to prevent this?
>>>
>>> Regards,
>>> Sachith
>>> --
>>> Sachith Withana
>>> Software Engineer; WSO2 Inc.; http://wso2.com
>>> E-mail: sachith AT wso2.com
>>> M: +94715518127
>>> Linked-In: 
>>> https://lk.linkedin.com/in/sachithwithana
>>>
>>
>>
>>
>> --
>> *Sinthuja Rajendran*
>> Technical Lead
>> WSO2, Inc.:http://wso2.com
>>
>> Blog: http://sinthu-rajan.blogspot.com/
>> Mobile: +94774273955
>>
>>
>>
>
>
> --
> Sachith Withana
> Software Engineer; WSO2 Inc.; http://wso2.com
> E-mail: sachith AT wso2.com
> M: +94715518127
> Linked-In: 
> https://lk.linkedin.com/in/sachithwithana
>



-- 

*S. Suhothayan*
Technical Lead & Team Lead of WSO2 Complex Event Processor
*WSO2 Inc. *http://wso2.com
* *
lean . enterprise . middleware


*cell: (+94) 779 756 757 | blog: http://suhothayan.blogspot.com/
twitter: http://twitter.com/suhothayan
 | linked-in:
http://lk.linkedin.com/in/suhothayan *


Re: [Architecture] RabbitMQ Input Adopter in CEP

2016-07-07 Thread Sriskandarajah Suhothayan
It's better if we can avoid using the Axis2 Transport if it requires
restarting CEP/DAS for each and every change.
Please verify.

Regards
Suho

On Thu, Jul 7, 2016 at 2:14 PM, Yashothara Shanmugarajah <
yashoth...@wso2.com> wrote:

> ​​
> Hi All,
>
> I have planned to develop a RabbitMQ Input Adapter (Receiver) for CEP.
> RabbitMQ's task is to get messages and forward them to the receiver. A
> "Consumer (C)" is a program that mostly waits to receive messages. A queue
> is the component that stores messages for some time. An exchange receives
> messages from producers on one side and pushes them to queues on the other.
> The exchange must know exactly what to do with a message it receives. As I
> am going to build an Input Adapter, I have to focus on the Consumer part.
> The RabbitMQ receiver has to connect to the broker and listen for messages
> from a particular queue and exchange. I planned to use the Axis2 transport,
> as CEP is integrated with it, and to use the binary message builder; after
> that, it will read the message and create the internal event format.
>
> Your comments and suggestions are highly appreciated.
>
> Thanks.
> Best Regards,
> Yashothara.S
>
> Software Engineer
> WSO2
>
>


-- 

*S. Suhothayan*
Technical Lead & Team Lead of WSO2 Complex Event Processor
*WSO2 Inc. *http://wso2.com
* *
lean . enterprise . middleware


*cell: (+94) 779 756 757 | blog: http://suhothayan.blogspot.com/
twitter: http://twitter.com/suhothayan
 | linked-in:
http://lk.linkedin.com/in/suhothayan *


Re: [Architecture] RabbitMQ Input Adopter in CEP

2016-07-07 Thread Sriskandarajah Suhothayan
+1

Regards
Suho

On Thu, Jul 7, 2016 at 3:31 PM, Malaka Silva  wrote:

> I guess we can check for possibilities of reusing inbound endpoint code
> here. This way we can identify what is missing to do this from both sides.
> WDYT?
>
> On Thu, Jul 7, 2016 at 2:48 PM, Sriskandarajah Suhothayan 
> wrote:
>
>> It's better if we can avoid using Axis2 Transport if we need to restart
>> CEP/DAS for each and every changes.
>> Please verify.
>>
>> Regards
>> Suho
>>
>> On Thu, Jul 7, 2016 at 2:14 PM, Yashothara Shanmugarajah <
>> yashoth...@wso2.com> wrote:
>>
>>> ​​
>>> Hi All,
>>>
>>> I have planned to develop RabbitMQ Input Adapter (Receiver) for CEP. 
>>> RabbitMQ
>>> task is getting the messages and forward it to the receiver.  A "Consumer
>>> (C)" is a program that mostly waits to receive messages. Queue is the one
>>> which stores messages for a sometime. Exchange on one side it receives
>>> messages from producers and the other side it pushes them to queues. The
>>> exchange must know exactly what to do with a message it receives. As I am
>>> going to do Input Adapter I have to focus on Consumer part.  RabbitMQ
>>> receiver have to connect to broker and listen for messages from particular
>>> queue and exchange. I planned to use Axis2 transport as CEP is integrated
>>> with Axis2 Transport. I planned to use binary message builder and after
>>> that read the message and create the internal event format.
>>>
>>> Your comments and suggestions are highly appreciated.
>>>
>>> Thanks.
>>> Best Regards,
>>> Yashothara.S
>>>
>>> Software Engineer
>>> WSO2
>>>
>>>
>>
>>
>> --
>>
>> *S. Suhothayan*
>> Technical Lead & Team Lead of WSO2 Complex Event Processor
>> *WSO2 Inc. *http://wso2.com
>> * <http://wso2.com/>*
>> lean . enterprise . middleware
>>
>>
>> *cell: (+94) 779 756 757 | blog: http://suhothayan.blogspot.com/
>> twitter: http://twitter.com/suhothayan | linked-in:
>> http://lk.linkedin.com/in/suhothayan*
>>
>
>
>
> --
>
> Best Regards,
>
> Malaka Silva
> Senior Technical Lead
> M: +94 777 219 791
> Tel : 94 11 214 5345
> Fax :94 11 2145300
> Skype : malaka.sampath.silva
> LinkedIn : http://www.linkedin.com/pub/malaka-silva/6/33/77
> Blog : http://mrmalakasilva.blogspot.com/
>
> WSO2, Inc.
> lean . enterprise . middleware
> http://www.wso2.com/
> http://www.wso2.com/about/team/malaka-silva/
> <http://wso2.com/about/team/malaka-silva/>
> https://store.wso2.com/store/
>
> Save a tree -Conserve nature & Save the world for your future. Print this
> email only if it is absolutely necessary.
>



-- 

*S. Suhothayan*
Technical Lead & Team Lead of WSO2 Complex Event Processor
*WSO2 Inc. *http://wso2.com
* <http://wso2.com/>*
lean . enterprise . middleware


*cell: (+94) 779 756 757 | blog: http://suhothayan.blogspot.com/
twitter: http://twitter.com/suhothayan | linked-in:
http://lk.linkedin.com/in/suhothayan*


Re: [Architecture] [Dev] [IS] [Analytics] Improvement to use Siddhi streams to send notifications

2016-07-18 Thread Sriskandarajah Suhothayan
Hi

Based on the request of the IS team, we have recently added support for
loading template files from the registry.
I think with this feature we can do the mapping on the Event Publisher side;
then IS can send only the core data for the notification. I think building
the whole message in IS is too much customization for emails.

Please set up a meeting so we can discuss the possible ways of implementing
this.

Regards
Suho
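The publisher-side mapping idea amounts to filling placeholders in a registry-loaded template from the event's arbitrary data map, so IS only sends the core data. A minimal sketch; the `{{token}}` syntax and function names are assumptions for illustration, not the actual event-publisher template engine:

```python
import re

def render_template(template, data):
    """Fill {{placeholder}} tokens in a template from the event's
    arbitrary data map; unknown tokens are left untouched."""
    def lookup(match):
        key = match.group(1)
        return str(data.get(key, match.group(0)))
    return re.sub(r"\{\{(\w+)\}\}", lookup, template)

template = "{{subject}}\n{{body}}\n{{footer}}"
event_data = {"subject": "Password reset",
              "body": "Click the link to reset your password.",
              "footer": "WSO2 Identity Server"}
email_text = render_template(template, event_data)
```

With this split, the template (including locale-specific variants) lives with the publisher, and the stream only carries `subject`, `body` and `footer` values.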

On Mon, Jul 18, 2016 at 5:52 PM, Indunil Upeksha Rathnayake <
indu...@wso2.com> wrote:

> Hi,
>
> We are trying to make some improvements to the notification sending module,
> where we have integrated the analytics common features into IS in order to
> send several kinds of notifications (ex: Email, SMS).
>
> The current implementation is in [1]; there, only email notification was
> covered, and we directly publish to the EmailEventAdapter.
>
> Now we are trying to send notifications via publishing an event to the
> Event stream without directly calling an Output Adapter. The approach we
> have taken is as follows.
>
> *1) In server startup, the following will be created:*
>
> *i) A stream for each and every notification type, including the necessary
> attributes. Ex: Email Notification - a stream with the subject, body and
> footer as attributes.*
>
> *ii) Event Publishers, registered for each and every stream, of the required
> output event adapter type. Ex: Email Notification - an event publisher of
> the email output event adapter type.*
>
> *2) Publishing an event to EventStreamService, which includes an arbitrary
> data map with the necessary data needed for the specific notification type.
> Ex: Email Notification - please find the code segments in [2] for a better
> understanding.*
>
> On the IS side, we select a specific email template and fill out the
> placeholders before sending the subject, body and footer as arbitrary map
> attributes.
>
> But even though we passed an arbitrary data map, when we send an
> email from the EmailEventAdapter, it won't pick out the subject, body or
> footer from that arbitrary data map.
> As I have understood it, if someone passes an event with an arbitrary data
> map, the email body will be set as in [3] (refer [4]); it won't filter out
> the content (refer [5]).
> Would this work if we provide *output mappings* for the event
> publisher, as *{{subject}{body}{footer}}*, to convert the event to the
> supported format?
>
> I have gone through the code in [6], where the event data is passed
> through EventStreamProducer, but there too it seems it's not possible
> to send an email in the required format (subject, body and footer).
>
> Really appreciate your comments/suggestions to understand the correct
> approach to be taken.
>
> [1]
> https://github.com/wso2-extensions/identity-event-handler-email/blob/master/components/event-handler-email/org.wso2.carbon.identity.event.handler.email/src/main/java/org.wso2.carbon.identity.event.handler.email/handler/EmailEventHandler.java#L164
> [2]
> https://drive.google.com/a/wso2.com/file/d/0Bz_EQkE2mOgBY00yYVpGelZJNms/view?usp=sharing
> [3]
> https://drive.google.com/a/wso2.com/file/d/0Bz_EQkE2mOgBNEMtYjJvSFB2emM/view?usp=sharing
> [4]
> https://github.com/wso2/carbon-analytics-common/blob/master/components/event-publisher/org.wso2.carbon.event.publisher.core/src/main/java/org/wso2/carbon/event/publisher/core/internal/type/text/TextOutputMapper.java#L139
> [5]
> https://github.com/wso2/carbon-analytics-common/blob/master/components/event-publisher/event-output-adapters/org.wso2.carbon.event.output.adapter.email/src/main/java/org/wso2/carbon/event/output/adapter/email/EmailEventAdapter.java#L233
> [6]
> https://github.com/wso2/carbon-event-processing/blob/master/components/event-simulator/org.wso2.carbon.event.simulator.core/src/main/java/org/wso2/carbon/event/simulator/core/internal/CarbonEventSimulator.java#L183
>
> Thanks and Regards
> --
> Indunil Upeksha Rathnayake
> Software Engineer | WSO2 Inc
> Emailindu...@wso2.com
> Mobile   0772182255
>



-- 

*S. Suhothayan*
Technical Lead & Team Lead of WSO2 Complex Event Processor
*WSO2 Inc. *http://wso2.com
* *
lean . enterprise . middleware


*cell: (+94) 779 756 757 | blog: http://suhothayan.blogspot.com/
twitter: http://twitter.com/suhothayan
 | linked-in:
http://lk.linkedin.com/in/suhothayan *


Re: [Architecture] [Dev] [IS] [Analytics] Improvement to use Siddhi streams to send notifications

2016-07-18 Thread Sriskandarajah Suhothayan
Since Option 2 is now possible, I think you can move to it. The
advantage of this approach is that you are not restricted to emails: you
can use SOAP, REST and other adapters to trigger actions based on
notifications, which will make IS much more powerful than just sending
emails.

I'm available from 2.30 pm at PG.

Regards
Suho

On Tue, Jul 19, 2016 at 11:17 AM, Johann Nallathamby 
wrote:

> Hi Suho,
>
> On Mon, Jul 18, 2016 at 11:44 PM, Sriskandarajah Suhothayan  > wrote:
>
>> Hi
>>
>> Based on the request of IS team we have recently added support for
>> loading template files from the registry.
>> I think with this feature we can do the mapping at Event Publisher side,
>> then IS can send only the core data for the notification. I think building
>> the whole message at IS is too much customization for emails.
>>
>
> As discussed previously both methods should work.
>
> Replacing placeholders with data from the arbitrary data map was in
> master at the time, and by now it should have been released, AFAIU. This is
> what Indunil was trying.
>
> And also you guys have added the support to pick registry templates based
> on some place holder values in the registry path. What we discussed was to
> send the 'locale' value as a stream attribute for our use case. If this
> approach works this is also fine for us.
>
> We tried with option1 just to get something working quickly.
>
>
>>
>> Please set up a meeting so we can discuss the possible ways to
>> implementing this.
>>
>> Regards
>> Suho
>>
>> On Mon, Jul 18, 2016 at 5:52 PM, Indunil Upeksha Rathnayake <
>> indu...@wso2.com> wrote:
>>
>>> Hi,
>>>
>>> We are trying to do some improvements to the notification sending module
>>> where we have integrated analytics common features in IS, in order to send
>>> several notifications (ex:Email, SMS).
>>>
>>> Current implementation is in [1], there only the email notification was
>>> focused where we are directly publishing to the EmailEventAdapter.
>>>
>>> Now we are trying to send notifications via publishing an event to the
>>> Event stream without directly calling an Output Adapter. The approach we
>>> have taken is as follows.
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>> *1) In server start up following will be created.i) A stream for each
>>> and every notification type including the necessary attributes.Ex:
>>> Email Notification - a Stream with the subject, body and footer as
>>> attributesii) Event Publishers, registered for each and every stream in the
>>> required Output event adapter type. Ex: Email Notification - event
>>> Publisher in email output event adapter type.2) Publishing an event to
>>> EventStreamService, which includes an arbitrary data map with the necessary
>>> data needed for the specific notification type.  Ex: Email Notification
>>> - Please find the code segments in [2] for having a better understanding.*
>>>
>>> There in IS side, we are selecting a specific email template and will be
>>> filled out the place holders before sending the subject, body and footer as
>>> arbitrary map attributes.
>>>
>>> But even-though we passed an arbitrary data map, when we are sending an
>>> email from the EmailEventAdapter, it won't filter out the subject, body or
>>> header from that arbitrary data map.
>>> As I have understood, if someone pass an event with an arbitrary data
>>> map, the email body will be set as [3] (Refer [4]), it won't filter out the
>>> content(Refer [5]).
>>> Is this has to be worked if we provide *output mappings* for event
>>> publisher as* {{subject}{body}{footer}}* to convert the event to the
>>> supported format?
>>>
>>> I have gone through the code [6], where the event data will be passed
>>> through EventStreamProducer, but there also seems like it's not
>>> possible to send an email in required format(subject, body and footer).
>>>
>>> Really appreciate your comments/suggestions to understand the correct
>>> approach to be taken.
>>>
>>> [1]
>>> https://github.com/wso2-extensions/identity-event-handler-email/blob/master/components/event-handler-email/org.wso2.carbon.identity.event.handler.email/src/main/java/org.wso2.carbon.identity.event.handler.email/handler/EmailEventHandler.java#L164
>>> [2]
>>> https://drive.google.com/a/wso2.com/file/d/0Bz_EQkE2mOgBY00yYV

Re: [Architecture] [CEP] [Siddhi] Siddhi Extension for Markov Models

2016-07-21 Thread Sriskandarajah Suhothayan
On Thu, Jul 21, 2016 at 12:32 PM, Malith Jayasinghe 
wrote:

>
> On Thu, Jul 21, 2016 at 11:46 AM, Ashen Weerathunga 
> wrote:
>
>> Hi all,
>>
>> I'm writing a Siddhi extension for Markov models. It can be used to
>> detect abnormal user behaviors in many real-world applications, such as
>> detecting abnormal API request patterns, detecting fraudulent bank
>> transactions, etc. There are different variations of Markov models;
>> this implementation will use a Markov chain [1], which is a
>> basic Markov model.
>> Markov chain consists of following key features [2].
>>
>>- Set of states
>>- Transition between states
>>- Future depends on the present
>>- Future does not depend on the past
>>
>> Transition probabilities between states will be updated in real time with
>> new input events, and abnormal state transition notifications will be sent
>> as per the user-defined probability threshold.
>>
>
> Could you explain a bit more about how you are detecting an abnormal state
> transition? For example, is it done based on the transition matrix of the
> Markov chain? If so, how/where do we define this matrix?
>

It should be done based on the matrix; for this version we don't need to
define the matrix, the system should learn it based on the inputs.

Regards
Suho
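The learn-from-inputs behavior can be sketched as follows: transition counts are updated online per state pair, and once the training batch has been seen, transitions whose estimated probability falls at or below the threshold are flagged. The class and its parameter names mirror the proposal below but are illustrative, not the extension's code:

```python
from collections import defaultdict

class MarkovChainDetector:
    """Sketch of the proposed markovChain stream processor: learns the
    transition matrix from events and flags improbable transitions."""

    def __init__(self, training_batch_size, abnormal_transition_probability):
        self.training_batch_size = training_batch_size
        self.threshold = abnormal_transition_probability
        self.counts = defaultdict(lambda: defaultdict(int))  # from -> to -> count
        self.totals = defaultdict(int)                        # from -> total
        self.last_state = {}                                  # user id -> state
        self.events_seen = 0

    def process(self, user_id, state):
        prev = self.last_state.get(user_id)
        self.last_state[user_id] = state
        self.events_seen += 1
        if prev is None:
            return None  # first event for this user: no transition yet
        # Probability estimated from counts observed so far.
        total = self.totals[prev]
        prob = self.counts[prev][state] / total if total else 0.0
        # Update the matrix with the new transition.
        self.counts[prev][state] += 1
        self.totals[prev] += 1
        trained = self.events_seen > self.training_batch_size
        notify = trained and prob <= self.threshold
        return (user_id, prev, state, prob, notify)
```

The `duration`-based expiry of a user's previous state is omitted here for brevity; a real implementation would also drop `last_state` entries older than the configured duration.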

>
>
>> This will be implemented as a stream processor and it will have following
>> input and output parameters.
>>
>> *Input parameters*
>>
>>- id (String, required): id of the user
>>- state (String, required): current state of the user
>>- duration (int | long | time, required): max duration that will be
>>considered as a continuation of the previous state of the particular user
>>- trainingBatchSize (int | long, required): number of events required to
>>train the model initially; notifications will not be given until the number
>>of input events reaches this limit
>>- abnormalTransitionProbability (double, required): transition probability
>>threshold that should be used to identify abnormal state transitions
>>
>> *Output Parameters*
>>
>>- id (String, "user id"): id of the user
>>- startState (String, "start state"): start state of the user
>>- endState (String, "end state"): end state of the user
>>- transitionProbability (double, "transition probability"): transition
>>probability from start state to end state
>>- notify (boolean, "notify"): whether it is an abnormal transition or not
>>
>> As an example, the following will return notify as true if a user has
>> made a state transition whose probability is less than or equal to 0.01:
>>
>>
>> from inputStream#markovModels:markovChain(id, state, 60 min, 500, 0.01)
>> select *
>> insert into outputStream;
>>
>>
>> Please let me know if you have any suggestions on this.
>>
>> [1]https://en.wikipedia.org/wiki/Markov_chain
>> [2]http://bit-player.org/wp-content/extras/markov/#/
>>
>> Thanks and Regards,
>> Ashen
>> --
>> *Ashen Weerathunga*
>> Software Engineer
>> WSO2 Inc.: http://wso2.com
>> lean.enterprise.middleware
>>
>> Email: as...@wso2.com
>> Mobile: +94 716042995 <94716042995>
>> LinkedIn: *http://lk.linkedin.com/in/ashenweerathunga
>> *
>>
>> ___
>> Architecture mailing list
>> Architecture@wso2.org
>> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>>
>>
>
>
> --
> Malith Jayasinghe
>
>
> WSO2, Inc. (http://wso2.com)
> Email   : mali...@wso2.com
> Mobile : 0770704040
> Lean . Enterprise . Middleware
>
> ___
> Architecture mailing list
> Architecture@wso2.org
> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>
>


-- 

*S. Suhothayan*
Associate Director / Architect & Team Lead of WSO2 Complex Event Processor
*WSO2 Inc. *http://wso2.com
* *
lean . enterprise . middleware


*cell: (+94) 779 756 757 | blog: http://suhothayan.blogspot.com/
twitter: http://twitter.com/suhothayan
 | linked-in:
http://lk.linkedin.com/in/suhothayan *


Re: [Architecture] [PET] Support UniqueBatchWindow(Time, Length) for Siddhi

2016-07-21 Thread Sriskandarajah Suhothayan
We really don't need to have both in the same implementation, but we can
have both in a common repo called UniqueBatchWindow.

Regards
Suho
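For intuition, the unique-batch semantics discussed below can be sketched over the length dimension: within each batch only the latest event per unique key is kept, and the window fires once `length` unique keys are held. This is an illustrative sketch, not the extension design:

```python
class UniqueLengthBatchWindow:
    """Sketch of unique-length-batch semantics: collect events until
    `length` unique keys are held (latest event per key wins), then
    emit the batch and reset."""

    def __init__(self, key_fn, length):
        self.key_fn = key_fn
        self.length = length
        self.current = {}  # key -> latest event

    def add(self, event):
        self.current[self.key_fn(event)] = event  # newer replaces older
        if len(self.current) >= self.length:
            batch = list(self.current.values())
            self.current = {}
            return batch   # window fires
        return None
```

A time-based variant would fire on a timer tick instead of a unique-key count, which is why the two can share a common repo while remaining separate extensions.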

On Thu, Jul 21, 2016 at 2:44 PM, Rajjaz Mohammed  wrote:

> Hi All,
>
> I have planned to develop UniqueTimeBatchWindow and UniqueLengthBatchWindow
> extensions for Siddhi. We already have TimeWindow, TimeBatchWindow and
> UniqueTimeWindow, and the same for length.
>
> Currently, I'm planning to implement UniqueBatchWindow, which supports
> both time and length.
>
> Please add your suggestions if you have.
>
> --
> Thank you
> Best Regards
>
> *Rajjaz HM*
> Associate Software Engineer
> Platform Extension Team
> WSO2 Inc. 
> lean | enterprise | middleware
> Mobile | +94752833834|+94777226874
> Email   | raj...@wso2.com
> LinkedIn  | Blogger
>  | WSO2 Profile
> 
> [image: https://wso2.com/signature] 
>



-- 

*S. Suhothayan*
Associate Director / Architect & Team Lead of WSO2 Complex Event Processor
*WSO2 Inc. *http://wso2.com
* *
lean . enterprise . middleware


*cell: (+94) 779 756 757 | blog: http://suhothayan.blogspot.com/
twitter: http://twitter.com/suhothayan
 | linked-in:
http://lk.linkedin.com/in/suhothayan *


Re: [Architecture] [PET] Support UniqueBatchWindow(Time, Length) for Siddhi

2016-07-21 Thread Sriskandarajah Suhothayan
Sorry, I misread the mail. +1

Regards
Suho

On Thu, Jul 21, 2016 at 3:02 PM, Rajjaz Mohammed  wrote:

> Hi Suho,
>
> We really don't need to have both in the same implementation. But we can
>> have both in common repo called UniqueBatchWindow.
>>
> Both are not in the same implementation.
>
>>
>> I have planned to develop UniqueTimeBatchWindow, UniqueLengthBatchWindow
>>> Extension for Siddhi. We already have TimeWindow, TimeBatchWindow and
>>> UniqueTimeWindow. Same to length also.
>>>
>  UniqueTimeBatchWindow and UniqueLengthBatchWindow are going to be two
> separate extensions.
>
> Currently, I'm planning to implement UniqueBatchWindow which is support
>>> for both time and length.
>>>
>>> Please add your suggestions if you have.
>>>
>>> --
>>> Thank you
>>> Best Regards
>>>
>>> *Rajjaz HM*
>>> Associate Software Engineer
>>> Platform Extension Team
>>> WSO2 Inc. 
>>> lean | enterprise | middleware
>>> Mobile | +94752833834|+94777226874
>>> Email   | raj...@wso2.com
>>> LinkedIn  | Blogger
>>>  | WSO2 Profile
>>> 
>>> [image: https://wso2.com/signature] 
>>>
>>
>>
>>
>> --
>>
>> *S. Suhothayan*
>> Associate Director / Architect & Team Lead of WSO2 Complex Event
>> Processor
>> *WSO2 Inc. *http://wso2.com
>> * *
>> lean . enterprise . middleware
>>
>>
>> *cell: (+94) 779 756 757 | blog: http://suhothayan.blogspot.com/
>> twitter: http://twitter.com/suhothayan | linked-in:
>> http://lk.linkedin.com/in/suhothayan*
>>
>
>
>
> --
> Thank you
> Best Regards
>
> *Rajjaz HM*
> Associate Software Engineer
> Platform Extension Team
> WSO2 Inc. 
> lean | enterprise | middleware
> Mobile | +94752833834|+94777226874
> Email   | raj...@wso2.com
> LinkedIn  | Blogger
>  | WSO2 Profile
> 
> [image: https://wso2.com/signature] 
>



-- 

*S. Suhothayan*
Associate Director / Architect & Team Lead of WSO2 Complex Event Processor
*WSO2 Inc. *http://wso2.com
* *
lean . enterprise . middleware


*cell: (+94) 779 756 757 | blog: http://suhothayan.blogspot.com/
twitter: http://twitter.com/suhothayan
 | linked-in:
http://lk.linkedin.com/in/suhothayan *


Re: [Architecture] [CEP] [Siddhi] Siddhi Extension for Markov Models

2016-07-21 Thread Sriskandarajah Suhothayan
On Thu, Jul 21, 2016 at 3:22 PM, Malith Jayasinghe  wrote:

> Hi Ashen,
> Thanks for the explanation.
>
> Note that there are cases where the state transition matrix is already
> known (and fixed). People derive these matrices by analyzing very large data
> sets which come from long-running experiments. In such cases we want the
> user to provide this transition matrix as an input.
>

This is a valid use case. Once we have the initial implementation, adding
this support will be easy, so we'll try to support it in the 2nd phase. One
option is to load the matrix from a CSV file; we also need to investigate
how we can plug RDBMS into this.
When we have these we can also support Lasantha's use case.

Regards
Suho
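The CSV option could be as simple as rows of `fromState,toState,probability` parsed into a nested map. The file layout here is an assumption for illustration, not a decided format:

```python
import csv
import io

def load_transition_matrix(csv_text):
    """Parse `fromState,toState,probability` rows into a nested dict,
    i.e. a predefined transition matrix supplied by the user."""
    matrix = {}
    for from_state, to_state, prob in csv.reader(io.StringIO(csv_text)):
        matrix.setdefault(from_state, {})[to_state] = float(prob)
    return matrix

sample = "login,browse,0.7\nlogin,logout,0.3\nbrowse,logout,1.0\n"
matrix = load_transition_matrix(sample)
```

A detector initialized from such a matrix would skip the training batch entirely and flag improbable transitions from the first event.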

>
>

> On Thu, Jul 21, 2016 at 2:59 PM, Ashen Weerathunga  wrote:
>
>> Hi Malith,
>>
>> You don't need to define the matrix. You need to have an input stream
>> which consists of user id and state. So based on the input data it will
>> create the transition matrix itself and give notifications according to
>> that. But it will need a considerable amount of data to build a matrix
>> with reasonable probabilities. That's why we have a parameter called
>> 'trainingBatchSize'.
>> Therefore the user can define how many events will be enough to build the
>> matrix. So that first batch(trainingBatchSize) of data will be used to
>> train the model. Only after that, it will start to send notifications. But
>> the probabilities of the transition matrix will keep updating with each and
>> every incoming event. That's how we are planning to create the transition
>> matrix.
>>
>> Then there is another input parameter called
>> 'abnormalTransitionProbability', which also needs to be defined by the
>> user. Let's say its value is defined as 0.01. If a new event comes
>> from a particular user id, it will check the transition probability from
>> the previous state to the current state in the transition matrix. If that
>> probability value is less than or equal to 0.01, it will be considered
>> abnormal behavior.
>>
>> Please share if you have any suggestions on this.
>>
>> Thanks,
>> Ashen
>>
>> On Thu, Jul 21, 2016 at 12:32 PM, Malith Jayasinghe 
>> wrote:
>>
>>>
>>> On Thu, Jul 21, 2016 at 11:46 AM, Ashen Weerathunga 
>>> wrote:
>>>
 Hi all,

 I'm writing a siddhi extension for Markov models. It can be used to
 detect abnormal user behaviors of many real world applications such as
 detecting abnormal API request patterns, detecting fraudulent bank
 transactions etc. There are different variations in Markov models.
 Therefore this implementation will be done using Markov chain[1] which is a
 basic Markov model.

 Markov chain consists of following key features [2].

- Set of states
- Transition between states
- Future depends on the present
- Future does not depend on the past

 Transition probabilities between states will be updated in real time
 with new input events, and abnormal state transition notifications will be
 sent as per the user-defined probability threshold.

>>>
>>> Could you explain a bit more about how you are detecting an abnormal
>>> state transition? For example, is it done based on the transition matrix of
>>> the Markov chain? If so, how/where do we define this matrix?
>>>
>>>
 This will be implemented as a stream processor, and it will have the
 following input and output parameters.

 *Input parameters*

 Parameter | Type | Required/Optional | Description
 id | String | required | id of the user
 state | String | required | current state of the user
 duration | int / long / time | required | max duration that will be
 considered a continuation of the previous state of the particular user
 trainingBatchSize | int / long | required | number of events required to
 train the model initially; notifications will not be given until the
 number of input events reaches this limit
 abnormalTransitionProbability | double | required | transition probability
 threshold used to identify abnormal state transitions

 *Output Parameters*

 Parameter | Type | Name | Description
 id | String | user id | id of the user
 startState | String | start state | start state of the user
 endState | String | end state | end state of the user
 transitionProbability | double | transition probability | transition
 probability from start state to end state
 notify | boolean | notify | whether it is an abnormal transition or not

 As an example, the following will return notify as true if a user has
 made a state transition with a probability less than or equal to 0.01:


 from inputStream#markovModels:markovChain(id, state, 60 min, 500, 0.01)
 select *
 insert into outputStream;


 Please let me know if you have any suggestions on this.

 [1]https://en.wikipedia.org/wiki/Markov_chain
 [2]http://bit-player.org/wp-content/extras/marko

Re: [Architecture] [ IoT Analytics] Building a sample analytics story: Smart Home Analytics

2016-07-21 Thread Sriskandarajah Suhothayan
Thanks for the great use cases; we'll consider them too. As the first
attempt we are trying to show what we collect in a meaningful way; then,
one by one, we'll add predictions and grouped analytics on top of this.

Regards
Suho

On Fri, Jul 22, 2016 at 10:09 AM, Malith Jayasinghe 
wrote:

> +1 for doing advanced analysis/prediction based on the data received
> from sensors. Each of these scenarios will require different types of
> analysis and prediction algorithms. It might be a good idea to figure out
> which ones are important and then work in that order.
>
> Note that it is important to have the ability to visualize this
> information in real time (e.g. energy consumption of devices), to
> summarize it (e.g. total energy consumption of all heating devices, all
> cooling devices) in real time, and to compare different sets of data
> (e.g. total energy consumption on two different days). These types of
> summarizations and comparisons will not require advanced analysis.
>
> On Wed, Jul 20, 2016 at 10:46 AM, Dilan Udara Ariyaratne 
> wrote:
>
>> Hi Geesara,
>>
>> I would rather argue that the level of information that we are going to
>> expose here is too low-level in terms of what an end-user would expect
>> from a so-called "Smart Home Analytics" system.
>>
>> The type of user stories that you have described here only carries a
>> common set of analytics requirements that any sort of analytics system
>> would consider good-to-have.
>> But this is not what the higher-level need or end goal of a "Smart Home
>> Analytics" system should be.
>>
>> Consider a few user stories as follows.
>>
>> [1] Depending on the continuous data being pumped by a set of motion
>> sensors that a user has placed in various locations in his house and
>>  based on the fact that there is currently no one in the house,
>> consider how cool it could be if we can detect any suspicious activity by
>> the exact location and
>>  alert the user, let's say by SMS and show any necessary information
>> and possible actions to take on a dashboard.
>>
>> [2] Depending on the continuous data being pumped by a set of energy
>> usage meters that a user has placed in all the electric items in his house
>> and
>>  based on the average energy usage patterns in history for the past
>> few months, consider how cool it could be if we can predict how his energy
>> consumption will be
>>  for the next few days in the month, show corresponding information
>> on a dashboard and if it's going to be high and costly, suggest any
>> possible usage plans to minimize it.
>>
>> [3] Depending on the continuous data being pumped by a set of humidity
>> sensors, wind speed meters placed in the garden and based on predicted
>> weather information
>>  for the day received from weather stations, consider how cool it
>> could be if we can predict how much water is needed to be applied for
>> plants in the garden today
>>  and alert the user of corresponding actions and show any necessary
>> information in the form of a dashboard gadget.
>>
>> This is where we should ideally aim when thinking of a sample analytics
>> story for a Smart Home and make sure that we provide the necessary
>> development infrastructure
>> in terms of our middleware stack for any third-party developer to build
>> such similar end-user experiences on top of our platform.
>>
>> We already have the key ingredients in terms of this rich middleware
>> stack that I am talking about.
>> [1] We do have our Connected Device Management Framework and, on top of
>> that, our IoTServer building up, for registering any type of smart device
>> to the platform and acquiring data.
>> [2] We do have our Data Analytics Server, Complex Event Processor and
>> Machine Learner building up together in order to perform any sort of
>> analytics related work.
>> [3] And finally, we do have our Dashboard Server on top of all these to
>> govern the visualization aspect of captured information.
>>
>> We just need to find the missing links and connect the dots.
>>
>> Cheers,
>> Dilan.
>>
>> *Dilan U. Ariyaratne*
>> Senior Software Engineer
>> WSO2 Inc. 
>> Mobile: +94766405580
>> lean . enterprise . middleware
>>
>>
>> On Wed, Jul 13, 2016 at 5:19 PM, Geesara Prathap 
>> wrote:
>>
>>> Hi All,
>>>
>>> Since we do have all the basic components required to build a
>>> fully fledged real-world analytics story with the IoT Analytics Framework,
>>> there was a suggestion that we need to build some analytics stories in the
>>> context of IoT Analytics. So along with that, this is one of the examples
>>> we are going to build.
>>>
>>> In this use case, we are mainly focusing on analyzing sensor data in a
>>> timely manner. So when sensors are publishing data it may be required to do
>>> analytics in real time and do some decision making on a particular event
>>> stream and so on. Then it goes on with batch processing 

Re: [Architecture] [Dev] [IS] [Analytics] Improvement to use Siddhi streams to send notifications

2016-07-22 Thread Sriskandarajah Suhothayan
On Fri, Jul 22, 2016 at 12:00 PM, Indunil Upeksha Rathnayake <
indu...@wso2.com> wrote:

> Hi,
>
> Please find the meeting notes in [1]. I have the following considerations
> regarding the improvements we have discussed.
>
> (1) Even though we have configured loading the email template from the
> EventPublisher (analytics side), the placeholder values have to be sent as
> meta data/correlation data/payload data/arbitrary data, since on the
> analytics side the user claim values cannot be retrieved from the user
> store.
> In order to send the placeholder values from the IS side, we anyway have to
> load the email template and retrieve the placeholders. So as I have
> understood, for email notifications, it's not needed to use the email
> template loading part in analytics, since it'll be a redundant task. (Refer
> [2])
>

Here we can set the claim values as arbitrary data, and the notification
specific details as the meta, correlation & payload data.
Then we can use the template loading only at the analytics side.


> (2) The email templates have to be changed as follows.
> i) if the value will be provided in an arbitrary data map, the
> placeholder should be with a prefix "arbitrary_"
> (ex:{{arbitrary_givenname}})
>
ii) if the value will be provided in a meta data map, the placeholder
> should be changed to {{...}} (ex:{{givenname}})
>
> No, we should not use "arbitrary_" in any case; it's internal information,
and the names should not have "arbitrary_" even if the value is in the
arbitrary data map or otherwise.

(3) Only Text OutputMapping Content can be filled from a value in an
> arbitrary data map using the prefix "arbitrary_". It's not possible to use a
> value of an arbitrary data map in Dynamic adapter properties; only a
> value from a meta data/correlation data/payload data map can be used. I
> think that needs to be extended to allow even an arbitrary value as a
> dynamic adapter property. (Refer [3])
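As an illustration of the prefix-based placeholder resolution being discussed, a toy resolver might look like this. This is a hedged sketch of the convention only, not the actual TextOutputMapper logic; the function name and map handling are assumptions:

```python
import re

def fill_template(template, meta=None, correlation=None, payload=None,
                  arbitrary=None):
    """Resolve {{name}} placeholders: names prefixed 'arbitrary_' are looked
    up in the arbitrary data map; others in the merged meta/correlation/
    payload maps. Unresolved placeholders are left untouched."""
    maps = {**(meta or {}), **(correlation or {}), **(payload or {})}
    arbitrary = arbitrary or {}

    def resolve(match):
        name = match.group(1)
        if name.startswith("arbitrary_"):
            return str(arbitrary.get(name[len("arbitrary_"):], match.group(0)))
        return str(maps.get(name, match.group(0)))

    return re.sub(r"\{\{(\w+)\}\}", resolve, template)
```

For example, `fill_template("Hi {{arbitrary_givenname}}", arbitrary={"givenname": "Ann"})` resolves the arbitrary-prefixed placeholder from the arbitrary data map.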
>

@Gobi can you please fix this if that's the case.


>
> (4) The default stream definitions and publisher definitions have to be
> deployed on the super tenant as well as on other tenants. And when a new
> tenant is added, those streams and publishers have to be deployed for that
> particular tenant as well.
>
> We can have a tenant creation handler to do this copying at tenant
creation time. WDYT?

Really appreciate your ideas/suggestions regarding the above-mentioned
> concerns.
>
> [1] Invitation: [Architecture] [Discussion] Improvement to use Siddhi
> str... @ Wed Jul 20, 2016 4:30pm - 5:30pm (IST) (indu...@wso2.com)
>
> [2]
> https://github.com/wso2/carbon-analytics-common/blob/master/components/event-publisher/org.wso2.carbon.event.publisher.core/src/main/java/org/wso2/carbon/event/publisher/core/internal/type/text/TextOutputMapper.java#L108
>
> [3]
> https://github.com/wso2/carbon-analytics-common/blob/master/components/event-publisher/org.wso2.carbon.event.publisher.core/src/main/java/org/wso2/carbon/event/publisher/core/internal/EventPublisher.java#L311
>
> Thanks and Regards
> --
> Indunil Upeksha Rathnayake
> Software Engineer | WSO2 Inc
> Emailindu...@wso2.com
> Mobile   0772182255
>
>
>


-- 

*S. Suhothayan*
Associate Director / Architect & Team Lead of WSO2 Complex Event Processor
*WSO2 Inc. *http://wso2.com
* *
lean . enterprise . middleware


*cell: (+94) 779 756 757 | blog: http://suhothayan.blogspot.com/
twitter: http://twitter.com/suhothayan
 | linked-in:
http://lk.linkedin.com/in/suhothayan *
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] [Dev] [IS] [Analytics] Improvement to use Siddhi streams to send notifications

2016-07-22 Thread Sriskandarajah Suhothayan
On Fri, Jul 22, 2016 at 3:00 PM, Johann Nallathamby  wrote:

>
>
> On Fri, Jul 22, 2016 at 8:33 AM, Indunil Upeksha Rathnayake <
> indu...@wso2.com> wrote:
>
>> Hi,
>>
>> On Fri, Jul 22, 2016 at 12:28 PM, Sriskandarajah Suhothayan <
>> s...@wso2.com> wrote:
>>
>>>
>>>
>>> On Fri, Jul 22, 2016 at 12:00 PM, Indunil Upeksha Rathnayake <
>>> indu...@wso2.com> wrote:
>>>
>>>> Hi,
>>>>
>>>> Please find the meeting notes in [1].  I have following considerations
>>>> regarding the improvements we have discussed.
>>>>
>>>> (1) Even though we have configured to load the email template from
>>>> EventPublisher(analytics side), the placeholder values has to be sent as
>>>> meta data/correlation data/payload data/arbitrary data, since in analytics
>>>> side, the user claim values are not getting from the user store.
>>>> In order to send the placeholder values from IS side, anyway we have to
>>>> load the email template and retrieve the placeholders. So as I have
>>>> understood, for email notifications, it's not needed to use the email
>>>> template loading part in analytics, since it'll be a redundant task. (Refer
>>>> [2])
>>>>
>>>
>>> Here we can set the claim values as arbitrary data, and the notification
>>> specific details as the meta, correlation & payload data.
>>> Then we can use the template loading only at the analytics side.
>>>
>> In this case, from the IS side, instead of passing only the user claims
>> needed for a particular email template (i.e. user claim values for the
>> placeholders in the email template), we have to pass all the user claims
>> as arbitrary data values. In that case there's no need for loading the
>> template from the registry on the IS side, so that on the analytics side,
>> all the values needed for filling out the template will be there. Will
>> check on that.
>>
>
> I don't think it will be a good solution. There can be sensitive
> information in the claims which we can't send. So for this release it's OK
> if we read the template on both sides - security is more important than
> performance; or read it only on the IS side - but additionally send all
> those claims as arbitrary data as well, so that if someone wants, they can
> use them on the CEP side via their output adaptors.
>

I think then we can have a common configuration on the IS side to specify
which claims should be added to notifications.

Regards
Suho


>
>
>>>
>>>> (2) The email templates has to be changed as follows.
>>>> i) if the value will be provided in an arbitrary data map, the
>>>> placeholder should be with a prefix "arbitrary_"
>>>> (ex:{{arbitrary_givenname}})
>>>>
>>> ii) if the value will be provided in an meta data map, the
>>>> placeholder should be changed as {{...}} (ex:{{givenname}})
>>>>
>>>> No we should not use "arbitrary_" for any cases, its internal
>>> information and the names should not have "arbitrary_" even if its in
>>> arbitrary data map or otherwise.
>>>
>>> (3) Only Text OutputMapping Content can be filled from a value in an
>>>> arbitrary data map using prefix "arbitrary_" .  It's not possible to use a
>>>> value of an arbitrary data map, in a Dynamic adapter properties, only a
>>>> value from a meta data/correlation data/payload data map can be used. I
>>>> think that need to be extended to use even an arbitrary value as a dynamic
>>>> adapter property.(Refer [3])
>>>>
>>>
>>> @Gobi can you please fix this if that's the case.
>>>
>>>
>>>>
>>>> (4) The default stream definitions and publisher definitions has to be
>>>> deployed on super tenant as well as other tenants as well. And when a new
>>>> tenant is added, those streams and publishers has to be deployed for that
>>>> particular tenant as well.
>>>>
>>>> We can have a tenant creation handler to do this copying during that
>>> tenant creation time. WDYT?
>>>
>>> Really appreciate your ideas/suggestions regarding the above mentioned
>>>> concerns.
>>>>
>>>> [1] Invitation: [Architecture] [Discussion] Improvement to use Siddhi
>>>> str... @ Wed Jul 20, 2016 4:30pm - 5:30pm (IST) (indu...@wso2.com)
>>>>
>>>> [2]
>>>> https:/

Re: [Architecture] [CEP] Improvement to External Time Batch Window to Allow Specifying Timeout with First Event's Time as Start Time

2016-07-25 Thread Sriskandarajah Suhothayan
Hi Charini,

Is this implemented? If so can you add this to the docs if it's not done so
far?

Regards
Suho


On Mon, Jul 25, 2016 at 12:30 PM, Charini Nanayakkara 
wrote:

> Hi Grainier,
>
> Answers to your queries are as follows.
>
> 1. What will happen to the events that arrive before, 0th-millisecond of
>> an hour? And is this 0th-millisecond taken relative to current-time or the
>> external-time?
>>
>
>  The third parameter is taken relative to external time, whereas events
> arriving before the 0th millisecond of an hour would be processed at the
> next hour. For example, assume that the external time of an event is
> 10.45. If the third parameter is 0, then the relevant event would be
> processed at 11.
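The boundary arithmetic in that example can be sketched as follows. This is a simplified illustration assuming batch boundaries sit at multiples of the window length from the given start offset; it is not the actual Siddhi implementation:

```python
def batch_end(external_time_ms, window_ms, start_offset_ms=0):
    """Return when the event's batch would be emitted: the next batch
    boundary, where boundaries lie at start_offset_ms plus multiples
    of window_ms."""
    elapsed = (external_time_ms - start_offset_ms) % window_ms
    return external_time_ms + (window_ms - elapsed)

HOUR_MS = 60 * 60 * 1000
# An event whose external time is 10:45, with hourly batches anchored at
# offset 0, lands in the batch emitted at 11:00.
event_at_1045 = 10 * HOUR_MS + 45 * 60 * 1000
```

Under this model, `batch_end(event_at_1045, HOUR_MS)` gives the 11:00 boundary, matching the example above.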
>
>>
>>> 2. What do you mean by if the value is not provided? Does this
>> introduce an overloaded method to externalTimeBatch?
>>
The 3rd and 4th parameters of external time batch are optional. Therefore,
> if a time batch is provided as #window.externalTimeBatch(external_time, 2
> min), the "external_time" of the 1st event arriving at the relevant stream
> would be taken as the start time.
>
>>
>> from LoginEvents#window.externalTimeBatch(timestamp, 1 sec, 0, 3 sec)
>>
>> 3. With above impl, if an event which belongs to the current batch,
>> arrives after the given timeout, will it be processed as a new batch?
>>
 In such a scenario, we would obtain two outputs for the same batch. One
> output would be obtained when the timeout has elapsed. Another output would
> be obtained if an event of the same batch arrives after the timeout has
> expired. However, in the second instance, all the events of the relevant
> batch would be considered, not just the new events.
>
> Thanks,
> Charini
>
>>
>> Regards,
>> Grainier.
>>
>> On Tue, Jul 12, 2016 at 8:15 AM, Charini Nanayakkara 
>> wrote:
>>
>>> Hi Imesh,
>>>
>>> Specifying a timeout is already allowed in Siddhi. An example is as
>>> follows.
>>>
>>> from LoginEvents#window.externalTimeBatch(timestamp, 1 sec, 0, 3 sec)
>>> select timestamp, ip, count() as total
>>> insert all events into uniqueIps
>>>
>>> In this instance, events would be batched based on the "timestamp" value.
>>> A batch would comprise events arriving within 1 sec (as per the
>>> "timestamp"). The third parameter, 0, specifies that batching must start
>>> from the 0th millisecond of an hour. If this value was not provided, the
>>> default start time would have been the timestamp value of the 1st event.
>>> The 4th parameter indicates the timeout. When the 3rd parameter is not
>>> provided, output for a 1 sec batch is obtained only once that entire batch
>>> is completed (i.e. Siddhi learns that data worth 1 sec has arrived only
>>> when it gets an event belonging to the next batch). However, this timeout
>>> allows us to obtain an output in 3 seconds (based on UTC time), even if a
>>> 1 sec batch is not completed.
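The batch-plus-timeout semantics described here can be modeled in miniature as follows. This is a hypothetical sketch; the real window also re-emits the whole batch if a late event of the same batch arrives after the timeout, which this toy omits:

```python
class ExternalTimeBatchToy:
    """Miniature model: events are grouped by external timestamp into
    window_ms batches; a batch is emitted when an event of the next batch
    arrives, or when the caller signals that the timeout elapsed."""

    def __init__(self, window_ms, timeout_ms):
        self.window_ms = window_ms
        self.timeout_ms = timeout_ms  # flush deadline, driven externally here
        self.batch = []
        self.batch_start = None  # defaults to the 1st event's external time

    def on_event(self, external_ts, value):
        emitted = None
        if self.batch_start is None:
            self.batch_start = external_ts
        elif external_ts - self.batch_start >= self.window_ms:
            # event belongs to the next batch: flush the completed one
            emitted, self.batch = self.batch, []
            self.batch_start = external_ts
        self.batch.append(value)
        return emitted

    def on_timeout(self):
        # timeout elapsed without the batch completing: emit what we have
        emitted, self.batch = self.batch, []
        return emitted
```

With a 1 sec window, events at external times 0 and 500 stay buffered; an event at 1200 flushes them as one batch, and a later timeout flushes the remainder.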
>>>
>>> The issue with this implementation is that it disallows using the
>>> timeout while using the 1st event's timestamp as start time. The suggested
>>> solution allows us to use either a variable or a constant as the 3rd
>>> parameter. Thus, subsequent to the implementation, we should be able to
>>> provide the "timestamp" attribute as the 3rd parameter, from which Siddhi
>>> would derive the 1st event's timestamp value to be used as start time (from
>>> LoginEvents#window.externalTimeBatch(timestamp, 1 sec, timestamp, 3 sec)).
>>> However, the capability of specifying a constant value (as in the given
>>> example) would also be retained.
>>>
>>> Thank you
>>> Charini
>>>
>>> On Tue, Jul 12, 2016 at 7:24 AM, Imesh Gunaratne  wrote:
>>>
 Hi Charini,

 A great thought!

 Would it be possible for you to explain this requirement with an
 example written in Siddhi? Specifically how to generate a custom event on
 the timeout.

 Thanks


 On Monday, July 11, 2016, Charini Nanayakkara 
 wrote:

> Hi All,
>
> I have planned to improve the current implementation of external time
> batch window, to allow accepting first event's time as start time, when
> specifying a timeout.
>
> In the current implementation, the 3rd parameter allows the user to
> provide a user-defined start time (whereas the default is to use the first
> event's time as start time). This value is required to be a constant. The
> 4th parameter is reserved for specifying a timeout, which is valuable in an
> instance where output needs to be given if events don't arrive for some
> time. However, this implementation disallows a user to use the default
> start time (first event's start time) and timeout together.
>
> Therefore, I intend to change the implementation such that the user can
> provide either a variable or a constant as the 3rd parameter. This enables the
> external time field to be given as 3rd parameter, from which Siddhi can
> retrieve 1st event's time to be used as start time. Alternatively, a
> constant value 

Re: [Architecture] [CEP] Improvement to External Time Batch Window to Allow Specifying Timeout with First Event's Time as Start Time

2016-07-25 Thread Sriskandarajah Suhothayan
Great




On Tue, Jul 26, 2016 at 7:58 AM, Charini Nanayakkara 
wrote:

> Hi Suho,
>
> I have already implemented and added this to docs (
> https://docs.wso2.com/display/CEP420/Inbuilt+Windows#InbuiltWindows-externalTimeBatch).
> Yesterday I learnt of a 5th parameter added to external time batch by you,
> which I assume is being handled by Ramindu.
>
> Regards,
> Charini
>
> On Tue, Jul 26, 2016 at 5:36 AM, Sriskandarajah Suhothayan 
> wrote:
>
>> Hi Charini,
>>
>> Is this implemented? If so can you add this to the docs if it's not done
>> so far?
>>
>> Regards
>> Suho
>>
>>
>> On Mon, Jul 25, 2016 at 12:30 PM, Charini Nanayakkara 
>> wrote:
>>
>>> Hi Grainier,
>>>
>>> Answers to your queries are as follows.
>>>
>>> 1. What will happen to the events that arrive before, 0th-millisecond of
>>>> an hour? And is this 0th-millisecond taken relative to current-time or the
>>>> external-time?
>>>>
>>>
>>>  The third parameter is taken relative to external time, whereas the
>>> events arriving before the 0th millisecond of an hour, would be processed
>>> at the next hour. For example, assume that the external time of an event is
>>> 10.45. If the third parameter is 0, then the relevant event would be
>>> processed at 11.
>>>
>>>>
>>>>> 2. What do you mean by if the value is not provided? Does this
>>>> introduces an overload method to externalTimeBatch
>>>>
>>> The 3rd and 4th parameters of external time batch are optional.
>>> Therefore if a time batch is provided as
>>> #window.externalTimeBatch(external_time, 2 min), the  "external_time" of
>>> the 1st event arriving to the relevant stream would be taken as the start
>>> time.
>>>
>>>>
>>>> from LoginEvents#window.externalTimeBatch(timestamp, 1 sec, 0, 3 sec)
>>>>
>>>> 3. With above impl, if an event which belongs to the current batch,
>>>> arrives after the given timeout, will it be processed as a new batch?
>>>>
>>>  In such a scenario, we would obtain two outputs for the same batch. One
>>> output would be obtained when timeout is elapsed. Another output would be
>>> obtained if an event of the same batch arrives after the timeout has
>>> expired. However in the second instance, all the events of the relevant
>>> batch would be considered. Not just the new events.
>>>
>>> Thanks,
>>> Charini
>>>
>>>>
>>>> Regards,
>>>> Grainier.
>>>>
>>>> On Tue, Jul 12, 2016 at 8:15 AM, Charini Nanayakkara >>> > wrote:
>>>>
>>>>> Hi Imesh,
>>>>>
>>>>> Specifying a timeout is already allowed in Siddhi. An example is as
>>>>> follows.
>>>>>
>>>>> from LoginEvents#window.externalTimeBatch(timestamp, 1 sec, 0, 3 sec)
>>>>> select timestamp, ip, count() as total
>>>>> insert all events into uniqueIps
>>>>>
>>>>> In this instance, events would be batched based on "timestamp" value.
>>>>> A batch would comprise of events arriving within 1 sec (as per the
>>>>> "timestamp"). The third parameter 0 specifies that batching must start 
>>>>> from
>>>>> the 0th millisecond of an hour. If this value was not provided, the 
>>>>> default
>>>>> start time would have been the timestamp value of the 1st event. The 4th
>>>>> parameter indicates the timeout. When 3rd parameter is not provided, 
>>>>> output
>>>>> for a 1 sec batch is obtained only if that entire batch is completed (i.e.
>>>>> Siddhi learns that data worth of 1 sec has arrived only when it gets an
>>>>> event belonging to next batch). However, this timeout allows us to obtain
>>>>> an output in 3 seconds (based on UTC time) , even if a 1 sec batch is not
>>>>> completed.
>>>>>
>>>>> The issue with this implementation is, it disallows us to use the
>>>>> timeout while using 1st event's timestamp as start time. The suggested
>>>>> solution allows us to use either a variable or constant as 3rd parameter.
>>>>> Thus, subsequent to the implementation, we should be able to provide
>>>>> "timestamp" attribute as the 3rd parameter, from which Siddhi would derive
>&g

Re: [Architecture] [Dev] [IS] [Analytics] Improvement to use Siddhi streams to send notifications

2016-08-01 Thread Sriskandarajah Suhothayan
Hi Indunil,

Any update on this? Was the provided solution working?

We released CEP 4.2-RC1. If we need new features/improvements for this
effort, we can incorporate them in the next component release.

Regards
Suho

On Fri, Jul 22, 2016 at 3:10 PM, Sriskandarajah Suhothayan 
wrote:

>
>
> On Fri, Jul 22, 2016 at 3:00 PM, Johann Nallathamby 
> wrote:
>
>>
>>
>> On Fri, Jul 22, 2016 at 8:33 AM, Indunil Upeksha Rathnayake <
>> indu...@wso2.com> wrote:
>>
>>> Hi,
>>>
>>> On Fri, Jul 22, 2016 at 12:28 PM, Sriskandarajah Suhothayan <
>>> s...@wso2.com> wrote:
>>>
>>>>
>>>>
>>>> On Fri, Jul 22, 2016 at 12:00 PM, Indunil Upeksha Rathnayake <
>>>> indu...@wso2.com> wrote:
>>>>
>>>>> Hi,
>>>>>
>>>>> Please find the meeting notes in [1].  I have following considerations
>>>>> regarding the improvements we have discussed.
>>>>>
>>>>> (1) Even though we have configured to load the email template from
>>>>> EventPublisher(analytics side), the placeholder values has to be sent as
>>>>> meta data/correlation data/payload data/arbitrary data, since in analytics
>>>>> side, the user claim values are not getting from the user store.
>>>>> In order to send the placeholder values from IS side, anyway we have
>>>>> to load the email template and retrieve the placeholders. So as I have
>>>>> understood, for email notifications, it's not needed to use the email
>>>>> template loading part in analytics, since it'll be a redundant task. 
>>>>> (Refer
>>>>> [2])
>>>>>
>>>>
>>>> Here we can set the claim values as arbitrary data, and the
>>>> notification specific details as the meta, correlation & payload data.
>>>> Then we can use the template loading only at the analytics side.
>>>>
>>> In this case, from IS side, without parsing only the user claims needed
>>> for a particular email template(i.e.user claim values for the placeholders
>>> in email template), we have to pass all the user claims as arbitrary data
>>> values. In that case there's no need for loading the template from the
>>> registry in IS side. So that in analytics side, all the values needed for
>>> filling out the template will be there. Will check on that.
>>>
>>
>> I don't think it will be a good solution. There can be sensitive
>> information in the claims which we can't send. So for this release it's OK
>> if we read the template in both sides - security is more important than
>> performance; or read it only in IS side - but additionally send all those
>> claims as arbitrary data as well, so if some one wants can use them in CEP
>> side by their output adaptors.
>>
>
> I think then we can have a common configuration in IS side to specify what
> are the claims that should be added to notifications.
>
> Regards
> Suho
>
>
>>
>>
>>>>
>>>>> (2) The email templates has to be changed as follows.
>>>>> i) if the value will be provided in an arbitrary data map, the
>>>>> placeholder should be with a prefix "arbitrary_"
>>>>> (ex:{{arbitrary_givenname}})
>>>>>
>>>> ii) if the value will be provided in an meta data map, the
>>>>> placeholder should be changed as {{...}} (ex:{{givenname}})
>>>>>
>>>>> No we should not use "arbitrary_" for any cases, its internal
>>>> information and the names should not have "arbitrary_" even if its in
>>>> arbitrary data map or otherwise.
>>>>
>>>> (3) Only Text OutputMapping Content can be filled from a value in an
>>>>> arbitrary data map using prefix "arbitrary_" .  It's not possible to use a
>>>>> value of an arbitrary data map, in a Dynamic adapter properties, only a
>>>>> value from a meta data/correlation data/payload data map can be used. I
>>>>> think that need to be extended to use even an arbitrary value as a dynamic
>>>>> adapter property.(Refer [3])
>>>>>
>>>>
>>>> @Gobi can you please fix this if that's the case.
>>>>
>>>>
>>>>>
>>>>> (4) The default stream definitions and publisher definitions has to be
>>>>> deployed on super tenant as well as other tenants

Re: [Architecture] [Arch] Adding CEP and ML samples to DAS distribution in a consistent way

2016-08-03 Thread Sriskandarajah Suhothayan
DAS team, how about doing it for this release?

Regards
Suho

On Wed, Aug 3, 2016 at 6:31 PM, Ramith Jayasinghe  wrote:

> I think we need to ship samples with the product. Otherwise, the first
> 5-minute experience of users will be negatively affected.
>
>
> ___
> Architecture mailing list
> Architecture@wso2.org
> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>
>




Re: [Architecture] [Arch] Adding CEP and ML samples to DAS distribution in a consistent way

2016-08-04 Thread Sriskandarajah Suhothayan
Hi Niranda,

Are you guys adding all CEP samples too?

Regards
Suho

On Thu, Aug 4, 2016 at 7:33 PM, Sinthuja Ragendran 
wrote:

> Hi,
>
> We also need to find a consistent way to maintain the integration tests as
> well. CEP and ML features are being used in DAS, and there are no
> integration tests for those components getting executed in the DAS product
> build. Similarly, there are many UI tests in the Dashboard Server as
> well, but those are not executed in the products which use them. As
> these are the core functionalities of DAS, IMHO we need to execute the
> test cases for each of these components during the product-das build time.
>
> Thanks,
> Sinthuja.
>
> On Thu, Aug 4, 2016 at 3:17 PM, Niranda Perera  wrote:
>
>> Hi Suho,
>>
>> As per the immediate DAS 310 release, we will continue to keep a local
>> copy of the samples. I have created a JIRA here [1] to add the suggestion
>> provided by Isuru.
>>
>> Best
>>
>> [1] https://wso2.org/jira/browse/DAS-481
>>
>> On Wed, Aug 3, 2016 at 10:02 PM, Sriskandarajah Suhothayan > > wrote:
>>
>>> DAS team how about doing it for this release ?
>>>
>>> Regards
>>> Suho
>>>
>>> On Wed, Aug 3, 2016 at 6:31 PM, Ramith Jayasinghe 
>>> wrote:
>>>
>>>> I think we need to ship samples with product. otherwise, The first
>>>> 5-minite experience of users will be negatively affected.
>>>>
>>>>
>>>> ___
>>>> Architecture mailing list
>>>> Architecture@wso2.org
>>>> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>>>>
>>>>
>>>
>>>
>>> --
>>>
>>> *S. Suhothayan*
>>> Associate Director / Architect & Team Lead of WSO2 Complex Event
>>> Processor
>>> *WSO2 Inc. *http://wso2.com
>>> * <http://wso2.com/>*
>>> lean . enterprise . middleware
>>>
>>>
>>> *cell: (+94) 779 756 757 | blog:
>>> http://suhothayan.blogspot.com/ <http://suhothayan.blogspot.com/>twitter:
>>> http://twitter.com/suhothayan <http://twitter.com/suhothayan> | linked-in:
>>> http://lk.linkedin.com/in/suhothayan <http://lk.linkedin.com/in/suhothayan>*
>>>
>>> ___
>>> Architecture mailing list
>>> Architecture@wso2.org
>>> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>>>
>>>
>>
>>
>> --
>> *Niranda Perera*
>> Software Engineer, WSO2 Inc.
>> Mobile: +94-71-554-8430
>> Twitter: @n1r44 <https://twitter.com/N1R44>
>> https://pythagoreanscript.wordpress.com/
>>
>> ___
>> Architecture mailing list
>> Architecture@wso2.org
>> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>>
>>
>
>
> --
> *Sinthuja Rajendran*
> Technical Lead
> WSO2, Inc.:http://wso2.com
>
> Blog: http://sinthu-rajan.blogspot.com/
> Mobile: +94774273955
>
>
>
> ___
> Architecture mailing list
> Architecture@wso2.org
> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>
>




Re: [Architecture] [Arch] Adding CEP and ML samples to DAS distribution in a consistent way

2016-08-05 Thread Sriskandarajah Suhothayan
Dilini tried adding the CEP samples to DAS and it worked as expected; we'll
send you a pull request adding all the CEP samples to the DAS repo.

Regards
Suho

On Fri, Aug 5, 2016 at 12:34 PM, Gihan Anuruddha  wrote:

> We discussed this as well. Our plan is to inject the CEP integration tests
> into DAS at product build time. We are not maintaining a separate copy;
> instead, we use the same tests that CEP uses.
>
> On Thu, Aug 4, 2016 at 7:33 PM, Sinthuja Ragendran 
> wrote:
>
>> Hi,
>>
>> We also need to find a consistent way to maintain the integration tests.
>> The CEP and ML features are used in DAS, but no integration tests for
>> those components are executed in the DAS product build. Similarly, there
>> are many UI tests in the dashboard server, but those are not executed in
>> the products that use it. As these are core functionalities of DAS, IMHO
>> we need to execute the test cases for each of these components during the
>> product-das build.
>>
>> Thanks,
>> Sinthuja.
>>
>> On Thu, Aug 4, 2016 at 3:17 PM, Niranda Perera  wrote:
>>
>>> Hi Suho,
>>>
>>> As per the immediate DAS 310 release, we will continue to keep a local
>>> copy of the samples. I have created a JIRA here [1] to add the suggestion
>>> provided by Isuru.
>>>
>>> Best
>>>
>>> [1] https://wso2.org/jira/browse/DAS-481
>>>
>>> On Wed, Aug 3, 2016 at 10:02 PM, Sriskandarajah Suhothayan <
>>> s...@wso2.com> wrote:
>>>
>>>> DAS team, how about doing it for this release?
>>>>
>>>> Regards
>>>> Suho
>>>>
>>>> On Wed, Aug 3, 2016 at 6:31 PM, Ramith Jayasinghe 
>>>> wrote:
>>>>
>>>>> I think we need to ship samples with the product; otherwise, the first
>>>>> 5-minute experience of users will be negatively affected.
>>>>>
>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>
>>>
>>>
>>>
>>>
>>
>>
>>
>>
>>
>>
>>
>
>
> --
> W.G. Gihan Anuruddha
> Senior Software Engineer | WSO2, Inc.
> M: +94772272595
>
>
>




Re: [Architecture] Siddhi Visual Editor (Updated)

2016-08-07 Thread Sriskandarajah Suhothayan
Yes, it will be part of analysis tooling. It should be used within
notebooks for realtime analytics.

We have not decided where the notebook will be: whether it will be a
separate tooling app or use the cloud tooling framework that we are
evaluating.

Please share your suggestions.

Regards
Suho

On Sunday, August 7, 2016, Sanjiva Weerawarana  wrote:

> Will this be a plugin for the new tooling platform?
>
> On Aug 4, 2016 11:33 AM, "Nayantara Jeyaraj"  > wrote:
>
>> Hi all,
>> I'm currently working on developing the Siddhi visual editor, and it has
>> been modified as required from the previous post. I've used the jsPlumb
>> library and interact.js to implement the functionalities specified. I've
>> attached the new specs and functionality herewith.
>> Regards
>> Tara
>>
>>
>>



Re: [Architecture] Siddhi Visual Editor (Updated)

2016-08-07 Thread Sriskandarajah Suhothayan
It will be part of analysis tooling, and it should be used within notebooks
for realtime analytics.
>
>
> We have not decided where the notebooks will be. Whether it will be a
> separate tooling app or to use the cloud tooling framework that we are
> evaluating.
>
> Please give your suggestions.
>
> Regards
> Suho
>
> On Sunday, August 7, 2016, Sanjiva Weerawarana  > wrote:
>
>> Will this be a plugin for the new tooling platform?
>>
>> On Aug 4, 2016 11:33 AM, "Nayantara Jeyaraj"  wrote:
>>
>>> Hi all,
>>> I'm currently working on developing the Siddhi visual editor and it
>>> has been modified as required from the previous post. I've used the
>>> jsPlumb library and the interact.js to implement the functionalities
>>> specified. I've attached the new specs and functionality herewith.
>>> Regards
>>> Tara
>>>
>>>
>>>
>
>
>



Re: [Architecture] Siddhi Visual Editor (Updated)

2016-08-08 Thread Sriskandarajah Suhothayan
The Siddhi visual editor is pure JS, so it can be plugged in anywhere; I
don't think it will be a problem.
A couple of interns are currently evaluating notebooks; with their results,
I'll arrange a meeting soon to discuss notebooks and tooling for analytics.

Regards
Suho

On Mon, Aug 8, 2016 at 10:55 PM, Sanjiva Weerawarana 
wrote:

> I'm not sure the notebook concept is what's appropriate for realtime
> tooling. Will think about it more.
>
> On Mon, Aug 8, 2016 at 12:28 AM, Sriskandarajah Suhothayan 
> wrote:
>
>> Yes, it will be part of analysis tooling. It should be used within
>> notebooks for realtime analytics.
>>
>> We have not decided where the notebook will be. Whether it will be a
>> separate tooling app or to use the cloud tooling framework that we are
>> evaluating.
>>
>> Please share your suggestions.
>>
>> Regards
>> Suho
>>
>>
>> On Sunday, August 7, 2016, Sanjiva Weerawarana  wrote:
>>
>>> Will this be a plugin for the new tooling platform?
>>>
>>> On Aug 4, 2016 11:33 AM, "Nayantara Jeyaraj"  wrote:
>>>
>>>> Hi all,
>>>> I'm currently working on developing the Siddhi visual editor and it
>>>> has been modified as required from the previous post. I've used the
>>>> jsPlumb library and the interact.js to implement the functionalities
>>>> specified. I've attached the new specs and functionality herewith.
>>>> Regards
>>>> Tara
>>>>
>>>>
>>>>
>>
>>
>>
>
>
> --
> Sanjiva Weerawarana, Ph.D.
> Founder, CEO & Chief Architect; WSO2, Inc.;  http://wso2.com/
> email: sanj...@wso2.com; office: (+1 650 745 4499 | +94  11 214 5345)
> x5700; cell: +94 77 787 6880 | +1 408 466 5099; voip: +1 650 265 8311
> blog: http://sanjiva.weerawarana.org/; twitter: @sanjiva
> Lean . Enterprise . Middleware
>





Re: [Architecture] [Dev] [IS] [Analytics] Improvement to use Siddhi streams to send notifications

2016-08-09 Thread Sriskandarajah Suhothayan
Based on the chat with Johann, he suggested supporting claims at the event
publisher level.
@Indunil, can you gather the full requirements and update the thread?

Regards
Suho

On Mon, Aug 1, 2016 at 11:24 PM, Mohanadarshan Vivekanandalingam <
mo...@wso2.com> wrote:

>
>
> On Mon, Aug 1, 2016 at 8:38 PM, Indunil Upeksha Rathnayake <
> indu...@wso2.com> wrote:
>
>> Hi Suhothayan,
>>
>> Hi Indunil,
>
> I like to add some comments on this.. Please find them below..
>
>
>> There was an issue in EventPublisherServiceDS where the
>> setConfigurationContextService() method gets invoked after the bundle is
>> activated. Due to that, when we try to invoke
>> deployEventPublisherConfiguration() of EventPublisherService from the
>> activate method of an OSGi bundle on the IS side, it throws a
>> NullPointerException (since it refers to the ConfigurationContextService
>> object in EventPublisherServiceValueHolder). I think you can resolve it by
>> changing the OSGi reference cardinality in [1] to "1..1" (mandatory), if
>> there is no specific reason for making it optional.
>>
>
> There is a valid reason for this.
> As you know, we cannot guarantee OSGi bundle loading order in the Carbon
> environment. So there is a possibility that Axis2 deployment starts before
> the bundle activation of an OSGi component. To avoid this, we follow an
> approach of declaring
>
>    org.wso2.carbon.event.publisher.core.EventPublisherService
>
> as an Axis2 required service. Here, we add the reference of the
> corresponding OSGi service which is exposed by the relevant OSGi module.
> If you want to use the above approach (Axis2RequiredServices), we cannot
> have a 1..1 mapping for ConfigurationContextService, since it causes a
> cyclic dependency and affects bundle loading.
>
>> On the IS side, we were able to get rid of the NullPointerException by
>> adding an OSGi reference for ConfigurationContextService in the service
>> component and invoking deployEventPublisherConfiguration() in the
>> activate() method.
>
> No, the above solution is not correct and will not work all the time.
> There is a possibility that you'll encounter the same issue when
> ConfigurationContextService is bound to your component first and takes
> some time to resolve for the Event Publisher.
>
> What is the use case for creating an Event Publisher at server restart?
> Can you ship the pack with an Event Publisher, or deploy an event publisher
> on the first event if it is not there?
>
>
>> There was also an issue in filling out the dynamic properties of an
>> output adapter from the arbitrary data values; I've sent a PR for that.
>> Please review and merge the PR in [2].
>>
>
> Thanks, Merged it..
>
> Regards,
> Mohan
>
>
>>
>> [1] https://github.com/wso2/carbon-analytics-common/blob/
>> master/components/event-publisher/org.wso2.carbon.
>> event.publisher.core/src/main/java/org/wso2/carbon/event/
>> publisher/core/internal/ds/EventPublisherServiceDS.java#L56
>> [2] https://github.com/wso2/carbon-analytics-common/pull/306/files
>>
>> Thanks and Regards
>>
>> On Mon, Aug 1, 2016 at 3:06 PM, Sriskandarajah Suhothayan 
>> wrote:
>>
>>> HI Indunil
>>>
>>> Any update on this? Was the provided solution working?
>>>
>>> We released CEP 4.2-RC1. If we need new features/improvements for this
>>> effort, we can incorporate them in the next component release.
>>>
>>> Regards
>>> Suho
>>>
>>> On Fri, Jul 22, 2016 at 3:10 PM, Sriskandarajah Suhothayan <
>>> s...@wso2.com> wrote:
>>>
>>>>
>>>>
>>>> On Fri, Jul 22, 2016 at 3:00 PM, Johann Nallathamby 
>>>> wrote:
>>>>
>>>>>
>>>>>
>>>>> On Fri, Jul 22, 2016 at 8:33 AM, Indunil Upeksha Rathnayake <
>>>>> indu...@wso2.com> wrote:
>>>>>
>>>>>> Hi,
>>>>>>
>>>>>> On Fri, Jul 22, 2016 at 12:28 PM, Sriskandarajah Suhothayan <
>>>>>> s...@wso2.com> wrote:
>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> On Fri, Jul 22, 2016 at 12:00 PM, Indunil Upeksha Rathnayake <
>>>>>>> indu...@wso2.com> wrote:
>>>>>>>
>>>>>>>> Hi,
>>>>>>>>
>>>>>>>> Please find the meeting notes in [1]. I have the following
>>>>>>>> considerations regarding the improvements we have discussed.

Re: [Architecture] [Dev] [IS] [Analytics] Improvement to use Siddhi streams to send notifications

2016-08-11 Thread Sriskandarajah Suhothayan
I think getting the claims will improve message formatting when sending the
message. Based on the discussion with Johann, IS cannot determine in advance
which claims the message formatting will need; if IS has to send the claims,
it also has to read the message to understand which claims are necessary.

Any suggestions?

Regards
Suho

On Thu, Aug 11, 2016 at 4:42 PM, Mohanadarshan Vivekanandalingam <
mo...@wso2.com> wrote:

>
>
> On Thu, Aug 11, 2016 at 11:32 AM, Indunil Upeksha Rathnayake <
> indu...@wso2.com> wrote:
>
>> Hi Suhothayan,
>>
>> You can refer to [1] for the current approach we have taken on the IS
>> side for improving notification sending with Siddhi streams. As per the
>> discussion we had previously, this approach was taken in order to avoid
>> the performance degradation caused by redundant loading of the email
>> template in both IS and analytics. The main reason for the redundant
>> loading is that the user claims, which are needed to fill out the
>> placeholders in the email template, can only be loaded on the IS side.
>>
>> As per your current implementation, we can provide the registry path and
>> let the email template be loaded on the analytics side. For that, the
>> analytics side has to be improved to get the user claims from the user
>> store and fill out the template with those claim values. That way, we can
>> do it on the analytics side without loading the email template on the IS
>> side.
>>
>> The suggested improvements are as follows.
>>
>> *IS side:*
>> 1) Modify the publisher definition to include the registry path of the
>> email template, specifying the notification type and locale as
>> placeholders
>> 2) When an email notification needs to be sent, publish an arbitrary map
>> (including the data needed to load the email template from the registry)
>> to the stream
>>
>> *Analytics side:*
>> 1) Load the email template from the registry (using the arbitrary data
>> values we have provided)
>> 2) Extract the placeholders in the email template
>> 3) Get the user claims from the user store and fill out the placeholders
>> in the template with the necessary claim values
>>
>
> Items [1] and [2] above are already implemented at the Event Publisher
> level and can be used for the above use case. But [2] (which is mentioned
> as an improvement for the analytics side) is not valid, IMO. That is not
> something we need to handle at the analytics or Event Publisher level; it
> is architecturally incorrect to handle it there. What needs to be done is
> to get those claims at the identity level and send the relevant claims in
> the event itself.
>
> What is the reason for defining [3] as an improvement for analytics?
>
> Thanks,
> Mohan
>
>
>>
>>
>> We have used two prefixes in the placeholders of email templates,
>> "user.claim.identity" and "user.claim", in order to specify whether a
>> placeholder has to be filled with an identity claim or another WSO2
>> claim, respectively. The claim URIs used when retrieving the user claims
>> for the email templates are generated by appending the necessary suffix
>> to "http://wso2.org/claims/". As an example, if the placeholder is
>> "user.claim.givenname", the claim URI will be
>> "http://wso2.org/claims/givenname", so that placeholder has to be filled
>> with the user claim value corresponding to the above claim URI. You can
>> refer to [2] for the implementation done on the IS side; we can move that
>> logic to the analytics side.
>>
>> [1] https://github.com/wso2-extensions/identity-event-handler-
>> notification/pull/26/files
>> [2] https://github.com/wso2-extensions/identity-event-handler-
>> notification/pull/26/files#diff-2200b351eeef81ebbb5ea7f0d1f1ecb7R119
>>
>> Thanks and Regards
>>
>> On Tue, Aug 9, 2016 at 9:50 PM, Sriskandarajah Suhothayan 
>> wrote:
>>
>>> Based on the chat with Johann he suggested to support claims at event
>>> publisher.
>>> @Indunil, can you get the full requirements and update the thread.
>>>
>>> Regards
>>> Suho
>>>
>>> On Mon, Aug 1, 2016 at 11:24 PM, Mohanadarshan Vivekanandalingam <
>>> mo...@wso2.com> wrote:
>>>
>>>>
>>>>
>>>> On Mon, Aug 1, 2016 at 8:38 PM, Indunil Upeksha Rathnayake <
>>>> indu...@wso2.com> wrote:
>>>>
>>>>> Hi Suhothayan,
>>>>>
>>>>> Hi Indunil,
>>>>
>>>> I like to add some comments on this.. Pl

Re: [Architecture] [PET] Support UniqueBatchWindow(Time, Length) for Siddhi

2016-08-31 Thread Sriskandarajah Suhothayan
Yes, go for Case 2.

Regards
Suho

On Wed, Aug 31, 2016 at 1:15 PM, Malaka Silva  wrote:

> Hi,
>
> IMO we should wait for 4 unique events and then proceed. Otherwise it'll
> only provide the functionality already available with the batch window?
>
> Case 2 is correct.
>
> @Suho / Mohan WDYT?
>
>
>
> On Wed, Aug 31, 2016 at 11:22 AM, Rajjaz Mohammed  wrote:
>
>> Hi All,
>>
>> In the current UniqueLengthBatchWindow, I return the unique events from
>> each batch of <length> events (case 1), but there is another possibility:
>> returning <length> unique events collected from the stream (case 2).
>> Which one is the right one? Please advise (we can add first/last unique
>> as an optional parameter).
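To make the two interpretations concrete, here is a sketch in Siddhi query syntax (the `unique:lengthBatch` window name and its parameter order are assumptions for illustration, not a confirmed API):

```siddhi
define stream LoginEvents (ip string, timestamp long);

-- Hypothetical window: batch size 4, uniqueness checked on 'ip'.
from LoginEvents#window.unique:lengthBatch(ip, 4)
select ip
insert into UniqueIps;

-- Given the input ip sequence A, A, B, B, C, C, D, D:
--   Case 1 (unique events per 4-event batch): emits {A, B}, then {C, D};
--     a batch may emit fewer than 4 events.
--   Case 2 (wait for 4 unique events): emits {A, B, C, D} after the 8th
--     event; every batch holds exactly 4 events, but may wait indefinitely.
```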
>>
>>
>>
>> On Thu, Aug 18, 2016 at 3:06 PM, Dilini Muthumala 
>> wrote:
>>
>>> On Thu, Aug 18, 2016 at 1:00 PM, Rajjaz Mohammed 
>>>> wrote:
>>>>
>>>>> Hi All,
>>>>>
>>>>> The existing uniqueWindow allows us to specify any number of
>>>>> attributes (a list) [1] for the key, but in my current implementation
>>>>> I changed it to support a single attribute [2], so it is a variable
>>>>> instead of a list. But in the *constructFinder* method we need to set
>>>>> a list instead of a variable. Please advise on this.
>>>>>
>>>>
>>> AFAIU, the variableExpressionExecutors parameter passed to
>>> constructFinder is not relevant when determining whether to support
>>> multiple attributes or not. Please correct me if I've missed anything.
>>>
>>>
>>> To add to what Rajjaz has mentioned,
>>>
>>> The existing unique window in Siddhi allows multiple attributes to be
>>> considered when checking for uniqueness.
>>>
>>> E.g.
>>> Here, unique window allows both ip and hour attributes to be considered
>>> when checking uniqueness.
>>> from LoginEvents#window.unique(ip, hour)
>>> select count(ip) as ipCount, ip, hour
>>> insert into uniqueIps ;
>>>
>>>
>>> Since unique window supports it, IMO it is good to support it in
>>> uniqueTimeBatch as well. WDYT?
>>>
>>>
>>> Below are the options I see to support this in uniqueTimeBatch.
>>>
>>> *Option#1:*
>>> *Allow multiple attributes, but start time for the window should be a
>>> constant.*
>>> E.g.
>>> Here 1000 is the start time. ip and hour will be considered when
>>> checking for uniqueness.
>>>
>>> from LoginEvents#unique.timeBatch(1 min, 1000, ip, hour)
>>> select count(ip) as ipCount, ip, hour
>>> insert into uniqueIps ;
>>>
>>> We cannot allow start time to be a variable because then we cannot
>>> determine whether it is a startTime or whether it is meant to be used when
>>> checking for uniqueness.
>>> E.g.
>>> from LoginEvents#unique.timeBatch(1 min, *time*, ip, hour)
>>> select count(ip) as ipCount, ip, hour
>>> insert into uniqueIps ;
>>>
>>> Here we cannot determine whether *time* is startTime or it is meant to
>>> be used for checking uniqueness (like ip and hour).
>>>
>>> *Option#2:*
>>> *Not allowing multiple attributes (i.e. only one attribute is allowed to
>>> check uniqueness), and let start time be a constant or a variable.*
>>> E.g.
>>> from LoginEvents#unique.timeBatch(1 min, ip, time)
>>> select count(ip) as ipCount, ip, hour
>>> insert into uniqueIps ;
>>>
>>> This is the current implementation of the UniqueBatchWindow (I changed
>>> the order of parameters in the current impl, to keep consistency with other
>>> examples).
>>>
>>> IMO, allowing start time to be a variable does not add much value,
>>> because even if we allow it to be read from an event attribute, we will
>>> only read it from the first event.
>>> Therefore, I would prefer option#1.
>>>
>>> WDYT? If we are to support multiple attributes, do we have better
>>> options?
>>>
>>> Thanks,
>>> Dilini
>>>
>>>
>>>
>>>
>>>
>>>>
>>>>> [1] https://github.com/wso2/siddhi/blob/master/modules/siddhi-co
>>>>> re/src/main/java/org/wso2/siddhi/core/query/processor/stream
>>>>> /window/UniqueWindowProcessor.java#L53
>>>>> [2] https://github.com/wso2-extensions/siddhi-window-un

Re: [Architecture] [C5] Spark/Lucene Integration in Stream Processor

2016-10-21 Thread Sriskandarajah Suhothayan
On Fri, Oct 21, 2016 at 2:00 PM, Anjana Fernando  wrote:

> Hi,
>
> So we are starting on porting the earlier DAS-specific functionality to
> C5. With this, we are planning on not embedding the Spark server
> functionality into the primary binary itself, but rather running it
> separately as another script in the same distribution. So basically, when
> running the server in standalone mode, from a centralized script, we will
> start the Spark processes and then the main stream processor server. In a
> clustered setup, we will start the Spark processes separately and do the
> clustering that is native to them, which is currently done by integrating
> with ZooKeeper.
>
> +1


> So basically, for the minimum H/A setup, we would need two stream
> processing nodes plus ZK to build up the cluster, if we are using Spark as
> well. With C5, since we are not using Hazelcast anyway, we can use ZK for
> other general coordination operations too, since it is already a
> requirement for Spark. And we have the added benefit of avoiding the
> issues that come with a peer-to-peer coordination library, such as
> split-brain scenarios.
>
>
> Also, aligning with the above approach, we are considering integrating
> directly with Solr running external to the stream processor, rather than
> doing the indexing in embedded mode. DAS already has a separate indexing
> mode (profile), so rather than using that, we can use Solr directly. One
> of the main reasons for using it is that it adds functionality on top of
> base Lucene, such as OOTB support for aggregates, where at the moment we
> don't have full functionality. So the suggestion is that Solr will also
> come as a separate profile (script) with the distribution, and it will be
> started up if indexing scenarios are required for the stream processor;
> we can start it automatically or selectively. Also, Solr clustering is
> done with ZK, which we will have anyway with the new Spark clustering
> approach we are using.
>
> The aim of keeping the non-WSO2 servers external rather than embedded is
> the simplicity it brings to our codebase, since we do not have to maintain
> the integration code required to embed them, and those servers can use
> their own recommended deployment patterns. For example, Spark isn't
> designed to be embedded into other servers, so we had to mess around with
> some things to embed and cluster it internally. Upgrading such
> dependencies also becomes very straightforward, since they are external
> to the base binary.
>

+1 for having Spark, Solr & ZK external to the Stream Processor's core
capability. In a minimum HA setup we can start all three on both nodes, and
when scaling the deployment we can scale the components based on the load
on each of them.

I'm +1 for shipping all three as part of Product Analytics, but when it
comes to the Stream Processor, I believe shipping Spark & Solr would be
overkill for the streaming solution. We can ship all the necessary
connectors and ask users to download Spark & Solr when needed.

Regards
Suho

>
> Cheers,
> Anjana.
> --
> *Anjana Fernando*
> Associate Director / Architect
> WSO2 Inc. | http://wso2.com
> lean . enterprise . middleware
>





Re: [Architecture] [C5] Spark/Lucene Integration in Stream Processor

2016-10-22 Thread Sriskandarajah Suhothayan
On Sat, Oct 22, 2016 at 10:45 AM, Nirmal Fernando  wrote:

>
>
> On Fri, Oct 21, 2016 at 2:00 PM, Anjana Fernando  wrote:
>
>> Hi,
>>
>> So we are starting on porting the earlier DAS specific functionality to
>> C5. And with this, we are planning on not embedding the Spark server
>> functionality to the primary binary itself, but rather run it separately as
>> another script in the same distribution. So basically, when running the
>> server in the standalone mode, from a centralized script, we will start
>> Spark processes and then the main stream processor server. And in a
>> clustered setup, we will start the Spark processes separately, and do the
>> clustering that is native to it, which is currently by integrating with
>> ZooKeeper.
>>
>
> Does this mean we still keep Spark binaries inside Stream Processor? If
> not how are we planning to start a Spark process from Stream Processor?
>

We don't need to have the Spark binaries in the Stream Processor, and I
believe it would be wrong, as that is not its core functionality. But when
it comes to Product Analytics we may ship them; we need to decide on that.


>> So basically, for the minimum H/A setup, we would need two stream
>> processing nodes and also ZK to build up the cluster, if we are using Spark
>> also. So with C5, since we are not anyway not using Hazelcast, for other
>> general coordination operations also we can use ZK, since it is already a
>> requirement for Spark. And we have the added benefit of not getting the
>> issues that comes with a peer-to-peer coordination library, such as split
>> brain scenarios.
>>
>> Also, aligning with the above approach, we are considering of directly
>> integrate to Solr in running in external to stream processor, rather than
>> doing the indexing in the embedded mode. Now also in DAS, we have a
>> separate indexing mode (profile), so rather than using that, we can use
>> Solr directly. So one of the main reasons for using this would be, it has
>> additional functionality to base Lucene, where it comes OOTB functionality
>> with aggregates etc.. which at the moment, we don't have full
>> functionality. So the suggestion is, Solr will also come as a separate
>> profile (script) with the distribution, and this will be started up if the
>> indexing scenarios are required for the stream processor, which we can
>> automatically start it up or selectively start it. Also, Solr clustering is
>> also done with ZK, which we will anyway have with the new Spark clustering
>> approach we are using.
>>
>> So the aim of getting out the non-WSO2 specific servers without embedding
>> is, the simplicity it provides in our codebase, since we do not have to
>> maintain the integration code that is required to embed it, and those
>> servers can use its own recommended deployment patterns. For example, Spark
>> isn't designed to be embedded in to other servers, so we had to mess around
>> with some things to embed and cluster it internally. And also, upgrading
>> dependencies such as that becomes very straightforward, since it's external
>> to the base binary.
>>
>> Cheers,
>> Anjana.
>> --
>> *Anjana Fernando*
>> Associate Director / Architect
>> WSO2 Inc. | http://wso2.com
>> lean . enterprise . middleware
>>
>
>
>
> --
>
> Thanks & regards,
> Nirmal
>
> Team Lead - WSO2 Machine Learner
> Associate Technical Lead - Data Technologies Team, WSO2 Inc.
> Mobile: +94715779733
> Blog: http://nirmalfdo.blogspot.com/
>
>
>




Re: [Architecture] [PROPOSAL] Distribute internal machine states

2016-12-06 Thread Sriskandarajah Suhothayan
Hi Pablo,

Thanks for your interest in the project and for the great proposal.

We are also considering distributing Siddhi using Kafka; we will take your
ideas into account when implementing the solution, and we are happy to have
your contributions.

Meanwhile, have you done any implementation regarding this? Do you have any
performance numbers?

Currently, we are thinking of partitioning streams based on given stream
attribute and process each partition in isolation, but your solution
provides a simple state sharing technique to solve this problem. My only
concern in your solution is, if the state change is frequent then this
solution will not be optimal and the same problem is also there for the
rolling aggregations as the window will be updated for each event. If we
are syncing the state for each event then there will be a lot of overhead.
Have you thought about it? Do you have any suggestions?
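For context, partitioning a stream by one of its attributes (so each partition is processed in isolation and per-key state never needs to be shared) can be sketched in plain Java. The class and method names below are illustrative only, not part of Siddhi or any WSO2 API:

```java
// Illustrative sketch: route each event to a fixed processing node based on
// one stream attribute, so per-key window state can stay local to that node.
public class AttributePartitioner {
    private final int nodeCount;

    public AttributePartitioner(int nodeCount) {
        this.nodeCount = nodeCount;
    }

    // Hash the partitioning attribute into a stable node index in [0, nodeCount).
    public int nodeFor(String attributeValue) {
        return Math.floorMod(attributeValue.hashCode(), nodeCount);
    }

    public static void main(String[] args) {
        AttributePartitioner p = new AttributePartitioner(4);
        // Events with the same attribute value always land on the same node,
        // so no per-event state synchronisation is needed between nodes.
        int n1 = p.nodeFor("card-1234");
        int n2 = p.nodeFor("card-1234");
        System.out.println(n1 == n2); // prints true
    }
}
```

Because routing is deterministic, each node owns its keys outright; this avoids exactly the per-event state-sync overhead raised above, at the cost of not supporting computations that span partitions.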

Regards
Suho

On Mon, Dec 5, 2016 at 3:57 PM, Pablo Casares Crespo <
pablocasarescre...@gmail.com> wrote:

> Hi all, the proposal has been attached as a PDF.
>
>
>
>
>
>
> Best regards,
> Pablo.
>
>
> ___
> Architecture mailing list
> Architecture@wso2.org
> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>
>


-- 

*S. Suhothayan*
Associate Director / Architect & Team Lead of WSO2 Complex Event Processor
*WSO2 Inc. *http://wso2.com
* *
lean . enterprise . middleware


*cell: (+94) 779 756 757 | blog: http://suhothayan.blogspot.com/
twitter: http://twitter.com/suhothayan
 | linked-in:
http://lk.linkedin.com/in/suhothayan *
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] Introducing interface to get widget configuration files in Dashboard Component

2017-01-19 Thread Sriskandarajah Suhothayan
Yes, +1 for the approach. Typically, CEP provides the dashboard with its own
store via an OSGi service.
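A minimal sketch of what such an interface could look like; the interface and method names below are hypothetical, not the actual dashboard-component API, and the in-memory implementation stands in for CEP's DB-backed one:

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Hypothetical interface the dashboard component could define; CEP would
// register an implementation of it as an OSGi service.
interface WidgetConfigProvider {
    List<String> listConfigNames();   // all available widget config names
    String getConfig(String name);    // one specific config, e.g. as JSON
}

// Example in-memory implementation standing in for a DB-backed one.
class InMemoryWidgetConfigProvider implements WidgetConfigProvider {
    private final Map<String, String> configs = new LinkedHashMap<>();

    InMemoryWidgetConfigProvider() {
        configs.put("line-chart", "{\"type\":\"line\"}");
        configs.put("bar-chart", "{\"type\":\"bar\"}");
    }

    public List<String> listConfigNames() { return List.copyOf(configs.keySet()); }
    public String getConfig(String name) { return configs.get(name); }
}

public class WidgetConfigDemo {
    public static void main(String[] args) {
        WidgetConfigProvider provider = new InMemoryWidgetConfigProvider();
        System.out.println(provider.listConfigNames()); // prints [line-chart, bar-chart]
    }
}
```

The dashboard component only depends on the interface; whichever bundle registers the service decides where the configs actually live, which is what makes this cleaner than Java reflection.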

Regards
Suho

On Wed, Jan 18, 2017 at 10:39 AM, Nisala Nanayakkara 
wrote:

> Hi,
>
> @CEP team : Appreciate your response about this matter.
>
> Thanks,
> Nisala
>
> On Mon, Jan 16, 2017 at 2:14 PM, Nisala Nanayakkara 
> wrote:
>
>> Hi All,
>>
>> According to the offline discussion we had with Suho, Chandana,
>> Kishanthan, Udara, Tanya and Tharik on the CEP requirements of the
>> dashboard components, we agreed to ship a template widget with the
>> dashboard-component, which can be used to generate widgets by passing a
>> given configuration file. Please refer to the mail ‘Q1 Dashboard Release
>> Plan for Realtime Analytics (CEP)’ for more information.
>>
>> Since the above-mentioned configuration file is generated by the CEP team,
>> we decided to implement an interface to get the configuration files. They
>> can then register an OSGi service implementing the given interface, so
>> that we can get the list of configuration files, a specific configuration
>> file, etc. from their DB. I think this is more convenient than using Java
>> reflection. WDYT? Please feel free to provide your kind input about this
>> matter.
>>
>>
>> Thanks,
>> Nisala
>>
>>
>>
>>
>> --
>> *Nisala Niroshana Nanayakkara,*
>> Software Engineer
>> Mobile:(+94)717600022
>> WSO2 Inc., http://wso2.com/
>>
>
>
>
> --
> *Nisala Niroshana Nanayakkara,*
> Software Engineer
> Mobile:(+94)717600022
> WSO2 Inc., http://wso2.com/
>



-- 

*S. Suhothayan*
Associate Director / Architect & Team Lead of WSO2 Complex Event Processor
*WSO2 Inc. *http://wso2.com
* *
lean . enterprise . middleware


*cell: (+94) 779 756 757 | blog: http://suhothayan.blogspot.com/
twitter: http://twitter.com/suhothayan
 | linked-in:
http://lk.linkedin.com/in/suhothayan *
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] DAS Support for Mesos

2017-02-22 Thread Sriskandarajah Suhothayan
Can you also evaluate YARN, alongside Apache Mesos and Docker? I think it
would be a better resource-management infrastructure for us to support,
because with it DAS can live cohesively with other big-data projects, and
people really like being able to manage DAS inside YARN the way they manage
Spark. Streaming systems (Flink and Samza) are also moving to support YARN,
so we may also need to move in that direction in the near future.

Regards
Suho

On Wed, Feb 22, 2017 at 1:03 AM, Sachith Withana  wrote:

> Hi all,
>
> We had a discussion sometime back on supporting DAS on Apache Mesos which
> would enable us to dockerize Spark therefore DAS.
>
> My question is, are we going to support DCOS[1] or Apache Mesos for the
> Docker environment?
> DCOS is a commercialized version of Mesos/Mesosphere and seems to be
> widely used.
>
> What are we planning to use in our docker deployments? This would dominate
> which one we choose as well.
>
> [1] https://dcos.io/
>
> Thanks,
> Sachith
> --
> Sachith Withana
> Software Engineer; WSO2 Inc.; http://wso2.com
> E-mail: sachith AT wso2.com
> M: +94715518127 <071%20551%208127>
> Linked-In: https://lk.linkedin.com/in/
> sachithwithana
>



-- 

*S. Suhothayan*
Associate Director / Architect & Team Lead of WSO2 Complex Event Processor
*WSO2 Inc. *http://wso2.com
* *
lean . enterprise . middleware


*cell: (+94) 779 756 757 | blog: http://suhothayan.blogspot.com/
twitter: http://twitter.com/suhothayan
 | linked-in:
http://lk.linkedin.com/in/suhothayan *
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


[Architecture] WSO2 Data Analytics Server 4.0.0-M1 Released !

2017-03-29 Thread Sriskandarajah Suhothayan
Hi All,

The WSO2 Smart Analytics  team is pleased
to announce the release of WSO2 Data Analytics Server version 4.0.0-M1.

WSO2 Smart Analytics lets digital businesses create real-time, intelligent,
actionable business insights and data products, achieved through WSO2
Data Analytics Server's real-time, incremental & intelligent data
processing capabilities.

WSO2 DAS can:

   - Receive events from various data sources
   - Process & correlate them in real time with the state-of-the-art,
   high-performance Siddhi Complex Event Processing Engine, which
   offers an easy-to-learn SQL-like query language.
   - Run analyses that span longer time durations with its
   incremental processing capability, achieving high performance at low
   infrastructure cost.
   - Use machine learning and other models to derive intelligent insights
   from the data.
   - Notify users of interesting event occurrences as alerts via multiple
   transports & let them visualize the results via customizable dashboards.

WSO2 DAS is released under Apache Software License Version 2.0
, one of the most
business-friendly licenses available today.

You can find the product at
https://github.com/wso2/product-das/releases/download/v4.0.0-M1/wso2das-4.0.0-M1.zip
Documentation at https://docs.wso2.com/display/DAS400/
Source code at https://github.com/wso2/product-das/releases/tag/v4.0.0-M1

WSO2 DAS 4.0.0-M1 includes the following new features.

New Features

   - Receive and publish events from Siddhi with @Source and @Sink
   annotations
   - TCP sink and source
   - Kafka sink and source
   - Support for DAS Text Editor to develop Siddhi applications.

Reporting *Issues*
Issues can be reported using the public JIRA available at
https://wso2.org/jira/browse/DAS
Contact us
WSO2 Data Analytics Server developers can be contacted via the
mailing lists:

   Developer List : d...@wso2.org | Subscribe
 | Mail Archive


Alternatively, questions can also be raised on Stack Overflow:
*Forum* http://stackoverflow.com/questions/tagged/wso2/

Support

We are committed to ensuring that your enterprise middleware deployment is
completely supported from evaluation to production. Our unique approach
ensures that all support leverages our open development methodology and is
provided by the very same engineers who build the technology.

For more details and to take advantage of this unique opportunity please
visit http://wso2.com/support/.

For more information on WSO2 Smart Analytics and Smart Analytics Solutions,
visit the WSO2 Smart Analytics Page.
*- The WSO2 Smart Analytics Team -*
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] [Meeting Notes] UES integration with BAM / CEP / APIM

2013-11-12 Thread Sriskandarajah Suhothayan
Just a suggestion: I would like this gadget gen tool to take a template
gadget and a conf file as input, ask the user to enter the UI options based
on the conf, and generate the gadget based on the template gadget file.
Thereby we can support many types of gadgets.
This is important because in the current BAM gadget gen tool we are
restricted to only two gadget types.
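The template-plus-conf idea can be sketched in plain Java: the conf supplies values that are substituted into placeholders in the template gadget. The class name and the `${...}` placeholder convention below are illustrative assumptions, not the actual gadget gen tool:

```java
import java.util.Map;

// Illustrative sketch: generate a gadget by filling ${...} placeholders in a
// template gadget file with values taken from a conf file.
public class GadgetTemplateFiller {

    // Replace each ${key} occurrence in the template with the conf value for that key.
    public static String fill(String template, Map<String, String> conf) {
        String result = template;
        for (Map.Entry<String, String> e : conf.entrySet()) {
            result = result.replace("${" + e.getKey() + "}", e.getValue());
        }
        return result;
    }

    public static void main(String[] args) {
        String template = "<Module><title>${title}</title><chart>${chartType}</chart></Module>";
        Map<String, String> conf = Map.of("title", "TPS", "chartType", "bar");
        // prints <Module><title>TPS</title><chart>bar</chart></Module>
        System.out.println(fill(template, conf));
    }
}
```

Because the tool only needs the template and the conf, any team (or end user) can add a new gadget type without the tool itself having to know about it.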

Regards
Suho
On Nov 12, 2013 2:46 AM, "Tanya Madurapperuma"  wrote:

>
>
>
> On Tue, Nov 12, 2013 at 4:03 PM, Lasantha Fernando wrote:
>
>> Hi Tanya,
>>
>>
>> On 12 November 2013 15:50, Isabelle Mauny  wrote:
>>
>>> Sure - These are static dashboards. But a user must be able to create
>>> their own dashboards , on top on the ones we are delivering by default. In
>>> this case, they would create their own toolbox , leveraging the gadgetgen
>>> tool you discussed right?
>>>
>>> Thanks.
>>>
>>>
>>>
>>> --
>>> Isabelle Mauny
>>> Director, Product Management; WSO2, Inc.;  http://wso2.com/
>>> email: isabe...@wso2.com  - mobile: +34 616050684
>>>
>>>
>>> On Tue, Nov 12, 2013 at 11:10 AM, Tanya Madurapperuma wrote:
>>>



 On Tue, Nov 12, 2013 at 3:24 PM, Isabelle Mauny wrote:

> For APIM, we need consumer dashboards as well.
>

 Isabelle,
 Consumer dashboards are static dashboards that we can ship with APIM.
 Right ? IS there a need for consumers to go ahead and create dashboards
 themselves containing their preferred gadgets and layouts ? Dashboards that
 we discussed here are of that nature which will be created by the user
 himself.



> Isabelle.
>  __
>
>
> *Isabelle Mauny *Director, Product Management; WSO2, Inc.;
> http://wso2.com/
>
> On Nov 12, 2013, at 10:52 AM, Tanya Madurapperuma 
> wrote:
>
> Hi all,
>
> Following is the meeting notes of the $Subject.
>
> Participants : Anjana, Gokul, Mohan, Lasantha, SameeraP, Shiro,
> Lalaji, Joe, Chan, Gillian (over the phone), UES team
>
> BAM
> =
> 1. Some tool similar to the Gadget gen tool will be developed on the UES side
> and there will be a url from the carbon console for the gadget gen tool
> 2. User can give the database query in the tool and get the data and
> then pass it to the UES gadgets
>
> CEP
> =
> 1. Events are published to BAM ---> create datasources---> create
> gadget---> publish to store ---> create dashboard
> 2. Reuse the suggested gadget gen tool
>
> For CEP usecase, it would be something like
>>
>> Events are published to a datasource such as cassandra(or web socket or
>> some subset of output adaptors supported by CEP) and a gadget will be
>> generated accordingly -> publish to store -> create dashboard.
>>
>> Guess that is what you meant above..?
>>
>
> Yes. Thanks for correcting it.
>
>>
> APIM
> =
> 1. A predefined dashboard is required for subscribers with the same set of
> gadgets; therefore APIM does not have a requirement for creating customizable
> dashboards.
>
> Future items
> ==
>
> 1. Personalization of dashboards - deciding whether users' changes may
> be overridden by admin's changes?
> 2. multi-tenancy in ues
>
> Please add if I have missed anything.
>
> Thanks,
> Tanya.
>
> Thanks,
>> Lasantha
>>
>>>
> --
> Tanya Madurapperuma
>
> Software Engineer,
> WSO2 Inc. : wso2.com
> Mobile : +94718184439
> ___
> Architecture mailing list
> Architecture@wso2.org
> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>
>
>
> ___
> Architecture mailing list
> Architecture@wso2.org
> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>
>


 --
 Tanya Madurapperuma

 Software Engineer,
 WSO2 Inc. : wso2.com
 Mobile : +94718184439

 ___
 Architecture mailing list
 Architecture@wso2.org
 https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


>>>
>>> ___
>>> Architecture mailing list
>>> Architecture@wso2.org
>>> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>>>
>>>
>>
>>
>> --
>> *Lasantha Fernando*
>> Software Engineer - Data Technologies Team
>> WSO2 Inc. http://wso2.com
>>
>> email: lasan...@wso2.com
>> mobile: (+94) 71 5247551
>>
>> ___
>> Architecture mailing list
>> Architecture@wso2.org
>> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>>
>>
>
>
> --
> Tanya Madurapperuma
>
> Software Engineer,
> WSO2 Inc. : wso2.com
> Mobile : +94718184439
>
> ___
> Architecture mailing list
> Architecture@wso2.org
> https://mail

Re: [Architecture] [Meeting Notes] UES integration with BAM / CEP / APIM

2013-11-13 Thread Sriskandarajah Suhothayan
On Wed, Nov 13, 2013 at 1:40 AM, Gayan Dhanushka  wrote:

> Hi Tanya,
>
> Adding one more thing, it would be great if we can have some gadgets with
> drill down as well.
>
> Thanks
> GayanD
>
> Gayan Dhanuska
> Software Engineer
> http://wso2.com/
> Lean Enterprise Middleware
>
> Mobile
> 071 666 2327
>
> Office
> Tel   : 94 11 214 5345
>  Fax  : 94 11 214 5300
>
> Twitter : https://twitter.com/gayanlggd
>
>
> On Wed, Nov 13, 2013 at 1:16 PM, Tanya Madurapperuma wrote:
>
>>
>>
>>
>> On Wed, Nov 13, 2013 at 10:53 AM, Subash Chaturanga wrote:
>>
>>>
>>> On Tue, Nov 12, 2013 at 8:19 PM, Sriskandarajah Suhothayan <
>>> s...@wso2.com> wrote:
>>>
>>>> Just a suggestion. I would like this gadget gen tool to take a template
>>>> gadget and a conf file as the input, ask to enter the UI options based on
>>>> the conf and generate the gadget based on the template gadget file.
>>>> Whereby we can support many types of gadgets.
>>>> This is important because in the current BAM gadget gen tool we are
>>>> restricted to only two gadget types.
>>>>
>>> +1. Even though we have the flexibility to write our own tbox, even
>>> for BAM, it would be awesome if the gadgets had more flexibility.
>>> Not sure about the level of complexity that comes with configuring a given
>>> template gadget. It will surely be better than writing our own jag ;-).
>>>
>>
>> Hi Suho and Subash,
>> Do you suggest to ship some default gadget templates ( ex: for bar
>> chart, line chart and etc ) with the tool which user can select and then
>> ask the user to give a conf file and then generate the gadget? Or else does
>> the user himself provides the gadget template as well?
>>
>>>
>>>
My suggestion was this: we can't expect the UES folks to write all the
gadgets. So the user creates a template gadget and a conf file (containing
details of how the template can be filled with custom data to generate the
gadget) and adds both of them to the tool. The tool will then be able to
offer some level of customisation on the given gadget template.
Through this, the product teams and even the end users can use this tool
to create new types of gadgets.

WDYT?

Suho



>>>
>>>> Regards
>>>> Suho
>>>> On Nov 12, 2013 2:46 AM, "Tanya Madurapperuma"  wrote:
>>>>
>>>>>
>>>>>
>>>>>
>>>>> On Tue, Nov 12, 2013 at 4:03 PM, Lasantha Fernando 
>>>>> wrote:
>>>>>
>>>>>> Hi Tanya,
>>>>>>
>>>>>>
>>>>>> On 12 November 2013 15:50, Isabelle Mauny  wrote:
>>>>>>
>>>>>>> Sure - These are static dashboards. But a user must be able to
>>>>>>> create their own dashboards , on top on the ones we are delivering by
>>>>>>> default. In this case, they would create their own toolbox , leveraging 
>>>>>>> the
>>>>>>> gadgetgen tool you discussed right?
>>>>>>>
>>>>>>> Thanks.
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> --
>>>>>>> Isabelle Mauny
>>>>>>> Director, Product Management; WSO2, Inc.;  http://wso2.com/
>>>>>>> email: isabe...@wso2.com  - mobile: +34 616050684
>>>>>>>
>>>>>>>
>>>>>>> On Tue, Nov 12, 2013 at 11:10 AM, Tanya Madurapperuma <
>>>>>>> ta...@wso2.com> wrote:
>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> On Tue, Nov 12, 2013 at 3:24 PM, Isabelle Mauny 
>>>>>>>> wrote:
>>>>>>>>
>>>>>>>>> For APIM, we need consumer dashboards as well.
>>>>>>>>>
>>>>>>>>
>>>>>>>> Isabelle,
>>>>>>>> Consumer dashboards are static dashboards that we can ship with
>>>>>>>> APIM. Right ? IS there a need for consumers to go ahead and create
>>>>>>>> dashboards themselves containing their preferred gadgets and layouts ?
>>>>>>>> Dashboards that we discussed here are of that nature which will be 
>>>>>>>> created
>>>>>>&g

[Architecture] DevS CEP Plugin

2013-11-13 Thread Sriskandarajah Suhothayan
Hi all,

As we are planning to go for the CEP 3.0.0 plugin, I think we have to focus
more on its GUI and usability aspects.

CEP has two main concepts:
1. Streams
2. Execution Plan.

Execution Plan creation can look like the CEP 3.0.0 UI.

But for Streams I think we have to make some improvements. I'm not expecting
all of this to be done for the next release; this is more of a long-term
vision, and we have to find what should be and can be done now and execute
that. Please give your comments and improvements.

*1*. We need to have some sort of virtual Stream Store in DevS itself; this
will allow us to select streams from a drop-down in the Execution Plan
creation GUI.

1.1 This Stream Store will be populated by connecting DevS with CEP and/or
by exporting Streams from CEP and importing to DevS  and/or through
configs.

(for now we'll go with configs)
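For illustration, a stream definition carried in such a config could look roughly like the following JSON. This is a sketch from memory of the databridge stream-definition format; the stream name and attributes are invented for the example:

```json
{
  "name": "org.wso2.sample.stockStream",
  "version": "1.0.0",
  "nickName": "StockStream",
  "description": "Sample stream definition shipped as a config",
  "payloadData": [
    {"name": "symbol", "type": "STRING"},
    {"name": "price", "type": "DOUBLE"}
  ]
}
```

A set of such files checked into the DevS project would be enough to populate the virtual Stream Store until the DevS-to-CEP connection is built.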


*2*. We can have a similar UI of CEP for Stream creation

*3*. Event Builders and Formatters will be associated with the Streams.

3.1 The Stream listing UI will list the associated Builders and Formatters
under each Stream. Event Builders and Formatters won't have a separate
listing page/GUI; therefore a Builder or Formatter can only be created after
creating the Stream.


3.2 Need to figure out a proper way to export new/modified streams and
apply that to CEP.


3.3 Event Formatter creation GUI can look like the current CEP 3.0.0 UI.


3.4 The Event Builder GUI needs to be fixed; it also needs to use a
drop-down to select the Stream. The mapping form needs to be auto-created
based on the selected stream, thereby only allowing the user to fill in the
incoming-message-related info.


*4*. Input and Output Adapter types and their Message configuration fields
for the Event Builder and Formatter need to be imported into DevS.

4.1 The available Adapter types and their Message configuration fields
will be imported by connecting DevS with CEP and/or by exporting from CEP
and importing into DevS and/or through configs.

(for now we'll go with configs)


Any suggestions appreciated!

Regards
Suho

-- 

*S. Suhothayan *
Associate Technical Lead,
 *WSO2 Inc. *http://wso2.com
* *
lean . enterprise . middleware


*cell: (+94) 779 756 757 | blog: http://suhothayan.blogspot.com/
twitter: http://twitter.com/suhothayan
 | linked-in:
http://lk.linkedin.com/in/suhothayan *
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] DevS CEP Plugin

2013-11-14 Thread Sriskandarajah Suhothayan
On Thu, Nov 14, 2013 at 2:07 AM, Harshana Martin  wrote:

> Hi All,
>
> Please see my comments inline.
>
>
> I think we will also need to go for CEP 3.0.1 release with this plugin,
because currently there is no CApp Deployer in CEP 3.0.0.


> On Thu, Nov 14, 2013 at 11:43 AM, Sriskandarajah Suhothayan  > wrote:
>
>>
>> Hi all,
>>
>> As we are planing to go for the CEP 3.0.0 plugin, I think we have to
>> focus more on the its GUI and the usability aspects of it.
>>
>> CEP has two main concepts
>> 1. Streams
>> 2. Execution Plan.
>>
>> Execution Plan creation can look like the CEP 3.0.0 UI.
>>
>> But for Streams I think we have to do some Improvements. I'm not
>> expecting all this to be done for the next release, but this kind of a long
>> term vision, we have to find what should be and can be done now and execute
>> them. Please give your comments and improvements.
>>
>> *1*.We need to have some sort of virtual Stream Store in DevS itself,
>> this will allow us to select streams from drop down at the Execution Plan
>> creation GUI.
>>
>> 1.1 This Stream Store will be populated by connecting DevS with CEP
>> and/or by exporting Streams from CEP and importing to DevS  and/or through
>> configs.
>>
>> (for now we'll go with configs)
>>
>>
> I believe these Streams are stored in the registry. In that we can provide
> users following options to select a stream as in ESB Editor.
>
> 1. From Workspace - Locate and list the stream definitions in the Eclipse
> Workspace
> 2. From Registry - Allow user to browse registry of the CEP and select
> from it.
>
> This approach is consistent across our other tools and users will feel
> comfortable around this since it is the general practice in in DevS.
>
Great,

The stream store for CEP can change to Registry, Cassandra, etc. I think we
need to fix this on the CEP side, because we now have issues when
integrating CEP with BAM. All the stream-related calls need to go via the
DataBridge Stream Definition Store. To my understanding, Eclipse needs to
call a service of the DataBridge Stream Definition Store and import the
streams.
Is this possible?
Otherwise, when we add a config file, it needs to override the streams in
the DataBridge Stream Definition Store.


>> *2*. We can have a similar UI of CEP for Stream creation
>>
>
> +1
>
>>
>> *3*. Event Builder and Formatters will be associated to the Streams.
>>
>> 3.1 Stream listing UI will list its associated Builders and Formatters
>> under it.  Event Builder and Formatters won't have a separate listing
>> page/GUI. Therefore Builder and Formatter can be only created after
>> creating the Stream.
>>
>>
>> 3.2 Need to figure out a proper way to export new/modified streams and
>> apply that to CEP.
>>
>>
> Correct. Previously we used to have just one file. Now that we have
> multiple files, we may have to introduce a packaging mechanism for them
> with a new deployer. Need to discuss this further whether we can reuse the
> existing Registry Resource artifacts, etc for this and avoid introduction
> of new packaging mechanism.
>
I think CApp is good enough; let's see.

>
>> 3.3 Event Formatter creation GUI can look like the current CEP 3.0.0 UI.
>>
>>
>> 3.4 Event Builder GUI need to be fixed, the Event Builder GUI also need
>> to use drop down to select the Stream. The mapping form need to be auto
>> created based on the selected stream whereby only allowing the user to fill
>> the incoming message related info.
>>
>>
> As long as the Stream has the necessary information to do this, we can do it.
>
@Mohan, we need this for all types, not only for WSO2Event.
E.g., in the XML/JMS case we add the topic, then add the XML mapping, and
finally create an output stream.
My recommendation is that we add the topic and then select the expected
output stream from the drop-down, which will give an easy way to fill in the
XML mapping.
Does this make sense?


>> *4*. Input and Output Adapter types and their Message configurations
>> fields for the Event Builder and Formatter need to be imported to the DevS.
>>
>> 4.1 The available Adapter types and their Message configurations fields
>> will be imported by connecting DevS with CEP and/or by exporting from CEP
>> and importing to DevS  and/or through configs.
>>
>> (for now we'll go with configs)
>>
>>
> This is again have to consider how they are persisted in the CEP side at
> the moment and decide how we should do it.
>

There are no config files for this on the CEP side; my suggestion is to let
the user write one and add that.

Re: [Architecture] DevS CEP Plugin

2013-11-15 Thread Sriskandarajah Suhothayan
On Thu, Nov 14, 2013 at 10:05 PM, Lasantha Fernando wrote:

> Hi,
>
> On 15 November 2013 10:52, Mohanadarshan Vivekanandalingam  > wrote:
>
>> Hi Suho,
>>
>>
>> On Thu, Nov 14, 2013 at 8:40 PM, Sriskandarajah Suhothayan > > wrote:
>>
>>>
>>>
>>>
>>> On Thu, Nov 14, 2013 at 2:07 AM, Harshana Martin wrote:
>>>
>>>> Hi All,
>>>>
>>>> Please see my comments inline.
>>>>
>>>>
>>>> I think we will also need to go for CEP 3.0.1 release with this plugin,
>>> because currently there is no CApp Deployer in CEP 3.0.0.
>>>
>>>
>>>> On Thu, Nov 14, 2013 at 11:43 AM, Sriskandarajah Suhothayan <
>>>> s...@wso2.com> wrote:
>>>>
>>>>>
>>>>> Hi all,
>>>>>
>>>>> As we are planing to go for the CEP 3.0.0 plugin, I think we have to
>>>>> focus more on the its GUI and the usability aspects of it.
>>>>>
>>>>> CEP has two main concepts
>>>>> 1. Streams
>>>>> 2. Execution Plan.
>>>>>
>>>>> Execution Plan creation can look like the CEP 3.0.0 UI.
>>>>>
>>>>> But for Streams I think we have to do some Improvements. I'm not
>>>>> expecting all this to be done for the next release, but this kind of a 
>>>>> long
>>>>> term vision, we have to find what should be and can be done now and 
>>>>> execute
>>>>> them. Please give your comments and improvements.
>>>>>
>>>>> *1*.We need to have some sort of virtual Stream Store in DevS itself,
>>>>> this will allow us to select streams from drop down at the Execution Plan
>>>>> creation GUI.
>>>>>
>>>>> 1.1 This Stream Store will be populated by connecting DevS with CEP
>>>>> and/or by exporting Streams from CEP and importing to DevS  and/or through
>>>>> configs.
>>>>>
>>>>> (for now we'll go with configs)
>>>>>
>>>>>
>>>> I believe these Streams are stored in the registry. In that we can
>>>> provide users following options to select a stream as in ESB Editor.
>>>>
>>>> 1. From Workspace - Locate and list the stream definitions in the
>>>> Eclipse Workspace
>>>> 2. From Registry - Allow user to browse registry of the CEP and select
>>>> from it.
>>>>
>>>> This approach is consistent across our other tools and users will feel
>>>> comfortable around this since it is the general practice in in DevS.
>>>>
>>> Great,
>>>
>>> The stream store for CEP can change to Registry, Cassandra, etc. I
>>> think we need to fix this in the CEP side because now we have issues
>>> when integrating CEP with BAM. All the stream related calls need to go
>>> via DataBridge Stream Definition Store.
>>>
>>
> +1 for plugging in the databridge stream definition store which has the
> necessary abstractions to switch to registry,cassandra,in-memory when
> needed.
>
>
>>  To my understanding Eclipse need to call to a Service of DataBridgeStream 
>> Definition Store and import the streams
>>>
>>>
>>  Is this possible ?
>>> else when we add a config file that need to override the streams in the
>>> DataBridge Stream Definition Store.
>>>
>>>
>>>>> *2*. We can have a similar UI of CEP for Stream creation
>>>>>
>>>>
>>>> +1
>>>>
>>>>>
>>>>> *3*. Event Builder and Formatters will be associated to the Streams.
>>>>>
>>>>> 3.1 Stream listing UI will list its associated Builders and Formatters
>>>>> under it.  Event Builder and Formatters won't have a separate listing
>>>>> page/GUI. Therefore Builder and Formatter can be only created after
>>>>> creating the Stream.
>>>>>
>>>>>
>>>>> 3.2 Need to figure out a proper way to export new/modified streams and
>>>>> apply that to CEP.
>>>>>
>>>>>
>>>> Correct. Previously we used to have just one file. Now that we have
>>>> multiple files, we may have to introduce a packaging mechanism for them
>>>> with a new deployer. Need to discuss this further whether we can reuse the
>>>> existing Registry R

Re: [Architecture] Due Nov. 19 CEP 3.0/BAM 2.4 Release Answers

2013-11-20 Thread Sriskandarajah Suhothayan
+1

Thanks Daya for the analysis

Suho




On Wed, Nov 20, 2013 at 8:52 AM, Mohanadarshan Vivekanandalingam <
mo...@wso2.com> wrote:

> Hi Daya,
>
> Great work.. :) Happy to hear that CEP 3.0.0 improved than 2.1.0.. We can
> do better analysis in future..
>
> Thanks & Regards,
> Mohan
>
> --
> *V. Mohanadarshan*
> *Software Engineer,*
> *Data Technologies Team,*
> *WSO2, Inc. http://wso2.com  *
> *lean.enterprise.middleware.*
>
> email: mo...@wso2.com
> phone:(+94) 771117673
>



-- 

*S. Suhothayan*
Associate Technical Lead,
 *WSO2 Inc. *http://wso2.com
* *
lean . enterprise . middleware


*cell: (+94) 779 756 757 | blog: http://suhothayan.blogspot.com/
twitter: http://twitter.com/suhothayan
 | linked-in:
http://lk.linkedin.com/in/suhothayan *
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] WSO2 CEP/Siddhi Storm Integration

2013-12-08 Thread Sriskandarajah Suhothayan
I'm working on the Siddhi syntax for distributed processing.

We can use the below execution plan for distributed processing.


[Sample execution plan XML; the markup was stripped by the mail archive.
What survives: the namespace http://wso2.org/carbon/eventprocessor, the
description "This execution plan is used to identify the possible fraud
transaction", and an execution mode of local|active-passive|distributed.]
Here we'll have three modes of execution
1. local
2. active-passive
3. distributed

*Local mode*
This is the one we have now.

* Active-passive*
Here there will be two nodes, one active and the other passive. There will
be a handshake protocol between the active and passive nodes; this will be
used for state replication and syncing when a node goes down and rejoins.

*Distributed*
Here we use Annotations (These are ignored on the other modes), the
"parallel" annotation denotes the parallelism level and it can be "full"
for fully distributed, "petition" for distribute according to the
partition, or "single" for no distribution.
In the "petition" case all the queries need to be partitioned by the same
partition. We can also use curly braces {} to denote grouping
of parallelism whereby forcing all the queries to fall on the same Siddhi
instance.
We can combine storms reliable messaging and snapshot persistence to
achieve reliable messaging but this still needs more investigation.

Currently we'll mainly focus on the active-passive case, as it will provide
reliable and fault-tolerant message processing easily; at the same time
we'll also work on the Storm integration for the distributed case.

Thoughts?

Suho





On Wed, Nov 27, 2013 at 11:07 AM, Sanjiva Weerawarana wrote:

> +1 .. excellent job getting this off the ground! I'd love to see the
> numbers in a real distributed set up :).
>
>
> On Wed, Nov 27, 2013 at 1:47 PM, Srinath Perera  wrote:
>
>> Hi All,
>>
>> I have written a Siddhi bolt that you can use to run Siddhi using Storm
>> in a distributed setup.
>>
>> You can create a SiddhiBolt(s) given any Siddhi query like following.
>>
>> SiddhiBolt siddhiBolt = new SiddhiBolt(
>> new String[]{ "define stream PlayStream1 ( sid string, ts long,
>> x double, y double, z double, a double, v double);"},
>> new String[]{ "from PlayStream1#window.timeBatch(1sec) select
>> sid, avg(v) as avgV insert into AvgRunPlay;" },
>> new String[]{"AvgRunPlay"});
>>
>> Then those bolts can be used within Storm topology like any other bolt.
>> However, the name of components and streams used in CEP queries should
>> match.
>>
>> TopologyBuilder builder = new TopologyBuilder();
>> builder.setSpout("PlayStream1", new FootballDataSpout(), 1);
>> builder.setBolt("AvgRunPlay", siddhiBolt1,
>> 1).shuffleGrouping("PlayStream1");
>>
>> builder.setBolt("FastRunPlay", siddhiBolt2,1).shuffleGrouping("AvgRunPlay");
>> builder.setBolt("LeafEacho", new EchoBolt(),
>> 1).shuffleGrouping("FastRunPlay");
>>
>> I have done a quick performance test and got about 140K TPS in local
>> cluster. We need to test using distributed setup. Lasantha will integrate
>> this with CEP code base.
>>
>> Some potential TODO are
>> 1) Write two new bolts for Siddhi that support reliable processing and
>> transaction processing using Storm constructs. (for cases where we need
>> high reliability while processing)
>> 2) Integrate this with our data agent so we can send events into a Storm
>> setup as well.
>> 3) Extend the Siddhi language to support distributed processing, so the
>> above topology can be written in the Siddhi language itself.
>>
>> If performance confirmed to be in the same range, given stability of
>> Storm, I think we can go with Storm for planned Siddhi distributed
>> processing.
>>
>> Thanks
>> Srinath
>>
>> Code for the bolt can be found in
>> https://svn.wso2.org/repos/wso2/people/srinath/projects/siddhiStormIntegration/src/org/wso2/siddhi/storm/SiddhiBolt.java
>> .
>>
>> Code can be found from
>> https://svn.wso2.org/repos/wso2/people/srinath/projects/siddhiStormIntegration
>>
>> --
>> 
>> Srinath Perera, Ph.D.
>>   Director, Research, WSO2 Inc.
>>   Visiting Faculty, University of Moratuwa
>>   Member, Apache Software Foundation
>>   Research Scientist, Lanka Software Foundation
>>   Blog: http://srinathsview.blogspot.com/
>>   Photos: http://www.flickr.com/photos/hemapani/
>>Phone: 0772360902
>>
>> ___
>> Architecture mailing list
>> Architecture@wso2.org
>> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>>
>>
>
>
> --
> Sanjiva Weerawarana, Ph.D.
> Founder, Chairman & CEO; WSO2, Inc.;  http://wso2.com/
> email: sanj...@wso2.com; office: +1 650 745 4499 x5700; cell: +94 77 787
> 6880 | +1 650 265 8311
> blog: http://sanjiva.weerawarana.org/
> Lean . Enterprise . Middleware
>
>
>


-- 

*S. Suhothayan*
Associate Technical Lead,
 *WSO2 Inc.

Re: [Architecture] Persisting runtime throttle data

2014-01-03 Thread Sriskandarajah Suhothayan
Is there any possibility of using distributed CEP/Siddhi here? Because with
Siddhi we can have some flexibility in the way we want to throttle,
and throttling is a common use case of CEP. Its underlying architecture also
uses Hazelcast or Storm for distributed processing.

Regards
Suho


On Tue, Dec 24, 2013 at 8:54 AM, Manoj Fernando  wrote:

> +1.  Changing caller contexts in to a Hazlecast map would require some
> significant changes to the throttle core, which may eventually be
> re-written.
>
> Will update the design.
>
> Thanks,
> Manoj
>
>
> On Mon, Dec 23, 2013 at 4:09 PM, Srinath Perera  wrote:
>
>> Manoj, the above plan looks good.
>>
>> I chatted with Azeez, and we cannot register an entry listener as I
>> mentioned before, because Hazelcast does not support entry listeners for
>> atomic longs.
>>
>> --Srinath
>>
>>
>> On Mon, Dec 23, 2013 at 11:15 AM, Manoj Fernando  wrote:
>>
>>> Short update after the discussion with Azeez.
>>>
>>> - The need to re-write the throttle core is still at large, so the best
>>> option was to see how we can decouple the persistence logic from the
>>> throttle core (at least as much as possible).
>>> - A cluster-updatable global counter will be included in the
>>> ThrottleContext.  The idea is that each node will periodically broadcast
>>> its local counter info to the members in the cluster, and the
>>> ThrottleConfiguration will update the value of the global counter by
>>> summing up the local counter values.
>>> - The ThrottleConfiguration will also push the global counter values to
>>> the Axis2 Configuration Context, as K, V pairs identified by the
>>> ThrottleContext ID.
>>> - A new platform component needs to be written to read the
>>> throttle-related Axis2 Config Context list and persist it periodically
>>> (duration configurable).  The throttle core will have no visibility into
>>> this persistence logic, so this will be completely decoupled.
>>> - So who should do the persistence?  We can start with letting all nodes
>>> persist first, but later (or in parallel) we can improve Hazelcast's
>>> leader election (if that's not already there), so that the leader takes
>>> the responsibility of persisting.
>>> - The counters will be read off the persistence store at the time
>>> Hazelcast leader election takes place? (An alternative is to load the
>>> global counters at the init of ThrottleConfiguration, but that means
>>> coupling the throttle core with persistence.)
>>>
>>> I will update the design accordingly.
>>>
>>> Any more thoughts or suggestions?
>>>
>>> Regards,
>>> Manoj
>>>
>>>
>>> On Thu, Dec 19, 2013 at 12:30 PM, Manoj Fernando wrote:
>>>
 +1. Let me setup a time.

 Regards,
 Manoj


 On Thursday, December 19, 2013, Srinath Perera wrote:

> We need Azeez's feedback. Shall you, myself, and Azeez chat sometime
> and decide on the first Arch design?
>
>
> On Thu, Dec 19, 2013 at 11:55 AM, Manoj Fernando wrote:
>
> Hi Srinath,
>
> That sounds like a much cleaner solution.  We can perhaps use the
> native map-store declarative [1], which I think does something similar.  It
> may sound a little silly to ask... but are we keeping Hazelcast active in a
> single-node environment as well? :) Otherwise we will have to handle
> persistence on a single node in a different way.  This is with the
> assumption of needing to persist throttle data in a single-node environment
> as well (but questioning whether we really need to do that is not totally
> invalid IMO).
>
> Shall we go ahead with the Hazelcast option targeting cluster
> deployments then?
>
> - Manoj
>
> [1] https://code.google.com/p/hazelcast/wiki/MapPersistence
>
>
> On Thu, Dec 19, 2013 at 10:51 AM, Srinath Perera wrote:
>
> Another way to do this is to use Hazelcast and then use "through cache"
> or "change listeners" in Hazelcast for persistence.
>
> --Srinath
>
>
> On Tue, Dec 17, 2013 at 4:49 PM, Manoj Fernando wrote:
>
> +1 for persisting through a single (elected?) node, and let Hazelcast
> do the replication.
>
> I took into consideration the need to persist periodically instead of
> at each and every request (by spawning a separate thread that has access
> to the callerContext map)... so yes... we should think in the same way for
> replicating the counters across the cluster as well.
>
> Instead of using a global counter, can we perhaps use the last-updated
> timestamp of each CallerContext?  It's actually not a single counter we
> need to deal with; each CallerContext instance will have separate
> counters mapped to their throttling policy AFAIK.  Therefore, I think it's
> probably better to update CallerContext instances based on the last-update
> timestamp.
>
> WDYT?
>
> If we agree, then I need to figure out how to make delayed replication on
> haz

Re: [Architecture] Persisting runtime throttle data

2014-01-13 Thread Sriskandarajah Suhothayan
Siddhi supports having Execution Plans, each of which can be mapped to one of
the current policies. I believe this will reduce the complexity of
the throttling execution logic.
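
To make that concrete, here is a minimal plain-Java sketch of the kind of per-policy logic such an execution plan would encapsulate: a fixed-window request counter per caller. All names are illustrative; this is not the actual throttle core or Siddhi API.

```java
import java.util.*;

// Hypothetical sketch of per-caller throttling: admit at most `limit`
// requests per caller within a sliding time window of `windowMillis`.
public class WindowThrottle {
    private final int limit;
    private final long windowMillis;
    private final Map<String, Deque<Long>> hits = new HashMap<>();

    public WindowThrottle(int limit, long windowMillis) {
        this.limit = limit;
        this.windowMillis = windowMillis;
    }

    // Returns true if the request is admitted, false if throttled.
    public synchronized boolean admit(String callerId, long nowMillis) {
        Deque<Long> q = hits.computeIfAbsent(callerId, k -> new ArrayDeque<>());
        // Evict timestamps that have fallen out of the window.
        while (!q.isEmpty() && nowMillis - q.peekFirst() >= windowMillis) {
            q.pollFirst();
        }
        if (q.size() >= limit) {
            return false;
        }
        q.addLast(nowMillis);
        return true;
    }
}
```

An execution plan per policy would amount to one such rule (window length, limit) selected by the caller's tier.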

Suho


On Mon, Jan 13, 2014 at 1:34 PM, Manoj Fernando  wrote:

> Yes, this is something important to consider when we re-write the throttle
> core eventually.  However, the persistence logic we want to bring in will
> not have any tight coupling with the throttle core.  As per the design we
> have finalized now, the throttle persistence module will retrieve the
> counters from the Axis2 context, and as long as the context is updated by
> the core (irrespective of the implementation), the persistence core will be
> re-usable.
>
> One thing we should consider is the backward compatibility with current
> throttle policy definitions IF we decide to bring in Siddhi into the
> picture.  In the case of API Manager for example, I think users are more
> used to managing policies the way it is done now (i.e. tier.xml), so IMO we
> should continue to support that.  Is there such a thing as a policy
> definition plugin for Siddhi, btw (maybe not... right?)?
>
> Regards,
> Manoj
>
>
> On Fri, Jan 3, 2014 at 4:55 PM, Sriskandarajah Suhothayan 
> wrote:
>
>> Is there any possibility of using Distributed CEP/Siddhi here? Because
>> with Siddhi we can have some flexibility in the way we want to throttle
>> and throttling is a common use case of CEP. Its underlying architecture
>> also uses Hazelcast or Storm for distributed processing.
>>
>> Regards
>> Suho
>>
>>
>> On Tue, Dec 24, 2013 at 8:54 AM, Manoj Fernando  wrote:
>>
>>> +1.  Changing caller contexts in to a Hazlecast map would require some
>>> significant changes to the throttle core, which may eventually be
>>> re-written.
>>>
>>> Will update the design.
>>>
>>> Thanks,
>>> Manoj
>>>
>>>
>>> On Mon, Dec 23, 2013 at 4:09 PM, Srinath Perera wrote:
>>>
>>>> Manoj, above plan look good.
>>>>
>>>> I chatted with Azeez, and we cannot register a Entry listener as I
>>>> mentioned before because hazecast does not support entry listeners for
>>>> atomic long.
>>>>
>>>> --Srinath
>>>>
>>>>
>>>> On Mon, Dec 23, 2013 at 11:15 AM, Manoj Fernando wrote:
>>>>
>>>>> Short update after the discussion with Azeez.
>>>>>
>>>>> - The need to re-write the throttle core is still at large, so the
>>>>> best was to see how we can decouple the persistence logic from the 
>>>>> throttle
>>>>> core (at least as much as possible).
>>>>> - A cluster updatable global counter will be included to the
>>>>> ThrottleContext.  The idea is that each node will periodically broadcast
>>>>> the local counter info to the members in the cluster and the
>>>>> ThrottleConfiguration will update the value of the Global counter summing
>>>>> up the local counter values.
>>>>> - The ThrottleConfiguration will also push the global counter values
>>>>> to the Axis2 Configuration Context; a K, V pairs identified by the
>>>>> ThrottleContext ID.
>>>>> - A new platform component needs to be written to read the throttle
>>>>> related Axis2 Config Context list and persist them periodically (duration
>>>>> configurable).  The throttle core will have no visibility into this
>>>>> persistence logic, so this will be completely decoupled.
>>>>> - So who should do the persistence?  We can start with letting all
>>>>> nodes to persist first, but later (or in parallel) we can improve the
>>>>> Hazlecast's leader election (if that's not already there), so that the
>>>>> leader takes the responsibility of persisting.
>>>>> - The counters will be read off the persistence store at the time of
>>>>> Hazlecast Leader election takes place? (An alternative is to load the
>>>>> global counters at the init of ThrottleConfiguration but that means
>>>>> coupling throttle core with persistence.)
>>>>>
>>>>> I will update the design accordingly.
>>>>>
>>>>> Any more thoughts or suggestions?
>>>>>
>>>>> Regards,
>>>>> Manoj
>>>>>
>>>>>
>>>>> On Thu, Dec 19, 2013 at 12:30 PM, Manoj Fernando wrote:
>>>>>
>>>>>> +1. Let me setup a time.
&g

Re: [Architecture] [C5] Clustering API

2014-01-16 Thread Sriskandarajah Suhothayan
We also need an election API.

E.g., for certain tasks only one/a few nodes can be responsible, and if such
a node dies someone else needs to take over that task.

Here the user should be able to give the task key and should be able to find
out whether he is responsible for the task.

It is also important that the election logic is pluggable based on the task.

Regards
Suho


On Thu, Jan 16, 2014 at 4:56 PM, Afkham Azeez  wrote:

>
>
>
> On Thu, Jan 16, 2014 at 4:55 PM, Kishanthan Thangarajah <
> kishant...@wso2.com> wrote:
>
>> Adding more.
>>
>> Since we will follow the whiteboard pattern for adding new
>> MembershipListeners, we don't need to have the methods
>> (*addMembershipListener*, *removeMembershipListener*) explicitly at API
>> level. Users will implement their MembershipListeners and register them as
>> OSGi services. The clustering component will discover these and add them to
>> the cluster impl.
>>
>>
> +1
>
>
>
>>
>> On Wed, Jan 15, 2014 at 3:03 PM, Afkham Azeez  wrote:
>>
>>> Anjana & Suho,
>>> Please review this & let us know whether these APIs address your
>>> requirements.
>>>
>>> Azeez
>>>
>>>
>>> On Wed, Jan 15, 2014 at 1:40 PM, Kishanthan Thangarajah <
>>> kishant...@wso2.com> wrote:
>>>
 This thread is to discuss $subject.

 Our current clustering APIs contain stuff that is a mixture of both
 user-level and developer-level APIs. We will have to separate these out
 with a clear definition.

 For the clustering API (user level), we will have the following methods. We
 can discuss clustering SPIs on a separate thread.

 *void sendMessage(ClusterMessage clusterMessage);*

 *void sendMessage(ClusterMessage clusterMessage,
 List members);*

 *List getMembers();*

 *void addMembershipListener(MembershipListener membershipListener);*

 *void removeMembershipListener(MembershipListener
 membershipListener);*

 In here we also thought of having MembershipListener (A listener which
 gets notified when changes occur in Membership) related API at user level.
 This will be useful when user wants to get some event notification when the
 current membership changes. Adding a new MembershipListener will follow the
 white board pattern.

 The API for MembershipListener

 *void memberAdded(MembershipEvent event);*

 *void memberRemoved(MembershipEvent event);*

 MembershipEvent will be of two types (member added or removed).

 Thoughts?

 Thanks,
 Kishanthan.
 --
 *Kishanthan Thangarajah*
 Senior Software Engineer,
 Platform Technologies Team,
 WSO2, Inc.
 lean.enterprise.middleware

 Mobile - +94773426635
 Blog - *http://kishanthan.wordpress.com
 *
 Twitter - *http://twitter.com/kishanthan
 *

>>>
>>>
>>>
>>> --
>>> *Afkham Azeez*
>>> Director of Architecture; WSO2, Inc.; http://wso2.com
>>> Member; Apache Software Foundation; http://www.apache.org/
>>> * *
>>> *email: **az...@wso2.com* 
>>> * cell: +94 77 3320919 <%2B94%2077%203320919> blog: *
>>> *http://blog.afkham.org* 
>>> *twitter: 
>>> **http://twitter.com/afkham_azeez*
>>> * linked-in: **http://lk.linkedin.com/in/afkhamazeez
>>> *
>>>
>>> *Lean . Enterprise . Middleware*
>>>
>>
>>
>>
>> --
>> *Kishanthan Thangarajah*
>> Senior Software Engineer,
>> Platform Technologies Team,
>> WSO2, Inc.
>> lean.enterprise.middleware
>>
>> Mobile - +94773426635
>> Blog - *http://kishanthan.wordpress.com
>> *
>> Twitter - *http://twitter.com/kishanthan *
>>
>
>
>
> --
> *Afkham Azeez*
> Director of Architecture; WSO2, Inc.; http://wso2.com
> Member; Apache Software Foundation; http://www.apache.org/
> * *
> *email: **az...@wso2.com* 
> * cell: +94 77 3320919 <%2B94%2077%203320919> blog: *
> *http://blog.afkham.org* 
> *twitter: **http://twitter.com/afkham_azeez*
> * linked-in: **http://lk.linkedin.com/in/afkhamazeez
> *
>
> *Lean . Enterprise . Middleware*
>
>
>


-- 

*S. Suhothayan*
Associate Technical Lead,
 *WSO2 Inc. *http://wso2.com
* *
lean . enterprise . middleware


*cell: (+94) 779 756 757 | blog: http://suhothayan.blogspot.com/
twitter: http://twitter.com/suhothayan
 | linked-in:
http://lk.linkedin.com/in/suhothayan *

Re: [Architecture] [C5] Clustering API

2014-01-16 Thread Sriskandarajah Suhothayan
Based on Anjana's suggestions, to support different products having
different ways of coordination, my suggestion is as follows.

//This has to be a *one time thing*; I'm not sure how we should have an API
//for this!
//ID is a Task or Group ID
//Algorithm-class can be a class or a name registered in Carbon (TBD)
void performElection(ID, Algorithm-class);

//Register the current node to do/join the Task denoted by the ID
void registerAsTaskWorker(ID);

//Check if the current node is the coordinator
boolean isCoordinator(ID);

//Get the coordinator for the ID.
NodeID getCoordinator(ID);

We also need a Listener for Coordinator

CoordinatorListener

  void coordinatorChanged(ID,NodeID);

WDYT?
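
A rough sketch of how that proposal could look in Java, with a trivial single-node stand-in for the election (all names hypothetical; a real implementation would delegate to Hazelcast and a pluggable election algorithm):

```java
import java.util.*;

// Hypothetical sketch of the proposed coordination API. Single-node
// stand-in: the first node to register for a task becomes its coordinator.
public class LocalCoordinationService {

    public interface CoordinatorListener {
        void coordinatorChanged(String taskId, String nodeId);
    }

    private final String localNodeId;
    private final Map<String, String> coordinators = new HashMap<>();
    private final List<CoordinatorListener> listeners = new ArrayList<>();

    public LocalCoordinationService(String localNodeId) {
        this.localNodeId = localNodeId;
    }

    public void addListener(CoordinatorListener l) {
        listeners.add(l);
    }

    // Register the current node as a worker for the task; the first
    // registrant becomes coordinator (a pluggable election algorithm
    // would make this decision in a real cluster).
    public synchronized void registerAsTaskWorker(String taskId) {
        if (!coordinators.containsKey(taskId)) {
            coordinators.put(taskId, localNodeId);
            for (CoordinatorListener l : listeners) {
                l.coordinatorChanged(taskId, localNodeId);
            }
        }
    }

    public synchronized boolean isCoordinator(String taskId) {
        return localNodeId.equals(coordinators.get(taskId));
    }

    public synchronized String getCoordinator(String taskId) {
        return coordinators.get(taskId);
    }
}
```

The listener callback corresponds to the proposed coordinatorChanged(ID, NodeID) notification.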

Suho


On Thu, Jan 16, 2014 at 8:32 PM, Anjana Fernando  wrote:

> Hi,
>
> On Thu, Jan 16, 2014 at 5:10 AM, Sriskandarajah Suhothayan 
> wrote:
>
>> We also need an election API,
>>
>> E.g for certain tasks only one/few node can be responsible and if that
>> node dies some one else need to take that task.
>>
>> Here user should be able to give the Task Key and should be able to get
>> to know whether he is responsible for the task.
>>
>> It is also important that the election logic is pluggable based on task
>>
>
> The task scenarios are similar to what we do in our scheduled tasks
> component. I'm not sure if that type of functionality should be included in
> this API, or did you mean, you need the election API to build on top of it?
> ..
>
> Also, another requirement we have is creating groups within a cluster.
> That is, when we work on the cluster, sometimes we need a node in a specific
> group/groups. And each group will have its own coordinator. So then,
> there wouldn't be a single coordinator for the full physical cluster. I
> know we can build this functionality on a higher layer than this API, but
> then, effectively the isCoordinator for the full cluster will not be used,
> and also, each component that uses similar group functionality will roll up
> their own implementation of this. So I'm thinking that if we build some
> robust group features into this API itself, it will be very convenient for
> its consumers.
>
> So what I suggest is like, while a member joins for the full cluster
> automatically, can we have another API method like, joinGroup(groupId),
> then later when we register a membership listener, we can give the groupId
> as an optional parameter to register a membership listener for a specific
> group. And as for the isCoordinator functionality, we can also overload
> that method to provide a gropuId, or else, in the membership listener
> itself, we can have an additional method like "coordinatorChanged(String
> memberId)" or else, maybe more suitable, "assumedCoordinatorRole()" or
> something like that to simply say, you just became the coordinator of this
> full cluster/group.
>
> Cheers,
> Anjana.
>
>
>>
>> Regards
>> Suho
>>
>>
>> On Thu, Jan 16, 2014 at 4:56 PM, Afkham Azeez  wrote:
>>
>>>
>>>
>>>
>>> On Thu, Jan 16, 2014 at 4:55 PM, Kishanthan Thangarajah <
>>> kishant...@wso2.com> wrote:
>>>
>>>> Adding more.
>>>>
>>>> Since we will follow the whiteboard pattern for adding new
>>>> MembershipListener's, we don't need to have the methods (
>>>> *addMembershipListener, **addMembershipListener*) explicitly at API
>>>> level. Users will implement their MembershipListener's and register it as
>>>> an OSGi service. The clustering component will discover these and add it
>>>> the cluster impl.
>>>>
>>>>
>>> +1
>>>
>>>
>>>
>>>>
>>>> On Wed, Jan 15, 2014 at 3:03 PM, Afkham Azeez  wrote:
>>>>
>>>>> Anjana & Suho,
>>>>> Please review this & let us know whether these APIs address your
>>>>> requirements.
>>>>>
>>>>> Azeez
>>>>>
>>>>>
>>>>> On Wed, Jan 15, 2014 at 1:40 PM, Kishanthan Thangarajah <
>>>>> kishant...@wso2.com> wrote:
>>>>>
>>>>>> This thread is to discuss about $subject.
>>>>>>
>>>>>> Our current clustering API's contains stuffs that are mixture of both
>>>>>> user level and developer level API. We will have to separate out these 
>>>>>> with
>>>>>> the clear definition.
>>>>>>
>>>>>> For clustering API (user level), we will have the following methods.
>>>

Re: [Architecture] [C5] Clustering API

2014-01-16 Thread Sriskandarajah Suhothayan
I'm OK to have a separate API to handle the task stuff, but in that case
will it have access to Hazelcast or other internal stuff?
And should it be a part of the kernel?

I'm not sure what bits and pieces we need from Hazelcast to create
this API, and exposing all of them will make the Caching API ugly :)
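
To make the grouping idea from earlier in the thread concrete, here is a small plain-Java sketch (hypothetical; no Hazelcast involved) where each group keeps an ordered member list and its coordinator is simply the oldest surviving member:

```java
import java.util.*;

// Hypothetical sketch of per-group coordination: joinGroup adds a member,
// the head of the insertion-ordered set is the coordinator, and removing
// the head implicitly promotes the next-oldest member.
public class GroupRegistry {
    private final Map<String, LinkedHashSet<String>> groups = new HashMap<>();

    public synchronized void joinGroup(String groupId, String nodeId) {
        groups.computeIfAbsent(groupId, k -> new LinkedHashSet<>()).add(nodeId);
    }

    public synchronized void leaveGroup(String groupId, String nodeId) {
        Set<String> members = groups.get(groupId);
        if (members != null) {
            members.remove(nodeId);
        }
    }

    // Coordinator is the oldest surviving member, or null for an empty group.
    public synchronized String coordinatorOf(String groupId) {
        Set<String> members = groups.get(groupId);
        if (members == null || members.isEmpty()) {
            return null;
        }
        return members.iterator().next();
    }
}
```

In a real cluster the membership changes would come from Hazelcast membership events rather than explicit join/leave calls.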

Regards,
Suho




On Fri, Jan 17, 2014 at 11:44 AM, Supun Malinga  wrote:

> Hi,
>
> Also in here we should consider the use cases of OC as well IMO..
>
> thanks,
>
>
> On Fri, Jan 17, 2014 at 11:24 AM, Afkham Azeez  wrote:
>
>> I think this is making clustering more specific to running tasks.
>> Handling tasks should be implemented at a layer above clustering.
>>
>>
>> On Fri, Jan 17, 2014 at 11:06 AM, Sriskandarajah Suhothayan <
>> s...@wso2.com> wrote:
>>
>>> Based on the Anjana's suggestions, to support different products having
>>> different way of coordination.
>>>
>>> My suggestion is as follows
>>>
>>> //This has to be a *one time thing*; I'm not sure how we should have an
>>> API for this!
>>> //ID is a Task or Group ID
>>> //Algorithm-class can be a class or a name registered in Carbon (TBD)
>>> void performElection(ID, Algorithm-class);
>>>
>>> //Register the current node to do/join the Task denoted by the ID
>>> void registerAsTaskWorker(ID);
>>>
>>> //Check if the current node is the coordinator
>>> boolean isCoordinator(ID);
>>>
>>> //Get the coordinator for the ID.
>>> NodeID getCoordinator(ID);
>>>
>>> We also need a Listener for Coordinator
>>>
>>> CoordinatorListener
>>>
>>>   void coordinatorChanged(ID,NodeID);
>>>
>>> WDYT?
>>>
>>> Suho
>>>
>>>
>>> On Thu, Jan 16, 2014 at 8:32 PM, Anjana Fernando wrote:
>>>
>>>> Hi,
>>>>
>>>> On Thu, Jan 16, 2014 at 5:10 AM, Sriskandarajah Suhothayan <
>>>> s...@wso2.com> wrote:
>>>>
>>>>> We also need an election API,
>>>>>
>>>>> E.g for certain tasks only one/few node can be responsible and if that
>>>>> node dies some one else need to take that task.
>>>>>
>>>>> Here user should be able to give the Task Key and should be able to
>>>>> get to know whether he is responsible for the task.
>>>>>
>>>>> It is also important that the election logic is pluggable based on task
>>>>>
>>>>
>>>> The task scenarios are similar to what we do in our scheduled tasks
>>>> component. I'm not sure if that type of functionality should be included in
>>>> this API, or did you mean, you need the election API to build on top of it?
>>>> ..
>>>>
>>>> Also, another requirement we have is, creating groups within a cluster.
>>>> That is, when we work on the cluster, sometimes we need a node a specific
>>>> group/groups. And it each group will have it's own coordinator. So then,
>>>> there wouldn't be a single coordinator for the full physical cluster. I
>>>> know we can build this functionality on a higher layer than this API, but
>>>> then, effectively the isCoordinator for the full cluster will not be used,
>>>> and also, each component that uses similar group functionality will roll up
>>>> their own implementation of this. So I'm thinking if we build in some
>>>> robust group features to this API itself, it will be very convenient for it
>>>> consumers.
>>>>
>>>> So what I suggest is like, while a member joins for the full cluster
>>>> automatically, can we have another API method like, joinGroup(groupId),
>>>> then later when we register a membership listener, we can give the groupId
>>>> as an optional parameter to register a membership listener for a specific
>>>> group. And as for the isCoordinator functionality, we can also overload
>>>> that method to provide a gropuId, or else, in the membership listener
>>>> itself, we can have an additional method like "coordinatorChanged(String
>>>> memberId)" or else, maybe more suitable, "assumedCoordinatorRole()" or
>>>> something like that to simply say, you just became the coordinator of this
>>>> full cluster/group.
>>>>
>>>> Cheers,
>>>> Anjana.
>>>>
>>>>
>>>>>
>>>>> Regards
>&

Re: [Architecture] HTTP Input Event Adaptor For CEP

2014-01-17 Thread Sriskandarajah Suhothayan
I believe the 2nd approach will be clearer because, from the URL itself, we
know to which topic/service we are sending the event. This is also the
approach used in the old WS-Eventing implementation.
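
The routing implied by the second approach can be sketched in a few lines of plain Java (hypothetical names; the real adaptor would register transport-level endpoints from the adaptor configuration):

```java
import java.util.*;

// Hypothetical sketch of approach (2): the last path segment of a
// dynamically registered endpoint URL names the topic the event goes to.
public class EndpointRouter {
    private final Set<String> topics = new HashSet<>();

    // Called when an event adaptor configuration registers a topic,
    // which conceptually creates the endpoint /endpoint/<topic>.
    public void registerTopic(String topic) {
        topics.add(topic);
    }

    // Resolve e.g. "/endpoint/stockQuote" to "stockQuote", or null if
    // no such endpoint has been registered.
    public String resolveTopic(String path) {
        if (!path.startsWith("/endpoint/")) {
            return null;
        }
        String topic = path.substring("/endpoint/".length());
        return topics.contains(topic) ? topic : null;
    }
}
```

With approach (1), by contrast, the same lookup would read the topic from a custom header instead of the URL path.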

Suho


On Thu, Jan 16, 2014 at 8:09 PM, Mohanadarshan Vivekanandalingam <
mo...@wso2.com> wrote:

> Hi All,
>
> We have started working on implementing an HTTP input event adaptor for CEP.
> Using this adaptor, we can send any type of message (it need not be in a
> defined format) to CEP for processing.
> The HTTP event adaptor will have the ability to forward the incoming messages
> to a topic, based on the user configuration. Here, we can follow two
> approaches in developing the adaptor. We are looking
> for the best option from the two below.
>
> 1) We can have a single http endpoint (eg : 
> *https://localhost:9443/message_endpoint
> *) and all the users can send
> events to this specific endpoint. Here the user needs to set a custom header
> specifying the corresponding topic to which events need to be forwarded.
>
> 2) We can create dynamic endpoints based on the configuration given by the
> user. For example, if the topic is stockQuote, then the event adaptor can
> register a dynamic endpoint like *https://localhost:9443/endpoint/stockQuote
>  *. Then users can send
> events to the corresponding dynamic endpoint.
>
> Which would be the best option to follow between (1) and (2)? Appreciate
> your ideas.
>
> Thanks & Regards,
> Mohan
>
>
> --
> *V. Mohanadarshan*
> *Software Engineer,*
> *Data Technologies Team,*
> *WSO2, Inc. http://wso2.com  *
> *lean.enterprise.middleware.*
>
> email: mo...@wso2.com
> phone:(+94) 771117673
>
>
>


-- 

*S. Suhothayan*
Associate Technical Lead,
 *WSO2 Inc. *http://wso2.com
* *
lean . enterprise . middleware


*cell: (+94) 779 756 757 | blog: http://suhothayan.blogspot.com/
twitter: http://twitter.com/suhothayan
 | linked-in:
http://lk.linkedin.com/in/suhothayan *


Re: [Architecture] Should we have one place to define event streams?

2014-01-20 Thread Sriskandarajah Suhothayan
I would like to propose we switch to a model where we write queries and
>>>> streams and deploy them as a toolbox. Then, just mention the stream name in
>>>> the event publisher to keep things simple.
>>>>
>>>> --Srinath
>>>>
>>>>
>>>>
>>>> On Sat, Jan 19, 2013 at 5:41 PM, Tharindu Mathew wrote:
>>>>
>>>>> A store to discover already supported event streams seems to be a good
>>>>> idea, and one we can implement through a UI or a store.
>>>>> store.
>>>>>
>>>>> But, what do you mean by defining event streams everywhere?
>>>>>
>>>>> Clients can define any stream they want, but it is only defined at the
>>>>> stream definition store. If there is an error, it is shown in the stream
>>>>> definition store and returned to the client (I hope it is returned at
>>>>> least, otherwise it is a bug).
>>>>>
>>>>> This model was first proposed to get out of the eventing hell that we
>>>>> put ourselves into. Right now, if you want to publish something, you just
>>>>> define it and publish. You don't have to switch between multiple servers
>>>>> just to define and publish some events, which was extremely annoying.
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> On Sat, Jan 19, 2013 at 2:28 PM, Amila Suriarachchi wrote:
>>>>>
>>>>>>
>>>>>>
>>>>>> On Fri, Jan 18, 2013 at 7:54 PM, Sriskandarajah Suhothayan <
>>>>>> s...@wso2.com> wrote:
>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> On Fri, Jan 18, 2013 at 6:46 PM, Sanjiva Weerawarana <
>>>>>>> sanj...@wso2.com> wrote:
>>>>>>>
>>>>>>>> +1 for the stream definition store idea (a new use of the Store).
>>>>>>>>
>>>>>>>> +1
>>>>>>>
>>>>>>>> Suho can we change our client API to take the streamID as a param?
>>>>>>>> That way a user can look it up in the store and use it directly.
>>>>>>>>
>>>>>>> Currently we do have server-side APIs to get StreamDefinitions from
>>>>>>> a StreamId or StreamName & Version.
>>>>>>>
>>>>>>
>>>>>> I think we need to separate out the event definition from runtime
>>>>>> event publishing.
>>>>>>
>>>>>> If we look at how brokers are used in CEP, first users need to define
>>>>>> broker at the broker Manager and use the broker id at the CEP bucket 
>>>>>> level.
>>>>>> For event streams we can have default set of streams (which BAM 
>>>>>> publishers
>>>>>> and other default Agents use) and let users define at run time. At the
>>>>>> event publishing side they can use the stream id.
>>>>>>
>>>>>> thanks,
>>>>>> Amila.
>>>>>>
>>>>>>
>>>>>>>
>>>>>>>  Sanjiva.
>>>>>>>>
>>>>>>>>
>>>>>>>> On Fri, Jan 18, 2013 at 6:33 PM, Sriskandarajah Suhothayan <
>>>>>>>> s...@wso2.com> wrote:
>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On Fri, Jan 18, 2013 at 4:23 PM, Srinath Perera 
>>>>>>>>> wrote:
>>>>>>>>>
>>>>>>>>>> Hi Suho,
>>>>>>>>>>
>>>>>>>>>> Yes, users can define them. But they do not  always have to, and
>>>>>>>>>> we should recommend that users go and do that once rather than doing 
>>>>>>>>>> it
>>>>>>>>>> every time.
>>>>>>>>>>
>>>>>>>>>  Yes this is possible even now. First time when we define stream
>>>>>>>>> we get a streamId and then we can simply reuse that (even after 
>>>>>>>>> restart).
>>>>>>>>> The issue is, our clients are not capable to store the streamId
>>>>>>>>

Re: [Architecture] Are we missing a common EmailSenderService

2014-01-21 Thread Sriskandarajah Suhothayan
Actually, in CEP we have a common mechanism to send notifications using
EventFormatter and OutputEventAdaptor. With these we can send any type of
notification.

You can simply install these features and integrate with them to send
Email/SMS.

Maybe you can use these components and implement an easy-to-use UI just
for Email too.

Regards
Suho


On Tue, Jan 21, 2014 at 2:55 PM, Ashansa Perera  wrote:

> Yes Harsha, there is an email verification service (for confirming users),
> but not a common service to send emails. But we still do have the methods
> for building the configuration, etc. If you look at the given reference in
> my initial mail, you will see that we have used those to complete the
> service call.
>
>
> On Tue, Jan 21, 2014 at 1:32 PM, Harsha Thirimanna wrote:
>
>> +1 for this,
>>  Just go through the "email-verification" component. Implementation and
>> config load for the email are there already.
>>
>>
>> *Harsha Thirimanna*
>> Senior Software Engineer; WSO2, Inc.; http://wso2.com
>> * *
>> * email: **hars...@wso2.com* * cell: +94 71 5186770*
>> * twitter: **http://twitter.com/ *
>> *harshathirimann linked-in: **http:
>> **//www.linkedin.com/pub/harsha-thirimanna/10/ab8/122
>> *
>>
>>  *Lean . Enterprise . Middleware*
>>
>>
>>
>> On Tue, Jan 21, 2014 at 1:26 PM, Pushpalanka Jayawardhana > > wrote:
>>
>>> Hi,
>>>
>>> +1.
>>> I also recently had a look at this component to find possibilities to
>>> send HTML formatted emails.
>>>
>>> If we can have a separate email sending service it would be better if we
>>> add this support as well.
>>> This was easily achievable with Apache Commons 
>>> Emaillibrary,
>>>  keeping the freedom to send alternate plain/text as well.
>>>
>>> Thanks,
>>>
>>> Pushpalanka Jayawardhana
>>>
>>> Software Engineer
>>>
>>> WSO2 Lanka (pvt) Ltd
>>> [image: 
>>> Facebook]
>>>  [image:
>>> Twitter]
>>>  [image:
>>> LinkedIn]
>>>  [image:
>>> Blogger]
>>>  [image:
>>> SlideShare]
>>> Mobile: +94779716248
>>> 
>>>
>>>
>>> On Tue, Jan 21, 2014 at 1:07 PM, Ashansa Perera wrote:
>>>
 Do we have a *service* which can be used to send the emails?
 I found an email sender component under components/stratos. But still
 it is specific to stratos.
 Wouldn't it be useful to have a common email sending service where you
 can give the configuration file as a parameter?

 We in AppFactory wanted a similar service and we have created a one[1]
 But as I feel a common email sending service would be useful platform
 wide.
 WDYT?

 [1]
 https://svn.wso2.org/repos/wso2/scratch/appfactorycc/components/appfac/org.wso2.carbon.appfactory.utilities/1.1.0/src/main/java/org/wso2/carbon/appfactory/utilities/services/EmailSenderService.java
 --
 Thanks & Regards,

 Ashansa Perera
 Software Engineer
 WSO2, Inc



>>>
>>>
>>> --
>>>
>>> Pushpalanka Jayawardhana
>>>
>>> Software Engineer
>>>
>>> WSO2 Lanka (pvt) Ltd
>>> [image: 
>>> Facebook]
>>>  [image:
>>> Twitter]
>>>  [image:
>>> LinkedIn]
>>>  [image:
>>> Blogger]
>>>  [image:
>>> SlideShare]
>>> Mobile: +94779716248
>>> http://c.content.wso2.com/signatures/us.png
>>>
>>>
>>>
>>

Re: [Architecture] [Dev] Proposed code repository restructuring & move to GitHub

2014-01-21 Thread Sriskandarajah Suhothayan
How about WSO2 Commons projects, e.g. Siddhi?

Currently it's in commons, under dependencies/commons.

Where should we have it?

I believe projects like Siddhi also need to be top-level repos, maybe
"commons-siddhi", and it doesn't need to be in dependencies/commons anymore.

WDYT?

Suho


On Tue, Jan 21, 2014 at 3:13 PM, Eranda Sooriyabandara wrote:

> Hi All
> Please find the updated governance component in [1].
>
> thanks
> Eranda
>
> [1]. https://github.com/wso2/carbon-governance
>
>
> On Tue, Jan 21, 2014 at 2:56 PM, Shariq Muhammed  wrote:
>
>> On Tue, Jan 21, 2014 at 2:42 PM, Eranda Sooriyabandara 
>> wrote:
>>
>>> Hi Shariq,
>>> Yeah, we may not needed those to be build again and again. So let's add
>>> related stubs to service-stubs directory in each repo.
>>>
>>
>> Yea lets structure it that way.
>>
>>
>>>
>>> thanks
>>> Eranda
>>>
>>>
>>> On Tue, Jan 21, 2014 at 12:51 PM, Shariq Muhammed wrote:
>>>
 On Tue, Jan 21, 2014 at 12:38 PM, Kishanthan Thangarajah <
 kishant...@wso2.com> wrote:

> Yes, we don't need to separately say "service-stubs", it should be
> under the components level as just another component.
>

 Initially we extracted out the service stubs because it doesn't change
 frequently. So we can reduce the build time because we don't need to do
 wsdl2java in each build cycle. Looks like we are going to add it back?


>
>
>
> On Tue, Jan 21, 2014 at 12:30 PM, Eranda Sooriyabandara <
> era...@wso2.com> wrote:
>
>> Hi Kicha,
>> There will be no service stubs directory it will be a additional
>> component in the same level as BE + FE components.
>>
>> thanks
>> Eranda
>>
>>
>> On Tue, Jan 21, 2014 at 11:41 AM, Kishanthan Thangarajah <
>> kishant...@wso2.com> wrote:
>>
>>> Hi Eranda,
>>>
>>> Where have you put the service-stubs related to governance
>>> component? It should come under the same repo as carbon-component-
>>> governance.
>>>
>>>
>>> On Tue, Jan 21, 2014 at 12:29 AM, Eranda Sooriyabandara <
>>> era...@wso2.com> wrote:
>>>
 Hi All,
 As a PoC I just completed the carbon-component-governance. Please
 find it in [1] and let me know your comments and suggestions. Please 
 keep
 in mind that this is not in a buildable state since other
 components need to build before this.

 thanks
 Eranda

 [1] https://github.com/wso2/carbon-component-governance


 On Fri, Jan 17, 2014 at 9:26 PM, Afkham Azeez wrote:

>  [Sorry for the very long mail. I want to document all that I had
> in mind & the stuff we discussed. I would recommend all devs to
> take some time to read this]
>
>
> I would like to summarize he discussion we had a couple of days
> back.
>
> *The Problems*
> The problems we are trying to solve are as follows:
>
> 1. Trunk & branches structures being completely different
> 2. Branches containing directories with version numbers
> 3. It is impossible to move to GitHub with the current structure
> because of #2
> 4. It is very easy to break the build by changing already released
> code. The room for human error is high.
> 5. Bamboo builds are eternally broken because the build fails at
> some point & Bamboo cannot continue any further
> 6. When we branch, the trunk quickly becomes obsolete, and
> remains in that broken state until the next major platform release.
> 7. Everybody has to build all components/features, even if those
> are not related to their products
> 8. Fixed versions in branches instead of using SNAPSHOT versions.
> This makes it impossible to upload build artifacts to Maven/Nexus
> repos. This leads to #7.
> 9. Impossible to integrate code quality tools such as EraInsight
> because of #5
>
> *Proposed solution*
> We have come up with the following solution after much
> deliberation & thought.
>
> Rationale:
> We started looking at other open source projects out there. We
> took Axis2 as an example. Axis2 had many dependencies includingAxiom,
> XmlSchema, Woden, WSS4J etc. Those 3rd party dependencies were
> also developed by some Axis2 contributors, but we never branched all 
> of
> those together and brought them into the same code branch. We used to 
> start
> what we used to call a release train, where the upstream code would 
> have to
> be released first before the downstream code such as Axis2 & Synapse 
> could
> be released. This way, we never had any of the problems outlined 
> above.
>
> If you look at 

Re: [Architecture] Are we missing a common EmailSenderService

2014-01-21 Thread Sriskandarajah Suhothayan
In that case we have to fix that.

Suho


On Tue, Jan 21, 2014 at 3:15 PM, Harsha Thirimanna  wrote:

> Yes, I just mentioned about the code implementation is available in that
> module. It is not published as a common service.  :).
>
>
> *Harsha Thirimanna*
> Senior Software Engineer; WSO2, Inc.; http://wso2.com
> * *
> * email: **hars...@wso2.com* * cell: +94 71 5186770*
> * twitter: **http://twitter.com/ *
> *harshathirimann linked-in: **http:
> **//www.linkedin.com/pub/harsha-thirimanna/10/ab8/122
> *
>
>  *Lean . Enterprise . Middleware*
>
>
>
> On Tue, Jan 21, 2014 at 2:55 PM, Ashansa Perera  wrote:
>
>> Yes Harsha, there is an email verification service ( for confirming user)
>> , but not a common service to send emails. But still we do have the methods
>> for building the configuration, etc. If you look at the given reference in
>> my initial mail, you will see that we have used those to complete the
>> service call.
>>
>>
>> On Tue, Jan 21, 2014 at 1:32 PM, Harsha Thirimanna wrote:
>>
>>> +1 for this,
>>>  Just go through the "email-verification" component. Implementation and
>>> config load for the email are there already.
>>>
>>>
>>> *Harsha Thirimanna*
>>> Senior Software Engineer; WSO2, Inc.; http://wso2.com
>>> * *
>>> * email: **hars...@wso2.com* * cell: +94 71 5186770*
>>> * twitter: **http://twitter.com/ *
>>> *harshathirimann linked-in: **http:
>>> **//www.linkedin.com/pub/harsha-thirimanna/10/ab8/122
>>> *
>>>
>>>  *Lean . Enterprise . Middleware*
>>>
>>>
>>>
>>> On Tue, Jan 21, 2014 at 1:26 PM, Pushpalanka Jayawardhana <
>>> la...@wso2.com> wrote:
>>>
 Hi,

 +1.
 I also recently had a look at this component to find possibilities to
 send HTML formatted emails.

 If we can have a separate email sending service it would be better if
 we add this support as well.
 This was easily achievable with Apache Commons 
 Emaillibrary,
  keeping the freedom to send alternate plain/text as well.

 Thanks,

 Pushpalanka Jayawardhana

 Software Engineer

 WSO2 Lanka (pvt) Ltd
 [image: 
 Facebook]
  [image:
 Twitter]
  [image:
 LinkedIn]
  [image:
 Blogger]
  [image:
 SlideShare]
 Mobile: +94779716248
 


 On Tue, Jan 21, 2014 at 1:07 PM, Ashansa Perera wrote:

> Do we have a *service* which can be used to send the emails?
> I found an email sender component under components/stratos. But still
> it is specific to stratos.
> Wouldn't it be useful to have a common email sending service where you
> can give the configuration file as a parameter?
>
> We in AppFactory wanted a similar service and we have created a one[1]
> But as I feel a common email sending service would be useful platform
> wide.
> WDYT?
>
> [1]
> https://svn.wso2.org/repos/wso2/scratch/appfactorycc/components/appfac/org.wso2.carbon.appfactory.utilities/1.1.0/src/main/java/org/wso2/carbon/appfactory/utilities/services/EmailSenderService.java
> --
> Thanks & Regards,
>
> Ashansa Perera
> Software Engineer
> WSO2, Inc
>
> ___
> Architecture mailing list
> Architecture@wso2.org
> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>
>


 --

 Pushpalanka Jayawardhana

 Software Engineer

 WSO2 Lanka (pvt) Ltd
 [image: 
 Facebook]
  [image:
 Twitter]
  [image:
 LinkedIn]
  [image:
 Blogger]

Re: [Architecture] Are we missing a common EmailSenderService

2014-01-21 Thread Sriskandarajah Suhothayan
I believe the easy approach is to get the code from CEP Email Output
Adaptor and Email-verification code and create a simple E-mail sender
service.

This service can then be used by both CEP Email Output Adaptor and
Email-verification.

In this process please keep in mind the performance aspects and make it as
general as possible so it can be used by all.

Regards
Suho


On Tue, Jan 21, 2014 at 5:28 PM, Gayan Dhanushka  wrote:

> +1 for having a common email sender. I have seen in one support issue
> related to IS whether we can send emails when registering tenants, adding
> users to tenants etc. This scenario is valid for the whole products stack.
>
> Gayan Dhanuska
> Software Engineer
> http://wso2.com/
> Lean Enterprise Middleware
>
> Mobile
> 071 666 2327
>
> Office
> Tel   : 94 11 214 5345
>  Fax  : 94 11 214 5300
>
> Twitter : https://twitter.com/gayanlggd
>
>
> On Tue, Jan 21, 2014 at 3:37 PM, Sriskandarajah Suhothayan 
> wrote:
>
>> In that case we have to fix that.
>>
>> Suho
>>
>>
>> On Tue, Jan 21, 2014 at 3:15 PM, Harsha Thirimanna wrote:
>>
>>> Yes, I just mentioned about the code implementation is available in that
>>> module. It is not published as a common service.  :).
>>>
>>>
>>> *Harsha Thirimanna*
>>> Senior Software Engineer; WSO2, Inc.; http://wso2.com
>>> * <http://www.apache.org/>*
>>> * email: **hars...@wso2.com* * cell: +94 71 5186770*
>>> * twitter: **http://twitter.com/ <http://twitter.com/afkham_azeez>*
>>> *harshathirimann linked-in: **http:
>>> <http://lk.linkedin.com/in/afkhamazeez>**//www.linkedin.com/pub/harsha-thirimanna/10/ab8/122
>>> <http://www.linkedin.com/pub/harsha-thirimanna/10/ab8/122>*
>>>
>>>  *Lean . Enterprise . Middleware*
>>>
>>>
>>>
>>> On Tue, Jan 21, 2014 at 2:55 PM, Ashansa Perera wrote:
>>>
>>>> Yes Harsha, there is an email verification service ( for confirming
>>>> user) , but not a common service to send emails. But still we do have the
>>>> methods for building the configuration, etc. If you look at the given
>>>> reference in my initial mail, you will see that we have used those to
>>>> complete the service call.
>>>>
>>>>
>>>> On Tue, Jan 21, 2014 at 1:32 PM, Harsha Thirimanna wrote:
>>>>
>>>>> +1 for this,
>>>>>  Just go through the "email-verification" component. Implementation
>>>>> and config load for the email are there already.
>>>>>
>>>>>
>>>>> *Harsha Thirimanna*
>>>>> Senior Software Engineer; WSO2, Inc.; http://wso2.com
>>>>> * <http://www.apache.org/>*
>>>>> * email: **hars...@wso2.com* * cell: +94 71 5186770*
>>>>> * twitter: **http://twitter.com/ <http://twitter.com/afkham_azeez>*
>>>>> *harshathirimann linked-in: **http:
>>>>> <http://lk.linkedin.com/in/afkhamazeez>**//www.linkedin.com/pub/harsha-thirimanna/10/ab8/122
>>>>> <http://www.linkedin.com/pub/harsha-thirimanna/10/ab8/122>*
>>>>>
>>>>>  *Lean . Enterprise . Middleware*
>>>>>
>>>>>
>>>>>
>>>>> On Tue, Jan 21, 2014 at 1:26 PM, Pushpalanka Jayawardhana <
>>>>> la...@wso2.com> wrote:
>>>>>
>>>>>> Hi,
>>>>>>
>>>>>> +1.
>>>>>> I also recently had a look at this component to find possibilities to
>>>>>> send HTML formatted emails.
>>>>>>
>>>>>> If we can have a separate email sending service it would be better if
>>>>>> we add this support as well.
>>>>>> This was easily achievable with Apache Commons 
>>>>>> Email<http://commons.apache.org/proper/commons-email/userguide.html>library,
>>>>>>  keeping the freedom to send alternate plain/text as well.
>>>>>>
>>>>>> Thanks,
>>>>>>
>>>>>> Pushpalanka Jayawardhana
>>>>>>
>>>>>> Software Engineer
>>>>>>
>>>>>> WSO2 Lanka (pvt) Ltd
>>>>>> [image: 
>>>>>> Facebook]<http://s.wisestamp.com/links?url=http%3A%2F%2Fwww.facebook.com%2Fpushpalanka>
>>>>>>  [image:
>>>>>> Twitter]<http://s.wisestamp.com/links?url=http%3A%2F%2Ftwitter.com%2FPushpalanka>
>>>>>&

Re: [Architecture] Are we missing a common EmailSenderService

2014-01-21 Thread Sriskandarajah Suhothayan
I think with this effort we can also remove the axis2 SMTP transport
bindings.

Suho


On Tue, Jan 21, 2014 at 5:48 PM, Sriskandarajah Suhothayan wrote:

> I believe the easy approach is to get the code from CEP Email Output
> Adaptor and Email-verification code and create a simple E-mail sender
> service.
>
> This service can then be used by both CEP Email Output Adaptor and
> Email-verification.
>
> In this process please keep in mind the performance aspects and make it
> general as possible so it can be used by all.
>
> Regards
> Suho
>
>
> On Tue, Jan 21, 2014 at 5:28 PM, Gayan Dhanushka  wrote:
>
>> +1 for having a common email sender. I have seen in one support issue
>> related to IS whether we can send emails when registering tenants, adding
>> users to tenants etc. This scenario is valid for the whole products stack.
>>
>> Gayan Dhanuska
>> Software Engineer
>> http://wso2.com/
>> Lean Enterprise Middleware
>>
>> Mobile
>> 071 666 2327
>>
>> Office
>> Tel   : 94 11 214 5345
>>  Fax  : 94 11 214 5300
>>
>> Twitter : https://twitter.com/gayanlggd
>>
>>
>> On Tue, Jan 21, 2014 at 3:37 PM, Sriskandarajah Suhothayan > > wrote:
>>
>>> In that case we have to fix that.
>>>
>>> Suho
>>>
>>>
>>> On Tue, Jan 21, 2014 at 3:15 PM, Harsha Thirimanna wrote:
>>>
>>>> Yes, I just mentioned about the code implementation is available in
>>>> that module. It is not published as a common service.  :).
>>>>
>>>>
>>>> *Harsha Thirimanna*
>>>> Senior Software Engineer; WSO2, Inc.; http://wso2.com
>>>> * <http://www.apache.org/>*
>>>> * email: **hars...@wso2.com* * cell: +94 71 5186770*
>>>> * twitter: **http://twitter.com/ <http://twitter.com/afkham_azeez>*
>>>> *harshathirimann linked-in: **http:
>>>> <http://lk.linkedin.com/in/afkhamazeez>**//www.linkedin.com/pub/harsha-thirimanna/10/ab8/122
>>>> <http://www.linkedin.com/pub/harsha-thirimanna/10/ab8/122>*
>>>>
>>>>  *Lean . Enterprise . Middleware*
>>>>
>>>>
>>>>
>>>> On Tue, Jan 21, 2014 at 2:55 PM, Ashansa Perera wrote:
>>>>
>>>>> Yes Harsha, there is an email verification service ( for confirming
>>>>> user) , but not a common service to send emails. But still we do have the
>>>>> methods for building the configuration, etc. If you look at the given
>>>>> reference in my initial mail, you will see that we have used those to
>>>>> complete the service call.
>>>>>
>>>>>
>>>>> On Tue, Jan 21, 2014 at 1:32 PM, Harsha Thirimanna 
>>>>> wrote:
>>>>>
>>>>>> +1 for this,
>>>>>>  Just go through the "email-verification" component. Implementation
>>>>>> and config load for the email are there already.
>>>>>>
>>>>>>
>>>>>> *Harsha Thirimanna*
>>>>>> Senior Software Engineer; WSO2, Inc.; http://wso2.com
>>>>>> * <http://www.apache.org/>*
>>>>>> * email: **hars...@wso2.com* * cell: +94 71 5186770*
>>>>>> * twitter: **http://twitter.com/ <http://twitter.com/afkham_azeez>*
>>>>>> *harshathirimann linked-in: **http:
>>>>>> <http://lk.linkedin.com/in/afkhamazeez>**//www.linkedin.com/pub/harsha-thirimanna/10/ab8/122
>>>>>> <http://www.linkedin.com/pub/harsha-thirimanna/10/ab8/122>*
>>>>>>
>>>>>>  *Lean . Enterprise . Middleware*
>>>>>>
>>>>>>
>>>>>>
>>>>>> On Tue, Jan 21, 2014 at 1:26 PM, Pushpalanka Jayawardhana <
>>>>>> la...@wso2.com> wrote:
>>>>>>
>>>>>>> Hi,
>>>>>>>
>>>>>>> +1.
>>>>>>> I also recently had a look at this component to
>>>>>>> find possibilities to send HTML formatted emails.
>>>>>>>
>>>>>>> If we can have a separate email sending service it would be better
>>>>>>> if we add this support as well.
>>>>>>> This was easily achievable with Apache Commons 
>>>>>>> Email<http://commons.apache.org/proper/commons-email/userguide.html>library,
>>>>>>>  keeping the freedom to send alternate pla

Re: [Architecture] [Dev] Proposed code repository restructuring & move to GitHub

2014-01-21 Thread Sriskandarajah Suhothayan
Yes, most of the commons projects are not under active development; we can
only move the actively developed projects.

One suggestion: when looking at the WSO2 org on GitHub, we have lots of junk
repos. Are we going to remove them?
Or can't we have a separate GitHub org called "WSO2 Middleware Platform" and
have all the Carbon-related repos there?

WDYT?

Suho


On Jan 22, 2014 8:02 AM, "Afkham Azeez"  wrote:

>
>
>
> On Wed, Jan 22, 2014 at 7:28 AM, Eranda Sooriyabandara wrote:
>
>> Hi all,
>>
>>
>> On Tuesday, January 21, 2014, Senaka Fernando  wrote:
>>
>>> Hi all,
>>>
>>> +1 for Sagara's proposal. These projects have a life outside the Carbon
>>> Platform. But, we need to find a place to host them. If everything ends up
>>> on GitHub should these be in their too? If so, are they a WSO2 repository?
>>> Or is it a separate TLP?
>>>
>>
>> Here Sagara, Senaka came up with a good point. We don't need each and
>> every one to have a git repo in wso2 space but we can still use svn for
>> managing these codes where not much of the developers of carbon worry
>> about. WDYT?
>
>
> Let's bring these dependencies into GitHub or any other repo as & when
> needed. The changes we make to such dependencies need to be minimized.
>
>
>>
>> Thanks
>> Eranda
>>
>>
>>> Thanks,
>>> Senaka.
>>>
>>>
>>> On Tue, Jan 21, 2014 at 9:11 PM, Sagara Gunathunga wrote:
>>>
>>>
>>>
>>>
>>> On Tue, Jan 21, 2014 at 3:34 PM, Sriskandarajah Suhothayan <
>>> s...@wso2.com> wrote:
>>>
>>> How about WSO2 Commons projects, E.g Siddhi ?
>>>
>>> Currently its in commons and under dependencies/commons
>>>
>>> Where should we have this?
>>>
>>> I believe projects like Siddhi also need to be top level repos may be
>>> "commons-siddhi" and it don't need be in dependencies/commons anymore.
>>>
>>> WDYT?
>>>
>>>
>>> Ideally these projects should be treated as external dependencies to
>>> Carbon code base just like Apache XMLSchema or Axiom only difference here
>>> is those project are managed by WSO2. We should create separate repos for
>>> each of them and Carbon should only take them as Maven dependencies only.
>>> For naming I guess "Siddhi" is a good name because "commons" part does not
>>> make any meaning here.
>>>
>>> in my POV these should be the project we need to move out of Carbon code
>>> base.
>>>
>>>
>>> Jaggery ( we just need to get rid of SVN externals as code base it
>>> already on GitHub)
>>> Caramel
>>> Charon
>>> Balana
>>> Siddhi
>>>
>>> Thanks !
>>>
>>>
>>>
>>>
>>>
>>> Suho
>>>
>>>
>>> On Tue, Jan 21, 2014 at 3:13 PM, Eranda Sooriyabandara 
>>> wrote:
>>>
>>> Hi All
>>> Please find the updated governance component in [1].
>>>
>>> thanks
>>> Eranda
>>>
>>> [1]. https://github.com/wso2/carbon-governance
>>>
>>>
>>> On Tue, Jan 21, 2014 at 2:56 PM, Shariq Muhammed wrote:
>>>
>>> On Tue, Jan 21, 2014 at 2:42 PM, Eranda Sooriyabandara 
>>> wrote:
>>>
>>> Hi Shariq,
>>> Yeah, we may not needed those to be build again and again. So let's add
>>> related stubs to service-stubs directory in each repo.
>>>
>>>
>>> Yea lets structure it that way.
>>>
>>>
>>>
>>> thanks
>>> Eranda
>>>
>>>
>>> On Tue, Jan 21, 2014 at 12:51 PM, Shariq Muhammed wrote:
>>>
>>> On Tue, Jan 21, 2014 at 12:38 PM, Kishanthan Thangarajah
>>>
>>>
>>>
>>> *[image: http://wso2.com] <http://wso2.com> Senaka Fernando*
>>> Senior Technical Lead; WSO2 Inc.; http://wso2.com
>>>
>>>
>>>
>>> * Member; Apache Software Foundation; http://apache.org
>>> <http://apache.org>E-mail: senaka AT wso2.com <http://wso2.com>**P: +1
>>> 408 754 7388 <%2B1%20408%20754%207388>; ext: 51736*;
>>>
>>>
>>> *M: +94 77 322 1818 <%2B94%2077%20322%201818> Linked-In:
>>> http://linkedin.com/in/senakafernando
>>> <http://linkedin.com/in/senakafernando>*Lean . Enterprise . Middleware
>>>
>>
>>
>> -

Re: [Architecture] CEP UI re-factoring and adding much more functionality

2014-01-21 Thread Sriskandarajah Suhothayan
On Wed, Jan 22, 2014 at 11:18 AM, Lasantha Fernando wrote:

> Hi Mohan,
>
> +1 for the design. IMO, the in-flow and out-flow UI will be very useful to
> get an idea about how the events are flowing, which is currently a bit
> lacking in CEP, I think. Great addition!
>
> Will the user be able to sample events generated in the stream UI to test
> a flow, or will that part come under a separate component?
>

Based on the current plan, the Try-it for streams will become a separate
component. In the future, when we have this, we can integrate it with the
sample event generation UI.

Currently the sample event generation UI allows users to create sample
events, edit them, and finally copy and send them via curl, JMS, etc.

Suho


> Thanks,
> Lasantha
>
>
>
> On 21 January 2014 19:43, Mohanadarshan Vivekanandalingam 
> wrote:
>
>>
>> Hi All,
>>
>> As you already knew that we have done major improvements and changes in
>> CEP 3.0.0 (which is a complete re-write) specially in UI aspect. But we
>> found, there are some gaps that we can fix and improve the usability
>> experience further. These changes are targeted for next CEP release which
>> is version 3.1.0. And below UI improvements also targeted on CEP tooling
>> aspect.
>>
>> Please see the below figures which are mock-up design flow of the event
>> stream UI and execution plan UI. Based on the below design we are trying to
>> achieve the default-event concepts and also giving opportunity to advanced
>> event configurations also. Appreciate any ideas and suggestions on this...
>>
>> Thanks & Regards,
>> Mohan
>>
>>
>> --
>> *V. Mohanadarshan*
>> *Software Engineer,*
>> *Data Technologies Team,*
>> *WSO2, Inc. http://wso2.com  *
>> *lean.enterprise.middleware.*
>>
>> email: mo...@wso2.com
>> phone:(+94) 771117673
>>
>
>
>
> --
> *Lasantha Fernando*
> Software Engineer - Data Technologies Team
> WSO2 Inc. http://wso2.com
>
> email: lasan...@wso2.com
> mobile: (+94) 71 5247551
>



-- 

*S. Suhothayan*
Associate Technical Lead,
 *WSO2 Inc. *http://wso2.com
* *
lean . enterprise . middleware


*cell: (+94) 779 756 757 <%28%2B94%29%20779%20756%20757> | blog:
http://suhothayan.blogspot.com/  twitter:
http://twitter.com/suhothayan  | linked-in:
http://lk.linkedin.com/in/suhothayan *
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] Refactoring BAM Data agents and Data-bridge.

2014-02-24 Thread Sriskandarajah Suhothayan
+1. If there are any complications, please let us know.

Suho


On Mon, Feb 24, 2014 at 4:52 PM, Anjana Fernando  wrote:

> Hi guys,
>
> Yeah, data-bridge and data agents can be moved to the commons project.
> Here, in data-bridge, there is a BAM specific module there, called "
> org.wso2.carbon.databridge.datasink.cassandra", please create a new
> component with that module, and move it to project which has the BAM
> product. And also, data-agents specific to products should also be moved to
> the respective product locations, for example, I heard, AS have a AS
> specific data agent, that type of a data agent should not be in commons. If
> you guys need any assistance in doing these changes, please come and meet
> the BAM team.
>
> Cheers,
> Anjana.
>
>
> On Mon, Feb 24, 2014 at 4:15 PM, Sagara Gunathunga wrote:
>
>>
>>
>>
>> On Mon, Feb 24, 2014 at 3:57 PM, Geeth Munasinghe  wrote:
>>
>>> Hi
>>> According to proposed project architecture on github, we need to move
>>> data-bridge and data-agents to carbon-commons project. When moving these
>>> two to carbon-common project, need to make sure that there is no
>>> dependencies resolving from other projects such as BAM and CEP. Because
>>> most of the projects will be depend on carbon-commons, and carbon-commons
>>> will be an upstream project to many projects.
>>>
>>> Can someone from BAM and CEP team attend to this please ?
>>>
>>
>> carbon-commons project should not depend on any other carbon-* project to
>> avoid cyclic dependencies. In above case we need to identify what are the
>> server side and client side (Agents) components and move agents into
>> carbon-common so that other products can easily package them while server
>> side components can be keep with specific carbon-* project. BTW BAM/CEP
>> needs some module restructuring to facilitate this.
>>
>> Appreciate someone from BAM/CEP can help Geeth and Eranda to resolve this
>> issue as they planning to build product packs by tomorrow.
>>
>>
>> Thanks !
>>
>>
>>
>>>
>>> Thanks
>>> Geeth
>>>
>>>
>>> *G. K. S. Munasinghe *
>>> *Software Engineer,*
>>> *WSO2, Inc. http://wso2.com  *
>>> *lean.enterprise.middleware.*
>>>
>>> email: ge...@wso2.com
>>> phone:(+94) 777911226
>>>
>>
>>
>>
>> --
>> Sagara Gunathunga
>>
>> Senior Technical Lead; WSO2, Inc.;  http://wso2.com
>> V.P Apache Web Services;http://ws.apache.org/
>> Linkedin; http://www.linkedin.com/in/ssagara
>> Blog ;  http://ssagara.blogspot.com
>>
>>
>
>
> --
> *Anjana Fernando*
> Technical Lead
>  WSO2 Inc. | http://wso2.com
> lean . enterprise . middleware
>



-- 

*S. Suhothayan*
Associate Technical Lead,
 *WSO2 Inc. *http://wso2.com
* *
lean . enterprise . middleware


*cell: (+94) 779 756 757 | blog: http://suhothayan.blogspot.com/
twitter: http://twitter.com/suhothayan
 | linked-in:
http://lk.linkedin.com/in/suhothayan *
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture


Re: [Architecture] A few questions about WSO2 CEP/Siddhi

2014-03-09 Thread Sriskandarajah Suhothayan
Thanks for your interest in WSO2 CEP & Siddhi.

Sorry for the late reply, I somehow missed the mail.


On Fri, Mar 7, 2014 at 4:48 AM, Leo Romanoff  wrote:

> Hi,
>
> I've started playing with WSO2 CEP recently and after some experiments
> with it I collected a few questions, which I cannot answer by reading the
> docs.  Hopefully I'll have them answered by posting them on this list.
>
> First of all, some background information. I have some previous experience
> with Esper and Drools. Currently, I'm mostly interested in using Siddhi in
> embedded mode, i.e. as a library. I could built a few small test projects
> using it without any serious problems. But now I'm trying to understand
> some more complex things about it.
>
> So, here are my questions:
>
> 1) How many rules/queries can be defined in one engine. How does it affect
> performance?
>
>For example, can I define (tens of) thousands of queries using the same
> (or multiple) instance of SiddhiManager? Would it make processing much
> slower? Or is the speed not proportional to the number of queries? E.g.
> when a new event arrives, does Siddhi test it in a linear fashion against
> each query or does Siddhi keep an internal state machine that tries to
> match an event against all rules at once?
>

A SiddhiManager can have many queries. If you chain the queries in a
linear fashion, then all those queries will be executed one after the
other and you might see some performance degradation; but if you keep them
parallel, there won't be any issues.
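As a rough sketch (the stream and attribute names below are hypothetical,
not from the product docs), two filter queries that both read directly from
the input stream are parallel, while a query that consumes another query's
output stream is chained behind it:

```sql
define stream StockStream (symbol string, price double, volume long);

from StockStream[price > 100.0]
select symbol, price
insert into HighPriceStream;

from StockStream[volume > 10000]
select symbol, volume
insert into HighVolumeStream;

from HighPriceStream#window.length(100)
select symbol, avg(price) as avgPrice
insert into AvgHighPriceStream;
```

The first two queries are independent of each other, so adding more like
them should not slow the others down; the third only runs on events the
first query emits, so it sits in a chain behind it.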


>
> 2) Is it possible to easily disable/enable some queries?
>
> In my use-cases I have a lot of queries. Actually, I have a lot of tenants
> and each tenant may have something like 10-100 queries. Rather often (e.g.
> few times a day), tenants would like to disable/enable some of their
> queries. What is a proper way to do it? Is it a costly operation, i.e. does
> Siddhi need to perform a lot of processing to disable or enabled a query?
> Is it better to keep a dedicated SiddhiManager instance per tenant or is
> it OK to have one SiddhiManager instance which handles all those tenants
> with all their queries?
>
The general norm is to use a SiddhiManager per scenario, where each
scenario might contain one or more queries. With this model it is easy for
any tenant to add or remove a scenario, and it will not affect other
queries and tenants.

3) What is the semantics of distributed execution? I have found that Siddhi
> supports it by means of Hazelcast. But what does distributed execution
> means? E.g. what happens when I feed in an event at one of the instances?
> How can this distributed execution be controlled besides enabling/disabling
> it?
>
Currently Siddhi shares its state among its different nodes via Hazelcast.
We are currently working on alternative ways to improve its distribution
capability.

4) What is the semantics of async processing? I have found
> "setAsyncProcessing" method, but what would be the effect of enabling this
> kind of processing as compared to the usual way of operation? What are the
> benefits and what are the drawbacks? When should it be used?
>

By default Siddhi uses the request thread to do the event processing, but
if you want to hand over the data to another thread for processing, you can
enable async processing. It's useful if you are doing very complex,
time-consuming processing with Siddhi.

>
> 5) I figured out that Siddhi can persist its state and later on restore
> it. This is a cool feature, but I'd like to understand better what kind of
> information is being persisted. Is it only events whose processing is not
> finished yet? Does it include a set of queries currently defined in a
> given SiddhiManager? Does it include all of SiddhiManager's settings? What
> kind of information is restored and what kind of information should be
> provided again in addition to restored one? What is a typical situation
> when Siddhi persistence could/should be used?
>
>
It only stores the state information of the processing, e.g. the current
running average of an average calculation. This will be used when the
server recovers from a failure.
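For instance, in a hypothetical windowed average like the one below, the
persisted state would be the running average held by the window, not the
query definitions or the SiddhiManager settings, which would need to be
provided again on restore:

```sql
define stream TemperatureStream (deviceId string, temp double);

from TemperatureStream#window.length(1000)
select deviceId, avg(temp) as avgTemp
insert into AvgTempStream;
```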


> I hope that most of my questions are pretty simple to answer for those,
> who are familiar with Siddhi's architecture and its inner workings.
>
>
Hope this information is useful.


Regards,
Suho

Thanks,
>Leo
> ___
> Architecture mailing list
> Architecture@wso2.org
> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>



-- 

*S. Suhothayan*
Associate Technical Lead,
 *WSO2 Inc. *http://wso2.com
* *
lean . enterprise . middleware


*cell: (+94) 779 756 757 <%28%2B94%29%20779%20756%20757> | blog:
http://suhothayan.blogspot.com/  twitter:
http://twitter.com/suhothayan  | linked-in:
http://lk.linkedin.com/in/suhothayan *
_

[Architecture] CEP 3.1.0 ALPHA Released!

2014-03-13 Thread Sriskandarajah Suhothayan
WSO2 CEP team is pleased to announce the 3.1.0-ALPHA release. This release
is now available for download at
http://svn.wso2.org/repos/wso2/people/mohan/CEP_3.1.0_Alpha/wso2cep-3.1.0.zip
The documentation site for this release can be found at
https://docs.wso2.org/display/CEP310/WSO2+Complex+Event+Processor+Documentation

WSO2 Complex Event Processor (CEP) can be used to identify the most
meaningful events within an event cloud, analyze their impacts, and act
on them in real time. Built to be extremely high-performing, it offers
significant time savings and affordable acquisition. WSO2 CEP is
released under the Apache Software License 2.0.

*Key Features*

   - Default XML, JSON, Text and Map mapping for CEP
   - Support HA by letting one CEP node be active while the redundant
   CEP node drops all notifications while the active node is processing
   - Enable CEP to receive events via HTTP calls (HTTP Adaptor)
   - Enable CEP to store realtime outputs to an RDBMS data source.
   - Enhance JMS output notifications with header properties so that they
   can be routed better
   - Siddhi event table cache
   - Need a way to print/ inspect current state of CEP Engine
   - Replacing WS-Eventing with soap input/output adapter
   - Kafka Input/Output Adaptor
   - Enable CEP to write its logs for debugging purposes


*Installing*

The only prerequisite required to try this release out is an installation
of JDK 1.6 (Sun/Oracle JDK 1.6.0_23 or higher recommended)

   - Download the WSO2 CEP 3.1.0-ALPHA release from
   http://svn.wso2.org/repos/wso2/people/mohan/CEP_3.1.0_Alpha/wso2cep-3.1.0.zip
   - Extract the downloaded archive
   - Go to the 'bin' directory and execute wso2server.sh (Unix/Linux) or
   wso2server.bat (Windows)
   - Point your web browser to http://localhost:9443/carbon to get started


*How to Run Samples?*

 The steps to get the samples up and running are explained in the
documentation: https://docs.wso2.org/display/CEP310/Samples


*Bug Fixes and New Features in this Release*
Bug

   - [CEP-433 ] - Renew subscription
   page need "subscription mode" filed
   - [CEP-576 ] - Not forwarding
   thrift events according to tenant ID
   - [CEP-608 ] - Navigates to "Page
   not Found" when clicking on "
   http://docs.wso2.org/display/CEP300/Complex+Event+Processor+Documentation
   "
   - [CEP-624 ] - String attribute
   not working as expected in Rest-API
   - [CEP-629 ] -
   validateStreamDefinition() not working in Datapublisher - only checking the
   stream name not the stream id
   - [CEP-655 ] - Event builder does
   not order properties as defined in configuration when using map mapping
   - [CEP-665 ] - HTTP 500 error
   observed when login as tenant
   - [CEP-667 ] - error message needs
   to be modified when editing event stream
   - [CEP-669 ] - [intermitant] the
   execution plan throws a dead page and exception if we rename the stream and
   version
   - [CEP-670 ] - when creating an
   event formatter for a stream even if we set the advanced configurations
   upon saving null is shown on UI
   - [CEP-671 ] - Error thrown when
   adding event builder / formatter with xml input mapping type for http
   input/ output adapters
   - [CEP-672 ] - an error is thrown
   when we attempt to enable statistics for a stream
   - [CEP-673 ] - null pointer
   exception is thrown when attempting to send email and jdbc output events
   for an http input event
   - [CEP-674 ] -
   java.lang.NullPointerException: Tenant domain has not been set in
   CarbonContext Exceptions are thrown when attempting to test integration of
   CEP310 with ESB using BAM mediator
   - [CEP-675 ] - a null pointer
   exception is thrown when starting a CEP node with ./wso2server.sh
   -Ddisable.cassandra.server.startup=false and clustering is enabled
   - [CEP-676 ] - Can't add custom
   output mappings in event formatter
   - [CEP-681 ] - when attempting to
   send a message from CEP via http input to MB via ws-event output adapter
   CEP throws an "Error while dispatching events" java.lang.NullPointerException
   - [CEP-682 ] - system does not
   seem to allow an event table to be used as an export stream
   - [CEP-687 ] - "Caused by:
   java.lang.ClassNotFoundExceptio

Re: [Architecture] A few questions about WSO2 CEP/Siddhi

2014-03-25 Thread Sriskandarajah Suhothayan
On Tue, Mar 11, 2014 at 4:10 PM, Leo Romanoff  wrote:

> Hi,
>
> As you requested, I created the following issues:
>
> https://wso2.org/jira/browse/CEP-709 - about sharing stream
> representations
>
> https://wso2.org/jira/browse/CEP-710 - about performance problems due to
> linear iteration over rules
>
> https://wso2.org/jira/browse/CEP-711 - provide source jars for Siddhi
>
> > Siddhi does not support optional fields, we did this for performance
> actually.
>
> I see your point. But is it really true that it improves performance? And
> after all, I suggest supporting maps or optional fields only if a user
> demands it. I.e. current Object[] based approach is the default and only if
> a user explicitly asks for map-based representation or optional fields,
> then another representation is used.
>
> I could even imagine a mixture of both representations:
> - Object[] is still used for sending events
> - All mandatory fields (e.g. K fields) are the first K elements of this
> array.
> - All optional fields are put into a map which is passed as the last
> element of the array, i.e. it has index K.
> - If there are no optional elements allowed, there is no element at index K
>
> +1 for this approach, we'll add this to the road map
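
The hybrid layout proposed above could look roughly like this (a minimal, hypothetical Java sketch, not part of the Siddhi API; the class and method names are invented for illustration): the first K slots of the Object[] hold the mandatory fields, and a map at index K carries optional fields only when the stream allows them.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical hybrid event representation: K mandatory fields in the
// array, optional fields in a Map stored at index K (absent when the
// stream declares no optional fields).
public class HybridEvent {
    private final Object[] data;
    private final int mandatoryCount;

    public HybridEvent(int mandatoryCount, boolean allowOptional) {
        this.mandatoryCount = mandatoryCount;
        // one extra slot for the optional-field map only when allowed
        this.data = new Object[mandatoryCount + (allowOptional ? 1 : 0)];
    }

    public void setMandatory(int index, Object value) {
        if (index >= mandatoryCount)
            throw new IllegalArgumentException("not a mandatory slot");
        data[index] = value;
    }

    @SuppressWarnings("unchecked")
    public void setOptional(String name, Object value) {
        if (data.length == mandatoryCount)
            throw new IllegalStateException("optional fields not allowed");
        if (data[mandatoryCount] == null)
            data[mandatoryCount] = new HashMap<String, Object>();
        ((Map<String, Object>) data[mandatoryCount]).put(name, value);
    }

    public Object getMandatory(int index) {
        return data[index];
    }

    @SuppressWarnings("unchecked")
    public Object getOptional(String name) {
        if (data.length == mandatoryCount || data[mandatoryCount] == null)
            return null;
        return ((Map<String, Object>) data[mandatoryCount]).get(name);
    }
}
```

The mandatory path stays a plain array access, so the common case keeps its current performance; only events that actually carry optional fields pay for the map.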

Suho

Best regards,
>   -Leo
>
>   Srinath Perera  wrote at 10:34 on Tuesday, 11 March 2014:
>
>
> First of all, thank you very much for your explanations and
> clarifications! It is very interesting and useful!
>
> Let me ask a few more questions and provide a few comments.
>
> > Hi All, these questions and answers are very educating. Shall we add
> them to our doc FAQs?
>
> I think it would be a very good idea to add something like this to the
> FAQs or to create some sort of an "architecture and implementation
> overview" document.
>
> 1) How many rules/queries can be defined in one engine. How does it affect
> performance?
>
>For example, can I define (tens of) thousands of queries using the same
> (or multiple) instance of SiddhiManager? Would it make processing much
> slower? Or is the speed not proportional to the number of queries? E.g.
> when a new event arrives, does Siddhi test it in a linear fashion against
> each query or does Siddhi keep an internal state machine that tries to
> match an event against all rules at once?
>
>
> SiddhiManager can have many queries, and if you chain the queries in a
> linear fashion then all those queries will be executed
> one after the other and you might see some performance degradation, but
> if you have them in parallel then there won't be
> any issues.
>
> Well, before I got this answer, I created a few test-cases to check
> experimentally how it behaves. I created a single instance of a
> SiddhiManager, added 1 queries that all read from the same input
> stream, check if a specific attribute (namely, price) of an event is inside
> a given random interval ( [ price >= random_low and price <= random_high] )
> and output randomly into one of 100 streams. Then I measured the time
> required to process 100 events using this setup. I also did exactly the
> same experiment with Esper.
>
> My findings were that Siddhi is much slower than Esper in this setup.
> After looking into the internal implementations of both, I realized the
> reason. Siddhi processes all queries that read from the same input stream
> in a linear fashion, sequentially. Even if many of the queries have almost
> the same condition, no optimization attempts are done by Siddhi. Esper
> detects that many queries have a condition on the same variable and creates
> some sort of a decision tree. As a result, their running time is O(log N),
> whereas Siddhi needs O(N).
>
> I'm not saying that this test-case is very typical or important, but maybe
> Siddhi should try to analyze the complete set of queries and try to
> apply some optimizations, when it is possible? I.e. it is a bit of a global
> optimization applied. It could detect some common sub-expressions or
> sub-conditions in the queries and evaluate them only once, instead of doing
> it over and over again by evaluating each query separately.
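
For illustration, the difference between linear iteration and shared-condition dispatch can be sketched as below (hypothetical Java, not Siddhi's or Esper's internals; all names are invented): queries with an equality filter on the same attribute are grouped by the value they expect, so a single hash lookup replaces the O(N) scan over every registered query.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

// Hypothetical dispatcher: instead of checking each query's filter in
// turn, group queries by the value their equality filter expects and
// reach only the matching group with one hash lookup.
public class FilterDispatch {
    private final Map<Object, List<Consumer<Object[]>>> byValue = new HashMap<>();
    private final int attrIndex; // which event attribute the filters test

    public FilterDispatch(int attrIndex) {
        this.attrIndex = attrIndex;
    }

    // register a query with a filter of the form attr == expectedValue
    public void addQuery(Object expectedValue, Consumer<Object[]> query) {
        byValue.computeIfAbsent(expectedValue, v -> new ArrayList<>()).add(query);
    }

    public void onEvent(Object[] event) {
        // single lookup instead of iterating every registered query
        List<Consumer<Object[]>> matches = byValue.get(event[attrIndex]);
        if (matches != null) {
            for (Consumer<Object[]> q : matches) {
                q.accept(event);
            }
        }
    }
}
```

This only covers exact-match filters; range conditions like the price intervals above would need an interval tree or similar index, but the principle of evaluating the shared sub-condition once is the same.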
>
> After getting these first results, I changed the setup, so that each query
> uses one of many input streams (e.g. one of 300) instead of using the same
> one. This greatly improved the situation, because now the number of queries
> per input stream was much smaller and thus processing was way faster. But
> even in this setup it is still about 5-6 times slower than Esper in this
> situation.
>
>
>  Could you share your testcases? We can have a look. We have not worked
> much with 1000s of queries,
>
>
> Yes, I could provide my testcases - the source code is actually pretty
> small.  What is the best way to do it? Should I simply attach a ZIP file
> with my project or better create a small github project?
>
>
> Could you report a JIRA here https://wso2.org/jira/browse/CEP and attach
> it?
>
>
>
> but likely it is something we can fix without muc

Re: [Architecture] A few questions about WSO2 CEP/Siddhi

2014-03-25 Thread Sriskandarajah Suhothayan
On Thu, Mar 20, 2014 at 5:13 PM, Leo Romanoff  wrote:

>
>
> On Mon, Mar 10, 2014 at 11:19 AM, Leo Romanoff  wrote:
>
> [snip: quoted text duplicated from the previous messages in this thread]
>
> I'd like to get a bit more specific on this point. For the sake of
> simplicity, let's say I need to model a lot of sensors (e.g. 10 or
> 100). All sensors produce the same events, e.g. SensorEvent(id string,
> value float), where id is the unique id of a sensor.
>
> For some/all of the sensors there are a few queries (e.g. 2-10) that
> analyze events from a single or multiple sensors. Obviously, to be able to
> refer only to events from specific sensors, each such query uses one or
> multiple filters like SensorEvent(id=SensorN) to get only the expected
> events. Now imagine that I have 1 or even 10 such queries in total
> (for all my sensors).
>
> The processing using Siddhi gets very slow in this case, because all
> events are put into the same event stream and this event stream has a huge
> number of listeners, i.e. queries reading from it. Currently, Siddhi goes
> over each query in linear fashion and checks its conditions. There are some
> workarounds, as I described above, e.g. allocating one event stream per
> sensor and then pre-filtering events received from sensors and putting them
> into a related event stream. But this quickly gets annoying because the
> whole idea of CEP is to delegate this kind of optimizations/decisions to
> the CEP engine and avoid manual event processing.
>
> I see different alternatives to solve it in a proper way:
>
> - one of the alternatives was described above already. It is pretty
> generic. Siddhi analyzes all queries and figures out that certain
> conditions are (almost) the same. Therefore it can evaluate the condition
> only once (e.g. SensorEvent.id) and then dispatch based on its value. May
> be some sort of a search tree could be used to figure out a set of queries
> with a matching filter (Esper seems to do something like this). I have
> filed an issue for this already.
>
> - yet another alternative that I had in mind was to do something very similar
> to "partition by". In principle, "partition by" can already effectively
> split the input stream into partitions. The only problem is 
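
The "partition by" idea can be sketched as below (hypothetical Java, not Siddhi's implementation; the class is invented for illustration): one shared query keeps independent aggregate state per sensor id, so a single registered query serves every sensor and no per-sensor query list has to be scanned.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical per-key partitioning: one query definition, one state
// object per partition key (here, the sensor id), looked up in O(1).
public class PartitionedAverage {
    private static class State {
        double sum;
        long count;
    }

    private final Map<String, State> partitions = new HashMap<>();

    // Feed one SensorEvent(id, value); returns the running average
    // for that sensor's partition only.
    public double onEvent(String id, double value) {
        State s = partitions.computeIfAbsent(id, k -> new State());
        s.sum += value;
        s.count++;
        return s.sum / s.count;
    }
}
```

The point is that partitioning moves the per-sensor routing inside the engine: the user writes one query over SensorEvent, and the engine maintains one state instance per distinct id, rather than the user pre-filtering events into hundreds of separate streams.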

Re: [Architecture] Siddhi Time Seriers Extension - Performance

2014-04-11 Thread Sriskandarajah Suhothayan
The numbers are good :)

Seshika and WarunaP thanks for the great work

Suho


On Sat, Apr 12, 2014 at 10:01 AM, Sanjiva Weerawarana wrote:

> Wow those are big #s ... very impressive!
>
> What hardware? Is this over the network or in-memory driving Siddhi? What
> type of events? Whats the exact query?
>
> (I don't know anything about time series ..)
>
> Sanjiva.
>
>
> On Fri, Apr 11, 2014 at 5:35 AM, Seshika Fernando wrote:
>
>> Hi all,
>>
>> We tested the throughput performance of $subject. The Time Series
>> Extension is able to process an average of *82.8 Million *data points a
>> second in Simple Linear Regression, while it can handle an average of *18.6
>> Million *data points a second in Multiple Linear Regression.
>>
>> If we send in *n *events with *k* independent variables, the # of data
>> points is *n(k+1)*.
>> k+1 is taken considering y data stream and k number of x data streams.
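
As a quick check of the arithmetic above (a trivial helper; the class name is invented), n events with k independent variables give n * (k + 1) data points, counting the k x-streams plus the y-stream:

```java
// Data-point count for the regression throughput figures: each of the
// n events contributes k independent variables plus one dependent value.
public class DataPoints {
    public static long dataPoints(long events, int independentVars) {
        return events * (independentVars + 1);
    }
}
```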
>>
>> Please find attached the performance graphs.
>>
>> Best Regards,
>> Seshika
>>
>
>
>
> --
> Sanjiva Weerawarana, Ph.D.
> Founder, Chairman & CEO; WSO2, Inc.;  http://wso2.com/
> email: sanj...@wso2.com; office: (+1 650 745 4499 | +94  11 214 5345)
> x5700; cell: +94 77 787 6880 | +1 408 466 5099; voip: +1 650 265 8311
> blog: http://sanjiva.weerawarana.org/; twitter: @sanjiva
> Lean . Enterprise . Middleware
>



-- 

*S. Suhothayan*
Associate Technical Lead,
 *WSO2 Inc. *http://wso2.com
lean . enterprise . middleware


*cell: (+94) 779 756 757 | blog: http://suhothayan.blogspot.com/
twitter: http://twitter.com/suhothayan
 | linked-in:
http://lk.linkedin.com/in/suhothayan *
___
Architecture mailing list
Architecture@wso2.org
https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture

