Re: [Dev] HTTP event receiver to accept batch of same type events

2015-12-16 Thread Lasantha Fernando
Hi Udara,

You can do this using an HTTP receiver as well. For this, use XML input
mapping and provide a parent selector XPath; if there are multiple child
elements within that parent XML element, each of them will be taken as a
separate event.
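
For example, a batch payload for an XML mapping could look like the
following (a sketch only; the "events"/"event" element names and the
//events parent selector XPath are assumptions, not a fixed schema):

    <events>  <!-- parent selector XPath: //events -->
        <event>
            <meta_timestamp>1450241400000</meta_timestamp>
            <sensor_value>23.4</sensor_value>
        </event>
        <event>
            <meta_timestamp>1450241460000</meta_timestamp>
            <sensor_value>24.1</sensor_value>
        </event>
    </events>

Each <event> child element under the parent selector is mapped to one event.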

Alternatively, if you are using JSON input mapping and you send the payload
as a JSON array, the objects in the array will be taken as individual
events.
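
A minimal JSON batch, for illustration (the field names here are made up;
the actual attributes depend on your stream definition):

    [
        {"meta_timestamp": 1450241400000, "sensor_value": 23.4},
        {"meta_timestamp": 1450241460000, "sensor_value": 24.1}
    ]

Each object in the array becomes a separate event.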

Thanks,
Lasantha

On 16 December 2015 at 02:59, Udara Rathnayake  wrote:

> Hi,
>
> My requirement is to receive a set of events in a single request at DAS
> side.
> e.g., an array of events
>
> Is $subject possible? If not, what is the recommended approach other than
> an HTTP event receiver?
>
> --
> Regards,
> UdaraR
>



-- 
*Lasantha Fernando*
Senior Software Engineer - Data Technologies Team
WSO2 Inc. http://wso2.com

email: lasan...@wso2.com
mobile: (+94) 71 5247551


[Dev] WSO2 Carbon Kernel 4.4.3 Released

2015-12-16 Thread Nipuni Perera
WSO2 Carbon Kernel 4.4.3 Released


The Carbon team is pleased to announce the release of Carbon Kernel 4.4.3.
Listed below are the improvements and bug fixes that are introduced with
this release.


Improvements

   - [CARBON-14740] - Improve Carbon UI framework to load UI resources from
     fragment bundles
   - [CARBON-15340] - Carbon Clustering Membership Scheme Extension Model
   - [CARBON-15625] - Add a property to carbon.xml for file name validation
   - [CARBON-15628] - AbstractUserStoreManager should use the available
     method to get user store domain name
   - [CARBON-15659] - Worker proxy port is missing in carbon.xml

Bug Fixes

   - [CARBON-14224] - "Referential integrity constraint violation" exception
     returned when accessing service WSDLs when tenants are not loaded
   - [CARBON-14807] - Logging incorrect tenant ID in wso2carbon.log
   - [CARBON-15284] - wso2server.sh fails to start, and other batch file
     problems
   - [CARBON-15344] - [IE11] Left-hand menu items not visible to the user
   - [CARBON-15404] - When the webapp name is 't', or the webapp contains a
     folder named 't', CarbonContext operations fail
   - [CARBON-15420] - Roles added via the admin service are not checked when
     a user performs an action
   - [CARBON-15434] - Improved fix for IDENTITY-3264
   - [CARBON-15448] - LDAP/AD SSO logins cause two AD login events
   - [CARBON-15459] - Back-end validation doesn't work for the tenant admin
     while creating the tenant
   - [CARBON-15475] - Can't log in to the management console when
     proxycontextpath and webcontextroot are used together
   - [CARBON-15502] - LDAP filtering is not working properly
   - [CARBON-15536] - Cannot change the password of an AD user in an OU
     within the OU
   - [CARBON-15609] - SecurityVerificationTestCase fails intermittently on
     JDK 8
   - [CARBON-15613] - XSS and CSRF valve skip patterns do not work in tenant
     mode and should be able to get all script patterns from the extension
     file
   - [CARBON-15614] - ServerRestartHandler is not called when the server
     restarts
   - [CARBON-15615] - Memory leak errors in the backend on graceful server
     shutdown/restart
   - [CARBON-15616] - [email username] Wrong username displayed after
     signing in to the management console
   - [CARBON-15620] - Warning when running on JDK 8
   - [CARBON-15627] - validationQuery parameter in secondary user stores is
     not read by the code
   - [CARBON-15634] - Map-cleared event does not properly handle the local
     cache map clear scenario in HazelcastDistributedMapProvider
   - [CARBON-15637] - LDAP/AD users cannot be filtered based on claim values
   - [CARBON-15638] - Unable to authenticate a user with multiple entries in
     "GroupSearchBase"
   - [CARBON-15649] - [CarbonRemoteUserStoreManger] roles in the remote user
     store are not listed when filtered with "ALL-USER-STORE-DOMAINS"
   - [CARBON-15655] - LDAP/AD tenant creation hangs in an infinite loop when
     the "memberOf" property is defined
   - [CARBON-15663] - Cache identifier in the user name causes an NPE in
     UserRolesCache
   - [CARBON-15667] - Group list filter in the RWLDAP user store manager
     secondary user store config UI contains unbalanced parentheses
   - [CARBON-15674] - Request timeout when sending a PATCH request as a
     tenant user
   - [CARBON-15676] - Fix license header in carbon.xml
   - [CARBON-15677

Re: [Dev] PPaaS Artifact Migration Tool

2015-12-16 Thread Imesh Gunaratne
Udara should be able to provide more information on this, but AFAIU we
should be able to map PersistenceBean.isRequired ->
Persistence.localPersistanceRequired.
Maybe the other properties can be left for the user to fill in.
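
For instance, a converter could carry that flag over roughly as follows (a
sketch only; the accessor names are assumptions derived from the field names
quoted below, not the actual stub/bean API):

    // Sketch: map the PPaaS 4.0.0 Persistence flag onto the 4.1.0 bean.
    // Accessor names are assumptions based on the quoted field names.
    PersistenceBean bean = new PersistenceBean();
    bean.setRequired(persistence.getLocalPersistanceRequired());
    // localPersistanceRequiredTracker and localVolumesTracker have no direct
    // 4.1.0 counterpart here, so they could be dropped or left to user input.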

Thanks

On Wed, Dec 16, 2015 at 12:22 PM, Nishadi Kirielle  wrote:

> Hi Imesh,
>
> In the mapping between cartridges, there's a conflict between the
> Persistence.class of PPaaS 4.0.0 (package
> org.apache.stratos.cloud.controller.stub.pojo) and the
> PersistenceBean.class of PPaaS 4.1.0 (package
> org.apache.stratos.common.beans.cartridge).
>
> PersistenceBean
>   -isRequired
>
> Persistence
>   -localPersistanceRequired
>   -localPersistanceRequiredTracker
>   -localVolumesTracker
>
>
> Out of the above-mentioned attributes of PPaaS 4.0.0, which should be
> mapped to the isRequired attribute of PPaaS 4.1.0?
>
> Thank you
>
>
> On Wed, Dec 16, 2015 at 10:04 AM, Nishadi Kirielle 
> wrote:
> > Hi Imesh,
> >
> > We will send a pull request as soon as possible. The remaining tasks and
> > the rough time plan are as follows:
> >
> > 2015/12/16
> > 1. Update the bean classes and add dependencies
> > 2. Add test cases
> > 3. Make the file names configurable via a configuration file
> >
> > 2015/12/17
> > 4. Update errors in exception handling
> > 5. Update autoscaling policy mapping using average values
> > 6. Add default values to missing artifacts
> >
> > 2015/12/18
> > 7. Use of Apache http client library to write the rest client
> > 8. Update deploying scripts
> >
> > Thanks
> >
> > On Tue, Dec 15, 2015 at 4:33 PM, Imesh Gunaratne  wrote:
> >>
> >> Thanks Nishadi, maybe you can update the same PR or send a new one with
> >> the improvements. Please try to list down the remaining tasks and a
> >> rough time plan.
> >>
> >> Thanks
> >>
> >> On Tue, Dec 15, 2015 at 1:42 PM, Nishadi Kirielle 
> >> wrote:
> >>>
> >>> Hi,
> >>> We have implemented the conversion of cartridge subscription
> >>> artifacts to application signups and domain mapping subscriptions. In
> >>> addition, we were able to integrate the HTTPS connection with the
> >>> tool [1]. Currently, we are trying to deploy the artifacts in PPaaS
> >>> 4.1.0.
> >>>
> >>> [1]
> >>>
> https://github.com/nishadi/product-private-paas/tree/master/tools/migration/ppaas-artifact-converter
> >>>
> >>> Thanks
> >>>
> >>> On Tue, Dec 15, 2015 at 12:55 PM, Imesh Gunaratne 
> wrote:
> 
>  Hi Nishadi,
> 
>  Would you mind sharing the latest status of your efforts on this?
> 
>  Thanks
> 
>  On Wed, Dec 9, 2015 at 3:48 PM, Akila Ravihansa Perera
>   wrote:
> >
> > Hi,
> >
> > Here are some important improvements that you can do to boost your
> > development productivity and stability of the tool.
> >
> > 1. Develop a set of Unit tests with an embedded web container to mock
> > the PPaaS API.
> >
> >  - I've already done this as a demo for you to take as a reference
> > guide at [1]. I've used the Jetty web container as an embedded server
> > in my JUnit test case to mock the API. I've hosted the partition-list
> > API on my test server and asserted that the artifact loader reads the
> > partition list correctly.
> >
> > The advantage of this approach is that when you build your tool, it will
> > compile the code, test and validate the functionality, and package it.
> > You don't need to test the tool manually, which is very time-consuming.
> > You may have to refactor/re-organize the stuff I've developed to make
> > things clean.
> >
> > 2. Create a class ArtifactConverterRestClient as a wrapper around the
> > HttpClient library and use it to fetch resources from the Stratos API.
> > You can create methods like getPartitionList, getAutoscalePolicyList,
> > etc. in this. Decouple your conversion logic from the data transfer
> > layer as much as possible. This will make it easy for you to write
> > tests.
> >
> >
> > 3. Always use HTTPS if you are sending/receiving sensitive information.
> > In the current implementation the tool passes authentication credentials
> > to the server, so the transport should be secure.
> >
> > 4. Make user-input parameters configurable via configuration files.
> > Currently the tool expects username, password, URL, etc. as user inputs.
> > Make it read these values from a properties file and prompt only if
> > those values are missing.
> >
> > [1] https://github.com/nishadi/product-private-paas/pull/1
> >
> > Thanks.
> >
> > On Mon, Dec 7, 2015 at 12:19 PM, Nishadi Kirielle 
> > wrote:
> >>
> >> Hi,
> >>
> >> Thank you for the feedback.
> >>
> >> @Imesh:
> >> I have updated the README file [1] in markdown format and will
> >> start writing the Wiki page.
> >>
> >> @Gayan:
> >> In the initial version, we used sample JSON files as templates and
> >> used them as default values. But as it has some confli

Re: [Dev] PPaaS Artifact Migration Tool

2015-12-16 Thread Nishadi Kirielle
Thank you

On Wed, Dec 16, 2015 at 1:56 PM, Imesh Gunaratne  wrote:
> Udara should be able to provide more information on this, but AFAIU we
> should be able to map PersistenceBean.isRequired ->
> Persistence.localPersistanceRequired. Maybe the other properties can be
> left for the user to fill in.
>
> Thanks

[Dev] DAS going OOM frequently

2015-12-16 Thread Sumedha Rubasinghe
We have DAS Lite included in IoT Server, with several summarisation scripts
deployed. The server is going OOM frequently with the following exception.

Shouldn't this[1] method be synchronised?

[1]
https://github.com/wso2/carbon-analytics/blob/master/components/analytics-core/org.wso2.carbon.analytics.dataservice.core/src/main/java/org/wso2/carbon/analytics/dataservice/core/indexing/AnalyticsIndexedTableStore.java#L45
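
For context, the pattern in question looks roughly like this (paraphrased
from [1] for discussion; not the verbatim source):

    // Paraphrased sketch of AnalyticsIndexedTableStore [1]; not verbatim.
    private final Set<IndexedTableId> indexedTableIds = new HashSet<>();
    private IndexedTableId[] indexedTableArray;

    public void addIndexedTable(IndexedTableId id) {
        this.indexedTableIds.add(id);      // thread A mutates the HashSet...
        this.refreshIndexedTableArray();
    }

    private void refreshIndexedTableArray() {
        // ...while toArray() iterates it here; a concurrent add/remove makes
        // the fail-fast iterator throw ConcurrentModificationException
        // (line 46 in the trace below).
        this.indexedTableArray =
                this.indexedTableIds.toArray(new IndexedTableId[0]);
    }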


>>>
[2015-12-16 15:11:00,004]  INFO
{org.wso2.carbon.analytics.spark.core.AnalyticsTask} -  Executing the
schedule task for: Light_Sensor_Script for tenant id: -1234
[2015-12-16 15:11:00,005]  INFO
{org.wso2.carbon.analytics.spark.core.AnalyticsTask} -  Executing the
schedule task for: Magnetic_Sensor_Script for tenant id: -1234
[2015-12-16 15:11:00,005]  INFO
{org.wso2.carbon.analytics.spark.core.AnalyticsTask} -  Executing the
schedule task for: Pressure_Sensor_Script for tenant id: -1234
[2015-12-16 15:11:00,006]  INFO
{org.wso2.carbon.analytics.spark.core.AnalyticsTask} -  Executing the
schedule task for: Proximity_Sensor_Script for tenant id: -1234
[2015-12-16 15:11:00,006]  INFO
{org.wso2.carbon.analytics.spark.core.AnalyticsTask} -  Executing the
schedule task for: Rotation_Sensor_Script for tenant id: -1234
[2015-12-16 15:11:00,007]  INFO
{org.wso2.carbon.analytics.spark.core.AnalyticsTask} -  Executing the
schedule task for: Temperature_Sensor_Script for tenant id: -1234
[2015-12-16 15:11:01,132] ERROR
{org.wso2.carbon.ntask.core.impl.TaskQuartzJobAdapter} -  Error in
executing task: null
java.util.ConcurrentModificationException
at java.util.HashMap$HashIterator.nextEntry(HashMap.java:922)
at java.util.HashMap$KeyIterator.next(HashMap.java:956)
at java.util.AbstractCollection.toArray(AbstractCollection.java:195)
at
org.wso2.carbon.analytics.dataservice.core.indexing.AnalyticsIndexedTableStore.refreshIndexedTableArray(AnalyticsIndexedTableStore.java:46)
at
org.wso2.carbon.analytics.dataservice.core.indexing.AnalyticsIndexedTableStore.addIndexedTable(AnalyticsIndexedTableStore.java:37)
at
org.wso2.carbon.analytics.dataservice.core.AnalyticsDataServiceImpl.refreshIndexedTableStoreEntry(AnalyticsDataServiceImpl.java:512)
at
org.wso2.carbon.analytics.dataservice.core.AnalyticsDataServiceImpl.invalidateAnalyticsTableInfo(AnalyticsDataServiceImpl.java:525)
at
org.wso2.carbon.analytics.dataservice.core.AnalyticsDataServiceImpl.checkAndInvalidateTableInfo(AnalyticsDataServiceImpl.java:504)
at
org.wso2.carbon.analytics.dataservice.core.AnalyticsDataServiceImpl.setTableSchema(AnalyticsDataServiceImpl.java:495)
at
org.wso2.carbon.analytics.spark.core.sources.AnalyticsRelation.insert(AnalyticsRelation.java:162)
at org.apache.spark.sql.sources.InsertIntoDataSource.run(commands.scala:53)
at
org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult$lzycompute(commands.scala:57)
at
org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult(commands.scala:57)
at
org.apache.spark.sql.execution.ExecutedCommand.doExecute(commands.scala:68)
at
org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:88)
at
org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:88)
at
org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:147)
at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:87)
at
org.apache.spark.sql.SQLContext$QueryExecution.toRdd$lzycompute(SQLContext.scala:950)
at
org.apache.spark.sql.SQLContext$QueryExecution.toRdd(SQLContext.scala:950)
at org.apache.spark.sql.DataFrame.<init>(DataFrame.scala:144)
at org.apache.spark.sql.DataFrame.<init>(DataFrame.scala:128)
at org.apache.spark.sql.DataFrame$.apply(DataFrame.scala:51)
at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:755)
at
org.wso2.carbon.analytics.spark.core.internal.SparkAnalyticsExecutor.executeQueryLocal(SparkAnalyticsExecutor.java:710)
at
org.wso2.carbon.analytics.spark.core.internal.SparkAnalyticsExecutor.executeQuery(SparkAnalyticsExecutor.java:692)
at
org.wso2.carbon.analytics.spark.core.CarbonAnalyticsProcessorService.executeQuery(CarbonAnalyticsProcessorService.java:199)
at
org.wso2.carbon.analytics.spark.core.CarbonAnalyticsProcessorService.executeScript(CarbonAnalyticsProcessorService.java:149)
at
org.wso2.carbon.analytics.spark.core.AnalyticsTask.execute(AnalyticsTask.java:57)
at
org.wso2.carbon.ntask.core.impl.TaskQuartzJobAdapter.execute(TaskQuartzJobAdapter.java:67)
at org.quartz.core.JobRunShell.run(JobRunShell.java:213)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
[2015-12-16 15:12:00,001]  INFO
{org.wso2.carbon.analytics.spark.core.AnalyticsTask} -  Executing the
schedule task for: Accelerometer_Sensor_Script for tenant id: -1234

-- 
/sumedha

[Dev] [DEV][IS] Getting an Unique index or primary key violation exception.

2015-12-16 Thread Kamidu Punchihewa
Hi IS team,


I am getting a "Unique index or primary key violation" exception when
trying to refresh the access token with the refresh token grant type. The
error log is given below.

As per an offline discussion I had with Johann, this seems to be a known
issue that can occur under a high load of concurrent calls. But in my case
there were only 4 connections attempting to refresh tokens concurrently.
Even though the exception is thrown, a new token pair can still be generated
by sending another few calls.

The issue is the exception that is visible in the log. In a production
environment, since this exception is thrown in the IS back-end, it would be
a bit odd for the user to see an exception in the EMM console.
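
(As a stop-gap on the client side, retrying the refresh a few times works; a
rough sketch, where TokenPair, refreshAccessToken and TokenRefreshException
are hypothetical placeholders for whatever client code posts the
refresh_token grant:)

    // Hypothetical retry wrapper around the refresh_token grant call.
    TokenPair refreshWithRetry(String key, String secret, String refreshToken)
            throws InterruptedException {
        for (int attempt = 1; attempt <= 3; attempt++) {
            try {
                return refreshAccessToken(key, secret, refreshToken);
            } catch (TokenRefreshException e) {
                // Concurrent refreshes can collide on the unique key
                // constraint; back off briefly and retry.
                Thread.sleep(200L * attempt);
            }
        }
        throw new IllegalStateException("Token refresh failed after 3 attempts");
    }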

WDUT?


Error Log :

[2015-11-26 11:27:42,093] ERROR
> {org.wso2.carbon.webapp.authenticator.framework.WebappAuthenticationValve}
> -  Access token has expired , API : /mdm-android-agent/operation/device-info
> [2015-11-26 11:27:42,190] ERROR
> {org.wso2.carbon.webapp.authenticator.framework.WebappAuthenticationValve}
> -  Access token has expired , API : /mdm-admin/notifications/NEW
> [2015-11-26 11:27:42,299] ERROR
> {org.wso2.carbon.identity.oauth2.OAuth2Service} -  Error occurred while
> issuing the access token for Client ID : CJo5Izhh4aziaMV1gAKN8fovcpka, User
> ID null, Scope : [] and Grant Type : refresh_token
> org.wso2.carbon.identity.oauth2.IdentityOAuth2Exception: Error when
> storing the access token for consumer key : CJo5Izhh4aziaMV1gAKN8fovcpka
> at
> org.wso2.carbon.identity.oauth2.dao.TokenMgtDAO.storeAccessToken(TokenMgtDAO.java:245)
> at
> org.wso2.carbon.identity.oauth2.dao.TokenMgtDAO.invalidateAndCreateNewToken(TokenMgtDAO.java:1103)
> at
> org.wso2.carbon.identity.oauth2.token.handlers.grant.RefreshGrantHandler.issue(RefreshGrantHandler.java:246)
> at
> org.wso2.carbon.identity.oauth2.token.AccessTokenIssuer.issue(AccessTokenIssuer.java:186)
> at
> org.wso2.carbon.identity.oauth2.OAuth2Service.issueAccessToken(OAuth2Service.java:196)
> at
> org.wso2.carbon.identity.oauth.endpoint.token.OAuth2TokenEndpoint.getAccessToken(OAuth2TokenEndpoint.java:273)
> at
> org.wso2.carbon.identity.oauth.endpoint.token.OAuth2TokenEndpoint.issueAccessToken(OAuth2TokenEndpoint.java:115)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at
> org.apache.cxf.service.invoker.AbstractInvoker.performInvocation(AbstractInvoker.java:188)
> at
> org.apache.cxf.service.invoker.AbstractInvoker.invoke(AbstractInvoker.java:104)
> at org.apache.cxf.jaxrs.JAXRSInvoker.invoke(JAXRSInvoker.java:204)
> at org.apache.cxf.jaxrs.JAXRSInvoker.invoke(JAXRSInvoker.java:101)
> at
> org.apache.cxf.interceptor.ServiceInvokerInterceptor$1.run(ServiceInvokerInterceptor.java:58)
> at
> org.apache.cxf.interceptor.ServiceInvokerInterceptor.handleMessage(ServiceInvokerInterceptor.java:94)
> at
> org.apache.cxf.phase.PhaseInterceptorChain.doIntercept(PhaseInterceptorChain.java:272)
> at
> org.apache.cxf.transport.ChainInitiationObserver.onMessage(ChainInitiationObserver.java:121)
> at
> org.apache.cxf.transport.http.AbstractHTTPDestination.invoke(AbstractHTTPDestination.java:249)
> at
> org.apache.cxf.transport.servlet.ServletController.invokeDestination(ServletController.java:248)
> at
> org.apache.cxf.transport.servlet.ServletController.invoke(ServletController.java:222)
> at
> org.apache.cxf.transport.servlet.ServletController.invoke(ServletController.java:153)
> at
> org.apache.cxf.transport.servlet.CXFNonSpringServlet.invoke(CXFNonSpringServlet.java:171)
> at
> org.apache.cxf.transport.servlet.AbstractHTTPServlet.handleRequest(AbstractHTTPServlet.java:289)
> at
> org.apache.cxf.transport.servlet.AbstractHTTPServlet.doPost(AbstractHTTPServlet.java:209)
> at javax.servlet.http.HttpServlet.service(HttpServlet.java:646)
> at
> org.apache.cxf.transport.servlet.AbstractHTTPServlet.service(AbstractHTTPServlet.java:265)
> at
> org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:303)
> at
> org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
> at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:52)
> at
> org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:241)
> at
> org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
> at
> org.wso2.carbon.ui.filters.CRLFPreventionFilter.doFilter(CRLFPreventionFilter.java:59)
> at
> org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:241)
> at
> org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
> at
> org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:220)
> at

[Dev] [DEV][DSS] Write a new query for a google spreadsheet data service

2015-12-16 Thread Chanuka Dissanayake
Hi,

I have a spreadsheet whose column names contain spaces; how could I do
$subject? I couldn't find an answer in the docs [1].

[1] https://docs.wso2.com/display/DSS350/Google+Spreadsheet

Thanks & Regards,
Chanuka.

-- 
Chanuka Dissanayake
*Software Engineer | **WSO2 Inc.*; http://wso2.com

Mobile: +94 71 33 63 596
Email: chan...@wso2.com


[Dev] [AppFac][Docker]Best Java Docker Client Library

2015-12-16 Thread Roshan Deniyage
Hi All,
   For the App Factory build-artifact feature, we are going with the
standalone Jenkins server for the next release as well; this is the existing
method. The only change is that instead of building the user artifact and
pushing it to some git repository, we are going to build a Docker image and
push it to our private Docker registry.

For this, we plan to call the Docker REST API inside our
appfactory-jenkins-plugin (the existing custom plugin). So we need a Java
Docker client library, and I found the 4 libraries below.

(1) https://github.com/docker-java/docker-java
 [Based on the Jersey REST library and Java 7]

(2) https://github.com/spotify/docker-client
  [Simple Java client; seems like a primitive library]

(3) https://github.com/shekhargulati/rx-docker-client
  [Async-style library that uses Java 8 features]

(4) https://github.com/jclouds/jclouds-labs/tree/master/docker
 [This is used by the jclouds library]

I am going to go ahead with (1) since it provides the required functionality.

If anyone has used any of those libraries or any other better library,
please give your suggestions.
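
For reference, the build-and-push flow we need looks roughly like this with
(1) (a sketch against the docker-java API as I understand it; the registry
URL is a placeholder and method names may differ between versions):

    import com.github.dockerjava.api.DockerClient;
    import com.github.dockerjava.core.DockerClientBuilder;
    import com.github.dockerjava.core.command.BuildImageResultCallback;
    import com.github.dockerjava.core.command.PushImageResultCallback;

    import java.io.File;

    public class ImagePublisher {
        public static void main(String[] args) throws Exception {
            DockerClient client = DockerClientBuilder.getInstance().build();

            // Build an image from a directory containing a Dockerfile;
            // "registry.example.com" stands in for our private registry.
            String imageId = client.buildImageCmd(new File("/path/to/artifact"))
                    .withTag("registry.example.com/appfactory/myapp:1.0.0")
                    .exec(new BuildImageResultCallback())
                    .awaitImageId();

            // Push the tagged image to the private registry.
            client.pushImageCmd("registry.example.com/appfactory/myapp")
                    .withTag("1.0.0")
                    .exec(new PushImageResultCallback())
                    .awaitSuccess();
        }
    }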

Thanks,
Roshan Deniyage
Associate Technical Lead
WSO2, Inc: http://wso2.com

Mobile:  +94 777636406 / +1 408 667 6254
Twitter:  *https://twitter.com/roshku *
LinkedIn :  https://www.linkedin.com/in/roshandeniyage


Re: [Dev] DAS going OOM frequently

2015-12-16 Thread Anjana Fernando
Hi Sumedha,

Thank you for reporting the issue. I've fixed the concurrent modification
exception issue: both the methods "addIndexedTable" and "removeIndexedTable"
needed to be synchronized, since they both work on the shared Set object
there.
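
In essence, the change is along these lines (a sketch of the fix, not the
exact commit):

    // Both mutators now serialize on the instance monitor, so the shared
    // HashSet is never iterated (via toArray) while another thread mutates it.
    public synchronized void addIndexedTable(IndexedTableId id) {
        this.indexedTableIds.add(id);
        this.refreshIndexedTableArray();
    }

    public synchronized void removeIndexedTable(IndexedTableId id) {
        this.indexedTableIds.remove(id);
        this.refreshIndexedTableArray();
    }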

As for the OOM issue, can you please share a heap dump taken when the OOM
happened, so we can see what is causing it? Also, I see there are multiple
scripts running at the same time, so this can actually be a legitimate
error, where the server genuinely doesn't have enough memory to continue its
operations. @Niranda, please share if there is any info on tuning Spark's
memory requirements.

Cheers,
Anjana.

On Wed, Dec 16, 2015 at 3:32 PM, Sumedha Rubasinghe 
wrote:

> We have DAS Lite included in IoT Server, with several summarisation
> scripts deployed. The server is going OOM frequently with the following
> exception.
>
> Shouldn't this[1] method be synchronised?
>
> [1]
> https://github.com/wso2/carbon-analytics/blob/master/components/analytics-core/org.wso2.carbon.analytics.dataservice.core/src/main/java/org/wso2/carbon/analytics/dataservice/core/indexing/AnalyticsIndexedTableStore.java#L45

Re: [Dev] DAS going OOM frequently

2015-12-16 Thread Ayoma Wijethunga
Hi,

I have seen the same sort of exception occur when a HashMap is used by
multiple threads concurrently; it was necessary to use a ConcurrentHashMap
or do proper synchronization in our logic. This has been explained as state
corruption [3 - *(an interesting read)*], and it is no wonder, looking at
the source of the relevant methods [1].

As there is no such thing as a ConcurrentHashSet (and it would not be the
best option anyway), we should synchronize removeIndexedTable and
refreshIndexedTableArray, as well as getAllIndexedTables, to correct this
behaviour.

"getAllIndexedTables" should be synchronized because, if it is invoked while
some other thread executes removeIndexedTable or refreshIndexedTableArray,
we might return a "Set" that is in an inconsistent state. And the caller of
"getAllIndexedTables" might then invoke another state-changing method on
that "Set" before the refresh or remove completes, leaving the "Set" with
corrupted state.

[2] is also interesting, although it does not directly relate to this: with
"newSetFromMap" you can create a Set backed by a ConcurrentHashMap.
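
For completeness, that is a one-liner (standard JDK 6+ API):

    import java.util.Collections;
    import java.util.Set;
    import java.util.concurrent.ConcurrentHashMap;

    // A thread-safe Set view backed by a ConcurrentHashMap [2]:
    Set<String> indexedTables =
            Collections.newSetFromMap(new ConcurrentHashMap<String, Boolean>());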


[1]
http://grepcode.com/file/repository.grepcode.com/java/root/jdk/openjdk/6-b14/java/util/AbstractCollection.java#AbstractCollection.toArray%28%29
[2]
http://docs.oracle.com/javase/6/docs/api/java/util/Collections.html#newSetFromMap%28java.util.Map%29
[3] http://mailinator.blogspot.gr/2009/06/beautiful-race-condition.html

Best Regards,
Ayoma.

On Wed, Dec 16, 2015 at 3:32 PM, Sumedha Rubasinghe 
wrote:

> We have DAS Lite included in IoT Server, with several summarisation
> scripts deployed. The server is going OOM frequently with the following
> exception.
>
> Shouldn't this[1] method be synchronised?
>
> [1]
> https://github.com/wso2/carbon-analytics/blob/master/components/analytics-core/org.wso2.carbon.analytics.dataservice.core/src/main/java/org/wso2/carbon/analytics/dataservice/core/indexing/AnalyticsIndexedTableStore.java#L45

Re: [Dev] DAS going OOM frequently

2015-12-16 Thread Ayoma Wijethunga
Hi Anjana,

Sorry, I didn't notice that you had already replied to this thread.

However, please consider my point on "getAllIndexedTables" as well.

Thank you,
Ayoma.

On Wed, Dec 16, 2015 at 5:12 PM, Anjana Fernando  wrote:

> Hi Sumedha,
>
> Thank you for reporting the issue. I've fixed the concurrent modification
> exception issue: both the methods "addIndexedTable" and
> "removeIndexedTable" needed to be synchronized, since they both work on
> the shared Set object there.
>
> As for the OOM issue, can you please share a heap dump taken when the OOM
> happened, so we can see what is causing it? Also, I see there are multiple
> scripts running at the same time, so this can actually be a legitimate
> error, where the server genuinely doesn't have enough memory to continue
> its operations. @Niranda, please share if there is any info on tuning
> Spark's memory requirements.
>
> Cheers,
> Anjana.
>
> On Wed, Dec 16, 2015 at 3:32 PM, Sumedha Rubasinghe 
> wrote:
>
>> We have DAS Lite included in IoT Server, with several summarisation
>> scripts deployed. The server is going OOM frequently with the following
>> exception.
>>
>> Shouldn't this[1] method be synchronised?
>>
>> [1]
>> https://github.com/wso2/carbon-analytics/blob/master/components/analytics-core/org.wso2.carbon.analytics.dataservice.core/src/main/java/org/wso2/carbon/analytics/dataservice/core/indexing/AnalyticsIndexedTableStore.java#L45

Re: [Dev] DAS going OOM frequently

2015-12-16 Thread Ayoma Wijethunga
And I missed mentioning that when this race condition / state corruption
happens, all "get" operations performed on the Set/Map get blocked,
resulting in an OOM situation. [1] explains all of that nicely. I have
checked a heap dump in a similar situation, and if you take one, you will
clearly see many threads waiting to access this Set instance.

[1] http://mailinator.blogspot.gr/2009/06/beautiful-race-condition.html

On Wed, Dec 16, 2015 at 5:37 PM, Ayoma Wijethunga  wrote:

> Hi Anjana,
>
> Sorry, I didn't notice that you had already replied to this thread.
>
> However, please consider my point on "getAllIndexedTables" as well.
>
> Thank you,
> Ayoma.
>
> On Wed, Dec 16, 2015 at 5:12 PM, Anjana Fernando  wrote:
>
>> Hi Sumedha,
>>
>> Thank you for reporting the issue. I've fixed the concurrent modification
>> exception issue: both the methods "addIndexedTable" and
>> "removeIndexedTable" needed to be synchronized, since they both work on
>> the shared Set object there.
>>
>> As for the OOM issue, can you please share a heap dump taken when the OOM
>> happened, so we can see what is causing it? Also, I see there are
>> multiple scripts running at the same time, so this can actually be a
>> legitimate error, where the server genuinely doesn't have enough memory
>> to continue its operations. @Niranda, please share if there is any info
>> on tuning Spark's memory requirements.
>>
>> Cheers,
>> Anjana.
>>
>> On Wed, Dec 16, 2015 at 3:32 PM, Sumedha Rubasinghe 
>> wrote:
>>
>>> We have DAS Lite included in IoT Server, with several summarisation
>>> scripts deployed. The server is going OOM frequently with the following
>>> exception.
>>>
>>> Shouldn't this[1] method be synchronised?
>>>
>>> [1]
>>> https://github.com/wso2/carbon-analytics/blob/master/components/analytics-core/org.wso2.carbon.analytics.dataservice.core/src/main/java/org/wso2/carbon/analytics/dataservice/core/indexing/AnalyticsIndexedTableStore.java#L45

Re: [Dev] [DSS] Instructions to Generate Auth Credentials for google spreadsheet services

2015-12-16 Thread Rajith Vitharana
Hi Chanuka,

You need to use a host name instead of a direct IP address (it doesn't need
to be publicly resolvable), since this is used for a browser redirect; a
host name shared on the local network works as well. For example, if the
server is running on your own machine, just use "localhost" as the host name
(in this case only, 127.0.0.1 also works; it is only public IPs that cannot
be used). If the server is running on the local network, put a host name in
your "/etc/hosts" file pointing to that server and use that host name in the
developer console.
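
For example (the host name and address here are made up):

    # /etc/hosts on the machine running the browser
    192.168.1.42    dss.local    # shared name pointing at the DSS server

You would then register "dss.local" as the host name in the developer
console.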

Thanks,

On Wed, Dec 16, 2015 at 5:31 AM, Madhawa Gunasekara 
wrote:

> Hi Chanuka,
>
> It seems you need to use a domain name (publicly available host name) for
> this purpose.
>
> Thanks,
> Madhawa
>
> On Wed, Dec 16, 2015 at 2:14 PM, Chanuka Dissanayake 
> wrote:
>
>> Hi
>>
>> I need $subject for DSS hosted in the research cloud; I'm getting the
>> following error. What would be the solution for this? I couldn't find it
>> in the docs [1].
>>
>> [1] https://docs.wso2.com/display/DSS350/Google+Spreadsheet
>>
>> Thanks & Regards,
>> Chanuka.
>> --
>> Chanuka Dissanayake
>> *Software Engineer | **WSO2 Inc.*; http://wso2.com
>>
>> Mobile: +94 71 33 63 596
>> Email: chan...@wso2.com
>>
>
>
>
> --
> *Madhawa Gunasekara*
> Software Engineer
> WSO2 Inc.; http://wso2.com
> lean.enterprise.middleware
>
> mobile: +94 719411002 <+94+719411002>
> blog: *http://madhawa-gunasekara.blogspot.com
> *
> linkedin: *http://lk.linkedin.com/in/mgunasekara
> *
>



-- 
Rajith Vitharana

Software Engineer,
WSO2 Inc. : wso2.com
Mobile : +94715883223
Blog : http://lankavitharana.blogspot.com/


Re: [Dev] DAS going OOM frequently

2015-12-16 Thread Anjana Fernando
Hi Ayoma,

Thanks for checking up on it. Actually, "getAllIndexedTables" doesn't
return the Set here; it returns an array that was previously populated in
the refresh operation, so there is no need to synchronize that method.
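
(One small caveat on that pattern, as a general observation rather than a
comment on the exact source: for readers to reliably see the array swapped
in by the synchronized refresh, the field holding it should be declared
volatile, e.g.:)

    // Safe publication of the snapshot array to unsynchronized readers:
    private volatile IndexedTableId[] indexedTableArray;

    public IndexedTableId[] getAllIndexedTables() {
        return this.indexedTableArray; // last fully-built snapshot
    }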

Cheers,
Anjana.

On Wed, Dec 16, 2015 at 5:44 PM, Ayoma Wijethunga  wrote:

> And I missed mentioning that when this race condition / state corruption
> happens, all "get" operations performed on the Set/Map get blocked,
> resulting in an OOM situation. [1] explains all of that nicely. I have
> checked a heap dump in a similar situation, and if you take one, you will
> clearly see many threads waiting to access this Set instance.
>
> [1] http://mailinator.blogspot.gr/2009/06/beautiful-race-condition.html
>
> On Wed, Dec 16, 2015 at 5:37 PM, Ayoma Wijethunga  wrote:
>
>> Hi Anjana,
>>
>> Sorry, I didn't notice that you had already replied to this thread.
>>
>> However, please consider my point on "getAllIndexedTables" as well.
>>
>> Thank you,
>> Ayoma.
>>
>> On Wed, Dec 16, 2015 at 5:12 PM, Anjana Fernando  wrote:
>>
>>> Hi Sumedha,
>>>
>>> Thank you for reporting the issue. I've fixed the concurrent
>>> modification exception issue: both the methods "addIndexedTable" and
>>> "removeIndexedTable" needed to be synchronized, since they both work on
>>> the shared Set object there.
>>>
>>> As for the OOM issue, can you please share a heap dump taken when the
>>> OOM happened, so we can see what is causing it? Also, I see there are
>>> multiple scripts running at the same time, so this can actually be a
>>> legitimate error, where the server genuinely doesn't have enough memory
>>> to continue its operations. @Niranda, please share if there is any info
>>> on tuning Spark's memory requirements.
>>>
>>> Cheers,
>>> Anjana.
>>>
>>> On Wed, Dec 16, 2015 at 3:32 PM, Sumedha Rubasinghe 
>>> wrote:
>>>
 We have DAS Lite included in IoT Server, with several summarisation
 scripts deployed. The server is going OOM frequently with the following
 exception.

 Shouldn't this[1] method be synchronised?

 [1]
 https://github.com/wso2/carbon-analytics/blob/master/components/analytics-core/org.wso2.carbon.analytics.dataservice.core/src/main/java/org/wso2/carbon/analytics/dataservice/core/indexing/AnalyticsIndexedTableStore.java#L45



Re: [Dev] Which key of primary keystore is used to encrypt passwords in secure vault?

2015-12-16 Thread Rasika Perera
Hi Bhathiya,

As per the doc [1], you can configure which key to select in
"$PRODUCT_HOME/repository/conf/carbon.xml". Is this what you are looking
for?


<KeyStore>
    <Location>${carbon.home}/resources/security/wso2carbon.jks</Location>
    <Type>JKS</Type>
    <Password>wso2carbon</Password>        <--- Password for key store
    <KeyAlias>wso2carbon</KeyAlias>        <--- Which key to select
    <KeyPassword>wso2carbon</KeyPassword>
</KeyStore>

> Hi Carbon team,
>
> Could you please tell me the $subject, when we have multiple keys stored
> in a given keystore? Is that configurable?
>
> Thanks,
> --
> *Bhathiya Jayasekara*
> *Senior Software Engineer,*
> *WSO2 inc., http://wso2.com *
>
> *Phone: +94715478185 <%2B94715478185>*
> *LinkedIn: http://www.linkedin.com/in/bhathiyaj
> *
> *Twitter: https://twitter.com/bhathiyax *
> *Blog: http://movingaheadblog.blogspot.com
> *
>
>
>


-- 
With Regards,

*Rasika Perera*
Software Engineer
M: +94 71 680 9060 E: rasi...@wso2.com
LinkedIn: http://lk.linkedin.com/in/rasika90

WSO2 Inc. www.wso2.com
lean.enterprise.middleware


Re: [Dev] [DSS] Instructions to Generate Auth Credentials for google spreadsheet services

2015-12-16 Thread Anjana Fernando
Hi Rajith,

Please add the required docs to explain this scenario clearly; it seems
this information is not there.

Cheers,
Anjana.

On Wed, Dec 16, 2015 at 5:53 PM, Rajith Vitharana  wrote:

> Hi Chanuka,
>
> You need to use a host name instead of a direct IP address (it doesn't
> need to be publicly resolvable), since this is used for a browser
> redirect; a host name shared on the local network works as well. For
> example, if the server is running on your own machine, just use
> "localhost" as the host name (in this case only, 127.0.0.1 also works; it
> is only public IPs that cannot be used). If the server is running on the
> local network, put a host name in your "/etc/hosts" file pointing to that
> server and use that host name in the developer console.
>
> Thanks,
>
> On Wed, Dec 16, 2015 at 5:31 AM, Madhawa Gunasekara 
> wrote:
>
>> Hi Chanuka,
>>
>> It seems you need to use a domain name (publicly available host name) for
>> this purpose.
>>
>> Thanks,
>> Madhawa
>>
>> On Wed, Dec 16, 2015 at 2:14 PM, Chanuka Dissanayake 
>> wrote:
>>
>>> Hi
>>>
>>> I need $subject for DSS hosted in the research cloud; I'm getting the
>>> following error. What would be the solution for this? I couldn't find it
>>> in the docs [1].
>>>
>>> [1] https://docs.wso2.com/display/DSS350/Google+Spreadsheet
>>>
>>> Thanks & Regards,
>>> Chanuka.
>>> --
>>> Chanuka Dissanayake
>>> *Software Engineer | **WSO2 Inc.*; http://wso2.com
>>>
>>> Mobile: +94 71 33 63 596
>>> Email: chan...@wso2.com
>>>
>>
>>
>>
>> --
>> *Madhawa Gunasekara*
>> Software Engineer
>> WSO2 Inc.; http://wso2.com
>> lean.enterprise.middleware
>>
>> mobile: +94 719411002 <+94+719411002>
>> blog: *http://madhawa-gunasekara.blogspot.com
>> *
>> linkedin: *http://lk.linkedin.com/in/mgunasekara
>> *
>>
>
>
>
> --
> Rajith Vitharana
>
> Software Engineer,
> WSO2 Inc. : wso2.com
> Mobile : +94715883223
> Blog : http://lankavitharana.blogspot.com/
>



-- 
*Anjana Fernando*
Senior Technical Lead
WSO2 Inc. | http://wso2.com
lean . enterprise . middleware


Re: [Dev] DAS going OOM frequently

2015-12-16 Thread Ayoma Wijethunga
Hi Anjana,

Yes, agreed; sorry, I misread that. In that case, the OOM should be fine
after the fix.

Thank you,
Ayoma.
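
For anyone hitting this later, here is a minimal sketch of the fix being
discussed -- not the exact carbon-analytics code, and the class and field
names are illustrative: writers that mutate the shared Set are synchronized
and rebuild a volatile array snapshot, while readers just return the
snapshot and need no synchronization.

import java.util.HashSet;
import java.util.Set;

public class IndexedTableStoreSketch {

    private final Set<String> indexedTables = new HashSet<>();
    private volatile String[] indexedTableArray = new String[0];

    public synchronized void addIndexedTable(String tableId) {
        indexedTables.add(tableId);
        refreshIndexedTableArray();
    }

    public synchronized void removeIndexedTable(String tableId) {
        indexedTables.remove(tableId);
        refreshIndexedTableArray();
    }

    // Called only while holding the monitor, so toArray never races
    // with a concurrent add/remove.
    private void refreshIndexedTableArray() {
        indexedTableArray = indexedTables.toArray(new String[0]);
    }

    // Safe without synchronization: returns the latest immutable snapshot.
    public String[] getAllIndexedTables() {
        return indexedTableArray;
    }
}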

On Wed, Dec 16, 2015 at 6:11 PM, Anjana Fernando  wrote:

> Hi Ayoma,
>
> Thanks for checking up on it, actually "getAllIndexedTables" doesn't
> return the Set here, it returns an array that was previously populated in
> the refresh operation, so no need to synchronize that method.
>
> Cheers,
> Anjana.
>
> On Wed, Dec 16, 2015 at 5:44 PM, Ayoma Wijethunga  wrote:
>
>> And, I missed mentioning that when this race condition / state
>> corruption happens, all "get" operations performed on the Set/Map get
>> blocked, resulting in an OOM situation. [1]
>> has all that explained nicely. I have checked a heap dump in a similar
>> situation and if you take one, you will clearly see many threads waiting to
>> access this Set instance.
>>
>> [1] http://mailinator.blogspot.gr/2009/06/beautiful-race-condition.html
>>
>> On Wed, Dec 16, 2015 at 5:37 PM, Ayoma Wijethunga  wrote:
>>
>>> Hi Anjana,
>>>
>>> Sorry, I didn't notice that you have already replied this thread.
>>>
>>> However, please consider my point on "getAllIndexedTables" as well.
>>>
>>> Thank you,
>>> Ayoma.
>>>
>>> On Wed, Dec 16, 2015 at 5:12 PM, Anjana Fernando 
>>> wrote:
>>>
 Hi Sumedha,

 Thank you for reporting the issue. I've fixed the concurrent
 modification exception issue, where, actually both the methods
 "addIndexedTable" and "removeIndexedTable" needed to be synchronized, since
 they both work on the shared Set object there.

 As for the OOM issue, can you please share a heap dump when the OOM
 happened. So we can see what is causing this. And also, I see there are
 multiple scripts running at the same time, so this actually can be a
 legitimate error also, where the server actually doesn't have enough memory
 to continue its operations. @Niranda, please share if there is any info on
 tuning Spark's memory requirements.

 Cheers,
 Anjana.

 On Wed, Dec 16, 2015 at 3:32 PM, Sumedha Rubasinghe 
 wrote:

> We have DAS Lite included in IoT Server and several summarisation
> scripts deployed. The server is going OOM frequently with the following
> exception.
>
> Shouldn't this[1] method be synchronised?
>
> [1]
> https://github.com/wso2/carbon-analytics/blob/master/components/analytics-core/org.wso2.carbon.analytics.dataservice.core/src/main/java/org/wso2/carbon/analytics/dataservice/core/indexing/AnalyticsIndexedTableStore.java#L45
>
>
> >>>
> [2015-12-16 15:11:00,004]  INFO
> {org.wso2.carbon.analytics.spark.core.AnalyticsTask} -  Executing the
> schedule task for: Light_Sensor_Script for tenant id: -1234
> [2015-12-16 15:11:00,005]  INFO
> {org.wso2.carbon.analytics.spark.core.AnalyticsTask} -  Executing the
> schedule task for: Magnetic_Sensor_Script for tenant id: -1234
> [2015-12-16 15:11:00,005]  INFO
> {org.wso2.carbon.analytics.spark.core.AnalyticsTask} -  Executing the
> schedule task for: Pressure_Sensor_Script for tenant id: -1234
> [2015-12-16 15:11:00,006]  INFO
> {org.wso2.carbon.analytics.spark.core.AnalyticsTask} -  Executing the
> schedule task for: Proximity_Sensor_Script for tenant id: -1234
> [2015-12-16 15:11:00,006]  INFO
> {org.wso2.carbon.analytics.spark.core.AnalyticsTask} -  Executing the
> schedule task for: Rotation_Sensor_Script for tenant id: -1234
> [2015-12-16 15:11:00,007]  INFO
> {org.wso2.carbon.analytics.spark.core.AnalyticsTask} -  Executing the
> schedule task for: Temperature_Sensor_Script for tenant id: -1234
> [2015-12-16 15:11:01,132] ERROR
> {org.wso2.carbon.ntask.core.impl.TaskQuartzJobAdapter} -  Error in
> executing task: null
> java.util.ConcurrentModificationException
> at java.util.HashMap$HashIterator.nextEntry(HashMap.java:922)
> at java.util.HashMap$KeyIterator.next(HashMap.java:956)
> at java.util.AbstractCollection.toArray(AbstractCollection.java:195)
> at
> org.wso2.carbon.analytics.dataservice.core.indexing.AnalyticsIndexedTableStore.refreshIndexedTableArray(AnalyticsIndexedTableStore.java:46)
> at
> org.wso2.carbon.analytics.dataservice.core.indexing.AnalyticsIndexedTableStore.addIndexedTable(AnalyticsIndexedTableStore.java:37)
> at
> org.wso2.carbon.analytics.dataservice.core.AnalyticsDataServiceImpl.refreshIndexedTableStoreEntry(AnalyticsDataServiceImpl.java:512)
> at
> org.wso2.carbon.analytics.dataservice.core.AnalyticsDataServiceImpl.invalidateAnalyticsTableInfo(AnalyticsDataServiceImpl.java:525)
> at
> org.wso2.carbon.analytics.dataservice.core.AnalyticsDataServiceImpl.checkAndInvalidateTableInfo(AnalyticsDataServiceImpl.java:504)
> at
> org.wso2.carbon.analytics.dataservice.core.AnalyticsDataServiceImpl.setTableSchema(AnalyticsDataServiceImp

Re: [Dev] DAS going OOM frequently

2015-12-16 Thread Niranda Perera
Hi Sumedha,

I checked the heap dump you provided, and its size is around 230MB. I
presume this was not an OOM scenario.

As for Spark memory usage, when you use Spark in local mode, the
processing will happen inside that JVM itself. So, we have to make sure
that we allocate enough memory for that.

Rgds

On Wed, Dec 16, 2015 at 6:11 PM, Anjana Fernando  wrote:

> Hi Ayoma,
>
> Thanks for checking up on it, actually "getAllIndexedTables" doesn't
> return the Set here, it returns an array that was previously populated in
> the refresh operation, so no need to synchronize that method.
>
> Cheers,
> Anjana.
>
> On Wed, Dec 16, 2015 at 5:44 PM, Ayoma Wijethunga  wrote:
>
>> And, I missed mentioning that when this race condition / state
>> corruption happens, all "get" operations performed on the Set/Map get
>> blocked, resulting in an OOM situation. [1]
>> has all that explained nicely. I have checked a heap dump in a similar
>> situation and if you take one, you will clearly see many threads waiting to
>> access this Set instance.
>>
>> [1] http://mailinator.blogspot.gr/2009/06/beautiful-race-condition.html
>>
>> On Wed, Dec 16, 2015 at 5:37 PM, Ayoma Wijethunga  wrote:
>>
>>> Hi Anjana,
>>>
>>> Sorry, I didn't notice that you have already replied this thread.
>>>
>>> However, please consider my point on "getAllIndexedTables" as well.
>>>
>>> Thank you,
>>> Ayoma.
>>>
>>> On Wed, Dec 16, 2015 at 5:12 PM, Anjana Fernando 
>>> wrote:
>>>
 Hi Sumedha,

 Thank you for reporting the issue. I've fixed the concurrent
 modification exception issue, where, actually both the methods
 "addIndexedTable" and "removeIndexedTable" needed to be synchronized, since
 they both work on the shared Set object there.

 As for the OOM issue, can you please share a heap dump when the OOM
 happened. So we can see what is causing this. And also, I see there are
 multiple scripts running at the same time, so this actually can be a
 legitimate error also, where the server actually doesn't have enough memory
 to continue its operations. @Niranda, please share if there is any info on
 tuning Spark's memory requirements.

 Cheers,
 Anjana.

 On Wed, Dec 16, 2015 at 3:32 PM, Sumedha Rubasinghe 
 wrote:

> We have DAS Lite included in IoT Server and several summarisation
> scripts deployed. The server is going OOM frequently with the following
> exception.
>
> Shouldn't this[1] method be synchronised?
>
> [1]
> https://github.com/wso2/carbon-analytics/blob/master/components/analytics-core/org.wso2.carbon.analytics.dataservice.core/src/main/java/org/wso2/carbon/analytics/dataservice/core/indexing/AnalyticsIndexedTableStore.java#L45
>
>
> >>>
> [2015-12-16 15:11:00,004]  INFO
> {org.wso2.carbon.analytics.spark.core.AnalyticsTask} -  Executing the
> schedule task for: Light_Sensor_Script for tenant id: -1234
> [2015-12-16 15:11:00,005]  INFO
> {org.wso2.carbon.analytics.spark.core.AnalyticsTask} -  Executing the
> schedule task for: Magnetic_Sensor_Script for tenant id: -1234
> [2015-12-16 15:11:00,005]  INFO
> {org.wso2.carbon.analytics.spark.core.AnalyticsTask} -  Executing the
> schedule task for: Pressure_Sensor_Script for tenant id: -1234
> [2015-12-16 15:11:00,006]  INFO
> {org.wso2.carbon.analytics.spark.core.AnalyticsTask} -  Executing the
> schedule task for: Proximity_Sensor_Script for tenant id: -1234
> [2015-12-16 15:11:00,006]  INFO
> {org.wso2.carbon.analytics.spark.core.AnalyticsTask} -  Executing the
> schedule task for: Rotation_Sensor_Script for tenant id: -1234
> [2015-12-16 15:11:00,007]  INFO
> {org.wso2.carbon.analytics.spark.core.AnalyticsTask} -  Executing the
> schedule task for: Temperature_Sensor_Script for tenant id: -1234
> [2015-12-16 15:11:01,132] ERROR
> {org.wso2.carbon.ntask.core.impl.TaskQuartzJobAdapter} -  Error in
> executing task: null
> java.util.ConcurrentModificationException
> at java.util.HashMap$HashIterator.nextEntry(HashMap.java:922)
> at java.util.HashMap$KeyIterator.next(HashMap.java:956)
> at java.util.AbstractCollection.toArray(AbstractCollection.java:195)
> at
> org.wso2.carbon.analytics.dataservice.core.indexing.AnalyticsIndexedTableStore.refreshIndexedTableArray(AnalyticsIndexedTableStore.java:46)
> at
> org.wso2.carbon.analytics.dataservice.core.indexing.AnalyticsIndexedTableStore.addIndexedTable(AnalyticsIndexedTableStore.java:37)
> at
> org.wso2.carbon.analytics.dataservice.core.AnalyticsDataServiceImpl.refreshIndexedTableStoreEntry(AnalyticsDataServiceImpl.java:512)
> at
> org.wso2.carbon.analytics.dataservice.core.AnalyticsDataServiceImpl.invalidateAnalyticsTableInfo(AnalyticsDataServiceImpl.java:525)
> at
> org.wso2.carbon.analytics.dataservice.core.Analyt

Re: [Dev] DAS going OOM frequently

2015-12-16 Thread Gihan Anuruddha
Hi Niranda,

So let's say we have to run embedded DAS in a memory-restricted environment.
Where can I define the Spark memory allocation configuration?

Regards,
Gihan

On Wed, Dec 16, 2015 at 6:55 PM, Niranda Perera  wrote:

> Hi Sumedha,
>
> I checked the heap dump you provided, and its size is around 230MB. I
> presume this was not an OOM scenario.
>
> As for Spark memory usage, when you use Spark in local mode, the
> processing will happen inside that JVM itself. So, we have to make sure
> that we allocate enough memory for that.
>
> Rgds
>
> On Wed, Dec 16, 2015 at 6:11 PM, Anjana Fernando  wrote:
>
>> Hi Ayoma,
>>
>> Thanks for checking up on it, actually "getAllIndexedTables" doesn't
>> return the Set here, it returns an array that was previously populated in
>> the refresh operation, so no need to synchronize that method.
>>
>> Cheers,
>> Anjana.
>>
>> On Wed, Dec 16, 2015 at 5:44 PM, Ayoma Wijethunga  wrote:
>>
>>> And, I missed mentioning that when this race condition / state
>>> corruption happens, all "get" operations performed on the Set/Map get
>>> blocked, resulting in an OOM situation. [1]
>>> has all that explained nicely. I have checked a heap dump in a similar
>>> situation and if you take one, you will clearly see many threads waiting to
>>> access this Set instance.
>>>
>>> [1] http://mailinator.blogspot.gr/2009/06/beautiful-race-condition.html
>>>
>>> On Wed, Dec 16, 2015 at 5:37 PM, Ayoma Wijethunga 
>>> wrote:
>>>
 Hi Anjana,

 Sorry, I didn't notice that you have already replied this thread.

 However, please consider my point on "getAllIndexedTables" as well.

 Thank you,
 Ayoma.

 On Wed, Dec 16, 2015 at 5:12 PM, Anjana Fernando 
 wrote:

> Hi Sumedha,
>
> Thank you for reporting the issue. I've fixed the concurrent
> modification exception issue, where, actually both the methods
> "addIndexedTable" and "removeIndexedTable" needed to be synchronized, 
> since
> they both work on the shared Set object there.
>
> As for the OOM issue, can you please share a heap dump when the OOM
> happened. So we can see what is causing this. And also, I see there are
> multiple scripts running at the same time, so this actually can be a
> legitimate error also, where the server actually doesn't have enough 
> memory
> to continue its operations. @Niranda, please share if there is any info on
> tuning Spark's memory requirements.
>
> Cheers,
> Anjana.
>
> On Wed, Dec 16, 2015 at 3:32 PM, Sumedha Rubasinghe 
> wrote:
>
>> We have DAS Lite included in IoT Server and several summarisation
>> scripts deployed. The server is going OOM frequently with the following
>> exception.
>>
>> Shouldn't this[1] method be synchronised?
>>
>> [1]
>> https://github.com/wso2/carbon-analytics/blob/master/components/analytics-core/org.wso2.carbon.analytics.dataservice.core/src/main/java/org/wso2/carbon/analytics/dataservice/core/indexing/AnalyticsIndexedTableStore.java#L45
>>
>>
>> >>>
>> [2015-12-16 15:11:00,004]  INFO
>> {org.wso2.carbon.analytics.spark.core.AnalyticsTask} -  Executing the
>> schedule task for: Light_Sensor_Script for tenant id: -1234
>> [2015-12-16 15:11:00,005]  INFO
>> {org.wso2.carbon.analytics.spark.core.AnalyticsTask} -  Executing the
>> schedule task for: Magnetic_Sensor_Script for tenant id: -1234
>> [2015-12-16 15:11:00,005]  INFO
>> {org.wso2.carbon.analytics.spark.core.AnalyticsTask} -  Executing the
>> schedule task for: Pressure_Sensor_Script for tenant id: -1234
>> [2015-12-16 15:11:00,006]  INFO
>> {org.wso2.carbon.analytics.spark.core.AnalyticsTask} -  Executing the
>> schedule task for: Proximity_Sensor_Script for tenant id: -1234
>> [2015-12-16 15:11:00,006]  INFO
>> {org.wso2.carbon.analytics.spark.core.AnalyticsTask} -  Executing the
>> schedule task for: Rotation_Sensor_Script for tenant id: -1234
>> [2015-12-16 15:11:00,007]  INFO
>> {org.wso2.carbon.analytics.spark.core.AnalyticsTask} -  Executing the
>> schedule task for: Temperature_Sensor_Script for tenant id: -1234
>> [2015-12-16 15:11:01,132] ERROR
>> {org.wso2.carbon.ntask.core.impl.TaskQuartzJobAdapter} -  Error in
>> executing task: null
>> java.util.ConcurrentModificationException
>> at java.util.HashMap$HashIterator.nextEntry(HashMap.java:922)
>> at java.util.HashMap$KeyIterator.next(HashMap.java:956)
>> at java.util.AbstractCollection.toArray(AbstractCollection.java:195)
>> at
>> org.wso2.carbon.analytics.dataservice.core.indexing.AnalyticsIndexedTableStore.refreshIndexedTableArray(AnalyticsIndexedTableStore.java:46)
>> at
>> org.wso2.carbon.analytics.dataservice.core.indexing.AnalyticsIndexedTableStore.addIndexedTable(AnalyticsIndexed

Re: [Dev] DAS going OOM frequently

2015-12-16 Thread Niranda Perera
Hi Gihan,

The memory can be set by using the conf parameters, i.e.
"spark.executor.memory".
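
For example, assuming the default DAS 3.0.0 layout, this can be set in
<DAS_HOME>/repository/conf/analytics/spark/spark-defaults.conf. The value
below is only illustrative, not a tuned recommendation; in local mode the
JVM heap itself is what ultimately bounds the processing, as noted above:

spark.executor.memory   1g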

rgds

On Wed, Dec 16, 2015 at 7:01 PM, Gihan Anuruddha  wrote:

> Hi Niranda,
>
> So let's say we have to run embedded DAS in a memory-restricted environment.
> Where can I define the Spark memory allocation configuration?
>
> Regards,
> Gihan
>
> On Wed, Dec 16, 2015 at 6:55 PM, Niranda Perera  wrote:
>
>> Hi Sumedha,
>>
>> I checked the heap dump you provided, and its size is around 230MB.
>> I presume this was not an OOM scenario.
>>
>> As for Spark memory usage, when you use Spark in local mode, the
>> processing will happen inside that JVM itself. So, we have to make sure
>> that we allocate enough memory for that.
>>
>> Rgds
>>
>> On Wed, Dec 16, 2015 at 6:11 PM, Anjana Fernando  wrote:
>>
>>> Hi Ayoma,
>>>
>>> Thanks for checking up on it, actually "getAllIndexedTables" doesn't
>>> return the Set here, it returns an array that was previously populated in
>>> the refresh operation, so no need to synchronize that method.
>>>
>>> Cheers,
>>> Anjana.
>>>
>>> On Wed, Dec 16, 2015 at 5:44 PM, Ayoma Wijethunga 
>>> wrote:
>>>
 And, I missed mentioning that when this race condition / state
 corruption happens, all "get" operations performed on the Set/Map get
 blocked, resulting in an OOM situation. [1]
 has all that explained nicely. I have checked a heap dump in a similar
 situation and if you take one, you will clearly see many threads waiting to
 access this Set instance.

 [1] http://mailinator.blogspot.gr/2009/06/beautiful-race-condition.html

 On Wed, Dec 16, 2015 at 5:37 PM, Ayoma Wijethunga 
 wrote:

> Hi Anjana,
>
> Sorry, I didn't notice that you have already replied this thread.
>
> However, please consider my point on "getAllIndexedTables" as well.
>
> Thank you,
> Ayoma.
>
> On Wed, Dec 16, 2015 at 5:12 PM, Anjana Fernando 
> wrote:
>
>> Hi Sumedha,
>>
>> Thank you for reporting the issue. I've fixed the concurrent
>> modification exception issue, where, actually both the methods
>> "addIndexedTable" and "removeIndexedTable" needed to be synchronized, 
>> since
>> they both work on the shared Set object there.
>>
>> As for the OOM issue, can you please share a heap dump when the OOM
>> happened. So we can see what is causing this. And also, I see there are
>> multiple scripts running at the same time, so this actually can be a
>> legitimate error also, where the server actually doesn't have enough 
>> memory
>> to continue its operations. @Niranda, please share if there is any info 
>> on
>> tuning Spark's memory requirements.
>>
>> Cheers,
>> Anjana.
>>
>> On Wed, Dec 16, 2015 at 3:32 PM, Sumedha Rubasinghe > > wrote:
>>
>>> We have DAS Lite included in IoT Server and several summarisation
>>> scripts deployed. The server is going OOM frequently with the following
>>> exception.
>>>
>>> Shouldn't this[1] method be synchronised?
>>>
>>> [1]
>>> https://github.com/wso2/carbon-analytics/blob/master/components/analytics-core/org.wso2.carbon.analytics.dataservice.core/src/main/java/org/wso2/carbon/analytics/dataservice/core/indexing/AnalyticsIndexedTableStore.java#L45
>>>
>>>
>>> >>>
>>> [2015-12-16 15:11:00,004]  INFO
>>> {org.wso2.carbon.analytics.spark.core.AnalyticsTask} -  Executing the
>>> schedule task for: Light_Sensor_Script for tenant id: -1234
>>> [2015-12-16 15:11:00,005]  INFO
>>> {org.wso2.carbon.analytics.spark.core.AnalyticsTask} -  Executing the
>>> schedule task for: Magnetic_Sensor_Script for tenant id: -1234
>>> [2015-12-16 15:11:00,005]  INFO
>>> {org.wso2.carbon.analytics.spark.core.AnalyticsTask} -  Executing the
>>> schedule task for: Pressure_Sensor_Script for tenant id: -1234
>>> [2015-12-16 15:11:00,006]  INFO
>>> {org.wso2.carbon.analytics.spark.core.AnalyticsTask} -  Executing the
>>> schedule task for: Proximity_Sensor_Script for tenant id: -1234
>>> [2015-12-16 15:11:00,006]  INFO
>>> {org.wso2.carbon.analytics.spark.core.AnalyticsTask} -  Executing the
>>> schedule task for: Rotation_Sensor_Script for tenant id: -1234
>>> [2015-12-16 15:11:00,007]  INFO
>>> {org.wso2.carbon.analytics.spark.core.AnalyticsTask} -  Executing the
>>> schedule task for: Temperature_Sensor_Script for tenant id: -1234
>>> [2015-12-16 15:11:01,132] ERROR
>>> {org.wso2.carbon.ntask.core.impl.TaskQuartzJobAdapter} -  Error in
>>> executing task: null
>>> java.util.ConcurrentModificationException
>>> at java.util.HashMap$HashIterator.nextEntry(HashMap.java:922)
>>> at java.util.HashMap$KeyIterator.next(HashMap.java:956)
>>> at java.util.AbstractCollection.toArray(AbstractCollection.java

[Dev] HTTPS REST Client

2015-12-16 Thread Malmee Weerasinghe
Hi All,

We have developed an HTTPS REST client using Java built-in methods, which
works properly [1]. This client is configured to allow self-signed
certificates.

When using the Apache HTTP client we get a certificate error:
javax.net.ssl.SSLPeerUnverifiedException: Host name '192.168.30.227' does
not match the certificate subject provided by the peer (CN=localhost,
O=WSO2, L=Mountain View, ST=CA, C=US).

Which would be the better choice: the Apache HTTP client or the Java
built-in methods? Your suggestions are highly appreciated.

[1]
https://github.com/nishadi/product-private-paas/blob/master/tools/migration/ppaas-artifact-converter/src/main/java/org/wso2/ppaas/tools/artifactmigration/RestClient.java

-- 
Malmee Weerasinghe
WSO2 Intern
mobile : (+94)* 71 7601905* |   email :   
mal...@wso2.com
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] HTTPS REST Client

2015-12-16 Thread Isuru Haththotuwa
Hi Malmee,

If you have used Java built-in methods, your tool would not need to depend
on an external library such as the Apache HTTP client, as you have
mentioned. This is fine for a simple use case. However, please note that if
you need more functionality, such as support for all HTTP operations and
other capabilities, it would be advisable to use the Apache HTTP client or
any other suitable existing library rather than implementing it again
yourself.

For the certificate issue that you are getting, it's possible to override
the default certificate validation mechanism and plug in your own
implementation which can disable certificate validation (for testing
purposes). For HTTP Client 4, please see [1].

[1].
http://stackoverflow.com/questions/2703161/how-to-ignore-ssl-certificate-errors-in-apache-httpclient-4-0
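
As a minimal sketch along the lines of [1] -- this assumes HttpClient 4.5
and is for testing only, since it trusts self-signed certificates and skips
hostname verification entirely (the hostname check is what triggers the
CN=localhost mismatch you saw):

import javax.net.ssl.SSLContext;
import org.apache.http.conn.ssl.NoopHostnameVerifier;
import org.apache.http.conn.ssl.TrustSelfSignedStrategy;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.ssl.SSLContextBuilder;

public class InsecureHttpsClient {

    public static CloseableHttpClient create() throws Exception {
        // Trust self-signed certificates (testing only).
        SSLContext sslContext = SSLContextBuilder.create()
                .loadTrustMaterial(null, new TrustSelfSignedStrategy())
                .build();
        return HttpClients.custom()
                .setSSLContext(sslContext)
                // Skip hostname verification (testing only).
                .setSSLHostnameVerifier(NoopHostnameVerifier.INSTANCE)
                .build();
    }
}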

On Wed, Dec 16, 2015 at 8:29 PM, Malmee Weerasinghe  wrote:

> Hi All,
>
> We have developed an HTTPS REST client using Java built-in methods, which
> works properly [1]. This client is configured to allow self-signed
> certificates.
>
> When using the Apache HTTP client we get a certificate error:
> javax.net.ssl.SSLPeerUnverifiedException: Host name '192.168.30.227' does
> not match the certificate subject provided by the peer (CN=localhost,
> O=WSO2, L=Mountain View, ST=CA, C=US).
>
> Which would be the better choice: the Apache HTTP client or the Java
> built-in methods? Your suggestions are highly appreciated.
>
> [1]
> https://github.com/nishadi/product-private-paas/blob/master/tools/migration/ppaas-artifact-converter/src/main/java/org/wso2/ppaas/tools/artifactmigration/RestClient.java
>
> --
> Malmee Weerasinghe
> WSO2 Intern
> mobile : (+94)* 71 7601905* |   email :   
> mal...@wso2.com
>
> ___
> Dev mailing list
> Dev@wso2.org
> http://wso2.org/cgi-bin/mailman/listinfo/dev
>
>


-- 
Thanks and Regards,

Isuru H.
+94 716 358 048* *
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] Which key of primary keystore is used to encrypt passwords in secure vault?

2015-12-16 Thread Bhathiya Jayasekara
Exactly. Thank you.

On Wed, Dec 16, 2015 at 6:14 PM, Rasika Perera  wrote:

> Hi Bhathiya,
>
> In doc [1] you can configure which key to select in
> "$PRODUCT_HOME/repository/conf/carbon.xml". Is this what you are looking for?
>
> <KeyStore>
> <Location>${carbon.home}/resources/security/wso2carbon.jks</Location>
> <Type>JKS</Type>
> <Password>wso2carbon</Password> <--- Password for key store
> <KeyAlias>wso2carbon</KeyAlias> <--- Which key to select
> <KeyPassword>wso2carbon</KeyPassword>
> </KeyStore>
>
> [1]
> https://docs.wso2.com/display/Carbon420/Configuring+Keystores+in+WSO2+Products
>
> Thanks,
> ~Rasika
>
> On Mon, Dec 14, 2015 at 4:58 PM, Bhathiya Jayasekara 
> wrote:
>
>> Hi Carbon team,
>>
>> Could you please tell me the $subject, when we have multiple keys stored
>> in given keystore? Is that configurable?
>>
>> Thanks,
>> --
>> *Bhathiya Jayasekara*
>> *Senior Software Engineer,*
>> *WSO2 inc., http://wso2.com *
>>
>> *Phone: +94715478185 <%2B94715478185>*
>> *LinkedIn: http://www.linkedin.com/in/bhathiyaj
>> *
>> *Twitter: https://twitter.com/bhathiyax *
>> *Blog: http://movingaheadblog.blogspot.com
>> *
>>
>> ___
>> Dev mailing list
>> Dev@wso2.org
>> http://wso2.org/cgi-bin/mailman/listinfo/dev
>>
>>
>
>
> --
> With Regards,
>
> *Rasika Perera*
> Software Engineer
> M: +94 71 680 9060 E: rasi...@wso2.com
> LinkedIn: http://lk.linkedin.com/in/rasika90
>
> WSO2 Inc. www.wso2.com
> lean.enterprise.middleware
>



-- 
*Bhathiya Jayasekara*
*Senior Software Engineer,*
*WSO2 inc., http://wso2.com *

*Phone: +94715478185*
*LinkedIn: http://www.linkedin.com/in/bhathiyaj
*
*Twitter: https://twitter.com/bhathiyax *
*Blog: http://movingaheadblog.blogspot.com
*
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] [AppFac][Docker]Best Java Docker Client Library

2015-12-16 Thread Samith Dassanayake
Hi Roshan,

Have you looked at [1]

[1]
https://wiki.jenkins-ci.org/display/JENKINS/CloudBees+Docker+Build+and+Publish+plugin

Regards,
Samith

On Wed, Dec 16, 2015 at 4:58 PM, Roshan Deniyage  wrote:

> Hi All,
> For the App Factory build artifact feature, we are going with the
> standalone Jenkins server for the next release as well. This is the
> existing method. The only change is that instead of building the user
> artifact and pushing it to some git repository, we are going to build a
> Docker image and push it to our private Docker registry.
>
> For this we are thinking of calling the Docker REST API inside our
> appfactory-jenkins-plugin (existing custom plugin). So, we need a Java
> Docker client library, and I found 4 libraries, listed below.
>
> (1) https://github.com/docker-java/docker-java
>  [based on jersey REST library and java 7]
>
> (2) https://github.com/spotify/docker-client
>   [Simple java client, seems like a primitive library]
>
> (3) https://github.com/shekhargulati/rx-docker-client
>   [Async-style library using Java 8 features]
>
> (4) https://github.com/jclouds/jclouds-labs/tree/master/docker
>  [This is used by the jCloud library]
>
> I am going to go ahead with (1) since it gives the required
> functionalities.
>
> If anyone has used any of those libraries or any other better library,
> please give your suggestions.
>
> Thanks,
> Roshan Deniyage
> Associate Technical Lead
> WSO2, Inc: http://wso2.com
>
> Mobile:  +94 777636406 / +1 408 667 6254
> Twitter:  *https://twitter.com/roshku *
> LinkedIn :  https://www.linkedin.com/in/roshandeniyage
>
>
> ___
> Dev mailing list
> Dev@wso2.org
> http://wso2.org/cgi-bin/mailman/listinfo/dev
>
>


-- 
Best Regards

Samith Dassanayake
Software Engineer | Cloud TG
WSO2, Inc. | http://wso2.com
lean. enterprise. middleware

Mobile : +947 76207351
Blog : buddycode.blogspot.com
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


[Dev] Governance Registry - Where to define a new handlebars helpers ?

2015-12-16 Thread Arnaud Charpentier
Hi,


We are working on the publisher of governance registry.


We are trying to internationalize some text strings with Jaggery and the
Handlebars template engine.

We understood that we have to define a new helper with the registerHelper
function of the Handlebars template engine, but we can't figure out in
which JavaScript file we should call this function.


We tried to do it in caramel.handlebars.client.js (path:
/wso2greg-5.1.0/repository/deployment/server/jaggeryapps/publisher/themes/default/js/caramel.handlebars.client.js)
and some other js files, but it didn't work.



Regards


Arnaud Charpentier
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] Dev studio errors and feedback

2015-12-16 Thread Dulitha Wijewantha
On Tue, Dec 15, 2015 at 10:26 PM, Awanthika Senarath 
wrote:

> Hi Dulitha,
>
> Find my responses in-line,
>
> 1) Can't have the same project name in the workspace. For example - I
> create a project called gateway-dev and have gateway-car, gateway-synapse,
> gateway-registry. I can't create a project with gateway-staging and have
> gateway-car. Is this an eclipse limitation?
> This is an eclipse limitation. Eclipse maintains a META file called
> .project for each and every project and the name in this .project file
> needs to be unique.
>
> 2) Can't have the same artifact name in the workspace. This one is bit
> annoying. I renamed the project to gateway-dev-synapse and
> gateway-staging-synapse. I can't have the api artifact with id.xml inside
> dev project and staging project in the same namespace:- the car picking
> form gets confused on the two assets. What's more - it will get corrupted
> and not have anything at all from synapse to pick. Below is an error that
> popped in the error console
> Yes, car files import artifacts based on the artifact name and in the
> artifact.xml of a car file it will list all the artifacts that need to be
> bundled in that particular car, hence the artifact names need to be unique
> for the car file. The error you have got is due to dev studio crashing
> before the file system files are properly refreshed, OR an attempt to
> manually edit the file system resources not via eclipse. Ideally this
> should be resolved by refreshing the workspace files through eclipse (right
> click the file and refresh)
>

I agree that the artifact name needs to be unique within the car file. But
in a scenario where you have 2 ESB configuration projects having an
artifact with the same name (in-sequence.xml), it crashes the car interface.



> ​
>
3) Deleting a resource (API) got me below error -
> Could you please let us know how to reproduce this? Did you attempt to
> rename the resource before deleting?
>
This was the scenario where I had the same artifact name in 2 different ESB
configuration projects.



>
> 4) Bulk import for resources -
> You are correct, this is a current limitation. You can import the
> synapse configuration with multiple proxies in it and dev-studio will
> generate the proxies for different artifacts in the synapse configuration.
> But currently it is not supported to import multiple proxy.xml files
> simultaneously.
>
> Regards
> Awanthika
>
>
> Awanthika Senarath
> Software Engineer, WSO2 Inc.
> Mobile: +94717681791
>
>
>
> On Wed, Dec 16, 2015 at 6:25 AM, Dulitha Wijewantha 
> wrote:
>
>> Hi guys,
>> I got some issues today working on the developer studio.
>>
>> 1) Can't have the same project name in the workspace. For example - I
>> create a project called gateway-dev and have gateway-car, gateway-synapse,
>> gateway-registry. I can't create a project with gateway-staging and have
>> gateway-car. Is this an eclipse limitation?
>>
>> 2) Can't have the same artifact name in the workspace. This one is bit
>> annoying. I renamed the project to gateway-dev-synapse and
>> gateway-staging-synapse. I can't have the api artifact with id.xml inside
>> dev project and staging project in the same namespace:- the car picking
>> form gets confused on the two assets. What's more - it will get corrupted
>> and not have anything at all from synapse to pick. Below is an error that
>> popped in the error console -
>>
>> org.eclipse.core.runtime.CoreException: The file is not synchronized with
>> the local file system.
>> at
>> org.eclipse.core.internal.filebuffers.ResourceTextFileBuffer.commitFileBufferContent(ResourceTextFileBuffer.java:338)
>> at
>> org.eclipse.core.internal.filebuffers.ResourceFileBuffer.commit(ResourceFileBuffer.java:325)
>> at
>> org.eclipse.ltk.core.refactoring.TextFileChange.commit(TextFileChange.java:233)
>> at
>> org.eclipse.ltk.core.refactoring.TextChange.perform(TextChange.java:240)
>> at
>> org.eclipse.ltk.core.refactoring.CompositeChange.perform(CompositeChange.java:278)
>> at
>> org.eclipse.ltk.core.refactoring.CompositeChange.perform(CompositeChange.java:278)
>> at
>> org.eclipse.ltk.core.refactoring.PerformChangeOperation$1.run(PerformChangeOperation.java:258)
>> at org.eclipse.core.internal.resources.Workspace.run(Workspace.java:2345)
>> at
>> org.eclipse.ltk.core.refactoring.PerformChangeOperation.executeChange(PerformChangeOperation.java:306)
>> at
>> org.eclipse.ltk.internal.ui.refactoring.UIPerformChangeOperation.executeChange(UIPerformChangeOperation.java:92)
>> at
>> org.eclipse.ltk.core.refactoring.PerformChangeOperation.run(PerformChangeOperation.java:218)
>> at org.eclipse.core.internal.resources.Workspace.run(Workspace.java:2345)
>> at
>> org.eclipse.ltk.internal.ui.refactoring.WorkbenchRunnableAdapter.run(WorkbenchRunnableAdapter.java:87)
>> at
>> org.eclipse.jface.operation.ModalContext$ModalContextThread.run(ModalContext.java:121)
>>
>>
>> org.eclipse.core.runtime.CoreException: The file is not synchronized with
>> th

Re: [Dev] HTTP event receiver to accept batch of same type events

2015-12-16 Thread Udara Rathnayake
I'm able to publish an array of objects [1] with your help. Thanks, Lasantha.

[1]

[{
  "event": {
    "payloadData": {

    }
  }
},
{
  "event": {
    "payloadData": {

    }
  }
}]


On Wed, Dec 16, 2015 at 3:12 AM, Lasantha Fernando 
wrote:

> Hi Udara,
>
> You can do this using an HTTP receiver as well. For this, you can use XML
> input mapping and provide a parent selector xpath and if there are multiple
> child elements within that larger XML element, they will be taken as
> multiple events.
>
> Alternatively, if you are using JSON input mapping, if you send it as a
> json array, the objects in the array will be taken as individual events.
>
> Thanks,
> Lasantha
>
> On 16 December 2015 at 02:59, Udara Rathnayake  wrote:
>
>> Hi,
>>
>> My requirement is to receive a set of events in a single request at DAS
>> side.
>> eg:- array of events
>>
>> is $subject possible else what is the recommended approach other than a
>> http event receiver?
>>
>> --
>> Regards,
>> UdaraR
>>
>
>
>
> --
> *Lasantha Fernando*
> Senior Software Engineer - Data Technologies Team
> WSO2 Inc. http://wso2.com
>
> email: lasan...@wso2.com
> mobile: (+94) 71 5247551
>



-- 
Regards,
UdaraR
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] [AppFac][Docker]Best Java Docker Client Library

2015-12-16 Thread Roshan Deniyage
Hi Samith,
Yes, we evaluated [1] and [2] and are not going to use those plugins, since
our Dockerfiles are going to be dynamic and should be available to the
Jenkins job before the build begins. That is also possible in a number of
ways, but we decided to change only the deployment part in the
appfactory-jenkins plugin to build Docker-based artifacts.
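
As a rough sketch of what that deployment step could look like with
docker-java -- this assumes docker-java 2.x, and the daemon socket,
Dockerfile location, image name, and registry here are all hypothetical:

import java.io.File;

import com.github.dockerjava.api.DockerClient;
import com.github.dockerjava.core.DockerClientBuilder;
import com.github.dockerjava.core.command.BuildImageResultCallback;
import com.github.dockerjava.core.command.PushImageResultCallback;

public class DockerBuildAndPush {

    public static void main(String[] args) throws Exception {
        // Connect to the local Docker daemon.
        DockerClient docker =
                DockerClientBuilder.getInstance("unix:///var/run/docker.sock").build();

        // Build an image from the dynamically generated Dockerfile.
        String imageId = docker.buildImageCmd(new File("target/Dockerfile"))
                .withTag("registry.example.com/myapp:1.0.0")
                .exec(new BuildImageResultCallback())
                .awaitImageId();

        // Push the built image to the private registry.
        docker.pushImageCmd("registry.example.com/myapp")
                .withTag("1.0.0")
                .exec(new PushImageResultCallback())
                .awaitSuccess();

        System.out.println("Built and pushed image: " + imageId);
    }
}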

Thanks,
Roshan Deniyage
Associate Technical Lead
WSO2, Inc: http://wso2.com

Mobile:  +94 777636406 / +1 408 667 6254
Twitter:  *https://twitter.com/roshku *
LinkedIn :  https://www.linkedin.com/in/roshandeniyage


On Wed, Dec 16, 2015 at 9:33 PM, Samith Dassanayake  wrote:

> Hi Roshan,
>
> Have you looked at [1]
>
> [1]
> https://wiki.jenkins-ci.org/display/JENKINS/CloudBees+Docker+Build+and+Publish+plugin
>
> Regards,
> Samith
>
> On Wed, Dec 16, 2015 at 4:58 PM, Roshan Deniyage  wrote:
>
>> Hi All,
>>For App Factory build artifact feature, we are going with the stand
>> alone Jenkins server for the next release as well. This is the existing
>> method. The only change is instead of building user artifact and push it to
>> some git repository, we are going to build a docker image and push it to
>> our private docker registry.
>>
>> For this we think of calling docker REST API inside our
>> appfactory-jenkins-plugin (existing custom plugin). So, need to have a java
>> docker client library and I found 4 libraries as below.
>>
>> (1) https://github.com/docker-java/docker-java
>>  [based on jersey REST library and java 7]
>>
>> (2) https://github.com/spotify/docker-client
>>   [Simple java client, seems like a primitive library]
>>
>> (3) https://github.com/shekhargulati/rx-docker-client
>>   [Async-style library using Java 8 features]
>>
>> (4) https://github.com/jclouds/jclouds-labs/tree/master/docker
>>  [This is used by the jCloud library]
>>
>> I am going to go ahead with (1) since it gives the required
>> functionalities.
>>
>> If anyone has used any of those libraries or any other better library,
>> please give your suggestions.
>>
>> Thanks,
>> Roshan Deniyage
>> Associate Technical Lead
>> WSO2, Inc: http://wso2.com
>>
>> Mobile:  +94 777636406 / +1 408 667 6254
>> Twitter:  *https://twitter.com/roshku *
>> LinkedIn :  https://www.linkedin.com/in/roshandeniyage
>>
>>
>> ___
>> Dev mailing list
>> Dev@wso2.org
>> http://wso2.org/cgi-bin/mailman/listinfo/dev
>>
>>
>
>
> --
> Best Regards
>
> Samith Dassanayake
> Software Engineer | Cloud TG
> WSO2, Inc. | http://wso2.com
> lean. enterprise. middleware
>
> Mobile : +947 76207351
> Blog : buddycode.blogspot.com
>
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] [AppFac][Docker]Best Java Docker Client Library

2015-12-16 Thread Roshan Deniyage
Adding the missing links:

[1]
https://wiki.jenkins-ci.org/display/JENKINS/CloudBees+Docker+Build+and+Publish+plugin
[2] https://wiki.jenkins-ci.org/display/JENKINS/Docker+build+step+plugin


Hi Samith,
> Yes, we evaluated [1] and [2] and are not going to use those plugins, since
> our Dockerfiles are going to be dynamic and should be available to the
> Jenkins job before the build begins. That is also possible in a number of
> ways, but we decided to change only the deployment part in the
> appfactory-jenkins plugin to build Docker-based artifacts.
>
> Thanks,
> Roshan Deniyage
> Associate Technical Lead
> WSO2, Inc: http://wso2.com
>
> Mobile:  +94 777636406 / +1 408 667 6254
> Twitter:  *https://twitter.com/roshku *
> LinkedIn :  https://www.linkedin.com/in/roshandeniyage
>
>
> On Wed, Dec 16, 2015 at 9:33 PM, Samith Dassanayake 
> wrote:
>
>> Hi Roshan,
>>
>> Have you looked at [1]
>>
>> ​​
>> [1]
>> https://wiki.jenkins-ci.org/display/JENKINS/CloudBees+Docker+Build+and+Publish+plugin
>>
>> Regards,
>> Samith
>>
>> On Wed, Dec 16, 2015 at 4:58 PM, Roshan Deniyage 
>> wrote:
>>
>>> Hi All,
>>>For App Factory build artifact feature, we are going with the stand
>>> alone Jenkins server for the next release as well. This is the existing
>>> method. The only change is instead of building user artifact and push it to
>>> some git repository, we are going to build a docker image and push it to
>>> our private docker registry.
>>>
>>> For this we think of calling docker REST API inside our
>>> appfactory-jenkins-plugin (existing custom plugin). So, need to have a java
>>> docker client library and I found 4 libraries as below.
>>>
>>> (1) https://github.com/docker-java/docker-java
>>>  [based on jersey REST library and java 7]
>>>
>>> (2) https://github.com/spotify/docker-client
>>>   [Simple java client, seems like a primitive library]
>>>
>>> (3) https://github.com/shekhargulati/rx-docker-client
>>>   [Async-style library using Java 8 features]
>>>
>>> (4) https://github.com/jclouds/jclouds-labs/tree/master/docker
>>>  [This is used by the jCloud library]
>>>
>>> I am going to go ahead with (1) since it gives the required
>>> functionalities.
>>>
>>> If anyone has used any of those libraries or any other better library,
>>> please give your suggestions.
>>>
>>> Thanks,
>>> Roshan Deniyage
>>> Associate Technical Lead
>>> WSO2, Inc: http://wso2.com
>>>
>>> Mobile:  +94 777636406 / +1 408 667 6254
>>> Twitter:  *https://twitter.com/roshku *
>>> LinkedIn :  https://www.linkedin.com/in/roshandeniyage
>>>
>>>
>>> ___
>>> Dev mailing list
>>> Dev@wso2.org
>>> http://wso2.org/cgi-bin/mailman/listinfo/dev
>>>
>>>
>>
>>
>> --
>> Best Regards
>>
>> Samith Dassanayake
>> Software Engineer | Cloud TG
>> WSO2, Inc. | http://wso2.com
>> lean. enterprise. middleware
>>
>> Mobile : +947 76207351
>> Blog : buddycode.blogspot.com
>>
>
>
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] Dev studio errors and feedback

2015-12-16 Thread Awanthika Senarath
Hello Dulitha,


I agree that the artifact name needs to be unique within the car file. But
in a scenario where you have 2 ESB configuration projects having an
artifact with the same name (in-sequence.xml), it crashes the car interface.

If you have the same ESB artifact in different configuration projects, you
cannot pack them in the same car file, because at the car file level we
only consider the artifact name and not the folder hierarchy (which
involves the ESB project). The ESB project is introduced by Dev Studio to
maintain artifacts. When you deploy the car file in the ESB, the ESB will
only deploy the artifacts as single entries and not the project, and I fail
to understand your reasoning behind having the same artifact name, as you
won't be able to deploy them in the same ESB instance in a single car file
anyway.

But I accept that this should be conveyed to the user via a message rather
than crashing the car file interface.


About the issue you faced when deleting artifacts, I was not able to
reproduce it:

1. Created two ESB projects
2. Created 2 REST API artifacts with the same name in both projects
3. Deleted one REST API

It worked as expected for me in Dev Studio 3.8.0.

Could you please give me your Dev Studio version and the steps to reproduce?


Regards
Awanthika

Awanthika Senarath
Software Engineer, WSO2 Inc.
Mobile: +94717681791



On Wed, Dec 16, 2015 at 10:23 PM, Dulitha Wijewantha 
wrote:

>
>
> On Tue, Dec 15, 2015 at 10:26 PM, Awanthika Senarath 
> wrote:
>
>> Hi Dulitha,
>>
>> Find my responses in-line,
>>
>> 1) Can't have the same project name in the workspace. For example - I
>> create a project called gateway-dev and have gateway-car, gateway-synapse,
>> gateway-registry. I can't create a project with gateway-staging and have
>> gateway-car. Is this an eclipse limitation?
>> This is an eclipse limitation. Eclipse maintains a META file called
>> .project for each and every project and the name in this .project file
>> needs to be unique.
>>
>> 2) Can't have the same artifact name in the workspace. This one is bit
>> annoying. I renamed the project to gateway-dev-synapse and
>> gateway-staging-synapse. I can't have the api artifact with id.xml inside
>> dev project and staging project in the same namespace:- the car picking
>> form gets confused on the two assets. What's more - it will get corrupted
>> and not have anything at all from synapse to pick. Below is an error that
>> popped in the error console
>> Yes, car files import artifacts based on the artifact name and in the
>> artifact.xml of a car file it will list all the artifacts that needs to be
>> bundled in that particular car, hence the artifact names needs to be unique
>> for the car file. The error you have got is due to dev studio crashing
>> before the file system files are properly refreshed, OR an attempt to
>> manually edit the file system resources not via eclipse. Ideally this
>> should be resolved by refreshing the workspace files through eclipse (right
>> click the file and refresh)
>>
>
> I agree that the artifact name needs to be unique within the car file. But
> in a scenario where you have 2 ESB configuration projects having an
> artifact with the same name (in-sequence.xml), it crashes the car interface.
>
>
>
>> ​
>>
> 3) Deleting a resource (API) got me below error -
>> Could you please let us know how to reproduce this? Did you attempt
>> to rename the resource before deleting?
>>
> This was the scenario where I had the same artifact name in 2 different
> ESB configuration projects.
>
>
>
>>
>> 4) Bulk import for resources -
>> You are correct, this is a current limitation. You can import the
>> synapse configuration with multiple proxies in it and dev-studio will
>> generate the proxies for different artifacts in the synapse configuration.
>> But currently it is not supported to import multiple proxy.xml files
>> simultaneously.
>>
>> Regards
>> Awanthika
>>
>>
>> Awanthika Senarath
>> Software Engineer, WSO2 Inc.
>> Mobile: +94717681791
>>
>>
>>
>> On Wed, Dec 16, 2015 at 6:25 AM, Dulitha Wijewantha 
>> wrote:
>>
>>> Hi guys,
>>> I got some issues today working on the developer studio.
>>>
>>> 1) Can't have the same project name in the workspace. For example - I
>>> create a project called gateway-dev and have gateway-car, gateway-synapse,
>>> gateway-registry. I can't create a project with gateway-staging and have
>>> gateway-car. Is this an eclipse limitation?
>>>
>>> 2) Can't have the same artifact name in the workspace. This one is bit
>>> annoying. I renamed the project to gateway-dev-synapse and
>>> gateway-staging-synapse. I can't have the api artifact with id.xml inside
>>> dev project and staging project in the same namespace:- the car picking
>>> form gets confused on the two assets. What's more - it will get corrupted
>>> and not have anything at all from synapse to pick. Below is an error that
>>> popped in the error console -
>>>
>>> org.eclipse.core.runtime.CoreException: The file is not synchronized
>>>

Re: [Dev] HTTPS REST Client

2015-12-16 Thread Dharshana Warusavitharana
Hi Malmee,

The SSL issue with the REST client comes up because you are invoking an
HTTPS backend without the proper certificate in the trust path.

You can set the keys via system properties as follows:

// Point the JVM's default trust store at the WSO2 Carbon keystore so that
// outbound HTTPS calls trust the certificates it contains.
String trustStore = System.getProperty("carbon.home") + File.separator
        + "repository" + File.separator + "resources" + File.separator
        + "security" + File.separator + "wso2carbon.jks";
System.setProperty("javax.net.ssl.trustStore", trustStore);
System.setProperty("javax.net.ssl.trustStorePassword", "wso2carbon");
System.setProperty("javax.net.ssl.trustStoreType", "JKS");


In this sample the WSO2 Carbon keystore is set as the trust store via
system properties; you can use the same approach.

The Apache HTTP client is a fairly old client; instead you can try
something like the JAX-RS 2 client or the RESTEasy client.

In all these clients you need to add the above code segment to export the
keys before making a secured HTTPS call.


Thank you,

Dharshana.


On Wed, Dec 16, 2015 at 8:56 PM, Isuru Haththotuwa  wrote:

> Hi Malmee,
>
> If you have used Java built-in methods, your tool would not need to
> depend on an external library such as the Apache HTTP client, as you have
> mentioned. This is fine for a simple use case. However, please note that
> if you need more functionality, such as support for all HTTP operations
> and other capabilities, it would be advisable to use the Apache HTTP
> client or any other suitable existing library rather than implementing it
> again yourself.
>
> For the certificate issue that you are getting, it's possible to override
> the default certificate validation mechanism and plug in your own
> implementation which can disable certificate validation (for testing
> purposes). For HTTP Client 4, please see [1].
>
> [1].
> http://stackoverflow.com/questions/2703161/how-to-ignore-ssl-certificate-errors-in-apache-httpclient-4-0
>
> On Wed, Dec 16, 2015 at 8:29 PM, Malmee Weerasinghe 
> wrote:
>
>> Hi All,
>>
>> We have developed an HTTPS REST client using Java built-in methods, which
>> works properly [1]. This client is configured to allow self-signed
>> certificates.
>>
>> When using the Apache HTTP client we get a certificate error:
>> javax.net.ssl.SSLPeerUnverifiedException: Host name '192.168.30.227'
>> does not match the certificate subject provided by the peer (CN=localhost,
>> O=WSO2, L=Mountain View, ST=CA, C=US).
>>
>> Which would be the better choice: the Apache HTTP client or the Java
>> built-in methods? Your suggestions are highly appreciated.
>>
>> [1]
>> https://github.com/nishadi/product-private-paas/blob/master/tools/migration/ppaas-artifact-converter/src/main/java/org/wso2/ppaas/tools/artifactmigration/RestClient.java
>>
>> --
>> Malmee Weerasinghe
>> WSO2 Intern
>> mobile : (+94)* 71 7601905* |   email :   
>> mal...@wso2.com
>>
>> ___
>> Dev mailing list
>> Dev@wso2.org
>> http://wso2.org/cgi-bin/mailman/listinfo/dev
>>
>>
>
>
> --
> Thanks and Regards,
>
> Isuru H.
> +94 716 358 048* *
>
>
>
> ___
> Dev mailing list
> Dev@wso2.org
> http://wso2.org/cgi-bin/mailman/listinfo/dev
>
>


-- 

Dharshana Warusavitharana
Senior Software Engineer , Test Automation
WSO2 Inc. http://wso2.com
email : dharsha...@wso2.com 
Tel  : +94 11 214 5345
Fax :+94 11 2145300
cell : +94770342233
blog : http://dharshanaw.blogspot.com

lean . enterprise . middleware
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] HTTPS REST Client

2015-12-16 Thread Udara Liyanage
Hi,

As Isuru mentioned, it is better to use an already implemented library as
it provides more functionality. Also, use an appropriate connection
manager; as I remember, the default connection manager allows only 2
concurrent connections per route. You will find a sample implementation in
Stratos [1].

[1]
https://github.com/apache/stratos/blob/master/components/org.apache.stratos.metadata.client/src/main/java/org/apache/stratos/metadata/client/rest/DefaultRestClient.java
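
For instance, a minimal HttpClient 4.x sketch of raising the pool limits
(the sizes here are illustrative, not recommendations):

import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.impl.conn.PoolingHttpClientConnectionManager;

public class PooledClient {

    public static CloseableHttpClient create() {
        PoolingHttpClientConnectionManager cm = new PoolingHttpClientConnectionManager();
        cm.setMaxTotal(100);          // total concurrent connections in the pool
        cm.setDefaultMaxPerRoute(20); // concurrent connections per route
        return HttpClients.custom()
                .setConnectionManager(cm)
                .build();
    }
}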

On Thu, Dec 17, 2015 at 9:31 AM, Dharshana Warusavitharana <
dharsha...@wso2.com> wrote:

> Hi Malmee,
>
> The SSL issue with the REST client comes up because you are invoking an
> HTTPS backend without the proper certificate in the trust path.
>
> you can set keys to the system path using following
>
> String trustStore = System.getProperty("carbon.home") + File.separator + 
> "repository" + File.separator +
> "resources" + File.separator + "security" + 
> File.separator + "wso2carbon.jks";
> System.setProperty("javax.net.ssl.trustStore", trustStore);
> System.setProperty("javax.net.ssl.trustStorePassword", "wso2carbon");
> System.setProperty("javax.net.ssl.trustStoreType", "JKS");
>
>
> In this sample the WSO2 Carbon keystore is set as the trust store via
> system properties; you can use the same approach.
>
> The Apache HTTP client is a fairly old client; instead you can try
> something like the JAX-RS 2 client or the RESTEasy client.
>
> In All these clients you need to add above code segment to export keys to 
> call secured HTTP call.
>
>
> Thank you,
>
> Dharshana.
>
>
> On Wed, Dec 16, 2015 at 8:56 PM, Isuru Haththotuwa 
> wrote:
>
>> Hi Malmee,
>>
>> If you have used Java built-in methods, your tool would not need to
>> depend on an external library such as the Apache HTTP client, as you have
>> mentioned. This is fine for a simple use case. However, please note that
>> if you need more functionality, such as support for all HTTP operations
>> and other capabilities, it would be advisable to use the Apache HTTP
>> client or any other suitable existing library rather than implementing it
>> again yourself.
>>
>> For the certificate issue that you are getting, it's possible to override
>> the default certificate validation mechanism and plug in your own
>> implementation which can disable certificate validation (for testing
>> purposes). For HTTP Client 4, please see [1].
>>
>> [1].
>> http://stackoverflow.com/questions/2703161/how-to-ignore-ssl-certificate-errors-in-apache-httpclient-4-0
>>
>> On Wed, Dec 16, 2015 at 8:29 PM, Malmee Weerasinghe 
>> wrote:
>>
>>> Hi All,
>>>
>>> We have developed an HTTPS REST client using Java built-in methods, which
>>> works properly [1]. This client is configured to allow self-signed
>>> certificates.
>>>
>>> When using the Apache HTTP client we get a certificate error:
>>> javax.net.ssl.SSLPeerUnverifiedException: Host name '192.168.30.227'
>>> does not match the certificate subject provided by the peer (CN=localhost,
>>> O=WSO2, L=Mountain View, ST=CA, C=US).
>>>
>>> Which would be the better choice: the Apache HTTP client or the Java
>>> built-in methods? Your suggestions are highly appreciated.
>>>
>>> [1]
>>> https://github.com/nishadi/product-private-paas/blob/master/tools/migration/ppaas-artifact-converter/src/main/java/org/wso2/ppaas/tools/artifactmigration/RestClient.java
>>>
>>> --
>>> Malmee Weerasinghe
>>> WSO2 Intern
>>> mobile : (+94)* 71 7601905* |   email :   
>>> mal...@wso2.com
>>>
>>> ___
>>> Dev mailing list
>>> Dev@wso2.org
>>> http://wso2.org/cgi-bin/mailman/listinfo/dev
>>>
>>>
>>
>>
>> --
>> Thanks and Regards,
>>
>> Isuru H.
>> +94 716 358 048* *
>>
>>
>>
>> ___
>> Dev mailing list
>> Dev@wso2.org
>> http://wso2.org/cgi-bin/mailman/listinfo/dev
>>
>>
>
>
> --
>
> Dharshana Warusavitharana
> Senior Software Engineer , Test Automation
> WSO2 Inc. http://wso2.com
> email : dharsha...@wso2.com 
> Tel  : +94 11 214 5345
> Fax :+94 11 2145300
> cell : +94770342233
> blog : http://dharshanaw.blogspot.com
>
> lean . enterprise . middleware
>
> ___
> Dev mailing list
> Dev@wso2.org
> http://wso2.org/cgi-bin/mailman/listinfo/dev
>
>


-- 

Udara Liyanage
Software Engineer
WSO2, Inc.: http://wso2.com
lean. enterprise. middleware

web: http://udaraliyanage.wordpress.com
phone: +94 71 443 6897
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] DAS going OOM frequently

2015-12-16 Thread Anjana Fernando
Hi guys,

So as Sumedha told me, the error that has come up is an OOM perm gen error.
We suspect it's just because they've installed many features there in the
IoT server, so lots of classes are being loaded. After increasing the perm
gen size, Sumedha mentioned that the issue hasn't come back yet.
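
For reference, on Java 7 the PermGen cap can be raised by extending the JVM
memory options passed to the server, e.g. in
<PRODUCT_HOME>/bin/wso2server.sh (values illustrative, not tuned):

    -Xms256m -Xmx1024m -XX:MaxPermSize=512m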

Cheers,
Anjana.
On Dec 16, 2015 8:02 PM, "Niranda Perera"  wrote:

> Hi Gihan,
>
> The memory can be set by using the conf parameters, i.e.
> "spark.executor.memory".
>
> rgds
>
> On Wed, Dec 16, 2015 at 7:01 PM, Gihan Anuruddha  wrote:
>
>> Hi Niranda,
>>
>> So let's say we have to run embedded DAS in a memory-restricted
>> environment. Where can I define the Spark memory allocation
>> configuration?
>>
>> Regards,
>> Gihan
>>
>> On Wed, Dec 16, 2015 at 6:55 PM, Niranda Perera  wrote:
>>
>>> Hi Sumedha,
>>>
>>> I checked the heap dump you provided, and its size is around 230MB.
>>> I presume this was not an OOM scenario.
>>>
>>> As for Spark memory usage, when you use Spark in local mode, the
>>> processing will happen inside that JVM itself. So, we have to make sure
>>> that we allocate enough memory for that.
>>>
>>> Rgds
>>>
>>> On Wed, Dec 16, 2015 at 6:11 PM, Anjana Fernando 
>>> wrote:
>>>
 Hi Ayoma,

 Thanks for checking up on it, actually "getAllIndexedTables" doesn't
 return the Set here, it returns an array that was previously populated in
 the refresh operation, so no need to synchronize that method.

 Cheers,
 Anjana.

 On Wed, Dec 16, 2015 at 5:44 PM, Ayoma Wijethunga 
 wrote:

> And, I missed mentioning that when this race condition / state
> corruption happens, all "get" operations performed on the Set/Map get
> blocked, resulting in an OOM situation. [1]
> has all that explained nicely. I have checked a heap dump in a similar
> situation and if you take one, you will clearly see many threads waiting 
> to
> access this Set instance.
>
> [1]
> http://mailinator.blogspot.gr/2009/06/beautiful-race-condition.html
>
> On Wed, Dec 16, 2015 at 5:37 PM, Ayoma Wijethunga 
> wrote:
>
>> Hi Anjana,
>>
>> Sorry, I didn't notice that you have already replied this thread.
>>
>> However, please consider my point on "getAllIndexedTables" as well.
>>
>> Thank you,
>> Ayoma.
>>
>> On Wed, Dec 16, 2015 at 5:12 PM, Anjana Fernando 
>> wrote:
>>
>>> Hi Sumedha,
>>>
>>> Thank you for reporting the issue. I've fixed the concurrent
>>> modification exception issue, where, actually both the methods
>>> "addIndexedTable" and "removeIndexedTable" needed to be synchronized, 
>>> since
>>> they both work on the shared Set object there.
>>>
>>> As for the OOM issue, can you please share a heap dump when the OOM
>>> happened. So we can see what is causing this. And also, I see there are
>>> multiple scripts running at the same time, so this actually can be a
>>> legitimate error also, where the server actually doesn't have enough 
>>> memory
>>> to continue its operations. @Niranda, please share if there is any info 
>>> on
>>> tuning Spark's memory requirements.
>>>
>>> Cheers,
>>> Anjana.
>>>
>>> On Wed, Dec 16, 2015 at 3:32 PM, Sumedha Rubasinghe <
>>> sume...@wso2.com> wrote:
>>>
 We have DAS Lite included in IoT Server and several summarisation
 scripts deployed. The server is going OOM frequently with the following
 exception.

 Shouldn't this[1] method be synchronised?

 [1]
 https://github.com/wso2/carbon-analytics/blob/master/components/analytics-core/org.wso2.carbon.analytics.dataservice.core/src/main/java/org/wso2/carbon/analytics/dataservice/core/indexing/AnalyticsIndexedTableStore.java#L45


 >>>
 [2015-12-16 15:11:00,004]  INFO
 {org.wso2.carbon.analytics.spark.core.AnalyticsTask} -  Executing the
 schedule task for: Light_Sensor_Script for tenant id: -1234
 [2015-12-16 15:11:00,005]  INFO
 {org.wso2.carbon.analytics.spark.core.AnalyticsTask} -  Executing the
 schedule task for: Magnetic_Sensor_Script for tenant id: -1234
 [2015-12-16 15:11:00,005]  INFO
 {org.wso2.carbon.analytics.spark.core.AnalyticsTask} -  Executing the
 schedule task for: Pressure_Sensor_Script for tenant id: -1234
 [2015-12-16 15:11:00,006]  INFO
 {org.wso2.carbon.analytics.spark.core.AnalyticsTask} -  Executing the
 schedule task for: Proximity_Sensor_Script for tenant id: -1234
 [2015-12-16 15:11:00,006]  INFO
 {org.wso2.carbon.analytics.spark.core.AnalyticsTask} -  Executing the
 schedule task for: Rotation_Sensor_Script for tenant id: -1234
 [2015-12-16 15:11:00,007]  INFO
 {org.wso2.carbon.

[Dev] Latency Calculation Feature in WSO2 GW

2015-12-16 Thread Nadeeshaan Gunasinghe
Hi all,
It has been a requirement to implement a feature for keeping track of the
various types of latency metrics in WSO2 GW. At the moment I am involved in
implementing this latency metrics calculation feature according to the
architecture proposed at [1].
As the first step, I am capturing the raw data required for calculating
the various latency values. This raw data is being collected as follows at
the moment:

*Server Side*

   - Source Connection Creation time
   - Source Connection life time
   - Request header read time
   - Request body read time
   - Request read time


*Client Side*

   - Client connection creation time
   - Client Connection life time
   - Response header read time
   - Response body read time
   - Response read time


As the initial step, I am going to keep track of this raw data and
transport it through the carbon message (see the sketch after the list
below). Then a latency calculation engine is going to be implemented to
calculate the various types of latency values such as:

   - Average Throughput of a connection
   - Average Latency of a connection
   - Average jitter of a connection
   - Message build time
   - Message encoding time
   - Message mediation time
   - etc

Then a data publisher component will be implemented for publishing data to
JMX and DAS.

During the implementation, additional raw data may need to be captured
depending on the type of metrics we are going to calculate. In such
situations, I will update this thread with the latest status and findings.
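
As an illustration of that first step, the raw-data holder that travels
with the message could look roughly like the following (the class, method,
and property names here are hypothetical, not the actual GW API):

import java.util.concurrent.TimeUnit;

public class LatencyRawData {

    private long sourceConnectionCreatedNanos;
    private long requestHeaderReadNanos;
    private long requestBodyReadNanos;

    public void onSourceConnectionCreated() {
        sourceConnectionCreatedNanos = System.nanoTime();
    }

    public void onRequestHeaderRead() {
        requestHeaderReadNanos = System.nanoTime();
    }

    public void onRequestBodyRead() {
        requestBodyReadNanos = System.nanoTime();
    }

    // Request header read time, measured from connection creation
    public long getRequestHeaderReadMillis() {
        return TimeUnit.NANOSECONDS.toMillis(
                requestHeaderReadNanos - sourceConnectionCreatedNanos);
    }

    // Total request read time (header + body)
    public long getRequestReadTimeMillis() {
        return TimeUnit.NANOSECONDS.toMillis(
                requestBodyReadNanos - sourceConnectionCreatedNanos);
    }
}

The holder would then be attached to the carbon message, for example via
carbonMessage.setProperty("LATENCY_RAW_DATA", rawData) (a hypothetical
property name), so the calculation engine can pick it up downstream.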

[1] [Architecture] Implementing Latency Metrics Calculation Feature in GW

Regards

*Nadeeshaan Gunasinghe*
Software Engineer, WSO2 Inc. http://wso2.com
+94770596754 | nadeesh...@wso2.com | Skype: nadeeshaan.gunasinghe

  


___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] Latency Calculation Feature in WSO2 GW

2015-12-16 Thread Kasun Indrasiri
We may also need some high-level stats too, for instance the things we
have included in ESB 4.10.

On Thu, Dec 17, 2015 at 10:17 AM, Nadeeshaan Gunasinghe  wrote:

> Hi all,
> It has been a requirement to implement a feature for keeping track of the
> various types of latency metrics in WSO2 GW. [...]



-- 
Kasun Indrasiri
Software Architect
WSO2, Inc.; http://wso2.com
lean.enterprise.middleware

cell: +94 77 556 5206
Blog : http://kasunpanorama.blogspot.com/
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


[Dev] [DEV][DAS][DSS] Error after installing DSS features on DAS

2015-12-16 Thread Chanuka Dissanayake
Hi

I'm getting the following Spark-related error after installing DSS (version
4.3.4) features on DAS 3.0.0, and cannot execute Spark queries.


[2015-12-17 10:27:42,811] ERROR
> {org.wso2.carbon.analytics.spark.core.internal.AnalyticsComponent} -  Error
> initializing analytics executor: com/codahale/metrics/json/MetricsModule
> java.lang.NoClassDefFoundError: com/codahale/metrics/json/MetricsModule
> at java.lang.Class.forName0(Native Method)
> at java.lang.Class.forName(Class.java:11)
> at
> org.apache.spark.metrics.MetricsSystem$$anonfun$registerSinks$1.apply(MetricsSystem.scala:186)
> at
> org.apache.spark.metrics.MetricsSystem$$anonfun$registerSinks$1.apply(MetricsSystem.scala:182)
> at
> scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
> at
> scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
> at
> scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:226)
> at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:39)
> at scala.collection.mutable.HashMap.foreach(HashMap.scala:98)
> at
> org.apache.spark.metrics.MetricsSystem.registerSinks(MetricsSystem.scala:182)
> at org.apache.spark.metrics.MetricsSystem.start(MetricsSystem.scala:99)
> at org.apache.spark.SparkContext.(SparkContext.scala:506)
> at
> org.apache.spark.api.java.JavaSparkContext.(JavaSparkContext.scala:61)
> at
> org.wso2.carbon.analytics.spark.core.internal.SparkAnalyticsExecutor.initializeClient(SparkAnalyticsExecutor.java:296)
> at
> org.wso2.carbon.analytics.spark.core.internal.SparkAnalyticsExecutor.initializeSparkServer(SparkAnalyticsExecutor.java:174)
> at
> org.wso2.carbon.analytics.spark.core.internal.AnalyticsComponent.activate(AnalyticsComponent.java:69)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at
> org.eclipse.equinox.internal.ds.model.ServiceComponent.activate(ServiceComponent.java:260)
> at
> org.eclipse.equinox.internal.ds.model.ServiceComponentProp.activate(ServiceComponentProp.java:146)
> at
> org.eclipse.equinox.internal.ds.model.ServiceComponentProp.build(ServiceComponentProp.java:345)
> at
> org.eclipse.equinox.internal.ds.InstanceProcess.buildComponent(InstanceProcess.java:620)
> at
> org.eclipse.equinox.internal.ds.InstanceProcess.buildComponents(InstanceProcess.java:197)
> at org.eclipse.equinox.internal.ds.Resolver.getEligible(Resolver.java:343)
> at
> org.eclipse.equinox.internal.ds.SCRManager.serviceChanged(SCRManager.java:222)
> at
> org.eclipse.osgi.internal.serviceregistry.FilteredServiceListener.serviceChanged(FilteredServiceListener.java:107)
> at
> org.eclipse.osgi.framework.internal.core.BundleContextImpl.dispatchEvent(BundleContextImpl.java:861)
> at
> org.eclipse.osgi.framework.eventmgr.EventManager.dispatchEvent(EventManager.java:230)
> at
> org.eclipse.osgi.framework.eventmgr.ListenerQueue.dispatchEventSynchronous(ListenerQueue.java:148)
> at
> org.eclipse.osgi.internal.serviceregistry.ServiceRegistry.publishServiceEventPrivileged(ServiceRegistry.java:819)
> at
> org.eclipse.osgi.internal.serviceregistry.ServiceRegistry.publishServiceEvent(ServiceRegistry.java:771)
> at
> org.eclipse.osgi.internal.serviceregistry.ServiceRegistrationImpl.register(ServiceRegistrationImpl.java:130)
> at
> org.eclipse.osgi.internal.serviceregistry.ServiceRegistry.registerService(ServiceRegistry.java:214)
> at
> org.eclipse.osgi.framework.internal.core.BundleContextImpl.registerService(BundleContextImpl.java:433)
> at
> org.eclipse.osgi.framework.internal.core.BundleContextImpl.registerService(BundleContextImpl.java:451)
> at
> org.eclipse.osgi.framework.internal.core.BundleContextImpl.registerService(BundleContextImpl.java:950)
> at
> org.wso2.carbon.analytics.dataservice.AnalyticsDataServiceComponent.activate(AnalyticsDataServiceComponent.java:64)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at
> org.eclipse.equinox.internal.ds.model.ServiceComponent.activate(ServiceComponent.java:260)
> at
> org.eclipse.equinox.internal.ds.model.ServiceComponentProp.activate(ServiceComponentProp.java:146)
> at
> org.eclipse.equinox.internal.ds.model.ServiceComponentProp.build(ServiceComponentProp.java:345)
> at
> org.eclipse.equinox.internal.ds.InstanceProcess.buildComponent(InstanceProcess.java:620)
> at
> org.eclipse.equinox.internal.ds.InstanceProcess.buildComponents(InstanceProcess.java:197)
> at org.eclipse.equinox.internal.ds.Resolver.getEligible(Resolver.java:343)
> at
> org.eclipse.equinox.internal.ds.SCRManager.serviceChanged(SCRManager.java:222)
> at
> or

Re: [Dev] Latency Calculation Feature in WSO2 GW

2015-12-16 Thread Viraj Senevirathne
Hi All,

In ESB 4.10.0 we are introducing a new statistics feature which lets users
drill down into service-level statistics.

So for higher-level statistics we can include:

   - Avg, Min, Max mediation times for each service
   - Statistics for each endpoint
   - Allowing users to enable and disable statistics for each component
   - Faults encountered during mediation for each service

There are also some extra parameters that exist in the current transport
latency parameters. I think it would be better to incorporate the following
parameters too (a minimal stats-holder sketch follows the lists below).

*Parameters*

   - Messages received
   - Requests received
   - Responses sent
   - Faults in receiving
   - Faults in sending
   - Min, Max, Avg message size sent
   - Min, Max, Avg message size received
   - Bytes sent
   - Bytes received
   - Timeouts in receiving
   - Timeouts in sending
   - Active thread count
   - Last reset time
   - Statistics views for daily, hourly, by minutes (this may be optional)


*Operations*

   - Reset Statistics
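
To make this concrete, a minimal per-service stats holder covering the
min/max/avg times and the Reset operation listed above might look like the
following (purely illustrative; this is not actual ESB or GW code):

public class MediationStats {

    private long min = Long.MAX_VALUE;
    private long max = Long.MIN_VALUE;
    private long total;
    private long count;

    // Called once per mediated message with its mediation time
    public synchronized void record(long mediationTimeMillis) {
        min = Math.min(min, mediationTimeMillis);
        max = Math.max(max, mediationTimeMillis);
        total += mediationTimeMillis;
        count++;
    }

    public synchronized long getMin() {
        return count == 0 ? 0 : min;
    }

    public synchronized long getMax() {
        return count == 0 ? 0 : max;
    }

    public synchronized double getAverage() {
        return count == 0 ? 0.0 : (double) total / count;
    }

    // The "Reset Statistics" operation
    public synchronized void reset() {
        min = Long.MAX_VALUE;
        max = Long.MIN_VALUE;
        total = 0;
        count = 0;
    }
}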


Thank You,

On Thu, Dec 17, 2015 at 10:20 AM, Kasun Indrasiri  wrote:

> We may also need some high-level stats too, for instance the things we
> have included in ESB 4.10.
>
> On Thu, Dec 17, 2015 at 10:17 AM, Nadeeshaan Gunasinghe <
> nadeesh...@wso2.com> wrote:
>
>> Hi all,
>> It has been a requirement to implement a feature for keeping track of the
>> various types of latency metrics in WSO2 GW. [...]
>
>
>
> --
> Kasun Indrasiri
> Software Architect
> WSO2, Inc.; http://wso2.com
> lean.enterprise.middleware
>
> cell: +94 77 556 5206
> Blog : http://kasunpanorama.blogspot.com/
>



-- 
Viraj Senevirathne
Software Engineer; WSO2, Inc.

Mobile : +94 71 958 0269
Email : vir...@wso2.com
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] Best way to access registry data in java

2015-12-16 Thread Ishara Cooray
Hi,

Getting the super tenant's registry as below and using the relative
resource path worked for me.

Registry registry = registryService.getGovernanceSystemRegistry();

path = /apimgt/applicationdata/tenant_tier_policies.xml
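
Put together, the working approach looks roughly like this (an illustrative
sketch; it assumes the RegistryService OSGi service has already been
obtained, and error handling is elided):

import org.wso2.carbon.registry.core.Registry;
import org.wso2.carbon.registry.core.Resource;
import org.wso2.carbon.registry.core.exceptions.RegistryException;
import org.wso2.carbon.registry.core.service.RegistryService;

public class TierPolicyReader {

    // Reads the tier policy file from the super tenant's governance registry
    public Resource readTierPolicies(RegistryService registryService)
            throws RegistryException {
        Registry registry = registryService.getGovernanceSystemRegistry();
        // Path is relative to the governance space; no "gov:/" prefix needed
        String path = "/apimgt/applicationdata/tenant_tier_policies.xml";
        if (registry.resourceExists(path)) {
            return registry.get(path);
        }
        return null;
    }
}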

Thanks for the support Danesh.

Ishara Cooray
Senior Software Engineer
Mobile : +9477 262 9512
WSO2, Inc. | http://wso2.com/
Lean . Enterprise . Middleware

On Wed, Dec 16, 2015 at 12:15 PM, Ishara Cooray  wrote:

> I have a .xml policy file stored in the super tenant's registry. I need to
> read the file only once, at bundle activation.
>
> I was trying the following approaches, but none of them gives me the
> resource I am looking for.
>
> 1.
>
> RegistryService registryService = ThrottleDataHolder.getRegistryService();
> Registry registry = null;
> try {
>     registry = registryService.getGovernanceSystemRegistry(Constants.SUPER_TENANT_ID);
> } catch (RegistryException e) {
>     log.error("Error while fetching Governance Registry of Super Tenant");
> }
>
> String path = "gov:/apimgt/applicationdata/tenant_tier_policies.xml";
>
> try {
>     if (registry.resourceExists(path)) {
>         return registry.get(path);
>     }
> } catch (RegistryException e) {
>     log.error("Error while fetching the resource " + path, e);
> }
>
> In this case resourceExists returns false.
>
> 2.
>
> Registry governanceRegistry = getSuperTenantRegistry(); // as above
> PolicyManager policyManager = new PolicyManager(governanceRegistry);
> Policy policy = policyManager.getPolicy(Constants.POLICY_KEY);
>
> In this case policy is null.
>
> 3.
>
> String url = "https://192.168.123.100:9443/registry";
> String username = Constants.SUPER_USER_NAME;
> String password = Constants.SUPER_USER_PW;
> System.setProperty(Constants.CARBON_REPO_WRITE_MODE, "true");
> Registry rootRegistry = null;
> try {
>     rootRegistry = new RemoteRegistry(new URL(url), username, password);
> } catch (RegistryException e) {
>     e.printStackTrace();
> } catch (MalformedURLException e) {
>     e.printStackTrace();
> }
> String policyContent = null;
> OMElement policyElement = null;
> try {
>     PolicyManager policyManager = new PolicyManager(rootRegistry);
>     Policy policy = policyManager.getPolicy(Constants.POLICY_KEY);
>     // get the OM from the policy.
>     policyContent = policy.getPolicyContent();
>
> In this case it hangs at policyManager.getPolicy(Constants.POLICY_KEY);
>
> I am wondering what the proper way of doing this is.
> Any help would be appreciated.
>
> Thanks.
> Ishara Cooray
>
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] [DEV][DAS][DSS] Error after installing DSS features on DAS

2015-12-16 Thread Niranda Perera
Hi Chanuka,

can you start the server with the OSGi console and see if
the io.dropwizard.metrics:metrics-json bundle is available and properly
wired?
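
For example, with the Equinox OSGi console (both are standard Equinox
console commands; the bundle id will differ):

osgi> ss dropwizard      <- lists matching bundles and their states
osgi> diag <bundle-id>   <- shows unresolved constraints for that bundle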

cheers

On Thu, Dec 17, 2015 at 10:44 AM, Chanuka Dissanayake 
wrote:

>
> Hi
>
> I'm getting the following Spark-related error after installing DSS (version
> 4.3.4) features on DAS 3.0.0, and cannot execute Spark queries.
>
>
> [2015-12-17 10:27:42,811] ERROR
>> {org.wso2.carbon.analytics.spark.core.internal.AnalyticsComponent} -  Error
>> initializing analytics executor: com/codahale/metrics/json/MetricsModule
>> java.lang.NoClassDefFoundError: com/codahale/metrics/json/MetricsModule
>> [...]

Re: [Dev] String plus compiles to append

2015-12-16 Thread Rasika Perera
Hi All,

Please find comments inline.

> Yes, this has been the case for some time, where internally append is used.
> Try the same with a loop.

+1, I have tried this. Even though the compiler is optimized to use a
StringBuilder for multiple "+" concatenations, it is not too *smart* about
reusing the same StringBuilder instance inside a loop.

I tried Test.java with a string concatenation inside a *loop*.

*Test.java*
class Test {
public static void main(String[] args) {
String content = "";
for (int i = 0; i < 10; i++) {
content = content + String.valueOf(i);
}
System.out.println(content);
}
}

*Constant pool:*
const #3 = class #24; //  java/lang/StringBuilder

*Main Method:*
public static void main(java.lang.String[]);
  Code:
   0: ldc #2; //String
   2: astore_1
   3: iconst_0
   4: istore_2
   5: iload_2
   6: bipush 10
   8: if_icmpge 39
   11: new #3; //class java/lang/StringBuilder <--- *a new StringBuilder
is created on each iteration*
   14: dup
   15: invokespecial #4; //Method java/lang/StringBuilder."":()V
   18: aload_1
   19: invokevirtual #5; //Method
java/lang/StringBuilder.append:(Ljava/lang/String;)Ljava/lang/StringBuilder;
   22: iload_2
   23: invokestatic #6; //Method
java/lang/String.valueOf:(I)Ljava/lang/String;
   26: invokevirtual #5; //Method
java/lang/StringBuilder.append:(Ljava/lang/String;)Ljava/lang/StringBuilder;
   29: invokevirtual #7; //Method
java/lang/StringBuilder.toString:()Ljava/lang/String;
   32: astore_1
   33: iinc 2, 1
   36: goto 5 <--- *Jump for the loop*
   39: getstatic #8; //Field java/lang/System.out:Ljava/io/PrintStream;
   42: aload_1
   43: invokevirtual #9; //Method
java/io/PrintStream.println:(Ljava/lang/String;)V
   46: return
}
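
For such loops, hoisting the builder out of the loop avoids the
per-iteration allocation visible in the bytecode above, e.g.:

class Test {
    public static void main(String[] args) {
        // One StringBuilder reused across all iterations
        StringBuilder content = new StringBuilder();
        for (int i = 0; i < 10; i++) {
            content.append(i);
        }
        System.out.println(content.toString());
    }
}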

Thanks,
Rasika

On Wed, Dec 16, 2015 at 6:19 AM, Afkham Azeez  wrote:

> Yes, this has been the case for sometime where internally append is used.
> Try the same with a loop.
>
> On Tue, Dec 15, 2015 at 11:43 PM, Manuranga Perera  wrote:
>
>> I have compiled the following class using javac 1.6.0_38
>>
>> class X{
>> public String m(String a, String b, String c){
>> return a + b + c;
>> }
>> }
>>
>>
>> and decompiled it using javap
>>
>>
>> class X {
>>   X();
>> Code:
>>0: aload_0
>>1: invokespecial #1  // Method
>> java/lang/Object."":()V
>>4: return
>>
>>   public java.lang.String m(java.lang.String, java.lang.String,
>> java.lang.String);
>> Code:
>>0: new   #2  // class
>> java/lang/StringBuilder
>>3: dup
>>4: invokespecial #3  // Method
>> java/lang/StringBuilder."":()V
>>7: aload_1
>>8: invokevirtual #4  // Method
>> java/lang/StringBuilder.append:(Ljava/lang/String;)Ljava/lang/StringBuilder;
>>   11: aload_2
>>   12: invokevirtual #4  // Method
>> java/lang/StringBuilder.append:(Ljava/lang/String;)Ljava/lang/StringBuilder;
>>   15: aload_3
>>   16: invokevirtual #4  // Method
>> java/lang/StringBuilder.append:(Ljava/lang/String;)Ljava/lang/StringBuilder;
>>   19: invokevirtual #5  // Method
>> java/lang/StringBuilder.toString:()Ljava/lang/String;
>>   22: areturn
>> }
>>
>> As you can see, there are three appends but only one StringBuilder
>> object. Therefore I propose using plus instead of append in our code.
>>
>>
>>
>> --
>> With regards,
>> *Manu*ranga Perera.
>>
>> phone : 071 7 70 20 50
>> mail : m...@wso2.com
>>
>
>
>
> --
> *Afkham Azeez*
> Director of Architecture; WSO2, Inc.; http://wso2.com
> Member; Apache Software Foundation; http://www.apache.org/
> *email: az...@wso2.com*
> *cell: +94 77 3320919*
> *blog: http://blog.afkham.org*
> *twitter: http://twitter.com/afkham_azeez*
> *linked-in: http://lk.linkedin.com/in/afkhamazeez*
>
> *Lean . Enterprise . Middleware*
>
> ___
> Dev mailing list
> Dev@wso2.org
> http://wso2.org/cgi-bin/mailman/listinfo/dev
>
>


-- 
With Regards,

*Rasika Perera*
Software Engineer
M: +94 71 680 9060 E: rasi...@wso2.com
LinkedIn: http://lk.linkedin.com/in/rasika90

WSO2 Inc. www.wso2.com
lean.enterprise.middleware
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] [DEV][DAS][DSS] Error after installing DSS features on DAS

2015-12-16 Thread Chanuka Dissanayake
Hi Niranda,

I have checked it [1].

[1] 85 ACTIVE  io.dropwizard.metrics.json_3.1.2

regards,
Chanuka.

On Thu, Dec 17, 2015 at 11:09 AM, Niranda Perera  wrote:

> Hi Chanuka,
>
> can you start the server with the OSGi console and see if
> the io.dropwizard.metrics:metrics-json bundle is available and properly
> wired?
>
> cheers
>
> On Thu, Dec 17, 2015 at 10:44 AM, Chanuka Dissanayake 
> wrote:
>
>>
>> Hi
>>
>> I'm getting the following Spark-related error after installing DSS (version
>> 4.3.4) features on DAS 3.0.0, and cannot execute Spark queries.
>>
>>
>> [2015-12-17 10:27:42,811] ERROR
>>> {org.wso2.carbon.analytics.spark.core.internal.AnalyticsComponent} -  Error
>>> initializing analytics executor: com/codahale/metrics/json/MetricsModule
>>> java.lang.NoClassDefFoundError: com/codahale/metrics/json/MetricsModule
>>> [...]

Re: [Dev] [AppFac][Docker]Best Java Docker Client Library

2015-12-16 Thread Samith Dassanayake
Hi Roshan,

Even if you use [1], you still have to deal with the dynamic Dockerfile
issue, right? (Correct me if I am wrong.)

[1] https://github.com/docker-java/docker-java

Regards,
Samith
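
For reference, building and pushing an image with docker-java (option (1)
in the list quoted below) looks roughly like this. It is a minimal sketch
assuming the 2.x-style API (method names may differ between releases), and
the daemon URL, build context path, and registry/tag are placeholders:

import java.io.File;

import com.github.dockerjava.api.DockerClient;
import com.github.dockerjava.core.DockerClientBuilder;
import com.github.dockerjava.core.command.BuildImageResultCallback;
import com.github.dockerjava.core.command.PushImageResultCallback;

public class DockerBuildPush {

    public static void main(String[] args) throws Exception {
        DockerClient docker =
                DockerClientBuilder.getInstance("unix:///var/run/docker.sock").build();

        // Build an image from a directory containing the (dynamically
        // generated) Dockerfile
        String imageId = docker.buildImageCmd(new File("/path/to/build/context"))
                .withTag("registry.example.com/myapp:1.0")
                .exec(new BuildImageResultCallback())
                .awaitImageId();
        System.out.println("Built image: " + imageId);

        // Push the tagged image to the private registry
        docker.pushImageCmd("registry.example.com/myapp")
                .withTag("1.0")
                .exec(new PushImageResultCallback())
                .awaitSuccess();

        docker.close();
    }
}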

On Wed, Dec 16, 2015 at 11:40 PM, Roshan Deniyage  wrote:

> Adding missing links:
>
> [1]
> https://wiki.jenkins-ci.org/display/JENKINS/CloudBees+Docker+Build+and+Publish+plugin
> [2] https://wiki.jenkins-ci.org/display/JENKINS/Docker+build+step+plugin
>
>
> Hi Samith,
>> Yes, we evaluated [1] and [2] and are not going to use those plugins,
>> since our Dockerfiles are going to be dynamic and must be available to the
>> Jenkins job before the build begins. That is possible in a number of ways,
>> but we decided to change only the deployment part of the
>> appfactory-jenkins plugin to build Docker-based artifacts.
>>
>> Thanks,
>> Roshan Deniyage
>> Associate Technical Lead
>> WSO2, Inc: http://wso2.com
>>
>> Mobile:  +94 777636406 / +1 408 667 6254
>> Twitter:  *https://twitter.com/roshku *
>> LinkedIn :  https://www.linkedin.com/in/roshandeniyage
>>
>>
>> On Wed, Dec 16, 2015 at 9:33 PM, Samith Dassanayake 
>> wrote:
>>
>>> Hi Roshan,
>>>
>>> Have you looked at [1]
>>>
>>>
>>> [1]
>>> https://wiki.jenkins-ci.org/display/JENKINS/CloudBees+Docker+Build+and+Publish+plugin
>>>
>>> Regards,
>>> Samith
>>>
>>> On Wed, Dec 16, 2015 at 4:58 PM, Roshan Deniyage 
>>> wrote:
>>>
 Hi All,
For the App Factory build artifact feature, we are going with the
stand-alone Jenkins server for the next release as well. This is the
existing method. The only change is that instead of building the user
artifact and pushing it to a git repository, we are going to build a Docker
image and push it to our private Docker registry.

For this we are thinking of calling the Docker REST API inside our
appfactory-jenkins-plugin (an existing custom plugin). So we need a Java
Docker client library, and I found 4 libraries, listed below.

 (1) https://github.com/docker-java/docker-java
  [based on jersey REST library and java 7]

 (2) https://github.com/spotify/docker-client
   [Simple java client, seems like a primitive library]

(3) https://github.com/shekhargulati/rx-docker-client
   [Asyn style library and use java 8 features]

 (4) https://github.com/jclouds/jclouds-labs/tree/master/docker
  [This is used by the jCloud library]

I am going to go ahead with (1) since it provides the required
functionality.

 If anyone has used any of those libraries or any other better library,
 please give your suggestions.

 Thanks,
 Roshan Deniyage
 Associate Technical Lead
 WSO2, Inc: http://wso2.com

 Mobile:  +94 777636406 / +1 408 667 6254
 Twitter:  *https://twitter.com/roshku *
 LinkedIn :  https://www.linkedin.com/in/roshandeniyage


 ___
 Dev mailing list
 Dev@wso2.org
 http://wso2.org/cgi-bin/mailman/listinfo/dev


>>>
>>>
>>> --
>>> Best Regards
>>>
>>> Samith Dassanayake
>>> Software Engineer | Cloud TG
>>> WSO2, Inc. | http://wso2.com
>>> lean. enterprise. middleware
>>>
>>> Mobile : +947 76207351
>>> Blog : buddycode.blogspot.com
>>>
>>
>>
>


-- 
Best Regards

Samith Dassanayake
Software Engineer | Cloud TG
WSO2, Inc. | http://wso2.com
lean. enterprise. middleware

Mobile : +947 76207351
Blog : buddycode.blogspot.com
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] [VOTE] Release WSO2 Carbon Kernel 5.0.0 RC1

2015-12-16 Thread Manuri Amaya Perera
Hi,

We are cancelling this vote due to the following issues.

In carbon-bundle-archetype, the resource template pom uses the
maven.bundleplugin.version property, which was previously defined in the
carbon-kernel pom but is not anymore; therefore the plugin version is not
getting resolved as expected.
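
That is, the generated pom ends up with a reference along these lines
(an illustrative snippet; the point is that the property is no longer
defined anywhere the generated project can see):

<plugin>
    <groupId>org.apache.felix</groupId>
    <artifactId>maven-bundle-plugin</artifactId>
    <version>${maven.bundleplugin.version}</version>
</plugin>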

In carbon-component-archetype, the resource template pom uses several
dependencies whose versions were being inferred from the dependency
management section of the carbon-kernel pom. That dependency management
section has now been moved from carbon-kernel to the carbon-kernel-parent
pom, so a project created using this archetype cannot resolve the relevant
version values, and the build of such a project fails.

Thank you.



On Wed, Dec 16, 2015 at 12:26 PM, Aruna Karunarathna  wrote:

> Hi Devs,
>
> This is the 1st Release Candidate of WSO2 Carbon Kernel 5.0.0.
>
> This release fixes the following issues:
> https://wso2.org/jira/issues/?filter=12581
>
> Please download and test your products with kernel 5.0.0 RC1
> and vote. Vote will be open for 72 hours or as longer as needed.
>
> *​Source and binary distribution files:*
>
> https://github.com/wso2/carbon-kernel/releases/download/v5.0.0-RC1/wso2carbon-kernel-5.0.0-rc1.zip
>
> *Maven staging repository:*
> http://maven.wso2.org/nexus/content/repositories/orgwso2carbon-177/
>
> *The tag to be voted upon:*
> https://github.com/wso2/carbon-kernel/releases/tag/v5.0.0-RC1
>
>
> [ ] Broken - do not release (explain why)
> [ ] Stable - go ahead and release
>
> Thank you,
> Carbon Team
> --
>
> *Aruna Sujith Karunarathna *| Software Engineer
> WSO2, Inc | lean. enterprise. middleware.
> #20, Palm Grove, Colombo 03, Sri Lanka
> Mobile: +94 71 9040362 | Work: +94 112145345
> Email: ar...@wso2.com | Web: www.wso2.com
>
> ___
> Dev mailing list
> Dev@wso2.org
> http://wso2.org/cgi-bin/mailman/listinfo/dev
>
>


-- 

*Manuri Amaya Perera*

*Software Engineer*

*WSO2 Inc.*

*Blog: http://manuriamayaperera.blogspot.com*
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


[Dev] [APIM] Sample API (WeatherAPI) no longer works

2015-12-16 Thread Ayoma Wijethunga
Hi All,

The API Manager "Sample API" no longer works. OpenWeatherMap requires
sending an API key since 9th October 2015 [1][2]. A rate-limited key is
available for free. Though an extended rate limit is available for FOSS
developers [1], this might not work for us, because we would have to
distribute the API key with APIM.

If this is not corrected, users will get the error below during
invocations, which can be frustrating for a new customer who is evaluating
API Manager.

{"cod":401,"message":"Invalid API key. Please see
> http://openweathermap.org/faq#error401 for more info."}
>
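
(For anyone testing by hand: per [2], the key is passed as the appid query
parameter, e.g.

curl "http://api.openweathermap.org/data/2.5/weather?q=London&appid=<your-api-key>"

where <your-api-key> is a placeholder for a real key.)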

Any idea if we continue using OpenWeatherMap or move to a different sample
implementation?

FYI : Current free plan rate limits are as follows :

Calls 10min: 600
> Calls 1day: 50,000
> Threshold: 7,200
> Hourly forecast: 5
> Daily forecast: 0
>

[1] http://openweathermap.org/faq#error401
[2] http://openweathermap.org/appid#get

Best Regards,
Ayoma Wijethunga
Software Engineer
WSO2, Inc.; http://wso2.com
lean.enterprise.middleware

Mobile : +94 (0) 719428123
Blog : http://www.ayomaonline.com
LinkedIn: https://www.linkedin.com/in/ayoma
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] [APIM] Sample API (WeatherAPI) no longer works

2015-12-16 Thread Lakshman Udayakantha
This has changed: API Manager now ships with an embedded Calculator API.
Check the latest API Manager 1.10.0-SNAPSHOT.

On Thu, Dec 17, 2015 at 12:47 PM, Ayoma Wijethunga  wrote:

> Hi All,
>
> The API Manager "Sample API" no longer works. [...]
>
>


-- 
Lakshman Udayakantha
WSO2 Inc. www.wso2.com
lean.enterprise.middleware
Mobile: *0714388124*
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] [APIM] Sample API (WeatherAPI) no longer works

2015-12-16 Thread Ayoma Wijethunga
Sounds great. Thanks, Lakshman, for the information. I just noticed that
this was discussed on the mailing list around October 30th. I will check
1.10.0.

Thanks again,
Ayoma.

On Thu, Dec 17, 2015 at 12:50 PM, Lakshman Udayakantha 
wrote:

> This has changed: API Manager now ships with an embedded Calculator API.
> Check the latest API Manager 1.10.0-SNAPSHOT.
>
> On Thu, Dec 17, 2015 at 12:47 PM, Ayoma Wijethunga  wrote:
>
>> Hi All,
>>
>> The API Manager "Sample API" no longer works. [...]
>
>
> --
> Lakshman Udayakantha
> WSO2 Inc. www.wso2.com
> lean.enterprise.middleware
> Mobile: *0714388124*
>
>


-- 
Ayoma Wijethunga
Software Engineer
WSO2, Inc.; http://wso2.com
lean.enterprise.middleware

Mobile : +94 (0) 719428123
Blog : http://www.ayomaonline.com
LinkedIn: https://www.linkedin.com/in/ayoma
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev