Re: [Dev] US Election 2016 Tweet Analyze System

2016-01-24 Thread Thusitha Kalugamage
Hi All,

As per our previous discussions, I came up with a quick design and worked on
the markup. :)
You can have a look at the static page here [1].

*Dinali / Yasara*: Please clone this repo [2]
and start working on this.
I will help you get the integration done.

*Yudhanjaya* / *Srinath*: Let me know if we can improve on this.

Please note that some of these images are copyrighted and must be revised
before publishing.


[1] http://thusithak.github.io/twitter-analytics/
[2] https://github.com/thusithak/twitter-analytics


Regards,

On Thu, Dec 3, 2015 at 11:05 AM, Dakshika Jayathilaka 
wrote:

> Sure, I'll join for brainstorming.
>
> Regards,
>
> *Dakshika Jayathilaka*
> PMC Member & Committer of Apache Stratos
> Senior Software Engineer
> WSO2, Inc.
> lean.enterprise.middleware
> 0771100911
>
> On Thu, Dec 3, 2015 at 11:02 AM, Srinath Perera  wrote:
>
>> Hi Dakshika,
>>
>> Yudhanjaya will create some mock views and meet you to brainstorm and get
>> feedback. Yudhanjaya, grab me or pull me into the same discussion via a phone call.
>>
>> Thanks
>> Srinath
>>
>>
>>
>> On Wed, Dec 2, 2015 at 12:54 PM, Dakshika Jayathilaka 
>> wrote:
>>
>>> Hi Srinath,
>>>
>>> It seems I missed this thread. Anyway, shall we meet to build a good
>>> story + design concept?
>>>
>>> Regards,
>>>
>>> *Dakshika Jayathilaka*
>>> PMC Member & Committer of Apache Stratos
>>> Senior Software Engineer
>>> WSO2, Inc.
>>> lean.enterprise.middleware
>>> 0771100911
>>>
>>> On Wed, Dec 2, 2015 at 11:49 AM, Yasara Dissanayake 
>>> wrote:
>>>
 Hi,
 Left Corner:
 The top 3 election candidates are displayed at the top in big letters, and
 the other candidates are displayed in the left sidebar. We can visit
 anyone's page; this is the Trump page.

 Middle:
 Community Graph:
 Trump's community graph is displayed in the middle. Each node represents a
 Twitter account, the color of a node indicates the candidate, the color
 shade indicates the number of tweets produced by that account, and the size
 of the node indicates the re-tweet count that account gets. Dinali is
 working on this.
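
 For reference, a minimal Java sketch of the per-node data this encoding
 assumes (the class and field names are illustrative only, not the actual
 implementation):

     // Illustrative only: one node of the community graph
     class CommunityGraphNode {
         String twitterAccountId;  // each node is a Twitter account
         String candidate;         // determines the node color
         int tweetCount;           // determines the color shade
         int retweetCount;         // determines the node size
     }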

 Re-tweet List:
 A list of top tweets is displayed below the community graph, ordered by a
 rank that is directly proportional to the re-tweet count and inversely
 proportional to the lifetime of the tweet; a rough sketch of this ranking
 is given below.
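
 A minimal sketch of how such a rank could be computed (illustrative only;
 the age measure and the exact weighting are assumptions, not the actual
 implementation):

     // Illustrative ranking: grows with re-tweets, shrinks with tweet age
     static double rank(int retweetCount, double ageInHours) {
         return retweetCount / (ageInHours + 1.0);  // +1 avoids division by zero
     }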

 Right Corner:

 The result of the positive/negative sentiment analysis is displayed below
 the photograph of the owner of the page. We intend to display the result
 using a graph. Yudhanjaya is working on this.

 Below that, the owner's (here, Trump's) unique hashtags, based on the
 popular tweets, are displayed. (Please note that these hashtags are updated
 over time.)

 A doughnut graph is used to display the current winning percentage of the
 candidate compared to the other candidates. Currently we have only a rough
 draft; this will be implemented after the machine learning part.



 @Nirmal, thank you. We are still integrating our parts, and the database is
 not completed yet. Thank you for the correction; it should be sentiment
 analysis. The hashtags are based on the popular tweets at that time, so they
 change with the list of tweets we select as most popular. Sentiment analysis
 is not integrated into the dashboard yet; the percentage for Hillary Clinton
 is a hard-coded value, and we will correct it and upload the revised version
 soon. Thank you for the comments.

 regards,
 Yasara

 On Wed, Dec 2, 2015 at 10:09 AM, Nirmal Fernando 
 wrote:

> Hi Yasara,
>
> Please explain the UI (as the UI is at very early stages, it's not
> easy to grasp stuff) :-)
>
> On Wed, Dec 2, 2015 at 9:47 AM, Yasara Dissanayake 
> wrote:
>
>> Hi,
>>
>> These are the snapshots of the final integration of the website.
>>
>> Please leave your comments.
>>
>> regards.
>>
>> On Tue, Dec 1, 2015 at 1:34 PM, Yudhanjaya Wijeratne <
>> yudhanj...@wso2.com> wrote:
>>
>>> +1 :)
>>>
>>> On Tue, Dec 1, 2015 at 1:10 PM, Srinath Perera 
>>> wrote:
>>>
 I might WFH. Shall we meet Thursday 11am?

 On Tue, Dec 1, 2015 at 12:20 PM, Yudhanjaya Wijeratne <
 yudhanj...@wso2.com> wrote:

> Hi Srinath,
>
> +1 to all. I think sentiment analysis will take the form of a x-y
> graph charting the ups and downs. Shall I come to Trace tomorrow 
> morning?
>
> Thanks,
> Yudha
>
> On Tue, Dec 1, 2015 at 11:21 AM, Srinath Perera 
> wrote:
>
>> Hi Yudhanjaya,
>>
>> Yasara and Dinali have the basics for twitter graph and most
>> important tweets in place. We need to design the story around this. 
>>>

Re: [Dev] [DEV][DAS]Null pointer exception while executing a schedule task in DAS HA cluster.

2016-01-24 Thread Roshan Wijesena
Hi  Niranda/DAS team,

I have updated the DAS server to 3.0.1. I am testing a minimum HA cluster
when one server is down. I am getting this exception periodically, and Spark
scripts are not running at all. It seems we can *not* survive when one
server is down?

TID: [-1234] [] [2016-01-24 21:10:10,015] ERROR
{org.wso2.carbon.analytics.spark.core.AnalyticsTask} -  Error while
executing the scheduled task for the script: is_log_analytics
{org.wso2.carbon.analytics.spark.core.AnalyticsTask}
org.wso2.carbon.analytics.spark.core.exception.AnalyticsExecutionException:
Spark SQL Context is not available. Check if the cluster has instantiated
properly.
at
org.wso2.carbon.analytics.spark.core.internal.SparkAnalyticsExecutor.executeQueryLocal(SparkAnalyticsExecutor.java:728)
at
org.wso2.carbon.analytics.spark.core.internal.SparkAnalyticsExecutor.executeQuery(SparkAnalyticsExecutor.java:709)
at
org.wso2.carbon.analytics.spark.core.CarbonAnalyticsProcessorService.executeQuery(CarbonAnalyticsProcessorService.java:201)
at
org.wso2.carbon.analytics.spark.core.CarbonAnalyticsProcessorService.executeScript(CarbonAnalyticsProcessorService.java:151)
at
org.wso2.carbon.analytics.spark.core.AnalyticsTask.execute(AnalyticsTask.java:59)
at
org.wso2.carbon.ntask.core.impl.TaskQuartzJobAdapter.execute(TaskQuartzJobAdapter.java:67)
at org.quartz.core.JobRunShell.run(JobRunShell.java:213)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)

-Roshan







On Fri, Dec 11, 2015 at 1:08 AM, Roshan Wijesena  wrote:

> Hi Niranda / Inosh,
>
> Thanks a lot for the quick call and reply. Yes issue seems to be fixed
> now. Did not appear for a while.
>
> -Roshan
>
> On Fri, Dec 11, 2015 at 12:47 AM, Niranda Perera  wrote:
>
>> Hi Roshan,
>>
>> This happens when you have a malformed HA cluster. When you set the
>> master count to 2, the Spark cluster does not get initiated until there
>> are 2 members in the analytics cluster. If the count is 2 and a task is
>> already scheduled, you may come across this issue until the 2nd node is
>> up and running. You should see that after some time the exception gets
>> resolved, and that is when the analytics cluster is in a workable state.
>>
>> But I agree, an NPE is not acceptable here, and this has already been
>> fixed in 3.0.1 [1]
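>>
>> A rough sketch of what such a guard could look like (an illustration only;
>> the field name is an assumption, not the actual DAS code):
>>
>>     if (this.sqlCtx == null) {
>>         throw new AnalyticsExecutionException("Spark SQL Context is not available. " +
>>                 "Check if the cluster has instantiated properly.");
>>     }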
>>
>> as per the query modification, yes, the query gets modified to handle
>> multi tenancy in the spark runtime.
>>
>> hope this resolves your issues.
>>
>> rgds
>>
>> [1] https://wso2.org/jira/browse/DAS-329
>>
>> On Fri, Dec 11, 2015 at 11:40 AM, Roshan Wijesena 
>> wrote:
>>
>>>  I reproduced the error. If we set carbon.spark.master.count value to 2
>>> this error will occur. Any solution available in this case?
>>>
>>>
>>> On Thu, Dec 10, 2015 at 9:05 PM, Roshan Wijesena 
>>> wrote:
>>>
 After I enabled the debug, it looks like below

 [2015-12-10 22:03:00,001]  INFO
 {org.wso2.carbon.analytics.spark.core.AnalyticsTask} -  Executing the
 schedule task for: httpd_log_analytics for tenant id: -1234
 [2015-12-10 22:03:00,013] DEBUG
 {org.wso2.carbon.analytics.spark.core.internal.SparkAnalyticsExecutor} -
  Executing : CREATE TEMPORARY TABLE X1234_HttpLogTableUSING
 org.wso2.carbon.analytics.spark.core.sources.AnalyticsRelationProvider
OPTIONS (tableName "ORG_WSO2_SAMPLE_HTTPD_LOGS" , tenantId "-1234")
 [2015-12-10 22:03:00,013] ERROR
 {org.wso2.carbon.ntask.core.impl.TaskQuartzJobAdapter} -  Error in
 executing task: null
 java.lang.NullPointerException
 at
 org.wso2.carbon.analytics.spark.core.internal.SparkAnalyticsExecutor.executeQueryLocal(SparkAnalyticsExecutor.java:710)
 at
 org.wso2.carbon.analytics.spark.core.internal.SparkAnalyticsExecutor.executeQuery(SparkAnalyticsExecutor.java:692)
 at
 org.wso2.carbon.analytics.spark.core.CarbonAnalyticsProcessorService.executeQuery(CarbonAnalyticsProcessorService.java:199)
 at
 org.wso2.carbon.analytics.spark.core.CarbonAnalyticsProcessorService.executeScript(CarbonAnalyticsProcessorService.java:149)
 at
 org.wso2.carbon.analytics.spark.core.AnalyticsTask.execute(AnalyticsTask.java:57)
 at
 org.wso2.carbon.ntask.core.impl.TaskQuartzJobAdapter.execute(TaskQuartzJobAdapter.java:67)
 at org.quartz.core.JobRunShell.run(JobRunShell.java:213)
 at
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
 at java.util.concurrent.FutureTask.run(FutureTask.java:266)
 at
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
 at
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
 at java.lang.T

Re: [Dev] [DEV][DAS]Null pointer exception while executing a schedule task in DAS HA cluster.

2016-01-24 Thread Inosh Goonewardena
Hi Roshan,

How many analyzer nodes are there in the cluster? If the master count is
set to 1 and the master is down, the Spark cluster will not survive. If you
set the master count to 2, then when the master goes down the other node
becomes the master and the cluster survives. However, until the Spark
context is initialized properly on the other node (which will take roughly
5 - 30 secs) you will see the above error.
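
For reference, that setting is carbon.spark.master.count in each node's
repository/conf/analytics/spark/spark-defaults.conf, e.g.:

    carbon.spark.master.count  2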

On Sun, Jan 24, 2016 at 8:25 PM, Roshan Wijesena  wrote:

> Hi  Niranda/DAS team,
>
> I have updated  DAS server into 3.0.1. I am testing a minimum HA cluster
> when one server is in down situation. I am getting this exception
> periodically, and spark scripts are not running at all. it seems we can
> *not* survive when one server is in down situation?
>
> TID: [-1234] [] [2016-01-24 21:10:10,015] ERROR
> {org.wso2.carbon.analytics.spark.core.AnalyticsTask} -  Error while
> executing the scheduled task for the script: is_log_analytics
> {org.wso2.carbon.analytics.spark.core.AnalyticsTask}
> org.wso2.carbon.analytics.spark.core.exception.AnalyticsExecutionException:
> Spark SQL Context is not available. Check if the cluster has instantiated
> properly.
> at
> org.wso2.carbon.analytics.spark.core.internal.SparkAnalyticsExecutor.executeQueryLocal(SparkAnalyticsExecutor.java:728)
> at
> org.wso2.carbon.analytics.spark.core.internal.SparkAnalyticsExecutor.executeQuery(SparkAnalyticsExecutor.java:709)
> at
> org.wso2.carbon.analytics.spark.core.CarbonAnalyticsProcessorService.executeQuery(CarbonAnalyticsProcessorService.java:201)
> at
> org.wso2.carbon.analytics.spark.core.CarbonAnalyticsProcessorService.executeScript(CarbonAnalyticsProcessorService.java:151)
> at
> org.wso2.carbon.analytics.spark.core.AnalyticsTask.execute(AnalyticsTask.java:59)
> at
> org.wso2.carbon.ntask.core.impl.TaskQuartzJobAdapter.execute(TaskQuartzJobAdapter.java:67)
> at org.quartz.core.JobRunShell.run(JobRunShell.java:213)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
>
> -Roshan
>
>
>
>
>
>
>
> On Fri, Dec 11, 2015 at 1:08 AM, Roshan Wijesena  wrote:
>
>> Hi Niranda / Inosh,
>>
>> Thanks a lot for the quick call and reply. Yes issue seems to be fixed
>> now. Did not appear for a while.
>>
>> -Roshan
>>
>> On Fri, Dec 11, 2015 at 12:47 AM, Niranda Perera 
>> wrote:
>>
>>> Hi Roshan,
>>>
>>> This happens when you have a malformed HA cluster. When you put the
>>> master count as 2, the spark cluster would not get initiated until there
>>> are 2 members in the analytics cluster. when the count as 2 and there is a
>>> task scheduled already, you may come across this issue, until the 2nd node
>>> is up and running. You should see that after sometime, the exception gets
>>> resolved., and that is when the analytics cluster is at a workable state.
>>>
>>> But I agree, an NPE is not acceptable here and this has been already
>>> fixed in 3.0.1 [1]
>>>
>>> as per the query modification, yes, the query gets modified to handle
>>> multi tenancy in the spark runtime.
>>>
>>> hope this resolves your issues.
>>>
>>> rgds
>>>
>>> [1] https://wso2.org/jira/browse/DAS-329
>>>
>>> On Fri, Dec 11, 2015 at 11:40 AM, Roshan Wijesena 
>>> wrote:
>>>
  I reproduced the error. If we set carbon.spark.master.count value to 2
 this error will occur. Any solution available in this case?


 On Thu, Dec 10, 2015 at 9:05 PM, Roshan Wijesena 
 wrote:

> After I enabled the debug, it looks like below
>
> [2015-12-10 22:03:00,001]  INFO
> {org.wso2.carbon.analytics.spark.core.AnalyticsTask} -  Executing the
> schedule task for: httpd_log_analytics for tenant id: -1234
> [2015-12-10 22:03:00,013] DEBUG
> {org.wso2.carbon.analytics.spark.core.internal.SparkAnalyticsExecutor} -
>  Executing : CREATE TEMPORARY TABLE X1234_HttpLogTableUSING
> org.wso2.carbon.analytics.spark.core.sources.AnalyticsRelationProvider
>OPTIONS (tableName "ORG_WSO2_SAMPLE_HTTPD_LOGS" , tenantId "-1234")
> [2015-12-10 22:03:00,013] ERROR
> {org.wso2.carbon.ntask.core.impl.TaskQuartzJobAdapter} -  Error in
> executing task: null
> java.lang.NullPointerException
> at
> org.wso2.carbon.analytics.spark.core.internal.SparkAnalyticsExecutor.executeQueryLocal(SparkAnalyticsExecutor.java:710)
> at
> org.wso2.carbon.analytics.spark.core.internal.SparkAnalyticsExecutor.executeQuery(SparkAnalyticsExecutor.java:692)
> at
> org.wso2.carbon.analytics.spark.core.CarbonAnalyticsProcessorService.executeQuery(CarbonAnalyticsProcessorService.java:199)
> at
> org.wso2.carbon.analytics.spark.core.CarbonAnalyticsProcessorService.executeScript(CarbonAnalyticsProcessorService.java:149)
> at
> org.wso2.ca

Re: [Dev] Getting ML model (1.1.0) working with DAS (3.0.0)

2016-01-24 Thread Rukshani Weerasinha
Hi all,

I will update this page soon and inform you in this thread once it is done.

Thank You and Best Regards,
Rukshani.

On Sat, Jan 23, 2016 at 9:19 PM, Iranga Muthuthanthri 
wrote:

> + Rukshani
> On Jan 23, 2016 3:00 PM, "Niranda Perera"  wrote:
>
>> Shall we include this under another subtopic in the following page [1]?
>> then it will be a one stop page for all the -D options for DAS
>>
>> [1] https://docs.wso2.com/pages/viewpage.action?pageId=45952727
>>
>> On Fri, Jan 22, 2016 at 3:05 PM, Nirmal Fernando  wrote:
>>
>>> Hi Iranga,
>>>
>>>
>>> On Fri, Jan 22, 2016 at 3:00 PM, Iranga Muthuthanthri 
>>> wrote:
>>>
 Hi,

 In order to build an integrated analytics scenario (batch, realtime, and
 predictive) with DAS 3.0.0, I tried the following steps on a local server,
 by installing and using the CEP-ML extension in DAS. The steps followed are
 below.

 1.) Installed ML 1.1.0 features in DAS 3.0.0.
 2.) Created a model through ML 1.1.0.
 3.) Published the model to the DAS registry.
 4.) Used the ML extension through an execution plan.

 The following exception [1] is observed at server startup, possibly due to
 the issue of "supporting multiple Spark contexts in a JVM" [2]. The server
 can be started with the command wso2server.sh -DdisableAnalyticsSparkCtx=true.

>>>
>>> Instead of doing this, use -DdisableMLSparkCtx=true; this would disable
>>> the ML Spark context creation, but predictions should still work.
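>>>
>>> For example, assuming the same startup command form as above:
>>>
>>>     ./wso2server.sh -DdisableMLSparkCtx=true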
>>>
>>>
>>>

 However, this prevents using the batch analytics features in DAS.

 Is there a solution to get all three types of integrated analytics with DAS
 3.0.0? Is there a way to use the ML model that overcomes issue [2]?





 [1] [2016-01-22 12:09:58,193] ERROR
 {org.wso2.carbon.analytics.spark.core.internal.AnalyticsComponent} -  Error
 initializing analytics executor: Only one SparkContext may be running in
 this JVM (see SPARK-2243). To ignore this error, set
 spark.driver.allowMultipleContexts = true. The currently running
 SparkContext was created 


 [2]https://issues.apache.org/jira/browse/SPARK-2243

 --
 Thanks & Regards

 Iranga Muthuthanthri
 (M) -0777-255773
 Team Product Management


>>>
>>>
>>> --
>>>
>>> Thanks & regards,
>>> Nirmal
>>>
>>> Team Lead - WSO2 Machine Learner
>>> Associate Technical Lead - Data Technologies Team, WSO2 Inc.
>>> Mobile: +94715779733
>>> Blog: http://nirmalfdo.blogspot.com/
>>>
>>>
>>>
>>
>>
>> --
>> *Niranda Perera*
>> Software Engineer, WSO2 Inc.
>> Mobile: +94-71-554-8430
>> Twitter: @n1r44 
>> https://pythagoreanscript.wordpress.com/
>>
>


-- 
Rukshani Weerasinha

WSO2 Inc.
Web:http://wso2.com
Mobile: 0777 683 738


Re: [Dev] [DEV][DAS]Null pointer exception while executing a schedule task in DAS HA cluster.

2016-01-24 Thread Niranda Perera
Hi Roshan,

I agree with Inosh. It takes a few seconds for Spark to recover; until then
you might see this exception. Do you see this exception throughout?

best

On Mon, Jan 25, 2016 at 8:13 AM, Inosh Goonewardena  wrote:

> Hi Roshan,
>
> How many analyzer nodes are there in the cluster? If the master count is
> set to 1 and the master is down, spark cluster will not survive. It you set
> the master count to 2, then if the master is down other node become the
> master and it survive. However until spark context is initialize properly
> in the other node(will take roughly about 5 - 30secs) you will see above
> error.
>
> On Sun, Jan 24, 2016 at 8:25 PM, Roshan Wijesena  wrote:
>
>> Hi  Niranda/DAS team,
>>
>> I have updated  DAS server into 3.0.1. I am testing a minimum HA cluster
>> when one server is in down situation. I am getting this exception
>> periodically, and spark scripts are not running at all. it seems we can
>> *not* survive when one server is in down situation?
>>
>> TID: [-1234] [] [2016-01-24 21:10:10,015] ERROR
>> {org.wso2.carbon.analytics.spark.core.AnalyticsTask} -  Error while
>> executing the scheduled task for the script: is_log_analytics
>> {org.wso2.carbon.analytics.spark.core.AnalyticsTask}
>> org.wso2.carbon.analytics.spark.core.exception.AnalyticsExecutionException:
>> Spark SQL Context is not available. Check if the cluster has instantiated
>> properly.
>> at
>> org.wso2.carbon.analytics.spark.core.internal.SparkAnalyticsExecutor.executeQueryLocal(SparkAnalyticsExecutor.java:728)
>> at
>> org.wso2.carbon.analytics.spark.core.internal.SparkAnalyticsExecutor.executeQuery(SparkAnalyticsExecutor.java:709)
>> at
>> org.wso2.carbon.analytics.spark.core.CarbonAnalyticsProcessorService.executeQuery(CarbonAnalyticsProcessorService.java:201)
>> at
>> org.wso2.carbon.analytics.spark.core.CarbonAnalyticsProcessorService.executeScript(CarbonAnalyticsProcessorService.java:151)
>> at
>> org.wso2.carbon.analytics.spark.core.AnalyticsTask.execute(AnalyticsTask.java:59)
>> at
>> org.wso2.carbon.ntask.core.impl.TaskQuartzJobAdapter.execute(TaskQuartzJobAdapter.java:67)
>> at org.quartz.core.JobRunShell.run(JobRunShell.java:213)
>> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>> at
>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>> at
>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>> at java.lang.Thread.run(Thread.java:745)
>>
>> -Roshan
>>
>>
>>
>>
>>
>>
>>
>> On Fri, Dec 11, 2015 at 1:08 AM, Roshan Wijesena  wrote:
>>
>>> Hi Niranda / Inosh,
>>>
>>> Thanks a lot for the quick call and reply. Yes issue seems to be fixed
>>> now. Did not appear for a while.
>>>
>>> -Roshan
>>>
>>> On Fri, Dec 11, 2015 at 12:47 AM, Niranda Perera 
>>> wrote:
>>>
 Hi Roshan,

 This happens when you have a malformed HA cluster. When you put the
 master count as 2, the spark cluster would not get initiated until there
 are 2 members in the analytics cluster. when the count as 2 and there is a
 task scheduled already, you may come across this issue, until the 2nd node
 is up and running. You should see that after sometime, the exception gets
 resolved., and that is when the analytics cluster is at a workable state.

 But I agree, an NPE is not acceptable here and this has been already
 fixed in 3.0.1 [1]

 as per the query modification, yes, the query gets modified to handle
 multi tenancy in the spark runtime.

 hope this resolves your issues.

 rgds

 [1] https://wso2.org/jira/browse/DAS-329

 On Fri, Dec 11, 2015 at 11:40 AM, Roshan Wijesena 
 wrote:

>  I reproduced the error. If we set carbon.spark.master.count value to
> 2 this error will occur. Any solution available in this case?
>
>
> On Thu, Dec 10, 2015 at 9:05 PM, Roshan Wijesena 
> wrote:
>
>> After I enabled the debug, it looks like below
>>
>> [2015-12-10 22:03:00,001]  INFO
>> {org.wso2.carbon.analytics.spark.core.AnalyticsTask} -  Executing the
>> schedule task for: httpd_log_analytics for tenant id: -1234
>> [2015-12-10 22:03:00,013] DEBUG
>> {org.wso2.carbon.analytics.spark.core.internal.SparkAnalyticsExecutor} -
>>  Executing : CREATE TEMPORARY TABLE X1234_HttpLogTableUSING
>> org.wso2.carbon.analytics.spark.core.sources.AnalyticsRelationProvider
>>OPTIONS (tableName "ORG_WSO2_SAMPLE_HTTPD_LOGS" , tenantId "-1234")
>> [2015-12-10 22:03:00,013] ERROR
>> {org.wso2.carbon.ntask.core.impl.TaskQuartzJobAdapter} -  Error in
>> executing task: null
>> java.lang.NullPointerException
>> at
>> org.wso2.carbon.analytics.spark.core.internal.SparkAnalyticsExecutor.executeQueryLocal(SparkAnalyticsExecutor.java:710)
>> at
>> org.wso2.carbon.analytics.spark.core.internal.SparkAnalyticsExecutor.execu

Re: [Dev] [DEV][DAS]Null pointer exception while executing a schedule task in DAS HA cluster.

2016-01-24 Thread Roshan Wijesena
Hi Guys,

I have set carbon.spark.master.count to 2 in both DAS servers'
repository/conf/analytics/spark/spark-defaults.conf, and this error is
printing periodically; by now it has been more than 10 minutes.

On Sun, Jan 24, 2016 at 9:02 PM, Niranda Perera  wrote:

> Hi Roshan,
>
> I agree with Inosh. It takes few seconds for spark to recover, until then
> you might see this exception. Do you see this exception through out?
>
> best
>
> On Mon, Jan 25, 2016 at 8:13 AM, Inosh Goonewardena 
> wrote:
>
>> Hi Roshan,
>>
>> How many analyzer nodes are there in the cluster? If the master count is
>> set to 1 and the master is down, spark cluster will not survive. It you set
>> the master count to 2, then if the master is down other node become the
>> master and it survive. However until spark context is initialize properly
>> in the other node(will take roughly about 5 - 30secs) you will see above
>> error.
>>
>> On Sun, Jan 24, 2016 at 8:25 PM, Roshan Wijesena  wrote:
>>
>>> Hi  Niranda/DAS team,
>>>
>>> I have updated  DAS server into 3.0.1. I am testing a minimum HA cluster
>>> when one server is in down situation. I am getting this exception
>>> periodically, and spark scripts are not running at all. it seems we can
>>> *not* survive when one server is in down situation?
>>>
>>> TID: [-1234] [] [2016-01-24 21:10:10,015] ERROR
>>> {org.wso2.carbon.analytics.spark.core.AnalyticsTask} -  Error while
>>> executing the scheduled task for the script: is_log_analytics
>>> {org.wso2.carbon.analytics.spark.core.AnalyticsTask}
>>> org.wso2.carbon.analytics.spark.core.exception.AnalyticsExecutionException:
>>> Spark SQL Context is not available. Check if the cluster has instantiated
>>> properly.
>>> at
>>> org.wso2.carbon.analytics.spark.core.internal.SparkAnalyticsExecutor.executeQueryLocal(SparkAnalyticsExecutor.java:728)
>>> at
>>> org.wso2.carbon.analytics.spark.core.internal.SparkAnalyticsExecutor.executeQuery(SparkAnalyticsExecutor.java:709)
>>> at
>>> org.wso2.carbon.analytics.spark.core.CarbonAnalyticsProcessorService.executeQuery(CarbonAnalyticsProcessorService.java:201)
>>> at
>>> org.wso2.carbon.analytics.spark.core.CarbonAnalyticsProcessorService.executeScript(CarbonAnalyticsProcessorService.java:151)
>>> at
>>> org.wso2.carbon.analytics.spark.core.AnalyticsTask.execute(AnalyticsTask.java:59)
>>> at
>>> org.wso2.carbon.ntask.core.impl.TaskQuartzJobAdapter.execute(TaskQuartzJobAdapter.java:67)
>>> at org.quartz.core.JobRunShell.run(JobRunShell.java:213)
>>> at
>>> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>>> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>>> at
>>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>>> at
>>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>>> at java.lang.Thread.run(Thread.java:745)
>>>
>>> -Roshan
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>> On Fri, Dec 11, 2015 at 1:08 AM, Roshan Wijesena 
>>> wrote:
>>>
 Hi Niranda / Inosh,

 Thanks a lot for the quick call and reply. Yes issue seems to be fixed
 now. Did not appear for a while.

 -Roshan

 On Fri, Dec 11, 2015 at 12:47 AM, Niranda Perera 
 wrote:

> Hi Roshan,
>
> This happens when you have a malformed HA cluster. When you put the
> master count as 2, the spark cluster would not get initiated until there
> are 2 members in the analytics cluster. when the count as 2 and there is a
> task scheduled already, you may come across this issue, until the 2nd node
> is up and running. You should see that after sometime, the exception gets
> resolved., and that is when the analytics cluster is at a workable state.
>
> But I agree, an NPE is not acceptable here and this has been already
> fixed in 3.0.1 [1]
>
> as per the query modification, yes, the query gets modified to handle
> multi tenancy in the spark runtime.
>
> hope this resolves your issues.
>
> rgds
>
> [1] https://wso2.org/jira/browse/DAS-329
>
> On Fri, Dec 11, 2015 at 11:40 AM, Roshan Wijesena 
> wrote:
>
>>  I reproduced the error. If we set carbon.spark.master.count value to
>> 2 this error will occur. Any solution available in this case?
>>
>>
>> On Thu, Dec 10, 2015 at 9:05 PM, Roshan Wijesena 
>> wrote:
>>
>>> After I enabled the debug, it looks like below
>>>
>>> [2015-12-10 22:03:00,001]  INFO
>>> {org.wso2.carbon.analytics.spark.core.AnalyticsTask} -  Executing the
>>> schedule task for: httpd_log_analytics for tenant id: -1234
>>> [2015-12-10 22:03:00,013] DEBUG
>>> {org.wso2.carbon.analytics.spark.core.internal.SparkAnalyticsExecutor} -
>>>  Executing : CREATE TEMPORARY TABLE X1234_HttpLogTableUSING
>>> org.wso2.carbon.analytics.spark.core.sources.AnalyticsRelationProvider
>>>OPTIONS (tableName "ORG_WSO2_SAMPLE_HTTPD_LOGS" , tenantId "-1234")
>>> [2015-12-10 2

Re: [Dev] [DEV][DAS]Null pointer exception while executing a schedule task in DAS HA cluster.

2016-01-24 Thread Maninda Edirisooriya
Hi Roshan,

If you need to test only receiver HA with the CEP HA configuration, you need
to run the receiver nodes with the receiver profile
(i.e. start DAS with *./wso2server.sh -receiverNode*). This will avoid
running Spark scripts on the receiver nodes.
Also make sure the Axis2 clustering is set up such that all the DAS nodes
are clustered into a single cluster.
(Test by shutting down each node independently and checking that the other
nodes print logs saying a member has left the cluster.)
Thanks.



On Mon, Jan 25, 2016 at 7:55 AM, Roshan Wijesena  wrote:

> Hi  Niranda/DAS team,
>
> I have updated  DAS server into 3.0.1. I am testing a minimum HA cluster
> when one server is in down situation. I am getting this exception
> periodically, and spark scripts are not running at all. it seems we can
> *not* survive when one server is in down situation?
>
> TID: [-1234] [] [2016-01-24 21:10:10,015] ERROR
> {org.wso2.carbon.analytics.spark.core.AnalyticsTask} -  Error while
> executing the scheduled task for the script: is_log_analytics
> {org.wso2.carbon.analytics.spark.core.AnalyticsTask}
> org.wso2.carbon.analytics.spark.core.exception.AnalyticsExecutionException:
> Spark SQL Context is not available. Check if the cluster has instantiated
> properly.
> at
> org.wso2.carbon.analytics.spark.core.internal.SparkAnalyticsExecutor.executeQueryLocal(SparkAnalyticsExecutor.java:728)
> at
> org.wso2.carbon.analytics.spark.core.internal.SparkAnalyticsExecutor.executeQuery(SparkAnalyticsExecutor.java:709)
> at
> org.wso2.carbon.analytics.spark.core.CarbonAnalyticsProcessorService.executeQuery(CarbonAnalyticsProcessorService.java:201)
> at
> org.wso2.carbon.analytics.spark.core.CarbonAnalyticsProcessorService.executeScript(CarbonAnalyticsProcessorService.java:151)
> at
> org.wso2.carbon.analytics.spark.core.AnalyticsTask.execute(AnalyticsTask.java:59)
> at
> org.wso2.carbon.ntask.core.impl.TaskQuartzJobAdapter.execute(TaskQuartzJobAdapter.java:67)
> at org.quartz.core.JobRunShell.run(JobRunShell.java:213)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
>
> -Roshan
>
>
>
>
>
>
>
> On Fri, Dec 11, 2015 at 1:08 AM, Roshan Wijesena  wrote:
>
>> Hi Niranda / Inosh,
>>
>> Thanks a lot for the quick call and reply. Yes issue seems to be fixed
>> now. Did not appear for a while.
>>
>> -Roshan
>>
>> On Fri, Dec 11, 2015 at 12:47 AM, Niranda Perera 
>> wrote:
>>
>>> Hi Roshan,
>>>
>>> This happens when you have a malformed HA cluster. When you put the
>>> master count as 2, the spark cluster would not get initiated until there
>>> are 2 members in the analytics cluster. when the count as 2 and there is a
>>> task scheduled already, you may come across this issue, until the 2nd node
>>> is up and running. You should see that after sometime, the exception gets
>>> resolved., and that is when the analytics cluster is at a workable state.
>>>
>>> But I agree, an NPE is not acceptable here and this has been already
>>> fixed in 3.0.1 [1]
>>>
>>> as per the query modification, yes, the query gets modified to handle
>>> multi tenancy in the spark runtime.
>>>
>>> hope this resolves your issues.
>>>
>>> rgds
>>>
>>> [1] https://wso2.org/jira/browse/DAS-329
>>>
>>> On Fri, Dec 11, 2015 at 11:40 AM, Roshan Wijesena 
>>> wrote:
>>>
  I reproduced the error. If we set carbon.spark.master.count value to 2
 this error will occur. Any solution available in this case?


 On Thu, Dec 10, 2015 at 9:05 PM, Roshan Wijesena 
 wrote:

> After I enabled the debug, it looks like below
>
> [2015-12-10 22:03:00,001]  INFO
> {org.wso2.carbon.analytics.spark.core.AnalyticsTask} -  Executing the
> schedule task for: httpd_log_analytics for tenant id: -1234
> [2015-12-10 22:03:00,013] DEBUG
> {org.wso2.carbon.analytics.spark.core.internal.SparkAnalyticsExecutor} -
>  Executing : CREATE TEMPORARY TABLE X1234_HttpLogTableUSING
> org.wso2.carbon.analytics.spark.core.sources.AnalyticsRelationProvider
>OPTIONS (tableName "ORG_WSO2_SAMPLE_HTTPD_LOGS" , tenantId "-1234")
> [2015-12-10 22:03:00,013] ERROR
> {org.wso2.carbon.ntask.core.impl.TaskQuartzJobAdapter} -  Error in
> executing task: null
> java.lang.NullPointerException
> at
> org.wso2.carbon.analytics.spark.core.internal.SparkAnalyticsExecutor.executeQueryLocal(SparkAnalyticsExecutor.java:710)
> at
> org.wso2.carbon.analytics.spark.core.internal.SparkAnalyticsExecutor.executeQuery(SparkAnalyticsExecutor.java:692)
> at
> org.wso2.carbon.analytics.spark.core.CarbonAnalyticsProcessorService.executeQuery(CarbonAnalyticsProcessorService.java:199)
> at
> org.wso2.carbon.analytics.spark.cor

Re: [Dev] IOS REST API

2016-01-24 Thread Inosh Perera
[adding dev]

Hi Sashika,

1) *Remove application* (
https://docs.wso2.com/display/EMM200/Removing+an+Application+on+iOS+Devices+via+the+REST+API
)
This gives 201 created message, but in the message log, status appeared as
'Error'

2) *Install Enterprise applications *(
https://docs.wso2.com/display/EMM200/Installing+an+Enterprise+Application+on+iOS+Devices+via+the+REST+API
)
This gives 201 created message, but in the message log, status appeared as
'Error'

This is correct: as a new operation is added, EMM provides the response 201,
since the operation execution can take any amount of time. The error appears
here probably due to providing a wrong "manifestURL", which is the path to
the app's manifest file.

3) *iTune store app* (
https://docs.wso2.com/display/EMM200/Installing+an+iTunes+Store+Application+via+the+REST+API
)
Scope validation failed message appeared
This can happen if the access token was not generated with a scope.

Most of the REST API calls gave a 201 Created response, but those are still
in an in-progress or pending status.
This is the expected behavior as explained; operations that are added may
not be applied immediately for many reasons (the device can be offline),
hence they are only created.

I will have a look at the rest as well.

Regards,
Inosh





On Fri, Jan 22, 2016 at 5:43 PM, Sashika Wijesinghe 
wrote:

> Hi Inosh,
>
> 1) *Remove application* (
> https://docs.wso2.com/display/EMM200/Removing+an+Application+on+iOS+Devices+via+the+REST+API
> )
> This gives 201 created message, but in the message log, status appeared as
> 'Error'
>
> 2) *Install Enterprise applications *(
> https://docs.wso2.com/display/EMM200/Installing+an+Enterprise+Application+on+iOS+Devices+via+the+REST+API
> )
> This gives 201 created message, but in the message log, status appeared as
> 'Error'
>
> 3) *iTune store app* (
> https://docs.wso2.com/display/EMM200/Installing+an+iTunes+Store+Application+via+the+REST+API
> )
> Scope validation failed message appeared
>
> 4) *WI-FI* (
> https://docs.wso2.com/display/EMM200/Adding+Wi-Fi+Operations+on+iOS+Devices+via+the+REST+API
> )
>
> Below error message appeared.
>
> sashika@sashika:~/Apps/Applications/EMM/GA/IOS/bin$ curl -X POST -H
> "Content-Type: application/json" -H "Authorization: Bearer
> 266a9e3f00d19c2e9323c67f4aa507ca" -d @'info.json' -k -v
> https://10.10.10.224:9443/ios/operation/wifi
> *   Trying 10.10.10.224...
> * Connected to 10.10.10.224 (10.10.10.224) port 9443 (#0)
> * found 187 certificates in /etc/ssl/certs/ca-certificates.crt
> * found 758 certificates in /etc/ssl/certs
> * ALPN, offering http/1.1
> * SSL connection using TLS1.2 / ECDHE_RSA_AES_128_CBC_SHA1
> * server certificate verification SKIPPED
> * server certificate status verification SKIPPED
> * common name: 10.10.10.224 (matched)
> * server certificate expiration date OK
> * server certificate activation date OK
> * certificate public key: RSA
> * certificate version: #1
> * subject: C=SL,ST=western,L=Colombo,O=wso22,OU=QA2,CN=10.10.10.224,EMAIL=
> sashi...@wso2.com
> * start date: Mon, 04 Jan 2016 13:58:33 GMT
> * expire date: Wed, 03 Jan 2018 13:58:33 GMT
> * issuer: C=SL,ST=western,L=Colombo,O=wso2,OU=QA,CN=localhost,EMAIL=
> sash...@wso2.com
> * compression: NULL
> * ALPN, server did not agree to a protocol
> > POST /ios/operation/wifi HTTP/1.1
> > Host: 10.10.10.224:9443
> > User-Agent: curl/7.43.0
> > Accept: */*
> > Content-Type: application/json
> > Authorization: Bearer 266a9e3f00d19c2e9323c67f4aa507ca
> > Content-Length: 1367
> > Expect: 100-continue
> >
> < HTTP/1.1 100 Continue
> * We are completely uploaded and fine
> < HTTP/1.1 500 Internal Server Error
> < Content-Type: text/html;charset=utf-8
> < Content-Language: en
> < Transfer-Encoding: chunked
> < Vary: Accept-Encoding
> < Date: Fri, 22 Jan 2016 10:00:46 GMT
> < Connection: close
> < Server: WSO2 Carbon Server
> <
> Apache Tomcat/7.0.59 - Error
> report
> HTTP Status 500 - org.apache.cxf.interceptor.Fault:
> java.lang.IllegalStateException: Expected BEGIN_ARRAY but was STRING at
> line 1 column 623 path
> $.operation.clientConfiguration.TLSTrustedServerNames noshade="noshade">type Exception reportmessage
> org.apache.cxf.interceptor.Fault: java.lang.IllegalStateException:
> Expected BEGIN_ARRAY but was STRING at line 1 column 623 path
> $.operation.clientConfiguration.TLSTrustedServerNamesdescription
> The server encountered an internal 

Re: [Dev] Error while building carbon-feature-repository (branch: release-4.2.x)

2016-01-24 Thread Sinthuja Ragendran
Hi Tanya,

Please check on this.

Thanks,
Sinthuja.

On Mon, Jan 18, 2016 at 5:09 PM, Lahiru Cooray  wrote:

> adding Manu and Sinthuja
>
> On Mon, Jan 18, 2016 at 5:07 PM, Lahiru Cooray  wrote:
>
>> Hi Dakshika,
>> Currently we are using Jaggery Server Feature 0.10.1 and this particular
>> dependency is not included in released P2 repo.
>> (https://github.com/wso2/carbon-feature-repository > release-4.2.x
>> branch)
>>
>> Could you please update the repo with relevant dependencies so we can
>> proceed further.
>>
>> Thank you.
>>
>>
>> On Sat, Jan 9, 2016 at 4:02 PM, Lahiru Cooray  wrote:
>>
>>> Hi Chamila,
>>> Thank you. Now its working.
>>>
>>> On Fri, Jan 8, 2016 at 6:09 PM, Chamila Adhikarinayake <
>>> chami...@wso2.com> wrote:
>>>
 Hi Lahiru,
 Released the missing repos. You can try building it now.

 Chamila

 On Wed, Jan 6, 2016 at 12:32 PM, Lahiru Cooray 
 wrote:

> Hi APIM team,
> Could you please update the repo with missing versions.
>
> Thank you..
>
> On Tue, Jan 5, 2016 at 7:06 PM, Lahiru Cooray 
> wrote:
>
>> Hi,
>>
>> I get the below error while building the carbon-feature-repository (
>> https://github.com/wso2/carbon-feature-repository > release-4.2.x
>> branch)
>>
>> And the required version 
>> (org.wso2.am:org.wso2.apimgt.gateway-manager.nested.category.feature:zip:1.9.1)
>> is not available in the
>> http://maven.wso2.org/nexus/content/groups/wso2-public/org/wso2/am/org.wso2.apimgt.gateway-manager.nested.category.feature/
>>
>>
>>
>> [ERROR] Error occured when processing the Feature Artifact:
>> org.wso2.am:
>> org.wso2.apimgt.gateway-manager.nested.category.feature:1.9.1
>>
>> org.apache.maven.plugin.MojoExecutionException: Error occured when
>> processing the Feature Artifact: org.wso2.am:
>> org.wso2.apimgt.gateway-manager.nested.category.feature:1.9.1
>>
>> at
>> org.wso2.maven.p2.RepositoryGenMojo.getProcessedFeatureArtifacts(RepositoryGenMojo.java:322)
>>
>> at
>> org.wso2.maven.p2.RepositoryGenMojo.createRepo(RepositoryGenMojo.java:197)
>>
>> at
>> org.wso2.maven.p2.RepositoryGenMojo.execute(RepositoryGenMojo.java:191)
>>
>> at
>> org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:101)
>>
>> at
>> org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:209)
>>
>> at
>> org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153)
>>
>> at
>> org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145)
>>
>> at
>> org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:84)
>>
>> at
>> org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:59)
>>
>> at
>> org.apache.maven.lifecycle.internal.LifecycleStarter.singleThreadedBuild(LifecycleStarter.java:183)
>>
>> at
>> org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:161)
>>
>> at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:320)
>>
>> at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:156)
>>
>> at org.apache.maven.cli.MavenCli.execute(MavenCli.java:537)
>>
>> at org.apache.maven.cli.MavenCli.doMain(MavenCli.java:196)
>>
>> at org.apache.maven.cli.MavenCli.main(MavenCli.java:141)
>>
>> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>
>> at
>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>>
>> at
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>
>> at java.lang.reflect.Method.invoke(Method.java:606)
>>
>> at
>> org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced(Launcher.java:290)
>>
>> at
>> org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.java:230)
>>
>> at
>> org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode(Launcher.java:409)
>>
>> at
>> org.codehaus.plexus.classworlds.launcher.Launcher.main(Launcher.java:352)
>>
>> Caused by: org.apache.maven.plugin.MojoExecutionException: ERROR
>>
>> at
>> org.wso2.maven.p2.generate.utils.MavenUtils.getResolvedArtifact(MavenUtils.java:43)
>>
>> at
>> org.wso2.maven.p2.RepositoryGenMojo.getProcessedFeatureArtifacts(RepositoryGenMojo.java:319)
>>
>> ... 23 more
>>
>> Caused by:
>> org.apache.maven.artifact.resolver.ArtifactNotFoundException: Failure to
>> find 
>> org.wso2.am:org.wso2.apimgt.gateway-manager.nested.category.feature:zip:1.9.1
>> in http://maven.wso2.org/nexus/content/repositories/releases/ was
>> cached in the local repository, resolution will 

[Dev] Unknown attachment in ESB documentation 4.9

2016-01-24 Thread Indrajith Udayakumara
To the documentation team,
At the following URL, I have found an unknown attachment under "Testing the
Proxy Service".
https://docs.wso2.com/display/ESB490/Configure+with+IBM+WebSphere+MQ

Thank you.


[Dev] Related to Configuring ESB with IBM Websphere MQ

2016-01-24 Thread Indrajith Udayakumara
I have come across the following problem: the ESB management console
displays MyJMSProxy as a faulty service. I haven't used either a username or
a password when creating the ESBQManager in WebSphere MQ.

Is that possible?
If so,
how would I configure the following username and password parameters in the
transport-sender and transport-receiver given in the documentation?
nandika
password

Can I simply erase the username and password?


Re: [Dev] [DEV][AS] Ideas and ways to integrate with Hot Swap Agent

2016-01-24 Thread Thusitha Thilina Dayaratne
Hi Clovis,

I've checked the error that you mentioned.
According to the HotswapAgent logic, the
*org.hotswap.agent.plugin.tomcat.TomcatPlugin.init()* method gets called
during the WebappLoader.start lifecycle event. But at the time the STARTING
event gets fired, we haven't yet set the WebappClassloadingContext on the
WebappClassloader. Therefore, when it tries to do some checks in order to
load the resources, it throws the NPE.

*org.wso2.carbon.webapp.mgt.loader.CarbonWebappLoader*

@Override
protected void startInternal() throws LifecycleException {

    ...

    super.startInternal();  // <-- At this point the STARTING event gets fired,
                            // so TomcatPlugin.init() is called and it tries
                            // to get the resources. That calls getResources(),
                            // and since we still haven't set the
                            // webappClassloadingContext, it throws an NPE.

    // Adding the WebappClassloadingContext to the WebappClassloader
    ((CarbonWebappClassLoader) getClassLoader())
            .setWebappCC(webappClassloadingContext);  // This is the point
                                                      // where we set the
                                                      // webappClassloadingContext

Thanks
Thusitha

On Fri, Jan 22, 2016 at 4:54 PM, Clovis Wichoski 
wrote:

> Hi Kishanthan,
>
> Redeploying all of a small system is OK, but for a big one it is a
> problem: for a change in just one class you must redeploy, lose sessions,
> and restart the system from scratch to reach the point of testing again.
> With the ideas used by HotSwapAgent, only the one class that changed is
> reloaded, which speeds up development time. These ideas are well defended
> by the JRebel tool, and HotSwapAgent tends to be the choice as an open
> source solution. Please note that this feature is only needed for the
> development phase; in production, the scheduled task is OK.
>
> In the meantime I tested with AS 5.3.0, JRebel and HotSwapAgent. The JRebel
> trial worked as expected, but HotSwapAgent did not (see the exception below
> [1]). I will check these problems with the HotSwapAgent devs and try to
> discover why it does not work with WSO2 AS: with HotSwapAgent active, none
> of the webapps load and all get that exception; as we see an NPE in the
> class CarbonWebappClassLoader, maybe it is the way they initialize the
> dependent class loaders.
>
> [1] Exception when HotSwapAgent is active:
> [2016-01-22 09:14:18,467] ERROR {org.apache.catalina.core.ContainerBase}
> -  ContainerBase.addChild: start:
> org.apache.catalina.LifecycleException: Failed to start component
> [StandardEngine[Catalina].StandardHost[localhost].StandardContext[/example]]
> at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:154)
> at
> org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:901)
> at
> org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:877)
> at
> org.apache.catalina.core.StandardHost.addChild(StandardHost.java:649)
> at
> org.wso2.carbon.tomcat.internal.CarbonTomcat.addWebApp(CarbonTomcat.java:344)
> at
> org.wso2.carbon.tomcat.internal.CarbonTomcat.addWebApp(CarbonTomcat.java:252)
> at
> org.wso2.carbon.webapp.mgt.TomcatGenericWebappsDeployer.handleWebappDeployment(TomcatGenericWebappsDeployer.java:314)
> at
> org.wso2.carbon.webapp.mgt.TomcatGenericWebappsDeployer.handleWarWebappDeployment(TomcatGenericWebappsDeployer.java:212)
> at
> org.wso2.carbon.webapp.mgt.TomcatGenericWebappsDeployer.handleHotDeployment(TomcatGenericWebappsDeployer.java:179)
> at
> org.wso2.carbon.webapp.mgt.TomcatGenericWebappsDeployer.deploy(TomcatGenericWebappsDeployer.java:144)
> at
> org.wso2.carbon.webapp.mgt.AbstractWebappDeployer.deployThisWebApp(AbstractWebappDeployer.java:224)
> at
> org.wso2.carbon.webapp.mgt.AbstractWebappDeployer.deploy(AbstractWebappDeployer.java:114)
> at
> org.wso2.carbon.webapp.deployer.WebappDeployer.deploy(WebappDeployer.java:42)
> at
> org.apache.axis2.deployment.repository.util.DeploymentFileData.deploy(DeploymentFileData.java:136)
> at
> org.apache.axis2.deployment.DeploymentEngine.doDeploy(DeploymentEngine.java:807)
> at
> org.apache.axis2.deployment.repository.util.WSInfoList.update(WSInfoList.java:144)
> at
> org.apache.axis2.deployment.RepositoryListener.update(RepositoryListener.java:377)
> at
> org.apache.axis2.deployment.RepositoryListener.checkServices(RepositoryListener.java:254)
> at
> org.apache.axis2.deployment.DeploymentEngine.loadServices(DeploymentEngine.java:135)
> at
> org.wso2.carbon.core.CarbonAxisConfigurator.deployServices(CarbonAxisConfigurator.java:567)
> at
> org.wso2.carbon.core.internal.DeploymentServerStartupObserver.completingServerStartup(DeploymentServerStartupObserver.java:51)
> at
> org.wso2.carbon.core.internal.CarbonCoreServiceComponent.notifyBefore(CarbonCoreServiceComponent.java:235)
> at
> org.wso2.carbon.core.internal.StartupFinalizerServiceComponent.completeInitialization(StartupFinalizerServiceComponent.java:185)
> at
> org.wso2.carbon.core.internal.StartupFinalizerServiceComponent.serviceChanged(StartupFinalizerServiceComponent.java:288)
> at
> org.eclips

Re: [Dev] [DEV][AS] Ideas and ways to integrate with Hot Swap Agent

2016-01-24 Thread Thusitha Thilina Dayaratne
Hi,

AFAIU we can easily fix this by adding a null check on webappCC in the
CarbonWebappClassLoader.getResources method, as follows:

public Enumeration getResources(String name) throws IOException {
    Enumeration[] tmp = new Enumeration[2];
    if (parent != null && webappCC != null) { // add a null check on webappCC
        ...

WDYT?


Thanks

Thusitha


On Mon, Jan 25, 2016 at 12:04 PM, Thusitha Thilina Dayaratne <
thusit...@wso2.com> wrote:

> Hi Clovis,
>
> I've checked on the error that you have mentioned.
> According to the hotswapagent logic 
> *org.hotswap.agent.plugin.tomcat.TomcatPlugin.init()
> *method get called during WebappLoader.start lifecycle event. But at the
> time STARTING event get fired we haven't set the  WebappClassloadingContext
> to the WebappClassloader. Therefore when it tries to do some checks in
> order to load the resources it throws the NPE.
>
> *org.wso2.carbon.webapp.mgt.loader.CarbonWebappLoader*
>
> @Override
> protected void startInternal() throws LifecycleException {
>
> ...
>
> super.startInternal();  > In this point STARTING even get fired and so 
> *TomcatPlugin.init * method get called and it try to get the resource. That 
> called getResources method and since we haven't still set the 
> webappClassloadingContext it throws a NPE.
>
>
> //Adding the WebappClassloadingContext to the WebappClassloader
>
> ((CarbonWebappClassLoader)
> getClassLoader()).setWebappCC(webappClassloadingContext); // This is the
> point where we are setting the webappClassloadingContext
>
> Thanks
> Thusitha
>
> On Fri, Jan 22, 2016 at 4:54 PM, Clovis Wichoski 
> wrote:
>
>> Hi Kishanthan,
>>
>> The redeploy all of a small system its ok, but for a big one, its a
>> problem, just a change in one class you must redeploy, loose sessions,
>> restart the system from scratch, to reach the point of test again, with the
>> ideas used by HotSwapAgent they act on reloading only that one class
>> changed, this way speeds the development time, these ideas are well
>> defended by JRebel tool, that HotSwapAgent tends to be a choice as an open
>> source solution. Please note that this feature is only needed for
>> development phase, in production, the scheduled task its ok.
>>
>> I mean time I tested with AS 5.3.0, JRebel and HotSwapAgent, JRebel trial
>> worked as expected, but HotSwapAgent not (see exception bellow [1]), I will
>> check these problems with HotSwapAgent devs, and try to discover why dont
>> works with WSO2 AS as with HotSwapAgent active none of webapps load, all
>> get that exception, as we see a NPE in class CarbonWebappClassLoader maybe
>> its the way they initialize the dependent class loaders.
>>
>> [1] Exception when HotSwapAgent is active:
>> [2016-01-22 09:14:18,467] ERROR {org.apache.catalina.core.ContainerBase}
>> -  ContainerBase.addChild: start:
>> org.apache.catalina.LifecycleException: Failed to start component
>> [StandardEngine[Catalina].StandardHost[localhost].StandardContext[/example]]
>> at
>> org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:154)
>> at
>> org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:901)
>> at
>> org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:877)
>> at
>> org.apache.catalina.core.StandardHost.addChild(StandardHost.java:649)
>> at
>> org.wso2.carbon.tomcat.internal.CarbonTomcat.addWebApp(CarbonTomcat.java:344)
>> at
>> org.wso2.carbon.tomcat.internal.CarbonTomcat.addWebApp(CarbonTomcat.java:252)
>> at
>> org.wso2.carbon.webapp.mgt.TomcatGenericWebappsDeployer.handleWebappDeployment(TomcatGenericWebappsDeployer.java:314)
>> at
>> org.wso2.carbon.webapp.mgt.TomcatGenericWebappsDeployer.handleWarWebappDeployment(TomcatGenericWebappsDeployer.java:212)
>> at
>> org.wso2.carbon.webapp.mgt.TomcatGenericWebappsDeployer.handleHotDeployment(TomcatGenericWebappsDeployer.java:179)
>> at
>> org.wso2.carbon.webapp.mgt.TomcatGenericWebappsDeployer.deploy(TomcatGenericWebappsDeployer.java:144)
>> at
>> org.wso2.carbon.webapp.mgt.AbstractWebappDeployer.deployThisWebApp(AbstractWebappDeployer.java:224)
>> at
>> org.wso2.carbon.webapp.mgt.AbstractWebappDeployer.deploy(AbstractWebappDeployer.java:114)
>> at
>> org.wso2.carbon.webapp.deployer.WebappDeployer.deploy(WebappDeployer.java:42)
>> at
>> org.apache.axis2.deployment.repository.util.DeploymentFileData.deploy(DeploymentFileData.java:136)
>> at
>> org.apache.axis2.deployment.DeploymentEngine.doDeploy(DeploymentEngine.java:807)
>> at
>> org.apache.axis2.deployment.repository.util.WSInfoList.update(WSInfoList.java:144)
>> at
>> org.apache.axis2.deployment.RepositoryListener.update(RepositoryListener.java:377)
>> at
>> org.apache.axis2.deployment.RepositoryListener.checkServices(RepositoryListener.java:254)
>> at
>> org.apache.axis2.deployment.DeploymentEngine.loadServices(DeploymentEngine.java:135)
>> at
>> org.wso2.carbon.core.CarbonAxisConfigurator.deployServices(Carbo