Re: [Dev] Update on BAM Task migration

2012-03-25 Thread Buddhika Chamith
On Mon, Mar 26, 2012 at 12:04 PM, Prabath Abeysekera wrote:

> Hi all,
>
> I've been working on the $subject and currently there's only one task
> remaining to be completed. The main intention of this effort is to
> migrate the existing task implementation of BAM to ntasks, which is a
> Quartz-based task scheduler component introduced a couple of months back.
> While working on the remaining bit of work, I ran into an issue, and here
> is my suggestion for overcoming it.
>
> In the current BAM task implementation, there's a mechanism called
> DataContext which is used to pass runtime data to a particular task
> execution (such as the default credentials to log into Cassandra, the
> tenant ID, etc.) and to populate it, in memory, with records fetched from
> Cassandra (using the GetAnalyser). Whenever a task is scheduled, this
> object is passed into it, and the task then uses the DataContext to grab
> the runtime information required for its execution. Ntasks, being a
> generic task scheduler component, only accepts a map of String key-value
> pairs as runtime properties. In other words, if an additional set of
> properties needs to be passed apart from the ones provided by default
> (for example, the cron expression, task count, task interval, etc.), they
> have to be passed as Strings. Thus, we either need to serialize the
> DataContext or find some other mechanism to pass the properties in the
> DataContext to the corresponding task.
>
>
Currently the DataContext is used as a holder for data fetched from
Cassandra, so serializing it is neither trivial nor particularly useful.

> IIUC, I believe it would be more appropriate if we could make the
> DataContext local to the task execution, since a task is a single
> independent execution unit and the DataContext is bound to a particular
> task. To elaborate, the DataContext seems to be effectively used only in
> the task execution phase, so I believe we could build it up at that phase
> without initializing it in the task scheduling phase. This way we can get
> rid of the binding of the DataContext at the task scheduling phase, and
> we can pass the properties of the DataContext as String key-value pairs
> as well.
>

+1. This decouples the runtime data and the configuration data currently
present in the DataContext. We can then use the DataContext purely for
holding runtime data (basically the fetched records), which is specific to
a particular execution of the task.


> Had an offline chat with Tharindu on this and we agreed to proceed with
> the above suggested mechanism which involves removing the binding of the
> DataContext object at the task scheduling phase.
>
>
>
> Regards,
> --
> Prabath Abeysekara
> Software Engineer
> WSO2 Inc.
> Email: praba...@wso2.com 
> Mobile: +94774171471
>
> 
>
>
> ___
> Dev mailing list
> Dev@wso2.org
> http://wso2.org/cgi-bin/mailman/listinfo/dev
>
>
Regards
Buddhika
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] Running some code when installing a feature for C4

2012-03-27 Thread Buddhika Chamith
Yes, if we have something like this at the moment we may be able to
leverage it for our toolbox stuff as well. See the thread "Deployment of
Domain Specific ToolBoxes for BAM" on the architecture list.
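For illustration, a custom touchpoint action of the kind discussed below could be mimicked roughly as follows. The real extension point is `org.eclipse.equinox.p2.engine.spi.ProvisioningAction` (whose execute/undo return an IStatus); the interface here is a self-contained stand-in so the sketch compiles on its own, and all names are hypothetical.

```java
import java.util.Map;

// Local stand-in for the p2 touchpoint action shape: an install-time
// execute() plus an undo() for rollback/uninstall.
interface TouchpointAction {
    String execute(Map<String, Object> parameters); // null on success
    String undo(Map<String, Object> parameters);
}

public class RegistryConfigAction implements TouchpointAction {
    boolean configApplied = false;

    @Override
    public String execute(Map<String, Object> parameters) {
        // Here a feature could run its own install-time code, e.g. call
        // admin services to store configuration in the registry.
        configApplied = true;
        return null;
    }

    @Override
    public String undo(Map<String, Object> parameters) {
        // Undo whatever execute() did when the install is rolled back.
        configApplied = false;
        return null;
    }

    public static void main(String[] args) {
        RegistryConfigAction action = new RegistryConfigAction();
        String result = action.execute(java.util.Collections.<String, Object>emptyMap());
        System.out.println(result == null ? "install hook ran" : "install hook failed");
    }
}
```

As noted below, a component would register such an action through whatever interface Carbon core exposes rather than against the Eclipse touchpoint machinery directly.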

Regards
Buddhika

On Tue, Mar 27, 2012 at 12:38 PM, Tharindu Mathew  wrote:

>
>
> On Tue, Mar 27, 2012 at 11:52 AM, Hiranya Jayathilaka wrote:
>
>>
>>
>> On Mon, Mar 26, 2012 at 10:30 PM, Tharindu Mathew wrote:
>>
>>> I have configs stored in the registry. Easiest option is to run some
>>> code to call the admin services. Otherwise, I have to look at file system
>>> based configuration.
>>>
>>> Can I do this without changing carbon core, i.e. within the
>>> component/feature code? Meaning, is it possible to extend the touchpoint
>>> for a new feature I'm going to implement without changing anything else?
>>
>>
>> I think that's a slippery slope. Individual components should not
>> directly interact with Eclipse touchpoints. Rather Carbon core should
>> provide some sort of an interface or an OSGi service to expose this
>> functionality.
>>
> That would be great. I guess the question is would that sort of thing be
> available already?
>
>>
>> Thanks,
>> Hiranya
>>
>>
>>>
>>>
>>> On Mon, Mar 26, 2012 at 10:21 PM, Pradeep Fernando wrote:
>>>
 Hi,

 yes, it should be possible. The rationale is,

 when installing features, we already do things such as file-copying,
 extracting, etc. Those are implemented as eclipse touchpoint actions. We
 can come up with our own touchpoint implementations.

 within the framework hook of the touchpoint implementation, we should
 be able to do whatever we want. :)

 then again, why do we want this? Can you please explain the scenario.

 --Pradeep

>>>
>>>
>>>
>>> --
>>> Regards,
>>>
>>> Tharindu
>>>
>>> blog: http://mackiemathew.com/
>>> M: +9459908
>>>
>>>
>>> ___
>>> Dev mailing list
>>> Dev@wso2.org
>>> http://wso2.org/cgi-bin/mailman/listinfo/dev
>>>
>>>
>>
>>
>> --
>> Hiranya Jayathilaka
>> Associate Technical Lead;
>> WSO2 Inc.;  http://wso2.org
>> E-mail: hira...@wso2.com;  Mobile: +94 77 633 3491
>> Blog: http://techfeast-hiranya.blogspot.com
>>
>
>
>
> --
> Regards,
>
> Tharindu
>
> blog: http://mackiemathew.com/
> M: +9459908
>
>
> ___
> Dev mailing list
> Dev@wso2.org
> http://wso2.org/cgi-bin/mailman/listinfo/dev
>
>
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] Build failures in Gadget IDE

2012-03-27 Thread Buddhika Chamith
We are looking into this.

Regards
Buddhika

On Tue, Mar 27, 2012 at 3:46 PM, Isuru Suriarachchi  wrote:

> To start with, some Maven plugins and the Gadget IDE stub were not added
> to the relevant root poms. I added those and committed. But now I get the
> following error. Looks like the stub is given a wrong version.
>
> Please fix ASAP.
>
>
> [INFO] WSO2 Carbon - Gadget IDE - UI . FAILURE [0.283s]
> [INFO]
> 
> [INFO] BUILD FAILURE
> [INFO]
> 
> [INFO] Total time: 3.655s
> [INFO] Finished at: Tue Mar 27 15:12:23 IST 2012
> [INFO] Final Memory: 40M/482M
> [INFO]
> 
> [ERROR] Failed to execute goal on project org.wso2.carbon.gadget.ide.ui:
> Could not resolve dependencies for project
> org.wso2.carbon:org.wso2.carbon.gadget.ide.ui:bundle:4.0.0-SNAPSHOT: The
> repository system is offline but the artifact
> org.wso2.carbon:org.wso2.carbon.gadget.ide.stub:jar:4.0.0-SNAPSHOT is not
> available in the local repository. -> [Help 1]
> [ERROR]
> [ERROR] To see the full stack trace of the errors, re-run Maven with the
> -e switch.
> [ERROR] Re-run Maven using the -X switch to enable full debug logging.
> [ERROR]
> [ERROR] For more information about the errors and possible solutions,
> please read the following articles:
> [ERROR] [Help 1]
> http://cwiki.apache.org/confluence/display/MAVEN/DependencyResolutionException
> [ERROR]
>
>
> --
> Isuru Suriarachchi
> Technical Lead
> WSO2 Inc. http://wso2.com
> email : is...@wso2.com
> blog : http://isurues.wordpress.com/
>
> lean . enterprise . middleware
>
>
> ___
> Dev mailing list
> Dev@wso2.org
> http://wso2.org/cgi-bin/mailman/listinfo/dev
>
>
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] [platform] why several libthrift orbit versions?

2012-03-28 Thread Buddhika Chamith
On Wed, Mar 28, 2012 at 10:02 PM, Pradeep Fernando  wrote:

> Hi all,
>
> Actually, we decided to have different versions after a long mailing
> thread discussion. It's better to use the same version, but that was not
> the case. After all, we are using the OSGi module layer, so why not have
> different versions?
>

Yes, having the same version may not be practical, especially due to the
Cassandra dependency on an earlier version of Thrift, which AFAIK is
incompatible with the new version. The Identity and Agent components use
new features present in Thrift, if I am not mistaken, which are not
present in the version used by Cassandra. Not sure about DSS though.
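What makes the coexistence work is that OSGi wires package imports per bundle, so each consumer can bind to a different libthrift orbit bundle via a version-ranged import. Roughly, the two manifests would carry entries like the following (symbolic ranges here are made up for illustration; the first line is a consumer of the newer API, the second a Cassandra-side bundle pinned to the older one):

```
Import-Package: org.apache.thrift;version="[0.6.0,0.7.0)"

Import-Package: org.apache.thrift;version="[0.5.0,0.6.0)"
```

The resolver then wires each bundle to whichever exported `org.apache.thrift` version falls inside its range, and the two never see each other's classes.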

Regards
Buddhika


>
> Please correct me if I'm wrong; else,
> please resolve the JIRA.
>
> --Pradeep
>
> ___
> Dev mailing list
> Dev@wso2.org
> http://wso2.org/cgi-bin/mailman/listinfo/dev
>
>
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


[Dev] Enabling equinox trace logs in trunk carbon

2012-04-16 Thread Buddhika Chamith
Hi All,

I see that the osgi-debug.options file is not included in the distribution
anymore, even though launch.ini still refers to this file in its old
location (./lib/core/WEB-INF/eclipse - commented out by default). I think
it's useful to have this option since it helps debug situations where the
Carbon server hangs without any error logs.
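For anyone enabling this: once launch.ini is pointed at the file, the .options file itself is a plain properties list of trace flags. A few entries that tend to be useful for hang debugging look roughly like this (the exact option set depends on the Equinox version shipped):

```
org.eclipse.osgi/debug=true
org.eclipse.osgi/resolver/debug=true
org.eclipse.osgi/debug/loader=true
org.eclipse.osgi/debug/startlevel=true
```

With these on, Equinox logs resolver and class-loading activity to the console, which usually shows which bundle the startup is stuck on.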

Regards
Buddhika
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] Enabling equinox trace logs in trunk carbon

2012-04-16 Thread Buddhika Chamith
On Mon, Apr 16, 2012 at 6:06 PM, Pradeep Fernando  wrote:

> Hi,
>
> I introduced it back a few days ago.


Cool.


> Did you get an svn update? The new location of the config file is
> repository/conf.
>

I will take an update and see.


>
> --Pradeep
>

Regards
Buddhika
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] Build failure in Orbit

2012-05-03 Thread Buddhika Chamith
Hi,

Can you try building dependencies/datanucleus first?

Regards
Buddhika

On Thu, May 3, 2012 at 3:08 PM, Shammi Jayasinghe  wrote:

> I am also getting this. Seems
> "org.datanucleus.wso2:datanucleus-core:jar:3.0.1" is missing from the nexus repo.
>
> Thanks
> Shammi
> On Thu, May 3, 2012 at 2:30 PM, Sanjaya Vithanagama wrote:
>
>> Hi All,
>>
>> I'm getting the following error when trying to build orbit. (on r126375).
>>
>> [INFO]
>> 
>> [INFO] Building datanucleus-core.wso2 3.0.1.wso2v1
>> [INFO]
>> 
>> [WARNING] The POM for org.datanucleus.wso2:datanucleus-core:jar:3.0.1 is
>> missing, no dependency information available
>> [INFO]
>> 
>> [INFO] BUILD FAILURE
>> [INFO]
>> 
>> [INFO] Total time: 0.700s
>> [INFO] Finished at: Thu May 03 14:26:01 IST 2012
>> [INFO] Final Memory: 8M/981M
>> [INFO]
>> 
>> [ERROR] Failed to execute goal on project datanucleus-core: Could not
>> resolve dependencies for project
>> org.datanucleus.wso2:datanucleus-core:bundle:3.0.1.wso2v1: Failure to find
>> org.datanucleus.wso2:datanucleus-core:jar:3.0.1 in
>> http://maven.wso2.org/nexus/content/groups/wso2-public/ was cached in
>> the local repository, resolution will not be reattempted until the update
>> interval of wso2-nexus has elapsed or updates are forced -> [Help 1]
>> [ERROR]
>> [ERROR] To see the full stack trace of the errors, re-run Maven with the
>> -e switch.
>> [ERROR] Re-run Maven using the -X switch to enable full debug logging.
>> [ERROR]
>> [ERROR] For more information about the errors and possible solutions,
>> please read the following articles:
>> [ERROR] [Help 1]
>> http://cwiki.apache.org/confluence/display/MAVEN/DependencyResolutionException
>>
>> --
>> Sanjaya Vithanagama
>> WSO2, Inc.; http://wso2.com
>> lean.enterprise.middleware
>>
>> cell: +94 71 342 2881
>>
>>
>> ___
>> Dev mailing list
>> Dev@wso2.org
>> http://wso2.org/cgi-bin/mailman/listinfo/dev
>>
>>
>
>
> --
> Best Regards,
>
> Shammi Jayasinghe
> Senior Software Engineer; WSO2, Inc.; http://wso2.com,
> mobile: +94 71 4493085
>
>
>
> ___
> Dev mailing list
> Dev@wso2.org
> http://wso2.org/cgi-bin/mailman/listinfo/dev
>
>
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] Build failure in Orbit

2012-05-03 Thread Buddhika Chamith
I have forked datanucleus under dependencies since it needed to be patched
for the Hive stuff. You are right that it should come under
dependencies/orbit, to where I have moved it now. I need to look into the
test failures coming from datanucleus; however, for the moment can you
continue by building datanucleus with its tests skipped.

Regards
Buddhika

On Thu, May 3, 2012 at 3:44 PM, Shammi Jayasinghe  wrote:

>
>
> On Thu, May 3, 2012 at 3:04 PM, Buddhika Chamith wrote:
>
>> Hi,
>>
>> Can you try building dependencies/datanuclues first and try.
>>
>
> Hi Buddhika,
>   AFAIK this bundle is a top-level orbit bundle. Does it depend on
> platform dependencies?
>
> However, when building the platform dependencies I am getting the following error:
>
> [INFO]
> 
> [INFO] BUILD FAILURE
> [INFO]
> 
> [INFO] Total time: 1:41.547s
> [INFO] Finished at: Thu May 03 15:08:56 IST 2012
> [INFO] Final Memory: 19M/178M
> [INFO]
> 
> [ERROR] Failed to execute goal
> org.apache.maven.plugins:maven-compiler-plugin:2.0:testCompile
> (default-testCompile) on project datanucleus-core: Compilation failure:
> Compilation failure:
> [ERROR]
> /home/shammi/wso2/development/trunk/platform/dependencies/datanucleus/src/test/org/datanucleus/store/types/TypeManagerTest.java:[39,45]
> package org.datanucleus.store.types.converters does not exist
> [ERROR]
> [ERROR]
> /home/shammi/wso2/development/trunk/platform/dependencies/datanucleus/src/test/org/datanucleus/store/types/TypeManagerTest.java:[225,89]
> package org.datanucleus.store.types.converters does not exist
> [ERROR]
> [ERROR]
> /home/shammi/wso2/development/trunk/platform/dependencies/datanucleus/src/test/org/datanucleus/store/types/TypeManagerTest.java:[226,89]
> package org.datanucleus.store.types.converters does not exist
> [ERROR]
> [ERROR]
> /home/shammi/wso2/development/trunk/platform/dependencies/datanucleus/src/test/org/datanucleus/store/types/TypeManagerTest.java:[227,87]
> package org.datanucleus.store.types.converters does not exist
> [ERROR]
> [ERROR]
> /home/shammi/wso2/development/trunk/platform/dependencies/datanucleus/src/test/org/datanucleus/store/types/TypeManagerTest.java:[228,83]
> package org.datanucleus.store.types.converters does not exist
> [ERROR]
> [ERROR]
> /home/shammi/wso2/development/trunk/platform/dependencies/datanucleus/src/test/org/datanucleus/store/types/TypeManagerTest.java:[229,85]
> package org.datanucleus.store.types.converters does not exist
> [ERROR]
> [ERROR]
> /home/shammi/wso2/development/trunk/platform/dependencies/datanucleus/src/test/org/datanucleus/store/types/TypeManagerTest.java:[230,91]
> package org.datanucleus.store.types.converters does not exist
> [ERROR]
> [ERROR]
> /home/shammi/wso2/development/trunk/platform/dependencies/datanucleus/src/test/org/datanucleus/store/types/TypeManagerTest.java:[231,87]
> package org.datanucleus.store.types.converters does not exist
> [ERROR]
> [ERROR]
> /home/shammi/wso2/development/trunk/platform/dependencies/datanucleus/src/test/org/datanucleus/store/types/TypeManagerTest.java:[232,82]
> package org.datanucleus.store.types.converters does not exist
> [ERROR]
> [ERROR]
> /home/shammi/wso2/development/trunk/platform/dependencies/datanucleus/src/test/org/datanucleus/store/types/TypeManagerTest.java:[233,82]
> package org.datanucleus.store.types.converters does not exist
> [ERROR]
> [ERROR]
> /home/shammi/wso2/development/trunk/platform/dependencies/datanucleus/src/test/org/datanucleus/store/types/TypeManagerTest.java:[234,83]
> package org.datanucleus.store.types.converters does not exist
> [ERROR]
> [ERROR]
> /home/shammi/wso2/development/trunk/platform/dependencies/datanucleus/src/test/org/datanucleus/store/types/TypeManagerTest.java:[237,26]
> cannot find symbol
> [ERROR] symbol  : method
> getTypeConverterForType(java.lang.Class,java.lang.Class)
> [ERROR] location: class org.datanucleus.store.types.TypeManager
> [ERROR]
> [ERROR]
> /home/shammi/wso2/development/trunk/platform/dependencies/datanucleus/src/test/org/datanucleus/store/types/TypeManagerTest.java:[238,26]
> cannot find symbol
> [ERROR] symbol  : method
> getTypeConverterForType(java.lang.Class,java.lang.Class)
> [ERROR] location: class org.datanucleus.store.types.TypeManager
> [ERROR]
> [ERROR]
> /home/shammi/wso2/development/trunk/platform/dependencies/datanucleus/src/test/org/datanucleus/store/types/TypeManagerTest.java:[239,26]
> cannot find symbol
> [ERROR] symbol  : method
> getTypeConverterForType(,java.lang.Class)
> [ERROR] location:

Re: [Dev] [Bamboo-Build] WSO2 Carbon > platform > #154 has FAILED. Change made by nirmal and shelan.

2012-05-03 Thread Buddhika Chamith
Both hadoop and datanucleus have been moved to
platform/trunk/dependencies/orbits from orbit/trunk yesterday. Possibly the
orbit and platform build checkouts happened in an intermediate state?

Regards
Buddhika

On Fri, May 4, 2012 at 7:55 AM, Shelan Perera  wrote:

> Hi,
>
> The error has occurred due to the following. Is this related to the
> recent fix for the orbit bundle of datanucleus?
>
> WSO2 Hive - Core APIs and Manager interfaces .. FAILURE [0.435s]
> [ERROR] Failed to execute goal on project datanucleus-core: Could not
> resolve dependencies for project
> org.datanucleus.wso2:datanucleus-core:bundle:3.0.1.wso2v1: Failure to find
> org.datanucleus.wso2:datanucleus-core:jar:3.0.1 in
> http://maven.wso2.org/nexus/content/groups/wso2-public/ was cached in the
> local repository, resolution will not be reattempted until the update
> interval of wso2-nexus has elapsed or updates are forced -> [Help 1]
> 03-May-2012 15:26:48
>
>
>
>
> On Fri, May 4, 2012 at 4:05 AM, Bamboo  wrote:
>
>> [image: Failed] WSO2 Carbon > platform > #154 failed
>>
>> Code has been updated by nirmal, shelan.
>>
>> No failed tests found, a possible compilation error.
>>
>> Failing Jobs: Default Job (Default Stage) - 134 minutes - 2029 passed
>>
>> Code changes:
>> shelan - code cleanup and fixing issue on empty rows - 126433
>> nirmal - Increasing time out since LXCs take time to load - 126430
>>
>> This message was sent by Atlassian Bamboo.
>>
>> If you wish to stop receiving these emails, edit your user profile or
>> notify your administrator.
>>
>> ___
>> Dev mailing list
>> Dev@wso2.org
>> http://wso2.org/cgi-bin/mailman/listinfo/dev
>>
>>
>
>
> --
> Shelan Perera
>
> Software Engineer
> WSO2, Inc. : wso2.com
> lean.enterprise.middleware.
>
> Home Page : shelan.org
> Blog : blog.shelan.org
> Linked-in : http://www.linkedin.com/pub/shelan-perera/a/194/465
> Twitter : https://twitter.com/#!/shelan
>
> *Mobile*  : +94 772 604 402
>
>
>
> ___
> Dev mailing list
> Dev@wso2.org
> http://wso2.org/cgi-bin/mailman/listinfo/dev
>
>
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] [Bamboo-Build] WSO2 Carbon > platform > #156 has FAILED. Change made by lalaji.

2012-05-04 Thread Buddhika Chamith
This has been fixed as of r126448.

Regards
Buddhika

On Fri, May 4, 2012 at 1:59 PM, Bamboo  wrote:

> [image: Failed] WSO2 Carbon > platform > #156 failed
>
> Code has been updated by lalaji.
>
> No failed tests found, a possible compilation error.
>
> Failing Jobs: Default Job (Default Stage) - 132 minutes - 2029 passed
>
> Code changes:
> lalaji - Added blocks - 126437
>
> This message was sent by Atlassian Bamboo.
>
> If you wish to stop receiving these emails, edit your user profile or
> notify your administrator.
> ___
> Dev mailing list
> Dev@wso2.org
> http://wso2.org/cgi-bin/mailman/listinfo/dev
>
>
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] org.datanucleus.wso2:datanucleus-core ?

2012-05-04 Thread Buddhika Chamith
Hi,

This has been fixed as of r126448. Can you take an update of dependencies
and try?

Regards
Buddhika

On Fri, May 4, 2012 at 2:47 PM, Dimuthu Leelarathne wrote:

> Hi all,
>
> I am getting this error on revision 126436.
>
>
> [ERROR] Failed to execute goal on project datanucleus-core: Could not
> resolve dependencies for project
> org.datanucleus.wso2:datanucleus-core:bundle:3.0.1.wso2v1: The repository
> system is offline but the artifact
> org.datanucleus.wso2:datanucleus-core:jar:3.0.1 is not available in the
> local repository. -> [Help 1]
> [ERROR]
> [ERROR] To see the full stack trace of the errors, re-run Maven with the
> -e switch.
> [ERROR] Re-run Maven using the -X switch to enable full debug logging.
>
> I found a datanucleus in orbit, but it fails saying that is not available
> in nexus.
>
> [ERROR] Failed to execute goal on project datanucleus-core: Could not
> resolve dependencies for project
> org.datanucleus.wso2:datanucleus-core:bundle:3.0.1.wso2v1: Failure to find
> org.datanucleus.wso2:datanucleus-core:jar:3.0.1 in
> http://maven.wso2.org/nexus/content/groups/wso2-public/ was cached in the
> local repository, resolution will not be reattempted until the update
> interval of wso2-nexus has elapsed or updates are forced -> [Help 1]
> [ERROR]
> [ERROR] To see the full stack trace of the errors, re-run Maven with the
> -e switch.
> [ERROR] Re-run Maven using the -X switch to enable full debug logging.
> [ERROR]
> [ERROR] For more information about the errors and possible solutions,
> please read the following articles:
> [ERROR] [Help 1]
> http://cwiki.apache.org/confluence/display/MAVEN/DependencyResolutionException
>
> Any help is really appreciated.
>
> thanks,
> dimuthu
>
>
>
> --
> Dimuthu Leelarathne
> Technical Lead
>
> WSO2, Inc. (http://wso2.com)
> email: dimut...@wso2.com
>
> Lean . Enterprise . Middleware
>
>
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] Compilation errors from Apache-Hive dependency

2012-05-06 Thread Buddhika Chamith
Hi,

Yes, I switched the antlr version used by Hive due to some OSGi issues
when used with Cassandra. However, I have fixed the build error caused by
the version switch. Can you take an svn up and try?

Regards
Buddhika

On Mon, May 7, 2012 at 11:46 AM, Denis Weerasiri  wrote:

> Hi Buddhika,
> I'm getting these errors in trunk. Are we using the correct antlr-runtime
> api?
>
> compile:
>  [echo] Project: ql
> [javac] Compiling 7 source files to
> /opt/installations/wso2/trunk/carbon/platform/trunk/dependencies/hive/build/ql/classes
> [javac]
> /opt/installations/wso2/trunk/carbon/platform/trunk/dependencies/hive/ql/src/java/org/apache/hadoop/hive/ql/parse/ParseDriver.java:317:
> recoverFromMismatchedSet(org.antlr.runtime.IntStream,org.antlr.runtime.RecognitionException,org.antlr.runtime.BitSet)
> in org.apache.hadoop.hive.ql.parse.ParseDriver.HiveParserX cannot override
> recoverFromMismatchedSet(org.antlr.runtime.IntStream,org.antlr.runtime.RecognitionException,org.antlr.runtime.BitSet)
> in org.antlr.runtime.BaseRecognizer; attempting to use incompatible return
> type
> [javac] found   : java.lang.Object
> [javac] required: void
> [javac] public Object recoverFromMismatchedSet(IntStream input,
> [javac]   ^
> [javac]
> /opt/installations/wso2/trunk/carbon/platform/trunk/dependencies/hive/ql/src/java/org/apache/hadoop/hive/ql/parse/ParseDriver.java:316:
> method does not override or implement a method from a supertype
> [javac] @Override
> [javac] ^
> [javac] 2 errors
>
>
> [ERROR] Failed to execute goal
> org.apache.maven.plugins:maven-antrun-plugin:1.7:run (default) on project
> hive-deploy: An Ant BuildException has occured: The following error
> occurred while executing this line:
> [ERROR]
> /opt/installations/wso2/trunk/carbon/platform/trunk/dependencies/hive/build.xml:297:
> The following error occurred while executing this line:
> [ERROR]
> /opt/installations/wso2/trunk/carbon/platform/trunk/dependencies/hive/build.xml:151:
> The following error occurred while executing this line:
> [ERROR]
> /opt/installations/wso2/trunk/carbon/platform/trunk/dependencies/hive/ql/build.xml:192:
> Compile failed; see the compiler error output for details.
> [ERROR] around Ant part ... antfile="/opt/installations/wso2/trunk/carbon/platform/trunk/dependencies/hive/deploy/../build.xml"
> target="package">... @ 4:125 in
> /opt/installations/wso2/trunk/carbon/platform/trunk/dependencies/hive/deploy/target/antrun/build-main.xml
> [ERROR] -> [Help 1]
> [ERROR]
>
>
> --
> Thanks,
> Denis
> --
> Denis Weerasiri
> Senior Software Engineer
> Integration Technologies Team, WSO2 Inc.; http://wso2.com
> email: denis [AT] wso2.com
> phone: +94117639629
> site: https://sites.google.com/site/ddweerasiri/
> blog: http://ddweerasiri.blogspot.com
> twitter: http://twitter.com/ddweerasiri
> linked-in: http://lk.linkedin.com/in/ddweerasiri
>
>
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] Hadoop download in platform/trunk/dependencies/hive clog the build

2012-05-06 Thread Buddhika Chamith
Hi,

It will not download these distributions on subsequent builds since it
caches them. I think it is using these different hadoop versions to run
compatibility tests with those hadoop versions. However I will look in to
whether we can make this picked from nexus.

Regards
Buddhika

On Mon, May 7, 2012 at 12:10 PM, Sumedha Rubasinghe wrote:

> And again.. a different version..
>
> [ivy:retrieve] downloading
> http://mirror.facebook.net/facebook/hive-deps/hadoop/core/hadoop-0.23.0/hadoop-0.23.0.tar.gz...
>
>
>
> On Mon, May 7, 2012 at 10:39 AM, Sumedha Rubasinghe wrote:
>
>> And, we seem to be downloading another version soon after.
>> [ivy:retrieve] downloading
>> http://mirror.facebook.net/facebook/hive-deps/hadoop/core/hadoop-0.20.3-CDH3-SNAPSHOT/hadoop-0.20.3-CDH3-SNAPSHOT.tar.gz...
>>
>> Can we use the latest & stop previous download?
>>
>>
>> On Mon, May 7, 2012 at 10:35 AM, Sumedha Rubasinghe wrote:
>>
>>> The Hive build downloads Hadoop from
>>> http://mirror.facebook.net/facebook/hive-deps/hadoop/core/hadoop-0.20.1/hadoop-0.20.1.tar.gz
>>> and this is close to 40 MB in size. The link seems to be very slow even
>>> when accessed from a browser.
>>>
>>>
>>> --
>>> /sumedha
>>> +94 773017743
>>>
>>
>>
>>
>> --
>> /sumedha
>> +94 773017743
>>
>
>
>
> --
> /sumedha
> +94 773017743
>
> ___
> Dev mailing list
> Dev@wso2.org
> http://wso2.org/cgi-bin/mailman/listinfo/dev
>
>
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] Compilation errors from Apache-Hive dependency

2012-05-07 Thread Buddhika Chamith
This happens because the build target folder was not being cleaned up
properly, so it will not happen for first-time builders. I have added a
clean target, so it should be fine now.

Regards
Buddhika

On Mon, May 7, 2012 at 12:21 PM, Buddhika Chamith wrote:

> Hi,
>
> Yes, I switched the antlr version used by Hive due to some OSGi issues
> when used with Cassandra. However, I have fixed the build error caused by
> the version switch. Can you take an svn up and try?
>
> Regards
> Buddhika
>
>
> On Mon, May 7, 2012 at 11:46 AM, Denis Weerasiri  wrote:
>
>> Hi Buddhika,
>> I'm getting these errors in trunk. Are we using the correct antlr-runtime
>> api?
>>
>> compile:
>>  [echo] Project: ql
>> [javac] Compiling 7 source files to
>> /opt/installations/wso2/trunk/carbon/platform/trunk/dependencies/hive/build/ql/classes
>> [javac]
>> /opt/installations/wso2/trunk/carbon/platform/trunk/dependencies/hive/ql/src/java/org/apache/hadoop/hive/ql/parse/ParseDriver.java:317:
>> recoverFromMismatchedSet(org.antlr.runtime.IntStream,org.antlr.runtime.RecognitionException,org.antlr.runtime.BitSet)
>> in org.apache.hadoop.hive.ql.parse.ParseDriver.HiveParserX cannot override
>> recoverFromMismatchedSet(org.antlr.runtime.IntStream,org.antlr.runtime.RecognitionException,org.antlr.runtime.BitSet)
>> in org.antlr.runtime.BaseRecognizer; attempting to use incompatible return
>> type
>> [javac] found   : java.lang.Object
>> [javac] required: void
>> [javac] public Object recoverFromMismatchedSet(IntStream input,
>> [javac]   ^
>> [javac]
>> /opt/installations/wso2/trunk/carbon/platform/trunk/dependencies/hive/ql/src/java/org/apache/hadoop/hive/ql/parse/ParseDriver.java:316:
>> method does not override or implement a method from a supertype
>> [javac] @Override
>> [javac] ^
>> [javac] 2 errors
>>
>>
>> [ERROR] Failed to execute goal
>> org.apache.maven.plugins:maven-antrun-plugin:1.7:run (default) on project
>> hive-deploy: An Ant BuildException has occured: The following error
>> occurred while executing this line:
>> [ERROR]
>> /opt/installations/wso2/trunk/carbon/platform/trunk/dependencies/hive/build.xml:297:
>> The following error occurred while executing this line:
>> [ERROR]
>> /opt/installations/wso2/trunk/carbon/platform/trunk/dependencies/hive/build.xml:151:
>> The following error occurred while executing this line:
>> [ERROR]
>> /opt/installations/wso2/trunk/carbon/platform/trunk/dependencies/hive/ql/build.xml:192:
>> Compile failed; see the compiler error output for details.
>> [ERROR] around Ant part ...> antfile="/opt/installations/wso2/trunk/carbon/platform/trunk/dependencies/hive/deploy/../build.xml"
>> target="package">... @ 4:125 in
>> /opt/installations/wso2/trunk/carbon/platform/trunk/dependencies/hive/deploy/target/antrun/build-main.xml
>> [ERROR] -> [Help 1]
>> [ERROR]
>>
>>
>> --
>> Thanks,
>> Denis
>> --
>> Denis Weerasiri
>> Senior Software Engineer
>> Integration Technologies Team, WSO2 Inc.; http://wso2.com
>> email: denis [AT] wso2.com
>> phone: +94117639629
>> site: https://sites.google.com/site/ddweerasiri/
>> blog: http://ddweerasiri.blogspot.com
>> twitter: http://twitter.com/ddweerasiri
>> linked-in: http://lk.linkedin.com/in/ddweerasiri
>>
>>
>
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


[Dev] BAM2 Milestone 3 Released

2012-05-08 Thread Buddhika Chamith
WSO2 is pleased to announce the availability of BAM2 Milestone 3. This
milestone cannot be tested for samples or end-to-end scenarios because of
the ongoing/removed features listed below. Testing will be possible in
future milestones and will be announced as appropriate.

New features included in this release are:

- Integration of SQL based HiveQL as the analytics language and Apache Hive
integration
- Apache Hive and Cassandra integration facilitating data retrieval from
Cassandra to Hive analytic jobs

Ongoing features
- Revamping Java SDK for Agent API
- Strongly typed events
- Gadget Generation wizard
- Data writing into Cassandra/ JDBC after running Hive jobs

Removed Features
- Adding analytics based on BAM Analyzer XML language
- Analyzer Framework

Hive analytic jobs can be programmatically submitted using the Hive JDBC
driver [1]. Samples on retrieving data from Cassandra into Hive can be
found at [2]. Please note that these features are still work in progress;
the associated user interfaces are currently being worked on and will be
available in the next milestone releases.
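As a quick orientation, submitting a job over the Hive JDBC driver follows the pattern in the Hive client docs at [1]; a hedged sketch (host, port, and the query are illustrative, and a running Hive server plus the driver jar on the classpath are assumed):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class HiveJdbcSketch {

    // Build the HiveServer JDBC URL; split out so it can be checked alone.
    static String hiveUrl(String host, int port, String db) {
        return "jdbc:hive://" + host + ":" + port + "/" + db;
    }

    public static void main(String[] args) throws Exception {
        // Without a host argument, just show the URL instead of connecting.
        if (args.length == 0) {
            System.out.println(hiveUrl("localhost", 10000, "default"));
            return;
        }
        // Requires the Hive JDBC driver jar and a running Hive server.
        Class.forName("org.apache.hadoop.hive.jdbc.HiveDriver");
        try (Connection con = DriverManager.getConnection(
                     hiveUrl(args[0], 10000, "default"), "", "");
             Statement stmt = con.createStatement()) {
            // The query here stands in for an actual HiveQL analytic job.
            stmt.execute("SELECT count(1) FROM some_table");
        }
    }
}
```

The same connection can also be used to issue the Cassandra storage-handler DDL described in [2].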

Distribution can be downloaded from [3].


Thanks
WSO2 Data Technologies Team

[1] https://cwiki.apache.org/Hive/hiveclient.html
[2] http://www.datastax.com/docs/0.8/brisk/about_hive
[3] http://dist.wso2.org/products/bam/2.0.0-m1/wso2bam-2.0.0-SNAPSHOT.zip
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] Hive build failure in windows

2012-05-11 Thread Buddhika Chamith
Hi,

I have removed the symlinks from the ant build since they are not required
for building the Maven artifacts. It would be great if someone running on
Windows could verify whether it builds properly now.

Regards
Buddhika

On Sat, May 12, 2012 at 8:09 AM, Chintana Wilamuna wrote:

> Hi,
>
> Easier way to fix this is to install MinGW (http://www.mingw.org/) in
> Windows. Then install MSYS (mingw-get install msys) which has ln and other
> Unix tools.
>
> -Chintana
>
>
> On Fri, May 11, 2012 at 1:26 AM, Thilina Buddhika wrote:
>
>> Hi Folks,
>>
>> This is not yet fixed and build remains broken for Windows users. Quick
>> resolution would be highly appreciated.
>>
>> Thanks,
>> Thilina
>>
>>
>> On Tue, May 8, 2012 at 8:26 AM, Selvaratnam Uthaiyashankar <
>> shan...@wso2.com> wrote:
>>
>>> On Tue, May 8, 2012 at 8:23 AM, Selvaratnam Uthaiyashankar
>>>  wrote:
>>> > $subject. It tries to create a symlink with "ln". This is not valid in
>>> > windows. Please fix this.
>>>
>>> Note that ant symlink task can work only in unix based platforms[1].
>>>
>>>
>>> [1] http://ant.apache.org/manual/Tasks/symlink.html
>>>
>>>
>>> >
>>> > [ERROR] Failed to execute goal
>>> > org.apache.maven.plugins:maven-antrun-plugin:1.7:run (default) on
>>> > project hive-deploy: An Ant BuildException has occured: The following
>>> > error occurred while executing th
>>> > is line:
>>> > [ERROR]
>>> E:\src\java\carbon_trunk\platform\dependencies\hive\build.xml:520:
>>> > Could not launch ln: java.io.IOException: Cannot run program "ln":
>>> > CreateProcess error=2, The system cannot find the file spe
>>> > cified
>>> > [ERROR] around Ant part ...>> >
>>> antfile="E:\src\java\carbon_trunk\platform\dependencies\hive\deploy/../build.xml"
>>> > target="package">... @ 4:107 in
>>> > E:\src\java\carbon_trunk\platform\dependencies\hive\de
>>> > ploy\target\antrun\build-main.xml
>>> > [ERROR] -> [Help 1]
>>> > [ERROR]
>>> > [ERROR] To see the full stack trace of the errors, re-run Maven with
>>> > the -e switch.
>>> > [ERROR] Re-run Maven using the -X switch to enable full debug logging.
>>> > [ERROR]
>>> > [ERROR] For more information about the errors and possible solutions,
>>> > please read the following articles:
>>> > [ERROR] [Help 1]
>>> >
>>> http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
>>> >
>>> >
>>> > --
>>> > S.Uthaiyashankar
>>> > Senior Software Architect
>>> > Chair, Management Committee – Cloud Technologies
>>> > WSO2 Inc.
>>> > http://wso2.com/ - "lean . enterprise . middleware"
>>> >
>>> > Phone: +94 714897591
>>>
>>>
>>>
>>> --
>>> S.Uthaiyashankar
>>> Senior Software Architect
>>> Chair, Management Committee – Cloud Technologies
>>> WSO2 Inc.
>>> http://wso2.com/ - "lean . enterprise . middleware"
>>>
>>> Phone: +94 714897591
>>> ___
>>> Dev mailing list
>>> Dev@wso2.org
>>> http://wso2.org/cgi-bin/mailman/listinfo/dev
>>>
>>
>>
>>
>> --
>> Thilina Buddhika
>> Associate Technical Lead
>> WSO2 Inc. ; http://wso2.com
>>
>> lean . enterprise . middleware
>>
>> phone : +94 77 44 88 727
>> blog : http://blog.thilinamb.com
>>
>> ___
>> Dev mailing list
>> Dev@wso2.org
>> http://wso2.org/cgi-bin/mailman/listinfo/dev
>>
>>
>
>
> --
> Chintana Wilamuna
> Associate Technical Lead
> WSO2, Inc.; http://wso2.com
> lean.enterprise.middleware
>
> phone: +94 75 211 1106
> blog: http://engwar.com/
> photos: http://flickr.com/photos/chintana
> linkedin: http://www.linkedin.com/in/engwar
> twitter: twitter.com/std_err
>
>
> ___
> Dev mailing list
> Dev@wso2.org
> http://wso2.org/cgi-bin/mailman/listinfo/dev
>
>


Re: [Dev] [Blog post] Cassandra Explorer

2012-05-13 Thread Buddhika Chamith
+1. Good stuff. Makes it a lot easier to work with Cassandra.

Regards
Buddhika

On Sun, May 13, 2012 at 7:29 AM, Shelan Perera  wrote:

> Hi,
>
> I have added a small blog post on how to use $Subject. [1]
>
> [1]
> http://blog.shelan.org/2012/05/cassandra-explorer-gui-for-cassandra.html
>
> --
> *Shelan Perera*
>
> Software Engineer
> **
> *WSO2, Inc. : wso2.com*
> lean.enterprise.middleware.
>
> *Home Page*  :shelan.org
> *Blog* : blog.shelan.org
> *Linked-i*n  :http://www.linkedin.com/pub/shelan-perera/a/194/465
> *Twitter* :https://twitter.com/#!/shelan
>
> *Mobile*  : +94 772 604 402
>
>
>
> ___
> Dev mailing list
> Dev@wso2.org
> http://wso2.org/cgi-bin/mailman/listinfo/dev
>
>


Re: [Dev] [Architecture] [Carbon-dev] Improvement: UI for ntask component

2012-05-13 Thread Buddhika Chamith
This would be nice to have around. Has any work been done on this front so
far? We need a way to schedule analytics scripts (Hive) for BAM use cases.

Regards
Buddhika

On Mon, Feb 27, 2012 at 7:26 AM, Anjana Fernando  wrote:

> +1 .. As you mentioned, we can create a generic UI for start/stop/status
> functionality, so task based implementations can use the generic UI rather
> than always creating that part from scratch. Then we've to come up with
> some convention to link a specific task type to the context specific UI.
> The concept of "task type" is already there in ntask, so just have to do
> the mapping.
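The task-type-to-UI mapping described above could be as simple as a registry with a generic fallback; a hypothetical sketch (class and page names invented for illustration, not the actual ntask or Carbon UI API):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class TaskUiRegistry {
    // task type (as defined in ntask) -> context-specific UI page, e.g. a JSP path
    private final Map<String, String> uiByTaskType = new ConcurrentHashMap<>();

    // Generic start/stop/status page used when no specific UI is registered.
    private static final String GENERIC_UI = "task-mgt/generic.jsp";

    public void register(String taskType, String uiPage) {
        uiByTaskType.put(taskType, uiPage);
    }

    // Falls back to the generic UI for unmapped task types.
    public String uiFor(String taskType) {
        return uiByTaskType.getOrDefault(taskType, GENERIC_UI);
    }
}
```

Products would then only register their context-specific pages, and everything else gets the shared generic management UI.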
>
> Cheers,
> Anjana.
>
>
> On Fri, Feb 24, 2012 at 11:42 PM, Senaka Fernando  wrote:
>
>> Hi Tharindu,
>>
>> In effect yes. That's pretty much what's generic when it comes to tasks.
>> Something analogous (to a certain extent) would be the Windows Task manager
>> (the generic bit) compared with something like the Wrapper that we use to
>> start Carbon as a background process (the context-specific bit).
>>
>> Thanks,
>> Senaka.
>> On Fri, Feb 24, 2012 at 11:13 PM, Tharindu Mathew wrote:
>>
>>> Do you mean something like a task manager/monitor UI.
>>>
>>> functionality: Start, stop tasks, list running tasks...
>>>
>>>
>>> On Fri, Feb 24, 2012 at 10:25 PM, Senaka Fernando wrote:
>>>
 Hi Tharindu,

 Well, there is a slight relevance, and hence this e-mail. Take the
 service UI for example. Multiple products publish multiple kinds of
 services which have different semantics, but utilize a single UI. And,
 another example is a situation as in G-Reg. There can be multiple types of
 tasks that may get scheduled even in a single product. I'm not suggesting a
 100% generic UI, but I'm trying to understand the best solution to this
 situation. While "we can write our own thing" is an easy answer, "how can I
 manage, understand and correlate all my tasks" is definitely going to be a
 question for users in the long run.

 Thanks,
 Senaka.
 On Fri, Feb 24, 2012 at 9:56 PM, Tharindu Mathew wrote:

> BAM will also use ntask quite soon, and what you say applies. The
> context of a task varies greatly.
>
> So having a generic UI has no meaning if the context of tasks are
> different, does it?
>
> On Fri, Feb 24, 2012 at 9:29 PM, Senaka Fernando wrote:
>
>>  Hi all,
>>
>> The ntask component, done by Anjana, is very useful to schedule any
>> type of task based on Quartz. I got G-Reg to use this, and (except for
>> exception handling which is totally not useful, :-)..) it is great. But,
>> DSS which is the only other product apart from the next release of G-Reg
>> which uses ntask has its own UI. G-Reg and any other product starting to
>> use ntask would love to have a UI to manage it, and a UI-per product is
>> definitely of least use. The proposal I'm making here is to drop the
>> existing DSS task scheduling UI and design a new one based on ntask, that
>> is generic such that more than one product can make use of it.
>>
>> But, there is a slight catch here because task scheduling can have a
>> different meaning from product to product. In DSS the use-case is DSS
>> invocation. In G-Reg some usecases are report generation and
>> change/lifecycle management. So, a proposal from Isabelle was to create a
>> generic task thing and link the context sensitive scheduling interfaces
>> (i.e. the Reporting UI that we are planning for G-Reg) with that.
>>
>> Your feedback is most appreciated.
>>
>> Thanks,
>> Senaka.
>>
>> --
>> *Senaka Fernando*
>> Product Manager - WSO2 Governance Registry;
>> Associate Technical Lead; WSO2 Inc.; http://wso2.com*
>> Member; Apache Software Foundation; http://apache.org
>>
>> E-mail: senaka AT wso2.com
>> **P: +1 408 754 7388; ext: 51736*; *M: +94 77 322 1818
>> Linked-In: http://linkedin.com/in/senakafernando
>>
>> *Lean . Enterprise . Middleware
>>
>>
>> ___
>> Architecture mailing list
>> architect...@wso2.org
>> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>>
>>
>
>
> --
> Regards,
>
> Tharindu
>
> blog: http://mackiemathew.com/
> M: +9459908
>
>
> ___
> Architecture mailing list
> architect...@wso2.org
> https://mail.wso2.org/cgi-bin/mailman/listinfo/architecture
>
>


 --
 *Senaka Fernando*
 Product Manager - WSO2 Governance Registry;
 Associate Technical Lead; WSO2 Inc.; http://wso2.com*
 Member; Apache Software Foundation; http://apache.org

 E-mail: senaka AT wso2.com
 **P: +1 408 754 7388; ext: 51736*; *M: +94 77 322 1818
 Linked-In: http://linkedin.com/in/senakafernando

 *Lean . Enterprise . Middleware


>

Re: [Dev] Massive amounts of Time spent on Building Dependencies with Ant-based Build Systems

2012-05-17 Thread Buddhika Chamith
Hi,

Sorry for the late reply. +1 for hosting the orbit. Hopefully changes to the
Hive code will be infrequent. I had to do several modifications to the Hive
source to make it work inside Carbon and to fix its Hadoop job submission
for BAM2 requirements.

Regards
Buddhika

On Thu, May 17, 2012 at 1:29 PM, Tharindu Mathew  wrote:

> I'm not sure of the repo, can we put this to the snapshot repo?
>
> ~/.m2/repository/org/apache/hive/wso2/hive/0.8.1-wso2v1 - this should be
> the path
>
>
> On Thu, May 17, 2012 at 12:10 PM, Yasith Tharindu  wrote:
>
>> can you send us the exact repo, path to host this .
>>
>>
>>
>> On Thu, May 17, 2012 at 11:40 AM, Tharindu Mathew wrote:
>>
>>> Hi Dhanushka/Yasith,
>>>
>>> Any luck with this?
>>>
>>>
>>> On Wed, May 16, 2012 at 7:17 PM, Tharindu Mathew wrote:
>>>
>>>> Can we upload this jar to the snapshot repo?
>>>>
>>>> We need to remove the hive dependency from the build pom for now...
>>>>
>>>>
>>>> -- Forwarded message --
>>>> From: Afkham Azeez 
>>>> Date: Wed, May 16, 2012 at 6:45 PM
>>>> Subject: Re: [Dev] Massive amounts of Time spent on Building
>>>> Dependencies with Ant-based Build Systems
>>>> To: Senaka Fernando , Buddhika Chamith <
>>>> buddhi...@wso2.com>
>>>> Cc: WSO2 Developers' List 
>>>>
>>>>
>>>> Buddhika,
>>>> Seems like you have added hive. Please get rid of it. We put MASSIVE
>>>> amounts of effort to make the platform build fast in the past few months,
>>>> and all that has been undone by adding these dependencies that take many
>>>> hours to build. Build those locally and add those to the SNAPSHOT repo.
>>>> Anyway, why do we need to keep the source code and build it? Why are we
>>>> modifying it?
>>>>
>>>>
>>>> On Wed, May 16, 2012 at 6:41 PM, Afkham Azeez  wrote:
>>>>
>>>>> Folks
>>>>> Please get rid of these dependencies. We are trying to do a critical
>>>>> fix to the platform, yet have to wait for serveral hours just to get past
>>>>> this ivy:resolve..
>>>>>
>>>>>
>>>>> Upload those artifacts to the snapshot repo!
>>>>>
>>>>> On Sat, May 12, 2012 at 12:28 PM, Senaka Fernando wrote:
>>>>>
>>>>>> Hi all,
>>>>>>
>>>>>> None of Maven's benefits are useful while building dependencies
>>>>>> inside the platform having ant-based build systems. Can we exclude them
> >>>>>> from the build? It's a tremendous amount of time spent doing a build due
> >>>>>> to dependencies such as Cassandra, Andes and Hive.
>>>>>>
>>>>>> Thanks,
>>>>>> Senaka.
>>>>>>
>>>>>> --
>>>>>> *Senaka Fernando*
>>>>>> Product Manager - WSO2 Governance Registry;
>>>>>> Associate Technical Lead; WSO2 Inc.; http://wso2.com*
>>>>>> Member; Apache Software Foundation; http://apache.org
>>>>>>
>>>>>> E-mail: senaka AT wso2.com
>>>>>> **P: +1 408 754 7388; ext: 51736*; *M: +94 77 322 1818
>>>>>> Linked-In: http://linkedin.com/in/senakafernando
>>>>>>
>>>>>> *Lean . Enterprise . Middleware
>>>>>>
>>>>>>
>>>>>> ___
>>>>>> Dev mailing list
>>>>>> Dev@wso2.org
>>>>>> http://wso2.org/cgi-bin/mailman/listinfo/dev
>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> *Afkham Azeez*
>>>>> Director of Architecture; WSO2, Inc.; http://wso2.com
>>>>> Member; Apache Software Foundation; http://www.apache.org/
>>>>> * <http://www.apache.org/>**
>>>>> email: **az...@wso2.com* * cell: +94 77 3320919
>>>>> blog: **http://blog.afkham.org* <http://blog.afkham.org>*
>>>>> twitter: 
>>>>> **http://twitter.com/afkham_azeez*<http://twitter.com/afkham_azeez>
>>>>> *
>>>>> linked-in: **http://lk.linkedin.com/in/afkhamazeez*
>>>>>
>>>>> *
>>>>> *
>>>>> *Lean . 

Re: [Dev] Conventions not followed for analytics component

2012-05-25 Thread Buddhika Chamith
Working on it.

Regards
Buddhika

On Fri, May 25, 2012 at 1:49 PM, Tharindu Mathew  wrote:

>
> This is not right... Please fix it on Monday; I'm in the process of
> building for a milestone release.
>
> $ ls
> org.wso2.carbon.analytics.hive
> org.wso2.carbon.hive.explorer.ui
>
> It should be;
> org.wso2.carbon.analytics.hive.ui
>
>
>
>
> --
> Regards,
>
> Tharindu
>
> blog: http://mackiemathew.com/
> M: +9459908
>
>
> ___
> Dev mailing list
> Dev@wso2.org
> http://wso2.org/cgi-bin/mailman/listinfo/dev
>
>


[Dev] Adding a Data Source using DataSourceInformationRepositoryService OSGi service

2012-05-30 Thread Buddhika Chamith
Hi,

I tried $subject. Although I was able to add a DataSource, the data source
definition doesn't seem to get persisted, so it won't survive across
server restarts, and the added DataSource doesn't get listed in the
Datasources page either. Going through the data-source component code, I
see that the datasource admin service has additional logic to persist the
definition to the registry, even though it's not included in the OSGi
service. I feel it would be beneficial to provide a comparable set of
functionality across the web service and the exposed OSGi service.
Any ideas on this?

Regards
Buddhika


Re: [Dev] Adding a Data Source using DataSourceInformationRepositoryService OSGi service

2012-05-30 Thread Buddhika Chamith
On Wed, May 30, 2012 at 5:47 PM, Prabath Abeysekera wrote:

>
> On Wed, May 30, 2012 at 4:56 PM, Buddhika Chamith wrote:
>
>> Hi,
>>
>> I tried $subject. Although I was able to add a DataSource the data source
>> definition doesn't seem to get persisted so that it won't survive across
>> server restarts and the added DataSource doesn't get listed in the
>> Datasources page as well. Going through the data-source component code I
>> see that there is additional logic to persist the definition to the
>> registry in the datasource admin service eventhough it's not included in
>> the OSGi service. I feel it would be beneficial the provide a comparable
>> set of functionalities across the web service and the OSGi service exposed.
>> Any ideas on this?
>>
>
> Currently, it doesn't expose the functionalities that can be performed
> upon datasources except the access for DataSourceRepository. +1 for the
> suggestion.
>

Yes. I was able to add a datasource definition using the returned
DataSourceRepository. It would be great if this could be incorporated into
the coming release.

Regards
Buddhika

>
>
>>
>> Regards
>> Buddhika
>>
>> ___
>> Dev mailing list
>> Dev@wso2.org
>> http://wso2.org/cgi-bin/mailman/listinfo/dev
>>
>>
>
>
> --
> Prabath Abeysekara
> Software Engineer
> WSO2 Inc.
> Email: praba...@wso2.com 
> Mobile: +94774171471
>
> <http://harshana05.blogspot.com/>
>
>


Re: [Dev] Performance testing/load testing with Cassandra appender

2012-06-11 Thread Buddhika Chamith
On Mon, Jun 11, 2012 at 6:03 PM, Paul Fremantle  wrote:

> Does the BAM event sender implement an in-memory queueing model and async?


Yes. Event publishing happens via a queue in an async manner.
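In outline, async publishing of this kind is a bounded in-memory queue drained by a background worker. The sketch below is a simplified stand-in, not the actual Agent API; the `delivered` list stands in for the network send to BAM:

```java
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

public class AsyncEventPublisher {
    // Bounded queue decouples the caller from the (slow) network send.
    private final BlockingQueue<String> queue = new LinkedBlockingQueue<>(10000);
    private final List<String> delivered = new CopyOnWriteArrayList<>();
    private final Thread worker;

    public AsyncEventPublisher() {
        worker = new Thread(() -> {
            try {
                while (!Thread.currentThread().isInterrupted()) {
                    String event = queue.poll(100, TimeUnit.MILLISECONDS);
                    if (event != null) {
                        delivered.add(event); // stand-in for the actual send to BAM
                    }
                }
            } catch (InterruptedException ignored) {
                // shutdown requested while waiting on the queue
            }
        });
        worker.setDaemon(true);
        worker.start();
    }

    // Returns immediately; false if the queue is full (caller may drop or retry).
    public boolean publish(String event) {
        return queue.offer(event);
    }

    // Waits until the queue is drained, then stops the worker.
    public List<String> drainAndStop() {
        try {
            while (!queue.isEmpty()) {
                Thread.sleep(10);
            }
            worker.interrupt();
            worker.join();
        } catch (InterruptedException ignored) {
        }
        return delivered;
    }
}
```

The key property this illustrates: `publish` never blocks on the send, which is why an appender built on such a sender has a bounded (but nonzero) performance cost.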

Regards
Buddhika


>
> Paul
>
> On 11 June 2012 11:40, Afkham Azeez  wrote:
>
>> Regardless of the appender, such appenders that publish events could have
>> an effect on performance.
>>
>>
>> On Mon, Jun 11, 2012 at 12:21 PM, Amani  wrote:
>>
>>> Actually we are not using the Cassandra appender. Instead of that we are
>>> using LogEventAppender which sends logs to BAM. Cassandra appender was
>>> something we implemented earlier before we decided to send logs to BAM. We
>>> will test LogEventAppender and send the result once we complete it. Right
>>> now we are having issues publishing log events to BAM.
>>>
>>> Sent from my iPhone
>>>
>>> On Jun 11, 2012, at 12:01 PM, Afkham Azeez  wrote:
>>>
>>> I have a major concern about this appender with regards to performance.
>>> If it leads to major performance issues, we cannot use it. Amani, can you
>>> do some tests which do logging with & without the Cassandra appender & send
>>> the results?
>>>
>>> --
>>> *Afkham Azeez*
>>> Director of Architecture; WSO2, Inc.; http://wso2.com
>>> Member; Apache Software Foundation; 
>>> http://www.apache.org/
>>> * **
>>> email: **az...@wso2.com* * cell: +94 77 3320919
>>> blog: **http://blog.afkham.org* *
>>> twitter: **http://twitter.com/afkham_azeez*
>>> *
>>> linked-in: ** 
>>> http://lk.linkedin.com/in/afkhamazeez*
>>> *
>>> *
>>> *Lean . Enterprise . Middleware*
>>>
>>>
>>
>>
>> --
>> *Afkham Azeez*
>> Director of Architecture; WSO2, Inc.; http://wso2.com
>> Member; Apache Software Foundation; http://www.apache.org/
>> * **
>> email: **az...@wso2.com* * cell: +94 77 3320919
>> blog: **http://blog.afkham.org* *
>> twitter: **http://twitter.com/afkham_azeez*
>> *
>> linked-in: **http://lk.linkedin.com/in/afkhamazeez*
>> *
>> *
>> *Lean . Enterprise . Middleware*
>>
>>
>> ___
>> Dev mailing list
>> Dev@wso2.org
>> http://wso2.org/cgi-bin/mailman/listinfo/dev
>>
>>
>
>
> --
> Paul Fremantle
> CTO and Co-Founder, WSO2
> OASIS WS-RX TC Co-chair, VP, Apache Synapse
>
> UK: +44 207 096 0336
> US: +1 646 595 7614
>
> blog: http://pzf.fremantle.org
> twitter.com/pzfreo
> p...@wso2.com
>
> wso2.com Lean Enterprise Middleware
>
> Disclaimer: This communication may contain privileged or other
> confidential information and is intended exclusively for the addressee/s.
> If you are not the intended recipient/s, or believe that you may have
> received this communication in error, please reply to the sender indicating
> that fact and delete the copy you received and in addition, you should not
> print, copy, retransmit, disseminate, or otherwise use the information
> contained in this communication. Internet communications cannot be
> guaranteed to be timely, secure, error or virus-free. The sender does not
> accept liability for any errors or omissions.
>
>
> ___
> Dev mailing list
> Dev@wso2.org
> http://wso2.org/cgi-bin/mailman/listinfo/dev
>
>


Re: [Dev] Stratos Usage and Summarizer to be migrated to use BAM2.

2012-06-13 Thread Buddhika Chamith
A GroupBy analyzer followed by an aggregate analyzer can be used for this
requirement. In the group by, use time-based grouping with the required
granularity ("DAY", "MONTH", "YEAR", etc.).

See [1]. These kinds of aggregations will become much easier with Hive-based
analytics in the future.

Regards
Buddhika

[1] http://wso2.org/project/bam/2.0.0-alpha2/docs/analyzers.html#GroupBy
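Conceptually, time-based grouping at DAY/MONTH/YEAR granularity maps each record's timestamp to a bucket key before aggregating. A rough illustration in plain Java (not the analyzer framework API; UTC is assumed for bucketing):

```java
import java.time.Instant;
import java.time.LocalDate;
import java.time.ZoneOffset;
import java.util.Map;
import java.util.TreeMap;

public class TimeGroupBy {
    public enum Granularity { DAY, MONTH, YEAR }

    // Maps an epoch-millis timestamp to its bucket key, e.g. "1970-01-02".
    static String bucket(long epochMillis, Granularity g) {
        LocalDate d = Instant.ofEpochMilli(epochMillis).atZone(ZoneOffset.UTC).toLocalDate();
        switch (g) {
            case DAY:   return String.format("%04d-%02d-%02d", d.getYear(), d.getMonthValue(), d.getDayOfMonth());
            case MONTH: return String.format("%04d-%02d", d.getYear(), d.getMonthValue());
            default:    return String.format("%04d", d.getYear());
        }
    }

    // Group records into time buckets, then sum per bucket (the "aggregate" step).
    static Map<String, Long> sumByBucket(long[] timestamps, long[] values, Granularity g) {
        Map<String, Long> sums = new TreeMap<>();
        for (int i = 0; i < timestamps.length; i++) {
            sums.merge(bucket(timestamps[i], g), values[i], Long::sum);
        }
        return sums;
    }
}
```

Hive later expresses the same idea as a GROUP BY over a derived date column.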

On Wed, Jun 13, 2012 at 1:34 PM, Kathiravelu Pradeeban
wrote:

>
>
> On Tue, Jun 5, 2012 at 2:06 PM, Sanjeewa Malalgoda wrote:
>
>>
>>
>> On Tue, Jun 5, 2012 at 1:54 PM, Kasun Weranga  wrote:
>>
>>>
>>>
>>> On Tue, Jun 5, 2012 at 1:06 PM, Sanjeewa Malalgoda wrote:
>>>
 +1 for doing this change. Actually we have to change few places.

 01. Change *Usage agent's* publisherUtils publish method with new
 publish method pointing to bam2.
 02. Modify *Summary generation* code to summarize hourly, daily,
 monthly

>>>
>>> Are you going to modify the existing summary generation code? I think
>>> better way is to use analyzer framework provided by BAM2 for doing the
>>> summarization.
>>>
>>
> Analyzer Framework comes with multiple analyzers. Is it the Aggregate
> Analyzer that we should use for our summary generation? A pointer to the
> relevant classes in bam2.analyzer would be appreciated.
>
> Regards,
> Pradeeban.
>
>
>> +1 for using the analyzer framework. In the earlier case too, we used extended BAM
>> core summary generator code.
>>
>>>   03. Change* Usage service* data retrieving code to get usage data
 from cassandra.

>>>
>>> Since the earlier implementation read data from an RDBMS, you might use the
>>> same implementation (with minimal change): use Hive queries to read
>>> data from Cassandra, then do the summarization and put the summarized data
>>> into the RDBMS as earlier.
>>>
>>> We access them by calling Data service (meteringquery.dbs). So is it
>> possible to use data services with BAM2?
>> I guess we can't. In that case we might have to write usage service code.
>> Also please note that, except for the initial publishing process, we use data
>> services for almost all the database operations in usage.
>>
>>> Thanks,
>>> KasunW.
>>>
>>>   (Usage ui and throttling manager will use this service)

 Thanks.

 On Tue, Jun 5, 2012 at 12:09 PM, Kathiravelu Pradeeban <
 pradee...@wso2.com> wrote:

>  Hi,
> Currently Stratos usage and summarizer components are using BAM
> components. This is to be ported to use the new BAM2 in trunk.
> We have started working on this, with PublisherUtils of the usage
> bundle, to begin with publishing.
>
> Thank you.
> Regards,
> Pradeeban.
>
> --
> Kathiravelu Pradeeban.
> Cloud Technologies Team.
> WSO2 Inc.
>
> Blog: [Llovizna] http://kkpradeeban.blogspot.com/
> M: +94 776 477 976
>
>
> ___
> Dev mailing list
> Dev@wso2.org
> http://wso2.org/cgi-bin/mailman/listinfo/dev
>
>


 --
 *Sanjeewa Malalgoda*
 mobile : +94 713068779
  blog
 :http://sanjeewamalalgoda.blogspot.com/

 ___
 Dev mailing list
 Dev@wso2.org
 http://wso2.org/cgi-bin/mailman/listinfo/dev


>>>
>>>
>>>
>>
>>
>> --
>> *Sanjeewa Malalgoda*
>> mobile : +94 713068779
>>  blog
>> :http://sanjeewamalalgoda.blogspot.com/
>>
>
>
>
> --
> Kathiravelu Pradeeban.
> Cloud Technologies Team.
> WSO2 Inc.
>
> Blog: [Llovizna] http://kkpradeeban.blogspot.com/
> M: +94 776 477 976
>
>
> ___
> Dev mailing list
> Dev@wso2.org
> http://wso2.org/cgi-bin/mailman/listinfo/dev
>
>


[Dev] Behavior of Entitlement mediator when authorization fails

2012-06-16 Thread Buddhika Chamith
Hi,

 I didn't see any log or exception at the ESB when I tried the XACML sample
with a failed authorization. I was under the impression that the flow would go
through the fault sequence once authorization failed. May I know the
intended behavior once this happens, especially as seen by the client? (I am
simply getting an org.apache.axis2.AxisFault: The input stream for an
incoming message is null at my sample client. Shouldn't the error be more
specific for a failed authorization?)

Regards
Buddhika


Re: [Dev] Behavior of Entitlement mediator when authorization fails

2012-06-16 Thread Buddhika Chamith
Hi Suresh,

Well, it's the entitlement mediator I tried out. I think you have tried out
the OAuth mediator. Anyway, I am getting the following log at IS.

[2012-06-17 08:31:45,595]  INFO
{org.wso2.carbon.identity.entitlement.policy.PolicyCollection} -  Matching
XACML policy found urn:sample:xacml:2.0:samplepolicy
[2012-06-17 08:31:45,599]  INFO
{org.wso2.carbon.identity.entitlement.pip.CarbonAttributeFinder} -  No
attribute designators defined for the attribute group

Did the flow go through the fault sequence when the OAuth authorization
failed?

Thanks and Regards
Buddhika

On Sun, Jun 17, 2012 at 4:11 AM, Suresh Attanayaka  wrote:

> Hi Chamith,
>
> I do get an error log for failed authorizations at the ESB console. Given
> bellow is the exception I could generate.
>
> [2012-06-17 02:25:53,044] ERROR - OAuthMediator Error occured while
> validating oauth consumer
> org.apache.synapse.SynapseException: OAuth authentication failed
> at
> org.wso2.carbon.identity.oauth.mediator.OAuthMediator.mediate(OAuthMediator.java:120)
>  at
> org.apache.synapse.mediators.AbstractListMediator.mediate(AbstractListMediator.java:60)
> at
> org.apache.synapse.mediators.base.SequenceMediator.mediate(SequenceMediator.java:114)
>  at
> org.apache.synapse.core.axis2.ProxyServiceMessageReceiver.receive(ProxyServiceMessageReceiver.java:154)
> at org.apache.axis2.engine.AxisEngine.receive(AxisEngine.java:181)
>  at
> org.apache.axis2.transport.http.util.RESTUtil.invokeAxisEngine(RESTUtil.java:144)
> at
> org.apache.axis2.transport.http.util.RESTUtil.processURLRequest(RESTUtil.java:139)
>  at
> org.apache.synapse.transport.nhttp.util.RESTUtil.processGetAndDeleteRequest(RESTUtil.java:139)
> at
> org.apache.synapse.transport.nhttp.DefaultHttpGetProcessor.processGetAndDelete(DefaultHttpGetProcessor.java:464)
>  at
> org.wso2.carbon.transport.nhttp.api.NHttpGetProcessor.process(NHttpGetProcessor.java:296)
> at
> org.apache.synapse.transport.nhttp.ServerWorker.run(ServerWorker.java:258)
>  at
> org.apache.axis2.transport.base.threads.NativeWorkerPool$1.run(NativeWorkerPool.java:173)
> at
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>  at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
> at java.lang.Thread.run(Thread.java:662)
>
> Do you get any error logs in the IS console ? what is the scenario you
> tried ?
>
> Thanks,
> Suresh
>
> On Sun, Jun 17, 2012 at 1:35 AM, Buddhika Chamith wrote:
>
>> Hi,
>>
>>  I didn't see any log or exception at ESB when I tried the xacml sample
>> with a failed authorization. I was under the impression the flow would go
>> through the fault sequence once authorization failed. May I know the
>> intended behavior once this happens specially as seen by the client? (I am
>> simply getting an org.apache.axis2.AxisFault: The input stream for an
>> incoming message is null at my sample client. Shouldn't the error be more
>> specific for a failed authorization?).
>>
>> Regards
>> Buddhika
>>
>> ___
>> Dev mailing list
>> Dev@wso2.org
>> http://wso2.org/cgi-bin/mailman/listinfo/dev
>>
>>
>
>
> --
> Suresh Attanayake
> Software Engineer; WSO2 Inc. http://wso2.com/
> Blog : http://sureshatt.blogspot.com/
> Twitter : https://twitter.com/sureshatt
> LinkedIn : http://lk.linkedin.com/in/sureshatt
> Mobile : +94755012060,+94770419136,+94710467976
>
>


Re: [Dev] BAM201 exception when executing the default hive script, service_stats_306

2012-11-25 Thread Buddhika Chamith
Are you running BAM with a port offset? If so, to connect to the embedded
Cassandra from within the script, you will need to change the default port
specified in the Hive CREATE TABLE statement for the Cassandra column family.

Regards
Buddhika

On Mon, Nov 26, 2012 at 2:15 AM, Kasun Gajasinghe  wrote:

> Hi,
>
> I'm trying to configure BAM to work with ESB. I was able to successfully
> connect ESB 403 to work with BAM201. Cassandra explorer worked fine too.
> Then, I added 'Service Stats Monitoring Toolbox' via ui. And then, I went
> to *“Main” tab --> “Manage” menu --> “Analytics” --> “List”, and executed
> the default hive script '*service_stats_306*'. But when I did this, I got
> the following exception. I didn't configure any db/hadoop/hive. Does it
> need any? *
>
> *ERROR: Error while executing Hive script.Query returned non-zero code:
> 9, cause: FAILED: Execution Error, return code 1 from
> org.apache.hadoop.hive.ql.exec.DDLTask*
>
> osgi> [2012-11-25 12:27:00,022]  INFO
> {org.wso2.carbon.analytics.hive.task.HiveScriptExecutorTask} -  Running
> script executor task for script service_stats_306. [Sun Nov 25 12:27:00 PST
> 2012]
> Hive history file=/tmp/kasun/hive_job_log_kasun_201211251227_2115407573.txt
> FAILED: Error in metadata: MetaException(message:Unable to connect to the
> server org.apache.hadoop.hive.cassandra.CassandraException: unable to
> connect to server)
> [2012-11-25 12:27:00,166] ERROR {org.apache.hadoop.hive.ql.exec.Task} -
>  FAILED: Error in metadata: MetaException(message:Unable to connect to the
> server org.apache.hadoop.hive.cassandra.CassandraException: unable to
> connect to server)
> org.apache.hadoop.hive.ql.metadata.HiveException:
> MetaException(message:Unable to connect to the server
> org.apache.hadoop.hive.cassandra.CassandraException: unable to connect to
> server)
> at org.apache.hadoop.hive.ql.metadata.Hive.createTable(Hive.java:546)
>  at org.apache.hadoop.hive.ql.exec.DDLTask.createTable(DDLTask.java:3479)
> at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:225)
>  at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:133)
> at
> org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:57)
>  at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1334)
> at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1125)
>  at org.apache.hadoop.hive.ql.Driver.run(Driver.java:933)
> at
> org.apache.hadoop.hive.service.HiveServer$HiveServerHandler.execute(HiveServer.java:201)
>  at
> org.apache.hadoop.hive.jdbc.HiveStatement.executeQuery(HiveStatement.java:187)
> at
> org.wso2.carbon.analytics.hive.impl.HiveExecutorServiceImpl$ScriptCallable.call(HiveExecutorServiceImpl.java:325)
>  at
> org.wso2.carbon.analytics.hive.impl.HiveExecutorServiceImpl$ScriptCallable.call(HiveExecutorServiceImpl.java:225)
> at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
>  at java.util.concurrent.FutureTask.run(FutureTask.java:138)
> at
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>  at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
> at java.lang.Thread.run(Thread.java:662)
> Caused by: MetaException(message:Unable to connect to the server
> org.apache.hadoop.hive.cassandra.CassandraException: unable to connect to
> server)
> at
> org.apache.hadoop.hive.cassandra.CassandraManager.openConnection(CassandraManager.java:118)
>  at
> org.apache.hadoop.hive.cassandra.CassandraStorageHandler.preCreateTable(CassandraStorageHandler.java:168)
> at
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.createTable(HiveMetaStoreClient.java:397)
>  at org.apache.hadoop.hive.ql.metadata.Hive.createTable(Hive.java:540)
> ... 16 more
>
> FAILED: Execution Error, return code 1 from
> org.apache.hadoop.hive.ql.exec.DDLTask
> [2012-11-25 12:27:00,167] ERROR {org.apache.hadoop.hive.ql.Driver} -
>  FAILED: Execution Error, return code 1 from
> org.apache.hadoop.hive.ql.exec.DDLTask
> [2012-11-25 12:27:00,169] ERROR
> {org.wso2.carbon.analytics.hive.impl.HiveExecutorServiceImpl} -  Error
> while executing Hive script.
> Query returned non-zero code: 9, cause: FAILED: Execution Error, return
> code 1 from org.apache.hadoop.hive.ql.exec.DDLTask
> java.sql.SQLException: Query returned non-zero code: 9, cause: FAILED:
> Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask
> at
> org.apache.hadoop.hive.jdbc.HiveStatement.executeQuery(HiveStatement.java:189)
>  at
> org.wso2.carbon.analytics.hive.impl.HiveExecutorServiceImpl$ScriptCallable.call(HiveExecutorServiceImpl.java:325)
> at
> org.wso2.carbon.analytics.hive.impl.HiveExecutorServiceImpl$ScriptCallable.call(HiveExecutorServiceImpl.java:225)
>  at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
> at java.util.concurrent.FutureTask.run(FutureTask.java:138)
>  at
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
> at
> java.util.concurrent.ThreadPoolExecutor$Worker.

Re: [Dev] platform build error

2012-12-04 Thread Buddhika Chamith
I am not getting this error with a clean repo.

Regards
Buddhika

On Tue, Dec 4, 2012 at 4:47 PM, Ravi Undupitiya  wrote:

> Hi,
>
>
> I got the same error. Here's more output with the -e switch:
>
> [INFO] BUILD FAILURE
> [INFO]
> 
> [INFO] Total time: 56.731s
> [INFO] Finished at: Tue Dec 04 16:40:22 IST 2012
> [INFO] Final Memory: 110M/1322M
> [INFO]
> 
> [ERROR] Failed to execute goal
> org.apache.maven.plugins:maven-antrun-plugin:1.7:run (default) on project
> hive-deploy: An Ant BuildException has occured: The following error
> occurred while executing this line:
> [ERROR]
> /home/ravi/Projects/carbon/platform/branches/4.0.0/dependencies/hive/0.8.1-wso2v5/build.xml:297:
> The following error occurred while executing this line:
> [ERROR]
> /home/ravi/Projects/carbon/platform/branches/4.0.0/dependencies/hive/0.8.1-wso2v5/build.xml:151:
> The following error occurred while executing this line:
> [ERROR]
> /home/ravi/Projects/carbon/platform/branches/4.0.0/dependencies/hive/0.8.1-wso2v5/metastore/build.xml:98:
> Java returned: 1
> [ERROR] around Ant part ... antfile="/home/ravi/Projects/carbon/platform/branches/4.0.0/dependencies/hive/0.8.1-wso2v5/deploy/../build.xml"
> target="package">... @ 4:137 in
> /home/ravi/Projects/carbon/platform/branches/4.0.0/dependencies/hive/0.8.1-wso2v5/deploy/target/antrun/build-main.xml
> [ERROR] -> [Help 1]
> org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute
> goal org.apache.maven.plugins:maven-antrun-plugin:1.7:run (default) on
> project hive-deploy: An Ant BuildException has occured: The following error
> occurred while executing this line:
> /home/ravi/Projects/carbon/platform/branches/4.0.0/dependencies/hive/0.8.1-wso2v5/build.xml:297:
> The following error occurred while executing this line:
> /home/ravi/Projects/carbon/platform/branches/4.0.0/dependencies/hive/0.8.1-wso2v5/build.xml:151:
> The following error occurred while executing this line:
> /home/ravi/Projects/carbon/platform/branches/4.0.0/dependencies/hive/0.8.1-wso2v5/metastore/build.xml:98:
> Java returned: 1
> around Ant part ... antfile="/home/ravi/Projects/carbon/platform/branches/4.0.0/dependencies/hive/0.8.1-wso2v5/deploy/../build.xml"
> target="package">... @ 4:137 in
> /home/ravi/Projects/carbon/platform/branches/4.0.0/dependencies/hive/0.8.1-wso2v5/deploy/target/antrun/build-main.xml
>  at
> org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:217)
> at
> org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153)
>  at
> org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145)
> at
> org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:84)
>  at
> org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:59)
> at
> org.apache.maven.lifecycle.internal.LifecycleStarter.singleThreadedBuild(LifecycleStarter.java:183)
>  at
> org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:161)
> at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:320)
>  at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:156)
> at org.apache.maven.cli.MavenCli.execute(MavenCli.java:537)
>  at org.apache.maven.cli.MavenCli.doMain(MavenCli.java:196)
> at org.apache.maven.cli.MavenCli.main(MavenCli.java:141)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>  at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:601)
>  at
> org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced(Launcher.java:290)
> at
> org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.java:230)
>  at
> org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode(Launcher.java:409)
> at
> org.codehaus.plexus.classworlds.launcher.Launcher.main(Launcher.java:352)
> Caused by: org.apache.maven.plugin.MojoExecutionException: An Ant
> BuildException has occured: The following error occurred while executing
> this line:
> /home/ravi/Projects/carbon/platform/branches/4.0.0/dependencies/hive/0.8.1-wso2v5/build.xml:297:
> The following error occurred while executing this line:
> /home/ravi/Projects/carbon/platform/branches/4.0.0/dependencies/hive/0.8.1-wso2v5/build.xml:151:
> The following error occurred while executing this line:
> /home/ravi/Projects/carbon/platform/branches/4.0.0/dependencies/hive/0.8.1-wso2v5/metastore/build.xml:98:
> Java returned: 1
> around Ant part ... antfile="/home/ravi/Projects/carbon/platform/branches/4.0.0/dependencies/hive/0.8.1-wso2v5/deploy/../build.xml"
> target="package">... @ 4:137 in
> /home/ravi/Projects/carbon/platform/branches/4.0.0/dependencies/hive/0.8.

[Dev] Error Building Cassandra cluster mgt ui

2012-12-05 Thread Buddhika Chamith
I am getting the following during the build.

[INFO] -
[ERROR] COMPILATION ERROR :
[INFO] -
[ERROR]
/opt/build/trunk/carbon/platform/branches/4.0.0/components/cassandra/org.wso2.carbon.cassandra.cluster.mgt.ui/4.0.5/src/main/java/org/wso2/carbon/cassandra/cluster/mgt/ui/operation/ClusterNodeOperationsAdminClient.java:[30,60]
cannot find symbol
symbol  : class ProxyKeyspaceInitialInfo
location: package org.wso2.carbon.cassandra.cluster.proxy.stub.data.xsd
[ERROR]
/opt/build/trunk/carbon/platform/branches/4.0.0/components/cassandra/org.wso2.carbon.cassandra.cluster.mgt.ui/4.0.5/src/main/java/org/wso2/carbon/cassandra/cluster/mgt/ui/operation/ClusterNodeOperationsAdminClient.java:[31,60]
cannot find symbol
symbol  : class ProxyNodeInitialInfo
location: package org.wso2.carbon.cassandra.cluster.proxy.stub.data.xsd
[ERROR]
/opt/build/trunk/carbon/platform/branches/4.0.0/components/cassandra/org.wso2.carbon.cassandra.cluster.mgt.ui/4.0.5/src/main/java/org/wso2/carbon/cassandra/cluster/mgt/ui/operation/ClusterNodeOperationsAdminClient.java:[371,11]
cannot find symbol
symbol  : class ProxyNodeInitialInfo
location: class
org.wso2.carbon.cassandra.cluster.mgt.ui.operation.ClusterNodeOperationsAdminClient
[ERROR]
/opt/build/trunk/carbon/platform/branches/4.0.0/components/cassandra/org.wso2.carbon.cassandra.cluster.mgt.ui/4.0.5/src/main/java/org/wso2/carbon/cassandra/cluster/mgt/ui/operation/ClusterNodeOperationsAdminClient.java:[380,11]
cannot find symbol
symbol  : class ProxyKeyspaceInitialInfo
location: class
org.wso2.carbon.cassandra.cluster.mgt.ui.operation.ClusterNodeOperationsAdminClient
[ERROR]
/opt/build/trunk/carbon/platform/branches/4.0.0/components/cassandra/org.wso2.carbon.cassandra.cluster.mgt.ui/4.0.5/src/main/java/org/wso2/carbon/cassandra/cluster/mgt/ui/operation/ClusterNodeOperationsAdminClient.java:[374,49]
cannot find symbol
symbol  : method getNodeInitialInfo(java.lang.String)
location: class
org.wso2.carbon.cassandra.cluster.proxy.stub.operation.ClusterOperationProxyAdminStub
[ERROR]
/opt/build/trunk/carbon/platform/branches/4.0.0/components/cassandra/org.wso2.carbon.cassandra.cluster.mgt.ui/4.0.5/src/main/java/org/wso2/carbon/cassandra/cluster/mgt/ui/operation/ClusterNodeOperationsAdminClient.java:[383,49]
cannot find symbol
symbol  : method getKeyspaceInitialInfo(java.lang.String)
location: class
org.wso2.carbon.cassandra.cluster.proxy.stub.operation.ClusterOperationProxyAdminStub
[INFO] 6 errors
[INFO] -
[INFO]

[INFO] BUILD FAILURE
[INFO]

[INFO] Total time: 4.791s
[INFO] Finished at: Thu Dec 06 07:28:22 IST 2012
[INFO] Final Memory: 27M/130M
[INFO]

[ERROR] Failed to execute goal
org.apache.maven.plugins:maven-compiler-plugin:2.3.2:compile
(default-compile) on project org.wso2.carbon.cassandra.cluster.mgt.ui:
Compilation failure: Compilation failure:
[ERROR]
/opt/build/trunk/carbon/platform/branches/4.0.0/components/cassandra/org.wso2.carbon.cassandra.cluster.mgt.ui/4.0.5/src/main/java/org/wso2/carbon/cassandra/cluster/mgt/ui/operation/ClusterNodeOperationsAdminClient.java:[30,60]
cannot find symbol
[ERROR] symbol  : class ProxyKeyspaceInitialInfo
[ERROR] location: package
org.wso2.carbon.cassandra.cluster.proxy.stub.data.xsd
[ERROR]
/opt/build/trunk/carbon/platform/branches/4.0.0/components/cassandra/org.wso2.carbon.cassandra.cluster.mgt.ui/4.0.5/src/main/java/org/wso2/carbon/cassandra/cluster/mgt/ui/operation/ClusterNodeOperationsAdminClient.java:[31,60]
cannot find symbol
[ERROR] symbol  : class ProxyNodeInitialInfo
[ERROR] location: package
org.wso2.carbon.cassandra.cluster.proxy.stub.data.xsd
[ERROR]
/opt/build/trunk/carbon/platform/branches/4.0.0/components/cassandra/org.wso2.carbon.cassandra.cluster.mgt.ui/4.0.5/src/main/java/org/wso2/carbon/cassandra/cluster/mgt/ui/operation/ClusterNodeOperationsAdminClient.java:[371,11]
cannot find symbol
[ERROR] symbol  : class ProxyNodeInitialInfo
[ERROR] location: class
org.wso2.carbon.cassandra.cluster.mgt.ui.operation.ClusterNodeOperationsAdminClient
[ERROR]
/opt/build/trunk/carbon/platform/branches/4.0.0/components/cassandra/org.wso2.carbon.cassandra.cluster.mgt.ui/4.0.5/src/main/java/org/wso2/carbon/cassandra/cluster/mgt/ui/operation/ClusterNodeOperationsAdminClient.java:[380,11]
cannot find symbol
[ERROR] symbol  : class ProxyKeyspaceInitialInfo
[ERROR] location: class
org.wso2.carbon.cassandra.cluster.mgt.ui.operation.ClusterNodeOperationsAdminClient
[ERROR]
/opt/build/trunk/carbon/platform/branches/4.0.0/components/cassandra/org.wso2.carbon.cassandra.cluster.mgt.ui/4.0.5/src/main/java/org/wso2/carbon/cassandra/cluster/mgt/ui/operation/ClusterNodeOperation

Re: [Dev] Error Building Cassandra cluster mgt ui

2012-12-05 Thread Buddhika Chamith
Please ignore. This was a case of the 4.0.2 stub not being built correctly in
my build. It is working fine now.

Regards
Buddhika


Re: [Dev] Issue in ./wso2server.sh restart

2012-12-06 Thread Buddhika Chamith
I will add this. In fact, another separate fix was committed to the Windows
script today.

Regards
Buddhika

On Thu, Dec 6, 2012 at 7:41 PM, Tharindu Mathew  wrote:

> BAM folks,
>
> In case you forgot we have one of these...
>
>
> On Thu, Dec 6, 2012 at 1:48 PM, Sanjaya Ratnaweera wrote:
>
>> Hi all,
>> When we restart a server using "./wso2ver.sh restart" It first sends
>> the signal to kill the process looking at wso2carbon.pid and starts the
>> server. It kills the server using "kill -term" and sometimes it takes some
>> time to kill a process. If the server starts before earlier process gets
>> killed it gives startup exceptions. So I have changed wso2server.sh to wait
>> until the process gets killed before a restart. I have updated the fix in
>> the core[1], please update if anyone keeping product specific wso2server.sh
>> files.
>>
>> Thanks
>>
>> ~sanjaya
>>
>> [1]
>> https://svn.wso2.org/repos/wso2/carbon/kernel/branches/4.0.0/distribution/kernel/4.0.5/carbon-home/bin/wso2server.sh
>>
>> --
>> Sanjaya Ratnaweera
>> Senior Software Engineer; WSO2 Inc; http://www.wso2.com/.
>>
>> blog: http://www.samudura.org
>> homepage: http://www.samudura.net
>> twitter: http://twitter.com/sanjayar
>>
>> Lean . Enterprise . Middleware
>>
>
>
>
> --
> Regards,
>
> Tharindu
>
> blog: http://mackiemathew.com/
> M: +9459908
>
>
> ___
> Dev mailing list
> Dev@wso2.org
> http://wso2.org/cgi-bin/mailman/listinfo/dev
>
>
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev
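The restart race Sanjaya describes (the new server starting before the old process has died) boils down to a poll-until-exit loop. The Java below is only an illustration of that idea — the actual fix lives in wso2server.sh, and the class and method names here are hypothetical:

```java
import java.time.Duration;

// Illustration only: the real fix is in wso2server.sh; names here are hypothetical.
public class WaitForShutdown {

    // Poll until the process with the given pid has exited, or the timeout elapses.
    // Returns true when the process is gone and it is safe to start the new server.
    static boolean waitForExit(long pid, Duration timeout) {
        long deadline = System.nanoTime() + timeout.toNanos();
        while (System.nanoTime() < deadline) {
            if (ProcessHandle.of(pid).isEmpty()) {
                return true; // old server process is gone
            }
            try {
                Thread.sleep(100); // back off briefly between checks
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return false;
            }
        }
        return false; // still alive; starting now would hit the startup exceptions above
    }
}
```

Calling something like waitForExit with the pid read from wso2carbon.pid before launching the new JVM is the same guard the updated script performs in shell.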


Re: [Dev] Can't install BAM Data Agents on top of AS 5.0.2

2012-12-15 Thread Buddhika Chamith
There is not much point in installing the BAM mediator feature on top of AS,
since it is specific to ESB. Having said that, the issue is something that
needs to be fixed if possible.

Regards
Buddhika

On Sat, Dec 15, 2012 at 1:56 PM, Dileepa Jayakody  wrote:

> Hi All,
>
> Getting below error when trying to install BAM Data Agents category from
> latest 4.0.5 p2-repo;
>
> Your original install request has been modified.
> org.wso2.carbon.databridge.datapublisher.feature.group-4.0.2 is already
> installed, so an update will be performed instead.
>
> Cannot complete the install because of a conflicting dependency. Software
> being installed: WSO2 Carbon - *BAM Mediator Aggregate Feature 4.0.5
> (org.wso2.carbon.mediator.bam.feature.group 4.0.5)* Software currently
> installed: WSO2 Carbon - Event Server Feature 4.0.5
> (org.wso2.carbon.event.server.feature.group 4.0.5) Only one of the
> following can be installed at once: WSO2 Carbon - Event Server Feature
> 4.0.2 (org.wso2.carbon.event.server.feature.jar 4.0.2) WSO2 Carbon - Event
> Server Feature 4.0.5 (org.wso2.carbon.event.server.feature.jar 4.0.5)
> Cannot satisfy dependency: From: WSO2 Carbon - Event Server Feature 4.0.2
> (org.wso2.carbon.event.server.feature.group 4.0.2) To:
> org.wso2.carbon.event.server.feature.jar [4.0.2] Cannot satisfy dependency:
> From: WSO2 Carbon - Event Server Feature 4.0.5
> (org.wso2.carbon.event.server.feature.group 4.0.5) To:
> org.wso2.carbon.event.server.feature.jar [4.0.5] Cannot satisfy dependency:
> From: WSO2 Carbon - Mediation Initializer Server Feature 4.0.2
> (org.wso2.carbon.mediation.initializer.server.feature.group 4.0.2) To:
> org.wso2.carbon.event.server.feature.group [4.0.2] Cannot satisfy
> dependency: From: WSO2 Carbon - BAM Mediator Aggregate Feature 4.0.5
> (org.wso2.carbon.mediator.bam.feature.group 4.0.5) To:
> org.wso2.carbon.mediator.bam.server.feature.group [4.0.5] Cannot satisfy
> dependency: From: WSO2 Carbon - BAM Mediator Feature 4.0.5
> (org.wso2.carbon.mediator.bam.server.feature.group 4.0.5) To:
> org.wso2.carbon.mediation.initializer.server.feature.group [4.0.0,4.1.0)
>
> In summary;
>
> The root cause here again is features including external features as a
> nested part of them, instead of properly *importing external features*.
> In org.wso2.carbon.mediator.bam.server.feature 4.0.5 it has a dependency
> to org.wso2.carbon.mediation.initializer.server.feature:4.0.2.
>
> In org.wso2.carbon.mediation.initializer.server.feature:4.0.2, the
> event.server.feature:4.0.2 is included instead of being imported. Hence the
> above conflict in installation, as AS 5.0.2 has event.server.feature:4.0.5
> already installed and doesn't allow its lower version (4.0.2) to get
> installed as part of
> org.wso2.carbon.mediation.initializer.server.feature:4.0.2
>
> If we are to fix this, we need to update mediation.initializer.server.feature
> to properly import the server.feature. How do we proceed with this?
>
> Thanks,
> Dileepa
>
> --
> Dileepa Jayakody,
> Software Engineer, WSO2 Inc.
> Lean . Enterprise . Middleware
>
> Mobile : +94777-857616
>
>
> ___
> Dev mailing list
> Dev@wso2.org
> http://wso2.org/cgi-bin/mailman/listinfo/dev
>
>
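As background for the include-vs-import distinction above: the carbon-p2-plugin lets a feature either nest an external feature inside itself or import it, so that p2 resolves whichever compatible version is already installed. The fragment below is illustrative only — the element names follow common carbon feature pom conventions and should be checked against the actual plugin documentation, and the artifact versions are just examples:

```xml
<!-- Illustrative carbon-p2-plugin configuration; verify element names against the plugin. -->
<configuration>
  <!-- Included: the external feature is copied inside this feature
       (this nesting is what causes the version conflict described above). -->
  <includedFeatures>
    <includedFeatureDef>org.wso2.carbon:org.wso2.carbon.mediation.initializer.server.feature:4.0.2</includedFeatureDef>
  </includedFeatures>
  <!-- Imported: p2 resolves any installed version satisfying the dependency,
       so an already-installed 4.0.5 does not clash with a 4.0.2 requirement. -->
  <importFeatures>
    <importFeatureDef>org.wso2.carbon:org.wso2.carbon.event.server.feature:4.0.2</importFeatureDef>
  </importFeatures>
</configuration>
```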


Re: [Dev] An exception when publishing API manager events to BAM

2013-01-21 Thread Buddhika Chamith
This usually happens when an event that does not conform to the registered
stream definition is sent. Maybe there has been some change to the stream
definition in the toolbox, or on the agent side, in the versions being used?

Regards
Buddhika

On Tue, Jan 22, 2013 at 12:07 PM, Lalaji Sureshika  wrote:

> extract data from the incoming request to the gateway, without depending on
> the security scheme attached to the particular API resource verb.
> I tried the same scenario keeping the security level as none for a
> particular API resource and without subscribing it to any app. I was able
> to view stats from the publisher side. [AM 1.3.0 and BAM 2.0.1]
>
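The failure mode described above — an event not conforming to the registered stream definition — amounts to an arity and type check between the event payload and the definition. A library-free sketch of that check (this is not the actual databridge API, just an illustration of why a toolbox/agent version mismatch breaks publishing):

```java
import java.util.List;

public class StreamValidator {

    // Returns true when the payload matches the registered attribute list:
    // same number of attributes, and each value an instance of the declared type.
    static boolean conforms(List<? extends Class<?>> declaredTypes, List<?> payload) {
        if (declaredTypes.size() != payload.size()) {
            return false; // arity changed, e.g. an attribute added in a newer toolbox
        }
        for (int i = 0; i < declaredTypes.size(); i++) {
            if (!declaredTypes.get(i).isInstance(payload.get(i))) {
                return false; // attribute type changed between agent and server versions
            }
        }
        return true;
    }
}
```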


Re: [Dev] Build Error - 4.0.7 patch release

2013-01-27 Thread Buddhika Chamith
Looking into this.

Regards
Buddhika

On Mon, Jan 28, 2013 at 12:15 PM, Hasini Gunasinghe  wrote:

> I got a test failure at following component.
>
> [ERROR] Failed to execute goal
> org.apache.maven.plugins:maven-surefire-plugin:2.12:test (default-test) on
> project org.wso2.carbon.databridge.persistence.cassandra: There are test
> failures.
> [ERROR]
> [ERROR] Please refer to
> /home/hasini/WSO2/CARBON_TRUNK/carbon/platform/branches/4.0.0/components/data-bridge/org.wso2.carbon.databridge.persistence.cassandra/4.0.7/target/surefire-reports
> for the individual test results.
> [ERROR] -> [Help 1]
> [ERROR]
> [ERROR] To see the full stack trace of the errors, re-run Maven with the
> -e switch.
> [ERROR] Re-run Maven using the -X switch to enable full debug logging.
> [ERROR]
> [ERROR] For more information about the errors and possible solutions,
> please read the following articles:
> [ERROR] [Help 1]
> http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
> [ERROR]
> [ERROR] After correcting the problems, you can resume the build with the
> command
>
> Thanks,
> Hasini.
>
>
> ___
> Dev mailing list
> Dev@wso2.org
> http://wso2.org/cgi-bin/mailman/listinfo/dev
>
>


[Dev] Ehcache warning at Startup

2013-01-30 Thread Buddhika Chamith
Getting the following during BAM 2.1.1 startup. Any idea how to resolve this?

WARN {net.sf.ehcache.config.ConfigurationFactory} - No configuration found.
Configuring ehcache from ehcache-failsafe.xml found in the classpath:
bundleresource://44.fwk426901684/ehcache-failsafe.xml

Buddhika
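For what it's worth, that warning only means Ehcache found no ehcache.xml on the classpath and fell back to its bundled ehcache-failsafe.xml defaults. Placing a minimal ehcache.xml on the classpath silences it; the values below are illustrative Ehcache 2.x defaults, not tuned BAM settings:

```xml
<ehcache>
  <!-- Fallback settings applied to caches created without an explicit configuration. -->
  <defaultCache
      maxElementsInMemory="10000"
      eternal="false"
      timeToIdleSeconds="120"
      timeToLiveSeconds="120"
      overflowToDisk="false"/>
</ehcache>
```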


[Dev] Build Failure Synapse

2013-02-03 Thread Buddhika Chamith
Getting the following while building Synapse in the 4.0.7 patch release.

[ERROR] Failed to execute goal on project Apache-Synapse: Could not resolve
dependencies for project
org.apache.synapse:Apache-Synapse:pom:2.1.1-wso2v3: Failed to collect
dependencies for [jline:jline:jar:0.9.94 (provided),
org.apache.axis2:axis2-transport-local:jar:1.6.1-wso2v8 (test),
org.apache.axis2:axis2-kernel:jar:1.6.1-wso2v8 (compile),
org.apache.axis2:axis2-adb:jar:1.6.1-wso2v8 (compile),
org.apache.axis2:axis2-clustering:jar:1.6.1-wso2v8 (compile),
org.apache.axis2:axis2-mtompolicy:jar:1.6.1-wso2v8 (compile),
org.apache.neethi:neethi:jar:2.0.4-wso2v2 (compile),
org.apache.sandesha2:sandesha2-core:jar:1.6.1-wso2v1 (compile),
commons-logging:commons-logging:jar:1.1.1 (compile),
org.slf4j:slf4j-log4j12:jar:1.5.6 (compile), junit:junit:jar:3.8.2 (test),
xmlunit:xmlunit:jar:1.1 (test), org.apache.derby:derby:jar:10.4.2.0 (test),
xalan:xalan:jar:2.7.1 (compile), xerces:xercesImpl:jar:2.8.1 (compile),
rhino:js:jar:1.6R5 (compile), commons-dbcp:commons-dbcp:jar:1.2.2
(compile), commons-pool:commons-pool:jar:1.3 (compile),
org.apache.bcel:bcel:jar:5.2 (compile),
jakarta-regexp:jakarta-regexp:jar:1.4 (compile), java-cup:java-cup:jar:0.0
(compile), JLex:JLex:jar:0.0 (compile),
commons-collections:commons-collections:jar:3.2.1 (compile),
org.quartz-scheduler:quartz:jar:2.1.1 (compile), net.sf.saxon:saxon:jar:8.9
(compile), net.sf.saxon:saxon-dom:jar:8.9 (compile),
net.sf.saxon:saxon-xqj:jar:8.9 (compile), commons-cli:commons-cli:jar:1.0
(compile), commons-lang:commons-lang:jar:2.4 (test),
org.wso2.uri.template:wso2-uri-templates:jar:1.6.2 (compile),
org.wso2.caching:wso2caching-core:jar:4.0.2 (compile),
org.wso2.eventing:wso2eventing-api:jar:2.1 (compile)]: Failed to read
artifact descriptor for net.sf.saxon:saxon:jar:8.9: Could not transfer
artifact net.sf.saxon:saxon:pom:8.9 from/to wso2-nexus (
http://maven.wso2.org/nexus/content/groups/wso2-public/): Error
transferring file: Connection timed out -> [Help 1]

Apparently the saxon pom is not there in Nexus even though the dependency
jar is present.

Regards
Buddhika


Re: [Dev] Adding registry based stream definition store and making the stream id as :

2013-02-04 Thread Buddhika Chamith
I have done the required changes and will commit after doing some testing.
(Unit tests are passing; there is some issue with the integration tests.)

Regards
Buddhika

On Sun, Feb 3, 2013 at 3:26 PM, Sriskandarajah Suhothayan wrote:

>
> The decision on the $Subject was made at the offline meeting with Sanjiva,
> Srinath, Buddhika, Maninda and me
>
> I have done the above changes at r160715.
> This also contains restricting stream definitions when there are attribute
> type changes.
>
>  As a follow-up, the Cassandra-based data definition store needs to be
> properly updated to work with the Registry-based stream definition store.
> I have added todos for the appropriate methods in
> /org/wso2/carbon/databridge/persistence/cassandra/datastore/CassandraConnector.java.
>
> as;
>
> public void definedStream(Cluster cluster, StreamDefinition
> streamDefinition) {
>  //todo creating CF logic need to come here, if CF exist need to
> add appropriate comparators for new Attributes
> }
>
> public void removeStream(Cluster cluster, StreamDefinition
> streamDefinition) {
> //todo removing data from CF logic need to come here & if no data
> is there CF need to be removed
> }
>
>
> --
> *S. Suhothayan
> *
> Software Engineer,
> Data Technologies Team,
>  *WSO2, Inc. **http://wso2.com
>  *
> *lean.enterprise.middleware.*
>
> *email: **s...@wso2.com* * cell: (+94) 779 756 757
> blog: **http://suhothayan.blogspot.com/* 
> *
> twitter: **http://twitter.com/suhothayan* *
> linked-in: **http://lk.linkedin.com/in/suhothayan*
> *
> *
>


Re: [Dev] Adding registry based stream definition store and making the stream id as :

2013-02-04 Thread Buddhika Chamith
On Tue, Feb 5, 2013 at 1:11 AM, Sriskandarajah Suhothayan wrote:

>
>
> On Tue, Feb 5, 2013 at 1:09 AM, Sriskandarajah Suhothayan 
> wrote:
>
>>
>>
>> On Mon, Feb 4, 2013 at 7:42 PM, Buddhika Chamith wrote:
>>
>>> I have done the required changes. Will commit after doing some testing.
>>> (Unit tests are passing. There is some issue with integration tests).
>>>
>>> Great
>>
>> Suho
>>
>>
>>> Regards
>>> Buddhika
>>>
>>>
>>> On Sun, Feb 3, 2013 at 3:26 PM, Sriskandarajah Suhothayan >> > wrote:
>>>
>>>>
>>>> The decision on the $Subject was made at the offline meeting with
>>>> Sanjiva, Srinath, Buddhika, Maninda and me
>>>>
>>>> I have done the above changes at r160715.
>>>> This also contains restricting stream definitions when there are
>>>> attribute type changes.
>>>>
>>>>  As a followup the Cassandra based data definition store need to be
>>>> properly updated to work with Registry based stream definition store.
>>>> I have added todo for the appropriate methods in the
>>>> /org/wso2/carbon/databridge/persistence/cassandra/datastore/CassandraConnector.java.
>>>>
>>>> as;
>>>>
>>>> public void definedStream(Cluster cluster, StreamDefinition
>>>> streamDefinition) {
>>>>  //todo creating CF logic need to come here, if CF exist need
>>>> to add appropriate comparators for new Attributes
>>>>
>>> Note: This method will be called for all the available streams every
> time when server starts
> hence always do a check before creating the CFs
>

Noted.


>
> Suho
>
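Suho's note above — definedStream runs for every available stream on each server start, so column family creation must check first — is essentially a check-before-create idempotency pattern. A minimal, library-free sketch of it; the class below merely stands in for the real CassandraConnector and the cluster's keyspace metadata, and its names are hypothetical:

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class ColumnFamilyRegistry {

    // Stands in for the cluster's keyspace metadata in the real connector.
    private final Set<String> existing = ConcurrentHashMap.newKeySet();

    // "Create" the column family only if it is not there yet; returns true when created.
    // Set.add() is atomic here, so repeated calls at every server start are harmless no-ops.
    public boolean defineStream(String columnFamily) {
        return existing.add(columnFamily);
    }

    public boolean exists(String columnFamily) {
        return existing.contains(columnFamily);
    }
}
```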


[Dev] Doc on Roles and Permissions Configuration

2013-02-20 Thread Buddhika Chamith
Is there any doc on how to introduce fine-grained permissions to the
permission tree with our components? I realize that there are certain
entries that can be provided in component.xml. I think we need it
documented somewhere properly if it is not already; so far I couldn't find
any documentation on that. There are a bunch of such entries we put in
component.xml that need to be documented properly.

Regards
Buddhika
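Until such documentation exists, the entries in question declare permission-tree nodes in a component's component.xml. The shape below follows the common Carbon convention; treat the element names and resource paths as illustrative rather than authoritative:

```xml
<component xmlns="http://products.wso2.org/carbon">
  <!-- Each ManagementPermission adds a node to the fine-grained permission tree. -->
  <ManagementPermissions>
    <ManagementPermission>
      <DisplayName>Monitor</DisplayName>
      <ResourceId>/permission/admin/monitor</ResourceId>
    </ManagementPermission>
    <ManagementPermission>
      <DisplayName>BAM</DisplayName>
      <ResourceId>/permission/admin/monitor/bam</ResourceId>
    </ManagementPermission>
  </ManagementPermissions>
</component>
```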


[Dev] How to Debug XACML policy requests

2013-02-23 Thread Buddhika Chamith
Is it possible to see the messages being passed between the PEP and PDP (and
vice versa) for debugging purposes? I am getting the following error from the
XACML response at the ESB.

[2013-02-23 23:55:34,041] ERROR - EntitlementMediator
org.apache.synapse.SynapseException: Undefined Decision is received

I am trying the X.509 certificate-based scenario for authorization.

Regards
Buddhika


Re: [Dev] BAM 2.2.0 toolbox issue in windows7

2013-03-01 Thread Buddhika Chamith
Can you check the same with BAM 2.0.1? I suspect this might be a regression
due to some recent fixes done to Hadoop and Hive.

Regards
Buddhika

On Fri, Mar 1, 2013 at 8:04 AM, Vijayaratha Vijayasingam wrote:

> Hi all;
> I have installed cygwin and when i tried to execute a hivescript im
> getting following[1]  folder not found issue. That particular folder is
> existing and i  tried coping the exact folder path in the address bar and
> it finds the folder location correctly. There is no any permission issues.
> Does anybody know how can i fix this issue?
>
> [1]
> at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
> Job Submission failed with exception
> 'org.apache.hadoop.util.Shell$ExitCodeException(chmod:
> C:\Users\TOSHIBA\Downloads\wso2bam-2.2.0\tmp\hadoop\local\
>
> -532323203\archive\6488787564537180850_-311551688_637947327\file\C\Users\TOSHIBA\Downloads\wso2bam-2.2.0\repository\components\configuration\org.eclip
> se.osgi\bundles\55\1\.cp\hive-builtins-0.8.1-wso2v6.jar-work-3046470035301427082:
> No such file or directory
> )'
> [2013-03-01 18:27:12,460] ERROR
> {org.apache.hadoop.hive.ql.exec.ExecDriver} -  Job Submission failed with
> exception 'org.apache.hadoop.util.Shell$Exit
> CodeException(chmod:
> C:\Users\TOSHIBA\Downloads\wso2bam-2.2.0\tmp\hadoop\local\-532323203\archive\6488787564537180850_-311551688_637947327\file\C\User
>
> s\TOSHIBA\Downloads\wso2bam-2.2.0\repository\components\configuration\org.eclipse.osgi\bundles\55\1\.cp\hive-builtins-0.8.1-wso2v6.jar-work-3046470035
> 301427082: No such file or directory
> )'
> org.apache.hadoop.util.Shell$ExitCodeException: chmod:
> C:\Users\TOSHIBA\Downloads\wso2bam-2.2.0\tmp\hadoop\local\-532323203\archive\648878756453718085
>
> 0_-311551688_637947327\file\C\Users\TOSHIBA\Downloads\wso2bam-2.2.0\repository\components\configuration\org.eclipse.osgi\bundles\55\1\.cp\hive-builtin
> s-0.8.1-wso2v6.jar-work-3046470035301427082: No such file or directory
>


[Dev] Compilation Failure in Synapse

2013-04-15 Thread Buddhika Chamith
Getting the following during the Synapse build. Has anybody come across this before?

[ERROR] Failed to execute goal
org.apache.maven.plugins:maven-compiler-plugin:2.0:compile
(default-compile) on project synapse-nhttp-transport: Compilation failure:
Compilation failure:
[ERROR]
/mnt/windows/Users/chamith/Desktop/share/builds/410/dependencies/synapse/2.1.1-wso2v4/modules/transports/core/nhttp/src/main/java/org/apache/synapse/transport/certificatevalidation/ocsp/OCSPVerifier.java:[195,29]
cannot find symbol
[ERROR] symbol  : method add(org.bouncycastle.asn1.DERObjectIdentifier)
[ERROR] location: class
java.util.Vector
[ERROR] -> [Help 1]
[ERROR]

Regards
Buddhika


[Dev] Some 4.1.0 components are still referring to 4.0.0 components

2013-04-16 Thread Buddhika Chamith
I noticed the $subject while building 4.1.0 from a clean repo. I think these
instances need to be fixed.

e.g: components/rest-api

Regards
Buddhika


Re: [Dev] BAM Issue / release status

2013-05-01 Thread Buddhika Chamith
On Thu, May 2, 2013 at 9:53 AM, Chamara Ariyarathne wrote:

> BAM currently open issues:
> https://wso2.org/jira/secure/IssueNavigator.jspa?mode=hide&requestId=10980
>
> Also we had to manually include some features to the pack to test some new
> features, like;
> 1. Cassandra as a datasource in hive scripts: current toolboxes are not
> updated with that
>

This will be available in the next RC pack.


> 2. Oracle, MSSQL supported hive scripts: found in mail threads, support
> jiras
>

We have decided not to release separate toolboxes for other databases with
this BAM release, since there were concerns about the maintainability of such
an approach. This was discussed on a separate thread on the dev list, and we
are planning to have the discussed approach incorporated into the next release
so that one toolbox can handle all the supported databases.

Regards
Buddhika

The fully distributed setup had faced some disk space issues, so we need to
> run a full long test again. The current test was interrupted several times.
>
-- 
> *Chamara Ariyarathne*
> Senior Software Engineer - QA;
> WSO2 Inc; http://www.wso2.com/.
> Mobile; *+94772786766*
>


Re: [Dev] More than jars of same feature in BAM plugins folder

2013-05-02 Thread Buddhika Chamith
The zookeeper duplication is already fixed and will be available with the next
pack. Apparently the cassandra duplication is occurring due to the logging-mgt
component still referring to an older Cassandra version.

Regards
Buddhika


On Thu, May 2, 2013 at 3:07 PM, Chamara Ariyarathne wrote:

> I found this in the {$BAM_HOME}/repository/components/plugins folder
>
> -rw-r--r--  1 chamara chamara 3.1M Apr 22 22:18
> apache-cassandra_1.1.3.wso2v2.jar
> -rw-r--r--  1 chamara chamara 3.1M Apr 22 22:18
> apache-cassandra_1.1.3.wso2v3.jar
> -rw-r--r--  1 chamara chamara 598K Apr 22 22:18
> apache-zookeeper_3.3.6.wso2v1.jar
> -rw-r--r--  1 chamara chamara 768K Apr 22 22:18
> apache-zookeeper_3.4.4.wso2v1.jar
>
> Is it to support those versions? Or are these added here by mistake?
>
> --
> *Chamara Ariyarathne*
> Senior Software Engineer - QA;
> WSO2 Inc; http://www.wso2.com/.
> Mobile; *+94772786766*
>
>
>


Re: [Dev] More than jars of same feature in BAM plugins folder

2013-05-02 Thread Buddhika Chamith
Hi Amani,

Yes, you are right. :) Sorry for the noise. Upon closer inspection, the
culprit is apparently the analytics feature.

Regards
Buddhika


On Thu, May 2, 2013 at 3:40 PM, Amani Soysa  wrote:

> Hi Buddhika,
>
> BAM does not contain logging-mgt feature. We have removed logging-mgt
> feature from carbon 4.0.0 onwards.
>
> Regards,
> Amani
>
>
> On Thu, May 2, 2013 at 3:26 PM, Buddhika Chamith wrote:
>
>> The zookeeper duplication is already fixed and will be available with the
>> next pack. Apparently the cassandra duplication is occurring due to the
>> logging-mgt component still referring to an older Cassandra version.
>>
>> Regards
>> Buddhika
>>
>>
>> On Thu, May 2, 2013 at 3:07 PM, Chamara Ariyarathne wrote:
>>
>>> I found this in the {$BAM_HOME}/repository/components/plugins folder
>>>
>>> -rw-r--r--  1 chamara chamara 3.1M Apr 22 22:18
>>> apache-cassandra_1.1.3.wso2v2.jar
>>> -rw-r--r--  1 chamara chamara 3.1M Apr 22 22:18
>>> apache-cassandra_1.1.3.wso2v3.jar
>>> -rw-r--r--  1 chamara chamara 598K Apr 22 22:18
>>> apache-zookeeper_3.3.6.wso2v1.jar
>>> -rw-r--r--  1 chamara chamara 768K Apr 22 22:18
>>> apache-zookeeper_3.4.4.wso2v1.jar
>>>
>>> Is it to support those versions? Or are these added here by mistake?
>>>
>>> --
>>> *Chamara Ariyarathne*
>>> Senior Software Engineer - QA;
>>> WSO2 Inc; http://www.wso2.com/.
>>> Mobile; *+94772786766*
>>>
>>>
>>>
>>
>>
>>
>
>
> --
> Amani Soysa
> Senior Software Engineer
> Mobile: +94772325528
> WSO2, Inc. | http://wso2.com/
> Lean . Enterprise . Middleware
>


Re: [Dev] Experiencing a failure situation with BAM Fully distributed cluster

2013-05-06 Thread Buddhika Chamith
Yes, if that is the case then it's something which needs to be looked at.
After deleting the scheduled tasks, I believe the tasks were rescheduled,
right? Were there any errors while re-scheduling the tasks?

Regards
Buddhika


On Mon, May 6, 2013 at 12:40 PM, Chamara Ariyarathne wrote:

>
>
> On Mon, May 6, 2013 at 12:38 PM, Tharindu Mathew wrote:
>
>>
>>
>> On Mon, May 6, 2013 at 12:35 PM, Chamara Ariyarathne 
>> wrote:
>>
>>>
>>>
>>> On Mon, May 6, 2013 at 12:32 PM, Tharindu Mathew wrote:
>>>
 #3 looks like a bug...

>>>
>>> Is it wrong to do that? Copy cassandra folder from an old one to a new
>>> pack?
>>>
>>
>> Ermmm... Are you ok? #3 was related to analyzer scripts ;)
>>
>> If you deleted the scripts and it didn't get unscheduled it's a bug in
>> the product...
>>
>
> Oh sorry I was confused.
>
> Yes I deleted the script and it seems the tasks were not unscheduled.
>
>>
 If the analyzer tasks are not running, then I would check the state of
 the Hadoop nodes and submit a new sample job and check...

 BAM folks, please assist ChamaraA...

 On Mon, May 6, 2013 at 12:27 PM, Chamara Ariyarathne >>> > wrote:

> Ccing to the BAM team.
>
>
> On Mon, May 6, 2013 at 12:22 PM, Chamara Ariyarathne <
> chama...@wso2.com> wrote:
>
>> Last friday I updated the setup with new BAM packs which is
>> 6e16710a86111e182656553a1b8bed7e  wso2bam-2.3.0.zip
>>
>> Steps I followed;
>> 1. Configuring the 5 nodes (2 Receiver/analyzer, 3 bam nodes as
>> cassandra).
>>   The configurations are done with a shell script written by me
>> which was the one used with the earlier pack too, so I believe I did not miss
>> any configuration.
>> 2. Copied the cassandra folder from the old cassandra instances.
>> 3. There was a problem I faced with analyzer scripts. I have deleted
>> the analyzer scripts from the UI but did not unschedule the tasks before
>> that. So it kept firing exceptions that the script could not be found.
>> I then deleted the tasks by browsing through the registry, at:
>>
>> /_system/governance/repository/components/org.wso2.carbon.tasks/definitions/-1234/HIVE_TASK
>>
>> 4. After that restarted the zookeeper instances too. But not the
>> hadoop.
>>
>> After all that I see the receiver functionality is working fine,
>> regardless of one exception occurred with one cassandra node down,
>> https://wso2.org/jira/browse/BAM-1169
>> And now I can see in the cassandra logs that the events are being
>> written in cassandra sent from an esb.
>>
>> But I cannot see that the analyzer tasks are running, and I also checked
>> the output database: the data has not been updated since the date the packs
>> were swapped. The test is in a kind of halted state. We need assistance
>> from the BAM team.
>>
>> Also, here again the need arises for a puppet-based solution to install
>> BAM, and also for guidance on how to change the packs in a situation where
>> a service pack is released.
>>
> If you want this you need to drive this... You can either learn by
 yourself (plenty of resources on the web) or get a puppet expert or get a
 training organized...

>>>
>> --
>> *Chamara Ariyarathne*
>> Senior Software Engineer - QA;
>> WSO2 Inc; http://www.wso2.com/.
>> Mobile; *+94772786766*
>>
>
>
>
> --
> *Chamara Ariyarathne*
> Senior Software Engineer - QA;
> WSO2 Inc; http://www.wso2.com/.
> Mobile; *+94772786766*
>



 --
 Regards,

 Tharindu Mathew

 Associate Technical Lead, WSO2 BAM
 Member - Data Mgmt. Committee

 blog: http://tharindumathew.com/
 M: +9459908

>>>
>>>
>>>
>>> --
>>> *Chamara Ariyarathne*
>>> Senior Software Engineer - QA;
>>> WSO2 Inc; http://www.wso2.com/.
>>> Mobile; *+94772786766*
>>>
>>
>>
>>
>> --
>> Regards,
>>
>> Tharindu Mathew
>>
>> Associate Technical Lead, WSO2 BAM
>> Member - Data Mgmt. Committee
>>
>> blog: http://tharindumathew.com/
>> M: +9459908
>>
>
>
>
> --
> *Chamara Ariyarathne*
> Senior Software Engineer - QA;
> WSO2 Inc; http://www.wso2.com/.
> Mobile; *+94772786766*
>


Re: [Dev] SSO enabling BAM dashboard

2013-05-26 Thread Buddhika Chamith
Hi Pradeep,

We don't use SSO with our dashboard Jaggery app at the moment. The user needs
to log in to the BAM dashboard separately. There was some difficulty in
sharing sessions between the management console and Jaggery applications at
the time the BAM dashboard was being built, IIRC. Chamath would be able to
fill in the specific details on that. Anyway, I guess we might now be able to
rethink this with the above-mentioned approach, if it solves our problem.

Regards
Buddhika



On Sun, May 26, 2013 at 11:58 AM, Nuwan Bandara  wrote:

> Hi pradeep,
>
> [1] are the apps with SSO, the portal app and store app are connected with
> SSO app.
>
> Regards,
> /Nuwan
>
> [1]
> https://svn.wso2.org/repos/wso2/carbon/platform/branches/4.0.0/products/ues/1.0.0/modules/apps/
>
>
> On Sun, May 26, 2013 at 11:41 AM, Pradeep Fernando wrote:
>
>> Hi All,
>>
>> I want to do the $subject. Since the dashboard is a Jaggery app, I believe
>> it is all about SSO-enabling a Jaggery app. This post [1] suggests that it
>> has been done for UES.
>>
>> Is this functionality available OOTB in BAM dashboards? If not, what does
>> it take to make it work? Code pointers/references highly appreciated.
>>
>>
>> [1] http://architects.dzone.com/articles/enabling-sso-wso2-user
>>
>> Thanks,
>> --Pradeep
>>
>>
>> --
>> *Pradeep Fernando*
>> Member, Management Committee - Platform & Cloud Technologies
>> Senior Software Engineer;WSO2 Inc.; http://wso2.com
>>
>> blog: http://pradeepfernando.blogspot.com
>> m: +94776603662
>>
>
>
>
> --
> *Thanks & Regards,
>
> Nuwan Bandara
> Technical Lead & Member, MC, Development Technologies
> WSO2 Inc. - lean . enterprise . middleware |  http://wso2.com
> blog : http://nuwanbando.com; email: nu...@wso2.com; phone: +94 11 763
> 9629
> *
> 
>


Re: [Dev] Behavior of Entitlement mediator when authorization fails

2012-06-16 Thread Buddhika Chamith
Hi,

I was able to return a proper SOAP fault to the client by explicitly
specifying a fault sequence containing a makeFault mediator for the proxy
(I was under the impression that, if a fault sequence is not specified, the
flow would go through the default fault sequence on an error condition).
Otherwise it silently drops the message. Thanks Asela for the offline tip.
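For anyone hitting the same silent-drop behaviour, a minimal sketch of such a
proxy configuration is below. The proxy name, endpoint URLs, and credentials
are illustrative placeholders, not taken from the original setup:

```xml
<proxy xmlns="http://ws.apache.org/ns/synapse" name="SecuredProxy"
       transports="http https">
  <target>
    <inSequence>
      <!-- Entitlement mediator consults the XACML PDP on WSO2 IS -->
      <entitlementService remoteServiceUrl="https://localhost:9443/services"
                          remoteServiceUserName="admin"
                          remoteServicePassword="admin"/>
      <send>
        <endpoint>
          <address uri="http://localhost:9000/services/echo"/>
        </endpoint>
      </send>
    </inSequence>
    <outSequence>
      <send/>
    </outSequence>
    <!-- Without an explicit faultSequence the denied message was dropped
         silently; this returns a SOAP fault to the client instead -->
    <faultSequence>
      <makefault version="soap11">
        <code xmlns:soap11Env="http://schemas.xmlsoap.org/soap/envelope/"
              value="soap11Env:Client"/>
        <reason value="User not authorized to perform the action"/>
      </makefault>
      <send/>
    </faultSequence>
  </target>
</proxy>
```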

Regards
Buddhika

On Sun, Jun 17, 2012 at 11:10 AM, Andun Gunawardena  wrote:

> +1 For Buddhika.
>
> I have also noticed this, because these days I am trying to build a Tomcat
> Valve to do the same task done by the Entitlement Mediator, so I also played
> with the existing one. I noticed that the client only shows an exception
> when authorization is denied.
>
> Exception in thread "main" org.apache.axis2.AxisFault: The input stream
> for an incoming message is null.
>
> Also, the ESB console only shows some info about the failed authorization
> when debug logging is enabled. It shows this:
>
> [2012-06-17 09:33:35,098] DEBUG - EntitlementCallbackHandler Service name
> http://localhost:8280/services/echo
> [2012-06-17 09:33:36,098] DEBUG - EntitlementMediator User not authorized
> to perform the action :Deny
>
> So I think Buddhika's suggestion is a good way to improve the scenario,
> because the message "The input stream for an incoming message is null." can
> mislead the person who uses it.
>
> Thanks
> AndunSLG
>
> On Sun, Jun 17, 2012 at 8:45 AM, Suresh Attanayaka wrote:
>
>> Hi Chamith,
>>
>> Sorry for the mistake. I was trying an OAuth-XACML scenario, so I was
>> mistaken. No, it did not go through the fault sequence.
>>
>> Thanks,
>> Suresh
>>
>>
>> On Sun, Jun 17, 2012 at 7:02 AM, Buddhika Chamith wrote:
>>
>>> Hi Suresh,
>>>
>>> Well it's the entitlement mediator I tried out. I think you have tried
>>> out the OAuth mediator. Anyway, I am getting the following log at IS.
>>>
>>> [2012-06-17 08:31:45,595]  INFO
>>> {org.wso2.carbon.identity.entitlement.policy.PolicyCollection} -  Matching
>>> XACML policy found urn:sample:xacml:2.0:samplepolicy
>>> [2012-06-17 08:31:45,599]  INFO
>>> {org.wso2.carbon.identity.entitlement.pip.CarbonAttributeFinder} -  No
>>> attribute designators defined for the attribute group
>>>
>>> Did the flow go through the fault sequence when the OAuth
>>> authorization failed?
>>>
>>> Thanks and Regards
>>> Buddhika
>>>
>>>
>>> On Sun, Jun 17, 2012 at 4:11 AM, Suresh Attanayaka wrote:
>>>
>>>> Hi Chamith,
>>>>
>>>> I do get an error log for failed authorizations at the ESB console.
>>>> Given bellow is the exception I could generate.
>>>>
>>>> [2012-06-17 02:25:53,044] ERROR - OAuthMediator Error occured while
>>>> validating oauth consumer
>>>> org.apache.synapse.SynapseException: OAuth authentication failed
>>>> at
>>>> org.wso2.carbon.identity.oauth.mediator.OAuthMediator.mediate(OAuthMediator.java:120)
>>>>  at
>>>> org.apache.synapse.mediators.AbstractListMediator.mediate(AbstractListMediator.java:60)
>>>> at
>>>> org.apache.synapse.mediators.base.SequenceMediator.mediate(SequenceMediator.java:114)
>>>>  at
>>>> org.apache.synapse.core.axis2.ProxyServiceMessageReceiver.receive(ProxyServiceMessageReceiver.java:154)
>>>> at org.apache.axis2.engine.AxisEngine.receive(AxisEngine.java:181)
>>>>  at
>>>> org.apache.axis2.transport.http.util.RESTUtil.invokeAxisEngine(RESTUtil.java:144)
>>>> at
>>>> org.apache.axis2.transport.http.util.RESTUtil.processURLRequest(RESTUtil.java:139)
>>>>  at
>>>> org.apache.synapse.transport.nhttp.util.RESTUtil.processGetAndDeleteRequest(RESTUtil.java:139)
>>>> at
>>>> org.apache.synapse.transport.nhttp.DefaultHttpGetProcessor.processGetAndDelete(DefaultHttpGetProcessor.java:464)
>>>>  at
>>>> org.wso2.carbon.transport.nhttp.api.NHttpGetProcessor.process(NHttpGetProcessor.java:296)
>>>> at
>>>> org.apache.synapse.transport.nhttp.ServerWorker.run(ServerWorker.java:258)
>>>>  at
>>>> org.apache.axis2.transport.base.threads.NativeWorkerPool$1.run(NativeWorkerPool.java:173)
>>>> at
>>>> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>>>>  at
>>>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>>>> at java.lang.Thre

[Dev] Using DBReport mediator with a stored procedure

2012-06-19 Thread Buddhika Chamith
Hi,

Is it possible to use the DBReport mediator to call a stored procedure that
has both IN and OUT parameters? Basically, we want to get a return value back
from the procedure by invoking it with a set of parameters.
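For context, the dbreport statement syntax binds IN parameters from the
current message via XPath, roughly as in this sketch (the driver, URL,
procedure name, and namespaces are made up for illustration); whether an OUT
parameter or return value can be read back into the message is exactly the
open question here:

```xml
<dbreport xmlns="http://ws.apache.org/ns/synapse">
  <connection>
    <pool>
      <driver>com.mysql.jdbc.Driver</driver>
      <url>jdbc:mysql://localhost:3306/orders</url>
      <user>user</user>
      <password>password</password>
    </pool>
  </connection>
  <statement>
    <!-- Each ? is an IN parameter bound from the message by XPath -->
    <sql>call update_order_status(?, ?)</sql>
    <parameter expression="//m:orderId" xmlns:m="http://example.org/order"
               type="INTEGER"/>
    <parameter expression="//m:status" xmlns:m="http://example.org/order"
               type="VARCHAR"/>
  </statement>
</dbreport>
```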

Regards
Buddhika


[Dev] BAM2 M5 Demo Session Notes

2012-06-25 Thread Buddhika Chamith
Rename the KPI sample to the Retail Store sample.
Vary the figures for each item in the sample (the graphs currently look bad
with all items having the same values).
Make some gadgets based on time (requests per hour, etc.).


Look into feature-repo-based tool box distribution and get the tool box list
down from a p2 repo:
  - Local p2 repo
  - From a remote repo
Tool boxes can be released separately on a continuous basis, independent of
BAM major releases.
We should have another name for "tool box".


Several UI issues need to be fixed.
  - Fix "Awaiting to deploy" text
  - Wrap text in the Hive editor


Should we call M5 an Alpha, as it is feature complete?
Rename org.apache.hadoop.hive.jdbc to org.wso2.hadoop.hive.jdbc
Rename org.apache.hadoop.hive.cassandra to org.wso2.hadoop.hive.cassandra
Use secure vault for credentials provided in scripts.


Are the database writes done by the Hive JDBC handler multi-tenanted? We can
leverage data sources to make this tenant aware.
Add support for JNDI/Carbon data sources in the STORED BY clause of the Hive
CREATE TABLE statement.


The event stream name in the API doesn't look good.
Need to review the REST API.
Take the JAX-RS support discussion separately (go through web-app or Carbon
component based support).


Make the BAM mediator consistent with the log mediator. Give options to dump
the whole message body, part of it (via XPath), or no message body at all,
sending only the headers.
Do we need a new name for the BAM mediator, now that it will push data to CEP
as well as BAM? The syntax should be consistent with the log mediator syntax.
Published data should include httpd-style log lines, transport headers, etc.

Have the gadget server inside BAM. Use a gadget repo with BAM-specific
gadgets.
Need to lose the dashboard.
Have a useful tool box sample (need a better name for this): the retail store
example plus something like log analytics with a sample log data set? Process
OT logs with BAM2.


For statistics, BAM1-level stats monitoring is required.
Make the dashboard story work with Jasper Reports for mediation and service
statistics (TPS figures are needed too).
It's ok to include the Jasper war files inside the bar file.
For the Jasper visualization story there are a couple of options. We need to
investigate and adopt the suitable one:
 - Build the whole dashboard with Jasper
 - Make gadgets refer to Jasper UI elements within iframes

Please add anything I have missed.

Regards
Buddhika


[Dev] Compilation Error in org.wso2.carbon.core in Kernel

2012-06-25 Thread Buddhika Chamith
I am getting the following compilation error after an svn up to revision
131137. Is anybody else facing the same issue?

[INFO] -
[ERROR] COMPILATION ERROR :
[INFO] -
[ERROR]
/opt/builds/trunk/carbon/kernel/trunk/core/org.wso2.carbon.core/src/main/java/org/wso2/carbon/core/transports/metering/MeteredServletRequest.java:[38,7]
org.wso2.carbon.core.transports.metering.MeteredServletRequest is not
abstract and does not override abstract method getPart(java.lang.String) in
javax.servlet.http.HttpServletRequest
[ERROR]
/opt/builds/trunk/carbon/kernel/trunk/core/org.wso2.carbon.core/src/main/java/org/wso2/carbon/core/transports/metering/MeteredServletResponse.java:[32,7]
org.wso2.carbon.core.transports.metering.MeteredServletResponse is not
abstract and does not override abstract method getHeaderNames() in
javax.servlet.http.HttpServletResponse
[INFO] 2 errors
[INFO] -
[INFO]

[INFO] BUILD FAILURE
[INFO]

[INFO] Total time: 6.631s
[INFO] Finished at: Tue Jun 26 12:17:16 IST 2012
[INFO] Final Memory: 25M/157M
[INFO]



Regards
Buddhika


Re: [Dev] Compilation Error in org.wso2.carbon.core in Kernel

2012-06-26 Thread Buddhika Chamith
My bad. :) Sorry for the noise.

On Tue, Jun 26, 2012 at 12:24 PM, Janaka Ranabahu  wrote:

> Hi Buddhika,
>
> I believe the same error has been reported in the "[Dev] [kernal] Compilation
> failure" mail thread by Ajith.
>
> Thanks,
> Janaka
>
> On Tue, Jun 26, 2012 at 12:20 PM, Buddhika Chamith wrote:
>
>> I am getting the following compilation error after an svn up to revision
>> 131137. Is anybody else facing the same issue?
>>
>> [INFO] -
>> [ERROR] COMPILATION ERROR :
>> [INFO] -
>> [ERROR]
>> /opt/builds/trunk/carbon/kernel/trunk/core/org.wso2.carbon.core/src/main/java/org/wso2/carbon/core/transports/metering/MeteredServletRequest.java:[38,7]
>> org.wso2.carbon.core.transports.metering.MeteredServletRequest is not
>> abstract and does not override abstract method getPart(java.lang.String) in
>> javax.servlet.http.HttpServletRequest
>> [ERROR]
>> /opt/builds/trunk/carbon/kernel/trunk/core/org.wso2.carbon.core/src/main/java/org/wso2/carbon/core/transports/metering/MeteredServletResponse.java:[32,7]
>> org.wso2.carbon.core.transports.metering.MeteredServletResponse is not
>> abstract and does not override abstract method getHeaderNames() in
>> javax.servlet.http.HttpServletResponse
>> [INFO] 2 errors
>> [INFO] -
>> [INFO]
>> 
>> [INFO] BUILD FAILURE
>> [INFO]
>> 
>> [INFO] Total time: 6.631s
>> [INFO] Finished at: Tue Jun 26 12:17:16 IST 2012
>> [INFO] Final Memory: 25M/157M
>> [INFO]
>> 
>>
>>
>> Regards
>> Buddhika
>>
>>
>>
>
>
> --
> Janaka Ranabahu
> Software Engineer
> WSO2 Inc.
>
> Mobile +94 718370861
> Email : jan...@wso2.com
> Blog : janakaranabahu.blogspot.com
>
>


Re: [Dev] Jaggery is a self contained product which sits in platform/dependencies

2012-06-27 Thread Buddhika Chamith
On Thu, Jun 28, 2012 at 11:40 AM, Nuwan Bandara  wrote:

> Hi Tharindu,
>
> I have fixed this in the BAM2 p2-profile-gen. Anyhow, the parent property is
> in the jaggery product; it's better if you define the property in the BAM2
> product's parent pom.
>

The reason is that the gadgetgen feature is still importing the old jaggery
feature.

Regards
Buddhika


>
> Regards,
> /Nuwan
>
> On Thu, Jun 28, 2012 at 11:34 AM, Tharindu Mathew wrote:
>
>> Hi Nuwan,
>>
>> The BAM product build breaks because of this (at [1]). I will fix it...
>> Is there a parent property that you have defined?
>>
>> [1] -
>>
>> Missing requirement: WSO2 Carbon - Gadget Generation Server Feature
>> 4.0.0.SNAPSHOT (org.wso2.carbon.gadgetgenwizard.server.feature.group
>> 4.0.0.SNAPSHOT) requires 'org.wso2.carbon.jaggery.server.feature.group
>> [1.0.0.SNAPSHOT,1.1.0)' but it could not be found
>>
>> On Thu, Jun 28, 2012 at 8:03 AM, Pradeep Fernando wrote:
>>
>>> Hi,
>>>
>>> yes, we are working on that. first snapshot repo has to be cleaned,
>>> populated. dependencies/orbit should be released. note the mail from
>>> Dimuthu on "releasing orbits."
>>>
>>> thanks,
>>> --Pradeep
>>>
>>
>>
>>
>> --
>> Regards,
>>
>> Tharindu
>>
>> blog: http://mackiemathew.com/
>> M: +9459908
>>
>>
>>
>>
>
>
> --
> *Thanks & Regards,
>
> Nuwan Bandara
> Associate Technical Lead & Member, MC, Development Technologies
> WSO2 Inc. - lean . enterprise . middleware |  http://wso2.com
> blog : http://nuwanbando.com; email: nu...@wso2.com; phone: +94 11 763
> 9629
> *
> 
>
>
>


Re: [Dev] New API methods in SuperTenantCarbonContext to get hold of OSGi service references

2012-06-29 Thread Buddhika Chamith
Yes. It's working as expected from my web app. Thanks.

Regards
Buddhika

On Fri, Jun 29, 2012 at 9:10 PM, Afkham Azeez  wrote:

> I just tested this with the attached webapp and the OSGi service call from
> the webapp works as well.
>
>
> On Thu, Jun 28, 2012 at 6:31 PM, Afkham Azeez  wrote:
>
>> After a discussion during today'd BAM REST API review, we came up with
>> this idea of allowing webapps, services etc. to obtain references to OSGi
>> services deployed in the platform using the Carbon APIs. As a result of
>> this, I have added the following methods to the  SuperTenantCarbonContext.
>>
>> /**
>>  * Obtain the first OSGi service found for interface or class
>> clazz
>>  * @param clazz The type of the OSGi service
>>  * @return The OSGi service
>>  */
>> public Object getOSGiService(Class clazz)
>>
>>
>>  /**
>>  * Obtain the OSGi services found for interface or class
>> clazz
>>  * @param clazz The type of the OSGi service
>>  * @return The List of OSGi services
>>  */
>> public List getOSGiServices(Class clazz)
>>
>>
>>
>> Usage example:
>> ListenerManager listenerManager = (ListenerManager)
>> SuperTenantCarbonContext.getCurrentContext().getOSGiService(ListenerManager.class);
>> System.out.println("Is listener running: " +
>> !listenerManager.isStopped());
>>
>> I have added & tested the above call in one of the Carbon kernel
>> integration tests.
>>
>> --
>> *Afkham Azeez*
>> Director of Architecture; WSO2, Inc.; http://wso2.com
>> Member; Apache Software Foundation; http://www.apache.org/
>> * **
>> email: **az...@wso2.com* * cell: +94 77 3320919
>> blog: **http://blog.afkham.org* *
>> twitter: **http://twitter.com/afkham_azeez*
>> *
>> linked-in: **http://lk.linkedin.com/in/afkhamazeez*
>> *
>> *
>> *Lean . Enterprise . Middleware*
>>
>>
>
>
> --
> *Afkham Azeez*
> Director of Architecture; WSO2, Inc.; http://wso2.com
> Member; Apache Software Foundation; http://www.apache.org/
> * **
> email: **az...@wso2.com* * cell: +94 77 3320919
> blog: **http://blog.afkham.org* *
> twitter: **http://twitter.com/afkham_azeez*
> *
> linked-in: **http://lk.linkedin.com/in/afkhamazeez*
> *
> *
> *Lean . Enterprise . Middleware*
>
>


[Dev] Getting NumberFormatException during product build

2012-07-05 Thread Buddhika Chamith
I am getting this error during the p2-profile-gen. Any idea how to
overcome/debug this? My feeling is that a version provided somewhere is wrong.

[INFO] Running Equinox P2 Publisher Application for Repository Generation
[INFO] Command line:
/bin/sh -c cd
/opt/builds/trunk/carbon/platform/trunk/products/bam2/modules/p2-profile-gen
&& /opt/installations/jdk1.6.0_13/jre/bin/java -jar
/home/chamith/.m2/repository/org/eclipse/tycho/tycho-p2-runtime/0.13.0/eclipse/plugins/org.eclipse.equinox.launcher_1.2.0.v20110725-1610.jar
-nosplash -application
org.eclipse.equinox.p2.publisher.FeaturesAndBundlesPublisher -source
/opt/builds/trunk/carbon/platform/trunk/products/bam2/modules/p2-profile-gen/target/tmp.1341491384322/featureExtract
-metadataRepository
file:/opt/builds/trunk/carbon/platform/trunk/products/bam2/modules/p2-profile-gen/target/p2-repo
-metadataRepositoryName wso2bam-p2-profile-gen -artifactRepository
file:/opt/builds/trunk/carbon/platform/trunk/products/bam2/modules/p2-profile-gen/target/p2-repo
-artifactRepositoryName wso2bam-p2-profile-gen -publishArtifacts
-publishArtifactRepository -compress -append
Generating metadata for ..
Status ERROR: org.eclipse.equinox.p2.artifact.repository code=0 For input
string: "[0" java.lang.NumberFormatException: For input string: "[0"

Product publishing ended with the following exception:
java.lang.NumberFormatException: For input string: "[0"
at
java.lang.NumberFormatException.forInputString(NumberFormatException.java:48)
at java.lang.Integer.parseInt(Integer.java:447)
at java.lang.Integer.parseInt(Integer.java:497)
at org.osgi.framework.Version.(Version.java:127)
at org.osgi.framework.Version.parseVersion(Version.java:225)
at
org.eclipse.osgi.internal.resolver.StateBuilder.addExportPackages(StateBuilder.java:338)
at
org.eclipse.osgi.internal.resolver.StateBuilder.createExportPackages(StateBuilder.java:320)
at
org.eclipse.osgi.internal.resolver.StateBuilder.createBundleDescription(StateBuilder.java:110)
at
org.eclipse.osgi.internal.resolver.StateObjectFactoryImpl.createBundleDescription(StateObjectFactoryImpl.java:32)
at
org.eclipse.equinox.p2.publisher.eclipse.BundlesAction.createBundleDescription(BundlesAction.java:531)
at
org.eclipse.equinox.p2.publisher.eclipse.BundlesAction.createBundleDescription(BundlesAction.java:546)
at
org.eclipse.equinox.p2.publisher.eclipse.BundlesAction.getBundleDescriptions(BundlesAction.java:846)
at
org.eclipse.equinox.p2.publisher.eclipse.BundlesAction.perform(BundlesAction.java:657)
at
org.eclipse.equinox.p2.publisher.Publisher$ArtifactProcess.run(Publisher.java:207)
at
org.eclipse.equinox.internal.p2.artifact.repository.simple.SimpleArtifactRepository.executeBatch(SimpleArtifactRepository.java:1294)
at
org.eclipse.equinox.p2.publisher.Publisher.publish(Publisher.java:231)
at
org.eclipse.equinox.p2.publisher.AbstractPublisherApplication.run(AbstractPublisherApplication.java:283)
at
org.eclipse.equinox.p2.publisher.AbstractPublisherApplication.run(AbstractPublisherApplication.java:253)
at
org.eclipse.equinox.p2.publisher.AbstractPublisherApplication.start(AbstractPublisherApplication.java:315)
at
org.eclipse.equinox.internal.app.EclipseAppHandle.run(EclipseAppHandle.java:196)
at
org.eclipse.equinox.internal.app.AnyThreadAppLauncher.run(AnyThreadAppLauncher.java:26)
at java.lang.Thread.run(Thread.java:619)

Regards
Buddhika


Re: [Dev] Getting NumberFormatException during product build

2012-07-05 Thread Buddhika Chamith
Seems local to me. I will check the versions of recent modifications.

Regards
Buddhika

On Fri, Jul 6, 2012 at 7:40 AM, Pradeep Fernando  wrote:

> Hi,
>
> Is this local to you? If so, follow Dimuthu's steps. Otherwise you have to
> search through the manifests of all bundles.
>
> --Pradeep
>


Re: [Dev] Logging Implementation {was: Re: Any Possibility of defining the Hive output directory programmatically?}

2012-07-23 Thread Buddhika Chamith
Hi All,

It is not impossible to inject runtime variables (a bit like query parameters
in DSS) into Hive query execution, but it might take some modifications on
the Hive side to make it possible programmatically. Currently I am doing some
work in Hive to make it tenant aware. What I mentioned was that I can look
into this as part of that effort, though it might take a couple of days since
I have to figure out a clean way to expose tenant-specific Hive configuration
to the Carbon environment. Anyway, I was not aware of the thread on the Hive
user list, and going through it now I see that they have suggested an
alternative way, provided that we are ok with modifying the original script.

Regards
Buddhika

On Mon, Jul 23, 2012 at 6:41 PM, Tharindu Mathew  wrote:

> If you are planning to do a few MB, that would mean that the size of logs
> will be ( size of logs * no. of tenants ), so roughly for 200 active
> tenants and 2 MB of logs, it would come to around 400 MB. This is still
> manageable in a custom task if your data processing is low.
>
> On Mon, Jul 23, 2012 at 6:24 PM, Afkham Azeez  wrote:
>
>> Like you said, the task may not be the best way to do this. Like we
>> discussed the other day, we can publish logs to unique column families
>> which contain the __ as the unique identifier. We
>> need to generate logs in a file format & allow tenant users to download
>> those. What is the best approach to generate these log files from the data
>> collected? Typically, such a log file can run into a few MB.
>
> I'm a bit confused as we did not need to use Hive as per our earlier
> conversation. This is because as the data is published it is already
> grouped by server/ tenant and date.
>
>>
>> Azeez
>>
>>
>> On Mon, Jul 23, 2012 at 6:18 PM, Tharindu Mathew wrote:
>>
>>> I'm no expert, but I immediately question the scale of this approach.
>>>
>>> Do you have an idea of how much of logs you plan to process per task?
>>>
>>>
>>> On Mon, Jul 23, 2012 at 6:13 PM, Afkham Azeez  wrote:
>>>
 The requirement is simple. We need to generate log files on a per
 tenant, per date, per Service basis. Now as a big data & analytics expert,
 please advise us on what is the best solution for this.

 Azeez


 On Mon, Jul 23, 2012 at 6:05 PM, Tharindu Mathew wrote:

> So through this custom java task, what is the scale of log processing
> you will support? 100MB, 1 GB, 100 GB, 1 TB?
>
> On Mon, Jul 23, 2012 at 5:14 PM, Manisha Gayathri wrote:
>
>> Contacted Hive User Group as well on this matter.
>> They also mentioned that this approach is not possible.
>> Also as per the chat I had with Buddhika, right now, these kind of
>> dynamic variable creations is not possible in Hive that comes with BAM2.
>>
>> Therefore IMO, without going ahead with this cumbersome process, the
>> best way will be to run a scheduled java task to pick data from relevant
>> Cassandra Column families and dynamically generate the relevant log files
>> (according to the tenantID and current date) which will be stored in 
>> Apache
>> Directory.
>>
> You are going to store the results in a LDAP?
>
>>
>> As per the offline chat had with Azeez, will start to work on a
>> custom Java task that can handle the above scenario.
>>
>> On Mon, Jul 23, 2012 at 2:27 PM, Manisha Gayathri 
>> wrote:
>>
>>> Hi,
>>>
>>> For a log file storing scenario using BAM2, I have a requirement to
>>> generate separate log files for each date. For that I have created a 
>>> Hive
>>> Analytic query along with a Hive UDF as well.
>>>
>>> I have the getFilePath function which should return a URL like this.
>>>
>>> home/user/Desktop/logDir/logs/log_0_testServer_2012_07_22
>>>
>>> The defined function works perfectly if I put *getFilePath(
>>> "0","testServer" ) *into the *select* statement.
>>>
>>> But I want to get that particular URL as the *local directory name*.
>>> (The requirement is such that this should not be hard-coded in the hive
>>> query. Rather should be generated in the custom UDF. )
>>>
>>> So can I do something like I've shown below?
>>>
>>> *set file_name= getFilePath( "0","testServer" );*//Define a
>>> parameter.* *
>>> *.*
>>> *..*
>>> *INSERT OVERWRITE LOCAL DIRECTORY 'file:///${hiveconf:file_name}'
>>>  *//Assign the above parameter as the file URL
>>>
>>> I tried this way. But the directory name is returned as
>>>
>>> file:/getFilePath( "0" , "testServer" )
>>>
>>> Does that mean I cannot use UDF to define the local directory name?
>>> Or am I doing anything wrong in here?
>>>
>>>
>>> --
>>> ~Regards
>>> *Manisha Eleperuma*
>>> Software Engineer
>>> WSO2, Inc.: http://wso2.com
>>> lean.enterprise.mid

Re: [Dev] Logging Implementation {was: Re: Any Possibility of defining the Hive output directory programmatically?}

2012-07-23 Thread Buddhika Chamith
So if I understand right, the data are stored in separate column families
per tenant, server and day, and the requirement is to transfer each column
family's data directly to a flat file corresponding to the logs of a tenant
for a server on a given day, with no analytics involved. If that is the
case, may I suggest using what Tharindu suggested (insert select * from
foo) in combination with [1], in a loop over each column family. To provide
the directory name and the column family name dynamically, we can use the
Hive SET command and prepend it to the script before passing it to the Hive
execution service, as also suggested at [2].

Regards
Buddhika

[1]
https://cwiki.apache.org/Hive/languagemanual-dml.html#LanguageManualDML-Writingdataintofilesystemfromqueries

[2] http://mail-archives.apache.org/mod_mbox/hive-user/201207.mbox/browser

On Mon, Jul 23, 2012 at 7:21 PM, Tharindu Mathew  wrote:

> insert select * from foo
>
>
> On Mon, Jul 23, 2012 at 7:15 PM, Afkham Azeez  wrote:
>
>>
>>
>> On Mon, Jul 23, 2012 at 6:41 PM, Tharindu Mathew wrote:
>>
>>> If you are planning to do a few MB, that would mean that the size of
>>> logs will be ( size of logs * no. of tenants ), so roughly for 200 active
>>> tenants and 2 MB of logs, it would come to around 400 MB. This is still
>>> manageable in a custom task if your data processing is low.
>>>
>>> On Mon, Jul 23, 2012 at 6:24 PM, Afkham Azeez  wrote:
>>>
 Like you said, the task may not be the best way to do this. Like we
 discussed the other day, we can publish logs to unique column families
 which contain the __ as the unique identifier. We
 need to generate logs in a file format & allow tenant users to download
 those. What is the best approach to generate these log files from the data
 collected? Typically, such a log file can run into a few MB.
>>>
>>> I'm a bit confused as we did not need to use Hive as per our earlier
>>> conversation. This is because as the data is published it is already
>>> grouped by server/ tenant and date.
>>>
>>
>> Yeah, there is no analytics to be done. It is a problem of converting
>> data stored in Cassandra into a flat file.
>>
>>
>>>
 Azeez


 On Mon, Jul 23, 2012 at 6:18 PM, Tharindu Mathew wrote:

> I'm no expert, but I immediately question the scale of this approach.
>
> Do you have an idea of how much of logs you plan to process per task?
>
>
> On Mon, Jul 23, 2012 at 6:13 PM, Afkham Azeez  wrote:
>
>> The requirement is simple. We need to generate log files on a per
>> tenant, per date, per Service basis. Now as a big data & analytics 
>> expert,
>> please advise us on what is the best solution for this.
>>
>> Azeez
>>
>>
>> On Mon, Jul 23, 2012 at 6:05 PM, Tharindu Mathew 
>> wrote:
>>
>>> So through this custom java task, what is the scale of log
>>> processing you will support? 100MB, 1 GB, 100 GB, 1 TB?
>>>
>>> On Mon, Jul 23, 2012 at 5:14 PM, Manisha Gayathri 
>>> wrote:
>>>
 Contacted Hive User Group as well on this matter.
 They also mentioned that this approach is not possible.
 Also as per the chat I had with Buddhika, right now, these kind of
 dynamic variable creations is not possible in Hive that comes with 
 BAM2.

 Therefore IMO, without going ahead with this cumbersome process,
 the best way will be to run a scheduled java task to pick data from
 relevant Cassandra Column families and dynamically generate the 
 relevant
 log files (according to the tenantID and current date) which will be 
 stored
 in Apache Directory.

>>> You are going to store the results in a LDAP?
>>>

 As per the offline chat had with Azeez, will start to work on a
 custom Java task that can handle the above scenario.

 On Mon, Jul 23, 2012 at 2:27 PM, Manisha Gayathri >>> > wrote:

> Hi,
>
> For a log file storing scenario using BAM2, I have a requirement
> to generate separate log files for each date. For that I have created 
> a
> Hive Analytic query along with a Hive UDF as well.
>
> I have the getFilePath function which should return a URL like
> this.
>
> home/user/Desktop/logDir/logs/log_0_testServer_2012_07_22
>
> The defined function works perfectly if I put *getFilePath(
> "0","testServer" ) *into the *select* statement.
>
> But I want to get that particular URL as the *local directory name
> *. (The requirement is such that this should not be hard-coded in
> the hive query. Rather should be generated in the custom UDF. )
>
> So can I do something like I've shown below?
>
> *set file_name= getFilePath( "0","testServer" );*//Define a
> par

Re: [Dev] Logging Implementation {was: Re: Any Possibility of defining the Hive output directory programmatically?}

2012-07-23 Thread Buddhika Chamith
Please find my comments inline.

On Mon, Jul 23, 2012 at 8:35 PM, Manisha Gayathri  wrote:

> If I give a more clear picture into the scenario;
> We have separate column families for each tenant, server and day.
> Eg:
>
> log_0_esbserver_2012_07_23
> log_1_esbserver_2012_07_23
> log_2_esbserver_2012_07_23
> log_0_esbserver_2012_07_24
> log_2_appserver_2012_07_24
> log_3_appserver_2012_07_24   (0,1,2.. denotes the tenantID)
>
>
> With the task/summarizer, running at the end of the day, we need to create
> compressed files containing info in each of the above col. family.
> Eg:
>
> ../0/esbserver/2012_07_23/logs.gz
> ../1/esbserver/2012_07_23/logs.gz
> ../2/esbserver/2012_07_23/logs.gz
> ../0/esbserver/2012_07_24/logs.gz
>
>
> If we are doing this with Hive, we need to consider the following facts;
>
>1. Dynamically pick the ALL the column families that is related to the
>particular date.
>2. Dynamically generate the file URL for each of the log file.
>
Any particular reason why we need to assume the above to be a
responsibility of Hive? I am under the impression that this dynamic URL can
easily be created outside the Hive query, from within the logging
component, rather than pushing it into the Hive query (using concat etc.).
Please correct me if I am wrong.

If there isn't an issue preventing that, we can do the following.

Assume the following is your Hive query, which transfers data from
Cassandra to the local file system:

DROP TABLE cassandraTable;
CREATE TABLE cassandraTable (tenantId INT, serverName STRING,
    serviceName STRING, date TIMESTAMP, logString STRING)
  STORED BY 'org.apache.hadoop.hive.cassandra.CassandraStorageHandler'
  WITH SERDEPROPERTIES (
    "cassandra.host" = "127.0.0.1",
    "cassandra.port" = "9160",
    "cassandra.ks.name" = "EVENT_KS",
    "cassandra.ks.username" = "admin",
    "cassandra.ks.password" = "admin",
    "cassandra.cf.name" = "${hiveconf:cfname}",
    "cassandra.columns.mapping" = ":key,serverName,payload_user,serviceName,date,logString" );
INSERT OVERWRITE LOCAL DIRECTORY {hiveconf:dir} select * from cassandraTable;

Now say we have following cf log tables.

log_0_esbserver_2012_07_23
log_1_esbserver_2012_07_23
log_2_esbserver_2012_07_23

Now within the logging component we can do the following (in pseudo-code):

for (each tenant i)
    for (each server j)
        var logCf := "log" + "_" + i + "_" + j + "_" + cur_date()
        var logLocation := "file:///../" + i + "/" + j + "/" + cur_date()
        // May need to create the folders at this point if they do not exist

        var hiveScript := getScriptFromRegistry()
        var setCfName := "SET cfname=" + logCf      // set the cf name variable
        hiveScript := setCfName + hiveScript        // prepend it to the script

        hiveScript := hiveScript.replace("{hiveconf:dir}", logLocation)
        // Need to do a regex replace here, since it is not possible to use
        // variable substitution for entity names such as directories/tables etc.

        HiveExecutionService.execute(hiveScript)
        // Execute the runtime-modified Hive script using the Hive script
        // execution OSGi service

There will be opportunities to parallelize this script execution once we
get this basic form working. HTH.
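The pseudo-code above boils down to simple string templating before each execution. Below is a minimal, self-contained Java sketch of just that assembly step; the class name, the template string, and the log directory layout are illustrative stand-ins, and the registry lookup and HiveExecutionService call are stubbed out with printing, since those are Carbon services not available here.

```java
// Sketch of the per-tenant/per-server Hive script assembly described above.
public class HiveScriptAssembler {

    /**
     * Prepend the SET statement for the column family name, then substitute
     * the literal {hiveconf:dir} placeholder with the output directory.
     * A plain string replace is used because hiveconf variable substitution
     * cannot be used for directory names.
     */
    static String buildScript(String template, int tenantId, String server, String date) {
        String logCf = "log_" + tenantId + "_" + server + "_" + date;
        String logLocation = "file:///logs/" + tenantId + "/" + server + "/" + date;
        return "SET cfname=" + logCf + ";\n"
                + template.replace("{hiveconf:dir}", "'" + logLocation + "'");
    }

    public static void main(String[] args) {
        String template =
                "INSERT OVERWRITE LOCAL DIRECTORY {hiveconf:dir} select * from cassandraTable;";
        for (int tenant : new int[] {0, 1, 2}) {
            // Stand-in for HiveExecutionService.execute(script)
            System.out.println(buildScript(template, tenant, "esbserver", "2012_07_23"));
        }
    }
}
```

Each iteration yields an independent script, which is also what makes the later parallelization straightforward: there is no shared state between column families beyond the read-only template.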

Regards
Buddhika





> Tried various options to achieve the above, but with no luck. In [1], which
> you have suggested, we need to give the file URL. But how can I dynamically
> generate the URL for each day, each server and each tenant? (Because none
> of the operations like concat work for providing this file URL.)
>
> Appreciate if you could suggest a concrete plan to implement this.
>
> [1].
> https://cwiki.apache.org/Hive/languagemanual-dml.html#LanguageManualDML-Writingdataintofilesystemfromqueries
>
> Thanks
> Rgds
> Manisha
>
> On Mon, Jul 23, 2012 at 7:55 PM, Buddhika Chamith wrote:
>
>> So if I understand right the data are stored in seperate column families
>> per each tenant,server,day and the requirement is to transfer these column
>> family data directly to a flat file which corresponds to a logs from a
>> tenant for a server in a given day with no analytics involved. If it is the
>> case may I suggest using what tharindu suggested (insert select * from foo)
>> in combination with [1] in a loop for each column family. In order to
>> dynamically  provide the directory name and the column family name we can
>> use SET hive command and append it to the script before passing in to the
>> Hive execution service as also suggested at [2].
>>
>> Regards
>> Buddhika
>>
>> [1]
>> https://cwiki.apache.org/Hive/languagemanual-dml.html#LanguageManualDML-Writingdataintofilesystemfromqueries
>>
>> [2]
>>

Re: [Dev] Moving AS to new BAM data publisher

2012-07-24 Thread Buddhika Chamith
On Tue, Jul 24, 2012 at 7:36 PM, Supun Malinga  wrote:

> Hi,
>
> On Sat, Jul 21, 2012 at 8:35 PM, Supun Malinga  wrote:
>
>> Hi devs,
>>
>> This is regarding the issue [1].
>> As per the offline chat I had with buddika got to know that the new
>> implementation is org.wso2.carbon.bam.service.agent.feature. Hence we may
>> need to move to that.
>>
>> But I'm not sure if stratos parts still need the old version. I'm going
>> to build AS with new changes and run and see.
>>
>
> I'm getting p2 level issue requiring more dependent features for
>  org.wso2.carbon.bam.service.agent.feature. There are few. So I need to
> include them as well?
> Buddika  can you specify what features I need to include in p2-gen without
> getting too much unwanted stuff?.
>

Data-bridge features are required for service agent feature functionality.
So you will need to include these features as well.

Regards
Buddhika


>
> Also please acknowledge if there are any concerns.
>>
> Stratos folks  please let me know if there are any dependency
> for org.wso2.carbon.bam.data.publisher.servicestats.feature from stratos
> components.
>
> thanks,
>
>>
>> [1] https://wso2.org/jira/browse/WSAS-884
>>
>> thanks,
>> --
>> Supun Malinga,
>>
>> Software Engineer,
>> WSO2 Inc.
>> http://wso2.com
>> http://wso2.org
>> email - sup...@wso2.com 
>> mobile - 071 56 91 321
>>
>>
>
>
> --
> Supun Malinga,
>
> Software Engineer,
> WSO2 Inc.
> http://wso2.com
> http://wso2.org
> email - sup...@wso2.com 
> mobile - 071 56 91 321
>
>
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] Platform Build Failure - branch : analytics

2012-07-28 Thread Buddhika Chamith
Fixed. Sorry for the inconvenience.

Regards
Buddhika

On Sat, Jul 28, 2012 at 4:03 PM, Kasun Indrasiri  wrote:

> Platform is updated.
>
> [INFO] BUILD FAILURE
> [INFO]
> 
> [INFO] Total time: 3.512s
> [INFO] Finished at: Sat Jul 28 16:01:21 IST 2012
> [INFO] Final Memory: 28M/341M
> [INFO]
> 
> [ERROR] Failed to execute goal
> org.apache.maven.plugins:maven-compiler-plugin:2.3.2:compile
> (default-compile) on project org.wso2.carbon.analytics.hive: Compilation
> failure: Compilation failure:
> [ERROR]
> /home/kasun/development/wso2/wso2svn/carbon/platform/branches/4.0.0/components/analytics/org.wso2.carbon.analytics.hive/4.0.0/src/main/java/org/wso2/carbon/analytics/hive/ServiceHolder.java:[27,46]
> package org.wso2.carbon.rssmanager.core.service does not exist
> [ERROR]
> /home/kasun/development/wso2/wso2svn/carbon/platform/branches/4.0.0/components/analytics/org.wso2.carbon.analytics.hive/4.0.0/src/main/java/org/wso2/carbon/analytics/hive/ServiceHolder.java:[40,19]
> cannot find symbol
> [ERROR] symbol  : class RSSManagerService
> [ERROR] location: class org.wso2.carbon.analytics.hive.ServiceHolder
> [ERROR]
> /home/kasun/development/wso2/wso2svn/carbon/platform/branches/4.0.0/components/analytics/org.wso2.carbon.analytics.hive/4.0.0/src/main/java/org/wso2/carbon/analytics/hive/ServiceHolder.java:[104,44]
> cannot find symbol
> [ERROR] symbol  : class RSSManagerService
> [ERROR] location: class org.wso2.carbon.analytics.hive.ServiceHolder
> [ERROR]
> /home/kasun/development/wso2/wso2svn/carbon/platform/branches/4.0.0/components/analytics/org.wso2.carbon.analytics.hive/4.0.0/src/main/java/org/wso2/carbon/analytics/hive/ServiceHolder.java:[108,18]
> cannot find symbol
> [ERROR] symbol  : class RSSManagerService
> [ERROR] location: class org.wso2.carbon.analytics.hive.ServiceHolder
> [ERROR]
> /home/kasun/development/wso2/wso2svn/carbon/platform/branches/4.0.0/components/analytics/org.wso2.carbon.analytics.hive/4.0.0/src/main/java/org/wso2/carbon/analytics/hive/internal/HiveServiceComponent.java:[42,46]
> package org.wso2.carbon.rssmanager.core.service does not exist
> [ERROR]
> /home/kasun/development/wso2/wso2svn/carbon/platform/branches/4.0.0/components/analytics/org.wso2.carbon.analytics.hive/4.0.0/src/main/java/org/wso2/carbon/analytics/hive/internal/HiveServiceComponent.java:[202,40]
> cannot find symbol
> [ERROR] symbol  : class RSSManagerService
> [ERROR] location: class
> org.wso2.carbon.analytics.hive.internal.HiveServiceComponent
> [ERROR]
> /home/kasun/development/wso2/wso2svn/carbon/platform/branches/4.0.0/components/analytics/org.wso2.carbon.analytics.hive/4.0.0/src/main/java/org/wso2/carbon/analytics/hive/internal/HiveServiceComponent.java:[210,42]
> cannot find symbol
> [ERROR] symbol  : class RSSManagerService
> [ERROR] location: class
> org.wso2.carbon.analytics.hive.internal.HiveServiceComponent
> [ERROR]
> /home/kasun/development/wso2/wso2svn/carbon/platform/branches/4.0.0/components/analytics/org.wso2.carbon.analytics.hive/4.0.0/src/main/java/org/wso2/carbon/analytics/hive/impl/HiveExecutorServiceImpl.java:[20,39]
> cannot find symbol
> [ERROR] symbol  : class HiveContext
> [ERROR] location: package org.apache.hadoop.hive.metastore
> [ERROR]
> /home/kasun/development/wso2/wso2svn/carbon/platform/branches/4.0.0/components/analytics/org.wso2.carbon.analytics.hive/4.0.0/src/main/java/org/wso2/carbon/analytics/hive/multitenancy/HiveAxis2ConfigObserver.java:[25,45]
> package org.wso2.carbon.rssmanager.core.entity does not exist
> [ERROR]
> /home/kasun/development/wso2/wso2svn/carbon/platform/branches/4.0.0/components/analytics/org.wso2.carbon.analytics.hive/4.0.0/src/main/java/org/wso2/carbon/analytics/hive/multitenancy/HiveAxis2ConfigObserver.java:[26,45]
> package org.wso2.carbon.rssmanager.core.entity does not exist
> [ERROR]
> /home/kasun/development/wso2/wso2svn/carbon/platform/branches/4.0.0/components/analytics/org.wso2.carbon.analytics.hive/4.0.0/src/main/java/org/wso2/carbon/analytics/hive/multitenancy/HiveAxis2ConfigObserver.java:[27,45]
> package org.wso2.carbon.rssmanager.core.entity does not exist
> [ERROR]
> /home/kasun/development/wso2/wso2svn/carbon/platform/branches/4.0.0/components/analytics/org.wso2.carbon.analytics.hive/4.0.0/src/main/java/org/wso2/carbon/analytics/hive/multitenancy/HiveAxis2ConfigObserver.java:[28,45]
> package org.wso2.carbon.rssmanager.core.entity does not exist
> [ERROR]
> /home/kasun/development/wso2/wso2svn/carbon/platform/branches/4.0.0/components/analytics/org.wso2.carbon.analytics.hive/4.0.0/src/main/java/org/wso2/carbon/analytics/hive/multitenancy/HiveAxis2ConfigObserver.java:[29,46]
> package org.wso2.carbon.rssmanager.core.service does not exist
> [ERROR]
> /home/kasun/development/wso2/wso2svn/carbon/platform/branches/4.0.0/components/analytics/org.wso2.carbon.analytics.hive/4.0.0/src/main/java/org/w

[Dev] Compilation error in Kernel

2012-07-30 Thread Buddhika Chamith
Getting the following while building the kernel (r135869).

[ERROR] Failed to execute goal
org.apache.maven.plugins:maven-compiler-plugin:2.3.2:compile
(default-compile) on project org.wso2.carbon.registry.core: Compilation
failure
[ERROR]
/opt/build/trunk/carbon/kernel/branches/4.0.0/core/org.wso2.carbon.registry.core/4.0.0/src/main/java/org/wso2/carbon/registry/core/session/UserRegistry.java:[363,38]
';' expected
[ERROR] -> [Help 1]


Regards
Buddhika


[Dev] Build Failure - Identity Component

2012-07-31 Thread Buddhika Chamith
Getting this build failure while building the identity component (r136001).

[ERROR] Failed to execute goal on project org.wso2.carbon.identity.sts.mgt:
Could not resolve dependencies for project
org.wso2.carbon:org.wso2.carbon.identity.sts.mgt:bundle:4.0.0: The
repository system is offline but the artifact
org.wso2.carbon:org.wso2.carbon.xfer:jar:3.2.0 is not available in the
local repository. -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.

Regards
Buddhika


[Dev] Getting Tenant Domain via CarbonContext APIs

2012-08-01 Thread Buddhika Chamith
I am seeing the following behavior when fetching the tenant domain from the
CarbonContext APIs.

CarbonContextHolder.getCurrentCarbonContextHolder().getTenantDomain()
returns NULL.
CarbonContext.getCurrentContext().getTenantDomain() returns NULL.
SuperTenantCarbonContext.getCurrentContext().getTenantDomain() returns NULL.

But getTenantId() always succeeds, returning -1234. Now if I try the call
below, it gives the super tenant domain.

SuperTenantCarbonContext.getCurrentContext().getTenantDomain(true) returns
the super tenant domain.

Afterwards, the above three methods also return the super tenant domain
properly. So does this mean we always have to use the variant accepting the
boolean (boolean resolve) to be sure that the domain gets resolved
properly? Any best practices regarding CarbonContext API usage would be
helpful in this regard. I am invoking these methods during component
initialization. IIRC, component initialization code did not run as any
tenant in previous Carbon versions. I assume this is no longer the case
with the recent Tomcat OSGification changes?

Regards
Buddhika


Re: [Dev] 2 Versions of Antlr in the Platform and Product Binaries

2012-08-02 Thread Buddhika Chamith
Hi,

I had to add a new version (r125163) (a lower version), since Hive was
having trouble working with the latest Antlr version during query
compilation. I will try to have another look at it and move to the newer
version of Antlr if possible.

Regards
Buddhika

On Thu, Aug 2, 2012 at 4:35 PM, Pradeep Fernando  wrote:

> guys why we are having two versions. ?? Please reply to the mail, if you
> are one who created the new versions of above bundles.
>
> --Pradeep
>
> ___
> Dev mailing list
> Dev@wso2.org
> http://wso2.org/cgi-bin/mailman/listinfo/dev
>
>


Re: [Dev] 2 Versions of Antlr in the Platform and Product Binaries

2012-08-02 Thread Buddhika Chamith
Yes, there were two separate commits. Obviously both versions needed to
match.

On Thu, Aug 2, 2012 at 5:04 PM, Pradeep Fernando  wrote:

> so you added both antlr and antlr-runtime bundles ?
>
> --Pradeep
>


Re: [Dev] Startup error when restarting carbon 4.0.1 after installing BAM feature category

2012-09-07 Thread Buddhika Chamith
Currently this DB is created at BAM distribution creation time. Ideally it
should be created at runtime if it does not already exist, which would
require some changes in the RSS component. However, I was able to
successfully start up the BAM server after copying this H2 DB to the
repository/databases location, so currently that can be used as a
workaround. Jira at [1].

[1] https://wso2.org/jira/browse/BAM-855

On Fri, Sep 7, 2012 at 8:55 PM, Dileepa Jayakody  wrote:

> Hi All,
>
> The feature installation is successful for BAM category but when
> restarting the server following error is shown [1];
> This must be due to files not been copied to repository/resources (eg:
> dashboard files, gadget-repo files) and the database required for RSS
> components is not being created, when we install the BAM features.
>
> Thanks,
> Dileepa
>
> [1]
> 2012-09-07 20:47:46,406]  INFO
> {org.wso2.carbon.dashboard.dashboardpopulator.GadgetPopulator} -  Couldn't
> find a Dashboard at
> '/media/Windows7_OS/UBUNTU/RELEASE_7_9/wso2carbon-4.0.1/repository/resources/dashboard/dashboard.xml'.
> Giving up.
> [2012-09-07 20:47:46,407]  INFO
> {org.wso2.carbon.dashboard.dashboardpopulator.GadgetPopulator} -  Couldn't
> find contents at
> '/media/Windows7_OS/UBUNTU/RELEASE_7_9/wso2carbon-4.0.1/repository/resources/dashboard/gadgets'.
> Giving up.
> [2012-09-07 20:47:46,431]  INFO
> {org.wso2.carbon.dashboard.gadgetrepopopulator.GadgetRepoPopulator} -
> Couldn't find a Dashboard at
> '/media/Windows7_OS/UBUNTU/RELEASE_7_9/wso2carbon-4.0.1/repository/resources/gadget-repo/gadget-repo.xml'.
> Giving up.
> [2012-09-07 20:47:46,433]  INFO
> {org.wso2.carbon.dashboard.gadgetrepopopulator.GadgetRepoPopulator} -
> Couldn't find contents at
> '/media/Windows7_OS/UBUNTU/RELEASE_7_9/wso2carbon-4.0.1/repository/resources/gadget-repo/gadgets'.
> Giving up.
> [2012-09-07 20:47:46,453]  INFO
> {org.wso2.carbon.dashboard.themepopulator.ThemePopulator} -  Couldn't find
> contents at
> '/media/Windows7_OS/UBUNTU/RELEASE_7_9/wso2carbon-4.0.1/repository/resources/gs-themes'.
> Giving up.
> [2012-09-07 20:47:46,463]  WARN
> {org.apache.synapse.commons.util.MiscellaneousUtil} -  Error loading
> properties from a file at from the System defined location:
> datasources.properties
> [2012-09-07 20:47:47,628] ERROR
> {org.wso2.carbon.rssmanager.core.internal.RSSManagerServiceComponent} -
> Error occurred while initializing system RSS instances
> org.wso2.carbon.rssmanager.core.RSSManagerException: Error occurred while
> retrieving system RSS instances
> at
> org.wso2.carbon.rssmanager.core.internal.dao.RSSDAOImpl.getAllSystemRSSInstances(RSSDAOImpl.java:386)
> at
> org.wso2.carbon.rssmanager.core.internal.RSSManagerServiceComponent.initSystemRSSInstances(RSSManagerServiceComponent.java:237)
> at
> org.wso2.carbon.rssmanager.core.internal.RSSManagerServiceComponent.activate(RSSManagerServiceComponent.java:97)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at
> org.eclipse.equinox.internal.ds.model.ServiceComponent.activate(ServiceComponent.java:252)
> at
> org.eclipse.equinox.internal.ds.model.ServiceComponentProp.activate(ServiceComponentProp.java:146)
> at
> org.eclipse.equinox.internal.ds.model.ServiceComponentProp.build(ServiceComponentProp.java:346)
> at
> org.eclipse.equinox.internal.ds.InstanceProcess.buildComponent(InstanceProcess.java:588)
> at
> org.eclipse.equinox.internal.ds.InstanceProcess.buildComponents(InstanceProcess.java:196)
> at
> org.eclipse.equinox.internal.ds.Resolver.getEligible(Resolver.java:328)
> at
> org.eclipse.equinox.internal.ds.SCRManager.serviceChanged(SCRManager.java:221)
> at
> org.eclipse.osgi.internal.serviceregistry.FilteredServiceListener.serviceChanged(FilteredServiceListener.java:104)
> at
> org.eclipse.osgi.framework.internal.core.BundleContextImpl.dispatchEvent(BundleContextImpl.java:861)
> at
> org.eclipse.osgi.framework.eventmgr.EventManager.dispatchEvent(EventManager.java:230)
> at
> org.eclipse.osgi.framework.eventmgr.ListenerQueue.dispatchEventSynchronous(ListenerQueue.java:148)
> at
> org.eclipse.osgi.internal.serviceregistry.ServiceRegistry.publishServiceEventPrivileged(ServiceRegistry.java:819)
> at
> org.eclipse.osgi.internal.serviceregistry.ServiceRegistry.publishServiceEvent(ServiceRegistry.java:771)
> at
> org.eclipse.osgi.internal.serviceregistry.ServiceRegistrationImpl.register(ServiceRegistrationImpl.java:130)
> at
> org.eclipse.osgi.internal.serviceregistry.ServiceRegistry.registerService(ServiceRegistry.java:214)
> at
> org.eclipse.osgi.framework.internal.core.BundleContextImpl.registerService(BundleContextImpl.java:433)
> at
> org.eclipse.osgi.framework.internal.

Re: [Dev] Doesn't BAM run on Windows?

2012-09-10 Thread Buddhika Chamith
On Mon, Sep 10, 2012 at 10:41 PM, Senaka Fernando  wrote:

> Hi all,
>
> BTW, AFAIU we should be able to use "Bootup Validator" to validate whether
> Cygwin is installed for BAM on Windows.
>

+1.

Regards
Buddhika


>
> Thanks,
> Senaka.
>
>
> On Mon, Sep 10, 2012 at 10:23 PM, Senaka Fernando  wrote:
>
>> Hi Tharindu,
>>
>> Thanks. Actually I just figured that out by reading the doc just now too.
>> That's helpful. By the way, about that other error, I found [1]. Will
>> install Cygwin and check this out.
>>
>> [1] https://wso2.org/jira/browse/BAM-643
>>
>> Thanks,
>> Senaka.
>>
>>
>> On Mon, Sep 10, 2012 at 10:18 PM, Tharindu Mathew wrote:
>>
>>> Refer [1] and [2].
>>>
>>> [1] -
>>> http://docs.wso2.org/wiki/display/BAM200/Installation+Prerequisites
>>>
>>> [2] -
>>> http://docs.wso2.org/wiki/display/BAM200/FAQ#FAQ-Igetanexceptionstating- 
>>> ERRORorgapachehadoophiveqlexecExecDriver-JobSubmissionfailedwithexceptionjavaioIOExceptionCannotrunprogramchmodCreateProcesserror2Thesystemcannotfindthefilespecified
>>>  javaioIOExceptionCannotrunprogramchmodCreateProcesserror2Thes
>>>
>>> On Mon, Sep 10, 2012 at 10:11 PM, Senaka Fernando wrote:
>>>
 Hi Tharindu,

 Job Submission failed with exception 'java.io.IOException(Cannot run
 program "chmod": CreateProcess error=2, The system cannot find the file
 specified)'
 [2012-09-10 22:10:03,389] ERROR
 {org.apache.hadoop.hive.ql.exec.ExecDriver} -  Job Submission failed with
 exception 'java.io.IOException(Cannot run program "chmod": CreateProcess
 error=2, The sys
 tem cannot find the file specified)'
 java.io.IOException: Cannot run program "chmod": CreateProcess error=2,
 The system cannot find the file specified
 at java.lang.ProcessBuilder.start(ProcessBuilder.java:460)
 at org.apache.hadoop.util.Shell.runCommand(Shell.java:200)
 at org.apache.hadoop.util.Shell.run(Shell.java:182)
 at
 org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:375)
 at org.apache.hadoop.util.Shell.execCommand(Shell.java:461)
 at org.apache.hadoop.util.Shell.execCommand(Shell.java:444)
 at
 org.apache.hadoop.fs.RawLocalFileSystem.execCommand(RawLocalFileSystem.java:553)
 at
 org.apache.hadoop.fs.RawLocalFileSystem.execSetPermission(RawLocalFileSystem.java:545)
 at
 org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:531)
 at
 org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:324)
 at
 org.apache.hadoop.fs.FilterFileSystem.mkdirs(FilterFileSystem.java:183)
 at
 org.apache.hadoop.mapreduce.JobSubmissionFiles.getStagingDir(JobSubmissionFiles.java:116)
 at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:798)
 at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:792)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:396)
 at
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1123)
 at
 org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:792)
 at
 org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:766)
 at
 org.apache.hadoop.hive.ql.exec.ExecDriver.execute(ExecDriver.java:458)
 at
 org.apache.hadoop.hive.ql.exec.ExecDriver.main(ExecDriver.java:728)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
 at
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
 at java.lang.reflect.Method.invoke(Method.java:597)
 at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
 Caused by: java.io.IOException: CreateProcess error=2, The system
 cannot find the file specified
 at java.lang.ProcessImpl.create(Native Method)
 at java.lang.ProcessImpl.(ProcessImpl.java:81)
 at java.lang.ProcessImpl.start(ProcessImpl.java:30)
 at java.lang.ProcessBuilder.start(ProcessBuilder.java:453)
 ... 25 more

 Execution failed with exit status: 2
 [2012-09-10 22:10:05,771] ERROR {org.apache.hadoop.hive.ql.exec.Task} -
  Execution failed with exit status: 2

 Thanks,
 Senaka.

 --
 *Senaka Fernando*
 Member - Integration Technologies Management Committee;
 Technical Lead; WSO2 Inc.; http://wso2.com*
 Member; Apache Software Foundation; http://apache.org

 E-mail: senaka AT wso2.com
 **P: +1 408 754 7388; ext: 51736*; *M: +94 77 322 1818
 Linked-In: http://linkedin.com/in/senakafernando

 *Lean . Enterprise . Middleware


>>>
>>>
>>> --
>>> Regards,

Re: [Dev] Doesn't BAM run on Windows?

2012-09-10 Thread Buddhika Chamith
On Mon, Sep 10, 2012 at 11:12 PM, Senaka Fernando  wrote:

> Hi guys,
>
> Tried with Cygwin and works perfectly. By the way also add the following
> to the BAM docs to make this complete.
>
> "After installing Cygwin, please update your PATH variable by appending
> ";C:\cygwin\bin". This is required since the default installation of Cygwin
> might not do this."
>
> Better if done with some images.
>
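The quoted PATH step matters because Hadoop shells out to Unix utilities such as chmod (the failing call in the stack trace earlier in this thread). A small generic POSIX sketch (not a WSO2 script) can sanity-check that those utilities are resolvable:

```shell
# Sketch: check that the Unix utilities Hadoop shells out to (e.g. chmod)
# are resolvable on PATH; on Windows, appending C:\cygwin\bin to PATH is
# what makes this check pass after installing Cygwin.
if command -v chmod >/dev/null 2>&1; then
  echo "chmod found"
else
  echo "chmod missing"
fi
```

A check along these lines is also the sort of thing a "Bootup Validator" rule (mentioned below in this thread) could perform at server startup.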

Noted. Thanks for the feedback.

Regards
Buddhika

>
> Thanks,
> Senaka.
>
>
> On Mon, Sep 10, 2012 at 10:49 PM, Buddhika Chamith wrote:
>
>>
>>
>> On Mon, Sep 10, 2012 at 10:41 PM, Senaka Fernando wrote:
>>
>>> Hi all,
>>>
>>> BTW, AFAIU we should be able to use "Bootup Validator" to validate
>>> whether Cygwin is installed for BAM on Windows.
>>>
>>
>> +1.
>>
>> Regards
>> Buddhika
>>
>>
>>>
>>> Thanks,
>>> Senaka.
>>>
>>>
>>> On Mon, Sep 10, 2012 at 10:23 PM, Senaka Fernando wrote:
>>>
>>>> Hi Tharindu,
>>>>
>>>> Thanks. Actually I just figured that out by reading the doc just now
>>>> too. That's helpful. By the way, about that other error, I found [1]. Will
>>>> install Cygwin and check this out.
>>>>
>>>> [1] https://wso2.org/jira/browse/BAM-643
>>>>
>>>> Thanks,
>>>> Senaka.
>>>>
>>>>
>>>> On Mon, Sep 10, 2012 at 10:18 PM, Tharindu Mathew wrote:
>>>>
>>>>> Refer [1] and [2].
>>>>>
>>>>> [1] -
>>>>> http://docs.wso2.org/wiki/display/BAM200/Installation+Prerequisites
>>>>>
>>>>> [2] -
>>>>> http://docs.wso2.org/wiki/display/BAM200/FAQ#FAQ-Igetanexceptionstating-ERRORorgapachehadoophiveqlexecExecDriver-JobSubmissionfailedwithexceptionjavaioIOExceptionCannotrunprogramchmodCreateProcesserror2Thesystemcannotfindthefilespecified
>>>>>
>>>>> On Mon, Sep 10, 2012 at 10:11 PM, Senaka Fernando wrote:
>>>>>
>>>>>> Hi Tharindu,
>>>>>>
>>>>>> Job Submission failed with exception 'java.io.IOException(Cannot run
>>>>>> program "chmod": CreateProcess error=2, The system cannot find the file
>>>>>> specified)'
>>>>>> [2012-09-10 22:10:03,389] ERROR
>>>>>> {org.apache.hadoop.hive.ql.exec.ExecDriver} -  Job Submission failed with
>>>>>> exception 'java.io.IOException(Cannot run program "chmod": CreateProcess
>>>>>> error=2, The sys
>>>>>> tem cannot find the file specified)'
>>>>>> java.io.IOException: Cannot run program "chmod": CreateProcess
>>>>>> error=2, The system cannot find the file specified
>>>>>> at java.lang.ProcessBuilder.start(ProcessBuilder.java:460)
>>>>>> at org.apache.hadoop.util.Shell.runCommand(Shell.java:200)
>>>>>> at org.apache.hadoop.util.Shell.run(Shell.java:182)
>>>>>> at
>>>>>> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:375)
>>>>>> at org.apache.hadoop.util.Shell.execCommand(Shell.java:461)
>>>>>> at org.apache.hadoop.util.Shell.execCommand(Shell.java:444)
>>>>>> at
>>>>>> org.apache.hadoop.fs.RawLocalFileSystem.execCommand(RawLocalFileSystem.java:553)
>>>>>> at
>>>>>> org.apache.hadoop.fs.RawLocalFileSystem.execSetPermission(RawLocalFileSystem.java:545)
>>>>>> at
>>>>>> org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:531)
>>>>>> at
>>>>>> org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:324)
>>>>>> at
>>>>>> org.apache.hadoop.fs.FilterFileSystem.mkdirs(FilterFileSystem.java:183)
>>>>>> at
>>>>>> org.apache.hadoop.mapreduce.JobSubmissionFiles.getStagingDir(JobSubmissionFiles.java:116)
>>>>>> at
>>>>>> org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:798)
>>>>>> at
>>>>>> org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:792)
>>>>>>

[Dev] Merging BAM Activity Service and Mediator stream definitions

2012-10-25 Thread Buddhika Chamith
Currently we have a single stream definition for the service data agent that includes both service statistics and activity information. However, the activity mediator (BAM mediator) agent uses a different stream definition, since it differs in that no statistics information is present. This makes it difficult and less performant to correlate a message flow between an App Server and an ESB, because it requires a join to get data from two data column families. Analysis would become easier if both activity service and mediation data were published into a single stream.

A possible solution might be to separate the publishing of statistics data and activity data in the service agent into two streams (this means two events would need to be published instead of one), and also to have a separate stream definition for the baseline attributes of the BAM mediator publisher (another stream definition would be required when publishing custom attributes). This would require changes to both the service and mediation publishers to accommodate publishing to separate streams, and IMO it makes things a bit complex, since there would be a couple of additional stream definitions in certain cases. We would also have to consider the performance cost of publishing two events instead of one.

Another option might be to reuse the current stream definition of the activity service agent for the activity mediator agent, so that all the data for activity service, service statistics and activity mediation is published to a single column family. In that case we can simply use Hive to create different views of the same column family for different types of analytics (e.g. the analytics required for service stats, activity, etc.). The downside of this approach is that, since all the data goes into a single column family, that column family is now loaded more heavily.
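The Hive-views idea in the second option could look roughly as follows. This is a hedged sketch only: the column family name (`activity_data_cf`) and the column names are invented for illustration, not the actual BAM schema.

```sql
-- Hypothetical sketch: two Hive views over one shared column family,
-- giving service-stats and activity analytics their own projections.
-- Table and column names are illustrative only.
CREATE VIEW service_stats AS
  SELECT service_name, response_time, fault_count
  FROM activity_data_cf
  WHERE response_time IS NOT NULL;

CREATE VIEW activity_flow AS
  SELECT activity_id, message_id, host
  FROM activity_data_cf
  WHERE activity_id IS NOT NULL;
```

Each analytics script would then query its own view instead of filtering the raw column family directly.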

After some discussion we are thinking of going with the first approach. Kasun, please add anything I have missed. Please shout if there are any concerns or suggestions.

Regards
Buddhika
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] 4.0.3 build break - logging summarizer

2012-10-25 Thread Buddhika Chamith
The poms for these jars are generated and installed to the repo during the Hive dependency build, since they are not present by default in the Hive sources. If we can exclude the transitive dependency check, it should be OK, I think.
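One way to avoid the transitive resolution of those generated jars would be a Maven exclusion in the consuming pom. The fragment below is a hedged sketch only: the enclosing dependency and version shown are illustrative (the hive artifact coordinates are taken from the build warnings above), not the actual logging-summarizer pom.

```xml
<!-- Hypothetical sketch: exclude a transitively-resolved Hive jar whose
     pom only exists after the Hive dependency build. The enclosing
     dependency here is illustrative, not the real one. -->
<dependency>
    <groupId>org.wso2.carbon</groupId>
    <artifactId>some-component-pulling-in-hive</artifactId>
    <version>4.0.3</version>
    <exclusions>
        <exclusion>
            <groupId>org.apache.hive</groupId>
            <artifactId>hive-exec</artifactId>
        </exclusion>
    </exclusions>
</dependency>
```

With the exclusion in place, Maven no longer tries to resolve the missing hive poms when building on a clean repo.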

Regards
Buddhika

On Thu, Oct 25, 2012 at 3:40 PM, Amila Maha Arachchi wrote:

> These jars are not included in the logging summarizer pom. But seems they
> are being looked by transitive dependencies.
>
> On Thu, Oct 25, 2012 at 3:30 PM, Amila Maha Arachchi wrote:
>
>> I think these hive jars are not deployed to the repo. Most probably due
>> to them not having pom.xmls (they have ivy.xmls)
>>
>> Pradeep, is it the case?
>>
>>
>> On Thu, Oct 25, 2012 at 3:15 PM, Samisa Abeysinghe wrote:
>>
>>> Note that I am building 4.0.3 on a clean repo
>>>
>>>
>>> On Thu, Oct 25, 2012 at 3:12 PM, Samisa Abeysinghe wrote:
>>>
 [INFO] Building WSO2 Carbon - Logging Summarizer 4.0.3
 [INFO]
 
 [WARNING] The POM for
 org.eclipse.osgi:org.eclipse.osgi.services:jar:3.2.0.v20090520-1800 is
 invalid, transitive dependencies (if any) will not be available, enable
 debug logging for more details
 [WARNING] The POM for org.apache.hive:hive-exec:jar:0.8.1-wso2v4 is
 missing, no dependency information available
 [WARNING] The POM for org.apache.hive:hive-shims:jar:0.8.1-wso2v4 is
 missing, no dependency information available
 [WARNING] The POM for org.apache.hive:hive-builtins:jar:0.8.1-wso2v4 is
 missing, no dependency information available
 [WARNING] The POM for org.apache.hive:hive-service:jar:0.8.1-wso2v4 is
 missing, no dependency information available
 [WARNING] The POM for org.apache.hive:hive-serde:jar:0.8.1-wso2v4 is
 missing, no dependency information available
 [WARNING] The POM for org.apache.hive:hive-metastore:jar:0.8.1-wso2v4
 is missing, no dependency information available
 [WARNING] The POM for org.apache.hive:hive-cassandra:jar:0.8.1-wso2v4
 is missing, no dependency information available
 [WARNING] The POM for org.apache.hive:hive-jdbc:jar:0.8.1-wso2v4 is
 missing, no dependency information available
 [WARNING] The POM for
 org.wso2.carbon:hive-jdbc-handler:jar:0.8.1-wso2v4 is missing, no
 dependency information available
 [WARNING] The POM for net.sf.saxon:saxon:jar:8.9 is missing, no
 dependency information available
 [WARNING] The POM for
 org.eclipse.equinox:org.eclipse.equinox.http.servlet:jar:1.0.200.v20090520-1800
 is invalid, transitive dependencies (if any) will not be available, enable
 debug logging for more details

 Thanks,
 Samisa...

 Samisa Abeysinghe
 VP Engineering
 WSO2 Inc.
 http://wso2.com
 http://wso2.org

  Thanks,
>>> Samisa...
>>>
>>> Samisa Abeysinghe
>>> VP Engineering
>>> WSO2 Inc.
>>> http://wso2.com
>>> http://wso2.org
>>>
>>>
>>>
>>> ___
>>> Dev mailing list
>>> Dev@wso2.org
>>> http://wso2.org/cgi-bin/mailman/listinfo/dev
>>>
>>>
>>
>>
>> --
>> *Amila Maharachchi*
>> Technical Lead
>> Member, Management Committee - Cloud & Platform TG
>> WSO2, Inc.; http://wso2.com
>>
>> Blog: http://maharachchi.blogspot.com
>> Mobile: +94719371446
>>
>>
>>
>
>
> --
> *Amila Maharachchi*
> Technical Lead
> Member, Management Committee - Cloud & Platform TG
> WSO2, Inc.; http://wso2.com
>
> Blog: http://maharachchi.blogspot.com
> Mobile: +94719371446
>
>
>
> ___
> Dev mailing list
> Dev@wso2.org
> http://wso2.org/cgi-bin/mailman/listinfo/dev
>
>
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] usability issue with BAM 2.0.1

2012-10-25 Thread Buddhika Chamith
We are currently working on improving the Dashboard to detect the deployed
toolboxes and generate navigation links accordingly. Will be available in
2.0.2. JIRA at [1].

Regards
Buddhika

[1] https://wso2.org/jira/browse/BAM-954

On Fri, Oct 26, 2012 at 10:46 AM, Samisa Abeysinghe  wrote:

> https://wso2.org/jira/browse/BAM-960
>
>
> On Fri, Oct 26, 2012 at 5:03 AM, Jorge Infante Osorio wrote:
>
>> Hi all.
>>
>> I'm using BAM 2.0.1 with the Service_Statistics_Monitoring and
>> Mediation_Statistics_Monitoring toolboxes installed.
>>
>> When I go to the BAM Dashboard I don't see a way to switch between the AS
>> and ESB statistics. I don't see a toolbox list or anything that helps me to
>> switch between them.
>> I had to change the URL to see the AS or ESB statistics.
>>
>> AS: https://ip:port/bamdashboards/service_stats/index.jsp
>> ESB:https://ip:port/bamdashboards/mediation_stats/esb_proxy.jsp
>>
>>
>> Saludos,
>> Ing. Jorge Infante Osorio.
>> CDAE.
>> Fac. 5.
>> UCI.
>> "In a perfect world, pizzas would be a healthy food, laptops would charge
>> from a wireless power source, and all JARs would be OSGi bundles"
>>
>>
>>
>>
>> 10th ANNIVERSARY OF THE CREATION OF THE UNIVERSITY OF INFORMATICS
>> SCIENCES...
>> CONNECTED TO THE FUTURE, CONNECTED TO THE REVOLUTION
>>
>> http://www.uci.cu
>> http://www.facebook.com/universidad.uci
>> http://www.flickr.com/photos/universidad_uci
>> ___
>> Dev mailing list
>> Dev@wso2.org
>> http://wso2.org/cgi-bin/mailman/listinfo/dev
>>
> Thanks,
> Samisa...
>
> Samisa Abeysinghe
> VP Engineering
> WSO2 Inc.
> http://wso2.com
> http://wso2.org
>
>
>
> ___
> Dev mailing list
> Dev@wso2.org
> http://wso2.org/cgi-bin/mailman/listinfo/dev
>
>
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] usability issue with BAM 2.0.1

2012-10-26 Thread Buddhika Chamith
Yes. It can be done by uncommenting the required dashboard elements in
dashboard.xml, found under
repository/deployment/server/jaggeryapps/bamdashboards.
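As a purely hypothetical illustration of the kind of entry involved (the element names below are invented, not the actual dashboard.xml schema; the two URL paths come from the earlier mail in this thread):

```xml
<!-- Hypothetical sketch only: element names are invented, not the real
     dashboard.xml schema. Uncommenting an entry enables its nav link. -->
<dashboard>
    <!-- <link name="Service Statistics" url="service_stats/index.jsp"/> -->
    <link name="Mediation Statistics" url="mediation_stats/esb_proxy.jsp"/>
</dashboard>
```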

Regards
Buddhika

On Fri, Oct 26, 2012 at 9:53 PM, Tharindu Mathew  wrote:

> Hi,
>
> The way to do this for now is by editing a config file, right? That should
> be good enough... Buddhika, is this correct?
>
>
> On Thu, Oct 25, 2012 at 11:25 PM, Buddhika Chamith wrote:
>
>> We are currently working on improving the Dashboard to detect the
>> deployed toolboxes and generate navigation links accordingly. Will be
>> available in 2.0.2. JIRA at [1].
>>
>> Regards
>> Buddhika
>>
>> [1] https://wso2.org/jira/browse/BAM-954
>>
>>
>> On Fri, Oct 26, 2012 at 10:46 AM, Samisa Abeysinghe wrote:
>>
>>> https://wso2.org/jira/browse/BAM-960
>>>
>>>
>>> On Fri, Oct 26, 2012 at 5:03 AM, Jorge Infante Osorio wrote:
>>>
>>>> Hi all.
>>>>
>>>> I´m using BAM 2.0.1 with Service_Statistics_Monitoring  and
>>>> Mediation_Statistics_Monitoring tool box installed.
>>>>
>>>> When I go to the BAM Dashboard I don´t see a way to switch between AS
>>>> and
>>>> ESB statistic. I don´t see a tool boxes list or something that help me
>>>> to
>>>> switch between them.
>>>> I had to change the URL to see the AS or ESB statistic.
>>>>
>>>> AS: https://ip:port/bamdashboards/service_stats/index.jsp
>>>> ESB:https://ip:port/bamdashboards/mediation_stats/esb_proxy.jsp
>>>>
>>>>
>>>> Saludos,
>>>> Ing. Jorge Infante Osorio.
>>>> CDAE.
>>>> Fac. 5.
>>>> UCI.
>>>> "In a perfect world, pizzas would be a healthy food, laptops would
>>>> charge from a wireless power source, and all JARs would be OSGi
>>>> bundles"
>>>>
>>>>
>>>>
>>>>
>>>> 10th ANNIVERSARY OF THE CREATION OF THE UNIVERSITY OF INFORMATICS
>>>> SCIENCES...
>>>> CONNECTED TO THE FUTURE, CONNECTED TO THE REVOLUTION
>>>>
>>>> http://www.uci.cu
>>>> http://www.facebook.com/universidad.uci
>>>> http://www.flickr.com/photos/universidad_uci
>>>> ___
>>>> Dev mailing list
>>>> Dev@wso2.org
>>>> http://wso2.org/cgi-bin/mailman/listinfo/dev
>>>>
>>> Thanks,
>>> Samisa...
>>>
>>> Samisa Abeysinghe
>>> VP Engineering
>>> WSO2 Inc.
>>> http://wso2.com
>>> http://wso2.org
>>>
>>>
>>>
>>> ___
>>> Dev mailing list
>>> Dev@wso2.org
>>> http://wso2.org/cgi-bin/mailman/listinfo/dev
>>>
>>>
>>
>> ___
>> Dev mailing list
>> Dev@wso2.org
>> http://wso2.org/cgi-bin/mailman/listinfo/dev
>>
>>
>
>
> --
> Regards,
>
> Tharindu
>
> blog: http://mackiemathew.com/
> M: +9459908
>
>
___
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev


Re: [Dev] [BUILD FAILED] WSO2 Carbon 4.0.x Platform_4.0.3 build #45

2012-11-01 Thread Buddhika Chamith
This was fixed a while ago. It should build after an SVN update inside
components/mediators/bam.

Regards
Buddhika

On Thu, Nov 1, 2012 at 3:39 PM, Maheshika Goonetilleke
wrote:

> Hi All
>
> The Carbon Platform_4.0.3 has failed due to the below errors;
>
> 01-Nov-2012 02:31:48 [INFO]
> 01-Nov-2012
> 02:31:48 [INFO] BUILD FAILURE 01-Nov-2012 02:31:48[INFO]
>  
> 01-Nov-2012
> 02:31:48 [INFO] Total time: 32:53.358s01-Nov-2012 02:31:48 [INFO]
> Finished at: Thu Nov 01 02:31:47 PDT 201201-Nov-2012 02:31:59 [INFO]
> Final Memory: 1427M/2662M01-Nov-2012 02:31:59 [INFO]
> 01-Nov-2012
> 02:31:59 [ERROR] Failed to execute goal
> org.apache.maven.plugins:maven-compiler-plugin:2.3.2:compile
> (default-compile) on project org.wso2.carbon.mediator.bam: Compilation
> failure: Compilation failure: 01-Nov-2012 02:31:59 [ERROR]
> /home/bamboo/Bamboo-3.4/source-repository/build-dir/WCB001-PLA001-JOB1/components/mediators/bam/org.wso2.carbon.mediator.bam/4.0.3/src/main/java/org/wso2/carbon/mediator/bam/Stream.java:[34,44]
> package org.wso2.carbon.mediator.bam.builders does not exist 01-Nov-2012
> 02:31:59 [ERROR]
> /home/bamboo/Bamboo-3.4/source-repository/build-dir/WCB001-PLA001-JOB1/components/mediators/bam/org.wso2.carbon.mediator.bam/4.0.3/src/main/java/org/wso2/carbon/mediator/bam/Stream.java:[35,44]
> package org.wso2.carbon.mediator.bam.builders does not exist 01-Nov-2012
> 02:31:59 [ERROR]
> /home/bamboo/Bamboo-3.4/source-repository/build-dir/WCB001-PLA001-JOB1/components/mediators/bam/org.wso2.carbon.mediator.bam/4.0.3/src/main/java/org/wso2/carbon/mediator/bam/Stream.java:[36,44]
> package org.wso2.carbon.mediator.bam.builders does not exist 01-Nov-2012
> 02:31:59 [ERROR]
> /home/bamboo/Bamboo-3.4/source-repository/build-dir/WCB001-PLA001-JOB1/components/mediators/bam/org.wso2.carbon.mediator.bam/4.0.3/src/main/java/org/wso2/carbon/mediator/bam/Stream.java:[51,12]
> cannot find symbol 01-Nov-2012 02:31:59 [ERROR] symbol  : class
> PayloadDataBuilder01-Nov-2012 02:31:59 [ERROR] location: class
> org.wso2.carbon.mediator.bam.Stream01-Nov-2012 02:31:59 [ERROR]
> /home/bamboo/Bamboo-3.4/source-repository/build-dir/WCB001-PLA001-JOB1/components/mediators/bam/org.wso2.carbon.mediator.bam/4.0.3/src/main/java/org/wso2/carbon/mediator/bam/Stream.java:[52,12]
> cannot find symbol 01-Nov-2012 02:31:59 [ERROR] symbol  : class
> MetaDataBuilder01-Nov-2012 02:31:59 [ERROR] location: class
> org.wso2.carbon.mediator.bam.Stream01-Nov-2012 02:31:59 [ERROR]
> /home/bamboo/Bamboo-3.4/source-repository/build-dir/WCB001-PLA001-JOB1/components/mediators/bam/org.wso2.carbon.mediator.bam/4.0.3/src/main/java/org/wso2/carbon/mediator/bam/Stream.java:[53,12]
> cannot find symbol 01-Nov-2012 02:31:59 [ERROR] symbol  : class
> CorrelationDataBuilder01-Nov-2012 02:31:59 [ERROR] location: class
> org.wso2.carbon.mediator.bam.Stream01-Nov-2012 02:31:59 [ERROR]
> /home/bamboo/Bamboo-3.4/source-repository/build-dir/WCB001-PLA001-JOB1/components/mediators/bam/org.wso2.carbon.mediator.bam/4.0.3/src/main/java/org/wso2/carbon/mediator/bam/Stream.java:[34,44]
> package org.wso2.carbon.mediator.bam.builders does not exist 01-Nov-2012
> 02:31:59 [ERROR]
> /home/bamboo/Bamboo-3.4/source-repository/build-dir/WCB001-PLA001-JOB1/components/mediators/bam/org.wso2.carbon.mediator.bam/4.0.3/src/main/java/org/wso2/carbon/mediator/bam/Stream.java:[35,44]
> package org.wso2.carbon.mediator.bam.builders does not exist 01-Nov-2012
> 02:31:59 [ERROR]
> /home/bamboo/Bamboo-3.4/source-repository/build-dir/WCB001-PLA001-JOB1/components/mediators/bam/org.wso2.carbon.mediator.bam/4.0.3/src/main/java/org/wso2/carbon/mediator/bam/Stream.java:[36,44]
> package org.wso2.carbon.mediator.bam.builders does not exist 01-Nov-2012
> 02:31:59 [ERROR]
> /home/bamboo/Bamboo-3.4/source-repository/build-dir/WCB001-PLA001-JOB1/components/mediators/bam/org.wso2.carbon.mediator.bam/4.0.3/src/main/java/org/wso2/carbon/mediator/bam/Stream.java:[51,12]
> cannot find symbol 01-Nov-2012 02:31:59 [ERROR] symbol  : class
> PayloadDataBuilder01-Nov-2012 02:31:59 [ERROR] location: class
> org.wso2.carbon.mediator.bam.Stream01-Nov-2012 02:31:59 [ERROR]
> /home/bamboo/Bamboo-3.4/source-repository/build-dir/WCB001-PLA001-JOB1/components/mediators/bam/org.wso2.carbon.mediator.bam/4.0.3/src/main/java/org/wso2/carbon/mediator/bam/Stream.java:[52,12]
> cannot find symbol 01-Nov-2012 02:31:59 [ERROR] symbol  : class
> MetaDataBuilder01-Nov-2012 02:31:59 [ERROR] location: class
> org.wso2.carbon.mediator.bam.Stream01-Nov-2012 02:31:59 [ERROR]
> /home/bamboo/Bamboo-3.4/source-repository/build-dir/WCB001-PLA001-JOB1/components/mediators/bam/org.wso2.carbon.mediator.bam/4.0.3/src/main/java/org/wso2/carbon/mediator/bam/Stream.java:[53,12]
> cannot find symbol 01-Nov-2012 02:31:59 [ERROR] symbol  : class
> C