[jira] [Created] (EAGLE-880) Policy get corrupted when timestamp is given a valid date in milliseconds

2017-01-20 Thread DanielZhou (JIRA)
DanielZhou created EAGLE-880:


 Summary: Policy get corrupted when timestamp is given a  valid 
date in milliseconds
 Key: EAGLE-880
 URL: https://issues.apache.org/jira/browse/EAGLE-880
 Project: Eagle
  Issue Type: Bug
 Environment: OS: centos 6
Database: MySQL
Storm: 0.10


Reporter: DanielZhou


Description:
Timestamp metadata type: long
Input for timestamp: 1484938526200 (equal to Fri Jan 20 2017 10:55:26 
GMT-0800)
The storm topology log indicates that the policy gets corrupted because of the 
timestamp input.

Error messages:
Caused by: java.lang.NumberFormatException: For input string: "1484938526200"
 at 
java.lang.NumberFormatException.forInputString(NumberFormatException.java:65) 
~[na:1.7.0_55]
 at java.lang.Integer.parseInt(Integer.java:495) ~[na:1.7.0_55]
 at java.lang.Integer.parseInt(Integer.java:527) ~[na:1.7.0_55]

Cause:
The Siddhi query parser ALWAYS parses the timestamp field as "signed_int", no 
matter whether the constant type defined in the Siddhi stream is "long", 
"double", or "float".

org.wso2.siddhi.query.compiler.internal.SiddhiQLBaseVisitorImpl: 1177

public Constant visitConstant_value(@NotNull Constant_valueContext ctx) {
    if (ctx.bool_value() != null) {
        return Expression.value(((BoolConstant) this.visit(ctx.bool_value())).getValue().booleanValue());
    } else if (ctx.signed_double_value() != null) {
        return Expression.value(((DoubleConstant) this.visit(ctx.signed_double_value())).getValue().doubleValue());
    } else if (ctx.signed_float_value() != null) {
        return Expression.value(((FloatConstant) this.visit(ctx.signed_float_value())).getValue().floatValue());
    } else if (ctx.signed_long_value() != null) {
        return Expression.value(((LongConstant) this.visit(ctx.signed_long_value())).getValue().longValue());
    } else if (ctx.signed_int_value() != null) {
        return Expression.value(((IntConstant) this.visit(ctx.signed_int_value())).getValue().intValue());
    } else if (ctx.time_value() != null) {
        return (TimeConstant) this.visit(ctx.time_value());
    } else if (ctx.string_value() != null) {
        return Expression.value(((StringConstant) this.visit(ctx.string_value())).getValue());
    } else {
        throw this.newSiddhiParserException(ctx);
    }
}
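
For reference, a minimal standalone sketch (plain JDK only, not Eagle or Siddhi 
code) showing why a millisecond epoch such as 1484938526200 parses fine as a 
long but fails as a signed int, matching the stack trace above:

{code}
public class TimestampParseDemo {
    public static void main(String[] args) {
        String millis = "1484938526200"; // Fri Jan 20 2017 10:55:26 GMT-0800

        // Parsing as long succeeds: the value fits in 64 bits.
        long asLong = Long.parseLong(millis);
        System.out.println("as long: " + asLong);

        // Parsing as int fails: 1484938526200 > Integer.MAX_VALUE (2147483647),
        // so Integer.parseInt throws NumberFormatException.
        try {
            Integer.parseInt(millis);
        } catch (NumberFormatException e) {
            System.out.println("int parse failed: " + e.getMessage());
        }
    }
}
{code}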


Possible solutions:
- We can update the Siddhi dependency from "3.0.5" to "3.1.x" to verify whether 
this is a Siddhi engine bug. This approach will introduce code changes around 
Siddhi's API calls, as some APIs have changed.

- Or, instead of using "milliseconds", we can use "seconds" as the input, and 
convert the date/time to seconds in all audit event parsers (a rough sketch 
follows below).
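
A rough illustration of the second workaround; the helper name is hypothetical 
and not part of the Eagle code base:

{code}
// Hypothetical helper: converts an epoch timestamp in milliseconds to seconds,
// so the value fits into a signed 32-bit int.
public final class TimestampConverter {
    private TimestampConverter() {}

    public static long toSeconds(long epochMillis) {
        return epochMillis / 1000L;
    }

    public static void main(String[] args) {
        long millis = 1484938526200L;
        System.out.println(toSeconds(millis)); // 1484938526, below Integer.MAX_VALUE
    }
}
{code}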

Let me know if you can reproduce this bug and which approach you prefer.

Thanks and regards,
Da



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (EAGLE-880) Policy get corrupted when timestamp is given a valid date in milliseconds

2017-01-20 Thread DanielZhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/EAGLE-880?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

DanielZhou updated EAGLE-880:
-
Affects Version/s: v0.4.0

> Policy get corrupted when timestamp is given a  valid date in milliseconds
> --
>
> Key: EAGLE-880
> URL: https://issues.apache.org/jira/browse/EAGLE-880
> Project: Eagle
>  Issue Type: Bug
>Affects Versions: v0.4.0
> Environment: OS: centos 6
> Database: MySQL
> Storm: 0.10
>Reporter: DanielZhou
>
> Description:
> Timestamp metadata type: long
> Input for timestamp: 1484938526200 (equal to Fri Jan 20 2017 10:55:26 
> GMT-0800)
> The storm topology log indicates that the policy gets corrupted because of the 
> timestamp input.
> Error messages:
> Caused by: java.lang.NumberFormatException: For input string: "1484938526200"
>  at 
> java.lang.NumberFormatException.forInputString(NumberFormatException.java:65) 
> ~[na:1.7.0_55]
>  at java.lang.Integer.parseInt(Integer.java:495) ~[na:1.7.0_55]
>  at java.lang.Integer.parseInt(Integer.java:527) ~[na:1.7.0_55]
> Cause:
> The Siddhi query parser ALWAYS parses the timestamp field as "signed_int", no 
> matter whether the constant type defined in the Siddhi stream is "long", 
> "double", or "float".
> org.wso2.siddhi.query.compiler.internal.SiddhiQLBaseVisitorImpl: 1177
> public Constant visitConstant_value(@NotNull Constant_valueContext ctx) {
>     if (ctx.bool_value() != null) {
>         return Expression.value(((BoolConstant) this.visit(ctx.bool_value())).getValue().booleanValue());
>     } else if (ctx.signed_double_value() != null) {
>         return Expression.value(((DoubleConstant) this.visit(ctx.signed_double_value())).getValue().doubleValue());
>     } else if (ctx.signed_float_value() != null) {
>         return Expression.value(((FloatConstant) this.visit(ctx.signed_float_value())).getValue().floatValue());
>     } else if (ctx.signed_long_value() != null) {
>         return Expression.value(((LongConstant) this.visit(ctx.signed_long_value())).getValue().longValue());
>     } else if (ctx.signed_int_value() != null) {
>         return Expression.value(((IntConstant) this.visit(ctx.signed_int_value())).getValue().intValue());
>     } else if (ctx.time_value() != null) {
>         return (TimeConstant) this.visit(ctx.time_value());
>     } else if (ctx.string_value() != null) {
>         return Expression.value(((StringConstant) this.visit(ctx.string_value())).getValue());
>     } else {
>         throw this.newSiddhiParserException(ctx);
>     }
> }
>   
> Possible solutions:
> - We can update the Siddhi dependency from "3.0.5" to "3.1.x" to verify 
> whether this is a Siddhi engine bug. This approach will introduce code changes 
> around Siddhi's API calls, as some APIs have changed.
> - Or, instead of using "milliseconds", we can use "seconds" as the input, and 
> convert the date/time to seconds in all audit event parsers.
>   
> Let me know if you can reproduce this bug and which approach you prefer.
> Thanks and regards,
> Da



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (EAGLE-880) Policy get corrupted when timestamp is given a valid date in milliseconds

2017-01-20 Thread DanielZhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/EAGLE-880?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

DanielZhou updated EAGLE-880:
-
Description: 
Description:
Timestamp metadata type: long
Input for timestamp: 1484938526200 (equal to Fri Jan 20 2017 10:55:26 
GMT-0800)
The storm topology log indicates that the policy gets corrupted because of the 
timestamp input.

Error messages:
Caused by: java.lang.NumberFormatException: For input string: "1484938526200"
 at 
java.lang.NumberFormatException.forInputString(NumberFormatException.java:65) 
~[na:1.7.0_55]
 at java.lang.Integer.parseInt(Integer.java:495) ~[na:1.7.0_55]
 at java.lang.Integer.parseInt(Integer.java:527) ~[na:1.7.0_55]

Cause:
The Siddhi query parser ALWAYS parses the timestamp field as "signed_int", no 
matter whether the constant type defined in the Siddhi stream is "long", 
"double", or "float".

org.wso2.siddhi.query.compiler.internal.SiddhiQLBaseVisitorImpl: 1177

public Constant visitConstant_value(@NotNull Constant_valueContext ctx) {
    if (ctx.bool_value() != null) {
        return Expression.value(((BoolConstant) this.visit(ctx.bool_value())).getValue().booleanValue());
    } else if (ctx.signed_double_value() != null) {
        return Expression.value(((DoubleConstant) this.visit(ctx.signed_double_value())).getValue().doubleValue());
    } else if (ctx.signed_float_value() != null) {
        return Expression.value(((FloatConstant) this.visit(ctx.signed_float_value())).getValue().floatValue());
    } else if (ctx.signed_long_value() != null) {
        return Expression.value(((LongConstant) this.visit(ctx.signed_long_value())).getValue().longValue());
    } else if (ctx.signed_int_value() != null) {
        return Expression.value(((IntConstant) this.visit(ctx.signed_int_value())).getValue().intValue());
    } else if (ctx.time_value() != null) {
        return (TimeConstant) this.visit(ctx.time_value());
    } else if (ctx.string_value() != null) {
        return Expression.value(((StringConstant) this.visit(ctx.string_value())).getValue());
    } else {
        throw this.newSiddhiParserException(ctx);
    }
}


Possible solutions:
- We can update the Siddhi dependency from "3.0.5" to "3.1.x" to verify whether 
this is a Siddhi engine bug. This approach will introduce code changes around 
Siddhi's API calls, as some APIs have changed.

- Or, instead of using "milliseconds", we can use "seconds" as the input, and 
convert the date/time to seconds in all audit event parsers.

Let me know if you can reproduce this bug and which approach you prefer.

Thanks and regards,
Da

  was:
Description:
Timestamp metadata type: long
Input for timestamp: 1484938526200 (equal to Fri Jan 20 2017 10:55:26 
GMT-0800)
The storm topology log indicates that the policy gets corrupted because of the 
timestamp input.

Error messages:
Caused by: java.lang.NumberFormatException: For input string: "1484938526200"
 at 
java.lang.NumberFormatException.forInputString(NumberFormatException.java:65) 
~[na:1.7.0_55]
 at java.lang.Integer.parseInt(Integer.java:495) ~[na:1.7.0_55]
 at java.lang.Integer.parseInt(Integer.java:527) ~[na:1.7.0_55]

Cause:
The Siddhi query parser ALWAYS parses the timestamp field as "signed_int", no 
matter whether the constant type defined in the Siddhi stream is "long", 
"double", or "float".

org.wso2.siddhi.query.compiler.internal.SiddhiQLBaseVisitorImpl: 1177

public Constant visitConstant_value(@NotNull Constant_valueContext ctx) {
    if (ctx.bool_value() != null) {
        return Expression.value(((BoolConstant) this.visit(ctx.bool_value())).getValue().booleanValue());
    } else if (ctx.signed_double_value() != null) {
        return Expression.value(((DoubleConstant) this.visit(ctx.signed_double_value())).getValue().doubleValue());
    } else if (ctx.signed_float_value() != null) {
        return Expression.value(((FloatConstant) this.visit(ctx.signed_float_value())).getValue().floatValue());
    } else if (ctx.signed_long_value() != null) {
        return Expression.value(((LongConstant) this.visit(ctx.signed_long_value())).getValue().longValue());
    } else if (ctx.signed_int_value() != null) {
        return Expression.value(((IntConstant) this.visit(ctx.signed_int_value())).getValue().intValue());
    } else if (ctx.time_value() != null) {
        return (TimeConstant) this.visit(ctx.time_value());
    } else if (ctx.string_value() != null) {
        return Expression.value(((StringConstant) this.visit(ctx.string_value())).getValue());
    } else {
        throw this.newSiddhiParserException(ctx);
    }
}


Possible solutions:
- We can update the Siddhi dependency from "3.0.5" to "3.1.x" to verify whether 
this is a Siddhi engine bug. This approach will introduce code changes around 
Siddhi's API calls, as some APIs have changed.

- Or instead of using "mill

[jira] [Created] (EAGLE-881) Url of scala-tools repository is no longer valid

2017-01-20 Thread DanielZhou (JIRA)
DanielZhou created EAGLE-881:


 Summary: Url of scala-tools repository is no longer valid
 Key: EAGLE-881
 URL: https://issues.apache.org/jira/browse/EAGLE-881
 Project: Eagle
  Issue Type: Bug
Affects Versions: v0.4.0
Reporter: DanielZhou


After clearing ~/.m2/repository, the Maven build failed with the following error 
messages:

Failed to execute goal 
org.apache.maven.plugins:maven-remote-resources-plugin:1.5:process (default) on 
project eagle-security-hive: Error resolving project artifact: Could not 
transfer artifact org.pentaho:pentaho-aggdesigner-algorithm:pom:5.1.5-jhyde 
from/to scala-tools.org (http://scala-tools.org/repo-releases): Received fatal 
alert: protocol_version for project 
org.pentaho:pentaho-aggdesigner-algorithm:jar:5.1.5-jhyde -> [Help 1]

Reason:
The URL of the scala-tools repository provided in eagle-parent is no longer 
valid: http://scala-tools.org/repo-releases
It redirects to an unrelated blog, where the required artifacts cannot be found.

Solution:
Remove the repository declaration for "Scala-Tools Maven2 Repository", so that 
Maven can download the artifacts from the default Maven repository.







--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (EAGLE-881) Url of scala-tools repository is no longer valid

2017-01-20 Thread DanielZhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/EAGLE-881?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

DanielZhou updated EAGLE-881:
-
Description: 
After clearing ~/.m2/repository, the Maven build failed with the following error 
messages:

Failed to execute goal 
org.apache.maven.plugins:maven-remote-resources-plugin:1.5:process (default) on 
project eagle-security-hive: Error resolving project artifact: Could not 
transfer artifact org.pentaho:pentaho-aggdesigner-algorithm:pom:5.1.5-jhyde 
from/to scala-tools.org (http://scala-tools.org/repo-releases): Received fatal 
alert: protocol_version for project 
org.pentaho:pentaho-aggdesigner-algorithm:jar:5.1.5-jhyde -> [Help 1]

Reason:
The URL of scala-tools repository provided in "eagle-parent" pom file is no 
longer valid:  http://scala-tools.org/repo-releases
It will redirects to some unknown blog and won't find related resources.

Solution:
Remove the repository declaration for "Scala-Tools Maven2 Repository" from the 
"eagle-parent" pom file, so that Maven can download the artifacts from the 
default Maven repository.





  was:
After clearing ~/.m2/repository, the Maven build failed with the following error 
messages:

Failed to execute goal 
org.apache.maven.plugins:maven-remote-resources-plugin:1.5:process (default) on 
project eagle-security-hive: Error resolving project artifact: Could not 
transfer artifact org.pentaho:pentaho-aggdesigner-algorithm:pom:5.1.5-jhyde 
from/to scala-tools.org (http://scala-tools.org/repo-releases): Received fatal 
alert: protocol_version for project 
org.pentaho:pentaho-aggdesigner-algorithm:jar:5.1.5-jhyde -> [Help 1]

Reason:
The URL of the scala-tools repository provided in eagle-parent is no longer 
valid: http://scala-tools.org/repo-releases
It redirects to an unrelated blog, where the required artifacts cannot be found.

Solution:
Remove the repository declaration for "Scala-Tools Maven2 Repository", so that 
Maven can download the artifacts from the default Maven repository.






> Url of scala-tools repository is no longer valid
> 
>
> Key: EAGLE-881
> URL: https://issues.apache.org/jira/browse/EAGLE-881
> Project: Eagle
>  Issue Type: Bug
>Affects Versions: v0.4.0
>Reporter: DanielZhou
>
> After clearing ~/.m2/repository, the Maven build failed with the following 
> error messages:
> Failed to execute goal 
> org.apache.maven.plugins:maven-remote-resources-plugin:1.5:process (default) 
> on project eagle-security-hive: Error resolving project artifact: Could not 
> transfer artifact org.pentaho:pentaho-aggdesigner-algorithm:pom:5.1.5-jhyde 
> from/to scala-tools.org (http://scala-tools.org/repo-releases): Received 
> fatal alert: protocol_version for project 
> org.pentaho:pentaho-aggdesigner-algorithm:jar:5.1.5-jhyde -> [Help 1]
> Reason:
> The URL of the scala-tools repository provided in the "eagle-parent" pom file 
> is no longer valid: http://scala-tools.org/repo-releases
> It redirects to an unrelated blog, where the required artifacts cannot be found.
> Solution:
> Remove the repository declaration for "Scala-Tools Maven2 Repository" from the 
> "eagle-parent" pom file, so that Maven can download the artifacts from the 
> default Maven repository.
> 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (EAGLE-293) Size of field "alertContext " is too small in table "alertdetail_hadoop" in MySQL

2017-02-05 Thread DanielZhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/EAGLE-293?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

DanielZhou closed EAGLE-293.

Resolution: Fixed

This has already been fixed, hence closing it.

> Size of field "alertContext " is too small in table "alertdetail_hadoop" in 
> MySQL
> -
>
> Key: EAGLE-293
> URL: https://issues.apache.org/jira/browse/EAGLE-293
> Project: Eagle
>  Issue Type: Bug
> Environment: mysql
>Reporter: DanielZhou
>
> Problem:
> When using MySQL as Eagle's storage, some alert messages didn't show up in the 
> UI.
> Reason:
> In the table "alertdetail_hadoop", the size of the "alertContext" field is set 
> to varchar(1024), which is too small and can cause alert messages to fail to 
> be saved.
> Suggestion:
> Resize these columns based on their usage.
> E.g.: It makes more sense to allocate more storage to the "description" and 
> "alertContext" fields, because they usually contain far more characters than 
> "site", "hostname", "application", "alertSource", "streamId", etc.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (EAGLE-888) Application submitted to Storm is always shown as “HBaseAuditLogApp”

2017-02-07 Thread DanielZhou (JIRA)
DanielZhou created EAGLE-888:


 Summary: Application submitted to  Storm is always shown as 
“HBaseAuditLogApp”
 Key: EAGLE-888
 URL: https://issues.apache.org/jira/browse/EAGLE-888
 Project: Eagle
  Issue Type: Bug
  Components: Application Framework
Affects Versions: v0.5.0
Reporter: DanielZhou
Assignee: DanielZhou


*Issue*:
Steps to reproduce:
- Start an application from the Eagle UI (e.g. the alert engine)
- Go to the Storm UI: the topology name is shown as *"HBaseAuditLogApp"*

*Reason*:
In the constructor of class *"ApplicationAction"*:
{quote}
this.effectiveConfig = ConfigFactory.parseMap(executionConfig)
.withFallback(serverConfig)
.withFallback(ConfigFactory.parseMap(metadata.getContext()))
{quote}

According to the Javadoc of 
[withFallback(other)|http://typesafehub.github.io/config/latest/api/com/typesafe/config/Config.html#withFallback-com.typesafe.config.ConfigMergeable-]:
{quote}
Returns a new value computed by merging this value with another, with keys in 
this value "winning" over the other one.
{quote}

As a result, "serverConfig" will win over 
"ConfigFactory.parseMap(metadata.getContext())" which means the default 
"ConfigString(appId="HBaseAuditApp")" and "ConfigString(siteId="testSite")" 
will win over the meta data of the user's topology.

*Fix*:
Change the order of "withFallBack" to:
{quote}
this.effectiveConfig = ConfigFactory.parseMap(executionConfig)
.withFallback(ConfigFactory.parseMap(metadata.getContext()))
.withFallback(serverConfig)
{quote}
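
Below is a minimal standalone sketch of the fallback semantics with the Typesafe 
Config library; the appId values are hypothetical placeholders, not taken from 
the Eagle code base:

{code}
import com.typesafe.config.Config;
import com.typesafe.config.ConfigFactory;

import java.util.Collections;
import java.util.Map;

public class FallbackOrderDemo {
    public static void main(String[] args) {
        // Hypothetical values standing in for the server config and the app metadata.
        Map<String, Object> serverDefaults = Collections.singletonMap("appId", "HBaseAuditLogApp");
        Map<String, Object> appMetadata = Collections.singletonMap("appId", "MyUserTopology");

        Config serverConfig = ConfigFactory.parseMap(serverDefaults);
        Config metadataConfig = ConfigFactory.parseMap(appMetadata);
        Config executionConfig = ConfigFactory.empty(); // no appId in the execution config

        // Buggy order: serverConfig is merged in before the app metadata,
        // so its default appId wins.
        Config buggy = executionConfig.withFallback(serverConfig).withFallback(metadataConfig);
        System.out.println(buggy.getString("appId"));  // prints "HBaseAuditLogApp"

        // Fixed order: the app metadata is merged in before serverConfig,
        // so the user's appId wins.
        Config fixed = executionConfig.withFallback(metadataConfig).withFallback(serverConfig);
        System.out.println(fixed.getString("appId"));  // prints "MyUserTopology"
    }
}
{code}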






--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Closed] (EAGLE-888) Application submitted to Storm is always shown as “HBaseAuditLogApp”

2017-02-22 Thread DanielZhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/EAGLE-888?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

DanielZhou closed EAGLE-888.

Resolution: Fixed

This only happens in the dev environment, when starting the Eagle server from 
the "ServerMain" class, because multiple "application.conf" files exist.
The correct way to debug is to start the Eagle server from the "ServerDebug" 
class.

Issue resolved, hence closing it.

> Application submitted to  Storm is always shown as “HBaseAuditLogApp”
> -
>
> Key: EAGLE-888
> URL: https://issues.apache.org/jira/browse/EAGLE-888
> Project: Eagle
>  Issue Type: Bug
>  Components: Core::App Engine
>Affects Versions: v0.5.0
>Reporter: DanielZhou
>Assignee: DanielZhou
>
> *Issue*:
> Steps to reproduce:
> - Start an application from the Eagle UI (e.g. the alert engine)
> - Go to the Storm UI: the topology name is shown as *"HBaseAuditLogApp"*
> *Reason*:
> In the constructor of class *"ApplicationAction"*:
> {quote}
> this.effectiveConfig = ConfigFactory.parseMap(executionConfig)
> .withFallback(serverConfig)
> 
> .withFallback(ConfigFactory.parseMap(metadata.getContext()))
> {quote}
> According to the Javadoc of 
> [withFallback(other)|http://typesafehub.github.io/config/latest/api/com/typesafe/config/Config.html#withFallback-com.typesafe.config.ConfigMergeable-]:
> {quote}
> Returns a new value computed by merging this value with another, with keys in 
> this value "winning" over the other one.
> {quote}
> As a result, "serverConfig" will win over 
> "ConfigFactory.parseMap(metadata.getContext())" which means the default 
> "ConfigString(appId="HBaseAuditApp")" and "ConfigString(siteId="testSite")" 
> will win over the meta data of the user's topology.
> *Fix*:
> Change the order of "withFallBack" to:
> {quote}
> this.effectiveConfig = ConfigFactory.parseMap(executionConfig)
> 
> .withFallback(ConfigFactory.parseMap(metadata.getContext()))
> .withFallback(serverConfig)
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (EAGLE-936) eagle-0.4.0 :mvn clean package -DskipTests meet this question:Failed to execute goal org.scala-tools:maven-scala-plugin:2.15.0

2017-03-01 Thread DanielZhou (JIRA)

[ 
https://issues.apache.org/jira/browse/EAGLE-936?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15890434#comment-15890434
 ] 

DanielZhou commented on EAGLE-936:
--

Hi Edward Chen,

It might be caused by the invalid URL of the scala-tools repository.

Could you try modifying your "pom.xml" by following this PR:  
https://github.com/apache/eagle/pull/795/files ?

Let me know if you meet other issues.

Thanks and regards,
Da

> eagle-0.4.0 :mvn clean package -DskipTests  meet this question:Failed to 
> execute goal org.scala-tools:maven-scala-plugin:2.15.0
> ---
>
> Key: EAGLE-936
> URL: https://issues.apache.org/jira/browse/EAGLE-936
> Project: Eagle
>  Issue Type: Bug
> Environment:  Version:apache-eagle-0.4.0-incubating-src.tar.gz
> mvn clean package -DskipTests 
> meet this question:Failed to execute goal 
> org.scala-tools:maven-scala-plugin:2.15.0
> How can I solve it ?Thanks.
>Reporter: eward chen
>
> [INFO] Total time: 28.374s
> [INFO] Finished at: Wed Mar 01 04:57:40 PST 2017
> [INFO] Final Memory: 30M/90M
> [INFO] 
> 
> [ERROR] Failed to execute goal 
> org.scala-tools:maven-scala-plugin:2.15.0:compile (scala-compile-first) on 
> project eagle-stream-process-base: wrap: 
> org.apache.commons.exec.ExecuteException: Process exited with an error: 
> 1(Exit value: 1) -> [Help 1]
> [ERROR] 
> [ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
> switch.
> [ERROR] Re-run Maven using the -X switch to enable full debug logging.
> [ERROR] 
> [ERROR] For more information about the errors and possible solutions, please 
> read the following articles:
> [ERROR] [Help 1] 
> http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
> [ERROR] 
> [ERROR] After correcting the problems, you can resume the build with the 
> command
> [ERROR]   mvn  -rf :eagle-stream-process-base
> [root@master eagle-0.4]# mvn clean package -DskipTests



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Closed] (EAGLE-881) Url of scala-tools repository is no longer valid

2017-03-01 Thread DanielZhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/EAGLE-881?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

DanielZhou closed EAGLE-881.

Resolution: Fixed
  Assignee: DanielZhou

fix merged

> Url of scala-tools repository is no longer valid
> 
>
> Key: EAGLE-881
> URL: https://issues.apache.org/jira/browse/EAGLE-881
> Project: Eagle
>  Issue Type: Bug
>Affects Versions: v0.4.0
>Reporter: DanielZhou
>Assignee: DanielZhou
>
> After clearing ~/.m2/repository, the Maven build failed with the following 
> error messages:
> Failed to execute goal 
> org.apache.maven.plugins:maven-remote-resources-plugin:1.5:process (default) 
> on project eagle-security-hive: Error resolving project artifact: Could not 
> transfer artifact org.pentaho:pentaho-aggdesigner-algorithm:pom:5.1.5-jhyde 
> from/to scala-tools.org (http://scala-tools.org/repo-releases): Received 
> fatal alert: protocol_version for project 
> org.pentaho:pentaho-aggdesigner-algorithm:jar:5.1.5-jhyde -> [Help 1]
> Reason:
> The URL of the scala-tools repository provided in the "eagle-parent" pom file 
> is no longer valid: http://scala-tools.org/repo-releases
> It redirects to an unrelated blog, where the required artifacts cannot be found.
> Solution:
> Remove the repository declaration for "Scala-Tools Maven2 Repository" from the 
> "eagle-parent" pom file, so that Maven can download the artifacts from the 
> default Maven repository.
> 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (EAGLE-936) eagle-0.4.0 :mvn clean package -DskipTests meet this question:Failed to execute goal org.scala-tools:maven-scala-plugin:2.15.0

2017-03-01 Thread DanielZhou (JIRA)

[ 
https://issues.apache.org/jira/browse/EAGLE-936?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15891735#comment-15891735
 ] 

DanielZhou commented on EAGLE-936:
--

Hi Edward Chen,

The fix has been committed to the branch-0.4 git branch; could you try it?

Regards,
Da

> eagle-0.4.0 :mvn clean package -DskipTests  meet this question:Failed to 
> execute goal org.scala-tools:maven-scala-plugin:2.15.0
> ---
>
> Key: EAGLE-936
> URL: https://issues.apache.org/jira/browse/EAGLE-936
> Project: Eagle
>  Issue Type: Bug
> Environment:  Version:apache-eagle-0.4.0-incubating-src.tar.gz
> mvn clean package -DskipTests 
> meet this question:Failed to execute goal 
> org.scala-tools:maven-scala-plugin:2.15.0
> How can I solve it ?Thanks.
>Reporter: eward chen
>
> [INFO] Total time: 28.374s
> [INFO] Finished at: Wed Mar 01 04:57:40 PST 2017
> [INFO] Final Memory: 30M/90M
> [INFO] 
> 
> [ERROR] Failed to execute goal 
> org.scala-tools:maven-scala-plugin:2.15.0:compile (scala-compile-first) on 
> project eagle-stream-process-base: wrap: 
> org.apache.commons.exec.ExecuteException: Process exited with an error: 
> 1(Exit value: 1) -> [Help 1]
> [ERROR] 
> [ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
> switch.
> [ERROR] Re-run Maven using the -X switch to enable full debug logging.
> [ERROR] 
> [ERROR] For more information about the errors and possible solutions, please 
> read the following articles:
> [ERROR] [Help 1] 
> http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
> [ERROR] 
> [ERROR] After correcting the problems, you can resume the build with the 
> command
> [ERROR]   mvn  -rf :eagle-stream-process-base
> [root@master eagle-0.4]# mvn clean package -DskipTests



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (EAGLE-360) Kafka Java Producer with kerberos

2017-07-12 Thread DanielZhou (JIRA)

[ 
https://issues.apache.org/jira/browse/EAGLE-360?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16084546#comment-16084546
 ] 

DanielZhou commented on EAGLE-360:
--

Hi, [~jhsenjaliya],
This was for an internal test. I tested it in a local environment, but it didn't 
work.

I think we don't need to merge this code, as the Logstash Kafka output plugin has 
already added support for Kerberos SASL (requires plugin version 5.1.0 or later). 
https://www.elastic.co/guide/en/logstash/current/plugins-outputs-kafka.html

Regards,
Da

> Kafka Java Producer with kerberos
> -
>
> Key: EAGLE-360
> URL: https://issues.apache.org/jira/browse/EAGLE-360
> Project: Eagle
>  Issue Type: Improvement
>Affects Versions: v0.3.0
>Reporter: Hao Chen
>Assignee: DanielZhou
> Fix For: v0.6.0
>
>
> *log4j.properties*
> {code}
> log4j.appender.KAFKA_HDFS_AUDIT.SecurityProtocol=PLAINTEXTSASL
> log4j.appender.KAFKA_HDFS_AUDIT.SecurityKrb5Conf=/etc/krb5.conf
> log4j.appender.KAFKA_HDFS_AUDIT.SecurityAuthLoginConfig=/etc/kafka/kafka_jaas.conf
> log4j.appender.KAFKA_HDFS_AUDIT.SecurityAuthUseSubjectCredsOnly=false
> log4j.appender.KAFKA_HDFS_AUDIT.SecurityKrb5Debug=true
> {code}
> *kafka_jaas.conf*
> {code}
> KafkaClient {
> com.sun.security.auth.module.Krb5LoginModule required
> doNotPrompt=true
> useTicketCache=true
> principal="ctad...@eagle.incubator.apache.org"
> useKeyTab=true
> serviceName="kafka"
> keyTab="/etc/security/keytabs/ctadmin.keytab"
> client=true;
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)