[jira] [Closed] (EAGLE-948) can not package by maven

2017-03-23 Thread Jayesh (JIRA)

 [ 
https://issues.apache.org/jira/browse/EAGLE-948?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jayesh closed EAGLE-948.

Resolution: Cannot Reproduce

> can not package by maven
> 
>
> Key: EAGLE-948
> URL: https://issues.apache.org/jira/browse/EAGLE-948
> Project: Eagle
>  Issue Type: Bug
>  Components: Core::Alert Engine
>Affects Versions: v0.5.0
> Environment: JDK: jdk1.8.0_111
> Maven: apache-maven-3.3.9
> OS: Windows 7
>Reporter: Han Hui Wen 
>Assignee: Jayesh
>Priority: Blocker
>  Labels: build
> Fix For: v0.5.0
>
>
> When I package Eagle, I get the following error:
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-compiler-plugin:3.5.1:compile 
> (default-compile) on project eagle-common: Compilation failure -> [Help 1]
> org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute 
> goal org.apache.maven.plugins:maven-compiler-plugin:3.5.1:compile 
> (default-compile) on project eagle-common: Compilation failure
> at 
> org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:212)
> at 
> org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153)
> at 
> org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145)
> at 
> org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:116)
> at 
> org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:80)
> at 
> org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build(SingleThreadedBuilder.java:51)
> at 
> org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:128)
> at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:307)
> at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:193)
> at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:106)
> at org.apache.maven.cli.MavenCli.execute(MavenCli.java:863)
> at org.apache.maven.cli.MavenCli.doMain(MavenCli.java:288)
> at org.apache.maven.cli.MavenCli.main(MavenCli.java:199)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced(Launcher.java:289)
> at 
> org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.java:229)
> at 
> org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode(Launcher.java:415)
> at 
> org.codehaus.plexus.classworlds.launcher.Launcher.main(Launcher.java:356)
> Caused by: org.apache.maven.plugin.compiler.CompilationFailureException: 
> Compilation failure
> at 
> org.apache.maven.plugin.compiler.AbstractCompilerMojo.execute(AbstractCompilerMojo.java:976)
> at 
> org.apache.maven.plugin.compiler.CompilerMojo.execute(CompilerMojo.java:129)
> at 
> org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:134)
> at 
> org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:207)
> ... 20 more
> [ERROR]
> [ERROR]
> [ERROR] For more information about the errors and possible solutions, please 
> read the following articles:
> [ERROR] [Help 1] 
> http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
> [ERROR]
> [ERROR] After correcting the problems, you can resume the build with the 
> command
> [ERROR]   mvn  -rf :eagle-common
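
The underlying javac errors are not visible in the snippet above. A minimal sketch of how to surface them is to re-run only the failing module with Maven's standard switches (plain Maven options; adjust goals to your environment):

{code}
# Confirm which JDK Maven is actually using (the report expects jdk1.8.0_111)
mvn -v

# Re-run only the failing module; -e prints full exception details
# (add -X for debug-level output if the compiler messages are still cut off)
mvn clean package -rf :eagle-common -e
{code}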



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (EAGLE-976) verify docs instructions

2017-03-23 Thread Jayesh (JIRA)
Jayesh created EAGLE-976:


 Summary: verify docs instructions
 Key: EAGLE-976
 URL: https://issues.apache.org/jira/browse/EAGLE-976
 Project: Eagle
  Issue Type: Sub-task
Reporter: Jayesh






--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (EAGLE-975) deploy site

2017-03-23 Thread Jayesh (JIRA)
Jayesh created EAGLE-975:


 Summary: deploy site
 Key: EAGLE-975
 URL: https://issues.apache.org/jira/browse/EAGLE-975
 Project: Eagle
  Issue Type: Sub-task
Reporter: Jayesh






--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (EAGLE-972) Create 0.5 Release

2017-03-23 Thread Jayesh (JIRA)

 [ 
https://issues.apache.org/jira/browse/EAGLE-972?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jayesh updated EAGLE-972:
-
Summary: Create 0.5 Release  (was: 0.5 Release Tasks)

> Create 0.5 Release
> --
>
> Key: EAGLE-972
> URL: https://issues.apache.org/jira/browse/EAGLE-972
> Project: Eagle
>  Issue Type: Task
>Affects Versions: v0.5.0
>Reporter: Jayesh
>Assignee: Jayesh
>  Labels: release
> Fix For: v0.5.0
>
>
> This ticket should include all the required tasks that need to be completed 
> in order to release 0.5.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (EAGLE-973) update site document content

2017-03-23 Thread Jayesh (JIRA)
Jayesh created EAGLE-973:


 Summary: update site document content
 Key: EAGLE-973
 URL: https://issues.apache.org/jira/browse/EAGLE-973
 Project: Eagle
  Issue Type: Sub-task
Reporter: Jayesh






--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (EAGLE-972) 0.5 Release Tasks

2017-03-23 Thread Jayesh (JIRA)
Jayesh created EAGLE-972:


 Summary: 0.5 Release Tasks
 Key: EAGLE-972
 URL: https://issues.apache.org/jira/browse/EAGLE-972
 Project: Eagle
  Issue Type: Task
Affects Versions: v0.5.0
Reporter: Jayesh
Assignee: Jayesh
 Fix For: v0.5.0


This ticket should include all the required tasks that need to be completed in 
order to release 0.5.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (EAGLE-971) Duplicated queues are generated under a monitored stream

2017-03-23 Thread Zhao, Qingwen (JIRA)

 [ 
https://issues.apache.org/jira/browse/EAGLE-971?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhao, Qingwen updated EAGLE-971:

Description: 
This issue is caused by an incorrect routing spec generated by the coordinator. 
Here is the procedure to reproduce it.

1. Set {{policiesPerBolt = 2, streamsPerBolt = 3, reuseBoltInStreams = true}} 
in the server config.
2. Create four policies that have the same partition and consume the same stream:
{code}
 from HADOOP_JMX_METRIC_STREAM_SANDBOX[metric == 
"hadoop.namenode.rpc.callqueuelength"]#window.length(2) select site, host, 
component, metric, min(convert(value, "long")) as minValue group by site, host, 
component, metric having minValue >= 1 insert into 
HADOOP_JMX_METRIC_STREAM_SANDBOX_CALL_QUEUE_EXCEEDS_OUT;

from HADOOP_JMX_METRIC_STREAM_SANDBOX[metric == 
"hadoop.namenode.rpc.callqueuelength"]#window.length(30) select site, host, 
component, metric, min(convert(value, "long")) as minValue group by site, host, 
component, metric having minValue >= 1 insert into 
HADOOP_JMX_METRIC_STREAM_SANDBOX_CALL_QUEUE_EXCEEDS_OUT;

from HADOOP_JMX_METRIC_STREAM_SANDBOX[metric == 
"hadoop.namenode.hastate.failed.count"]#window.length(2) select site, host, 
component, metric, timestamp, min(value) as minValue group by site, host, 
component, metric insert into 
HADOOP_JMX_METRIC_STREAM_SANDBOX_NN_NO_RESPONSE_OUT

from HADOOP_JMX_METRIC_STREAM_SANDBOX[metric == 
"hadoop.namenode.hastate.failed.count.test"]#window.length(3) select site, 
host, component, metric, count(value) as cnt group by site, host, component, 
metric insert into HADOOP_JMX_METRIC_STREAM_SANDBOX_NN_NO_RESPONSE_OUT;
{code}

After creating the four policies, the routing spec is 
{code}
routerSpecs: [
  {
    streamId: "HADOOP_JMX_METRIC_STREAM_SANDBOX",
    partition: {
      streamId: "HADOOP_JMX_METRIC_STREAM_SANDBOX",
      type: "GROUPBY",
      columns: ["site", "host", "component", "metric"],
      sortSpec: null
    },
    targetQueue: [
      {
        partition: {
          streamId: "HADOOP_JMX_METRIC_STREAM_SANDBOX",
          type: "GROUPBY",
          columns: ["site", "host", "component", "metric"],
          sortSpec: null
        },
        workers: [
          { topologyName: "ALERT_UNIT_TOPOLOGY_APP_SANDBOX", boltId: "alertBolt9" },
          { topologyName: "ALERT_UNIT_TOPOLOGY_APP_SANDBOX", boltId: "alertBolt0" },
          { topologyName: "ALERT_UNIT_TOPOLOGY_APP_SANDBOX", boltId: "alertBolt1" },
          { topologyName: "ALERT_UNIT_TOPOLOGY_APP_SANDBOX", boltId: "alertBolt2" },
          { topologyName: "ALERT_UNIT_TOPOLOGY_APP_SANDBOX", boltId: "alertBolt3" }
        ]
      },
      {
        partition: {
          streamId: "HADOOP_JMX_METRIC_STREAM_SANDBOX",
          type: "GROUPBY",
          columns: ["site", "host", "component", "metric"],
          sortSpec: null
        },
        workers: [
          { topologyName: "ALERT_UNIT_TOPOLOGY_APP_SANDBOX", boltId: "alertBolt9" },
          { topologyName: "ALERT_UNIT_TOPOLOGY_APP_SANDBOX", boltId: "alertBolt0" },
          { topologyName: "ALERT_UNIT_TOPOLOGY_APP_SANDBOX", boltId: "alertBolt1" },
          { topologyName: "ALERT_UNIT_TOPOLOGY_APP_SANDBOX", boltId: "alertBolt2" },
          { topologyName: "ALERT_UNIT_TOPOLOGY_APP_SANDBOX", boltId: "alertBolt3" }
        ]
      },
      {
        partition: {
          streamId: "HADOOP_JMX_METRIC_STREAM_SANDBOX",
          type: "GROUPBY",
          columns: ["site", "host", "component", "metric"],
          sortSpec: null
        },
        workers: [
          { topologyName: "ALERT_UNIT_TOPOLOGY_APP_SANDBOX", boltId: "alertBolt9" },
          { topologyName: "ALERT_UNIT_TOPOLOGY_APP_SANDBOX", boltId: "alertBolt0" },
          { topologyName: "ALERT_UNIT_TOPOLOGY_APP_SANDBOX", boltId: "alertBolt1" },
          { topologyName: "ALERT_UNIT_TOPOLOGY_APP_SANDBOX", boltId: "alertBolt2" },
          { topologyName: "ALERT_UNIT_TOPOLOGY_APP_SANDBOX", boltId: "alertBolt3" }
        ]
      }
    ]
  }
]
{code}

and the alert spec is 
{code}
boltPolicyIdsMap: {
  alertBolt9: ["NameNodeWithOneNoResponse", "NameNodeHAHasNoResponse", "CallQueueLengthExceeds30Times", "CallQueueLengthExceeds2Times"],
  alertBolt0: ["NameNodeWithOneNoResponse", "NameNodeHAHasNoResponse", "CallQueueLengthExceeds30Times", "CallQueueLengthExceeds2Times"],
  alertBolt1: ["NameNodeWithOneNoResponse", "NameNodeHAHasNoResponse", "CallQueueLengthExceeds30Times", "CallQueueLengthExceeds2Times"],
  alertBolt2: ["NameNodeWithOneNoResponse", "NameNodeHAHasNoResponse", "CallQueueLengthExceeds30Times", "CallQueueLengthExceeds2Times"],
  alertBolt3: ["NameNodeWithOneNoResponse", "NameNodeHAHasNoResponse", "CallQueueLengthExceeds30Times", "CallQueueLengthExceeds2Times"]
}
{code}

3. Produce messages into the Kafka topic 'hadoop_jmx_metrics_sandbox' and trigger 
NameNodeWithOneNoResponse, for example:

{code}
{"timestamp": 1490250963445, "metric": "hadoop.namenode.hastate.failed.count", 
"component": "namenode", "site": "artemislvs", "value": 0.0, "host": 
"localhost"}
{code}
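
For reference, a minimal sketch of how the sample event above can be pushed onto the topic with the stock Kafka console producer (the broker address localhost:6667 is an assumption; substitute your own):

{code}
# Send the sample JMX metric event to the monitored topic
# (broker address is an assumption; point it at your Kafka broker)
echo '{"timestamp": 1490250963445, "metric": "hadoop.namenode.hastate.failed.count", "component": "namenode", "site": "artemislvs", "value": 0.0, "host": "localhost"}' \
  | kafka-console-producer.sh --broker-list localhost:6667 --topic hadoop_jmx_metrics_sandbox
{code}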

As a result, a single message is sent three times.

  was:
This issue is caused by the wrong routing spec generated by the coordinator. 
Here is the procedure to reproduce it. 

1. setting {{policiesPerBolt = 2, streamsPerBolt = 3, reuseBoltInStreams = 
true}} in server config
2. create four policies which the same partition and consuming 

[jira] [Updated] (EAGLE-971) Duplicated queues are generated under a monitored stream

2017-03-23 Thread Zhao, Qingwen (JIRA)

 [ 
https://issues.apache.org/jira/browse/EAGLE-971?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhao, Qingwen updated EAGLE-971:

Description: 
This issue is caused by an incorrect routing spec generated by the coordinator. 
Here is the procedure to reproduce it.

1. Set {{policiesPerBolt = 2, streamsPerBolt = 3}} in the server config.
2. Create four policies that share the same partition and consume the same streamId:
{code}
 from HADOOP_JMX_METRIC_STREAM_SANDBOX[metric == 
"hadoop.namenode.rpc.callqueuelength"]#window.length(2) select site, host, 
component, metric, min(convert(value, "long")) as minValue group by site, host, 
component, metric having minValue >= 1 insert into 
HADOOP_JMX_METRIC_STREAM_SANDBOX_CALL_QUEUE_EXCEEDS_OUT;

from HADOOP_JMX_METRIC_STREAM_SANDBOX[metric == 
"hadoop.namenode.rpc.callqueuelength"]#window.length(30) select site, host, 
component, metric, min(convert(value, "long")) as minValue group by site, host, 
component, metric having minValue >= 1 insert into 
HADOOP_JMX_METRIC_STREAM_SANDBOX_CALL_QUEUE_EXCEEDS_OUT;

from HADOOP_JMX_METRIC_STREAM_SANDBOX[metric == 
"hadoop.namenode.hastate.failed.count"]#window.length(2) select site, host, 
component, metric, timestamp, min(value) as minValue group by site, host, 
component, metric insert into 
HADOOP_JMX_METRIC_STREAM_SANDBOX_NN_NO_RESPONSE_OUT

from HADOOP_JMX_METRIC_STREAM_SANDBOX[metric == 
"hadoop.namenode.hastate.failed.count.test"]#window.length(3) select site, 
host, component, metric, count(value) as cnt group by site, host, component, 
metric insert into HADOOP_JMX_METRIC_STREAM_SANDBOX_NN_NO_RESPONSE_OUT;
{code}

After creating the four policies, the routing spec is 
{code}
routerSpecs: [
{
streamId: "HADOOP_JMX_METRIC_STREAM_SANDBOX",
partition: {
streamId: "HADOOP_JMX_METRIC_STREAM_SANDBOX",
type: "GROUPBY",
columns: [
"site",
"host",
"component",
"metric"
],
sortSpec: null
},
targetQueue: [
{
partition: {
streamId: "HADOOP_JMX_METRIC_STREAM_SANDBOX",
type: "GROUPBY",
columns: [
"site",
"host",
"component",
"metric"
],
sortSpec: null
},
workers: [
{
topologyName: "ALERT_UNIT_TOPOLOGY_APP_SANDBOX",
boltId: "alertBolt9"
},
{
topologyName: "ALERT_UNIT_TOPOLOGY_APP_SANDBOX",
boltId: "alertBolt0"
},
{
topologyName: "ALERT_UNIT_TOPOLOGY_APP_SANDBOX",
boltId: "alertBolt1"
},
{
topologyName: "ALERT_UNIT_TOPOLOGY_APP_SANDBOX",
boltId: "alertBolt2"
},
{
topologyName: "ALERT_UNIT_TOPOLOGY_APP_SANDBOX",
boltId: "alertBolt3"
}
]
},
{
partition: {
streamId: "HADOOP_JMX_METRIC_STREAM_SANDBOX",
type: "GROUPBY",
columns: [
"site",
"host",
"component",
"metric"
],
sortSpec: null
},
workers: [
{
topologyName: "ALERT_UNIT_TOPOLOGY_APP_SANDBOX",
boltId: "alertBolt9"
},
{
topologyName: "ALERT_UNIT_TOPOLOGY_APP_SANDBOX",
boltId: "alertBolt0"
},
{
topologyName: "ALERT_UNIT_TOPOLOGY_APP_SANDBOX",
boltId: "alertBolt1"
},
{
topologyName: "ALERT_UNIT_TOPOLOGY_APP_SANDBOX",
boltId: "alertBolt2"
},
{
topologyName: "ALERT_UNIT_TOPOLOGY_APP_SANDBOX",
boltId: "alertBolt3"
}
]
},
{
partition: {
streamId: "HADOOP_JMX_METRIC_STREAM_SANDBOX",
type: "GROUPBY",
columns: [
"site",
"host",
"component",
"metric"
],
sortSpec: null
},
workers: [
{
topologyName: "ALERT_UNIT_TOPOLOGY_APP_SANDBOX",
boltId: "alertBolt9"
},
{
topologyName: "ALERT_UNIT_TOPOLOGY_APP_SANDBOX",
boltId: "alertBolt0"
},
{
topologyName: "ALERT_UNIT_TOPOLOGY_APP_SANDBOX",
boltId: "alertBolt1"
},
{
topologyName: "ALERT_UNIT_TOPOLOGY_APP_SANDBOX",
boltId: "alertBolt2"
},
{
topologyName: "ALERT_UNIT_TOPOLOGY_APP_SANDBOX",
boltId: "alertBolt3"
}
]
}
]
}
]
{code}

and the alert spec is 
{code}
boltPolicyIdsMap: {
alertBolt9: [
"NameNodeWithOneNoResponse",
"NameNodeHAHasNoResponse",
"CallQueueLengthExceeds30Times",
"CallQueueLengthExceeds2Times"
],
alertBolt0: [
"NameNodeWithOneNoResponse",
"NameNodeHAHasNoResponse",
"CallQueueLengthExceeds30Times",
"CallQueueLengthExceeds2Times"
],
alertBolt1: [
"NameNodeWithOneNoResponse",
"NameNodeHAHasNoResponse",
"CallQueueLengthExceeds30Times",
"CallQueueLengthExceeds2Times"
],
alertBolt2: [
"NameNodeWithOneNoResponse",
"NameNodeHAHasNoResponse",
"CallQueueLengthExceeds30Times",
"CallQueueLengthExceeds2Times"
],
alertBolt3: [
"NameNodeWithOneNoResponse",
"NameNodeHAHasNoResponse",
"CallQueueLengthExceeds30Times",
"CallQueueLengthExceeds2Times"
]
}
{code}

3. Produce messages into the Kafka topic 'hadoop_jmx_metrics_sandbox' and trigger 
NameNodeWithOneNoResponse.

{code}
{"timestamp": 1490250963445, "metric": "hadoop.namenode.hastate.failed.count", 
"component": "namenode", "site": "artemislvs", "value": 0.0, "host": 
"localhost"}
{code}

As a result, a single message is sent three times.

  was:
This issue is caused by the wrong routing spec generated by the coordinator. 
Here is the procedure to reproduce it. 

1. setting {{{policiesPerBolt = 2, streamsPerBolt = 3}}} in server config
2. create four policies which the same partition and consuming the same streamId
{code}
 from 

[jira] [Created] (EAGLE-971) Duplicated queues are generated under a monitored stream

2017-03-23 Thread Zhao, Qingwen (JIRA)
Zhao, Qingwen created EAGLE-971:
---

 Summary: Duplicated queues are generated under a monitored stream
 Key: EAGLE-971
 URL: https://issues.apache.org/jira/browse/EAGLE-971
 Project: Eagle
  Issue Type: Bug
Affects Versions: v0.5.0
Reporter: Zhao, Qingwen
Assignee: Zhao, Qingwen


This issue is caused by an incorrect routing spec generated by the coordinator. 
Here is the procedure to reproduce it.

1. Set {{policiesPerBolt = 2, streamsPerBolt = 3}} in the server config.
2. Create four policies that share the same partition and consume the same streamId:
{code}
 from HADOOP_JMX_METRIC_STREAM_SANDBOX[metric == 
"hadoop.namenode.rpc.callqueuelength"]#window.length(2) select site, host, 
component, metric, min(convert(value, "long")) as minValue group by site, host, 
component, metric having minValue >= 1 insert into 
HADOOP_JMX_METRIC_STREAM_SANDBOX_CALL_QUEUE_EXCEEDS_OUT;

from HADOOP_JMX_METRIC_STREAM_SANDBOX[metric == 
"hadoop.namenode.rpc.callqueuelength"]#window.length(30) select site, host, 
component, metric, min(convert(value, "long")) as minValue group by site, host, 
component, metric having minValue >= 1 insert into 
HADOOP_JMX_METRIC_STREAM_SANDBOX_CALL_QUEUE_EXCEEDS_OUT;

from HADOOP_JMX_METRIC_STREAM_SANDBOX[metric == 
"hadoop.namenode.hastate.failed.count"]#window.length(2) select site, host, 
component, metric, timestamp, min(value) as minValue group by site, host, 
component, metric insert into 
HADOOP_JMX_METRIC_STREAM_SANDBOX_NN_NO_RESPONSE_OUT

from HADOOP_JMX_METRIC_STREAM_SANDBOX[metric == 
"hadoop.namenode.hastate.failed.count.test"]#window.length(3) select site, 
host, component, metric, count(value) as cnt group by site, host, component, 
metric insert into HADOOP_JMX_METRIC_STREAM_SANDBOX_NN_NO_RESPONSE_OUT;
{code}

After creating the four policies, the routing spec is 
{code}
routerSpecs: [
{
streamId: "HADOOP_JMX_METRIC_STREAM_SANDBOX",
partition: {
streamId: "HADOOP_JMX_METRIC_STREAM_SANDBOX",
type: "GROUPBY",
columns: [
"site",
"host",
"component",
"metric"
],
sortSpec: null
},
targetQueue: [
{
partition: {
streamId: "HADOOP_JMX_METRIC_STREAM_SANDBOX",
type: "GROUPBY",
columns: [
"site",
"host",
"component",
"metric"
],
sortSpec: null
},
workers: [
{
topologyName: "ALERT_UNIT_TOPOLOGY_APP_SANDBOX",
boltId: "alertBolt9"
},
{
topologyName: "ALERT_UNIT_TOPOLOGY_APP_SANDBOX",
boltId: "alertBolt0"
},
{
topologyName: "ALERT_UNIT_TOPOLOGY_APP_SANDBOX",
boltId: "alertBolt1"
},
{
topologyName: "ALERT_UNIT_TOPOLOGY_APP_SANDBOX",
boltId: "alertBolt2"
},
{
topologyName: "ALERT_UNIT_TOPOLOGY_APP_SANDBOX",
boltId: "alertBolt3"
}
]
},
{
partition: {
streamId: "HADOOP_JMX_METRIC_STREAM_SANDBOX",
type: "GROUPBY",
columns: [
"site",
"host",
"component",
"metric"
],
sortSpec: null
},
workers: [
{
topologyName: "ALERT_UNIT_TOPOLOGY_APP_SANDBOX",
boltId: "alertBolt9"
},
{
topologyName: "ALERT_UNIT_TOPOLOGY_APP_SANDBOX",
boltId: "alertBolt0"
},
{
topologyName: "ALERT_UNIT_TOPOLOGY_APP_SANDBOX",
boltId: "alertBolt1"
},
{
topologyName: "ALERT_UNIT_TOPOLOGY_APP_SANDBOX",
boltId: "alertBolt2"
},
{
topologyName: "ALERT_UNIT_TOPOLOGY_APP_SANDBOX",
boltId: "alertBolt3"
}
]
},
{
partition: {
streamId: "HADOOP_JMX_METRIC_STREAM_SANDBOX",
type: "GROUPBY",
columns: [
"site",
"host",
"component",
"metric"
],
sortSpec: null
},
workers: [
{
topologyName: "ALERT_UNIT_TOPOLOGY_APP_SANDBOX",
boltId: "alertBolt9"
},
{
topologyName: "ALERT_UNIT_TOPOLOGY_APP_SANDBOX",
boltId: "alertBolt0"
},
{
topologyName: "ALERT_UNIT_TOPOLOGY_APP_SANDBOX",
boltId: "alertBolt1"
},
{
topologyName: "ALERT_UNIT_TOPOLOGY_APP_SANDBOX",
boltId: "alertBolt2"
},
{
topologyName: "ALERT_UNIT_TOPOLOGY_APP_SANDBOX",
boltId: "alertBolt3"
}
]
}
]
}
]
{code}

and the alert spec is 
{code}
boltPolicyIdsMap: {
alertBolt9: [
"NameNodeWithOneNoResponse",
"NameNodeHAHasNoResponse",
"CallQueueLengthExceeds30Times",
"CallQueueLengthExceeds2Times"
],
alertBolt0: [
"NameNodeWithOneNoResponse",
"NameNodeHAHasNoResponse",
"CallQueueLengthExceeds30Times",
"CallQueueLengthExceeds2Times"
],
alertBolt1: [
"NameNodeWithOneNoResponse",
"NameNodeHAHasNoResponse",
"CallQueueLengthExceeds30Times",
"CallQueueLengthExceeds2Times"
],
alertBolt2: [
"NameNodeWithOneNoResponse",
"NameNodeHAHasNoResponse",
"CallQueueLengthExceeds30Times",
"CallQueueLengthExceeds2Times"
],
alertBolt3: [
"NameNodeWithOneNoResponse",
"NameNodeHAHasNoResponse",
"CallQueueLengthExceeds30Times",
"CallQueueLengthExceeds2Times"
]
}
{code}

3. Produce messages into the Kafka topic 'hadoop_jmx_metrics_sandbox' and trigger 
NameNodeWithOneNoResponse.

{code}
{"timestamp": 1490250963445, "metric": "hadoop.namenode.hastate.failed.count", 
"component": "namenode", "site": "artemislvs", "value": 0.0, "host": 
"localhost"}
{code}

As a result, a single message is sent three times.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)