[jira] [Updated] (STORM-3171) java.lang.NoSuchMethodError in org.apache.storm:storm-kafka-monitor:jar:1.1.2 caused by dependency conflict issue

2018-08-01 Thread LeoAugust19 (JIRA)


 [ 
https://issues.apache.org/jira/browse/STORM-3171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

LeoAugust19  updated STORM-3171:

Description: 
Hi, we found a dependency conflict issue in 
*org.apache.storm:storm-kafka-monitor:jar:1.1.2*, caused by 
*org.apache.zookeeper:zookeeper:jar*. As shown in the dependency tree below, 
Maven version management causes *org.apache.zookeeper:zookeeper:jar:3.4.6* 
to be loaded during the packaging process.

 

However, the constructor *QuorumMaj(java.util.Map)* is only defined in 
*org.apache.zookeeper:zookeeper:jar:3.5.3-beta*, so a project that references 
the missing constructor crashes with the following stack trace.

 

*Stack trace:*

Exception in thread "main" java.lang.NoSuchMethodError: 
org.apache.zookeeper.server.quorum.flexible.QuorumMaj.&lt;init&gt;(Ljava/util/Map;)V

 at org.apache.curator.framework.imps.EnsembleTracker.&lt;init&gt;(EnsembleTracker.java:57)

 at org.apache.curator.framework.imps.CuratorFrameworkImpl.&lt;init&gt;(CuratorFrameworkImpl.java:159)

 at org.apache.curator.framework.CuratorFrameworkFactory$Builder.build(CuratorFrameworkFactory.java:158)

 at org.apache.curator.framework.CuratorFrameworkFactory.newClient(CuratorFrameworkFactory.java:109)
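A quick way to confirm which constructors the runtime classpath actually provides is reflection. The snippet below is an illustrative diagnostic, not part of the ticket: with zookeeper 3.4.6 on the classpath, probing org.apache.zookeeper.server.quorum.flexible.QuorumMaj for a java.util.Map constructor would return false, while with 3.5.3-beta it would return true. It is demonstrated with JDK classes so it runs standalone:

```java
public class ConstructorProbe {
    // Returns true if the named class has a public constructor
    // with exactly the given parameter types.
    static boolean hasConstructor(String className, Class<?>... paramTypes) {
        try {
            Class.forName(className).getConstructor(paramTypes);
            return true;
        } catch (ClassNotFoundException | NoSuchMethodException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // Stand-ins for the ZooKeeper probe described above:
        System.out.println(hasConstructor("java.util.ArrayList", int.class));           // true
        System.out.println(hasConstructor("java.util.ArrayList", java.util.Map.class)); // false
    }
}
```

Running this against the packaged classpath (rather than the compile classpath) shows exactly the mismatch that surfaces as NoSuchMethodError at runtime.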

 

*Dependency tree:*

org.apache.storm:storm-kafka-monitor:jar:1.1.2
+- org.apache.kafka:kafka-clients:jar:0.10.1.0:compile
|  +- net.jpountz.lz4:lz4:jar:1.3.0:compile
|  +- org.xerial.snappy:snappy-java:jar:1.1.2.6:compile
|  \- org.slf4j:slf4j-api:jar:1.7.21:compile
+- org.apache.curator:curator-framework:jar:4.0.0:compile
|  \- org.apache.curator:curator-client:jar:4.0.0:compile
|     +- *org.apache.zookeeper:zookeeper:jar:3.4.6:compile (version managed from 3.5.3-beta)*
|     |  +- jline:jline:jar:0.9.94:compile
|     |  \- io.netty:netty:jar:3.9.9.Final:compile (version managed from 3.7.0.Final)
|     +- com.google.guava:guava:jar:16.0.1:compile (version managed from 20.0)
|     \- (org.slf4j:slf4j-api:jar:1.7.21:compile - version managed from 1.7.6; omitted for duplicate)
+- com.googlecode.json-simple:json-simple:jar:1.1:compile
+- commons-cli:commons-cli:jar:1.3.1:compile
\- junit:junit:jar:4.11:test
   \- org.hamcrest:hamcrest-core:jar:1.3:test

 

*Solution:*

One option is to upgrade *org.apache.zookeeper:zookeeper:jar* to *3.5.3-beta*, 
but that is not ideal, as 3.5.3-beta is not a stable release.

 

Thanks a lot!

Regards,

Leo

  was: (previous revision of the same description, differing only in minor wording)


> java.lang.NoSuchMethodError in org.apache.storm:storm-kafka-monitor:jar:1.1.2 
> caused by dependency conflict issue
> 

[jira] [Created] (STORM-3171) java.lang.NoSuchMethodError in org.apache.storm:storm-kafka-monitor:jar:1.1.2 caused by dependency conflict issue

2018-08-01 Thread LeoAugust19 (JIRA)
LeoAugust19  created STORM-3171:
---

 Summary: java.lang.NoSuchMethodError in 
org.apache.storm:storm-kafka-monitor:jar:1.1.2 caused by dependency conflict 
issue
 Key: STORM-3171
 URL: https://issues.apache.org/jira/browse/STORM-3171
 Project: Apache Storm
  Issue Type: Dependency upgrade
  Components: storm-kafka-monitor
Affects Versions: 1.1.2
Reporter: LeoAugust19 
 Fix For: 2.0.0





--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (STORM-3170) DirectoryCleaner may not correctly report correct number of deleted files

2018-08-01 Thread Zhengdai Hu (JIRA)


 [ 
https://issues.apache.org/jira/browse/STORM-3170?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhengdai Hu updated STORM-3170:
---
Description: In DirectoryCleaner#deleteOldestWhileTooLarge, the original 
implementation calls file#delete without checking whether it succeeds, and the 
files are always reported as deleted. This prevents DirectoryCleaner from 
cleaning up other files and invalidates any metrics built on top of it.  (was: 
In DirectoryCleaner#deleteOldestWhileTooLarge, the original implementation 
calls file#delete without checking if it succeeds or not, even though they're 
always reported as deleted. This prevents DirectoryCleaner from clean up other 
files and invalidates any metrics built on top of this.)

> DirectoryCleaner may not correctly report correct number of deleted files
> -
>
> Key: STORM-3170
> URL: https://issues.apache.org/jira/browse/STORM-3170
> Project: Apache Storm
>  Issue Type: Bug
>  Components: storm-webapp
>Affects Versions: 2.0.0
>Reporter: Zhengdai Hu
>Assignee: Zhengdai Hu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 2.0.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> In DirectoryCleaner#deleteOldestWhileTooLarge, the original implementation 
> calls file#delete without checking whether it succeeds, and the files are 
> always reported as deleted. This prevents DirectoryCleaner from cleaning up 
> other files and invalidates any metrics built on top of it.





[jira] [Updated] (STORM-3170) DirectoryCleaner may not correctly report correct number of deleted files

2018-08-01 Thread Zhengdai Hu (JIRA)


 [ 
https://issues.apache.org/jira/browse/STORM-3170?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhengdai Hu updated STORM-3170:
---
Description: In DirectoryCleaner#deleteOldestWhileTooLarge, the original 
implementation calls file#delete without checking whether it succeeds, even 
though the files are always reported as deleted. This prevents DirectoryCleaner 
from cleaning up other files and invalidates any metrics built on top of it.  
(was: In DirectoryCleaner#deleteOldestWhileTooLarge, the original 
implementation calls file#delete without checking if it succeed or not, even 
though they're always reported as deleted. This prevents DirectoryCleaner from 
clean up other files and invalidates any metrics built on top of this.)

> DirectoryCleaner may not correctly report correct number of deleted files
> -
>
> Key: STORM-3170
> URL: https://issues.apache.org/jira/browse/STORM-3170
> Project: Apache Storm
>  Issue Type: Bug
>  Components: storm-webapp
>Affects Versions: 2.0.0
>Reporter: Zhengdai Hu
>Assignee: Zhengdai Hu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 2.0.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> In DirectoryCleaner#deleteOldestWhileTooLarge, the original implementation 
> calls file#delete without checking whether it succeeds, even though the files 
> are always reported as deleted. This prevents DirectoryCleaner from cleaning 
> up other files and invalidates any metrics built on top of it.





[jira] [Updated] (STORM-3170) DirectoryCleaner may not correctly report correct number of deleted files

2018-08-01 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/STORM-3170?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated STORM-3170:
--
Labels: pull-request-available  (was: )

> DirectoryCleaner may not correctly report correct number of deleted files
> -
>
> Key: STORM-3170
> URL: https://issues.apache.org/jira/browse/STORM-3170
> Project: Apache Storm
>  Issue Type: Bug
>  Components: storm-webapp
>Affects Versions: 2.0.0
>Reporter: Zhengdai Hu
>Assignee: Zhengdai Hu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 2.0.0
>
>
> In DirectoryCleaner#deleteOldestWhileTooLarge, the original implementation 
> calls file#delete without checking whether it succeeds, even though the files 
> are always reported as deleted. This prevents DirectoryCleaner from cleaning 
> up other files and invalidates any metrics built on top of it.





[jira] [Updated] (STORM-3170) DirectoryCleaner may not correctly report correct number of deleted files

2018-08-01 Thread Zhengdai Hu (JIRA)


 [ 
https://issues.apache.org/jira/browse/STORM-3170?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhengdai Hu updated STORM-3170:
---
Description: In DirectoryCleaner#deleteOldestWhileTooLarge, the original 
implementation calls file#delete without checking whether it succeeds, even 
though the files are always reported as deleted. This prevents DirectoryCleaner 
from cleaning up other files and invalidates any metrics built on top of it.  
(was: In DirectoryCleaner#deleteOldestWhileTooLarge, the original 
implementation calls file#delete without checking if it succeed or not, even 
though they're always reported as deleted. This invalidate any metrics built on 
top of this.)

> DirectoryCleaner may not correctly report correct number of deleted files
> -
>
> Key: STORM-3170
> URL: https://issues.apache.org/jira/browse/STORM-3170
> Project: Apache Storm
>  Issue Type: Bug
>  Components: storm-webapp
>Affects Versions: 2.0.0
>Reporter: Zhengdai Hu
>Assignee: Zhengdai Hu
>Priority: Major
> Fix For: 2.0.0
>
>
> In DirectoryCleaner#deleteOldestWhileTooLarge, the original implementation 
> calls file#delete without checking whether it succeeds, even though the files 
> are always reported as deleted. This prevents DirectoryCleaner from cleaning 
> up other files and invalidates any metrics built on top of it.





[jira] [Updated] (STORM-3169) Misleading logviewer.cleanup.age.min

2018-08-01 Thread Zhengdai Hu (JIRA)


 [ 
https://issues.apache.org/jira/browse/STORM-3169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhengdai Hu updated STORM-3169:
---
Description: The config logviewer.cleanup.age.min gives, in minutes, how long 
after a log file was last modified it is considered old. However, in actual 
use the value is subtracted from nowMills, the current time in milliseconds, 
so it must be converted to milliseconds.  (was: Config specification 
logviewer.cleanup.age.min labels the duration in minutes passed since a log 
file is modified before we consider the log to be old. However in the actual 
use it's been subtracted by nowMills, which is the current time in 
milliseconds. We should convert the minutes to millisecond for it to function 
correctly.)
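The conversion the description asks for is a one-liner with java.util.concurrent.TimeUnit. The names below are hypothetical illustrations, not Storm's actual code:

```java
import java.util.concurrent.TimeUnit;

public class CleanupAge {
    // Compute the cutoff timestamp: files last modified before this are
    // considered old. cleanupAgeMins is in minutes while nowMillis is in
    // milliseconds, so the age must be converted before subtracting.
    static long cutoffMillis(long nowMillis, long cleanupAgeMins) {
        return nowMillis - TimeUnit.MINUTES.toMillis(cleanupAgeMins);
    }

    public static void main(String[] args) {
        // 10 minutes = 600_000 ms, so the cutoff is 1_000_000 - 600_000.
        System.out.println(cutoffMillis(1_000_000L, 10)); // 400000
    }
}
```

Subtracting raw minutes from a millisecond clock, as the current code effectively does, makes the cutoff indistinguishable from "now" and defeats the age check.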

> Misleading logviewer.cleanup.age.min
> 
>
> Key: STORM-3169
> URL: https://issues.apache.org/jira/browse/STORM-3169
> Project: Apache Storm
>  Issue Type: Bug
>  Components: storm-webapp
>Affects Versions: 2.0.0
>Reporter: Zhengdai Hu
>Assignee: Zhengdai Hu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 2.0.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The config logviewer.cleanup.age.min gives, in minutes, how long after a log 
> file was last modified it is considered old. However, in actual use the value 
> is subtracted from nowMills, the current time in milliseconds. We should 
> convert it to milliseconds.





[jira] [Updated] (STORM-3170) DirectoryCleaner may not correctly report correct number of deleted files

2018-08-01 Thread Zhengdai Hu (JIRA)


 [ 
https://issues.apache.org/jira/browse/STORM-3170?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhengdai Hu updated STORM-3170:
---
Description: In DirectoryCleaner#deleteOldestWhileTooLarge, the original 
implementation calls file#delete without checking whether it succeeds, even 
though the files are always reported as deleted. This invalidates any metrics 
built on top of it.  (was: The original implementation calls file#delete 
without checking if it succeed or not, even though they're always reported as 
deleted. This invalidate any metrics built on top of this.)

> DirectoryCleaner may not correctly report correct number of deleted files
> -
>
> Key: STORM-3170
> URL: https://issues.apache.org/jira/browse/STORM-3170
> Project: Apache Storm
>  Issue Type: Bug
>  Components: storm-webapp
>Affects Versions: 2.0.0
>Reporter: Zhengdai Hu
>Assignee: Zhengdai Hu
>Priority: Major
> Fix For: 2.0.0
>
>
> In DirectoryCleaner#deleteOldestWhileTooLarge, the original implementation 
> calls file#delete without checking whether it succeeds, even though the files 
> are always reported as deleted. This invalidates any metrics built on top of 
> it.





[jira] [Created] (STORM-3170) DirectoryCleaner may not correctly report correct number of deleted files

2018-08-01 Thread Zhengdai Hu (JIRA)
Zhengdai Hu created STORM-3170:
--

 Summary: DirectoryCleaner may not correctly report correct number 
of deleted files
 Key: STORM-3170
 URL: https://issues.apache.org/jira/browse/STORM-3170
 Project: Apache Storm
  Issue Type: Bug
  Components: storm-webapp
Affects Versions: 2.0.0
Reporter: Zhengdai Hu
Assignee: Zhengdai Hu
 Fix For: 2.0.0


The original implementation calls file#delete without checking whether it 
succeeds, even though the files are always reported as deleted. This 
invalidates any metrics built on top of it.





[jira] [Updated] (STORM-3169) Misleading logviewer.cleanup.age.min

2018-08-01 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/STORM-3169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated STORM-3169:
--
Labels: pull-request-available  (was: )

> Misleading logviewer.cleanup.age.min
> 
>
> Key: STORM-3169
> URL: https://issues.apache.org/jira/browse/STORM-3169
> Project: Apache Storm
>  Issue Type: Bug
>  Components: storm-webapp
>Affects Versions: 2.0.0
>Reporter: Zhengdai Hu
>Assignee: Zhengdai Hu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 2.0.0
>
>
> The config logviewer.cleanup.age.min gives, in minutes, how long after a log 
> file was last modified it is considered old. However, in actual use the value 
> is subtracted from nowMills, the current time in milliseconds. We should 
> convert the minutes to milliseconds for it to function correctly.





[jira] [Created] (STORM-3169) Misleading logviewer.cleanup.age.min

2018-08-01 Thread Zhengdai Hu (JIRA)
Zhengdai Hu created STORM-3169:
--

 Summary: Misleading logviewer.cleanup.age.min
 Key: STORM-3169
 URL: https://issues.apache.org/jira/browse/STORM-3169
 Project: Apache Storm
  Issue Type: Bug
  Components: storm-webapp
Affects Versions: 2.0.0
Reporter: Zhengdai Hu
Assignee: Zhengdai Hu
 Fix For: 2.0.0


The config logviewer.cleanup.age.min gives, in minutes, how long after a log 
file was last modified it is considered old. However, in actual use the value 
is subtracted from nowMills, the current time in milliseconds. We should 
convert the minutes to milliseconds for it to function correctly.





[jira] [Updated] (STORM-3168) AsyncLocalizer cleanup appears to crash

2018-08-01 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/STORM-3168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated STORM-3168:
--
Labels: pull-request-available  (was: )

> AsyncLocalizer cleanup appears to crash
> ---
>
> Key: STORM-3168
> URL: https://issues.apache.org/jira/browse/STORM-3168
> Project: Apache Storm
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: Aaron Gresch
>Assignee: Aaron Gresch
>Priority: Major
>  Labels: pull-request-available
>
> I was investigating blobstore download messages that keep repeating for 
> hours in the supervisor (and nimbus) logs.  I turned on debug logging and 
> expected a cleanup debug message every 30 seconds 
> (https://github.com/apache/storm/blob/master/storm-server/src/main/java/org/apache/storm/localizer/AsyncLocalizer.java#L606).
>   It did not log.  I restarted the supervisor, and it started logging again.  
> It appears to have crashed with some error.  
> We should make sure the cleanup runs continuously and logs any failures so 
> they can be investigated.
>  
> {code:java}
> 2018-07-30 23:25:35.691 o.a.s.l.AsyncLocalizer AsyncLocalizer Executor - 2 
> [ERROR] Could not update blob, will retry again later
> java.util.concurrent.ExecutionException: java.lang.RuntimeException: Could 
> not download...
>         at 
> java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357) 
> ~[?:1.8.0_131]
>         at 
> java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1895) 
> ~[?:1.8.0_131]
>         at 
> org.apache.storm.localizer.AsyncLocalizer.updateBlobs(AsyncLocalizer.java:303)
>  ~[storm-server-2.0.0.y.jar:2.0.0.y]
>         at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> [?:1.8.0_131]
>         at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) 
> [?:1.8.0_131]
>         at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
>  [?:1.8.0_131]
>         at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
>  [?:1.8.0_131]
>         at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  [?:1.8.0_131]
>         at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  [?:1.8.0_131]
>         at java.lang.Thread.run(Thread.java:748) [?:1.8.0_131]
> Caused by: java.lang.RuntimeException: Could not download...
>         at 
> org.apache.storm.localizer.AsyncLocalizer.lambda$downloadOrUpdate$69(AsyncLocalizer.java:268)
>  ~[storm-server-2.0.0.y.jar:2.0.0.y]
>         at 
> java.util.concurrent.CompletableFuture$AsyncRun.run(CompletableFuture.java:1626)
>  ~[?:1.8.0_131]
>         at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[?:1.8.0_131]
>         at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
> ~[?:1.8.0_131]
>         at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
>  ~[?:1.8.0_131]
>         at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>  ~[?:1.8.0_131]
>         ... 3 more
> Caused by: org.apache.storm.generated.KeyNotFoundException
>         at 
> org.apache.storm.generated.Nimbus$getBlobMeta_result$getBlobMeta_resultStandardScheme.read(Nimbus.java:25853)
>  ~[storm-client-2.0.0.y.jar:2.0.0.y]
>         at 
> org.apache.storm.generated.Nimbus$getBlobMeta_result$getBlobMeta_resultStandardScheme.read(Nimbus.java:25821)
>  ~[storm-client-2.0.0.y.jar:2.0.0.y]
>         at 
> org.apache.storm.generated.Nimbus$getBlobMeta_result.read(Nimbus.java:25752) 
> ~[storm-client-2.0.0.y.jar:2.0.0.y]
>         at 
> org.apache.storm.thrift.TServiceClient.receiveBase(TServiceClient.java:88) 
> ~[shaded-deps-2.0.0.y.jar:2.0.0.y]
>         at 
> org.apache.storm.generated.Nimbus$Client.recv_getBlobMeta(Nimbus.java:798) 
> ~[storm-client-2.0.0.y.jar:2.0.0.y]
>         at 
> org.apache.storm.generated.Nimbus$Client.getBlobMeta(Nimbus.java:785) 
> ~[storm-client-2.0.0.y.jar:2.0.0.y]
>         at 
> org.apache.storm.blobstore.NimbusBlobStore.getBlobMeta(NimbusBlobStore.java:85)
>  ~[storm-client-2.0.0.y.jar:2.0.0.y]
>         at 
> org.apache.storm.localizer.LocallyCachedTopologyBlob.getRemoteVersion(LocallyCachedTopologyBlob.java:122)
>  ~[storm-server-2.0.0.y.jar:2.0.0.y]
>         at 
> org.apache.storm.localizer.AsyncLocalizer.lambda$downloadOrUpdate$69(AsyncLocalizer.java:252)
>  ~[storm-server-2.0.0.y.jar:2.0.0.y]
>         at 
> java.util.concurrent.CompletableFuture$AsyncRun.run(CompletableFuture.java:1626)
>  ~[?:1.8.0_131]
>         at 
> 

[jira] [Created] (STORM-3168) AsyncLocalizer cleanup appears to crash

2018-08-01 Thread Aaron Gresch (JIRA)
Aaron Gresch created STORM-3168:
---

 Summary: AsyncLocalizer cleanup appears to crash
 Key: STORM-3168
 URL: https://issues.apache.org/jira/browse/STORM-3168
 Project: Apache Storm
  Issue Type: Bug
Affects Versions: 2.0.0
Reporter: Aaron Gresch
Assignee: Aaron Gresch


I was investigating blobstore download messages that keep repeating for hours 
in the supervisor (and nimbus) logs.  I turned on debug logging and expected a 
cleanup debug message every 30 seconds 
(https://github.com/apache/storm/blob/master/storm-server/src/main/java/org/apache/storm/localizer/AsyncLocalizer.java#L606).
  It did not log.  I restarted the supervisor, and it started logging again.  
It appears to have crashed with some error.

We should make sure the cleanup runs continuously and logs any failures so 
they can be investigated.
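One plausible mechanism for a silent stop (an assumption on my part, not confirmed by the ticket) is that ScheduledExecutorService.scheduleAtFixedRate suppresses all subsequent runs once a task throws. Wrapping the task body in a catch-all that logs keeps the schedule alive; an illustrative sketch, not AsyncLocalizer's actual code:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class SafeCleanup {
    // scheduleAtFixedRate cancels the schedule after any uncaught
    // exception, so catch everything inside the task and log it instead
    // of letting it propagate.
    static Runnable logging(Runnable task) {
        return () -> {
            try {
                task.run();
            } catch (Throwable t) {
                System.err.println("cleanup failed, will retry next period: " + t);
            }
        };
    }

    public static void main(String[] args) throws InterruptedException {
        ScheduledExecutorService exec = Executors.newSingleThreadScheduledExecutor();
        exec.scheduleAtFixedRate(
            logging(() -> { throw new RuntimeException("boom"); }),
            0, 30, TimeUnit.SECONDS);
        Thread.sleep(100);   // the schedule survives the failure and stays live
        exec.shutdownNow();
    }
}
```

With this wrapper, each failure produces a log line to investigate instead of killing the periodic cleanup.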

 
{code:java}
2018-07-30 23:25:35.691 o.a.s.l.AsyncLocalizer AsyncLocalizer Executor - 2 
[ERROR] Could not update blob, will retry again later

java.util.concurrent.ExecutionException: java.lang.RuntimeException: Could not 
download...

        at 
java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357) 
~[?:1.8.0_131]

        at 
java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1895) 
~[?:1.8.0_131]

        at 
org.apache.storm.localizer.AsyncLocalizer.updateBlobs(AsyncLocalizer.java:303) 
~[storm-server-2.0.0.y.jar:2.0.0.y]

        at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
[?:1.8.0_131]

        at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) 
[?:1.8.0_131]

        at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
 [?:1.8.0_131]

        at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
 [?:1.8.0_131]

        at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
[?:1.8.0_131]

        at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
[?:1.8.0_131]

        at java.lang.Thread.run(Thread.java:748) [?:1.8.0_131]

Caused by: java.lang.RuntimeException: Could not download...

        at 
org.apache.storm.localizer.AsyncLocalizer.lambda$downloadOrUpdate$69(AsyncLocalizer.java:268)
 ~[storm-server-2.0.0.y.jar:2.0.0.y]

        at 
java.util.concurrent.CompletableFuture$AsyncRun.run(CompletableFuture.java:1626)
 ~[?:1.8.0_131]

        at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
~[?:1.8.0_131]

        at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
~[?:1.8.0_131]

        at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
 ~[?:1.8.0_131]

        at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
 ~[?:1.8.0_131]

        ... 3 more

Caused by: org.apache.storm.generated.KeyNotFoundException

        at 
org.apache.storm.generated.Nimbus$getBlobMeta_result$getBlobMeta_resultStandardScheme.read(Nimbus.java:25853)
 ~[storm-client-2.0.0.y.jar:2.0.0.y]

        at 
org.apache.storm.generated.Nimbus$getBlobMeta_result$getBlobMeta_resultStandardScheme.read(Nimbus.java:25821)
 ~[storm-client-2.0.0.y.jar:2.0.0.y]

        at 
org.apache.storm.generated.Nimbus$getBlobMeta_result.read(Nimbus.java:25752) 
~[storm-client-2.0.0.y.jar:2.0.0.y]

        at 
org.apache.storm.thrift.TServiceClient.receiveBase(TServiceClient.java:88) 
~[shaded-deps-2.0.0.y.jar:2.0.0.y]

        at 
org.apache.storm.generated.Nimbus$Client.recv_getBlobMeta(Nimbus.java:798) 
~[storm-client-2.0.0.y.jar:2.0.0.y]

        at 
org.apache.storm.generated.Nimbus$Client.getBlobMeta(Nimbus.java:785) 
~[storm-client-2.0.0.y.jar:2.0.0.y]

        at 
org.apache.storm.blobstore.NimbusBlobStore.getBlobMeta(NimbusBlobStore.java:85) 
~[storm-client-2.0.0.y.jar:2.0.0.y]

        at 
org.apache.storm.localizer.LocallyCachedTopologyBlob.getRemoteVersion(LocallyCachedTopologyBlob.java:122)
 ~[storm-server-2.0.0.y.jar:2.0.0.y]

        at 
org.apache.storm.localizer.AsyncLocalizer.lambda$downloadOrUpdate$69(AsyncLocalizer.java:252)
 ~[storm-server-2.0.0.y.jar:2.0.0.y]

        at 
java.util.concurrent.CompletableFuture$AsyncRun.run(CompletableFuture.java:1626)
 ~[?:1.8.0_131]

        at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
~[?:1.8.0_131]

        at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
~[?:1.8.0_131]

        at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
 ~[?:1.8.0_131]

        at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
 ~[?:1.8.0_131]

        ... 3 more

[jira] [Updated] (STORM-3167) Flaky test in metrics_test.clj

2018-08-01 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/STORM-3167?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated STORM-3167:
--
Labels: pull-request-available  (was: )

> Flaky test in metrics_test.clj
> --
>
> Key: STORM-3167
> URL: https://issues.apache.org/jira/browse/STORM-3167
> Project: Apache Storm
>  Issue Type: Bug
>  Components: storm-core
>Affects Versions: 2.0.0
>Reporter: Stig Rohde Døssing
>Assignee: Stig Rohde Døssing
>Priority: Minor
>  Labels: pull-request-available
>
> {code}
> classname: org.apache.storm.metrics-test / testname: test-builtin-metrics-2

[jira] [Created] (STORM-3167) Flaky test in metrics_test.clj

2018-08-01 Thread JIRA
Stig Rohde Døssing created STORM-3167:
-

 Summary: Flaky test in metrics_test.clj
 Key: STORM-3167
 URL: https://issues.apache.org/jira/browse/STORM-3167
 Project: Apache Storm
  Issue Type: Bug
  Components: storm-core
Affects Versions: 2.0.0
Reporter: Stig Rohde Døssing
Assignee: Stig Rohde Døssing


{code}
classname: org.apache.storm.metrics-test / testname: test-builtin-metrics-2
Uncaught exception, not in assertion.
expected: nil
 actual: java.util.ConcurrentModificationException: null
 at java.util.ArrayList$Itr.checkForComodification (ArrayList.java:907)
 java.util.ArrayList$Itr.next (ArrayList.java:857)
 com.google.common.collect.AbstractMapBasedMultimap$WrappedCollection$WrappedIterator.next (AbstractMapBasedMultimap.java:486)
 clojure.lang.PersistentVector.create (PersistentVector.java:105)
 clojure.lang.LazilyPersistentVector.create (LazilyPersistentVector.java:32)
 clojure.core$vec.invoke (core.clj:361)
 org.apache.storm.util$clojurify_structure$fn__206.invoke (util.clj:85)
 clojure.walk$prewalk.invoke (walk.clj:64)
 clojure.core$partial$fn__4527.invoke (core.clj:2493)
 clojure.core$map$fn__4553.invoke (core.clj:2622)
 clojure.lang.LazySeq.sval (LazySeq.java:40)
 clojure.lang.LazySeq.seq (LazySeq.java:49)
 clojure.lang.RT.seq (RT.java:507)
 clojure.core/seq (core.clj:137)
 clojure.core.protocols$seq_reduce.invoke (protocols.clj:30)
 clojure.core.protocols/fn (protocols.clj:101)
 clojure.core.protocols$fn__6452$G__6447__6465.invoke (protocols.clj:13)
 clojure.core$reduce.invoke (core.clj:6519)
 clojure.core$into.invoke (core.clj:6600)
 clojure.walk$walk.invoke (walk.clj:49)
 clojure.walk$prewalk.invoke (walk.clj:64)
 clojure.core$partial$fn__4527.invoke (core.clj:2493)
 clojure.core$map$fn__4553.invoke (core.clj:2624)
 clojure.lang.LazySeq.sval (LazySeq.java:40)
 clojure.lang.LazySeq.seq (LazySeq.java:49)
 clojure.lang.RT.seq (RT.java:507)
 clojure.core/seq (core.clj:137)
 clojure.core.protocols$seq_reduce.invoke (protocols.clj:30)
 clojure.core.protocols/fn (protocols.clj:101)
 clojure.core.protocols$fn__6452$G__6447__6465.invoke (protocols.clj:13)
 clojure.core$reduce.invoke (core.clj:6519)
 clojure.core$into.invoke (core.clj:6600)
 clojure.walk$walk.invoke (walk.clj:49)
 clojure.walk$prewalk.invoke (walk.clj:64)
 org.apache.storm.util$clojurify_structure.invoke (util.clj:83)
 org.apache.storm.metrics_test$wait_for_atleast_N_buckets_BANG_$reify__1258.exec (metrics_test.clj:79)
 org.apache.storm.Testing.whileTimeout (Testing.java:103)
 org.apache.storm.metrics_test$wait_for_atleast_N_buckets_BANG_.invoke (metrics_test.clj:77)
 org.apache.storm.metrics_test$assert_metric_running_sum_BANG_.invoke (metrics_test.clj:98)
 org.apache.storm.metrics_test/fn (metrics_test.clj:326)
 clojure.test$test_var$fn__7670.invoke (test.clj:704)
 clojure.test$test_var.invoke (test.clj:704)
 clojure.test$test_vars$fn__7692$fn__7697.invoke (test.clj:722)
 clojure.test$default_fixture.invoke (test.clj:674)
 clojure.test$test_vars$fn__7692.invoke (test.clj:722)
 clojure.test$default_fixture.invoke (test.clj:674)
 clojure.test$test_vars.invoke (test.clj:718)
 clojure.test$test_all_vars.invoke (test.clj:728)
 clojure.test$test_ns.invoke (test.clj:747)
 clojure.core$map$fn__4553.invoke (core.clj:2624)
 clojure.lang.LazySeq.sval (LazySeq.java:40)
 clojure.lang.LazySeq.seq (LazySeq.java:49)
 clojure.lang.Cons.next (Cons.java:39)
 clojure.lang.RT.boundedLength (RT.java:1735)
 clojure.lang.RestFn.applyTo (RestFn.java:130)
 clojure.core$apply.invoke (core.clj:632)
 clojure.test$run_tests.doInvoke (test.clj:762)
 clojure.lang.RestFn.invoke (RestFn.java:408)
 org.apache.storm.testrunner$eval5125$iter__5126__5130$fn__5131$fn__5132$fn__5133.invoke (test_runner.clj:107)
 org.apache.storm.testrunner$eval5125$iter__5126__5130$fn__5131$fn__5132.invoke (test_runner.clj:53)
 org.apache.storm.testrunner$eval5125$iter__5126__5130$fn__5131.invoke (test_runner.clj:52)
 clojure.lang.LazySeq.sval (LazySeq.java:40)
 clojure.lang.LazySeq.seq (LazySeq.java:49)
 clojure.lang.RT.seq (RT.java:507)
 clojure.core/seq (core.clj:137)
 clojure.core$dorun.invoke (core.clj:3009)
 org.apache.storm.testrunner$eval5125.invoke (test_runner.clj:52)
 clojure.lang.Compiler.eval (Compiler.java:6782)
 clojure.lang.Compiler.load (Compiler.java:7227)
 clojure.lang.Compiler.loadFile (Compiler.java:7165)
 clojure.main$load_script.invoke (main.clj:275)
 clojure.main$script_opt.invoke (main.clj:337)
 clojure.main$main.doInvoke (main.clj:421)
 clojure.lang.RestFn.invoke (RestFn.java:421)
 clojure.lang.Var.invoke (Var.java:383)
 clojure.lang.AFn.applyToHelper (AFn.java:156)
 clojure.lang.Var.applyTo (Var.java:700)
 clojure.main.main (main.java:37)
{code}
 

It looks to me like the issue is that FakeMetricsConsumer.getTaskIdToBuckets 
returns a view of a map that may be modified at any time (the 
getTaskIdToBuckets is
[jira] [Updated] (STORM-3166) Utils.threadDump does not account for dead threads

2018-08-01 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/STORM-3166?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated STORM-3166:
--
Labels: pull-request-available  (was: )

> Utils.threadDump does not account for dead threads
> --
>
> Key: STORM-3166
> URL: https://issues.apache.org/jira/browse/STORM-3166
> Project: Apache Storm
>  Issue Type: Bug
>  Components: storm-core
>Affects Versions: 2.0.0
>Reporter: Stig Rohde Døssing
>Assignee: Stig Rohde Døssing
>Priority: Minor
>  Labels: pull-request-available
>



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (STORM-3166) Utils.threadDump does not account for dead threads

2018-08-01 Thread JIRA
Stig Rohde Døssing created STORM-3166:
-

 Summary: Utils.threadDump does not account for dead threads
 Key: STORM-3166
 URL: https://issues.apache.org/jira/browse/STORM-3166
 Project: Apache Storm
  Issue Type: Bug
  Components: storm-core
Affects Versions: 2.0.0
Reporter: Stig Rohde Døssing
Assignee: Stig Rohde Døssing


Saw this test failure
{code}
classname: integration.org.apache.storm.integration-test / testname: test-validate-topology-structure
Uncaught exception, not in assertion.
expected: nil
  actual: java.lang.NullPointerException: null
 at org.apache.storm.utils.Utils.threadDump (Utils.java:1191)
org.apache.storm.Testing.whileTimeout (Testing.java:107)
org.apache.storm.Testing.completeTopology (Testing.java:437)
integration.org.apache.storm.integration_test$try_complete_wc_topology.invoke (integration_test.clj:247)
integration.org.apache.storm.integration_test/fn (integration_test.clj:259)
clojure.test$test_var$fn__7670.invoke (test.clj:704)
clojure.test$test_var.invoke (test.clj:704)
clojure.test$test_vars$fn__7692$fn__7697.invoke (test.clj:722)
clojure.test$default_fixture.invoke (test.clj:674)
clojure.test$test_vars$fn__7692.invoke (test.clj:722)
clojure.test$default_fixture.invoke (test.clj:674)
clojure.test$test_vars.invoke (test.clj:718)
clojure.test$test_all_vars.invoke (test.clj:728)
clojure.test$test_ns.invoke (test.clj:747)
clojure.core$map$fn__4553.invoke (core.clj:2624)
clojure.lang.LazySeq.sval (LazySeq.java:40)
clojure.lang.LazySeq.seq (LazySeq.java:49)
clojure.lang.Cons.next (Cons.java:39)
clojure.lang.RT.boundedLength (RT.java:1735)
clojure.lang.RestFn.applyTo (RestFn.java:130)
clojure.core$apply.invoke (core.clj:632)
clojure.test$run_tests.doInvoke (test.clj:762)
clojure.lang.RestFn.invoke (RestFn.java:408)
org.apache.storm.testrunner$eval5125$iter__5126__5130$fn__5131$fn__5132$fn__5133.invoke (test_runner.clj:107)
org.apache.storm.testrunner$eval5125$iter__5126__5130$fn__5131$fn__5132.invoke (test_runner.clj:53)
org.apache.storm.testrunner$eval5125$iter__5126__5130$fn__5131.invoke (test_runner.clj:52)
clojure.lang.LazySeq.sval (LazySeq.java:40)
clojure.lang.LazySeq.seq (LazySeq.java:49)
clojure.lang.RT.seq (RT.java:507)
clojure.core/seq (core.clj:137)
clojure.core$dorun.invoke (core.clj:3009)
org.apache.storm.testrunner$eval5125.invoke (test_runner.clj:52)
clojure.lang.Compiler.eval (Compiler.java:6782)
clojure.lang.Compiler.load (Compiler.java:7227)
clojure.lang.Compiler.loadFile (Compiler.java:7165)
clojure.main$load_script.invoke (main.clj:275)
clojure.main$script_opt.invoke (main.clj:337)
clojure.main$main.doInvoke (main.clj:421)
clojure.lang.RestFn.invoke (RestFn.java:421)
clojure.lang.Var.invoke (Var.java:383)
clojure.lang.AFn.applyToHelper (AFn.java:156)
clojure.lang.Var.applyTo (Var.java:700)
clojure.main.main (main.java:37)
{code}

Utils.threadDump needs to check whether ThreadInfo objects are null before 
trying to dereference them, since the ThreadMXBean.getThreadInfo method will 
return null for threads that are dead.
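A minimal sketch of the fix, assuming a hypothetical threadDump-style helper (not the actual Utils.threadDump implementation): ThreadMXBean.getThreadInfo returns a null element for any thread id that no longer belongs to a live thread, so each entry must be null-checked before use.

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public class ThreadDumpSketch {
    // Builds a textual dump of all live threads. Threads that die between
    // getAllThreadIds() and the info lookup yield null entries, which are skipped
    // instead of dereferenced (the NPE in the stack trace above).
    static String threadDump() {
        StringBuilder out = new StringBuilder();
        ThreadMXBean bean = ManagementFactory.getThreadMXBean();
        for (ThreadInfo info : bean.getThreadInfo(bean.getAllThreadIds(), 100)) {
            if (info == null) {
                continue; // thread exited; nothing to report
            }
            out.append('"').append(info.getThreadName()).append("\" ")
               .append(info.getThreadState()).append('\n');
            for (StackTraceElement frame : info.getStackTrace()) {
                out.append("    at ").append(frame).append('\n');
            }
        }
        return out.toString();
    }

    public static void main(String[] args) {
        System.out.print(threadDump());
    }
}
```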



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (STORM-1759) Viewing logs from the Storm UI doesn't work in dockerized environment

2018-08-01 Thread Chris Clarke (JIRA)


[ 
https://issues.apache.org/jira/browse/STORM-1759?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16565282#comment-16565282
 ] 

Chris Clarke commented on STORM-1759:
-

In case it's helpful, here's our proxy setup that solves this pretty well: 
[https://gist.github.com/xofer/e2a703d80979108c76ce53e5361fbc4d]

There are a handful of links that simply couldn't be rewritten, so remain 
broken.

Originally posted on https://issues.apache.org/jira/browse/STORM-580

> Viewing logs from the Storm UI doesn't work in dockerized environment
> -
>
> Key: STORM-1759
> URL: https://issues.apache.org/jira/browse/STORM-1759
> Project: Apache Storm
>  Issue Type: Bug
>  Components: storm-ui
>Affects Versions: 0.10.0, 1.0.0, 0.9.6
>Reporter: Elisey Zanko
>Priority: Major
>
> I run Storm using the following docker-compose.yml:
> {code}
> version: '2'
> services:
>   zookeeper:
>     image: jplock/zookeeper:3.4.8
>     restart: always
>   nimbus:
>     image: 31z4/storm:1.0.0
>     command: nimbus -c storm.log.dir="/logs" -c storm.zookeeper.servers="[\"zookeeper\"]" -c nimbus.host="nimbus"
>     depends_on:
>       - zookeeper
>     restart: always
>     ports:
>       - 6627:6627
>     volumes:
>       - logs:/logs
>   supervisor:
>     image: 31z4/storm:1.0.0
>     command: supervisor -c storm.log.dir="/logs" -c storm.zookeeper.servers="[\"zookeeper\"]" -c nimbus.host="nimbus"
>     depends_on:
>       - nimbus
>     restart: always
>     volumes:
>       - logs:/logs
>   logviewer:
>     image: 31z4/storm:1.0.0
>     command: logviewer -c storm.log.dir="/logs"
>     restart: always
>     ports:
>       - 8000:8000
>     volumes:
>       - logs:/logs
>   ui:
>     image: 31z4/storm:1.0.0
>     command: ui -c storm.log.dir="/logs" -c nimbus.host="nimbus"
>     depends_on:
>       - nimbus
>       - logviewer
>     restart: always
>     ports:
>       - 8080:8080
>     volumes:
>       - logs:/log
> volumes:
>   logs: {}
> {code}
> Opening the logs from the Storm UI doesn't work, because all links point to 
> different container IDs as hosts.
> I guess adding the ability to explicitly specify the logviewer host in 
> storm.yaml would solve the issue.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)