[jira] [Closed] (STORM-3816) Unrecognized VM option 'PrintGCDateStamps'

2022-04-21 Thread Max Schmidt (Jira)


 [ https://issues.apache.org/jira/browse/STORM-3816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Max Schmidt closed STORM-3816.
--
Resolution: Fixed

Makes sense now. Thank you.

> Unrecognized VM option 'PrintGCDateStamps'
> --
>
> Key: STORM-3816
> URL: https://issues.apache.org/jira/browse/STORM-3816
> Project: Apache Storm
>  Issue Type: Bug
>  Components: storm-core
>Affects Versions: 2.3.0
>Reporter: Max Schmidt
>Priority: Blocker
>
> When starting Storm from the official Docker images 
> [https://hub.docker.com/_/storm], following the listed example, and then 
> deploying a topology, the worker does not come up (logs inside the supervisor):
> {code:java}
> 2022-01-10 14:24:14.803 STDERR Thread-0 [INFO] Unrecognized VM option 'PrintGCDateStamps'
> 2022-01-10 14:24:14.803 STDERR Thread-1 [INFO] [0.001s][warning][gc] -Xloggc is deprecated. Will use -Xlog:gc:artifacts/gc.log instead.
> 2022-01-10 14:24:14.811 STDERR Thread-0 [INFO] Error: Could not create the Java Virtual Machine.
> 2022-01-10 14:24:14.811 STDERR Thread-0 [INFO] Error: A fatal exception has occurred. Program will exit. {code}



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Created] (STORM-3816) Unrecognized VM option 'PrintGCDateStamps'

2022-01-10 Thread Max Schmidt (Jira)
Max Schmidt created STORM-3816:
--

 Summary: Unrecognized VM option 'PrintGCDateStamps'
 Key: STORM-3816
 URL: https://issues.apache.org/jira/browse/STORM-3816
 Project: Apache Storm
  Issue Type: Bug
  Components: storm-core
Affects Versions: 2.3.0
Reporter: Max Schmidt


When starting Storm from the official Docker images 
[https://hub.docker.com/_/storm], following the listed example, and then 
deploying a topology, the worker does not come up (logs inside the supervisor):
{code:java}
2022-01-10 14:24:14.803 STDERR Thread-0 [INFO] Unrecognized VM option 'PrintGCDateStamps'
2022-01-10 14:24:14.803 STDERR Thread-1 [INFO] [0.001s][warning][gc] -Xloggc is deprecated. Will use -Xlog:gc:artifacts/gc.log instead.
2022-01-10 14:24:14.811 STDERR Thread-0 [INFO] Error: Could not create the Java Virtual Machine.
2022-01-10 14:24:14.811 STDERR Thread-0 [INFO] Error: A fatal exception has occurred. Program will exit. {code}
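Background on the failure: -XX:+PrintGCDateStamps and the other legacy GC-logging flags were removed in JDK 9+, so a worker launched with the old default options refuses to start. Until the image ships fixed defaults, the worker JVM options can be overridden in storm.yaml. The snippet below is only a sketch: the -Xlog selector and heap size are illustrative values, not what the image actually ships.

```yaml
# storm.yaml override for JDK 9+ workers (illustrative values; adjust to your setup).
# Replaces the removed -XX:+PrintGCDateStamps/-Xloggc flags with unified logging.
worker.childopts: "-Xmx768m -Xlog:gc*:artifacts/gc.log:time,uptime"
```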





[jira] [Commented] (STORM-3750) Deactivation throws java.nio.channels.ClosedSelectorException in KafkaSpout

2021-03-05 Thread Max Schmidt (Jira)


 [ https://issues.apache.org/jira/browse/STORM-3750?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17295973#comment-17295973 ]

Max Schmidt commented on STORM-3750:


Sorry, this is a duplicate of https://issues.apache.org/jira/browse/STORM-3529

> Deactivation throws java.nio.channels.ClosedSelectorException in KafkaSpout
> ---
>
> Key: STORM-3750
> URL: https://issues.apache.org/jira/browse/STORM-3750
> Project: Apache Storm
>  Issue Type: Bug
>  Components: storm-kafka-client
>Affects Versions: 1.2.2
>Reporter: Max Schmidt
>Priority: Major
>
> When deactivating a topology that uses a kafka spout, the following exception 
> is thrown:
> {code:java}
> java.lang.RuntimeException: java.nio.channels.ClosedSelectorException
> ...
> {code}


[jira] [Closed] (STORM-3750) Deactivation throws java.nio.channels.ClosedSelectorException in KafkaSpout

2021-03-05 Thread Max Schmidt (Jira)


 [ https://issues.apache.org/jira/browse/STORM-3750?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Max Schmidt closed STORM-3750.
--
Resolution: Duplicate

> Deactivation throws java.nio.channels.ClosedSelectorException in KafkaSpout
> ---
>
> Key: STORM-3750
> URL: https://issues.apache.org/jira/browse/STORM-3750
> Project: Apache Storm
>  Issue Type: Bug
>  Components: storm-kafka-client
>Affects Versions: 1.2.2
>Reporter: Max Schmidt
>Priority: Major
>
> When deactivating a topology that uses a kafka spout, the following exception 
> is thrown:
> {code:java}
> java.lang.RuntimeException: java.nio.channels.ClosedSelectorException
> ...
> {code}


[jira] [Updated] (STORM-3750) Deactivation throws java.nio.channels.ClosedSelectorException in KafkaSpout

2021-03-05 Thread Max Schmidt (Jira)


 [ https://issues.apache.org/jira/browse/STORM-3750?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Max Schmidt updated STORM-3750:
---
Component/s: storm-kafka-client

> Deactivation throws java.nio.channels.ClosedSelectorException in KafkaSpout
> ---
>
> Key: STORM-3750
> URL: https://issues.apache.org/jira/browse/STORM-3750
> Project: Apache Storm
>  Issue Type: Bug
>  Components: storm-kafka-client
>Affects Versions: 1.2.2
>Reporter: Max Schmidt
>Priority: Major
>
> When deactivating a topology that uses a kafka spout, the following exception 
> is thrown:
> {code:java}
> java.lang.RuntimeException: java.nio.channels.ClosedSelectorException
> ...
> {code}


[jira] [Created] (STORM-3750) Deactivation throws java.nio.channels.ClosedSelectorException in KafkaSpout

2021-03-05 Thread Max Schmidt (Jira)
Max Schmidt created STORM-3750:
--

 Summary: Deactivation throws 
java.nio.channels.ClosedSelectorException in KafkaSpout
 Key: STORM-3750
 URL: https://issues.apache.org/jira/browse/STORM-3750
 Project: Apache Storm
  Issue Type: Bug
Affects Versions: 1.2.2
Reporter: Max Schmidt


When deactivating a topology that uses a Kafka spout, the following exception 
is thrown:
{code:java}
java.lang.RuntimeException: java.nio.channels.ClosedSelectorException
	at org.apache.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:522) ~[storm-core-1.2.2.jar:1.2.2]
	at org.apache.storm.utils.DisruptorQueue.consumeBatchWhenAvailable(DisruptorQueue.java:487) ~[storm-core-1.2.2.jar:1.2.2]
	at org.apache.storm.utils.DisruptorQueue.consumeBatch(DisruptorQueue.java:477) ~[storm-core-1.2.2.jar:1.2.2]
	at org.apache.storm.disruptor$consume_batch.invoke(disruptor.clj:70) ~[storm-core-1.2.2.jar:1.2.2]
	at org.apache.storm.daemon.executor$fn__10727$fn__10742$fn__10773.invoke(executor.clj:634) ~[storm-core-1.2.2.jar:1.2.2]
	at org.apache.storm.util$async_loop$fn__553.invoke(util.clj:484) [storm-core-1.2.2.jar:1.2.2]
	at clojure.lang.AFn.run(AFn.java:22) [clojure-1.7.0.jar:?]
	at java.lang.Thread.run(Thread.java:748) [?:1.8.0_181]
Caused by: java.nio.channels.ClosedSelectorException
	at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:83) ~[?:1.8.0_181]
	at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) ~[?:1.8.0_181]
	at org.apache.kafka.common.network.Selector.select(Selector.java:499) ~[stormjar.jar:?]
	at org.apache.kafka.common.network.Selector.poll(Selector.java:308) ~[stormjar.jar:?]
	at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:349) ~[stormjar.jar:?]
	at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:226) ~[stormjar.jar:?]
	at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:188) ~[stormjar.jar:?]
	at org.apache.kafka.clients.consumer.internals.Fetcher.retrieveOffsetsByTimes(Fetcher.java:408) ~[stormjar.jar:?]
	at org.apache.kafka.clients.consumer.internals.Fetcher.beginningOrEndOffset(Fetcher.java:451) ~[stormjar.jar:?]
	at org.apache.kafka.clients.consumer.internals.Fetcher.beginningOffsets(Fetcher.java:436) ~[stormjar.jar:?]
	at org.apache.kafka.clients.consumer.KafkaConsumer.beginningOffsets(KafkaConsumer.java:1473) ~[stormjar.jar:?]
	at org.apache.storm.kafka.spout.metrics.KafkaOffsetMetric.getValueAndReset(KafkaOffsetMetric.java:79) ~[stormjar.jar:?]
	at org.apache.storm.daemon.executor$metrics_tick$fn__10651.invoke(executor.clj:345) ~[storm-core-1.2.2.jar:1.2.2]
	at clojure.core$map$fn__4553.invoke(core.clj:2622) ~[clojure-1.7.0.jar:?]
	at clojure.lang.LazySeq.sval(LazySeq.java:40) ~[clojure-1.7.0.jar:?]
	at clojure.lang.LazySeq.seq(LazySeq.java:49) ~[clojure-1.7.0.jar:?]
	at clojure.lang.RT.seq(RT.java:507) ~[clojure-1.7.0.jar:?]
	at clojure.core$seq__4128.invoke(core.clj:137) ~[clojure-1.7.0.jar:?]
	at clojure.core$filter$fn__4580.invoke(core.clj:2679) ~[clojure-1.7.0.jar:?]
	at clojure.lang.LazySeq.sval(LazySeq.java:40) ~[clojure-1.7.0.jar:?]
	at clojure.lang.LazySeq.seq(LazySeq.java:49) ~[clojure-1.7.0.jar:?]
	at clojure.lang.Cons.next(Cons.java:39) ~[clojure-1.7.0.jar:?]
	at clojure.lang.RT.next(RT.java:674) ~[clojure-1.7.0.jar:?]
	at clojure.core$next__4112.invoke(core.clj:64) ~[clojure-1.7.0.jar:?]
	at clojure.core.protocols$fn__6523.invoke(protocols.clj:170) ~[clojure-1.7.0.jar:?]
	at clojure.core.protocols$fn__6478$G__6473__6487.invoke(protocols.clj:19) ~[clojure-1.7.0.jar:?]
	at clojure.core.protocols$seq_reduce.invoke(protocols.clj:31) ~[clojure-1.7.0.jar:?]
	at clojure.core.protocols$fn__6506.invoke(protocols.clj:101) ~[clojure-1.7.0.jar:?]
	at clojure.core.protocols$fn__6452$G__6447__6465.invoke(protocols.clj:13) ~[clojure-1.7.0.jar:?]
	at clojure.core$reduce.invoke(core.clj:6519) ~[clojure-1.7.0.jar:?]
	at clojure.core$into.invoke(core.clj:6600) ~[clojure-1.7.0.jar:?]
	at org.apache.storm.daemon.executor$metrics_tick.invoke(executor.clj:349) ~[storm-core-1.2.2.jar:1.2.2]
	at org.apache.storm.daemon.executor$fn__10727$tuple_action_fn__10733.invoke(executor.clj:522) ~[storm-core-1.2.2.jar:1.2.2]
	at org.apache.storm.daemon.executor$mk_task_receiver$fn__10716.invoke(executor.clj:471) ~[storm-core-1.2.2.jar:1.2.2]
	at org.apache.storm.disruptor$clojure_handler$reify__10135.onEvent(disruptor.clj:41) ~[storm-core-1.2.2.jar:1.2.2]
	at org.apache.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:509) ~[storm-core-1.2.2.jar:1.2.2]
	... 7 more{code}
The problem is that this causes the worker to die.
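Reading the trace, the metrics tick thread is still calling KafkaConsumer.beginningOffsets() after deactivation has closed the consumer and torn down its Selector. The sketch below is a simplified, hypothetical reconstruction of that race, not Storm's actual code (StubConsumer and all names are invented for illustration); it shows a common mitigation: guard both paths with one lock and have the metric report nothing once the consumer is closed.

```java
import java.util.concurrent.locks.ReentrantLock;

// Stand-in for the real KafkaConsumer; throwing models ClosedSelectorException.
class StubConsumer {
    private boolean closed = false;
    long beginningOffset() {
        if (closed) throw new IllegalStateException("selector closed");
        return 42L;
    }
    void close() { closed = true; }
    boolean isClosed() { return closed; }
}

public class OffsetMetricRace {
    private final ReentrantLock lock = new ReentrantLock();
    private final StubConsumer consumer = new StubConsumer();

    // Metrics tick: skip the read instead of crashing once the consumer is gone.
    Long getValueAndReset() {
        lock.lock();
        try {
            if (consumer.isClosed()) return null; // deactivated: report nothing
            return consumer.beginningOffset();
        } finally {
            lock.unlock();
        }
    }

    // Deactivation path: close under the same lock so no tick runs mid-close.
    void deactivate() {
        lock.lock();
        try {
            consumer.close();
        } finally {
            lock.unlock();
        }
    }

    public static void main(String[] args) {
        OffsetMetricRace metric = new OffsetMetricRace();
        System.out.println(metric.getValueAndReset()); // 42
        metric.deactivate();
        System.out.println(metric.getValueAndReset()); // null, no exception
    }
}
```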





[jira] [Created] (STORM-3698) AbstractHdfsBolt does not sync Writers that are purged

2020-09-11 Thread Max Schmidt (Jira)
Max Schmidt created STORM-3698:
--

 Summary: AbstractHdfsBolt does not sync Writers that are purged
 Key: STORM-3698
 URL: https://issues.apache.org/jira/browse/STORM-3698
 Project: Apache Storm
  Issue Type: Bug
  Components: storm-hdfs
Affects Versions: 1.2.2
Reporter: Max Schmidt


We just discovered, when using a SequenceFileBolt (although it might happen with 
other implementations as well), that the writers held in the 
AbstractHdfsBolt.writers map are not closed/synced when the removeEldestEntry 
method evicts them from the map.

This leads to data loss.

It can be reproduced by creating a SequenceFileBolt.withMaxOpenFiles(1) and 
writing to just two different files: one will have a size of zero, while the 
other contains the data.
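The eviction path can be illustrated with a plain LinkedHashMap: removeEldestEntry silently drops the eldest entry, so any cleanup (hsync/close in the HDFS case) has to happen explicitly inside the override. A minimal sketch with a hypothetical Writer stand-in, not the real storm-hdfs classes:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative stand-in for an HDFS writer; flushed tracks whether data was synced.
class Writer {
    boolean flushed = false;
    void syncAndClose() { flushed = true; } // stand-in for hsync() + close()
}

public class EvictingWriters {
    // Bounded writer map: evicted writers are synced first, i.e. the step
    // the bug report says is missing from AbstractHdfsBolt.
    static Map<String, Writer> boundedWriters(int maxOpenFiles) {
        return new LinkedHashMap<>() {
            @Override
            protected boolean removeEldestEntry(Map.Entry<String, Writer> eldest) {
                if (size() > maxOpenFiles) {
                    eldest.getValue().syncAndClose(); // without this, buffered data is silently dropped
                    return true;
                }
                return false;
            }
        };
    }

    public static void main(String[] args) {
        Map<String, Writer> writers = boundedWriters(1);
        Writer first = new Writer();
        writers.put("file-a", first);
        writers.put("file-b", new Writer()); // evicts and syncs file-a
        System.out.println(first.flushed);   // true
    }
}
```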


