[jira] [Closed] (STORM-3816) Unrecognized VM option 'PrintGCDateStamps'

2022-04-21 Thread Max Schmidt (Jira)


 [ 
https://issues.apache.org/jira/browse/STORM-3816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Max Schmidt closed STORM-3816.
--
Resolution: Fixed

Makes sense now. Thank you

> Unrecognized VM option 'PrintGCDateStamps'
> --
>
> Key: STORM-3816
> URL: https://issues.apache.org/jira/browse/STORM-3816
> Project: Apache Storm
>  Issue Type: Bug
>  Components: storm-core
>Affects Versions: 2.3.0
>    Reporter: Max Schmidt
>Priority: Blocker
>
> When starting Storm using the official Docker images 
> [https://hub.docker.com/_/storm], following the listed example, and then deploying 
> a topology, the worker does not come up (logs from inside the supervisor):
> {code:java}
> 2022-01-10 14:24:14.803 STDERR Thread-0 [INFO] Unrecognized VM option 
> 'PrintGCDateStamps'
> 2022-01-10 14:24:14.803 STDERR Thread-1 [INFO] [0.001s][warning][gc] -Xloggc 
> is deprecated. Will use -Xlog:gc:artifacts/gc.log instead.
> 2022-01-10 14:24:14.811 STDERR Thread-0 [INFO] Error: Could not create the 
> Java Virtual Machine.
> 2022-01-10 14:24:14.811 STDERR Thread-0 [INFO] Error: A fatal exception has 
> occurred. Program will exit. {code}





[jira] [Created] (STORM-3816) Unrecognized VM option 'PrintGCDateStamps'

2022-01-10 Thread Max Schmidt (Jira)
Max Schmidt created STORM-3816:
--

 Summary: Unrecognized VM option 'PrintGCDateStamps'
 Key: STORM-3816
 URL: https://issues.apache.org/jira/browse/STORM-3816
 Project: Apache Storm
  Issue Type: Bug
  Components: storm-core
Affects Versions: 2.3.0
Reporter: Max Schmidt


When starting Storm using the official Docker images 
[https://hub.docker.com/_/storm], following the listed example, and then deploying a 
topology, the worker does not come up (logs from inside the supervisor):
{code:java}
2022-01-10 14:24:14.803 STDERR Thread-0 [INFO] Unrecognized VM option 
'PrintGCDateStamps'
2022-01-10 14:24:14.803 STDERR Thread-1 [INFO] [0.001s][warning][gc] -Xloggc is 
deprecated. Will use -Xlog:gc:artifacts/gc.log instead.
2022-01-10 14:24:14.811 STDERR Thread-0 [INFO] Error: Could not create the Java 
Virtual Machine.
2022-01-10 14:24:14.811 STDERR Thread-0 [INFO] Error: A fatal exception has 
occurred. Program will exit. {code}
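The errors come from legacy GC flags (-XX:+PrintGCDateStamps, -Xloggc) that JDK 9+ no longer accepts. As a hedged sketch only (the flag set and log path below are illustrative, not Storm's shipped defaults), the worker JVM options can be switched to unified GC logging via worker.childopts in storm.yaml:
{code}
# storm.yaml (sketch; adjust heap size and log path to your deployment)
worker.childopts: "-Xmx768m -Xlog:gc*:file=artifacts/gc.log:time,uptime:filecount=5,filesize=10m"
{code}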





[jira] [Commented] (STORM-3750) Deactivation throws java.nio.channels.ClosedSelectorException in KafkaSpout

2021-03-05 Thread Max Schmidt (Jira)


[ 
https://issues.apache.org/jira/browse/STORM-3750?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17295973#comment-17295973
 ] 

Max Schmidt commented on STORM-3750:


Sorry, this is a duplicate of https://issues.apache.org/jira/browse/STORM-3529

> Deactivation throws java.nio.channels.ClosedSelectorException in KafkaSpout
> ---
>
> Key: STORM-3750
> URL: https://issues.apache.org/jira/browse/STORM-3750
> Project: Apache Storm
>  Issue Type: Bug
>  Components: storm-kafka-client
>Affects Versions: 1.2.2
>    Reporter: Max Schmidt
>Priority: Major
>
> When deactivating a topology that uses a kafka spout, the following exception 
> is thrown:
> {code:java}
> java.lang.RuntimeException: 
> java.nio.channels.ClosedSelectorExceptionjava.lang.RuntimeException: 
> java.nio.channels.ClosedSelectorException at 
> org.apache.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:522)
>  ~[storm-core-1.2.2.jar:1.2.2] at 
> org.apache.storm.utils.DisruptorQueue.consumeBatchWhenAvailable(DisruptorQueue.java:487)
>  ~[storm-core-1.2.2.jar:1.2.2] at 
> org.apache.storm.utils.DisruptorQueue.consumeBatch(DisruptorQueue.java:477) 
> ~[storm-core-1.2.2.jar:1.2.2] at 
> org.apache.storm.disruptor$consume_batch.invoke(disruptor.clj:70) 
> ~[storm-core-1.2.2.jar:1.2.2] at 
> org.apache.storm.daemon.executor$fn__10727$fn__10742$fn__10773.invoke(executor.clj:634)
>  ~[storm-core-1.2.2.jar:1.2.2] at 
> org.apache.storm.util$async_loop$fn__553.invoke(util.clj:484) 
> [storm-core-1.2.2.jar:1.2.2] at clojure.lang.AFn.run(AFn.java:22) 
> [clojure-1.7.0.jar:?] at java.lang.Thread.run(Thread.java:748) 
> [?:1.8.0_181]Caused by: java.nio.channels.ClosedSelectorException at 
> sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:83) ~[?:1.8.0_181] 
> at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) ~[?:1.8.0_181] at 
> org.apache.kafka.common.network.Selector.select(Selector.java:499) 
> ~[stormjar.jar:?] at 
> org.apache.kafka.common.network.Selector.poll(Selector.java:308) 
> ~[stormjar.jar:?] at 
> org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:349) 
> ~[stormjar.jar:?] at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:226)
>  ~[stormjar.jar:?] at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:188)
>  ~[stormjar.jar:?] at 
> org.apache.kafka.clients.consumer.internals.Fetcher.retrieveOffsetsByTimes(Fetcher.java:408)
>  ~[stormjar.jar:?] at 
> org.apache.kafka.clients.consumer.internals.Fetcher.beginningOrEndOffset(Fetcher.java:451)
>  ~[stormjar.jar:?] at 
> org.apache.kafka.clients.consumer.internals.Fetcher.beginningOffsets(Fetcher.java:436)
>  ~[stormjar.jar:?] at 
> org.apache.kafka.clients.consumer.KafkaConsumer.beginningOffsets(KafkaConsumer.java:1473)
>  ~[stormjar.jar:?] at 
> org.apache.storm.kafka.spout.metrics.KafkaOffsetMetric.getValueAndReset(KafkaOffsetMetric.java:79)
>  ~[stormjar.jar:?] at 
> org.apache.storm.daemon.executor$metrics_tick$fn__10651.invoke(executor.clj:345)
>  ~[storm-core-1.2.2.jar:1.2.2] at 
> clojure.core$map$fn__4553.invoke(core.clj:2622) ~[clojure-1.7.0.jar:?] at 
> clojure.lang.LazySeq.sval(LazySeq.java:40) ~[clojure-1.7.0.jar:?] at 
> clojure.lang.LazySeq.seq(LazySeq.java:49) ~[clojure-1.7.0.jar:?] at 
> clojure.lang.RT.seq(RT.java:507) ~[clojure-1.7.0.jar:?] at 
> clojure.core$seq__4128.invoke(core.clj:137) ~[clojure-1.7.0.jar:?] at 
> clojure.core$filter$fn__4580.invoke(core.clj:2679) ~[clojure-1.7.0.jar:?] at 
> clojure.lang.LazySeq.sval(LazySeq.java:40) ~[clojure-1.7.0.jar:?] at 
> clojure.lang.LazySeq.seq(LazySeq.java:49) ~[clojure-1.7.0.jar:?] at 
> clojure.lang.Cons.next(Cons.java:39) ~[clojure-1.7.0.jar:?] at 
> clojure.lang.RT.next(RT.java:674) ~[clojure-1.7.0.jar:?] at 
> clojure.core$next__4112.invoke(core.clj:64) ~[clojure-1.7.0.jar:?] at 
> clojure.core.protocols$fn__6523.invoke(protocols.clj:170) 
> ~[clojure-1.7.0.jar:?] at 
> clojure.core.protocols$fn__6478$G__6473__6487.invoke(protocols.clj:19) 
> ~[clojure-1.7.0.jar:?] at 
> clojure.core.protocols$seq_reduce.invoke(protocols.clj:31) 
> ~[clojure-1.7.0.jar:?] at 
> clojure.core.protocols$fn__6506.invoke(protocols.clj:101) 
> ~[clojure-1.7.0.jar:?] at 
> clojure.core.protocols$fn__6452$G__6447__6465.invoke(protocols.clj:13) 
> ~[clojure-1.7.0.jar:?] at clojure.core$reduce.invoke(core.clj:6519) 
> ~[clojure-1.7.0.jar:?] at clojure.core$into.invoke(core.clj:6600) 
> ~[clojure-1.7.0.jar:?] at 
> org.apache.storm.daemon.executor

[jira] [Closed] (STORM-3750) Deactivation throws java.nio.channels.ClosedSelectorException in KafkaSpout

2021-03-05 Thread Max Schmidt (Jira)


 [ 
https://issues.apache.org/jira/browse/STORM-3750?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Max Schmidt closed STORM-3750.
--
Resolution: Duplicate

> Deactivation throws java.nio.channels.ClosedSelectorException in KafkaSpout
> ---
>
> Key: STORM-3750
> URL: https://issues.apache.org/jira/browse/STORM-3750
> Project: Apache Storm
>  Issue Type: Bug
>  Components: storm-kafka-client
>Affects Versions: 1.2.2
>    Reporter: Max Schmidt
>Priority: Major
>
> When deactivating a topology that uses a kafka spout, the following exception 
> is thrown:
> {code:java}
> java.lang.RuntimeException: 
> java.nio.channels.ClosedSelectorExceptionjava.lang.RuntimeException: 
> java.nio.channels.ClosedSelectorException at 
> org.apache.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:522)
>  ~[storm-core-1.2.2.jar:1.2.2] at 
> org.apache.storm.utils.DisruptorQueue.consumeBatchWhenAvailable(DisruptorQueue.java:487)
>  ~[storm-core-1.2.2.jar:1.2.2] at 
> org.apache.storm.utils.DisruptorQueue.consumeBatch(DisruptorQueue.java:477) 
> ~[storm-core-1.2.2.jar:1.2.2] at 
> org.apache.storm.disruptor$consume_batch.invoke(disruptor.clj:70) 
> ~[storm-core-1.2.2.jar:1.2.2] at 
> org.apache.storm.daemon.executor$fn__10727$fn__10742$fn__10773.invoke(executor.clj:634)
>  ~[storm-core-1.2.2.jar:1.2.2] at 
> org.apache.storm.util$async_loop$fn__553.invoke(util.clj:484) 
> [storm-core-1.2.2.jar:1.2.2] at clojure.lang.AFn.run(AFn.java:22) 
> [clojure-1.7.0.jar:?] at java.lang.Thread.run(Thread.java:748) 
> [?:1.8.0_181]Caused by: java.nio.channels.ClosedSelectorException at 
> sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:83) ~[?:1.8.0_181] 
> at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) ~[?:1.8.0_181] at 
> org.apache.kafka.common.network.Selector.select(Selector.java:499) 
> ~[stormjar.jar:?] at 
> org.apache.kafka.common.network.Selector.poll(Selector.java:308) 
> ~[stormjar.jar:?] at 
> org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:349) 
> ~[stormjar.jar:?] at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:226)
>  ~[stormjar.jar:?] at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:188)
>  ~[stormjar.jar:?] at 
> org.apache.kafka.clients.consumer.internals.Fetcher.retrieveOffsetsByTimes(Fetcher.java:408)
>  ~[stormjar.jar:?] at 
> org.apache.kafka.clients.consumer.internals.Fetcher.beginningOrEndOffset(Fetcher.java:451)
>  ~[stormjar.jar:?] at 
> org.apache.kafka.clients.consumer.internals.Fetcher.beginningOffsets(Fetcher.java:436)
>  ~[stormjar.jar:?] at 
> org.apache.kafka.clients.consumer.KafkaConsumer.beginningOffsets(KafkaConsumer.java:1473)
>  ~[stormjar.jar:?] at 
> org.apache.storm.kafka.spout.metrics.KafkaOffsetMetric.getValueAndReset(KafkaOffsetMetric.java:79)
>  ~[stormjar.jar:?] at 
> org.apache.storm.daemon.executor$metrics_tick$fn__10651.invoke(executor.clj:345)
>  ~[storm-core-1.2.2.jar:1.2.2] at 
> clojure.core$map$fn__4553.invoke(core.clj:2622) ~[clojure-1.7.0.jar:?] at 
> clojure.lang.LazySeq.sval(LazySeq.java:40) ~[clojure-1.7.0.jar:?] at 
> clojure.lang.LazySeq.seq(LazySeq.java:49) ~[clojure-1.7.0.jar:?] at 
> clojure.lang.RT.seq(RT.java:507) ~[clojure-1.7.0.jar:?] at 
> clojure.core$seq__4128.invoke(core.clj:137) ~[clojure-1.7.0.jar:?] at 
> clojure.core$filter$fn__4580.invoke(core.clj:2679) ~[clojure-1.7.0.jar:?] at 
> clojure.lang.LazySeq.sval(LazySeq.java:40) ~[clojure-1.7.0.jar:?] at 
> clojure.lang.LazySeq.seq(LazySeq.java:49) ~[clojure-1.7.0.jar:?] at 
> clojure.lang.Cons.next(Cons.java:39) ~[clojure-1.7.0.jar:?] at 
> clojure.lang.RT.next(RT.java:674) ~[clojure-1.7.0.jar:?] at 
> clojure.core$next__4112.invoke(core.clj:64) ~[clojure-1.7.0.jar:?] at 
> clojure.core.protocols$fn__6523.invoke(protocols.clj:170) 
> ~[clojure-1.7.0.jar:?] at 
> clojure.core.protocols$fn__6478$G__6473__6487.invoke(protocols.clj:19) 
> ~[clojure-1.7.0.jar:?] at 
> clojure.core.protocols$seq_reduce.invoke(protocols.clj:31) 
> ~[clojure-1.7.0.jar:?] at 
> clojure.core.protocols$fn__6506.invoke(protocols.clj:101) 
> ~[clojure-1.7.0.jar:?] at 
> clojure.core.protocols$fn__6452$G__6447__6465.invoke(protocols.clj:13) 
> ~[clojure-1.7.0.jar:?] at clojure.core$reduce.invoke(core.clj:6519) 
> ~[clojure-1.7.0.jar:?] at clojure.core$into.invoke(core.clj:6600) 
> ~[clojure-1.7.0.jar:?] at 
> org.apache.storm.daemon.executor$metrics_tick.invoke(executor.clj:349) 
> ~[storm-core-1.2.2.jar:1.2.2] at 
> 

[jira] [Updated] (STORM-3750) Deactivation throws java.nio.channels.ClosedSelectorException in KafkaSpout

2021-03-05 Thread Max Schmidt (Jira)


 [ 
https://issues.apache.org/jira/browse/STORM-3750?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Max Schmidt updated STORM-3750:
---
Component/s: storm-kafka-client

> Deactivation throws java.nio.channels.ClosedSelectorException in KafkaSpout
> ---
>
> Key: STORM-3750
> URL: https://issues.apache.org/jira/browse/STORM-3750
> Project: Apache Storm
>  Issue Type: Bug
>  Components: storm-kafka-client
>Affects Versions: 1.2.2
>    Reporter: Max Schmidt
>Priority: Major
>
> When deactivating a topology that uses a kafka spout, the following exception 
> is thrown:
> {code:java}
> java.lang.RuntimeException: 
> java.nio.channels.ClosedSelectorExceptionjava.lang.RuntimeException: 
> java.nio.channels.ClosedSelectorException at 
> org.apache.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:522)
>  ~[storm-core-1.2.2.jar:1.2.2] at 
> org.apache.storm.utils.DisruptorQueue.consumeBatchWhenAvailable(DisruptorQueue.java:487)
>  ~[storm-core-1.2.2.jar:1.2.2] at 
> org.apache.storm.utils.DisruptorQueue.consumeBatch(DisruptorQueue.java:477) 
> ~[storm-core-1.2.2.jar:1.2.2] at 
> org.apache.storm.disruptor$consume_batch.invoke(disruptor.clj:70) 
> ~[storm-core-1.2.2.jar:1.2.2] at 
> org.apache.storm.daemon.executor$fn__10727$fn__10742$fn__10773.invoke(executor.clj:634)
>  ~[storm-core-1.2.2.jar:1.2.2] at 
> org.apache.storm.util$async_loop$fn__553.invoke(util.clj:484) 
> [storm-core-1.2.2.jar:1.2.2] at clojure.lang.AFn.run(AFn.java:22) 
> [clojure-1.7.0.jar:?] at java.lang.Thread.run(Thread.java:748) 
> [?:1.8.0_181]Caused by: java.nio.channels.ClosedSelectorException at 
> sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:83) ~[?:1.8.0_181] 
> at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) ~[?:1.8.0_181] at 
> org.apache.kafka.common.network.Selector.select(Selector.java:499) 
> ~[stormjar.jar:?] at 
> org.apache.kafka.common.network.Selector.poll(Selector.java:308) 
> ~[stormjar.jar:?] at 
> org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:349) 
> ~[stormjar.jar:?] at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:226)
>  ~[stormjar.jar:?] at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:188)
>  ~[stormjar.jar:?] at 
> org.apache.kafka.clients.consumer.internals.Fetcher.retrieveOffsetsByTimes(Fetcher.java:408)
>  ~[stormjar.jar:?] at 
> org.apache.kafka.clients.consumer.internals.Fetcher.beginningOrEndOffset(Fetcher.java:451)
>  ~[stormjar.jar:?] at 
> org.apache.kafka.clients.consumer.internals.Fetcher.beginningOffsets(Fetcher.java:436)
>  ~[stormjar.jar:?] at 
> org.apache.kafka.clients.consumer.KafkaConsumer.beginningOffsets(KafkaConsumer.java:1473)
>  ~[stormjar.jar:?] at 
> org.apache.storm.kafka.spout.metrics.KafkaOffsetMetric.getValueAndReset(KafkaOffsetMetric.java:79)
>  ~[stormjar.jar:?] at 
> org.apache.storm.daemon.executor$metrics_tick$fn__10651.invoke(executor.clj:345)
>  ~[storm-core-1.2.2.jar:1.2.2] at 
> clojure.core$map$fn__4553.invoke(core.clj:2622) ~[clojure-1.7.0.jar:?] at 
> clojure.lang.LazySeq.sval(LazySeq.java:40) ~[clojure-1.7.0.jar:?] at 
> clojure.lang.LazySeq.seq(LazySeq.java:49) ~[clojure-1.7.0.jar:?] at 
> clojure.lang.RT.seq(RT.java:507) ~[clojure-1.7.0.jar:?] at 
> clojure.core$seq__4128.invoke(core.clj:137) ~[clojure-1.7.0.jar:?] at 
> clojure.core$filter$fn__4580.invoke(core.clj:2679) ~[clojure-1.7.0.jar:?] at 
> clojure.lang.LazySeq.sval(LazySeq.java:40) ~[clojure-1.7.0.jar:?] at 
> clojure.lang.LazySeq.seq(LazySeq.java:49) ~[clojure-1.7.0.jar:?] at 
> clojure.lang.Cons.next(Cons.java:39) ~[clojure-1.7.0.jar:?] at 
> clojure.lang.RT.next(RT.java:674) ~[clojure-1.7.0.jar:?] at 
> clojure.core$next__4112.invoke(core.clj:64) ~[clojure-1.7.0.jar:?] at 
> clojure.core.protocols$fn__6523.invoke(protocols.clj:170) 
> ~[clojure-1.7.0.jar:?] at 
> clojure.core.protocols$fn__6478$G__6473__6487.invoke(protocols.clj:19) 
> ~[clojure-1.7.0.jar:?] at 
> clojure.core.protocols$seq_reduce.invoke(protocols.clj:31) 
> ~[clojure-1.7.0.jar:?] at 
> clojure.core.protocols$fn__6506.invoke(protocols.clj:101) 
> ~[clojure-1.7.0.jar:?] at 
> clojure.core.protocols$fn__6452$G__6447__6465.invoke(protocols.clj:13) 
> ~[clojure-1.7.0.jar:?] at clojure.core$reduce.invoke(core.clj:6519) 
> ~[clojure-1.7.0.jar:?] at clojure.core$into.invoke(core.clj:6600) 
> ~[clojure-1.7.0.jar:?] at 
> org.apache.storm.daemon.executor$metrics_tick.invoke(executor.clj:349) 
> ~[storm-core-1.2.2.jar:1.2.2] at 
> 

[jira] [Created] (STORM-3750) Deactivation throws java.nio.channels.ClosedSelectorException in KafkaSpout

2021-03-05 Thread Max Schmidt (Jira)
Max Schmidt created STORM-3750:
--

 Summary: Deactivation throws 
java.nio.channels.ClosedSelectorException in KafkaSpout
 Key: STORM-3750
 URL: https://issues.apache.org/jira/browse/STORM-3750
 Project: Apache Storm
  Issue Type: Bug
Affects Versions: 1.2.2
Reporter: Max Schmidt


When deactivating a topology that uses a kafka spout, the following exception 
is thrown:
{code:java}
java.lang.RuntimeException: 
java.nio.channels.ClosedSelectorExceptionjava.lang.RuntimeException: 
java.nio.channels.ClosedSelectorException at 
org.apache.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:522)
 ~[storm-core-1.2.2.jar:1.2.2] at 
org.apache.storm.utils.DisruptorQueue.consumeBatchWhenAvailable(DisruptorQueue.java:487)
 ~[storm-core-1.2.2.jar:1.2.2] at 
org.apache.storm.utils.DisruptorQueue.consumeBatch(DisruptorQueue.java:477) 
~[storm-core-1.2.2.jar:1.2.2] at 
org.apache.storm.disruptor$consume_batch.invoke(disruptor.clj:70) 
~[storm-core-1.2.2.jar:1.2.2] at 
org.apache.storm.daemon.executor$fn__10727$fn__10742$fn__10773.invoke(executor.clj:634)
 ~[storm-core-1.2.2.jar:1.2.2] at 
org.apache.storm.util$async_loop$fn__553.invoke(util.clj:484) 
[storm-core-1.2.2.jar:1.2.2] at clojure.lang.AFn.run(AFn.java:22) 
[clojure-1.7.0.jar:?] at java.lang.Thread.run(Thread.java:748) 
[?:1.8.0_181]Caused by: java.nio.channels.ClosedSelectorException at 
sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:83) ~[?:1.8.0_181] at 
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) ~[?:1.8.0_181] at 
org.apache.kafka.common.network.Selector.select(Selector.java:499) 
~[stormjar.jar:?] at 
org.apache.kafka.common.network.Selector.poll(Selector.java:308) 
~[stormjar.jar:?] at 
org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:349) 
~[stormjar.jar:?] at 
org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:226)
 ~[stormjar.jar:?] at 
org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:188)
 ~[stormjar.jar:?] at 
org.apache.kafka.clients.consumer.internals.Fetcher.retrieveOffsetsByTimes(Fetcher.java:408)
 ~[stormjar.jar:?] at 
org.apache.kafka.clients.consumer.internals.Fetcher.beginningOrEndOffset(Fetcher.java:451)
 ~[stormjar.jar:?] at 
org.apache.kafka.clients.consumer.internals.Fetcher.beginningOffsets(Fetcher.java:436)
 ~[stormjar.jar:?] at 
org.apache.kafka.clients.consumer.KafkaConsumer.beginningOffsets(KafkaConsumer.java:1473)
 ~[stormjar.jar:?] at 
org.apache.storm.kafka.spout.metrics.KafkaOffsetMetric.getValueAndReset(KafkaOffsetMetric.java:79)
 ~[stormjar.jar:?] at 
org.apache.storm.daemon.executor$metrics_tick$fn__10651.invoke(executor.clj:345)
 ~[storm-core-1.2.2.jar:1.2.2] at 
clojure.core$map$fn__4553.invoke(core.clj:2622) ~[clojure-1.7.0.jar:?] at 
clojure.lang.LazySeq.sval(LazySeq.java:40) ~[clojure-1.7.0.jar:?] at 
clojure.lang.LazySeq.seq(LazySeq.java:49) ~[clojure-1.7.0.jar:?] at 
clojure.lang.RT.seq(RT.java:507) ~[clojure-1.7.0.jar:?] at 
clojure.core$seq__4128.invoke(core.clj:137) ~[clojure-1.7.0.jar:?] at 
clojure.core$filter$fn__4580.invoke(core.clj:2679) ~[clojure-1.7.0.jar:?] at 
clojure.lang.LazySeq.sval(LazySeq.java:40) ~[clojure-1.7.0.jar:?] at 
clojure.lang.LazySeq.seq(LazySeq.java:49) ~[clojure-1.7.0.jar:?] at 
clojure.lang.Cons.next(Cons.java:39) ~[clojure-1.7.0.jar:?] at 
clojure.lang.RT.next(RT.java:674) ~[clojure-1.7.0.jar:?] at 
clojure.core$next__4112.invoke(core.clj:64) ~[clojure-1.7.0.jar:?] at 
clojure.core.protocols$fn__6523.invoke(protocols.clj:170) 
~[clojure-1.7.0.jar:?] at 
clojure.core.protocols$fn__6478$G__6473__6487.invoke(protocols.clj:19) 
~[clojure-1.7.0.jar:?] at 
clojure.core.protocols$seq_reduce.invoke(protocols.clj:31) 
~[clojure-1.7.0.jar:?] at 
clojure.core.protocols$fn__6506.invoke(protocols.clj:101) 
~[clojure-1.7.0.jar:?] at 
clojure.core.protocols$fn__6452$G__6447__6465.invoke(protocols.clj:13) 
~[clojure-1.7.0.jar:?] at clojure.core$reduce.invoke(core.clj:6519) 
~[clojure-1.7.0.jar:?] at clojure.core$into.invoke(core.clj:6600) 
~[clojure-1.7.0.jar:?] at 
org.apache.storm.daemon.executor$metrics_tick.invoke(executor.clj:349) 
~[storm-core-1.2.2.jar:1.2.2] at 
org.apache.storm.daemon.executor$fn__10727$tuple_action_fn__10733.invoke(executor.clj:522)
 ~[storm-core-1.2.2.jar:1.2.2] at 
org.apache.storm.daemon.executor$mk_task_receiver$fn__10716.invoke(executor.clj:471)
 ~[storm-core-1.2.2.jar:1.2.2] at 
org.apache.storm.disruptor$clojure_handler$reify__10135.onEvent(disruptor.clj:41)
 ~[storm-core-1.2.2.jar:1.2.2] at 
org.apache.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:509)
 ~[storm-core-1.2.2.jar:1.2.2] ... 7 more{code}
The problem is that this leads to the worker dying.
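The trace shows the crash originating in KafkaOffsetMetric.getValueAndReset() during a metrics tick, after the spout's consumer has already been shut down. Purely as an illustration (not the actual fix; the class name below is made up), a metric can be wrapped defensively so that a failed sample is dropped instead of propagating into the executor and killing the worker:
{code:java}
import org.apache.storm.metric.api.IMetric;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Sketch: delegate to the real metric, but swallow runtime failures
// (e.g. ClosedSelectorException from a closed KafkaConsumer) so the
// metrics tick cannot crash the worker.
public class SafeMetric implements IMetric {
    private static final Logger LOG = LoggerFactory.getLogger(SafeMetric.class);
    private final IMetric delegate;

    public SafeMetric(IMetric delegate) {
        this.delegate = delegate;
    }

    @Override
    public Object getValueAndReset() {
        try {
            return delegate.getValueAndReset();
        } catch (RuntimeException e) {
            LOG.warn("Skipping metric sample; underlying source unavailable", e);
            return null; // returning null drops this sample for this tick
        }
    }
}
{code}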





[jira] [Created] (STORM-3698) AbstractHdfsBolt does not sync Writers that are purged

2020-09-11 Thread Max Schmidt (Jira)
Max Schmidt created STORM-3698:
--

 Summary: AbstractHdfsBolt does not sync Writers that are purged
 Key: STORM-3698
 URL: https://issues.apache.org/jira/browse/STORM-3698
 Project: Apache Storm
  Issue Type: Bug
  Components: storm-hdfs
Affects Versions: 1.2.2
Reporter: Max Schmidt


We just discovered, when using a SequenceFileBolt (although it might happen with 
other implementations as well), that the writers it uses, held in the map 
AbstractHdfsBolt.writers, are not closed/synced when they are removed from the 
map by the removeEldestEntry method.

This leads to data loss.

It can be reproduced by creating a SequenceFileBolt.withMaxOpenFiles(1) and 
writing to just two different files.

One file will have a size of zero; the other has the data in it.
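The underlying pattern is a LinkedHashMap used as a bounded LRU cache via removeEldestEntry, which evicts the eldest writer silently. A minimal sketch of eviction-with-cleanup, using a plain java.io.Writer for brevity (the real bolt has its own writer abstraction, so this is an illustration of the pattern, not a patch):
{code:java}
import java.io.IOException;
import java.io.UncheckedIOException;
import java.io.Writer;
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch: a bounded writer cache that flushes and closes an entry before
// evicting it, instead of dropping it (and its buffered data) silently.
public class BoundedWriterCache extends LinkedHashMap<String, Writer> {
    private final int maxOpenFiles;

    public BoundedWriterCache(int maxOpenFiles) {
        super(16, 0.75f, true); // access-order, i.e. LRU eviction
        this.maxOpenFiles = maxOpenFiles;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<String, Writer> eldest) {
        if (size() <= maxOpenFiles) {
            return false;
        }
        try {
            eldest.getValue().flush(); // sync pending data before eviction
            eldest.getValue().close();
        } catch (IOException e) {
            throw new UncheckedIOException("Failed to close evicted writer", e);
        }
        return true; // evict the now-closed eldest entry
    }
}
{code}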





[jira] [Commented] (DOXIA-609) Crosslinks from .md to .html not working when starting with a dot

2020-05-14 Thread Max Schmidt (Jira)


[ 
https://issues.apache.org/jira/browse/DOXIA-609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17107255#comment-17107255
 ] 

Max Schmidt commented on DOXIA-609:
---

DOXIA-584 was implemented to fix the problem mentioned in the linked 
Stack Overflow question 
(https://stackoverflow.com/questions/36708241/getting-doxia-module-markdown-to-rewrite-md-links).
In my opinion, Doxia should also rewrite links to other markdown files starting 
with a dot, not only links like [Link](otherMarkdownFile.md#anchor) as 
implemented in 
https://gitbox.apache.org/repos/asf?p=maven-doxia.git=commit=20203d6c7f119e96fcf466ad1d72d8f8dcf5640f
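For illustration only, the kind of generalised rewrite being requested could look like the sketch below; this is not Doxia's implementation, and the regex and class name are assumptions:
{code:java}
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Sketch: rewrite relative markdown crosslinks (including ones starting with
// "./" or "../") from .md to .html while preserving the #anchor part.
public final class MarkdownLinkRewriter {

    // Matches href="relative/path.md" or href="relative/path.md#anchor",
    // skipping absolute http(s) links.
    private static final Pattern MD_HREF =
            Pattern.compile("href=\"(?!https?://)([^\"#]+)\\.md(#[^\"]*)?\"");

    public static String rewrite(String html) {
        Matcher m = MD_HREF.matcher(html);
        StringBuffer out = new StringBuffer();
        while (m.find()) {
            String anchor = (m.group(2) == null) ? "" : m.group(2);
            m.appendReplacement(out, Matcher.quoteReplacement(
                    "href=\"" + m.group(1) + ".html" + anchor + "\""));
        }
        m.appendTail(out);
        return out.toString();
    }

    public static void main(String[] args) {
        // Prints: <a href="../otherMarkdownFile.html#anchor">Link</a>
        System.out.println(rewrite("<a href=\"../otherMarkdownFile.md#anchor\">Link</a>"));
    }
}
{code}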

> Crosslinks from .md to .html not working when starting with a dot
> -
>
> Key: DOXIA-609
> URL: https://issues.apache.org/jira/browse/DOXIA-609
> Project: Maven Doxia
>  Issue Type: Bug
>  Components: Module - Markdown
>Affects Versions: 1.9.1
>    Reporter: Max Schmidt
>Priority: Major
> Attachments: maven-markdown.zip
>
>
> In DOXIA-584, support was added for rewriting crosslinked markdown files. This 
> does not work for links starting with a dot (like 
> [Link](../otherMarkdownFile.md#anchor)). The HTML link looks like 
> <a href="../otherMarkdownFile.md#anchor">Link</a> but should be 
> <a href="../otherMarkdownFile.html#anchor">Link</a> (works on GitHub).





[jira] [Updated] (DOXIA-609) Crosslinks from .md to .html not working when starting with a dot

2020-05-13 Thread Max Schmidt (Jira)


 [ 
https://issues.apache.org/jira/browse/DOXIA-609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Max Schmidt updated DOXIA-609:
--
Description: In DOXIA-584, support was added for rewriting crosslinked 
markdown files. This does not work for links starting with a dot (like 
[Link](../otherMarkdownFile.md#anchor)). The HTML link looks like 
<a href="../otherMarkdownFile.md#anchor">Link</a> but should be 
<a href="../otherMarkdownFile.html#anchor">Link</a> (works on GitHub).  (was: In 
DOXIA-584, support was added for rewriting crosslinked markdown files. This does 
not work for links with an anchor starting with a dot (like 
[Link](../otherMarkdownFile.md#anchor)). The HTML link looks like 
<a href="../otherMarkdownFile.md#anchor">Link</a> but should be 
<a href="../otherMarkdownFile.html#anchor">Link</a> (works on GitHub).)
Summary: Crosslinks from .md to .html not working when starting with a 
dot  (was: Crosslinks from .md to .html not working with anchors when starting 
with a dot)

> Crosslinks from .md to .html not working when starting with a dot
> -
>
> Key: DOXIA-609
> URL: https://issues.apache.org/jira/browse/DOXIA-609
> Project: Maven Doxia
>  Issue Type: Bug
>  Components: Module - Markdown
>Affects Versions: 1.9.1
>    Reporter: Max Schmidt
>Priority: Major
> Attachments: maven-markdown.zip
>
>
> In DOXIA-584, support was added for rewriting crosslinked markdown files. This 
> does not work for links starting with a dot (like 
> [Link](../otherMarkdownFile.md#anchor)). The HTML link looks like 
> <a href="../otherMarkdownFile.md#anchor">Link</a> but should be 
> <a href="../otherMarkdownFile.html#anchor">Link</a> (works on GitHub).





[jira] [Updated] (DOXIA-609) Crosslinks from .md to .html not working with anchors when starting with a dot

2020-05-13 Thread Max Schmidt (Jira)


 [ 
https://issues.apache.org/jira/browse/DOXIA-609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Max Schmidt updated DOXIA-609:
--
Description: In DOXIA-584, support was added for rewriting crosslinked 
markdown files. This does not work for links with an anchor starting with a dot 
(like [Link](../otherMarkdownFile.md#anchor)). The HTML link looks like 
<a href="../otherMarkdownFile.md#anchor">Link</a> but should be 
<a href="../otherMarkdownFile.html#anchor">Link</a> (works on GitHub).  (was: In 
DOXIA-584, support was added for rewriting crosslinked markdown files. This does 
not work for links with an anchor (like [Link](otherMarkdownFile.md#anchor)). 
The HTML link looks like <a href="otherMarkdownFile.md#anchor">Link</a> but 
should be <a href="otherMarkdownFile.html#anchor">Link</a> (works on GitHub).)

> Crosslinks from .md to .html not working with anchors when starting with a dot
> --
>
> Key: DOXIA-609
> URL: https://issues.apache.org/jira/browse/DOXIA-609
> Project: Maven Doxia
>  Issue Type: Bug
>  Components: Module - Markdown
>Affects Versions: 1.9.1
>    Reporter: Max Schmidt
>Priority: Major
> Attachments: maven-markdown.zip
>
>
> In DOXIA-584, support was added for rewriting crosslinked markdown files. This 
> does not work for links with an anchor starting with a dot (like 
> [Link](../otherMarkdownFile.md#anchor)). The HTML link looks like 
> <a href="../otherMarkdownFile.md#anchor">Link</a> but should be 
> <a href="../otherMarkdownFile.html#anchor">Link</a> (works on GitHub).





[jira] [Updated] (DOXIA-609) Crosslinks from .md to .html not working with anchors when starting with a dot

2020-05-13 Thread Max Schmidt (Jira)


 [ 
https://issues.apache.org/jira/browse/DOXIA-609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Max Schmidt updated DOXIA-609:
--
Summary: Crosslinks from .md to .html not working with anchors when 
starting with a dot  (was: Crosslinks from .md to .html not working with 
anchors)

> Crosslinks from .md to .html not working with anchors when starting with a dot
> --
>
> Key: DOXIA-609
> URL: https://issues.apache.org/jira/browse/DOXIA-609
> Project: Maven Doxia
>  Issue Type: Bug
>  Components: Module - Markdown
>Affects Versions: 1.9.1
>    Reporter: Max Schmidt
>Priority: Major
> Attachments: maven-markdown.zip
>
>
> In DOXIA-584, support was added for rewriting crosslinked markdown files. This 
> does not work for links with an anchor (like 
> [Link](otherMarkdownFile.md#anchor)). The HTML link looks like 
> <a href="otherMarkdownFile.md#anchor">Link</a> but should be 
> <a href="otherMarkdownFile.html#anchor">Link</a> (works on GitHub).





[jira] [Updated] (DOXIA-609) Crosslinks from .md to .html not working with anchors

2020-05-13 Thread Max Schmidt (Jira)


 [ 
https://issues.apache.org/jira/browse/DOXIA-609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Max Schmidt updated DOXIA-609:
--
Attachment: maven-markdown.zip

> Crosslinks from .md to .html not working with anchors
> -
>
> Key: DOXIA-609
> URL: https://issues.apache.org/jira/browse/DOXIA-609
> Project: Maven Doxia
>  Issue Type: Bug
>  Components: Module - Markdown
>Affects Versions: 1.9.1
>    Reporter: Max Schmidt
>Priority: Major
> Attachments: maven-markdown.zip
>
>
> In DOXIA-584, support was added for rewriting crosslinked markdown files. This 
> does not work for links with an anchor (like 
> [Link](otherMarkdownFile.md#anchor)). The HTML link looks like 
> <a href="otherMarkdownFile.md#anchor">Link</a> but should be 
> <a href="otherMarkdownFile.html#anchor">Link</a> (works on GitHub).





[jira] [Created] (DOXIA-609) Crosslinks from .md to .html not working with anchors

2020-05-13 Thread Max Schmidt (Jira)
Max Schmidt created DOXIA-609:
-

 Summary: Crosslinks from .md to .html not working with anchors
 Key: DOXIA-609
 URL: https://issues.apache.org/jira/browse/DOXIA-609
 Project: Maven Doxia
  Issue Type: Bug
  Components: Module - Markdown
Affects Versions: 1.9.1
Reporter: Max Schmidt


In DOXIA-584, support was added for rewriting crosslinked markdown files. This 
does not work for links with an anchor (like 
[Link](otherMarkdownFile.md#anchor)). The HTML link looks like 
<a href="otherMarkdownFile.md#anchor">Link</a> but should be 
<a href="otherMarkdownFile.html#anchor">Link</a> (works on GitHub).





[jira] [Commented] (HDFS-8093) BP does not exist or is not under Constructionnull

2016-08-24 Thread Max Schmidt (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8093?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15435064#comment-15435064
 ] 

Max Schmidt commented on HDFS-8093:
---

I am still facing this issue on my namenode (it just happened once while creating 
a file with a Java client); from my namenode.log:

{code}
java.io.IOException: BP-1876130894-10.5.0.4-1469019082320:blk_1073787208_63449 
does not exist or is not under Constructionnull
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:6238)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:6305)
at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:804)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:955)
at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2043)
{code}

I am using Hadoop 2.7.1 with the corresponding Java libraries.

> BP does not exist or is not under Constructionnull
> --
>
> Key: HDFS-8093
> URL: https://issues.apache.org/jira/browse/HDFS-8093
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: balancer & mover
>Affects Versions: 2.6.0
> Environment: Centos 6.5
>Reporter: LINTE
>
> The HDFS balancer ran for several hours balancing blocks between datanodes; it 
> ended by failing with the following error.
> The getStoredBlock function returns a null BlockInfo.
> java.io.IOException: Bad response ERROR for block 
> BP-970443206-192.168.0.208-1397583979378:blk_1086729930_13046030 from 
> datanode 192.168.0.18:1004
> at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer$ResponseProcessor.run(DFSOutputStream.java:897)
> 15/04/08 05:52:51 WARN hdfs.DFSClient: Error Recovery for block 
> BP-970443206-192.168.0.208-1397583979378:blk_1086729930_13046030 in pipeline 
> 192.168.0.63:1004, 192.168.0.1:1004, 192.168.0.18:1004: bad datanode 
> 192.168.0.18:1004
> 15/04/08 05:52:51 WARN hdfs.DFSClient: DataStreamer Exception
> org.apache.hadoop.ipc.RemoteException(java.io.IOException): 
> BP-970443206-192.168.0.208-1397583979378:blk_1086729930_13046030 does not 
> exist or is not under Constructionnull
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkUCBlock(FSNamesystem.java:6913)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.updateBlockForPipeline(FSNamesystem.java:6980)
> at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.updateBlockForPipeline(NameNodeRpcServer.java:717)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.updateBlockForPipeline(ClientNamenodeProtocolServerSideTranslatorPB.java:931)
> at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)
> at org.apache.hadoop.ipc.Client.call(Client.java:1468)
> at org.apache.hadoop.ipc.Client.call(Client.java:1399)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
> at com.sun.proxy.$Proxy11.updateBlockForPipeline(Unkn

Re: Where to set properties for the retainedJobs/Stages?

2016-04-04 Thread Max Schmidt
Okay, I put the props in spark-defaults.conf, but they are not recognized, as
they don't appear in the 'Environment' tab during an application execution;
spark.eventLog.enabled, for example.
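For completeness, a sketch under the usual assumptions: spark-defaults.conf is picked up by spark-submit (and, depending on the version, by the daemons started via the sbin scripts), so an application that builds its own context through the Java API has to set the same keys on its SparkConf. The master URL and values below are placeholders:

{{{
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;

public class SubmitWithConf {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf()
                .setAppName("my-app")
                .setMaster("spark://master:7077")          // placeholder
                .set("spark.eventLog.enabled", "true")     // application property
                .set("spark.ui.retainedJobs", "200")       // driver UI property
                .set("spark.ui.retainedStages", "500");    // driver UI property
        // spark.history.retainedApplications is read by the history server
        // process, so it belongs in the spark-defaults.conf the history server
        // is started with, not in the application's SparkConf.
        try (JavaSparkContext sc = new JavaSparkContext(conf)) {
            // ... run jobs ...
        }
    }
}
}}}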

On 01.04.2016 at 21:22, Ted Yu wrote:
> Please
> read 
> https://spark.apache.org/docs/latest/configuration.html#dynamically-loading-spark-properties
> w.r.t. spark-defaults.conf
>
> On Fri, Apr 1, 2016 at 12:06 PM, Max Schmidt <m...@datapath.io
> <mailto:m...@datapath.io>> wrote:
>
> Yes but doc doesn't say any word for which variable the configs
> are valid, so do I have to set them for the history-server? The
> daemon? The workers?
>
> And what if I use the java API instead of spark-submit for the jobs?
>
> I guess that the spark-defaults.conf are obsolete for the java API?
>
>
> On 2016-04-01 18:58, Ted Yu wrote:
>
> You can set them in spark-defaults.conf
>
> See
> also https://spark.apache.org/docs/latest/configuration.html#spark-ui
> [1]
>
> On Fri, Apr 1, 2016 at 8:26 AM, Max Schmidt <m...@datapath.io
> <mailto:m...@datapath.io>> wrote:
>
> Can somebody tell me the interaction between the properties:
>
> spark.ui.retainedJobs
> spark.ui.retainedStages
> spark.history.retainedApplications
>
> I know from the bugtracker, that the last one describes
> the number of
> applications the history-server holds in memory.
>
> Can I set the properties in the spark-env.sh? And where?
>
>
> 
>
>
>
>
> Links:
> --
> [1]
> https://spark.apache.org/docs/latest/configuration.html#spark-ui
>
>
>
>
>

-- 
*Max Schmidt, Senior Java Developer* | m...@datapath.io
<mailto:m...@datapath.io> | LinkedIn
<https://www.linkedin.com/in/maximilian-schmidt-9893b7bb/>
Datapath.io
 
Decreasing AWS latency.
Your traffic optimized.

Datapath.io GmbH
Mainz | HRB Nr. 46222
Sebastian Spies, CEO



Re: Where to set properties for the retainedJobs/Stages?

2016-04-01 Thread Max Schmidt
Yes, but the doc doesn't say which process each config applies to, so do I 
have to set them for the history server? The daemon? The workers?


And what if I use the Java API instead of spark-submit for the jobs?

I guess that spark-defaults.conf is ignored when using the Java API?


On 2016-04-01 18:58, Ted Yu wrote:

You can set them in spark-defaults.conf

See 
also https://spark.apache.org/docs/latest/configuration.html#spark-ui 
[1]


On Fri, Apr 1, 2016 at 8:26 AM, Max Schmidt <m...@datapath.io> wrote:


Can somebody tell me the interaction between the properties:

spark.ui.retainedJobs
spark.ui.retainedStages
spark.history.retainedApplications

I know from the bugtracker, that the last one describes the number 
of

applications the history-server holds in memory.

Can I set the properties in the spark-env.sh? And where?






Links:
--
[1] https://spark.apache.org/docs/latest/configuration.html#spark-ui








Where to set properties for the retainedJobs/Stages?

2016-04-01 Thread Max Schmidt
Can somebody tell me the interaction between the properties:

spark.ui.retainedJobs
spark.ui.retainedStages
spark.history.retainedApplications

I know from the bug tracker that the last one describes the number of
applications the history server holds in memory.

Can I set the properties in the spark-env.sh? And where?




How to move a namenode to a new host properly

2016-03-31 Thread Max Schmidt
Hi there,

what are the correct steps to move a primary Hadoop DFS namenode from
one host to another?

I use Hadoop version 2.7.1 on Ubuntu 14.04.3 LTS (without YARN).

Steps done:

  * Copied the whole Hadoop directory to the new host
  * Set the new master in $hadoop_home/etc/hadoop/master
  * Updated the fs.default.name tag in $hadoop_home/etc/hadoop/core-site.xml
    (see the snippet after this list)
  * Formatted the new namenode with the ClusterID of the old namenode:
    $hadoop_home/bin/hadoop namenode -format -clusterId $CLUSTER_ID
    (I removed the slaves from the config just to be sure that none of the
    slaves are affected; maybe that is a problem?)
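The core-site.xml change referenced above, as a sketch with a placeholder host and port (on 2.7.x, fs.defaultFS is the preferred spelling of the deprecated fs.default.name key):

{{{
<!-- $hadoop_home/etc/hadoop/core-site.xml (sketch; replace newmaster:8020) -->
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://newmaster:8020/</value>
  </property>
</configuration>
}}}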

The problem is that the datanodes still don't come up because of the
clusterID mismatch:

2016-03-30 16:20:28,718 WARN
org.apache.hadoop.hdfs.server.common.Storage: java.io.IOException:
Incompatible clusterIDs in /storage/data: namenode clusterID =
CID-c19c691d-10da-4449-a7b6-c953465ce237; datanode clusterID =
CID-af87cb62-d806-41d6-9638-e9e559dd3ed7
2016-03-30 16:20:28,718 FATAL
org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed
for Block pool  (Datanode Uuid unassigned) service to
XX. Exiting. java.io.IOException: All specified directories
are failed to load.

Any suggestions? Do I have to add the BlockPool-ID as well?






Re: No active SparkContext

2016-03-31 Thread Max Schmidt
Just to mark this question as closed: we experienced an OOM exception on
the Master, which we didn't see on the Driver, but which made the Driver crash.

On 24.03.2016 at 09:54, Max Schmidt wrote:
> Hi there,
>
> we're using with the java-api (1.6.0) a ScheduledExecutor that
> continuously executes a SparkJob to a standalone cluster.
>
> After each job we close the JavaSparkContext and create a new one.
>
> But sometimes the Scheduling JVM crashes with:
>
> 24.03.2016-08:30:27:375# error - Application has been killed. Reason:
> All masters are unresponsive! Giving up.
> 24.03.2016-08:30:27:398# error - Error initializing SparkContext.
> java.lang.IllegalStateException: Cannot call methods on a stopped
> SparkContext.
> This stopped SparkContext was created at:
>
> org.apache.spark.api.java.JavaSparkContext.<init>(JavaSparkContext.scala:59)
> io.datapath.spark.AbstractSparkJob.createJavaSparkContext(AbstractSparkJob.java:53)
> io.datapath.measurement.SparkJobMeasurements.work(SparkJobMeasurements.java:130)
> io.datapath.measurement.SparkMeasurementScheduler.lambda$submitSparkJobMeasurement$30(SparkMeasurementScheduler.java:117)
> io.datapath.measurement.SparkMeasurementScheduler$$Lambda$17/1568787282.run(Unknown
> Source)
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> java.util.concurrent.FutureTask.run(FutureTask.java:266)
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> java.lang.Thread.run(Thread.java:745)
>
> The currently active SparkContext was created at:
>
> (No active SparkContext.)
>
> at
> org.apache.spark.SparkContext.org$apache$spark$SparkContext$$assertNotStopped(SparkContext.scala:106)
> at
> org.apache.spark.SparkContext.getSchedulingMode(SparkContext.scala:1578)
> at
> org.apache.spark.SparkContext.postEnvironmentUpdate(SparkContext.scala:2179)
> at org.apache.spark.SparkContext.<init>(SparkContext.scala:579)
> at
> org.apache.spark.api.java.JavaSparkContext.<init>(JavaSparkContext.scala:59)
> at
> io.datapath.spark.AbstractSparkJob.createJavaSparkContext(AbstractSparkJob.java:53)
> at
> io.datapath.measurement.SparkJobMeasurements.work(SparkJobMeasurements.java:130)
> at
> io.datapath.measurement.SparkMeasurementScheduler.lambda$submitSparkJobMeasurement$30(SparkMeasurementScheduler.java:117)
> at
> io.datapath.measurement.SparkMeasurementScheduler$$Lambda$17/1568787282.run(Unknown
> Source)
> at
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> 24.03.2016-08:30:27:402# info - SparkMeasurement - finished.
>
> Any guess?
> -- 
> *Max Schmidt, Senior Java Developer* | m...@datapath.io | LinkedIn
> <https://www.linkedin.com/in/maximilian-schmidt-9893b7bb/>
> Datapath.io
>  
> Decreasing AWS latency.
> Your traffic optimized.
>
> Datapath.io GmbH
> Mainz | HRB Nr. 46222
> Sebastian Spies, CEO
>

-- 
*Max Schmidt, Senior Java Developer* | m...@datapath.io
<mailto:m...@datapath.io> | LinkedIn
<https://www.linkedin.com/in/maximilian-schmidt-9893b7bb/>
Datapath.io
 
Decreasing AWS latency.
Your traffic optimized.

Datapath.io GmbH
Mainz | HRB Nr. 46222
Sebastian Spies, CEO



Re: No active SparkContext

2016-03-24 Thread Max Schmidt

On 2016-03-24 18:00, Mark Hamstra wrote:

You seem to be confusing the concepts of Job and Application.  A
Spark Application has a SparkContext.  A Spark Application is capable
of running multiple Jobs, each with its own ID, visible in the webUI.


Obviously I mixed it up, but then I would like to know how my Java 
application should be constructed if I wanted to submit periodic 
'Applications' to my cluster.

Has anyone used the

http://spark.apache.org/docs/latest/api/java/index.html?org/apache/spark/launcher/package-summary.html

for this scenario?


On Thu, Mar 24, 2016 at 6:11 AM, Max Schmidt <m...@datapath.io> wrote:


On 24.03.2016 at 10:34, Simon Hafner wrote:


2016-03-24 9:54 GMT+01:00 Max Schmidt <m...@datapath.io>:
> we're using with the java-api (1.6.0) a ScheduledExecutor that 
continuously

> executes a SparkJob to a standalone cluster.
I'd recommend Scala.

Why should I use scala?


After each job we close the JavaSparkContext and create a new one.

Why do that? You can happily reuse it. Pretty sure that also causes
the other problems, because you have a race condition on waiting 
for

the job to finish and stopping the Context.
I do that because it is a very common pattern to create an object 
for specific "job" and release its resources when its done.


The first problem that came in my mind was that the appName is 
immutable once the JavaSparkContext was created, so it is, to me, not 
possible to resuse the JavaSparkContext for jobs with different IDs 
(that we wanna see in the webUI).


And of course it is possible to wait for closing the 
JavaSparkContext gracefully, except when there is some asynchronous 
action in the background?


--

MAX SCHMIDT, SENIOR JAVA DEVELOPER | m...@datapath.io | LinkedIn [1]

 
Decreasing AWS latency.
Your traffic optimized.

Datapath.io GmbH
Mainz | HRB Nr. 46222
Sebastian Spies, CEO




Links:
--
[1] https://www.linkedin.com/in/maximilian-schmidt-9893b7bb/






Re: apache spark errors

2016-03-24 Thread Max Schmidt
es, TID = 47709
>
> 644989 [Executor task launch worker-13] ERROR
> org.apache.spark.executor.Executor  - Managed memory leak
> detected; size = 5326260 bytes, TID = 47863
>
> 720701 [Executor task launch worker-12] ERROR
> org.apache.spark.executor.Executor  - Managed memory leak
> detected; size = 5399578 bytes, TID = 48959
>
> 1147961 [Executor task launch worker-16] ERROR
> org.apache.spark.executor.Executor  - Managed memory leak
> detected; size = 5251872 bytes, TID = 54922
>
>  
>
>  
>
> How can I fix this?
>
>  
>
> With kind regard,
>
>  
>
> Michel
>
>  
>
>  
>

-- 
*Max Schmidt, Senior Java Developer* | m...@datapath.io
<mailto:m...@datapath.io> | LinkedIn
<https://www.linkedin.com/in/maximilian-schmidt-9893b7bb/>
Datapath.io
 
Decreasing AWS latency.
Your traffic optimized.

Datapath.io GmbH
Mainz | HRB Nr. 46222
Sebastian Spies, CEO



Re: No active SparkContext

2016-03-24 Thread Max Schmidt
On 24.03.2016 at 10:34, Simon Hafner wrote:
> 2016-03-24 9:54 GMT+01:00 Max Schmidt <m...@datapath.io
> <mailto:m...@datapath.io>>:
> > we're using with the java-api (1.6.0) a ScheduledExecutor that
> continuously
> > executes a SparkJob to a standalone cluster.
> I'd recommend Scala.
Why should I use Scala?
>
> > After each job we close the JavaSparkContext and create a new one.
> Why do that? You can happily reuse it. Pretty sure that also causes
> the other problems, because you have a race condition on waiting for
> the job to finish and stopping the Context.
I do that because it is a very common pattern to create an object for a
specific "job" and release its resources when it's done.

The first problem that came to my mind was that the appName is immutable
once the JavaSparkContext has been created, so it is, to me, not possible to
reuse the JavaSparkContext for jobs with different IDs (that we want to
see in the web UI).

And of course it is possible to wait for the JavaSparkContext to close
gracefully, except when there is some asynchronous action in the background?
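For what it's worth, a sketch of the reuse approach being discussed, assuming the Spark 1.6 Java API: one long-lived JavaSparkContext whose per-run identity in the web UI comes from job groups rather than from a new appName per context. Names, the master URL and the schedule are placeholders:

{{{
import java.util.Arrays;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;

public class ScheduledJobs {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf()
                .setAppName("scheduler-app")        // fixed for the whole application
                .setMaster("spark://master:7077");  // placeholder
        final JavaSparkContext sc = new JavaSparkContext(conf);

        ScheduledExecutorService pool = Executors.newSingleThreadScheduledExecutor();
        pool.scheduleAtFixedRate(new Runnable() {
            @Override
            public void run() {
                String runId = "measurement-" + System.currentTimeMillis();
                // Each run shows up under its own group in the web UI.
                sc.setJobGroup(runId, "periodic measurement run");
                try {
                    sc.parallelize(Arrays.asList(1, 2, 3)).count(); // placeholder job
                } finally {
                    sc.clearJobGroup();
                }
            }
        }, 0, 30, TimeUnit.MINUTES);
    }
}
}}}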

-- 
*Max Schmidt, Senior Java Developer* | m...@datapath.io
<mailto:m...@datapath.io> | LinkedIn
<https://www.linkedin.com/in/maximilian-schmidt-9893b7bb/>
Datapath.io
 
Decreasing AWS latency.
Your traffic optimized.

Datapath.io GmbH
Mainz | HRB Nr. 46222
Sebastian Spies, CEO



No active SparkContext

2016-03-24 Thread Max Schmidt
Hi there,

we're using with the java-api (1.6.0) a ScheduledExecutor that
continuously executes a SparkJob to a standalone cluster.

After each job we close the JavaSparkContext and create a new one.

But sometimes the Scheduling JVM crashes with:

24.03.2016-08:30:27:375# error - Application has been killed. Reason:
All masters are unresponsive! Giving up.
24.03.2016-08:30:27:398# error - Error initializing SparkContext.
java.lang.IllegalStateException: Cannot call methods on a stopped
SparkContext.
This stopped SparkContext was created at:

org.apache.spark.api.java.JavaSparkContext.<init>(JavaSparkContext.scala:59)
io.datapath.spark.AbstractSparkJob.createJavaSparkContext(AbstractSparkJob.java:53)
io.datapath.measurement.SparkJobMeasurements.work(SparkJobMeasurements.java:130)
io.datapath.measurement.SparkMeasurementScheduler.lambda$submitSparkJobMeasurement$30(SparkMeasurementScheduler.java:117)
io.datapath.measurement.SparkMeasurementScheduler$$Lambda$17/1568787282.run(Unknown
Source)
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
java.util.concurrent.FutureTask.run(FutureTask.java:266)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
java.lang.Thread.run(Thread.java:745)

The currently active SparkContext was created at:

(No active SparkContext.)

at
org.apache.spark.SparkContext.org$apache$spark$SparkContext$$assertNotStopped(SparkContext.scala:106)
at
org.apache.spark.SparkContext.getSchedulingMode(SparkContext.scala:1578)
at
org.apache.spark.SparkContext.postEnvironmentUpdate(SparkContext.scala:2179)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:579)
at
org.apache.spark.api.java.JavaSparkContext.<init>(JavaSparkContext.scala:59)
at
io.datapath.spark.AbstractSparkJob.createJavaSparkContext(AbstractSparkJob.java:53)
at
io.datapath.measurement.SparkJobMeasurements.work(SparkJobMeasurements.java:130)
at
io.datapath.measurement.SparkMeasurementScheduler.lambda$submitSparkJobMeasurement$30(SparkMeasurementScheduler.java:117)
at
io.datapath.measurement.SparkMeasurementScheduler$$Lambda$17/1568787282.run(Unknown
Source)
at
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
24.03.2016-08:30:27:402# info - SparkMeasurement - finished.

Any guess?
-- 
*Max Schmidt, Senior Java Developer* | m...@datapath.io
<mailto:m...@datapath.io> | LinkedIn
<https://www.linkedin.com/in/maximilian-schmidt-9893b7bb/>
Datapath.io
 
Decreasing AWS latency.
Your traffic optimized.

Datapath.io GmbH
Mainz | HRB Nr. 46222
Sebastian Spies, CEO



[jira] [Commented] (HDFS-6973) DFSClient does not closing a closed socket resulting in thousand of CLOSE_WAIT sockets

2016-01-21 Thread Max Schmidt (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15110342#comment-15110342
 ] 

Max Schmidt commented on HDFS-6973:
---

Sorry, but I found out that our driver program forgot to close the 
FSDataInputStream in one place; closing it fixed the behaviour above.
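For completeness, a minimal sketch of closing the stream deterministically with try-with-resources (path handling and output are placeholders):
{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;

// Sketch: each FSDataInputStream is closed when the block exits, so the
// DataNode connection is released instead of lingering in CLOSE_WAIT.
public class ReadOnce {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        Path path = new Path(args[0]); // placeholder input path
        try (FSDataInputStream in = fs.open(path)) {
            IOUtils.copyBytes(in, System.out, conf, false); // false: keep System.out open
        } // in.close() runs here even if copyBytes throws
    }
}
{code}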

> DFSClient does not closing a closed socket resulting in thousand of 
> CLOSE_WAIT sockets
> --
>
> Key: HDFS-6973
> URL: https://issues.apache.org/jira/browse/HDFS-6973
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 2.4.0
> Environment: RHEL 6.3 -HDP 2.1 -6 RegionServers/Datanode -18T per 
> node -3108Regions
>Reporter: steven xu
>
> HBase as an HDFS client does not close a dead connection with the datanode.
> This results in over 30K+ CLOSE_WAIT sockets, and at some point HBase cannot 
> connect to the datanode because there are too many mapped sockets from one host 
> to another on the same port 50010. 
> After I restart all RSs, the count of CLOSE_WAIT keeps increasing.
> $ netstat -an|grep CLOSE_WAIT|wc -l
> 2545
> netstat -nap|grep CLOSE_WAIT|grep 6569|wc -l
> 2545
> ps -ef|grep 6569
> hbase 6569 6556 21 Aug25 ? 09:52:33 /opt/jdk1.6.0_25/bin/java 
> -Dproc_regionserver -XX:OnOutOfMemoryError=kill -9 %p -Xmx1000m 
> -XX:+UseConcMarkSweepGC
> I have also reviewed these issues:
> [HDFS-5697]
> [HDFS-5671]
> [HDFS-1836]
> [HBASE-9393]
> I found that these patches have been added in the HBase 0.98/Hadoop 2.4.0 
> source code.
> But I do not understand why HBase 0.98/Hadoop 2.4.0 also has this issue. 
> Please check. Thanks a lot.
> This code has been added in 
> BlockReaderFactory.getRemoteBlockReaderFromTcp(). Another bug may be causing my 
> problem:
> {code:title=BlockReaderFactory.java|borderStyle=solid}
> // Some comments here
>   private BlockReader getRemoteBlockReaderFromTcp() throws IOException {
> if (LOG.isTraceEnabled()) {
>   LOG.trace(this + ": trying to create a remote block reader from a " +
>   "TCP socket");
> }
> BlockReader blockReader = null;
> while (true) {
>   BlockReaderPeer curPeer = null;
>   Peer peer = null;
>   try {
> curPeer = nextTcpPeer();
> if (curPeer == null) break;
> if (curPeer.fromCache) remainingCacheTries--;
> peer = curPeer.peer;
> blockReader = getRemoteBlockReader(peer);
> return blockReader;
>   } catch (IOException ioe) {
> if (isSecurityException(ioe)) {
>   if (LOG.isTraceEnabled()) {
> LOG.trace(this + ": got security exception while constructing " +
> "a remote block reader from " + peer, ioe);
>   }
>   throw ioe;
> }
> if ((curPeer != null) && curPeer.fromCache) {
>   // Handle an I/O error we got when using a cached peer.  These are
>   // considered less serious, because the underlying socket may be
>   // stale.
>   if (LOG.isDebugEnabled()) {
> LOG.debug("Closed potentially stale remote peer " + peer, ioe);
>   }
> } else {
>   // Handle an I/O error we got when using a newly created peer.
>   LOG.warn("I/O error constructing remote block reader.", ioe);
>   throw ioe;
> }
>   } finally {
> if (blockReader == null) {
>   IOUtils.cleanup(LOG, peer);
> }
>   }
> }
> return null;
>   }
> {code}





[jira] [Commented] (HDFS-6973) DFSClient does not closing a closed socket resulting in thousand of CLOSE_WAIT sockets

2016-01-20 Thread Max Schmidt (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15108213#comment-15108213
 ] 

Max Schmidt commented on HDFS-6973:
---

I can relate to that. We're using an org.apache.hadoop.fs.FSDataInputStream for 
reading multiple files continuously, two times an hour, from a 2.7.1 cluster. 

I've added "-Djava.net.preferIPv4Stack=true" and 
"-Djava.net.preferIPv6Addresses=false", but the only change was that the sockets 
are now IPv4 instead of IPv6.

After 12 hours of usage, 1.4K open sockets:

java10486 root 2233u  IPv4   28226850  0t0  TCP 
10.134.160.9:55927->10.134.160.28:50010 (CLOSE_WAIT)
java10486 root 2237u  IPv4   28223758  0t0  TCP 
10.134.160.9:37363->10.134.160.17:50010 (CLOSE_WAIT)
java10486 root 2240u  IPv4   28223759  0t0  TCP 
10.134.160.9:48976->10.134.160.41:50010 (CLOSE_WAIT)
java10486 root 2248u  IPv4   28222398  0t0  TCP 
10.134.160.9:55976->10.134.160.28:50010 (CLOSE_WAIT)
java10486 root 2274u  IPv4   28222403  0t0  TCP 
10.134.160.9:53185->10.134.160.35:50010 (CLOSE_WAIT)
java10486 root 2283u  IPv4   28211085  0t0  TCP 
10.134.160.9:56009->10.134.160.28:50010 (CLOSE_WAIT)

10.134.160.9 is the IP of the host running the driver program; the destination 
IPs are the Hadoop nodes.

> DFSClient does not closing a closed socket resulting in thousand of 
> CLOSE_WAIT sockets
> --
>
> Key: HDFS-6973
> URL: https://issues.apache.org/jira/browse/HDFS-6973
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs-client
>Affects Versions: 2.4.0
> Environment: RHEL 6.3 -HDP 2.1 -6 RegionServers/Datanode -18T per 
> node -3108Regions
>Reporter: steven xu
>
> HBase as an HDFS client does not close a dead connection with the datanode.
> This results in over 30K+ CLOSE_WAIT sockets, and at some point HBase cannot 
> connect to the datanode because there are too many mapped sockets from one host 
> to another on the same port 50010. 
> After I restart all RSs, the count of CLOSE_WAIT keeps increasing.
> $ netstat -an|grep CLOSE_WAIT|wc -l
> 2545
> netstat -nap|grep CLOSE_WAIT|grep 6569|wc -l
> 2545
> ps -ef|grep 6569
> hbase 6569 6556 21 Aug25 ? 09:52:33 /opt/jdk1.6.0_25/bin/java 
> -Dproc_regionserver -XX:OnOutOfMemoryError=kill -9 %p -Xmx1000m 
> -XX:+UseConcMarkSweepGC
> I have also reviewed these issues:
> [HDFS-5697]
> [HDFS-5671]
> [HDFS-1836]
> [HBASE-9393]
> I found that these patches have been added in the HBase 0.98/Hadoop 2.4.0 
> source code.
> But I do not understand why HBase 0.98/Hadoop 2.4.0 also has this issue. 
> Please check. Thanks a lot.
> This code has been added in 
> BlockReaderFactory.getRemoteBlockReaderFromTcp(). Another bug may be causing my 
> problem:
> {code:title=BlockReaderFactory.java|borderStyle=solid}
> // Some comments here
>   private BlockReader getRemoteBlockReaderFromTcp() throws IOException {
> if (LOG.isTraceEnabled()) {
>   LOG.trace(this + ": trying to create a remote block reader from a " +
>   "TCP socket");
> }
> BlockReader blockReader = null;
> while (true) {
>   BlockReaderPeer curPeer = null;
>   Peer peer = null;
>   try {
> curPeer = nextTcpPeer();
> if (curPeer == null) break;
> if (curPeer.fromCache) remainingCacheTries--;
> peer = curPeer.peer;
> blockReader = getRemoteBlockReader(peer);
> return blockReader;
>   } catch (IOException ioe) {
> if (isSecurityException(ioe)) {
>   if (LOG.isTraceEnabled()) {
> LOG.trace(this + ": got security exception while constructing " +
> "a remote block reader from " + peer, ioe);
>   }
>   throw ioe;
> }
> if ((curPeer != null) && curPeer.fromCache) {
>   // Handle an I/O error we got when using a cached peer.  These are
>   // considered less serious, because the underlying socket may be
>   // stale.
>   if (LOG.isDebugEnabled()) {
> LOG.debug("Closed potentially stale remote peer " + peer, ioe);
>   }
> } else {
>   // Handle an I/O error we got when using a newly created peer.
>   LOG.warn("I/O error constructing remote block reader.", ioe);
>   throw ioe;
> }
>   } finally {
> if (blockReader == null) {
>   IOUtils.cleanup(LOG, peer);
> }
>   }
> }
> return null;
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Logger overridden when using JavaSparkContext

2016-01-11 Thread Max Schmidt
Hi there,

we're having a strange problem here using Spark in a Java application
using the JavaSparkContext:

We are using java.util.logging.* for logging in our application with two
handlers (ConsoleHandler + FileHandler):

{{{
.handlers=java.util.logging.ConsoleHandler, java.util.logging.FileHandler

.level = FINE

java.util.logging.ConsoleHandler.level=INFO
java.util.logging.ConsoleHandler.formatter=java.util.logging.SimpleFormatter

java.util.logging.FileHandler.level= FINE
java.util.logging.FileHandler.formatter=java.util.logging.SimpleFormatter
java.util.logging.FileHandler.limit=1024
java.util.logging.FileHandler.count=5
java.util.logging.FileHandler.append= true
java.util.logging.FileHandler.pattern=%t/delivery-model.%u.%g.txt

java.util.logging.SimpleFormatter.format=%1$tY-%1$tm-%1$td
%1$tH:%1$tM:%1$tS %5$s%6$s%n
}}}

The thing is that when the JavaSparkContext is started, the logging stops.

The log4j.properties for spark looks like this:

{{{
log4j.rootLogger=WARN, theConsoleAppender
log4j.additivity.io.datapath=false
log4j.appender.theConsoleAppender=org.apache.log4j.ConsoleAppender
log4j.appender.theConsoleAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.theConsoleAppender.layout.ConversionPattern=%d{yyyy-MM-dd
HH:mm:ss} %m%n
}}}

Obviously I am not an expert in the logging architecture yet, but I
really need to understand how the handlers of our JUL logging are changed
by the Spark library.

Thanks in advance!



-
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
For additional commands, e-mail: user-h...@spark.apache.org



Re: Logger overridden when using JavaSparkContext

2016-01-11 Thread Max Schmidt
I checked the handlers of my rootLogger
(java.util.logging.Logger.getLogger("")), which were
a ConsoleHandler and a FileHandler.

After the JavaSparkContext was created, the rootLogger only contained an
'org.slf4j.bridge.SLF4JBridgeHandler'.
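
For anyone who wants to reproduce the check, a minimal sketch of what I did
(it just prints the handler class names; the JavaSparkContext creation itself
is omitted here):

{{{
import java.util.logging.Handler;
import java.util.logging.Logger;

public class RootHandlerCheck {
    public static void main(String[] args) {
        // The JUL root logger is addressed by the empty string.
        Logger root = Logger.getLogger("");
        // Before creating the JavaSparkContext this listed our ConsoleHandler
        // and FileHandler; afterwards only org.slf4j.bridge.SLF4JBridgeHandler
        // was left.
        for (Handler handler : root.getHandlers()) {
            System.out.println(handler.getClass().getName());
        }
    }
}
}}}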

On 11.01.2016 at 10:56, Max Schmidt wrote:
> Hi there,
>
> we're having a strange problem here using Spark in a Java application
> using the JavaSparkContext:
>
> We are using java.util.logging.* for logging in our application with two
> handlers (ConsoleHandler + FileHandler):
>
> {{{
> .handlers=java.util.logging.ConsoleHandler, java.util.logging.FileHandler
>
> .level = FINE
>
> java.util.logging.ConsoleHandler.level=INFO
> java.util.logging.ConsoleHandler.formatter=java.util.logging.SimpleFormatter
>
> java.util.logging.FileHandler.level= FINE
> java.util.logging.FileHandler.formatter=java.util.logging.SimpleFormatter
> java.util.logging.FileHandler.limit=1024
> java.util.logging.FileHandler.count=5
> java.util.logging.FileHandler.append= true
> java.util.logging.FileHandler.pattern=%t/delivery-model.%u.%g.txt
>
> java.util.logging.SimpleFormatter.format=%1$tY-%1$tm-%1$td
> %1$tH:%1$tM:%1$tS %5$s%6$s%n
> }}}
>
> The thing is that when the JavaSparkContext is started, the logging stops.
>
> The log4j.properties for spark looks like this:
>
> {{{
> log4j.rootLogger=WARN, theConsoleAppender
> log4j.additivity.io.datapath=false
> log4j.appender.theConsoleAppender=org.apache.log4j.ConsoleAppender
> log4j.appender.theConsoleAppender.layout=org.apache.log4j.PatternLayout
> log4j.appender.theConsoleAppender.layout.ConversionPattern=%d{yyyy-MM-dd
> HH:mm:ss} %m%n
> }}}
>
> Obviously I am not an expert in the logging architecture yet, but I
> really need to understand how the handlers of our JUL logging are changed
> by the Spark library.
>
> Thanks in advance!
>
>
>
> -
> To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
> For additional commands, e-mail: user-h...@spark.apache.org
>


-
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
For additional commands, e-mail: user-h...@spark.apache.org



Re: Logger overridden when using JavaSparkContext

2016-01-11 Thread Max Schmidt

Okay, I solved this problem...
It was my own fault: I was configuring the root logger for
java.util.logging.

Using an explicit logger name for the handlers/level solved it.
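
For the archive, a minimal sketch of what the fixed JUL properties look like
(using io.datapath as the logger name, borrowed from the log4j config quoted
below; adjust to your own package). The handlers and the level are attached to
an explicitly named logger rather than to the root logger, so whatever replaces
the root logger's handlers should no longer matter:

{{{
# handlers and level scoped to our own named logger instead of the root logger
io.datapath.handlers=java.util.logging.ConsoleHandler, java.util.logging.FileHandler
io.datapath.level=FINE
io.datapath.useParentHandlers=false

java.util.logging.ConsoleHandler.level=INFO
java.util.logging.FileHandler.level=FINE
java.util.logging.FileHandler.pattern=%t/delivery-model.%u.%g.txt
}}}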

On 2016-01-11 12:33, Max Schmidt wrote:

I checked the handlers of my rootLogger
(java.util.logging.Logger.getLogger("")), which were
a ConsoleHandler and a FileHandler.

After the JavaSparkContext was created, the rootLogger only contained an
'org.slf4j.bridge.SLF4JBridgeHandler'.

On 11.01.2016 at 10:56, Max Schmidt wrote:

Hi there,

we're having a strange problem here using Spark in a Java application
using the JavaSparkContext:

We are using java.util.logging.* for logging in our application with two
handlers (ConsoleHandler + FileHandler):

{{{
.handlers=java.util.logging.ConsoleHandler, java.util.logging.FileHandler

.level = FINE

java.util.logging.ConsoleHandler.level=INFO
java.util.logging.ConsoleHandler.formatter=java.util.logging.SimpleFormatter

java.util.logging.FileHandler.level= FINE
java.util.logging.FileHandler.formatter=java.util.logging.SimpleFormatter
java.util.logging.FileHandler.limit=1024
java.util.logging.FileHandler.count=5
java.util.logging.FileHandler.append= true
java.util.logging.FileHandler.pattern=%t/delivery-model.%u.%g.txt

java.util.logging.SimpleFormatter.format=%1$tY-%1$tm-%1$td
%1$tH:%1$tM:%1$tS %5$s%6$s%n
}}}

The thing is that when the JavaSparkContext is started, the logging stops.

The log4j.properties for spark looks like this:

{{{
log4j.rootLogger=WARN, theConsoleAppender
log4j.additivity.io.datapath=false
log4j.appender.theConsoleAppender=org.apache.log4j.ConsoleAppender
log4j.appender.theConsoleAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.theConsoleAppender.layout.ConversionPattern=%d{yyyy-MM-dd
HH:mm:ss} %m%n
}}}

Obviously I am not an expert in the logging architecture yet, but I
really need to understand how the handlers of our JUL logging are changed
by the Spark library.

Thanks in advance!




-
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
For additional commands, e-mail: user-h...@spark.apache.org




-
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
For additional commands, e-mail: user-h...@spark.apache.org



[jira] [Commented] (HDFS-6804) race condition between transferring block and appending block causes "Unexpected checksum mismatch exception"

2015-12-16 Thread Max Schmidt (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15061626#comment-15061626
 ] 

Max Schmidt commented on HDFS-6804:
---

Hadoop version is 2.7.1 used on Ubuntu 14.04.3 LTS.

> race condition between transferring block and appending block causes 
> "Unexpected checksum mismatch exception" 
> --
>
> Key: HDFS-6804
> URL: https://issues.apache.org/jira/browse/HDFS-6804
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.2.0
>Reporter: Gordon Wang
>
> We found some error logs in the datanode, like this:
> {noformat}
> 2014-07-22 01:49:51,338 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Exception for BP-2072804351-192.168.2.104-1406008383435:blk_1073741997_9248
> java.io.IOException: Terminating due to a checksum error.java.io.IOException: 
> Unexpected checksum mismatch while writing 
> BP-2072804351-192.168.2.104-1406008383435:blk_1073741997_9248 from 
> /192.168.2.101:39495
> at 
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:536)
> at 
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:703)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:575)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:115)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:68)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:221)
> at java.lang.Thread.run(Thread.java:744)
> {noformat}
> While on the source datanode, the log says the block is transmitted.
> {noformat}
> 2014-07-22 01:49:50,805 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
> DataTransfer: Transmitted 
> BP-2072804351-192.168.2.104-1406008383435:blk_1073741997_9248 
> (numBytes=16188152) to /192.168.2.103:50010
> {noformat}
> When the destination datanode gets the checksum mismatch, it reports a bad 
> block to the NameNode and the NameNode marks the replica on the source datanode as 
> corrupt. But actually, the replica on the source datanode is valid, because 
> the replica can pass checksum verification.
> In all, the replica on the source datanode is wrongly marked as corrupt.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6804) race condition between transferring block and appending block causes "Unexpected checksum mismatch exception"

2015-11-05 Thread Max Schmidt (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14991579#comment-14991579
 ] 

Max Schmidt commented on HDFS-6804:
---

We can confirm this in a 3-node cluster with replication-factor=2.

The exception happened, with the following stacktrace, when accessing the corrupt 
file:

Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): append: 
lastBlock=blk_1073742163_2852 of src=testfile is not sufficiently replicated 
yet.
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInternal(FSNamesystem.java:2692)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInt(FSNamesystem.java:2985)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:2952)
at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:653)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:421)
at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2043)

at org.apache.hadoop.ipc.Client.call(Client.java:1476)
at org.apache.hadoop.ipc.Client.call(Client.java:1407)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
at com.sun.proxy.$Proxy9.append(Unknown Source)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.append(ClientNamenodeProtocolTranslatorPB.java:328)
at sun.reflect.GeneratedMethodAccessor57.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
at com.sun.proxy.$Proxy10.append(Unknown Source)
at org.apache.hadoop.hdfs.DFSClient.callAppend(DFSClient.java:1822)
at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1885)
at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1855)
at 
org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:340)
at 
org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:336)
at 
org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at 
org.apache.hadoop.hdfs.DistributedFileSystem.append(DistributedFileSystem.java:348)
at 
org.apache.hadoop.hdfs.DistributedFileSystem.append(DistributedFileSystem.java:318)
at org.apache.hadoop.fs.FileSystem.append(FileSystem.java:1164)
at 
io.datapath.ps.FileAccessWriteImpl.(FileAccessWriteImpl.java:29)
... 8 more


Is there a way to manually replicate the file?
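
To clarify what I mean by manually replicating, something along these lines (a
sketch using the public FileSystem API; the path and the target factor are
placeholders, and I am not sure whether this is safe while a replica is marked
corrupt). As far as I understand, raising the replication factor asks the
NameNode to schedule additional copies of the healthy replica:

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class BumpReplication {
    public static void main(String[] args) throws Exception {
        // Placeholder path; "testfile" is the file from the stacktrace above.
        Path file = new Path("/testfile");
        try (FileSystem fs = FileSystem.get(new Configuration())) {
            // Request a higher replication factor for just this file.
            boolean accepted = fs.setReplication(file, (short) 3);
            System.out.println("setReplication accepted: " + accepted);
        }
    }
}
{code}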

> race condition between transferring block and appending block causes 
> "Unexpected checksum mismatch exception" 
> --
>
> Key: HDFS-6804
> URL: https://issues.apache.org/jira/browse/HDFS-6804
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.2.0
>Reporter: Gordon Wang
>
> We found some error logs in the datanode, like this:
> {noformat}
> 2014-07-22 01:49:51,338 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Exception for BP-2072804351-192.168.2.104-1406008383435:blk_1073741997_9248
> java.io.IOException: Terminating due to a checksum error.java.io.IOException: 
> Unexpected checksum mismatch while writing 
> BP-2072804351-192.168.2.104-1406008383435:blk_1073741997_9248 from 
> /192.168.2.101:39495
> at 
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:536)
> at 
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:703)
> 

[jira] [Commented] (SPARK-10221) RowReaderFactory does not work with blobs

2015-08-26 Thread Max Schmidt (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-10221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14713038#comment-14713038
 ] 

Max Schmidt commented on SPARK-10221:
-

Sorry for not being clear. I am using the DataStax driver, but I guess the 
initial problem is somewhere in the TypeConverter class:

com.datastax.spark.connector.types.TypeConverter.ChainedTypeConverter.CollectionConverter.OptionToNullConverter.orderingFor()

There is no way a blob can go through a TypeConverter, right?
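
The workaround I had in mind is to copy the blob into a plain byte[] before the
tuple is shipped back to the driver, roughly like this (a sketch using only
standard Java; the helper name is made up and it is not part of the connector
API):

{code:java}
import java.nio.ByteBuffer;

public final class Blobs {

    private Blobs() {
    }

    // Copies the readable bytes of a (possibly heap-backed) ByteBuffer into a
    // plain byte[], which is Java-serializable. Working on a duplicate leaves
    // the original buffer's position untouched.
    public static byte[] toBytes(ByteBuffer buffer) {
        ByteBuffer copy = buffer.duplicate();
        byte[] bytes = new byte[copy.remaining()];
        copy.get(bytes);
        return bytes;
    }
}
{code}

Mapping each fetched tuple through something like Blobs.toBytes(...) before any 
collect() should keep the HeapByteBuffer out of the task result.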

 RowReaderFactory does not work with blobs
 -

 Key: SPARK-10221
 URL: https://issues.apache.org/jira/browse/SPARK-10221
 Project: Spark
  Issue Type: Bug
  Components: SQL
Reporter: Max Schmidt

 While using a RowReaderFactory out of the Util API here: 
 com.datastax.spark.connector.japi.CassandraJavaUtil.mapRowToTuple(, 
 ClassByteBuffer) against a Cassandra table with a column which is described 
 as a ByteBuffer, I get the following stacktrace:
 {quote}
 8786 [task-result-getter-0] ERROR org.apache.spark.scheduler.TaskSetManager  
 - Task 0.0 in stage 0.0 (TID 0) had a not serializable result: 
 java.nio.HeapByteBuffer
 Serialization stack:
 - object not serializable (class: java.nio.HeapByteBuffer, value: 
 java.nio.HeapByteBuffer[pos=0 lim=2 cap=2])
 - field (class: scala.Tuple4, name: _2, type: class java.lang.Object)
 - object (class scala.Tuple4, 
 (/104.130.160.121,java.nio.HeapByteBuffer[pos=0 lim=2 cap=2],Tue Aug 25 
 11:00:23 CEST 2015,76.808)); not retrying
 Exception in thread "main" org.apache.spark.SparkException: Job aborted due 
 to stage failure: Task 0.0 in stage 0.0 (TID 0) had a not serializable 
 result: java.nio.HeapByteBuffer
 Serialization stack:
 - object not serializable (class: java.nio.HeapByteBuffer, value: 
 java.nio.HeapByteBuffer[pos=0 lim=2 cap=2])
 - field (class: scala.Tuple4, name: _2, type: class java.lang.Object)
 - object (class scala.Tuple4, 
 (/104.130.160.121,java.nio.HeapByteBuffer[pos=0 lim=2 cap=2],Tue Aug 25 
 11:00:23 CEST 2015,76.808))
 at 
 org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1273)
 at 
 org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1264)
 at 
 org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1263)
 at 
 scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
 at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
 at 
 org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1263)
 at 
 org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:730)
 at 
 org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:730)
 at scala.Option.foreach(Option.scala:236)
 at 
 org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:730)
 at 
 org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1457)
 at 
 org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1418)
 at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
 {quote}
 Using a kind of wrapper-class following bean conventions, doesn't work either.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-10221) RowReaderFactory does not work with blobs

2015-08-25 Thread Max Schmidt (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-10221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Max Schmidt updated SPARK-10221:

Description: 
While using a RowReaderFactory out of the Util API here: 
com.datastax.spark.connector.japi.CassandraJavaUtil.mapRowToTuple(, 
ClassByteBuffer) against a Cassandra table with a column which is described 
as a ByteBuffer, I get the following stacktrace:

{quote}
8786 [task-result-getter-0] ERROR org.apache.spark.scheduler.TaskSetManager  - 
Task 0.0 in stage 0.0 (TID 0) had a not serializable result: 
java.nio.HeapByteBuffer
Serialization stack:
- object not serializable (class: java.nio.HeapByteBuffer, value: 
java.nio.HeapByteBuffer[pos=0 lim=2 cap=2])
- field (class: scala.Tuple4, name: _2, type: class java.lang.Object)
- object (class scala.Tuple4, 
(/104.130.160.121,java.nio.HeapByteBuffer[pos=0 lim=2 cap=2],Tue Aug 25 
11:00:23 CEST 2015,76.808)); not retrying
Exception in thread "main" org.apache.spark.SparkException: Job aborted due to 
stage failure: Task 0.0 in stage 0.0 (TID 0) had a not serializable result: 
java.nio.HeapByteBuffer
Serialization stack:
- object not serializable (class: java.nio.HeapByteBuffer, value: 
java.nio.HeapByteBuffer[pos=0 lim=2 cap=2])
- field (class: scala.Tuple4, name: _2, type: class java.lang.Object)
- object (class scala.Tuple4, 
(/104.130.160.121,java.nio.HeapByteBuffer[pos=0 lim=2 cap=2],Tue Aug 25 
11:00:23 CEST 2015,76.808))
at 
org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1273)
at 
org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1264)
at 
org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1263)
at 
scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
at 
org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1263)
at 
org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:730)
at 
org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:730)
at scala.Option.foreach(Option.scala:236)
at 
org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:730)
at 
org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1457)
at 
org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1418)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
{quote}

Using a kind of wrapper-class following bean conventions, doesn't work either.

  was:
While using a RowReaderFactory out of the Util API here: 
com.datastax.spark.connector.japi.CassandraJavaUtil.mapRowToTuple(, 
ClassByteBuffer) against a Cassandra table with a column which is described 
as a ByteBuffer, I get the following stacktrace:

8786 [task-result-getter-0] ERROR org.apache.spark.scheduler.TaskSetManager  - 
Task 0.0 in stage 0.0 (TID 0) had a not serializable result: 
java.nio.HeapByteBuffer
Serialization stack:
- object not serializable (class: java.nio.HeapByteBuffer, value: 
java.nio.HeapByteBuffer[pos=0 lim=2 cap=2])
- field (class: scala.Tuple4, name: _2, type: class java.lang.Object)
- object (class scala.Tuple4, 
(/104.130.160.121,java.nio.HeapByteBuffer[pos=0 lim=2 cap=2],Tue Aug 25 
11:00:23 CEST 2015,76.808)); not retrying
Exception in thread "main" org.apache.spark.SparkException: Job aborted due to 
stage failure: Task 0.0 in stage 0.0 (TID 0) had a not serializable result: 
java.nio.HeapByteBuffer
Serialization stack:
- object not serializable (class: java.nio.HeapByteBuffer, value: 
java.nio.HeapByteBuffer[pos=0 lim=2 cap=2])
- field (class: scala.Tuple4, name: _2, type: class java.lang.Object)
- object (class scala.Tuple4, 
(/104.130.160.121,java.nio.HeapByteBuffer[pos=0 lim=2 cap=2],Tue Aug 25 
11:00:23 CEST 2015,76.808))
at 
org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1273)
at 
org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1264)
at 
org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1263)
at 
scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
at 
org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1263)
at 
org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:730

[jira] [Created] (SPARK-10221) RowReaderFactory does not work with blobs

2015-08-25 Thread Max Schmidt (JIRA)
Max Schmidt created SPARK-10221:
---

 Summary: RowReaderFactory does not work with blobs
 Key: SPARK-10221
 URL: https://issues.apache.org/jira/browse/SPARK-10221
 Project: Spark
  Issue Type: Bug
Reporter: Max Schmidt


While using a RowReaderFactory out of the Util API here: 
com.datastax.spark.connector.japi.CassandraJavaUtil.mapRowToTuple(, 
ClassByteBuffer) against a Cassandra table with a column which is described 
as a ByteBuffer, I get the following stacktrace:

8786 [task-result-getter-0] ERROR org.apache.spark.scheduler.TaskSetManager  - 
Task 0.0 in stage 0.0 (TID 0) had a not serializable result: 
java.nio.HeapByteBuffer
Serialization stack:
- object not serializable (class: java.nio.HeapByteBuffer, value: 
java.nio.HeapByteBuffer[pos=0 lim=2 cap=2])
- field (class: scala.Tuple4, name: _2, type: class java.lang.Object)
- object (class scala.Tuple4, 
(/104.130.160.121,java.nio.HeapByteBuffer[pos=0 lim=2 cap=2],Tue Aug 25 
11:00:23 CEST 2015,76.808)); not retrying
Exception in thread "main" org.apache.spark.SparkException: Job aborted due to 
stage failure: Task 0.0 in stage 0.0 (TID 0) had a not serializable result: 
java.nio.HeapByteBuffer
Serialization stack:
- object not serializable (class: java.nio.HeapByteBuffer, value: 
java.nio.HeapByteBuffer[pos=0 lim=2 cap=2])
- field (class: scala.Tuple4, name: _2, type: class java.lang.Object)
- object (class scala.Tuple4, 
(/104.130.160.121,java.nio.HeapByteBuffer[pos=0 lim=2 cap=2],Tue Aug 25 
11:00:23 CEST 2015,76.808))
at 
org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1273)
at 
org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1264)
at 
org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1263)
at 
scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
at 
org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1263)
at 
org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:730)
at 
org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:730)
at scala.Option.foreach(Option.scala:236)
at 
org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:730)
at 
org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1457)
at 
org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1418)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)

Using a kind of wrapper-class following bean conventions, doesn't work either.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



Re: [Alsa-user] Constant delay between starting a playback and a capture stream

2014-07-02 Thread Max Schmidt

Hi,



just as a correction, in case someone ever reads my question again:

of course it's microseconds and not milliseconds.



Sorry for the noise, and regards


--
Open source business process management suite built on Java and Eclipse
Turn processes into business applications with Bonita BPM Community Edition
Quickly connect people, data, and systems into organized workflows
Winner of BOSSIE, CODIE, OW2 and Gartner awards
http://p.sf.net/sfu/Bonitasoft___
Alsa-user mailing list
Alsa-user@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/alsa-user


Re: [Alsa-user] Constant delay between starting a playback and a capture stream

2014-06-27 Thread Max Schmidt

Hi Dominique,



thanks for your answer!

Yes, I have already thought of that as well. I just thought it was maybe not appropriate to ask there since I'm not involved in the ALSA project. But I will give it a try.

The JACK API was already on my mind, too. I just don't know whether it is rather meant for small delays between streams (synchronous, but different after each new start of the application), or whether it can also guarantee that the delay between streams is always the same, even after a new start. Maybe I'll give it a try, too.



Thanks again and cheers!





Sent: Thursday, 26 June 2014 at 12:48
From: Dominique Michel dominique.mic...@vtxnet.ch
To: alsa-user@lists.sourceforge.net
Subject: Re: [Alsa-user] Constant delay between starting a playback and a capture stream

On Mon, 23 Jun 2014 18:20:43 +0200,
Max Schmidt schmidti...@web.de wrote:

Hi,

I am not an audio developer but I think the LAD list is a better place
for such highly technical issues. I also think most developers that
want a constant audio latency with their application will use the
JACK API instead of the ALSA API.

http://lists.linuxaudio.org/listinfo/linux-audio-dev

 Hey all,

 first a big hello to everyone since I'm new to this mailing list.
 I've had a look regarding this issue here and in general on the
 internet but didn't find anything related. So, sorry if I overlooked
 something. I've got a BeagleBone Black (ARM Cortex-A8) with an
 Audio-Cape (using McASP and the ALSA DaVinci drivers) and wrote an mmap
 based playback-capture application (both devices hw:0,0). The
 important thing is that the application needs a constant delay (not
 necessarily small) between the start of the playback and the capture
 stream. So when looping back the signal to the microphone, the first
 played sample e.g. always (even after a restart of the BeagleBone or
 the app) has a delay in the capture stream of exactly 80 samples
 (when e.g. played at 48 kHz), and once measured can be seen as
 constant. To realize that I use the ALSA API function
 snd_pcm_link(c_handle, p_handle). When starting the playback stream
 (and therefore the linked capture stream as well) manually, its buffer
 is already filled. There is no buffer underrun/overrun recovery. It
 already works quite fine but I still have some questions and it would
 be really nice if someone could help: 1) Looking at a plot of multiple
 measurements of looped back and captured square waves (or sines),
 there still is a jitter of about two to four samples (at 48 kHz,
 which is about 40 to 80 ms), no matter if I run it as an RT app (energy
 save modes etc. disabled, just important processes have a higher
 priority, e.g. EDMA) or as a normal app. Please correct me if I'm wrong,
 but as far as I understand, when the streams are linked, at start the
 processor goes through a linked list triggering all linked
 streams. And the trigger start is an atomic process, so it shouldn't get
 interrupted. Shouldn't it always take the same time between starting
 the playback and the capture stream then? And if yes, where could the
 variable start delay come from? 2) Looking at the time stamps of both
 streams, they tell that the difference between the start triggers is
 normally between 2 and 7 ms. Which, I think, does not really fit
 the observation written above, since then there normally should be no
 big sample jitter at 48 kHz. Are the time stamp values not precise
 enough (actually, how precise are the time stamps? Just that they can
 show microseconds does not imply that the resolution really is
 microseconds, or am I wrong?), or are they correct and the difference
 in latency seen in the measured plots comes from somewhere else? Delay
 due to hardware should be constant, so the issue must have something
 to do with ALSA or Linux. It would be nice if someone could help or has an
 idea! Tips for improvement are welcome as well! Many thanks and
 cheers!

--
Open source business process management suite built on Java and Eclipse
Turn processes into business applications with Bonita BPM Community Edition
Quickly connect people, data, and systems into organized workflows
Winner of BOSSIE, CODIE, OW2 and Gartner awards
http://p.sf.net/sfu/Bonitasoft
___
Alsa-user mailing list
Alsa-user@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/alsa-user




--
Open source business process management suite built on Java and Eclipse
Turn processes into business applications with Bonita BPM Community Edition
Quickly connect people, data, and systems into organized workflows
Winner of BOSSIE, CODIE, OW2 and Gartner awards
http://p.sf.net/sfu/Bonitasoft___
Alsa-user mailing list
Alsa-user@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/alsa-user


[Alsa-user] Constant delay between starting a playback and a capture stream

2014-06-23 Thread Max Schmidt
Hey all,



first a big hello to everyone since I'm new to this mailing list.

I've had a look regarding this issue here and in general on the internet but didn't find anything related. So, sorry if I overlooked something.



I've got a BeagleBone Black (ARM Cortex-A8) with an Audio-Cape (using McASP and the ALSA DaVinci drivers) and wrote an mmap-based playback-capture application (both devices hw:0,0). The important thing is that the application needs a constant delay (not necessarily small) between the start of the playback and the capture stream. So when looping back the signal to the microphone, the first played sample e.g. always (even after a restart of the BeagleBone or the app) has a delay in the capture stream of exactly 80 samples (when e.g. played at 48 kHz), and once measured can be seen as constant.

To realize that I use the ALSA API function snd_pcm_link(c_handle, p_handle). When starting the playback stream (and therefore the linked capture stream as well) manually, its buffer is already filled. There is no buffer underrun/overrun recovery.

It already works quite fine, but I still have some questions, and it would be really nice if someone could help:



1) Looking at a plot of multiple measurements of looped back and captured square waves (or sines), there still is a jitter of about two to four samples (at 48 kHz, which is about 40 to 80 ms), no matter if I run it as an RT app (energy-save modes etc. disabled, just important processes have a higher priority, e.g. EDMA) or as a normal app.

Please correct me if I'm wrong, but as far as I understand, when the streams are linked, at start the processor goes through a linked list triggering all linked streams. And the trigger start is an atomic process, so it shouldn't get interrupted. Shouldn't it always take the same time between starting the playback and the capture stream then? And if yes, where could the variable start delay come from?



2) Looking at the time stamps of both streams, they tell that the difference between the start triggers is normally between 2 and 7 ms. Which, I think, does not really fit the observation written above, since then there normally should be no big sample jitter at 48 kHz. Are the time stamp values not precise enough (actually, how precise are the time stamps? Just that they can show microseconds does not imply that the resolution really is microseconds, or am I wrong?), or are they correct and the difference in latency seen in the measured plots comes from somewhere else? Delay due to hardware should be constant, so the issue must have something to do with ALSA or Linux.




It would be nice if someone could help or has an idea!

Tips for improvement are welcome as well!

Many thanks and cheers!


--
HPCC Systems Open Source Big Data Platform from LexisNexis Risk Solutions
Find What Matters Most in Your Big Data with HPCC Systems
Open Source. Fast. Scalable. Simple. Ideal for Dirty Data.
Leverages Graph Analysis for Fast Processing  Easy Data Exploration
http://p.sf.net/sfu/hpccsystems___
Alsa-user mailing list
Alsa-user@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/alsa-user


[jira] [Created] (ZOOKEEPER-1687) Number of past transaction retains in ZKDatabase.committedLog should be configurable

2013-04-08 Thread Max Schmidt (JIRA)
Max Schmidt created ZOOKEEPER-1687:
--

 Summary: Number of past transaction retains in 
ZKDatabase.committedLog should be configurable
 Key: ZOOKEEPER-1687
 URL: https://issues.apache.org/jira/browse/ZOOKEEPER-1687
 Project: ZooKeeper
  Issue Type: Improvement
Reporter: Max Schmidt
Priority: Minor


ZKDatabase.committedLog retains the past 500 transactions. In cases where memory 
usage is more important than speed, or vice versa, this should be configurable.
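
A minimal sketch of what I have in mind (the property name zookeeper.commitLogCount 
is just a suggestion, not an existing setting):

{code:java}
public class ZKDatabaseConfigSketch {
    // Current hard-coded behaviour: the last 500 committed transactions are kept.
    public static final int DEFAULT_COMMIT_LOG_COUNT = 500;

    // Proposal: let operators override the number via a system property,
    // falling back to the current default.
    public static final int COMMIT_LOG_COUNT =
            Integer.getInteger("zookeeper.commitLogCount", DEFAULT_COMMIT_LOG_COUNT);
}
{code}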

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (ZOOKEEPER-1687) Number of past transactions retains in ZKDatabase.committedLog should be configurable

2013-04-08 Thread Max Schmidt (JIRA)

 [ 
https://issues.apache.org/jira/browse/ZOOKEEPER-1687?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Max Schmidt updated ZOOKEEPER-1687:
---

Summary: Number of past transactions retains in ZKDatabase.committedLog 
should be configurable  (was: Number of past transaction retains in 
ZKDatabase.committedLog should be configurable)

 Number of past transactions retains in ZKDatabase.committedLog should be 
 configurable
 -

 Key: ZOOKEEPER-1687
 URL: https://issues.apache.org/jira/browse/ZOOKEEPER-1687
 Project: ZooKeeper
  Issue Type: Improvement
Reporter: Max Schmidt
Priority: Minor

 ZKDatabase.committedLog retains the past 500 transactions. In cases where memory 
 usage is more important than speed, or vice versa, this should be 
 configurable.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (SOLR-2968) Hunspell very high memory use when loading dictionary

2013-01-22 Thread Max Schmidt (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13559680#comment-13559680
 ] 

Max Schmidt commented on SOLR-2968:
---

Dictionaries with the same file location should be shared across all fields and 
all indexes. This would minimize the problem if you're using multiple indexes. 

Currently I can't use Solr because I have 10 indexes with 5 fields each, and to each 
field a DictionaryCompoundWordTokenFilterFactory is assigned. So the dictionary 
will be loaded 50 times. This is too much for my RAM.
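
A rough sketch of the sharing I am suggesting (class names are made up, this is 
not the actual Lucene/Solr API): key the loaded dictionary by its file location 
so that 50 filter factories pointing at the same files end up with a single 
in-memory copy.

{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class SharedDictionaryCache {

    // One entry per dictionary file location, shared by every field and index
    // that references the same files.
    private static final Map<String, LoadedDictionary> CACHE = new ConcurrentHashMap<>();

    public static LoadedDictionary get(String dictionaryPath) {
        // The expensive parse of the .dic/.aff files happens at most once per path.
        return CACHE.computeIfAbsent(dictionaryPath, LoadedDictionary::new);
    }

    // Placeholder standing in for the parsed Hunspell dictionary.
    public static class LoadedDictionary {
        public final String path;

        public LoadedDictionary(String path) {
            this.path = path;
        }
    }
}
{code}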

 Hunspell very high memory use when loading dictionary
 -

 Key: SOLR-2968
 URL: https://issues.apache.org/jira/browse/SOLR-2968
 Project: Solr
  Issue Type: Bug
Affects Versions: 3.5
Reporter: Maciej Lisiewski
Priority: Minor
 Attachments: patch.txt


 Hunspell stemmer requires gigantic (for the task) amounts of memory to load 
 dictionary/rules files. 
 For example loading a 4.5 MB polish dictionary (with empty index!) will cause 
 whole core to crash with various out of memory errors unless you set max heap 
 size close to 2GB or more.
 By comparison Stempel using the same dictionary file works just fine with 1/8 
 of that (and possibly lower values as well).
 Sample error log entries:
 http://pastebin.com/fSrdd5W1
 http://pastebin.com/Lmi0re7Z

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[TYPO3-german] pdf_generator2 and access-protected pages (TYPO3 4.5.9, pdf_generator2 0.21.1)

2011-12-18 Thread Max Schmidt

Hello list,

I have meanwhile got the extension pdf_generator2 to work.
However, I also have access-protected pages. A PDF is generated
for these as well, but content elements that are protected via the
Access option with a user group set are not included, despite
being logged in.
Can a FE user be passed to the extension, or can the problem be
solved in some other way?
I have also tried protecting the page instead of the content element,
but that does not help either. Although I would also like to solve it
for that case.


Or is there another recommendation for PDF generation that can do this?

Regards
Max


___
TYPO3-german mailing list
TYPO3-german@lists.typo3.org
http://lists.typo3.org/cgi-bin/mailman/listinfo/typo3-german