Michael,

On Fri, Dec 12, 2014 at 5:10 PM, Michael Rose <mich...@fullcontact.com>
wrote:
>
> Hi Ernesto,
>
> Having multi-threaded bolts is fine as long as you synchronize on the
> OutputCollector before emitting/acking. That'll solve your issue.
>

Thanks for your answer! Does that apply even if the thread is not a Storm thread?
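
For reference, this is roughly the pattern I have in mind (a minimal sketch,
not my real code; the bolt class, the "value" field and the 5-second flush
interval are made up). One lock guards both the buffer and the collector, and
the flush runs on a plain ScheduledExecutorService thread rather than a Storm
thread, like the dump-forcing threads I mention below:

-------------------------------------------------
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

import backtype.storm.task.OutputCollector;
import backtype.storm.task.TopologyContext;
import backtype.storm.topology.OutputFieldsDeclarer;
import backtype.storm.topology.base.BaseRichBolt;
import backtype.storm.tuple.Fields;
import backtype.storm.tuple.Tuple;
import backtype.storm.tuple.Values;

// Hypothetical accumulating bolt: buffers values and dumps them from a
// non-Storm thread, synchronizing on the collector before emit/ack.
public class AccumulatingBolt extends BaseRichBolt {

    private OutputCollector collector;
    private final List<Object> buffer = new ArrayList<Object>();
    private transient ScheduledExecutorService flusher;

    @Override
    public void prepare(Map stormConf, TopologyContext context, OutputCollector collector) {
        this.collector = collector;
        // The "dump-forcing" thread: flushes even when no new tuples arrive.
        flusher = Executors.newSingleThreadScheduledExecutor();
        flusher.scheduleAtFixedRate(new Runnable() {
            public void run() {
                flush();
            }
        }, 5, 5, TimeUnit.SECONDS);
    }

    @Override
    public void execute(Tuple input) {
        // One lock protects both the buffer and every collector call.
        synchronized (collector) {
            buffer.add(input.getValue(0));
            collector.ack(input);
        }
    }

    private void flush() {
        // Same lock for the background thread's emits.
        synchronized (collector) {
            for (Object value : buffer) {
                collector.emit(new Values(value));
            }
            buffer.clear();
        }
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        declarer.declare(new Fields("value"));
    }

    @Override
    public void cleanup() {
        if (flusher != null) {
            flusher.shutdownNow();
        }
    }
}
-------------------------------------------------

If that matches what you meant, I will guard every emit/ack in my other
custom threads the same way.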


> Michael
>
> Michael Rose (@Xorlev <https://twitter.com/xorlev>)
> Senior Platform Engineer, FullContact <http://www.fullcontact.com/>
> mich...@fullcontact.com
>
> On Fri, Dec 12, 2014 at 6:05 AM, Ernesto Reinaldo Barreiro <
> reier...@gmail.com> wrote:
>>
>> Hi.
>>
>> This seems to be related to the fact that there are other threads (some
>> custom threads) interacting with my custom bolts. Removing them "fixes"
>> the problem.
>>
>> Side note: my bolts accumulate data and dump it when new data arrives...
>> if no new data arrives, the accumulated data is never dumped. That's why
>> I had these dump-forcing threads.
>>
>> On Fri, Dec 12, 2014 at 11:04 AM, Ernesto Reinaldo Barreiro <
>> reier...@gmail.com> wrote:
>>>
>>> Hi,
>>>
>>> I'm relatively new to Storm (I've been using it for a couple of months),
>>> so excuse any stupid question I might post :-)
>>>
>>> I have a somewhat complex topology... At some point while developing it
>>> I made a change that is producing the following exception:
>>>
>>> -------------------------------------------------
>>> 325692 [Thread-345-disruptor-executor[32 32]-send-queue] ERROR backtype.storm.util - Async loop died!
>>> java.lang.RuntimeException: java.lang.NullPointerException
>>> at backtype.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:128) ~[storm-core-0.9.3.jar:0.9.3]
>>> at backtype.storm.utils.DisruptorQueue.consumeBatchWhenAvailable(DisruptorQueue.java:99) ~[storm-core-0.9.3.jar:0.9.3]
>>> at backtype.storm.disruptor$consume_batch_when_available.invoke(disruptor.clj:80) ~[storm-core-0.9.3.jar:0.9.3]
>>> at backtype.storm.disruptor$consume_loop_STAR_$fn__1460.invoke(disruptor.clj:94) ~[storm-core-0.9.3.jar:0.9.3]
>>> at backtype.storm.util$async_loop$fn__464.invoke(util.clj:463) ~[storm-core-0.9.3.jar:0.9.3]
>>> at clojure.lang.AFn.run(AFn.java:24) [clojure-1.5.1.jar:na]
>>> at java.lang.Thread.run(Thread.java:745) [na:1.7.0_72]
>>> Caused by: java.lang.NullPointerException: null
>>> at clojure.lang.RT.intCast(RT.java:1087) ~[clojure-1.5.1.jar:na]
>>> at backtype.storm.daemon.worker$mk_transfer_fn$fn__3549.invoke(worker.clj:129) ~[storm-core-0.9.3.jar:0.9.3]
>>> at backtype.storm.daemon.executor$start_batch_transfer__GT_worker_handler_BANG_$fn__3283.invoke(executor.clj:258) ~[storm-core-0.9.3.jar:0.9.3]
>>> at backtype.storm.disruptor$clojure_handler$reify__1447.onEvent(disruptor.clj:58) ~[storm-core-0.9.3.jar:0.9.3]
>>> at backtype.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:125) ~[storm-core-0.9.3.jar:0.9.3]
>>> ... 6 common frames omitted
>>> 325693 [Thread-345-disruptor-executor[32 32]-send-queue] ERROR backtype.storm.daemon.executor
>>> ===================================
>>>
>>> It is clearly something I have added, because this was not happening
>>> before yesterday. I was initially using 0.9.2-incubating, and upgrading to
>>> 0.9.3 gives the same behavior. Is this a known issue? I can describe in
>>> more detail what I'm doing if that would help find the culprit... or even
>>> try to create a quick-start project. This happens running the topology on
>>> a local cluster... I have not tested it in a production setting yet.
>>>
>>> --
>>> Regards - Ernesto Reinaldo Barreiro
>>>
>>
>>
>> --
>> Regards - Ernesto Reinaldo Barreiro
>>
>

-- 
Regards - Ernesto Reinaldo Barreiro
