If there is no tuple failure, it might be the intended behaviour of a worker. :)
Maybe someone with command over the internal details of Storm can comment here.
One more thing that comes to mind, given that the connection reset is delayed
after increasing the worker memory, is to check for a memory leak, in case the
heap is growing continuously with the increasing number of tuples and causing
the worker to restart.
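
A quick way to make such a leak visible is to turn on GC logging and a heap
dump on OutOfMemoryError for the workers. The sketch below is only
illustrative and assumes the per-topology override via
Config.TOPOLOGY_WORKER_CHILDOPTS; the heap size and file paths are
placeholders, not recommendations:

    import backtype.storm.Config;

    public class WorkerHeapDebugConfig {
        // Illustrative sketch: enable GC logging and a heap dump on OOM for the
        // workers of one topology, so a steadily growing heap shows up in the logs.
        public static Config build() {
            Config conf = new Config();
            conf.put(Config.TOPOLOGY_WORKER_CHILDOPTS,
                    "-Xmx2048m"                                       // placeholder heap size
                    + " -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps"
                    + " -Xloggc:/tmp/storm-worker-gc.log"             // placeholder path
                    + " -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp");
            return conf;
        }
    }

If the GC log shows the old generation climbing steadily while tuples are
being emitted, the heap dump taken on OOM should show what is being retained.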


On Fri, Sep 5, 2014 at 2:10 AM, Alberto Cordioli <cordioli.albe...@gmail.com> wrote:

> Do you mean the config in the yaml file? I increased the worker memory and
> the spout is able to emit more tuples; the error is delayed but still
> there! The weird thing is that there are no tuple failures.
> On 04/Sep/2014 20:08, "Vikas Agarwal" <vi...@infoobjects.com> wrote:
>
>> I am not sure about it; however, looking at all the possible config options
>> for Storm would help. I did the same for one of my issues and found one
>> config option that was causing tuple failures.
>>
>>
>> On Thu, Sep 4, 2014 at 9:47 PM, Alberto Cordioli <cordioli.albe...@gmail.com> wrote:
>>
>>> That is the full error log for the worker. There are no errors in the
>>> supervisors or in nimbus.
>>> That worker is associated with a spout that tries to connect to HDFS to
>>> read Avro files. Could the problem be related to this?
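
For context, a stripped-down spout of that kind might look roughly like the
sketch below. The class name, the HDFS path, and the output field are made up
for illustration, and error handling, end-of-file handling, and file rotation
are omitted:

    import java.util.Map;

    import backtype.storm.spout.SpoutOutputCollector;
    import backtype.storm.task.TopologyContext;
    import backtype.storm.topology.OutputFieldsDeclarer;
    import backtype.storm.topology.base.BaseRichSpout;
    import backtype.storm.tuple.Fields;
    import backtype.storm.tuple.Values;
    import org.apache.avro.file.DataFileStream;
    import org.apache.avro.generic.GenericDatumReader;
    import org.apache.avro.generic.GenericRecord;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    // Hypothetical spout that streams Avro GenericRecords out of one file on HDFS.
    public class AvroHdfsSpout extends BaseRichSpout {
        private SpoutOutputCollector collector;
        private transient DataFileStream<GenericRecord> records;

        @Override
        public void open(Map conf, TopologyContext context, SpoutOutputCollector collector) {
            this.collector = collector;
            try {
                FileSystem fs = FileSystem.get(new Configuration());
                // The path is a placeholder.
                records = new DataFileStream<GenericRecord>(
                        fs.open(new Path("/data/input.avro")),
                        new GenericDatumReader<GenericRecord>());
            } catch (Exception e) {
                throw new RuntimeException("Could not open Avro file on HDFS", e);
            }
        }

        @Override
        public void nextTuple() {
            // Emitted without a message id, so the tuple is not tracked by ackers.
            if (records.hasNext()) {
                collector.emit(new Values(records.next().toString()));
            }
        }

        @Override
        public void declareOutputFields(OutputFieldsDeclarer declarer) {
            declarer.declare(new Fields("record"));
        }
    }
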
>>>
>>>
>>> On 4 September 2014 18:07, Vikas Agarwal <vi...@infoobjects.com> wrote:
>>> > Is that the full error log? I mean, we could look into the source code
>>> > where the worker is trying to make a connection, and maybe we can guess
>>> > what is wrong with it.
>>> >
>>> >
>>> > On Thu, Sep 4, 2014 at 9:09 PM, Alberto Cordioli
>>> > <cordioli.albe...@gmail.com> wrote:
>>> >>
>>> >> I've found this post describing the same problem. Unfortunately, there
>>> >> are no answers:
>>> >>
>>> >> https://www.mail-archive.com/user@storm.incubator.apache.org/msg03623.html
>>> >>
>>> >> On 3 September 2014 18:58, Alberto Cordioli <cordioli.albe...@gmail.com> wrote:
>>> >> > Hi all,
>>> >> >
>>> >> > I searched for similar problems without any luck.
>>> >> > I implemented a spout that continuously gets this exception when it
>>> >> > emits "more than a certain number of tuples". I was not able to work
>>> >> > out exactly what that threshold is, but I emit tuples on the order of
>>> >> > millions per second.
>>> >> > I've seen that other people had the same problem and resolved it by
>>> >> > tuning the ack executors parameter. In my case I don't have ackers
>>> >> > (they are disabled at the spout level), and hence it can't be related
>>> >> > to that.
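
As a reference point, the sketch below shows the two usual ways acking is
controlled at the topology level; the values are placeholders. One thing to
keep in mind is that topology.max.spout.pending only throttles tuples that are
emitted with a message id, so with acking disabled there is no built-in
backpressure on the emit rate.

    import backtype.storm.Config;

    public class AckerConfigSketch {
        // No ackers at all: tuples are never tracked, acked, or replayed.
        public static Config withoutAckers() {
            Config conf = new Config();
            conf.setNumAckers(0);
            return conf;
        }

        // Acking enabled plus a cap on un-acked tuples in flight, which throttles
        // the spout; this only applies to tuples emitted with a message id.
        public static Config withAckersAndBackpressure() {
            Config conf = new Config();
            conf.setNumAckers(1);          // number of acker executors
            conf.setMaxSpoutPending(1000); // placeholder value
            return conf;
        }
    }
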
>>> >> >
>>> >> > The supervisor and nimbus logs look fine. The only problem I have is
>>> >> > in the spout worker:
>>> >> >
>>> >> > java.io.IOException: Connection reset by peer
>>> >> >   at sun.nio.ch.FileDispatcherImpl.read0(Native Method) ~[na:1.7.0_65]
>>> >> >   at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39) ~[na:1.7.0_65]
>>> >> >   at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223) ~[na:1.7.0_65]
>>> >> >   at sun.nio.ch.IOUtil.read(IOUtil.java:192) ~[na:1.7.0_65]
>>> >> >   at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:379) ~[na:1.7.0_65]
>>> >> >   at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:322) ~[netty-3.2.2.Final.jar:na]
>>> >> >   at org.jboss.netty.channel.socket.nio.NioWorker.processSelectedKeys(NioWorker.java:281) ~[netty-3.2.2.Final.jar:na]
>>> >> >   at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:201) ~[netty-3.2.2.Final.jar:na]
>>> >> >   at org.jboss.netty.util.internal.IoWorkerRunnable.run(IoWorkerRunnable.java:46) [netty-3.2.2.Final.jar:na]
>>> >> >   at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [na:1.7.0_65]
>>> >> >   at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [na:1.7.0_65]
>>> >> >   at java.lang.Thread.run(Thread.java:745) [na:1.7.0_65]
>>> >> >
>>> >> > Does anyone have an idea why this happens?
>>> >> >
>>> >> > Thanks,
>>> >> > Alberto
>>> >> >
>>> >> > --
>>> >> > Alberto Cordioli
>>> >>
>>> >>
>>> >>
>>> >> --
>>> >> Alberto Cordioli
>>> >
>>> >
>>> >
>>> >
>>>
>>>
>>>
>>> --
>>> Alberto Cordioli
>>>
>>
>>
>>
>>
>>


-- 
Regards,
Vikas Agarwal
91 – 9928301411

InfoObjects, Inc.
Execution Matters
http://www.infoobjects.com
2041 Mission College Boulevard, #280
Santa Clara, CA 95054
+1 (408) 988-2000 Work
+1 (408) 716-2726 Fax
