I was wondering what could be the root cause of the ERROR logs below on our
broker hosts. I see connections being closed or reset logged at INFO level,
but some show up as ERROR "because of error", and the message does not give
enough clues.
Appreciate any ideas...
[2015-03-12 11:22:02,019] ERROR Closing socket for /… because of error
(kafka.network.Processor)
java.io.IOException: Connection reset by peer
    at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
    at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
    at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
    at sun.nio.ch.IOUtil.read(IOUtil.java:197)
    at sun…
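For what it's worth, "Connection reset by peer" only means the remote end (a client process, or a load balancer in between) sent a TCP RST while the broker's network thread was blocked in a read. A minimal sketch, plain Python sockets with no Kafka involved and all names my own, reproduces the same IOException a Processor thread sees:

```python
import socket
import struct

def read_after_rst():
    """Abort a loopback connection with a TCP RST, then read on the peer.

    This mimics what a broker network thread sees when a client dies hard or
    an LB kills a connection: the pending read fails with ECONNRESET,
    i.e. "Connection reset by peer".
    """
    server = socket.socket()
    server.bind(("127.0.0.1", 0))
    server.listen(1)
    client = socket.socket()
    client.connect(server.getsockname())
    peer, _ = server.accept()
    # SO_LINGER with a zero timeout makes close() send RST instead of FIN
    client.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER,
                      struct.pack("ii", 1, 0))
    client.close()
    try:
        peer.recv(1)          # a FIN would return b""; an RST raises
        return None
    except ConnectionResetError as err:
        return err
    finally:
        peer.close()
        server.close()

if __name__ == "__main__":
    print(type(read_after_rst()).__name__)
```

So any client that exits uncleanly, or any middlebox that drops connections with RST, produces exactly this trace on the broker; on its own it is usually noise unless clients also report errors.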
…the idle connection timeout set on the LB (which is 2 minutes, while
Kafka's TCP connection idle timeout is set to 10 minutes). This seems to be
a minor bug: the TCP connection should be closed immediately after
discovering that the seed server is not part of the brokers list.

java.io.IOException: Connection reset by peer
    at …
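If the LB is the culprit, one mitigation (a sketch, not something proposed in this thread) is to make the broker close idle connections before the LB does, via the broker config `connections.max.idle.ms`; its 600000 ms default matches the 10-minute value quoted above.

```properties
# server.properties sketch; the 100 s value is illustrative, not a recommendation.
# Closing idle connections before the LB's 2-minute timeout fires means the
# broker initiates a clean close instead of later reading into an LB reset.
connections.max.idle.ms=100000
```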
Hi, all,
I am running a C# producer sending messages to Kafka (a 3-node cluster),
but I get errors like this:
[2015-01-06 16:09:51,143] ERROR Closing socket for /10.100.70.128 because
of error (kafka.network.Processor)
java.io.IOException: Connection reset by peer
    at …
Thank you, Neha. I appreciate your help.
--
*Have a nice day.*
Regards,
Aniket Kulkarni.
>     at kafka.network.Processor.write(SocketServer.scala:375)
>     at kafka.network.Processor.run(SocketServer.scala:247)
>     at java.lang.Thread.run(Thread.java:744)
>
> Also,
>
> 2014-09-25 11:43:53,770 [kafka-processor-56598-1] ERROR
> kafka.network.Processor - Closing socket for /127.0.0.1 because of error
> java.io.IOException: Connection reset by peer
>     at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
>     at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
>     at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
>     at sun.nio.ch.IOUtil.read(IOUtil.java:197)
>     at …
Aniket,
Could you provide more context to this email? The previous conversation on
the exception is missing so I'm not sure which exception you are referring
to.
Thanks,
Neha
On Thu, Sep 25, 2014 at 8:52 AM, Aniket Kulkarni
<kulkarnianiket...@gmail.com> wrote:
@Neha When you say this is an expected exception, does that imply there is
no way of getting rid of those exceptions?
Thanks.
—
Aniket Kulkarni
Hello,
With reference to this[1] discussion, I am facing a similar issue with the
following stack trace, interchangeably with "broken pipe":
java.io.IOException: Connection reset by peer
    at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
    at sun.nio.ch.SocketDispatcher.read…
> [2014-02-01 13:06:43,240] ERROR Closing socket for /127.0.0.1 because of
> error (kafka.network.Processor)
> java.io.IOException: Connection reset by peer
>     at sun.nio.ch.FileDispatcher.read0(Native Method)
>     at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:21)
>     at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:233)
>     at sun.nio.ch.IOUtil.read(IOUtil.java:206)
>     at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:…)
I see this error in Kafka server.log; I don't see anything in my producer
or consumer log. Any idea what can be going on here?

Aparup

[2013-10-31 02:37:13,917] INFO Closing socket connection to /x.x.x.x.
(kafka.network.Processor)
[2013-10-31 02:37:21,645] INFO Closing socket connection to /x.x.x.x.
(kafka.network.Processor)
[2013-10-31 02:37:21,713] ERROR Closing socket for /x.x.x.x because of
error (kafka.network.Processor)
java.io.IOException: Connection reset by peer
    at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
    at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
    at sun…
Not sure why re-registering the broker fails. Normally, by the time the
broker re-registers, the ZK path should already be gone.
Thanks,
Jun
On Thu, Mar 28, 2013 at 8:31 AM, Yonghui Zhao wrote:
> Will do a check. I just wonder why the broker needed to re-register and
> why it failed, so that the broker service stopped.
2013/3/28 Jun Rao:
> Do you see lots of ZK session expirations in the broker too? If so, that
> suggests a GC issue in the broker as well, so you may need to tune the GC
> in the broker too.
Thanks,
Jun
On Thu, Mar 28, 2013 at 8:20 AM, Yonghui Zhao wrote:
Thanks Jun.
But I can't understand how a consumer GC pause can trigger this Kafka
server issue:
java.lang.RuntimeException: A broker is already registered on the path
/brokers/ids/0. This probably indicates that you either have configured a
brokerid that is already in use, or else you have shutdown this broker and
restarted it faster than the zookeeper timeout so it appears to be
re-registering.
The ZK session timeout only kicks in if you force-kill the consumer;
otherwise the consumer closes its ZK session properly on clean shutdown.
The problem with GC is that if the consumer pauses for a long time, the ZK
server won't receive pings from the client and can therefore expire a
still-existing session.
I used zookeeper-3.3.4 with Kafka. The default tickTime is 3 seconds, so
minSessionTimeout is 6 seconds. I have now changed tickTime to 5 seconds,
making minSessionTimeout 10 seconds.
But if we change the timeout to a larger value, then "you have shutdown
this broker and restarted it faster than the zookeeper timeout so it
appears to be re-registering" becomes more likely.
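The tickTime arithmetic above follows from how the ZooKeeper server negotiates session timeouts: the client-requested value is clamped to the window [2 * tickTime, 20 * tickTime]. A small standalone sketch of that clamping (no ZooKeeper client involved; function name is my own):

```python
def effective_session_timeout(requested_ms: int, tick_ms: int) -> int:
    """ZooKeeper clamps a client's requested session timeout to the server's
    [2 * tickTime, 20 * tickTime] window, so raising tickTime from 3 s to
    5 s raises the minimum negotiable timeout from 6 s to 10 s."""
    return max(2 * tick_ms, min(requested_ms, 20 * tick_ms))

# with tickTime = 3000 ms, a 4 s request is bumped up to the 6 s minimum
print(effective_session_timeout(4000, 3000))  # -> 6000
```

This is why raising tickTime alone lengthens the minimum session timeout, at the cost of slower detection of genuinely dead clients.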
Not sure why the re-registration fails. Are you using ZK 3.3.4 or above?
It seems that your consumer still GCs, which is the root cause, so you will
need to tune the GC settings further. Another way to avoid ZK session
expiration is to increase the session timeout config.
Thanks,
Jun
On Wed, Mar 27
Now I am using these GC settings:
-server -Xms1536m -Xmx1536m -XX:NewSize=128m -XX:MaxNewSize=128m
-XX:+UseConcMarkSweepGC -XX:+UseParNewGC
-XX:CMSInitiatingOccupancyFraction=70
But it still happened. It seems the Kafka server reconnected to ZK, but the
old node was still there, so the Kafka server stopped.
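The race described above can be sketched without a real ZooKeeper: an ephemeral node owned by the expired session lingers until the server's cleanup runs, so an immediate re-registration collides with it. A toy model (`FakeZk` is hypothetical, not any ZooKeeper client API):

```python
class FakeZk:
    """Toy stand-in for ZooKeeper's ephemeral-node bookkeeping."""
    def __init__(self):
        self.nodes = {}  # path -> owning session id

    def create_ephemeral(self, path, session):
        if path in self.nodes:
            raise RuntimeError(
                f"A broker is already registered on the path {path}")
        self.nodes[path] = session

    def cleanup_expired(self, session):
        # server side: delete the expired session's ephemeral nodes
        self.nodes = {p: s for p, s in self.nodes.items() if s != session}

zk = FakeZk()
zk.create_ephemeral("/brokers/ids/0", session=1)
# session 1 expires; the broker reconnects with session 2 and re-registers
# BEFORE the server has cleaned up the old ephemeral node:
try:
    zk.create_ephemeral("/brokers/ids/0", session=2)
except RuntimeError as e:
    print(e)  # same symptom as the RuntimeException in the broker log
zk.cleanup_expired(1)
zk.create_ephemeral("/brokers/ids/0", session=2)  # succeeds after cleanup
```

The fix directions discussed in the thread map onto this model: shorter GC pauses make expiry rarer, and a retry after the old node disappears avoids the collision.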
The kafka-server-start.sh script doesn't have the mentioned GC settings
and heap size configured. However, adding them is probably a good idea.
Thanks,
Neha
On Tue, Mar 26, 2013 at 9:47 AM, Yonghui Zhao wrote:
> The kafka server is started by bin/kafka-server-start.sh, with no GC
> settings.
The kafka server is started by bin/kafka-server-start.sh, with no GC
settings.
On 2013-3-26 at 11:40 PM, Neha Narkhede wrote:
Did you have a GC pause around that time on the server? What are your
server's current GC settings?
Thanks,
Neha
On Mon, Mar 25, 2013 at 8:48 PM, Yonghui Zhao wrote:
Thanks Neha. By the way, have you seen this exception? We didn't restart
any service; it happens in the middle of the night.

java.lang.RuntimeException: A broker is already registered on the path
/brokers/ids/0. This probably indicates that you either have configured a
brokerid that is already in use, or else you have…
That really depends on your consumer application's memory allocation
patterns. If it is a thin wrapper over a Kafka consumer, I would imagine
you can get away with using CMS for the tenured generation and parallel
collector for the new generation with a small heap like 1gb or so.
Thanks,
Neha
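Neha's sizing advice above (CMS for the tenured generation, ParNew for the young generation, a heap around 1 GB) could be wired in as JVM flags. A sketch only, with illustrative values; it assumes your Kafka version's start script forwards KAFKA_OPTS to the JVM, so check kafka-run-class.sh for the exact variable your release honors:

```shell
# Hypothetical launch wrapper; flag values are illustrative, not a recommendation.
export KAFKA_OPTS="-server -Xms1g -Xmx1g \
  -XX:NewSize=128m -XX:MaxNewSize=128m \
  -XX:+UseParNewGC -XX:+UseConcMarkSweepGC \
  -XX:CMSInitiatingOccupancyFraction=70"
bin/kafka-server-start.sh config/server.properties
```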
Any suggestions on the consumer side?
On 2013-3-25 at 9:49 PM, Neha Narkhede wrote:
For Kafka 0.7 in production at LinkedIn, we use a heap of size 3G, new gen
256 MB, and the CMS collector with an occupancy threshold of 70%.
Thanks,
Neha
On Sunday, March 24, 2013, Yonghui Zhao wrote:
> Hi Jun,
>
> I used kafka-server-start.sh to start Kafka; there is only one JVM
> setting, "-Xmx512M".
>
> Do you hav…
    at kafka.network.BoundedByteBufferReceive.readCompletely(BoundedByteBufferReceive.scala:29)
    at kafka.consumer.SimpleConsumer.getResponse(SimpleConsumer.scala:177)
    at kafka.consumer.SimpleConsumer.liftedTree2$1(SimpleConsumer.scala:117)
    at kafka.consumer.SimpleConsumer.multifetch(SimpleConsumer.scala:115)
    at kafka.consumer.FetcherRunnable.run(FetcherRunnable.scala:60)

2013/03/21 12:07:18.176 INFO [Simpl…
…in multifetch due to socket error:
java.nio.channels.ClosedByInterruptException
    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:201)
    at sun.nio.ch.SocketChannelImpl.read(So…
    at kafka.network.BoundedByteBufferReceive.readFrom(BoundedByteBufferReceive.scala:67)
    at kafka.network.Receive$class.readCompletely(Transmission.scala:5…)

    at kafka.server.MessageSetSend.writeTo(MessageSetSend.scala:51)
    at kafka.network.MultiSend.writeTo(Transmission.scala:91)
    at kafka.network.Processor.write(SocketServer.scala:339)
    at kafka.network.Processor…

    at kafka.consumer.SimpleConsumer.getResponse(SimpleConsumer.scala:177)
    at kafka.consumer.SimpleConsumer.liftedTree2$1(SimpleConsumer.scala:117)
    at kafka.consumer.SimpleConsumer.multifetch(SimpleConsumer.scala:115)
    at kafka.consumer.FetcherRunnable.run(FetcherRunnable.scala:60)

*Exceptions in kafka server:*

[2013-03-21 12:07:18,128] ERROR Closing socket for /127.0.0.1 because of
error (kafka.network.Processor)
java.io.IOException: Connection reset by peer
    at sun.nio.ch.FileChannelImpl.transferTo0(Native Method)
    at sun.nio.ch.FileChannelImpl.transferToDirectly(FileChannelImpl.java:456)
    at sun.nio.ch.FileChannelImpl.transferTo(FileChannelImpl.java:557)
    at kafka.message.FileMessageSet…
Connection reset exception reproduced.

[2013-03-19 16:30:45,814] INFO Closing socket connection to /127.0.0.1.
(kafka.network.Processor)
[2013-03-19 16:30:55,253] ERROR Closing socket for /127.0.0.1 because of
error (kafka.network.Processor)
java.io.IOException: Connection reset by peer
    at …
> …to the Kafka server. After sending 100 million messages, this exception
> happened.
>
> In the producer:
>
> Exception in thread "main" java.io.IOException: Connection reset by peer
>     at sun.nio.ch.FileDispatcher.writev0(Native Method)
>     …
>     at kafka.producer.Producer.zkSend(Producer.scala:137)
>     at kafka.producer.Producer.send(Producer.scala:99)
>     at kafka.javaapi.producer.Producer.send(Producer.scala:103)
>
> In the Kafka server:
>
> [2013-03-16 06:59:49,491] ERROR Closing socket for /10.2.201.201 because…