Too many open files in kafka 0.9
On Wed, Nov 29, 2017 at 8:55 AM, REYMOND Jean-max (BPCE-IT - SYNCHRONE
TECHNOLOGIES
difference with the other brokers. So, is it safe to remove these
directories __consumer_offsets-XX if they have not been accessed for one day?
-Original Message-
From: Ted Yu [mailto:yuzhih...@gmail.com]
Sent: Wednesday, November 29, 2017 19:41
To: users@kafka.apache.org
Subject: Re: Too many open files
There is KAFKA-3317 which is still open.
Have you seen this ?
http://search-hadoop.com/m/Kafka/uyzND1KvOlt1p5UcE?subj=Re+Brokers+is+down+by+java+io+IOException+Too+many+open+files+
On Wed, Nov 29, 2017 at 8:55 AM, REYMOND Jean-max (BPCE-IT - SYNCHRONE
TECHNOLOGIES) <jean-max.reymond.prest
We have a cluster with 3 brokers and kafka 0.9.0.1. One week ago, we decided to
adjust log.retention.hours from 10 days to 2 days. We stopped and restarted the
cluster and it was OK. But on one broker, we see more and more data every day, and
two days later it crashed with the message "too many open files". lsof
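lsof is the right instinct here. A quick sketch (my addition, not from the thread) for seeing what a broker process is actually holding open; the PID below uses the current shell as a stand-in, so substitute the real broker PID:

```shell
# Count the file descriptors a process holds open. PID=$$ is a stand-in
# for illustration; point it at the broker's actual PID in practice.
PID=$$
echo "open fds: $(ls /proc/$PID/fd | wc -l)"
# With lsof installed, break the handles down by type (REG, IPv4, sock, ...):
# lsof -p "$PID" | awk '{print $5}' | sort | uniq -c | sort -rn
```

Comparing the regular-file count against the socket count usually tells you quickly whether log segments or client connections are eating the limit.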
I’ve seen OS network settings help mitigate some of the “Too many open files”
issues as well.
Try changing the following items on the OS so that used network connections
close as quickly as possible, keeping file handle use down:
sysctl -w
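The sysctl command above is truncated in the archive, so the exact settings the poster meant are lost. The values usually suggested for this purpose are shown below as illustrative assumptions, not the poster's actual configuration:

```shell
# Illustrative TCP tuning so finished connections release their handles
# sooner. These values are assumptions (the original message is truncated).
# Requires root; add the same keys to /etc/sysctl.conf to persist them.
sysctl -w net.ipv4.tcp_fin_timeout=30      # shorten FIN_WAIT_2 lifetime
sysctl -w net.ipv4.tcp_tw_reuse=1          # reuse TIME_WAIT sockets for new outbound connections
sysctl -w net.ipv4.tcp_keepalive_time=600  # probe idle connections after 10 minutes
```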
You need to up your OS open file limits, something like this should work:
# /etc/security/limits.conf
* - nofile 65536
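One caveat worth adding (my note, not from the thread): limits.conf only applies to new login sessions, so a broker started before the change, or launched by an init system that bypasses PAM limits, can still be running with the old value. Verify against the live process rather than the config file; the pgrep pattern is an assumption, adjust it for your setup:

```shell
# Check the nofile limit the running broker actually has, not what
# limits.conf says. The pgrep pattern "kafka.Kafka" is an assumption;
# fall back to the current shell so the sketch runs anywhere.
PID=$(pgrep -f kafka.Kafka | head -n1)
[ -n "$PID" ] && [ -r "/proc/$PID/limits" ] || PID=$$
grep 'Max open files' "/proc/$PID/limits"
ulimit -n   # limit of the current session, for comparison
```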
On Fri, May 12, 2017 at 6:34 PM, Yang Cui <y...@freewheel.tv> wrote:
Our Kafka cluster has been brought down by “java.io.IOException: Too many
open files” three times in 3 weeks.
We encountered this problem on both the 0.9.0.1 and 0.10.2.1 versions.
The error looks like:
[2016-09-12 09:34:49,522] ERROR Error while accepting connection (kafka.network.Acceptor)
java.io.IOException: Too many open files
    at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method)
    at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:422)
5-node Kafka cluster, bare metal, Ubuntu 14.04.x LTS with 64GB RAM, 8-core,
960GB SSD boxes, and a single node in the cluster is filling its logs with the following:
[2016-09-12 09:34:49,522] ERROR Error while accepting connection
(kafka.network.Acceptor)
java.io.IOException: Too many open files
On Jul 31, 2016 at 4:14 PM, Kessiler Rodrigues <kessi...@callinize.com>
wrote:
> I’m still experiencing this issue…
>
> Here are the kafka logs.
>
> [2016-07-31 20:10:35,658] ERROR Error while accepting connection
> (kafka.network.Acceptor)
> java.io.IOException: Too many open files
>     at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method)
>     at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:422)
>> give you a place to start figuring out what's open and why.
>>
>> -Steve
>>
>>> On Jul 31, 2016, at 4:14 PM, Kessiler Rodrigues <kessi...@callinize.com>
>> wrote:
>>>
>>> I’m still experiencing this issue…
maybe 4000 per broker and most
> clusters have a lot fewer, so consider adding brokers and spreading
> partitions around a bit.
>
> Gwen
>
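Gwen's per-broker partition guidance can be sanity-checked with rough arithmetic. Using this thread's figures (6k topics, 5 partitions, 6 brokers) and some assumed per-partition numbers (my assumptions, not from the thread), the handle count dwarfs a default 1024 ulimit before a single client even connects:

```shell
# Back-of-envelope fd estimate per broker. Segment count and files per
# segment are illustrative assumptions; the real numbers depend on
# retention, segment.bytes, and the broker version.
TOPICS=6000
PARTS_PER_TOPIC=5
BROKERS=6
SEGS_PER_PART=3   # assumed; grows with retention
FILES_PER_SEG=2   # .log + .index (newer brokers add .timeindex)
PARTS_PER_BROKER=$(( TOPICS * PARTS_PER_TOPIC / BROKERS ))
EST_FDS=$(( PARTS_PER_BROKER * SEGS_PER_PART * FILES_PER_SEG ))
echo "$PARTS_PER_BROKER partitions/broker -> ~$EST_FDS log-segment fds (before sockets)"
```

Under these assumptions that is 5000 partitions and roughly 30000 segment file handles per broker, which is why spreading partitions over more brokers helps as much as raising ulimits.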
> On Fri, Jul 29, 2016 at 12:00 PM, Kessiler Rodrigues
> <kessi...@callinize.com> wrote:
Maybe you are exhausting your sockets, not file handles for some reason?
Hi guys,
I have been experiencing some issues on kafka, where it’s throwing too many open
files.
I have around 6k topics with 5 partitions each.
My cluster has 6 brokers, all running Ubuntu 16, and the file limit settings are:
`cat /proc/sys/fs/file-max`
200
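Worth noting (my addition): fs.file-max is the system-wide ceiling, while the "Too many open files" error almost always comes from the per-process nofile limit, which is usually far lower. It pays to check both:

```shell
# System-wide ceiling on open files across all processes:
cat /proc/sys/fs/file-max
# Per-process limit for the current session -- this is what a broker
# typically hits first, and what limits.conf's nofile entry raises:
ulimit -n
# Current system-wide usage (allocated, free, max):
cat /proc/sys/fs/file-nr
```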
Hi all,
We are testing our production Kafka and getting this error:
[2015-01-15 19:03:45,057] ERROR Error in acceptor (kafka.network.Acceptor)
java.io.IOException: Too many open files
    at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method)
/sysctl.conf
Gwen
On Thu, Jan 15, 2015 at 12:30 PM, Sa Li sal...@gmail.com wrote:
(kafka.network.Acceptor)
java.io.IOException: Too many open files
    at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method)
    at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:241)
    at kafka.network.Acceptor.accept(SocketServer.scala:200)
On 7/8/14, 9:29 PM, Jun Rao jun...@gmail.com wrote:
Does your test program run as the same user as Kafka broker?
Thanks,
Jun
On Tue, Jul 8, 2014 at 1:42 PM, Lung, Paul pl...@ebay.com wrote:
Hi
(FdTest.java:108)
But all I have to do is sleep for a bit on the client, and then retry again.
However, 4K does seem like a magic number, since that seems to be the number
that the Kafka broker machine can handle before it gives me the “Too Many Open
Files” error and eventually crashes.
Paul Lung
java.io.IOException: Too many open files
    at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method)
    at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:163)
    at kafka.network.Acceptor.accept(SocketServer.scala:200)
    at kafka.network.Acceptor.run(SocketServer.scala:154)
    at java.lang.Thread.run(Thread.java:679)
[2014-06-24 21:43:44,711] ERROR Error in acceptor (kafka.network.Acceptor)
java.io.IOException: Too many open files
    at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method)
    at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:163)
Hi All,
I just upgraded my cluster from 0.8.1 to 0.8.1.1. I’m seeing the following
error messages on the same 3 brokers once in a while:
[2014-06-24 21:43:44,711] ERROR Error in acceptor (kafka.network.Acceptor)
java.io.IOException: Too many open files
    at kafka.server.AbstractFetcherThread.doWork(Unknown Source)
    at kafka.utils.ShutdownableThread.run(Unknown Source)
Caused by: java.io.FileNotFoundException: /disk1/kafka-logs/perf1-4/00010558.index (Too many open files)
    at java.io.RandomAccessFile.open(Native Method)
Subject: Re: Too many open files
If a client is gone, the broker should automatically close those broken
sockets. Are you using a hardware load balancer?
Thanks,
Jun
On Wed, Sep 25, 2013 at 4:48 PM, Mark static.void@gmail.com wrote:
FYI if I kill all producers I don't see the number
in the same datacenter, we didn't see this issue; the socket count matches on
both ends.
Nicolas Berthet
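Counting sockets on both ends, as Nicolas describes, can be done without extra tooling by reading /proc directly (a sketch I'm adding; the broker port 9092 in the commented ss variant is an assumption):

```shell
# Count ESTABLISHED TCP connections (state 01 in /proc/net/tcp).
# Run on both the broker and the client host and compare totals;
# this plain count is unfiltered.
established=$(cat /proc/net/tcp /proc/net/tcp6 2>/dev/null | awk '$4=="01"' | wc -l)
echo "ESTABLISHED sockets: $established"
# With iproute2 installed, a broker-side equivalent filtered by port:
# ss -tan state established 'sport = :9092' | wc -l
```

A large broker-side count with no matching client-side sockets points at half-open connections, which load balancers and cross-datacenter links are known to leave behind.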
-Original Message-
From: Jun Rao [mailto:jun...@gmail.com]
Sent: Thursday, September 26, 2013 12:39 PM
To: users@kafka.apache.org
Subject: Re: Too many open files
Are you also experiencing the issue in a cross-datacenter context?
Best regards,
Nicolas Berthet
-Original Message-
From: Mark [mailto:static.void@gmail.com]
Sent: Friday, September 27, 2013 6:08 AM
To: users@kafka.apache.org
Subject: Re: Too many open files
What OS settings
No. We are using the kafka-rb ruby gem producer.
https://github.com/acrosa/kafka-rb
Now that you asked that question I need to ask. Is there a problem with the
java producer?
Sent from my iPhone
On Sep 24, 2013, at 9:01 PM, Jun Rao jun...@gmail.com wrote:
Are you using the java producer client?
We haven't seen any socket leaks with the java producer. If you have lots
of unexplained socket connections in established mode, one possible cause
is that the client created new producer instances, but didn't close the old
ones.
Thanks,
Jun
On Wed, Sep 25, 2013 at 6:08 AM, Mark
Any other ideas?
On Sep 25, 2013, at 9:06 AM, Jun Rao jun...@gmail.com wrote:
FYI if I kill all producers I don't see the number of open files drop. I still
see all the ESTABLISHED connections.
Is there a broker setting to automatically kill any inactive TCP connections?
On Sep 25, 2013, at 4:30 PM, Mark static.void@gmail.com wrote:
Any other ideas?
On Sep 25,
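On Mark's question about automatically closing inactive connections, a note from outside the thread: brokers of this era had no such setting as far as I know, but later Kafka versions added connections.max.idle.ms, which makes the broker close connections idle beyond the timeout. A server.properties sketch, hedged as an assumption about the broker version in use:

```shell
# server.properties fragment -- later Kafka broker versions only, not 0.7.x.
# The broker closes connections that have been idle longer than this (ms):
# connections.max.idle.ms=600000
```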
Our 0.7.2 Kafka cluster keeps crashing with:
2013-09-24 17:21:47,513 - [kafka-acceptor:Acceptor@153] - Error in acceptor
java.io.IOException: Too many open files
The obvious fix is to bump up the number of open files, but I'm wondering if
there is a leak on the Kafka side and/or our
Are you using the java producer client?
Thanks,
Jun
On Tue, Sep 24, 2013 at 5:33 PM, Mark static.void@gmail.com wrote:
connections for, and what state are those connections in?
Thanks,
Jun
On Thu, Aug 1, 2013 at 9:04 AM, Nandigam, Sujitha snandi...@verisign.com
wrote:
Hi,
In the producer I was continuously getting this exception: java.net.SocketException:
Too many open files,
even though I added the below line to /etc/security/limits.conf:
kafka-0.8.0-beta1-src-nofile983040
ERROR Producer connection to localhost:9093 unsuccessful
(kafka.producer.SyncProducer)
java.net.SocketException: Too many open files
Please help me resolve this.