[
https://issues.apache.org/jira/browse/KAFKA-2580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14963998#comment-14963998
]
Grant Henke commented on KAFKA-2580:
------------------------------------
If we decide not to implement this and recommend setting a high FD limit, how
gracefully does Kafka handle hitting that limit today? Has anyone seen this
happen in a production environment? If data is spread evenly across the
cluster, I would suspect many brokers would hit the limit at around the same
time.
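For reference, one way to see how close a broker process is getting to the
limit is to poll the JVM's file descriptor counters. A minimal sketch, assuming
HotSpot/OpenJDK on Unix (the FdUsageCheck class name is illustrative, not part
of Kafka):

{code:java}
import java.lang.management.ManagementFactory;
import com.sun.management.UnixOperatingSystemMXBean;

public class FdUsageCheck {
    public static void main(String[] args) {
        // On HotSpot/OpenJDK on Unix, the OS MXBean exposes FD counts
        // for the current process.
        UnixOperatingSystemMXBean os = (UnixOperatingSystemMXBean)
                ManagementFactory.getOperatingSystemMXBean();
        long open = os.getOpenFileDescriptorCount();
        long max = os.getMaxFileDescriptorCount();
        double used = (double) open / max;
        System.out.printf("open=%d max=%d used=%.1f%%%n", open, max, used * 100);
        // A broker nearing its ulimit is about to start failing segment opens.
        if (used > 0.9) {
            System.err.println("WARNING: FD usage above 90% of the limit");
        }
    }
}
{code}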
> Kafka Broker keeps file handles open for all log files (even if it's not
> written to/read from)
> ---------------------------------------------------------------------------------------------
>
> Key: KAFKA-2580
> URL: https://issues.apache.org/jira/browse/KAFKA-2580
> Project: Kafka
> Issue Type: Bug
> Components: core
> Affects Versions: 0.8.2.1
> Reporter: Vinoth Chandar
> Assignee: Grant Henke
>
> We noticed this in one of our clusters where we retain logs for a longer
> period of time. It appears that the Kafka broker keeps file handles open
> even for non-active files (those not being written to or read from). (In
> fact, there are some threads going back to 2013:
> http://grokbase.com/t/kafka/users/132p65qwcn/keeping-logs-forever)
> Needless to say, this is a problem and forces us to either artificially bump
> up the ulimit (it's already at 100K) or expand the cluster (even if we have
> sufficient IO otherwise).
> Filing this ticket since I could not find anything similar. Very interested
> to know if there are plans to address this (given that Samza's changelog
> topic is meant to be a persistent, large-state use case).
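To illustrate the kind of fix the ticket is asking about: cap the number of
concurrently open segment handles and open the rest lazily, evicting the least
recently used. A hypothetical sketch (SegmentHandleCache and its API are
invented for illustration; this is not Kafka's actual implementation):

{code:java}
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.util.LinkedHashMap;
import java.util.Map;

// Keeps at most maxOpen segment FileChannels open; the least recently
// used channel is closed when the cap is exceeded, freeing its FD.
public class SegmentHandleCache {
    private final int maxOpen;
    private final LinkedHashMap<Path, FileChannel> channels;

    public SegmentHandleCache(int maxOpen) {
        this.maxOpen = maxOpen;
        // access-order LinkedHashMap gives LRU eviction via removeEldestEntry
        this.channels = new LinkedHashMap<Path, FileChannel>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<Path, FileChannel> eldest) {
                if (size() > SegmentHandleCache.this.maxOpen) {
                    try {
                        // Release the FD for the coldest segment. A real
                        // implementation would have to coordinate with
                        // in-flight reads before closing.
                        eldest.getValue().close();
                    } catch (IOException ignored) {
                    }
                    return true;
                }
                return false;
            }
        };
    }

    // Open (or reuse) the channel for a segment file on demand.
    public synchronized FileChannel channelFor(Path segment) throws IOException {
        FileChannel ch = channels.get(segment);
        if (ch == null || !ch.isOpen()) {
            ch = FileChannel.open(segment,
                    StandardOpenOption.READ, StandardOpenOption.WRITE);
            channels.put(segment, ch);
        }
        return ch;
    }
}
{code}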
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)