Thanks for correcting me, Tom. I got confused by the warn log message.
On Tue, Jul 19, 2016 at 5:45 PM, Tom Crayford wrote:
Manikumar,
How will that help? Increasing the number of log cleaner threads will lead
to *less* memory for the buffer per thread, as it's divided up among
available threads.
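A quick back-of-the-envelope illustration of that division (134217728, i.e. 128 MiB, is Kafka's default log.cleaner.dedupe.buffer.size; the thread count here is just an example):

```python
# The dedupe buffer is a single pool shared by all cleaner threads:
# each thread gets total / threads, so more threads means less per thread.
dedupe_buffer_total = 134217728  # default log.cleaner.dedupe.buffer.size (128 MiB)
cleaner_threads = 4              # illustrative value, not a recommendation

per_thread = dedupe_buffer_total // cleaner_threads
print(per_thread)  # 128 MiB / 4 = 32 MiB per thread
```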
Lawrence, I'm reasonably sure you're hitting KAFKA-3587 here, and should
upgrade to 0.10 ASAP. As far as I'm aware Kafka do
Try increasing log cleaner threads.
On Tue, Jul 19, 2016 at 1:40 AM, Lawrence Weikum wrote:
It seems that the log-cleaner is still failing no matter what settings I give
it.
Here is the full output from one of our brokers:
[2016-07-18 13:00:40,726] ERROR [kafka-log-cleaner-thread-0], Error due to
(kafka.log.LogCleaner)
java.lang.IllegalArgumentException: requirement failed: 19205321
Hi,
You're running into the issue in
https://issues.apache.org/jira/plugins/servlet/mobile#issue/KAFKA-3894 and
possibly
https://issues.apache.org/jira/plugins/servlet/mobile#issue/KAFKA-3587
(which is fixed in 0.10). Sadly right now there's no way to know how high a
dedupe buffer size you need -
We ran into this as well, and I ended up with the following that works for us.
log.cleaner.dedupe.buffer.size=536870912
log.cleaner.io.buffer.size=2000
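For rough intuition on what a 536870912-byte (512 MiB) dedupe buffer buys you: Kafka's offset map stores a 16-byte MD5 hash plus an 8-byte offset per entry, and fills the buffer up to a load factor (0.9 by default). A minimal sketch of that arithmetic, assuming a single cleaner thread:

```python
# Estimate how many unique keys the dedupe buffer can hold.
# Assumes Kafka's MD5-based offset map (16-byte hash + 8-byte offset = 24
# bytes per entry) and the default log.cleaner.io.buffer.load.factor of 0.9.
ENTRY_BYTES = 16 + 8
LOAD_FACTOR = 0.9

def offset_map_capacity(dedupe_buffer_bytes, cleaner_threads=1):
    per_thread = dedupe_buffer_bytes // cleaner_threads
    return int(per_thread * LOAD_FACTOR) // ENTRY_BYTES

print(offset_map_capacity(536870912))  # roughly 20 million entries
```

If a segment holds more unique-keyed messages than this capacity, the cleaner's requirement check fails with errors like the ones quoted in this thread.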
On 13/07/2016 14:01, "Lawrence Weikum" wrote:
Apologies. Here is the full trace from a broker:
[2016-06-24 09:57:39,881] ERROR [kafka-log-cleaner-thread-0], Error due to
(kafka.log.LogCleaner)
java.lang.IllegalArgumentException: requirement failed: 9730197928 messages in
segment __consumer_offsets-36/.log but offset map
Can you post the complete error stack trace? Yes, you need to restart the affected brokers.
You can tweak log.cleaner.dedupe.buffer.size, log.cleaner.io.buffer.size
configs.
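If you do adjust them, the settings go in each broker's server.properties (a broker restart is required for them to take effect). The values below are the Kafka defaults (128 MiB and 512 KiB respectively), shown only as a starting point:

```properties
# server.properties (per broker; restart required after changes)
log.cleaner.dedupe.buffer.size=134217728
log.cleaner.io.buffer.size=524288
```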
Some related JIRAs:
https://issues.apache.org/jira/browse/KAFKA-3587
https://issues.apache.org/jira/browse/KAFKA-3894
htt
Oh interesting. I didn’t know about that log file until now.
The only error that has been populated among all brokers showing this behavior
is:
ERROR [kafka-log-cleaner-thread-0], Error due to (kafka.log.LogCleaner)
Then we see many messages like this:
INFO Compaction for partition [__consume
Hi,
Are you seeing any errors in log-cleaner.log? The log-cleaner thread can
crash on certain errors.
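Since a crashed cleaner thread simply stops and logs, a quick way to spot it is to scan log-cleaner.log for the thread's ERROR line. A minimal sketch (the marker string matches the error quoted earlier in this thread; the sample lines are illustrative):

```python
# Flag lines indicating the log cleaner thread died.
# Marker taken from the error message quoted in this thread.
FATAL_MARKER = "ERROR [kafka-log-cleaner-thread-"

def find_cleaner_errors(lines):
    return [line for line in lines if FATAL_MARKER in line]

sample = [
    "INFO Compaction for partition [__consumer_offsets,36] complete.",
    "ERROR [kafka-log-cleaner-thread-0], Error due to (kafka.log.LogCleaner)",
]
print(find_cleaner_errors(sample))
```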
Thanks
Manikumar
On Wed, Jul 13, 2016 at 9:54 PM, Lawrence Weikum wrote:
Hello,
We’re seeing a strange behavior in Kafka 0.9.0.1 which occurs about every other
week. I’m curious if others have seen it and know of a solution.
Setup and Scenario:
- Brokers initially setup with log compaction turned off
- After 30 days, log compaction was turned on