I know I'm reviving an old thread but did the original poster ever find the
cause of this issue and figure out what the fix was?
I am running a cluster of 18 Kafka 0.9 brokers, and three of them have been
behaving exactly this way about once a week.
Pretty scary, because they are doing a full resign.
This is somewhat specific to your runtime environment. You can check out
whatever script is used to bring up Kafka, and see where the stderr of the
java command is being redirected (hopefully not to /dev/null!).
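As a minimal sketch of the redirection being described (the paths and the echoed message are made up for illustration, with echo standing in for the java command):

```shell
#!/bin/sh
# Hypothetical startup-script fragment: keep the JVM's stderr in a file
# you can read after a crash, instead of discarding it.
ERR_LOG=/tmp/kafka-stderr.log

# Bad: any crash output vanishes.
#   java ... 2> /dev/null
# Better: redirect stderr somewhere inspectable.
#   java ... 2> "$ERR_LOG"

# Simulated here with echo standing in for the java command:
sh -c 'echo "simulated crash output" 1>&2' 2> "$ERR_LOG"
cat "$ERR_LOG"   # shows the captured stderr
```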
On Thu, Jun 30, 2016 at 5:24 PM allen chan wrote:
Hi Shikhar,
I do not see a stderr log file anywhere. Can you point me to where Kafka
would write such a file?
On Thu, Jun 30, 2016 at 5:10 PM, Shikhar Bhushan wrote:
Perhaps it's a JVM crash? You might not see anything in the standard
application-level logs, you'd need to look for the stderr.
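One concrete thing worth checking (an editorial addition, not from the mail itself): on a hard crash, HotSpot also writes an hs_err_pid<pid>.log file, by default into the process's working directory. A sketch, using a scratch directory to stand in for the broker's working directory:

```shell
#!/bin/sh
# Simulate a JVM crash report in a scratch directory; on a real broker
# you would run the find against Kafka's working directory instead.
dir=$(mktemp -d)
touch "$dir/hs_err_pid12345.log"

# Search for HotSpot crash reports:
find "$dir" -name 'hs_err_pid*.log'
```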
On Thu, Jun 30, 2016 at 5:07 PM allen chan wrote:
Anyone else have ideas?
This is still happening. I moved ZooKeeper off the server to its own
dedicated VMs.
Kafka starts with 4G of heap and heap usage gets nowhere near that much when
it crashes.
I bumped up the ZooKeeper timeout settings, but that has not solved it.
I also disconnected all
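(For anyone following along: the ZooKeeper timeouts being bumped are presumably broker settings along these lines; the values below are illustrative, not the poster's actual ones.)

```properties
# server.properties -- example values only
zookeeper.session.timeout.ms=12000
zookeeper.connection.timeout.ms=12000
```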
What about in dmesg? I have run into this issue and it was the OOM
killer. I also ran into a heap issue using too much of the direct memory
(JVM). Reducing the fetcher threads helped with that problem.
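A sketch of the check being suggested; the sample log line is fabricated to keep it self-contained, and the commented-out commands are what you would actually run on the broker host:

```shell
#!/bin/sh
# Fabricated example of what an OOM-killer entry looks like:
sample="Out of memory: Kill process 4321 (java) score 912 or sacrifice child"

# The grep you would aim at the real logs:
echo "$sample" | grep -i 'kill process'

# On the actual host (CentOS keeps these in /var/log/messages):
#   dmesg | grep -i 'kill process'
#   grep -i 'out of memory' /var/log/messages
```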
On Jun 2, 2016 12:19 PM, "allen chan" wrote:
Hi Tom,
That is one of the first things that I checked. Active memory never goes
above 50% of overall available. File cache uses the rest of the memory, but
I do not think that triggers the OOM killer.
Either way, there are no entries in /var/log/messages (CentOS) to show OOM is
happening.
Thanks
That looks like somebody is killing the process. I'd suspect either the
Linux OOM killer or something else automatically killing the JVM for some
reason.
For the OOM killer, assuming you're on Ubuntu, it's pretty easy to find in
/var/log/syslog (depending on your setup). I don't know about other
I have an issue where my brokers randomly shut themselves down.
I turned on DEBUG in log4j.properties but I still do not see a reason why the
shutdown is happening.
Anyone seen this behavior before?
version 0.10.0
log4j.properties
log4j.rootLogger=DEBUG, kafkaAppender
* I tried TRACE level
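For context, the appender referenced above is defined along these lines in Kafka's stock config/log4j.properties (the file path and patterns here are illustrative):

```properties
log4j.rootLogger=DEBUG, kafkaAppender

log4j.appender.kafkaAppender=org.apache.log4j.DailyRollingFileAppender
log4j.appender.kafkaAppender.DatePattern='.'yyyy-MM-dd-HH
log4j.appender.kafkaAppender.File=/var/log/kafka/server.log
log4j.appender.kafkaAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.kafkaAppender.layout.ConversionPattern=[%d] %p %m (%c)%n
```

Note that no log4j level, DEBUG or TRACE, can record a kill coming from outside the JVM (e.g. a SIGKILL from the OOM killer): the process is terminated before it gets a chance to log anything.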