Re: Out of memory - Java Heap space

2016-04-27 Thread Spico Florin
Hi!
  You can set up your Kafka process to dump its heap in case of an OOM by
providing these flags (see
https://docs.oracle.com/javase/8/docs/technotes/guides/troubleshoot/clopts001.html
):

   - -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=path

     This pair can be used to specify a location for the heap dump; see The
     -XX:+HeapDumpOnOutOfMemoryError Option
     <https://docs.oracle.com/javase/8/docs/technotes/guides/troubleshoot/clopts001.html#CHDFDIJI>.

   - -XX:MaxPermSize=size

     This option can be used to specify the size of the permanent generation
     memory; see Exception in thread thread_name: java.lang.OutOfMemoryError:
     GC Overhead limit exceeded
     <https://docs.oracle.com/javase/8/docs/technotes/guides/troubleshoot/memleaks002.html#tahiti1150092>.
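In a broker start script, those flags can be passed like this (a minimal
sketch: the dump path is illustrative, and KAFKA_OPTS is the pass-through
variable read by the stock Kafka start scripts):

```shell
# Illustrative: dump the heap to a known location when the broker hits an OOM.
export KAFKA_OPTS="-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/log/kafka/broker.hprof"
bin/kafka-server-start.sh config/server.properties
```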


 What Java version are you using?
I suggest working with JDK 8 and the G1 garbage collector; I've heard some
Kafka engineers promote this advice. (Note that on JDK 8 the permanent
generation is gone, so -XX:MaxPermSize no longer applies there.)
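A minimal sketch of that setup, assuming the stock start scripts (which read
KAFKA_HEAP_OPTS and KAFKA_JVM_PERFORMANCE_OPTS); the heap sizes and G1 tuning
values below are illustrative, not taken from this thread:

```shell
# Illustrative: run the broker on JDK 8 with the G1 collector.
export KAFKA_HEAP_OPTS="-Xms4g -Xmx4g"
export KAFKA_JVM_PERFORMANCE_OPTS="-XX:+UseG1GC -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35"
bin/kafka-server-start.sh config/server.properties
```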

I hope this helps.
  Regards,
  Florin



On Wed, Apr 27, 2016 at 12:02 PM, Jaikiran Pai <jai.forums2...@gmail.com>
wrote:

> Have you tried getting the memory usage output using a tool like jmap and
> seeing what's consuming the memory? Also, what are your heap sizes for the
> process?
>
> -Jaikiran
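The jmap inspection suggested above might look like the following (the
jps-based PID lookup is an assumption about the environment; kafka.Kafka is
the broker's main class):

```shell
# Hypothetical sketch: locate the broker JVM and inspect its heap (JDK tools).
BROKER_PID=$(jps -l | awk '/kafka\.Kafka/ {print $1; exit}')
jmap -histo:live "$BROKER_PID" | head -n 25                    # top classes by instance count and bytes
jmap -dump:live,format=b,file=/tmp/broker.hprof "$BROKER_PID"  # full dump for MAT or VisualVM
```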
>
>
> On Tuesday 19 April 2016 02:31 AM, McKoy, Nick wrote:
>
>> To follow up on my last email, I have been looking into
>> socket.receive.buffer.bytes as well as socket.send.buffer.bytes. Would
>> increasing these buffers help with the OOM issue?
>>
>> All help is appreciated!
>>
>> Thanks!
>>
>>
>> -nick
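For reference, the two settings asked about above, as a hypothetical
server.properties fragment (the values shown are the documented broker
defaults). Note that these size OS-level socket buffers (SO_SNDBUF/SO_RCVBUF),
not JVM heap buffers, so raising them is unlikely to cure a Java heap OOM:

```properties
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
```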
>>
>>
>> From: "McKoy, Nick" <nicholas.mc...@washpost.com>
>> Date: Monday, April 18, 2016 at 3:41 PM
>> To: "users@kafka.apache.org" <users@kafka.apache.org>
>> Subject: Out of memory - Java Heap space
>>
>> Hey all,
>>
>> I have a Kafka cluster of 5 nodes that’s working really hard; CPU is
>> around 40% idle daily.
>>
>> I looked at the file descriptor note on this documentation page
>> http://docs.confluent.io/1.0/kafka/deployment.html#file-descriptors-and-mmap
>> and decided to give it a shot on one instance in the cluster just to see
>> how it performed. I increased this number to 1048576.
>>
>> I kept getting this error from the kafka logs:
>> ERROR [ReplicaFetcherThread--1-6], Error due to
>> (kafka.server.ReplicaFetcherThread) java.lang.OutOfMemoryError: Java heap
>> space
>>
>> I increased the heap to see if that would help, but I kept seeing these
>> errors. Could the file descriptor change be related to this?
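Since the OOM is raised by a ReplicaFetcherThread, one back-of-envelope check
worth doing (all numbers below are illustrative assumptions, not from the
thread): each fetcher can buffer up to replica.fetch.max.bytes per partition
it follows, so heap demand grows with partition count.

```shell
# Illustrative arithmetic only: worst-case replica-fetch buffer memory.
partitions=2000                     # assumed partitions followed by this broker
fetch_max_bytes=$((1024 * 1024))    # replica.fetch.max.bytes default (1 MiB)
total=$((partitions * fetch_max_bytes))
echo "${total} bytes"               # 2097152000 bytes, i.e. roughly 2 GiB of heap
```

If that figure approaches the configured -Xmx, the heap OOM is unsurprising
regardless of the file descriptor change.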
>>
>>
>>
>> —
>> Nicholas McKoy
>> Engineering – Big Data and Personalization
>> Washington Post Media
>>
>> One Franklin Square, Washington, DC 20001
>> Email: nicholas.mc...@washpost.com
>>
>>
>

