[jira] [Commented] (KAFKA-493) High CPU usage on inactive server

2016-05-19 Thread Cosmin Marginean (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15292129#comment-15292129
 ] 

Cosmin Marginean commented on KAFKA-493:


[~manoj.ramakrish...@plantronics.com] We are using the Oracle JVM 1.8 (1.8.0_65 
to be specific) in production.
However, after the 0.9.0.1 upgrade this didn't reproduce at all (I just checked 
the infra monitoring this evening and it has been fairly flat ever since).
Hope this helps.

> High CPU usage on inactive server
> -
>
> Key: KAFKA-493
> URL: https://issues.apache.org/jira/browse/KAFKA-493
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.8.0
>Reporter: Jay Kreps
> Attachments: 0.9.0.1-upgrade.png, Kafka-2014-11-10.snapshot.zip, 
> Kafka-sampling1.zip, Kafka-sampling2.zip, Kafka-sampling3.zip, 
> Kafka-trace1.zip, Kafka-trace2.zip, Kafka-trace3.zip, backtraces.txt, 
> stacktrace.txt
>
>
> > I've been playing with the 0.8 branch of Kafka and noticed that idle CPU 
> > usage is fairly high (13% of a core). Is that to be expected? I did look 
> > at the stack, but didn't see anything obvious. A background task?
> > I wanted to mention how I am getting into this state. I've set up two 
> > machines with the latest 0.8 code base and am using a replication factor 
> > of 2. On starting the brokers there is no idle CPU activity. Then I run a 
> > test that essentially does 10k publish operations followed by immediate 
> > consume operations (I was measuring latency). Once this has run, the 
> > Kafka nodes seem to consistently be consuming CPU essentially forever.
> hprof results:
> THREAD START (obj=53ae, id = 24, name="RMI TCP Accept-0", 
> group="system")
> THREAD START (obj=53ae, id = 25, name="RMI TCP Accept-", 
> group="system")
> THREAD START (obj=53ae, id = 26, name="RMI TCP Accept-0", 
> group="system")
> THREAD START (obj=53ae, id = 21, name="main", group="main")
> THREAD START (obj=53ae, id = 27, name="Thread-2", group="main")
> THREAD START (obj=53ae, id = 28, name="Thread-3", group="main")
> THREAD START (obj=53ae, id = 29, name="kafka-processor-9092-0", 
> group="main")
> THREAD START (obj=53ae, id = 200010, name="kafka-processor-9092-1", 
> group="main")
> THREAD START (obj=53ae, id = 200011, name="kafka-acceptor", group="main")
> THREAD START (obj=574b, id = 200012, 
> name="ZkClient-EventThread-20-localhost:2181", group="main")
> THREAD START (obj=576e, id = 200014, name="main-SendThread()", 
> group="main")
> THREAD START (obj=576d, id = 200013, name="main-EventThread", 
> group="main")
> THREAD START (obj=53ae, id = 200015, name="metrics-meter-tick-thread-1", 
> group="main")
> THREAD START (obj=53ae, id = 200016, name="metrics-meter-tick-thread-2", 
> group="main")
> THREAD START (obj=53ae, id = 200017, name="request-expiration-task", 
> group="main")
> THREAD START (obj=53ae, id = 200018, name="request-expiration-task", 
> group="main")
> THREAD START (obj=53ae, id = 200019, name="kafka-request-handler-0", 
> group="main")
> THREAD START (obj=53ae, id = 200020, name="kafka-request-handler-1", 
> group="main")
> THREAD START (obj=53ae, id = 200021, name="Thread-6", group="main")
> THREAD START (obj=53ae, id = 200022, name="Thread-7", group="main")
> THREAD START (obj=5899, id = 200023, name="ReplicaFetcherThread-0-2 on 
> broker 1, ", group="main")
> THREAD START (obj=5899, id = 200024, name="ReplicaFetcherThread-0-3 on 
> broker 1, ", group="main")
> THREAD START (obj=5899, id = 200025, name="ReplicaFetcherThread-0-0 on 
> broker 1, ", group="main")
> THREAD START (obj=5899, id = 200026, name="ReplicaFetcherThread-0-1 on 
> broker 1, ", group="main")
> THREAD START (obj=53ae, id = 200028, name="SIGINT handler", 
> group="system")
> THREAD START (obj=53ae, id = 200029, name="Thread-5", group="main")
> THREAD START (obj=574b, id = 200030, name="Thread-1", group="main")
> THREAD START (obj=574b, id = 200031, name="Thread-0", group="main")
> THREAD END (id = 200031)
> THREAD END (id = 200029)
> THREAD END (id = 200020)
> THREAD END (id = 200019)
> THREAD END (id = 28)
> THREAD END (id = 200021)
> THREAD END (id = 27)
> THREAD END (id = 200022)
> THREAD END (id = 200018)
> THREAD END (id = 200017)
> THREAD END (id = 200012)
> THREAD END (id = 200013)
> THREAD END (id = 200014)
> THREAD END (id = 200025)
> THREAD END (id = 200023)
> THREAD END (id = 200026)
> THREAD END (id = 200024)
> THREAD END (id = 200011)
> THREAD END (id = 29)
> THREAD END (id = 200010)
> THREAD END (id = 200030)
> THREAD END (id = 200028)
> TRACE 301281:
> sun.nio.ch.EPollArrayWrapper.epollWait(EPollArrayWrapper.java:Unknown 
> line)
> sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:228)
> sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:81)
> sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
> sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
> sun.nio.ch.SocketAdap
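
For illustration of what a trace like the one above can mean: a NIO select() 
loop that keeps waking up with nothing to do shows exactly this epollWait frame 
while still accumulating CPU on an otherwise idle broker. The sketch below is 
purely hypothetical demo code (not Kafka's SocketServer), assuming only the 
standard java.nio API:

{code}
import java.nio.channels.Selector;

// Hypothetical demo: a select() loop that wakes up constantly.
// With a very short timeout (or frequent wakeup() calls from elsewhere),
// every iteration returns almost immediately with zero ready keys, the
// thread is reported RUNNABLE inside epollWait, and it accumulates CPU
// despite doing no useful work.
public class SpinningSelectorDemo {
    public static void main(String[] args) throws Exception {
        try (Selector selector = Selector.open()) {
            while (!Thread.currentThread().isInterrupted()) {
                // Roughly 1000 epoll_wait syscalls per second per thread;
                // with no channels registered this always returns 0.
                int ready = selector.select(1);
                if (ready == 0) {
                    continue; // nothing ready; loop again immediately
                }
                // ... a real server would drain selector.selectedKeys() here ...
            }
        }
    }
}
{code}

A handful of threads doing this can account for a constant few percent of a 
core, which is consistent with the idle usage reported above.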

[jira] [Updated] (KAFKA-493) High CPU usage on inactive server

2016-03-24 Thread Cosmin Marginean (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-493?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cosmin Marginean updated KAFKA-493:
---
Attachment: 0.9.0.1-upgrade.png


[jira] [Comment Edited] (KAFKA-493) High CPU usage on inactive server

2016-03-24 Thread Cosmin Marginean (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15210092#comment-15210092
 ] 

Cosmin Marginean edited comment on KAFKA-493 at 3/24/16 11:19 AM:
--

I can confirm that we've been running 0.9.0.1 for over a week now and none of 
the CPU issues are present anymore (see 0.9.0.1-upgrade.png). I will make sure 
to update this if it regresses.


was (Author: cosmin.marginean):
I can confirm that we're been running 0.9.0.1 for over a week now and none of 
the CPU issues are present anymore (See 0.9.0.1-upgrade.png. I will make sure 
to update this if it regresses.


[jira] [Commented] (KAFKA-493) High CPU usage on inactive server

2016-03-24 Thread Cosmin Marginean (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15210092#comment-15210092
 ] 

Cosmin Marginean commented on KAFKA-493:


I can confirm that we've been running 0.9.0.1 for over a week now and none of 
the CPU issues are present anymore (see 0.9.0.1-upgrade.png). I will make sure 
to update this if it regresses.


[jira] [Commented] (KAFKA-493) High CPU usage on inactive server

2016-03-14 Thread Cosmin Marginean (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15193245#comment-15193245
 ] 

Cosmin Marginean commented on KAFKA-493:


Thanks Andrew. Will do this in the next few weeks.


[jira] [Commented] (KAFKA-493) High CPU usage on inactive server

2016-02-24 Thread Cosmin Marginean (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15163367#comment-15163367
 ] 

Cosmin Marginean commented on KAFKA-493:


Does anyone have any updates on this, or would you require any additional 
information to diagnose it?
I can confirm that a cluster of 2 Kafka nodes running for the past few weeks 
has been constantly at over 70% CPU utilisation, with a very small number of 
messages (a few thousand a day).
What's worse, the CPU stays very high even when there's no activity on any of 
the topics.

Is there any other JIRA ticket that has followed up on this separately?

FYI, using 0.9.0.0


[jira] [Comment Edited] (KAFKA-2985) Consumer group stuck in rebalancing state

2016-01-19 Thread Cosmin Marginean (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2985?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15106758#comment-15106758
 ] 

Cosmin Marginean edited comment on KAFKA-2985 at 1/19/16 2:05 PM:
--

I can confirm that our processing takes tens of seconds, which is probably why 
we have run into this. Thanks Michal for the further diagnosis.

I read some of the content in these linked discussions. A large number of 
messages is one thing that seems like it will be addressed with 
max.poll.records. However, long processing (of a single message) still seems 
like something that will need to be moved out to a separate thread, which adds 
complexity to the design.


was (Author: cosmin.marginean):
I can confirm that our processing takes tens of seconds so then it's probably 
why we have run into this. Thanks Michal for further diagnosis.

I read some of the content in this linked discussions. Large number of messages 
is one thing that seems it will be addressed with max.poll.records. However 
long processing (of a single messages) still seems like something that will 
need to go out in a separate thread, which adds complexity to the design.

> Consumer group stuck in rebalancing state
> -
>
> Key: KAFKA-2985
> URL: https://issues.apache.org/jira/browse/KAFKA-2985
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.9.0.0
> Environment: Kafka 0.9.0.0.
> Kafka Java consumer 0.9.0.0
> 2 Java producers.
> 3 Java consumers using the new consumer API.
> 2 kafka brokers.
>Reporter: Jens Rantil
>Assignee: Jason Gustafson
>
> We've been doing some load testing on Kafka. _After_ the load test, we have 
> two times now seen Kafka become stuck in consumer group rebalancing. This is 
> after all our consumers are done consuming and are essentially polling 
> periodically without getting any records.
> The brokers list the consumer group (named "default"), but I can't query the 
> offsets:
> {noformat}
> jrantil@queue-0:/srv/kafka/kafka$ ./bin/kafka-consumer-groups.sh 
> --new-consumer --bootstrap-server localhost:9092 --list
> default
> jrantil@queue-0:/srv/kafka/kafka$ ./bin/kafka-consumer-groups.sh 
> --new-consumer --bootstrap-server localhost:9092 --describe --group 
> default|sort
> Consumer group `default` does not exist or is rebalancing.
> {noformat}
> Retrying to query the offsets for 15 minutes or so still said it was 
> rebalancing. After restarting our first broker, the group immediately started 
> rebalancing. That broker was logging this before restart:
> {noformat}
> [2015-12-12 13:09:48,517] INFO [Group Metadata Manager on Broker 0]: Removed 
> 0 expired offsets in 0 milliseconds. (kafka.coordinator.GroupMetadataManager)
> [2015-12-12 13:10:16,139] INFO [GroupCoordinator 0]: Stabilized group default 
> generation 16 (kafka.coordinator.GroupCoordinator)
> [2015-12-12 13:10:16,141] INFO [GroupCoordinator 0]: Assignment received from 
> leader for group default for generation 16 
> (kafka.coordinator.GroupCoordinator)
> [2015-12-12 13:10:16,575] INFO [GroupCoordinator 0]: Preparing to restabilize 
> group default with old generation 16 (kafka.coordinator.GroupCoordinator)
> [2015-12-12 13:11:15,141] INFO [GroupCoordinator 0]: Stabilized group default 
> generation 17 (kafka.coordinator.GroupCoordinator)
> [2015-12-12 13:11:15,143] INFO [GroupCoordinator 0]: Assignment received from 
> leader for group default for generation 17 
> (kafka.coordinator.GroupCoordinator)
> [2015-12-12 13:11:15,314] INFO [GroupCoordinator 0]: Preparing to restabilize 
> group default with old generation 17 (kafka.coordinator.GroupCoordinator)
> [2015-12-12 13:12:14,144] INFO [GroupCoordinator 0]: Stabilized group default 
> generation 18 (kafka.coordinator.GroupCoordinator)
> [2015-12-12 13:12:14,145] INFO [GroupCoordinator 0]: Assignment received from 
> leader for group default for generation 18 
> (kafka.coordinator.GroupCoordinator)
> [2015-12-12 13:12:14,340] INFO [GroupCoordinator 0]: Preparing to restabilize 
> group default with old generation 18 (kafka.coordinator.GroupCoordinator)
> [2015-12-12 13:13:13,146] INFO [GroupCoordinator 0]: Stabilized group default 
> generation 19 (kafka.coordinator.GroupCoordinator)
> [2015-12-12 13:13:13,148] INFO [GroupCoordinator 0]: Assignment received from 
> leader for group default for generation 19 
> (kafka.coordinator.GroupCoordinator)
> [2015-12-12 13:13:13,238] INFO [GroupCoordinator 0]: Preparing to restabilize 
> group default with old generation 19 (kafka.coordinator.GroupCoordinator)
> [2015-12-12 13:14:12,148] INFO [GroupCoordinator 0]: Stabilized group default 
> generation 20 (kafka.coordinator.GroupCoordinator)
> [2015-12-12 13:14:12,149] INFO [GroupCoordinator 0]: Assignment received from 
> leader for group default for generation 20 
> (kafka.coordinator.GroupCoordinator)
> [2015-12-12 13:14:12,360] INFO [GroupCoordinator 0]: Preparing to restabilize 
> group default with old generation 20 (kafka.coordinator.GroupCoordinator)
> [2015-12-12 13:15:11,150] INFO [GroupCoordinator 0]: Stabilized group default 
> generation 21 (kafka.coordinator.GroupCoordinator)
> [2015-12-12 13:15:11,152] INFO [GroupCoordinator 0]: Assignment received from 
> leader for group default for generation 21 
> (kafka.coordinator.GroupCoordinator)
> [2015-12-12 13:15:11,217] INFO [GroupCoordinator 0]: Preparing to restabilize 
> group default with old generation 21 (kafka.coordinator.GroupCoordinator)
> [2015-12-12 13:16:10,152] INFO [GroupCoordinator 0]: Stabilized group default 
> generation 22 (kafka.coordinator.GroupCoordinator)
> [2015-12-1

[jira] [Commented] (KAFKA-2985) Consumer group stuck in rebalancing state

2016-01-19 Thread Cosmin Marginean (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2985?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15106758#comment-15106758
 ] 

Cosmin Marginean commented on KAFKA-2985:
-

I can confirm that our processing takes tens of seconds, which is probably why 
we have run into this. Thanks Michal for the further diagnosis.

I read some of the content in these linked discussions. A large number of 
messages is one thing that seems like it will be addressed with 
max.poll.records. However, long processing (of a single message) still seems 
like something that will need to be moved out to a separate thread, which adds 
complexity to the design.
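
To make the trade-off concrete, here is a minimal, hedged sketch of that 
pattern against the 0.9-era new-consumer API: cap the records returned per 
poll and hand the slow per-record work to a worker thread so the poll loop 
keeps heartbeating. Note that max.poll.records only shipped in 0.10.0.0 
(KIP-41), and the topic name, group id and process() below are hypothetical 
placeholders; offset management and back-pressure are deliberately omitted.

{code}
import java.util.Collections;
import java.util.Properties;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class SlowProcessingConsumer {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "default");
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        // Only honoured from 0.10.0.0 (KIP-41): bounds how many records a
        // single poll() returns, so one batch cannot stall the loop for long.
        props.put("max.poll.records", "10");

        ExecutorService worker = Executors.newSingleThreadExecutor();
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("my-topic"));
            while (!Thread.currentThread().isInterrupted()) {
                ConsumerRecords<String, String> records = consumer.poll(100);
                for (ConsumerRecord<String, String> record : records) {
                    // The tens-of-seconds work runs off the polling thread, so
                    // the 0.9 consumer (which heartbeats only inside poll())
                    // is not evicted from the group mid-processing.
                    worker.submit(() -> process(record));
                }
            }
        } finally {
            worker.shutdown();
        }
    }

    private static void process(ConsumerRecord<String, String> record) {
        // Placeholder for the slow, single-record processing described above.
    }
}
{code}

The catch, as noted, is added design complexity: with auto-commit enabled, 
offsets can be committed before the worker finishes a record, so a real 
implementation would have to commit manually after processing completes.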


[jira] [Commented] (KAFKA-2985) Consumer group stuck in rebalancing state

2016-01-08 Thread Cosmin Marginean (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2985?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15089993#comment-15089993
 ] 

Cosmin Marginean commented on KAFKA-2985:
-

I haven't yet, no, as I'm struggling to build a complete server distro on that 
(and reverting, for now, to the old consumer to solve the immediate issue was a 
priority).



[jira] [Commented] (KAFKA-2985) Consumer group stuck in rebalancing state

2016-01-08 Thread Cosmin Marginean (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2985?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15089962#comment-15089962
 ] 

Cosmin Marginean commented on KAFKA-2985:
-

I can confirm that I can also reproduce this with a very similar setup.
This is not the case, though, when using the old (pre-0.9.0.0) consumer against 
a 0.9.0.0 server.


[jira] [Commented] (KAFKA-493) High CPU usage on inactive server

2015-11-23 Thread Cosmin Marginean (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15022364#comment-15022364
 ] 

Cosmin Marginean commented on KAFKA-493:


Hi guys

This seems to have been inactive for a while, but I can confirm that I hit 
similar conditions (a high number of partitions: 256 per topic) and get high 
CPU usage, but only while producers/consumers are wired up. As soon as I shut 
down the client apps, CPU usage drops considerably. I just wanted to know 
whether this is related, or whether it is something I should investigate and 
provide diagnostics for in a separate ticket.

Below are the threads I identified as generating this high CPU usage.

{code}
"kafka-network-thread-9092-2" #24 prio=5 os_prio=0 tid=0x7f3f085a 
nid=0xe96 runnable [0x7f3ee8ef3000]
   java.lang.Thread.State: RUNNABLE
at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
- locked <0xd4d6e6c8> (a sun.nio.ch.Util$2)
- locked <0xd4d6e6b0> (a java.util.Collections$UnmodifiableSet)
- locked <0xd4d77ac8> (a sun.nio.ch.EPollSelectorImpl)
at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
at kafka.network.Processor.run(SocketServer.scala:320)
at java.lang.Thread.run(Thread.java:745)

"kafka-network-thread-9092-1" #23 prio=5 os_prio=0 tid=0x7f3f08585000 
nid=0xe95 runnable [0x7f3ee8ff4000]
   java.lang.Thread.State: RUNNABLE
at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
- locked <0xd4d6e6f8> (a sun.nio.ch.Util$2)
- locked <0xd4d6e6e0> (a java.util.Collections$UnmodifiableSet)
- locked <0xd4d77b58> (a sun.nio.ch.EPollSelectorImpl)
at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
at kafka.network.Processor.run(SocketServer.scala:320)
at java.lang.Thread.run(Thread.java:745)

"kafka-network-thread-9092-0" #22 prio=5 os_prio=0 tid=0x7f3f0856a800 
nid=0xe94 runnable [0x7f3ee90f5000]
   java.lang.Thread.State: RUNNABLE
at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
- locked <0xd4d7c4e8> (a sun.nio.ch.Util$2)
- locked <0xd4d7c4d0> (a java.util.Collections$UnmodifiableSet)
- locked <0xd4d77be8> (a sun.nio.ch.EPollSelectorImpl)
at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
at kafka.network.Processor.run(SocketServer.scala:320)
at java.lang.Thread.run(Thread.java:745)

{code}
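
As an aside, one way to arrive at a list like the one above is to rank live 
threads by accumulated CPU time from inside the JVM. The following is a hedged, 
self-contained sketch using the standard ThreadMXBean API; it is not 
necessarily how the dump above was gathered, and it assumes thread CPU time 
measurement is supported by the JVM.

{code}
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;
import java.util.Arrays;
import java.util.Comparator;

// Prints every live thread, busiest first, by accumulated CPU time.
public class TopThreads {
    public static void main(String[] args) {
        ThreadMXBean bean = ManagementFactory.getThreadMXBean();
        if (!bean.isThreadCpuTimeSupported()) {
            System.err.println("Thread CPU time not supported on this JVM");
            return;
        }
        bean.setThreadCpuTimeEnabled(true);
        ThreadInfo[] infos = bean.getThreadInfo(bean.getAllThreadIds());
        Arrays.sort(infos, Comparator.comparingLong(
                (ThreadInfo info) -> info == null ? 0L
                        : bean.getThreadCpuTime(info.getThreadId())).reversed());
        for (ThreadInfo info : infos) {
            if (info == null) continue; // thread exited between the two calls
            long cpuMillis = bean.getThreadCpuTime(info.getThreadId()) / 1_000_000;
            System.out.printf("%-45s %8d ms%n", info.getThreadName(), cpuMillis);
        }
    }
}
{code}

Run inside (or injected into) the broker process, this should point at the same 
kafka-network-thread-* threads shown in the stack traces above.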
