Re: 0.9.0.1 RC1

2016-02-15 Thread Neha Narkhede
+1 (binding).

Verified source and binary artifacts, ran ./gradlew testAll, quick start on
source artifact and Scala 2.11 binary artifact.

On Mon, Feb 15, 2016 at 7:43 PM, Ewen Cheslack-Postava 
wrote:

> Yeah, I saw
>
> kafka.network.SocketServerTest > tooBigRequestIsRejected FAILED
> java.net.SocketException: Broken pipe
> at java.net.SocketOutputStream.socketWrite0(Native Method)
> at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:113)
> at java.net.SocketOutputStream.write(SocketOutputStream.java:138)
> at java.io.DataOutputStream.writeShort(DataOutputStream.java:168)
> at kafka.network.SocketServerTest.sendRequest(SocketServerTest.scala:62)
> at kafka.network.SocketServerTest.tooBigRequestIsRejected(SocketServerTest.scala:132)
>
> from the source artifacts one time.
>
> For reference, a quick check of Confluent's kafka-trunk build
> http://jenkins.confluent.io/job/kafka-trunk/ doesn't show these specific
> transient errors (but it's also, obviously, a different branch). However,
> it does show other unrelated transient test errors.
>
> I've run the tests on trunk probably a dozen times in the past week for
> various PRs and not seen these test failures. The fact that I saw one the
> first time I ran tests on 0.9.0 has me a bit worried, though a couple
> more test runs didn't have the same result.
>
> Also, for this specific test, I reopened
> https://issues.apache.org/jira/browse/KAFKA-2398 in August, haven't seen it
> much since, and we released the last version with that bug open...
>
> I guess I'm a wary +1 since we have a system test run passing, I've only
> seen this once, and it seems to be an existing transient test issue that
> had no impact in practice.
>
> -Ewen
>
>
> On Mon, Feb 15, 2016 at 8:39 PM, Ismael Juma  wrote:
>
> > +1 (non-binding).
> >
> > Verified source and binary artifacts, ran ./gradlew testAll with JDK
> 7u80,
> > quick start on source artifact and Scala 2.11 binary artifact.
> >
> > Ismael
> >
> > On Fri, Feb 12, 2016 at 2:55 AM, Jun Rao  wrote:
> >
> > > This is the first candidate for release of Apache Kafka 0.9.0.1. This is
> > > a bug fix release that fixes 70 issues.
> > >
> > > Release Notes for the 0.9.0.1 release
> > >
> >
> https://home.apache.org/~junrao/kafka-0.9.0.1-candidate1/RELEASE_NOTES.html
> > >
> > > *** Please download, test and vote by Tuesday, Feb. 16, 7pm PT
> > >
> > > Kafka's KEYS file containing PGP keys we use to sign the release:
> > > http://kafka.apache.org/KEYS in addition to the md5, sha1
> > > and sha2 (SHA256) checksums.
> > >
> > > * Release artifacts to be voted upon (source and binary):
> > > https://home.apache.org/~junrao/kafka-0.9.0.1-candidate1/
> > >
> > > * Maven artifacts to be voted upon prior to release:
> > > https://repository.apache.org/content/groups/staging/org/apache/kafka/
> > >
> > > * scala-doc
> > > https://home.apache.org/~junrao/kafka-0.9.0.1-candidate1/scaladoc/
> > >
> > > * java-doc
> > > https://home.apache.org/~junrao/kafka-0.9.0.1-candidate1/javadoc/
> > >
> > > * The tag to be voted upon (off the 0.9.0 branch) is the 0.9.0.1 tag
> > >
> > >
> >
> https://git-wip-us.apache.org/repos/asf?p=kafka.git;a=tag;h=2c17685a45efe665bf5f24c0296cb8f9e1157e89
> > >
> > > * Documentation
> > > http://kafka.apache.org/090/documentation.html
> > >
> > > Thanks,
> > >
> > > Jun
> > >
> >
>
>
>
> --
> Thanks,
> Ewen
>



-- 
Thanks,
Neha


[jira] [Commented] (KAFKA-3093) Keep track of connector and task status info, expose it via the REST API

2016-02-15 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3093?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15148095#comment-15148095
 ] 

ASF GitHub Bot commented on KAFKA-3093:
---

GitHub user hachikuji opened a pull request:

https://github.com/apache/kafka/pull/920

KAFKA-3093 [WIP]: Add status tracking API



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hachikuji/kafka KAFKA-3093

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/920.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #920


commit 9cf01b36020816d034af539b6e91109a073c3c4c
Author: Jason Gustafson 
Date:   2016-02-10T23:25:25Z

KAFKA-3093 [WIP]: Add status tracking API




> Keep track of connector and task status info, expose it via the REST API
> 
>
> Key: KAFKA-3093
> URL: https://issues.apache.org/jira/browse/KAFKA-3093
> Project: Kafka
>  Issue Type: Improvement
>  Components: copycat
>Reporter: jin xing
>Assignee: Jason Gustafson
>
> Related to KAFKA-3054.
> We should keep track of the status of connectors and tasks during their 
> startup and execution, and handle exceptions thrown by them.
> Users should be able to fetch this information via the REST API and send 
> necessary commands (reconfigure, restart, pause, unpause) to connectors 
> and tasks via the REST API.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-3093 [WIP]: Add status tracking API

2016-02-15 Thread hachikuji
GitHub user hachikuji opened a pull request:

https://github.com/apache/kafka/pull/920

KAFKA-3093 [WIP]: Add status tracking API



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hachikuji/kafka KAFKA-3093

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/920.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #920


commit 9cf01b36020816d034af539b6e91109a073c3c4c
Author: Jason Gustafson 
Date:   2016-02-10T23:25:25Z

KAFKA-3093 [WIP]: Add status tracking API




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] kafka pull request: KAFKA-3093 [WIP]: Add Connect status tracking ...

2016-02-15 Thread hachikuji
Github user hachikuji closed the pull request at:

https://github.com/apache/kafka/pull/919




[jira] [Commented] (KAFKA-3093) Keep track of connector and task status info, expose it via the REST API

2016-02-15 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3093?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15148093#comment-15148093
 ] 

ASF GitHub Bot commented on KAFKA-3093:
---

Github user hachikuji closed the pull request at:

https://github.com/apache/kafka/pull/919







[GitHub] kafka pull request: KAFKA-3214 [WIP]: Add Connect status tracking ...

2016-02-15 Thread hachikuji
GitHub user hachikuji opened a pull request:

https://github.com/apache/kafka/pull/919

KAFKA-3214 [WIP]: Add Connect status tracking API



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hachikuji/kafka KAFKA-3214

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/919.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #919


commit 1a7e3adf19d230792baedd68d2609e58ec7a137d
Author: Jason Gustafson 
Date:   2016-02-10T23:25:25Z

KAFKA-3214 [WIP]: Add status tracking API






[jira] [Commented] (KAFKA-3214) Add consumer system tests for compressed topics

2016-02-15 Thread Jason Gustafson (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15148085#comment-15148085
 ] 

Jason Gustafson commented on KAFKA-3214:


Disregard PR. Put the wrong JIRA by mistake.

> Add consumer system tests for compressed topics
> ---
>
> Key: KAFKA-3214
> URL: https://issues.apache.org/jira/browse/KAFKA-3214
> Project: Kafka
>  Issue Type: Test
>  Components: consumer
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
>
> As far as I can tell, we don't have any ducktape tests which verify 
> correctness when compression is enabled. If we did, we might have caught 
> KAFKA-3179 earlier.
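At its core, such a test would assert that records survive a compress/produce/consume/decompress round trip unchanged. A codec-level sketch of that invariant (plain gzip standing in for Kafka's compression path; this is an illustrative sketch, not a ducktape test):

```python
import gzip

# Records that a hypothetical system test would produce to a compressed topic.
records = [f"record-{i}".encode() for i in range(100)]

# Compress the batch (stand-in for the producer's compression step)...
batch = gzip.compress(b"\n".join(records))

# ...and decompress on the consumer side.
consumed = gzip.decompress(batch).split(b"\n")

# The correctness property the system test would verify end to end:
# every record comes back, in order, byte for byte.
assert consumed == records
print("round trip ok:", len(consumed), "records")  # -> round trip ok: 100 records
```

A real ducktape test would run this property through actual brokers, producers, and consumers rather than at the codec level.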





[jira] [Commented] (KAFKA-3214) Add consumer system tests for compressed topics

2016-02-15 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15148076#comment-15148076
 ] 

ASF GitHub Bot commented on KAFKA-3214:
---

GitHub user hachikuji opened a pull request:

https://github.com/apache/kafka/pull/919

KAFKA-3214 [WIP]: Add Connect status tracking API



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hachikuji/kafka KAFKA-3214

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/919.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #919


commit 1a7e3adf19d230792baedd68d2609e58ec7a137d
Author: Jason Gustafson 
Date:   2016-02-10T23:25:25Z

KAFKA-3214 [WIP]: Add status tracking API









Re: 0.9.0.1 RC1

2016-02-15 Thread Ewen Cheslack-Postava
Yeah, I saw

kafka.network.SocketServerTest > tooBigRequestIsRejected FAILED
java.net.SocketException: Broken pipe
at java.net.SocketOutputStream.socketWrite0(Native Method)
at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:113)
at java.net.SocketOutputStream.write(SocketOutputStream.java:138)
at java.io.DataOutputStream.writeShort(DataOutputStream.java:168)
at kafka.network.SocketServerTest.sendRequest(SocketServerTest.scala:62)
at kafka.network.SocketServerTest.tooBigRequestIsRejected(SocketServerTest.scala:132)

from the source artifacts one time.
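The "Broken pipe" here is the classic client-side symptom of the peer closing the connection mid-write: in this test the server drops the socket after rejecting the oversized request while the client is still sending. A minimal sketch of the same failure mode (plain Python sockets, not the Kafka test itself):

```python
import socket

# A connected pair of sockets; closing one end simulates a server that
# drops the connection after rejecting an oversized request.
client, server = socket.socketpair()
server.close()

# Writing to a socket whose peer is gone raises BrokenPipeError (EPIPE),
# the Python analogue of java.net.SocketException: Broken pipe.
raised = False
try:
    for _ in range(3):
        client.send(b"oversized request bytes")
except BrokenPipeError:
    raised = True
finally:
    client.close()
print("broken pipe observed:", raised)  # -> broken pipe observed: True
```

The flakiness likely comes down to a race of this shape: whether the client's write lands before or after the server closes the socket.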

For reference, a quick check of Confluent's kafka-trunk build
http://jenkins.confluent.io/job/kafka-trunk/ doesn't show these specific
transient errors (but it's also, obviously, a different branch). However,
it does show other unrelated transient test errors.

I've run the tests on trunk probably a dozen times in the past week for
various PRs and not seen these test failures. The fact that I saw one the
first time I ran tests on 0.9.0 has me a bit worried, though a couple
more test runs didn't have the same result.

Also, for this specific test, I reopened
https://issues.apache.org/jira/browse/KAFKA-2398 in August, haven't seen it
much since, and we released the last version with that bug open...

I guess I'm a wary +1 since we have a system test run passing, I've only
seen this once, and it seems to be an existing transient test issue that
had no impact in practice.

-Ewen


On Mon, Feb 15, 2016 at 8:39 PM, Ismael Juma  wrote:

> +1 (non-binding).
>
> Verified source and binary artifacts, ran ./gradlew testAll with JDK 7u80,
> quick start on source artifact and Scala 2.11 binary artifact.
>
> Ismael
>
> On Fri, Feb 12, 2016 at 2:55 AM, Jun Rao  wrote:
>
> > This is the first candidate for release of Apache Kafka 0.9.0.1. This is
> > a bug fix release that fixes 70 issues.
> >
> > Release Notes for the 0.9.0.1 release
> >
> https://home.apache.org/~junrao/kafka-0.9.0.1-candidate1/RELEASE_NOTES.html
> >
> > *** Please download, test and vote by Tuesday, Feb. 16, 7pm PT
> >
> > Kafka's KEYS file containing PGP keys we use to sign the release:
> > http://kafka.apache.org/KEYS in addition to the md5, sha1
> > and sha2 (SHA256) checksums.
> >
> > * Release artifacts to be voted upon (source and binary):
> > https://home.apache.org/~junrao/kafka-0.9.0.1-candidate1/
> >
> > * Maven artifacts to be voted upon prior to release:
> > https://repository.apache.org/content/groups/staging/org/apache/kafka/
> >
> > * scala-doc
> > https://home.apache.org/~junrao/kafka-0.9.0.1-candidate1/scaladoc/
> >
> > * java-doc
> > https://home.apache.org/~junrao/kafka-0.9.0.1-candidate1/javadoc/
> >
> > * The tag to be voted upon (off the 0.9.0 branch) is the 0.9.0.1 tag
> >
> >
> https://git-wip-us.apache.org/repos/asf?p=kafka.git;a=tag;h=2c17685a45efe665bf5f24c0296cb8f9e1157e89
> >
> > * Documentation
> > http://kafka.apache.org/090/documentation.html
> >
> > Thanks,
> >
> > Jun
> >
>



-- 
Thanks,
Ewen


Re: 0.9.0.1 RC1

2016-02-15 Thread Ismael Juma
+1 (non-binding).

Verified source and binary artifacts, ran ./gradlew testAll with JDK 7u80,
quick start on source artifact and Scala 2.11 binary artifact.

Ismael

On Fri, Feb 12, 2016 at 2:55 AM, Jun Rao  wrote:

> This is the first candidate for release of Apache Kafka 0.9.0.1. This is a
> bug fix release that fixes 70 issues.
>
> Release Notes for the 0.9.0.1 release
> https://home.apache.org/~junrao/kafka-0.9.0.1-candidate1/RELEASE_NOTES.html
>
> *** Please download, test and vote by Tuesday, Feb. 16, 7pm PT
>
> Kafka's KEYS file containing PGP keys we use to sign the release:
> http://kafka.apache.org/KEYS in addition to the md5, sha1
> and sha2 (SHA256) checksums.
>
> * Release artifacts to be voted upon (source and binary):
> https://home.apache.org/~junrao/kafka-0.9.0.1-candidate1/
>
> * Maven artifacts to be voted upon prior to release:
> https://repository.apache.org/content/groups/staging/org/apache/kafka/
>
> * scala-doc
> https://home.apache.org/~junrao/kafka-0.9.0.1-candidate1/scaladoc/
>
> * java-doc
> https://home.apache.org/~junrao/kafka-0.9.0.1-candidate1/javadoc/
>
> * The tag to be voted upon (off the 0.9.0 branch) is the 0.9.0.1 tag
>
> https://git-wip-us.apache.org/repos/asf?p=kafka.git;a=tag;h=2c17685a45efe665bf5f24c0296cb8f9e1157e89
>
> * Documentation
> http://kafka.apache.org/090/documentation.html
>
> Thanks,
>
> Jun
>


[jira] [Commented] (KAFKA-3145) CPU Usage Spike to 100% when network connection is to error port

2016-02-15 Thread Dong Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3145?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15147971#comment-15147971
 ] 

Dong Lin commented on KAFKA-3145:
-

[~zhiwei] Can you try to reproduce it using trunk? It is probably 
already fixed.

> CPU Usage Spike to 100% when network connection is to error port
> 
>
> Key: KAFKA-3145
> URL: https://issues.apache.org/jira/browse/KAFKA-3145
> Project: Kafka
>  Issue Type: Bug
>  Components: producer 
>Affects Versions: 0.8.2.1, 0.9.0.0
>Reporter: zhiwei
>Assignee: Dong Lin
> Fix For: 0.9.1.0
>
>
> CPU spikes to 100% when the network connection is made to the wrong port.
> It seems the network I/O thread is very busy logging the following error 
> message.
> [2016-01-25 14:09:12,476] ERROR Uncaught error in kafka producer I/O thread: (org.apache.kafka.clients.producer.internals.Sender)
> java.lang.IllegalStateException: Invalid request (size = -1241382912)
>   at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:68)
>   at org.apache.kafka.common.network.Selector.poll(Selector.java:248)
>   at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:192)
>   at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:191)
>   at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:122)
>   at java.lang.Thread.run(Thread.java:745)
> [2016-01-25 14:09:12,479] ERROR Uncaught error in kafka producer I/O thread: (org.apache.kafka.clients.producer.internals.Sender)
> java.lang.NullPointerException
>   at org.apache.kafka.common.network.NetworkReceive.complete(NetworkReceive.java:48)
>   at org.apache.kafka.common.network.Selector.poll(Selector.java:249)
>   at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:192)
>   at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:191)
>   at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:122)
>   at java.lang.Thread.run(Thread.java:745)
> [2016-01-25 14:09:12,480] ERROR Uncaught error in kafka producer I/O thread: (org.apache.kafka.clients.producer.internals.Sender)
> java.lang.NullPointerException
>   at org.apache.kafka.common.network.NetworkReceive.complete(NetworkReceive.java:48)
>   at org.apache.kafka.common.network.Selector.poll(Selector.java:249)
>   at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:192)
>   at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:191)
>   at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:122)
>   at java.lang.Thread.run(Thread.java:745)
> [2016-01-25 14:09:12,480] ERROR Uncaught error in kafka producer I/O thread: (org.apache.kafka.clients.producer.internals.Sender)
> java.lang.NullPointerException
>   at org.apache.kafka.common.network.NetworkReceive.complete(NetworkReceive.java:48)
>   at org.apache.kafka.common.network.Selector.poll(Selector.java:249)
>   at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:192)
>   at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:191)
>   at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:122)
>   at java.lang.Thread.run(Thread.java:745)
> [2016-01-25 14:09:12,481] ERROR Uncaught error in kafka producer I/O thread: (org.apache.kafka.clients.producer.internals.Sender)
> java.lang.NullPointerException
>   at org.apache.kafka.common.network.NetworkReceive.complete(NetworkReceive.java:48)
>   at org.apache.kafka.common.network.Selector.poll(Selector.java:249)
>   at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:192)
>   at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:191)
>   at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:122)
>   at java.lang.Thread.run(Thread.java:745)
> Thanks,





[jira] [Commented] (KAFKA-3145) CPU Usage Spike to 100% when network connection is to error port

2016-02-15 Thread zhiwei (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3145?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15147953#comment-15147953
 ] 

zhiwei commented on KAFKA-3145:
---

[~lindong]

We use the 0.8.2.1 new producer client API, and the Kafka broker port is 
configured as 9178. 9178 is the port of another service.







[jira] [Commented] (KAFKA-3145) CPU Usage Spike to 100% when network connection is to error port

2016-02-15 Thread Dong Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3145?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15147932#comment-15147932
 ] 

Dong Lin commented on KAFKA-3145:
-

[~zhiwei], yeah, I understand that CPU usage depends on the performance of the 
machine. But I couldn't reproduce the NullPointerException using the latest 
trunk.

I used Kafka with git hash d7fc7cf6154592b7fea494a092e42ad9d45b98a0 without 
changing any file. The zookeeper, broker and producer are started as shown below:

zookeeper-server-start.sh config/zookeeper.properties

kafka-server-start.sh config/server.properties

kafka-producer-perf-test.sh --topic test --num-records 100 --record-size \
100 --throughput 10 --producer-props bootstrap.servers=localhost:9095

If you are able to reproduce the NullPointerException, can you provide the 
following information:
- the git hash of the Kafka version you used
- the commands you used to start the producer and broker

Thanks,










[jira] [Commented] (KAFKA-3145) CPU Usage Spike to 100% when network connection is to error port

2016-02-15 Thread zhiwei (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3145?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15147922#comment-15147922
 ] 

zhiwei commented on KAFKA-3145:
---

[~lindong]

If the machine performance is relatively poor, CPU usage reaches 100%. If the 
machine performance is better, CPU utilization does not reach 100%, but it 
prints the exception non-stop.

There are two abnormal scenarios:
1) The wrong port number is configured and the connection is unsuccessful.

2) The wrong port number is configured and the connection succeeds, but the 
send acknowledgment cannot be parsed. This throws a NullPointerException.
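The "Invalid request (size = -1241382912)" line in the log is consistent with this: the client reads the first four bytes of the foreign service's response and interprets them as a big-endian size prefix, which easily decodes as negative. A sketch of that misinterpretation (the response bytes below are made up for illustration):

```python
import struct

# Kafka's wire protocol prefixes each response with a 4-byte big-endian
# size field. A non-Kafka service on the same port replies with arbitrary
# bytes; these first four bytes are chosen to decode to the exact value
# seen in the log (any reply whose high bit is set decodes as negative).
foreign_response = b"\xb6\x02\x00\x00" + b"rest of some other protocol"

(size,) = struct.unpack(">i", foreign_response[:4])
print("decoded size:", size)  # -> decoded size: -1241382912
assert size < 0  # the producer then rejects the request as invalid
```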




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: 0.9.0.1 RC1

2016-02-15 Thread Flavio Junqueira
I'm also getting test failures. Here are the tests that failed for me and the 
JIRAs I could find reporting the corresponding issues:

org.apache.kafka.common.network.SslTransportLayerTest > 
testInvalidEndpointIdentification - KAFKA-2850
kafka.api.QuotasTest > testThrottledProducerConsumer - KAFKA-2444
kafka.api.SslProducerSendTest > testCloseWithZeroTimeoutFromCallerThread - 
KAFKA-2363
kafka.api.PlaintextConsumerTest > testMultiConsumerDefaultAssignment - 
KAFKA-2933
kafka.network.SocketServerTest > tooBigRequestIsRejected - KAFKA-2398

Digests and signature also match for me and it builds if I don't run tests.

-Flavio 

> On 15 Feb 2016, at 16:03, Phil Steitz  wrote:
> 
> I get the following test failures on both OSX (10.11.3) and Ubuntu
> (14.04), JDK 1.7 and 1.8.
> 
> KafkaConsumerTest. testConstructorClose
> KafkaProducerTest. testConstructorFailureCloseResource
> SelectorTest. testNoRouteToHost
> SslSelectorTest. testNoRouteToHost
> 
> All seem to be expecting exceptions that aren't thrown. The tarball build
> succeeded, though, so maybe these are expected. Just thought it best to
> report them.
> 
> My application tests passed, other than an issue shared with 0.9.0.0,
> reported in another thread.
> 
> If there is another RC, two little things that would be good to
> improve are:
> 
> 0) the NOTICE files still have 2015 copyright dates
> 1) the hash files for the tarballs use a non-standard encoding,
> making it harder to verify.  The sig is standard ASCII armor and good.
> 
> Phil
> 
> 
> On 2/11/16 7:55 PM, Jun Rao wrote:
>> This is the first candidate for release of Apache Kafka 0.9.0.1. This is a
>> bug fix release that fixes 70 issues.
>> 
>> Release Notes for the 0.9.0.1 release
>> https://home.apache.org/~junrao/kafka-0.9.0.1-candidate1/RELEASE_NOTES.html
>> 
>> *** Please download, test and vote by Tuesday, Feb. 16, 7pm PT
>> 
>> Kafka's KEYS file containing PGP keys we use to sign the release:
>> http://kafka.apache.org/KEYS in addition to the md5, sha1
>> and sha2 (SHA256) checksums.
>> 
>> * Release artifacts to be voted upon (source and binary):
>> https://home.apache.org/~junrao/kafka-0.9.0.1-candidate1/
>> 
>> * Maven artifacts to be voted upon prior to release:
>> https://repository.apache.org/content/groups/staging/org/apache/kafka/
>> 
>> * scala-doc
>> https://home.apache.org/~junrao/kafka-0.9.0.1-candidate1/scaladoc/
>> 
>> * java-doc
>> https://home.apache.org/~junrao/kafka-0.9.0.1-candidate1/javadoc/
>> 
>> * The tag to be voted upon (off the 0.9.0 branch) is the 0.9.0.1 tag
>> https://git-wip-us.apache.org/repos/asf?p=kafka.git;a=tag;h=2c17685a45efe665bf5f24c0296cb8f9e1157e89
>> 
>> * Documentation
>> http://kafka.apache.org/090/documentation.html
>> 
>> Thanks,
>> 
>> Jun
>> 
> 
> 



[jira] [Commented] (KAFKA-3145) CPU Usage Spike to 100% when network connection is to error port

2016-02-15 Thread Dong Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3145?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15147899#comment-15147899
 ] 

Dong Lin commented on KAFKA-3145:
-

[~zhiwei], I am not able to reproduce the problem. I started one zookeeper, 
broker and producer on one desktop. The desktop has 12 threads. I started 
kafka-producer-perf-test.sh with a wrong port. The CPU usage as shown in the 
top command looks fine to me. I couldn't find NullPointerException either.

Maybe it is already fixed in a recent patch. Can you try to reproduce the problem?
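For context on the "Invalid request (size = -1241382912)" line in the reported trace: the Kafka protocol length-prefixes every response with a signed big-endian int32, so the first four bytes read back from a peer speaking a different protocol decode to an arbitrary, often negative, "size", which NetworkReceive then rejects. A minimal sketch of that decoding (Python stdlib only; the function name is illustrative):

```python
import struct

def read_size_prefix(first_four_bytes: bytes) -> int:
    # Kafka responses start with a signed big-endian int32 length prefix;
    # NetworkReceive treats non-positive values as an invalid request.
    (size,) = struct.unpack(">i", first_four_bytes)
    return size

# A well-formed prefix decodes to the actual payload length:
assert read_size_prefix(b"\x00\x00\x04\x00") == 1024

# Bytes from a peer speaking some other protocol decode to garbage --
# 0xB6020000, for example, is exactly the -1241382912 seen in the trace.
assert read_size_prefix(b"\xb6\x02\x00\x00") == -1241382912
```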

> CPU Usage Spike to 100% when network connection is to error port
> 
>
> Key: KAFKA-3145
> URL: https://issues.apache.org/jira/browse/KAFKA-3145
> Project: Kafka
>  Issue Type: Bug
>  Components: producer 
>Affects Versions: 0.8.2.1, 0.9.0.0
>Reporter: zhiwei
>Assignee: Dong Lin
> Fix For: 0.9.1.0
>
>
> CPU spikes to 100% when the network connection is made to a wrong port.
> It seems the network I/O thread is very busy logging the following error message.
> [2016-01-25 14:09:12,476] ERROR Uncaught error in kafka producer I/O thread:  
> (org.apache.kafka.clients.producer.internals.Sender)
> java.lang.IllegalStateException: Invalid request (size = -1241382912)
>   at 
> org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:68)
>   at org.apache.kafka.common.network.Selector.poll(Selector.java:248)
>   at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:192)
>   at 
> org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:191)
>   at 
> org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:122)
>   at java.lang.Thread.run(Thread.java:745)
> [2016-01-25 14:09:12,479] ERROR Uncaught error in kafka producer I/O thread:  
> (org.apache.kafka.clients.producer.internals.Sender)
> java.lang.NullPointerException
>   at 
> org.apache.kafka.common.network.NetworkReceive.complete(NetworkReceive.java:48)
>   at org.apache.kafka.common.network.Selector.poll(Selector.java:249)
>   at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:192)
>   at 
> org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:191)
>   at 
> org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:122)
>   at java.lang.Thread.run(Thread.java:745)
> (the same NullPointerException stack trace is logged again at 14:09:12,480,
> 14:09:12,480 and 14:09:12,481)
> Thanks,
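The description reports the I/O thread spinning while logging the same error, which is the signature of a retry loop with no delay between failed attempts. The usual fix for this class of bug is exponential backoff; a sketch of the idea (Python stdlib only; names are illustrative, this is not the actual Sender loop):

```python
import time

def poll_with_backoff(poll, initial=0.001, cap=1.0):
    """Retry poll() with exponential backoff so a persistently failing
    connection cannot spin an I/O thread at 100% CPU."""
    delay = initial
    while True:
        try:
            return poll()
        except Exception:
            time.sleep(delay)            # the missing sleep is what makes a loop spin
            delay = min(delay * 2, cap)

# A poll that fails twice before succeeding terminates after two backoffs:
attempts = {"n": 0}
def flaky_poll():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ConnectionError("broker unreachable")
    return "metadata"

assert poll_with_backoff(flaky_poll) == "metadata"
assert attempts["n"] == 3
```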





[jira] [Commented] (KAFKA-3007) Implement max.poll.records for new consumer (KIP-41)

2016-02-15 Thread Erik Pettersson (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15147845#comment-15147845
 ] 

Erik Pettersson commented on KAFKA-3007:


No problem! Was just thinking about if I should implement it if you had other 
things to do :) 

> Implement max.poll.records for new consumer (KIP-41)
> 
>
> Key: KAFKA-3007
> URL: https://issues.apache.org/jira/browse/KAFKA-3007
> Project: Kafka
>  Issue Type: Improvement
>  Components: consumer
>Affects Versions: 0.9.0.0
>Reporter: aarti gupta
>Assignee: Jason Gustafson
>
> Currently, consumer.poll(timeout) returns all messages that have not been 
> acked since the last fetch.
> The only way to process a single message is to throw away all but the first 
> message in the list.
> This means we are required to fetch all messages into memory, and this, 
> coupled with the client not being thread-safe (i.e. we cannot use a 
> different thread to ack messages), makes it hard to consume messages when 
> the order of message arrival is important and a large number of messages 
> are pending to be consumed.
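The KIP-41 idea the ticket asks for is to cap how many records a single poll() returns while buffering the remainder client-side, so no extra fetches are needed. A sketch of that buffering (Python stdlib only; the class and callable names are illustrative, not the real KafkaConsumer API):

```python
from collections import deque

class BoundedPollConsumer:
    """Return at most max_poll_records per poll() and buffer the rest of
    the fetch client-side (illustrative sketch of the KIP-41 behavior)."""

    def __init__(self, fetch, max_poll_records=1):
        self._fetch = fetch              # callable returning a list of records
        self._buffer = deque()
        self._max = max_poll_records

    def poll(self):
        if not self._buffer:             # only hit the network when drained
            self._buffer.extend(self._fetch())
        return [self._buffer.popleft()
                for _ in range(min(self._max, len(self._buffer)))]

# One fetch of five records is served across polls of at most two records:
consumer = BoundedPollConsumer(lambda: list(range(5)), max_poll_records=2)
assert consumer.poll() == [0, 1]
assert consumer.poll() == [2, 3]
assert consumer.poll() == [4]
```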





[GitHub] kafka pull request: KAFKA-3133: Add putIfAbsent function to KeyVal...

2016-02-15 Thread kichristensen
GitHub user kichristensen reopened a pull request:

https://github.com/apache/kafka/pull/912

KAFKA-3133: Add putIfAbsent function to KeyValueStore

@guozhangwang

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/kichristensen/kafka KAFKA-3133

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/912.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #912


commit db38597a9b14c6a9acd2f1f952b2a6b12bf4f3c8
Author: Kim Christensen 
Date:   2016-02-14T18:38:30Z

Add putIfAbsent function to KeyValueStore

commit 3ba7b313b7dc7c8b180a7603bb08554655b7f7d3
Author: Kim Christensen 
Date:   2016-02-14T20:27:57Z

Use inner.putIfAbsent




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-3133) Add putIfAbsent function to KeyValueStore

2016-02-15 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3133?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15147801#comment-15147801
 ] 

ASF GitHub Bot commented on KAFKA-3133:
---

GitHub user kichristensen reopened a pull request:

https://github.com/apache/kafka/pull/912

KAFKA-3133: Add putIfAbsent function to KeyValueStore

@guozhangwang

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/kichristensen/kafka KAFKA-3133

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/912.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #912


commit db38597a9b14c6a9acd2f1f952b2a6b12bf4f3c8
Author: Kim Christensen 
Date:   2016-02-14T18:38:30Z

Add putIfAbsent function to KeyValueStore

commit 3ba7b313b7dc7c8b180a7603bb08554655b7f7d3
Author: Kim Christensen 
Date:   2016-02-14T20:27:57Z

Use inner.putIfAbsent




> Add putIfAbsent function to KeyValueStore
> -
>
> Key: KAFKA-3133
> URL: https://issues.apache.org/jira/browse/KAFKA-3133
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Guozhang Wang
>
> Since a local store will only be accessed by a single stream thread, there are 
> no atomicity concerns and hence this API should be easy to add.
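Because a single stream thread owns the store, putIfAbsent can be a plain get-then-put with no locking. A sketch of that reasoning (Python stdlib only; the class is hypothetical, not Kafka Streams' actual KeyValueStore interface):

```python
class LocalKeyValueStore:
    """Dict-backed sketch of a local store owned by one stream thread."""

    def __init__(self):
        self._inner = {}

    def get(self, key):
        return self._inner.get(key)

    def put(self, key, value):
        self._inner[key] = value

    def put_if_absent(self, key, value):
        # One writer thread => a non-atomic get-then-put cannot race.
        old = self.get(key)
        if old is None:
            self.put(key, value)
        return old

store = LocalKeyValueStore()
assert store.put_if_absent("a", 1) is None   # absent: stores 1, returns None
assert store.put_if_absent("a", 2) == 1      # present: keeps 1, returns it
assert store.get("a") == 1
```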





[jira] [Commented] (KAFKA-3133) Add putIfAbsent function to KeyValueStore

2016-02-15 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3133?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15147800#comment-15147800
 ] 

ASF GitHub Bot commented on KAFKA-3133:
---

Github user kichristensen closed the pull request at:

https://github.com/apache/kafka/pull/912


> Add putIfAbsent function to KeyValueStore
> -
>
> Key: KAFKA-3133
> URL: https://issues.apache.org/jira/browse/KAFKA-3133
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Guozhang Wang
>
> Since a local store will only be accessed by a single stream thread, there are 
> no atomicity concerns and hence this API should be easy to add.





[GitHub] kafka pull request: KAFKA-3133: Add putIfAbsent function to KeyVal...

2016-02-15 Thread kichristensen
Github user kichristensen closed the pull request at:

https://github.com/apache/kafka/pull/912




[GitHub] kafka pull request: Add expected Error Codes to ProduceResponse do...

2016-02-15 Thread dpkp
GitHub user dpkp opened a pull request:

https://github.com/apache/kafka/pull/918

Add expected Error Codes to ProduceResponse documentation

This is a documentation-only patch discussed on the mailing list. The 
intent is to have these changes propagated to the protocol wiki 
(https://cwiki.apache.org/confluence/display/KAFKA/A+Guide+To+The+Kafka+Protocol)
 .

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/dpkp/kafka produce_response_errors

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/918.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #918


commit 03db2c60b4cb394ba96112f5ec106816f316de87
Author: Dana Powers 
Date:   2016-02-15T19:51:13Z

Add expected Error Codes to ProduceResponse documentation






[GitHub] kafka pull request: Properly quote $JAVA in order to avoid failure...

2016-02-15 Thread pbaille
GitHub user pbaille opened a pull request:

https://github.com/apache/kafka/pull/917

Properly quote $JAVA in order to avoid failure due to space char in PATH



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/pbaille/kafka trunk

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/917.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #917


commit d7b5fab904f9261cc4771a974164ef7d229f0e27
Author: Pierre Baille 
Date:   2016-02-15T19:28:36Z

Properly quote $JAVA in order to avoid failure due to space char in PATH






[GitHub] kafka pull request: Properly quote $JAVA in bin/kafka-run-class.sh...

2016-02-15 Thread pbaille
Github user pbaille closed the pull request at:

https://github.com/apache/kafka/pull/916




[GitHub] kafka pull request: Properly quote $JAVA in bin/kafka-run-class.sh...

2016-02-15 Thread pbaille
GitHub user pbaille opened a pull request:

https://github.com/apache/kafka/pull/916

Properly quote $JAVA in bin/kafka-run-class.sh in order to avoid failure 
due to space char in PATH



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/apache/kafka 0.9.0

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/916.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #916


commit 7710b367fd26a0c41565f35200748c23616b4477
Author: Gwen Shapira 
Date:   2015-11-07T03:46:30Z

Changing version to 0.9.0.0

commit 27d44afe664bff45d62f72335fdbb56671561512
Author: Jason Gustafson 
Date:   2015-11-08T19:38:50Z

KAFKA-2723: new consumer exception cleanup (0.9.0)

Author: Jason Gustafson 

Reviewers: Guozhang Wang

Closes #452 from hachikuji/KAFKA-2723

commit 32cd3e35f1ea8251a51860cc48a44fb2fbfd7c0e
Author: Jason Gustafson 
Date:   2015-11-08T20:36:42Z

HOTFIX: fix group coordinator edge cases around metadata storage callback 
(0.9.0)

Author: Jason Gustafson 

Reviewers: Guozhang Wang

Closes #453 from hachikuji/hotfix-group-coordinator-0.9

commit 1fd79f57b4a73308c59b797974086ca09af19b98
Author: Ewen Cheslack-Postava 
Date:   2015-11-09T04:41:35Z

KAFKA-2480: Handle retriable and non-retriable exceptions thrown by sink 
tasks.

Author: Ewen Cheslack-Postava 

Reviewers: Gwen Shapira

Closes #450 from ewencp/kafka-2480-unrecoverable-task-errors

(cherry picked from commit f4b87deefecf4902992a84d4a3fe3b99a94ff72b)
Signed-off-by: Gwen Shapira 

commit 48013222fd426685d2907a760290d2e7c7d25aea
Author: Geoff Anderson 
Date:   2015-11-09T04:52:16Z

KAFKA-2773; (0.9.0 branch) Fixed broken vagrant provision scripts for static 
zk/broker cluster

Author: Geoff Anderson 

Reviewers: Gwen Shapira

Closes #455 from granders/KAFKA-2773-0.9.0-vagrant-fix

commit 417e283d643d8865aa3e79dffa373c8cc853d78f
Author: Ewen Cheslack-Postava 
Date:   2015-11-09T06:11:03Z

KAFKA-2774: Rename Copycat to Kafka Connect

Author: Ewen Cheslack-Postava 

Reviewers: Gwen Shapira

Closes #456 from ewencp/kafka-2774-rename-copycat

(cherry picked from commit f2031d40639ef34c1591c22971394ef41c87652c)
Signed-off-by: Gwen Shapira 

commit 02fbdaa4475fd12a0fdccaa103bf27cbc1bfd077
Author: Rajini Sivaram 
Date:   2015-11-09T15:23:47Z

KAFKA-2779; Close SSL socket channel on remote connection close

Close socket channel in finally block to avoid file descriptor leak when 
remote end closes the connection

Author: Rajini Sivaram 

Reviewers: Ismael Juma , Jun Rao 

Closes #460 from rajinisivaram/KAFKA-2779

(cherry picked from commit efbebc6e843850b7ed9a1d015413c99f114a7d92)
Signed-off-by: Jun Rao 

commit fdefef9536acf8569607a980a25237ef4794f645
Author: Ewen Cheslack-Postava 
Date:   2015-11-09T17:10:20Z

KAFKA-2781; Only require signing artifacts when uploading archives.

Author: Ewen Cheslack-Postava 

Reviewers: Jun Rao 

Closes #461 from ewencp/kafka-2781-no-signing-for-install

(cherry picked from commit a24f9a23a6d8759538e91072e8d96d158d03bb63)
Signed-off-by: Jun Rao 

commit 7471394c5485a2114d35c6345d95e161a0ee6586
Author: Ewen Cheslack-Postava 
Date:   2015-11-09T18:19:27Z

KAFKA-2776: Fix lookup of schema conversion cache size in JsonConverter.

Author: Ewen Cheslack-Postava 

Reviewers: Gwen Shapira

Closes #458 from ewencp/kafka-2776-json-converter-cache-config-fix

(cherry picked from commit e9fc7b8c84908ae642339a2522a79f8bb5155728)
Signed-off-by: Gwen Shapira 

commit 3aa3e85d942b514cbe842a6b3c3fe214c0ecf401
Author: Jason Gustafson 
Date:   2015-11-09T18:26:17Z

HOTFIX: bug updating cache when loading group metadata

The bug causes only the first instance of group metadata in the topic to be 
written to the cache (because of the putIfNotExists in addGroup). Coordinator 
fail-over won't work properly unless the cache is loaded with the right 
metadata.

Author: Jason Gustafson 

Reviewers: Guozhang Wang

Closes #462 from hachikuji/hotfix-group-loading

(cherry picked from commit 2b04004de878823fe631af1f3f85108c0b38caec)
Signed-off-by: Guozhang Wang 

commit e627558a5e62d185c88650af845d7b74e9c290f8
Author: Ewen Cheslack-Postava 
Date:   2015-11-09T18:27:18Z

KAFKA-2775: Move exceptions into API package for Kafka Connect.

Author: Ewen Cheslack-Postava 

Reviewers: Gwen Shapira

Closes #457 from ewencp/kafka-2775-exceptions-in-api-package

(cherry picked from commit bc76e6704e8f14d59bb5d4fcf9bdf544c9e463bf)
Signed-off-by: Gwen Shapira 

commit 4069011ee

[jira] [Commented] (KAFKA-3007) Implement max.poll.records for new consumer (KIP-41)

2016-02-15 Thread Jason Gustafson (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15147713#comment-15147713
 ] 

Jason Gustafson commented on KAFKA-3007:


[~eripe...@gmail.com] Apologies for the delay. I was planning to implement this 
week.

> Implement max.poll.records for new consumer (KIP-41)
> 
>
> Key: KAFKA-3007
> URL: https://issues.apache.org/jira/browse/KAFKA-3007
> Project: Kafka
>  Issue Type: Improvement
>  Components: consumer
>Affects Versions: 0.9.0.0
>Reporter: aarti gupta
>Assignee: Jason Gustafson
>
> Currently, consumer.poll(timeout) returns all messages that have not been 
> acked since the last fetch.
> The only way to process a single message is to throw away all but the first 
> message in the list.
> This means we are required to fetch all messages into memory, and this, 
> coupled with the client not being thread-safe (i.e. we cannot use a 
> different thread to ack messages), makes it hard to consume messages when 
> the order of message arrival is important and a large number of messages 
> are pending to be consumed.





[jira] [Commented] (KAFKA-3007) Implement max.poll.records for new consumer (KIP-41)

2016-02-15 Thread Erik Pettersson (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15147662#comment-15147662
 ] 

Erik Pettersson commented on KAFKA-3007:


Has any work been done to this issue? 

> Implement max.poll.records for new consumer (KIP-41)
> 
>
> Key: KAFKA-3007
> URL: https://issues.apache.org/jira/browse/KAFKA-3007
> Project: Kafka
>  Issue Type: Improvement
>  Components: consumer
>Affects Versions: 0.9.0.0
>Reporter: aarti gupta
>Assignee: Jason Gustafson
>
> Currently, consumer.poll(timeout) returns all messages that have not been 
> acked since the last fetch.
> The only way to process a single message is to throw away all but the first 
> message in the list.
> This means we are required to fetch all messages into memory, and this, 
> coupled with the client not being thread-safe (i.e. we cannot use a 
> different thread to ack messages), makes it hard to consume messages when 
> the order of message arrival is important and a large number of messages 
> are pending to be consumed.





[jira] [Commented] (KAFKA-3093) Keep track of connector and task status info, expose it via the REST API

2016-02-15 Thread Jason Gustafson (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3093?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15147590#comment-15147590
 ] 

Jason Gustafson commented on KAFKA-3093:


[~jinxing6...@126.com] I should have a patch available later today so it'd be 
great if you could help review. For this ticket, I haven't tried to implement 
any failure handling other than exposing the failure through the API. I 
definitely think it would make sense for Connect to implement failure policies 
(such as rolling back and restarting), but maybe we can do that in a separate 
JIRA? 

> Keep track of connector and task status info, expose it via the REST API
> 
>
> Key: KAFKA-3093
> URL: https://issues.apache.org/jira/browse/KAFKA-3093
> Project: Kafka
>  Issue Type: Improvement
>  Components: copycat
>Reporter: jin xing
>Assignee: Jason Gustafson
>
> Related to KAFKA-3054;
> We should keep track of the status of connectors and tasks during their 
> startup and execution, and handle exceptions thrown by connectors and tasks;
> Users should be able to fetch this information via the REST API and send 
> necessary commands (reconfiguring, restarting, pausing, unpausing) to 
> connectors and tasks via the REST API;





[jira] [Comment Edited] (KAFKA-3172) Consumer threads stay in 'Waiting' status and are blocked at consumer poll method

2016-02-15 Thread Dany Benjamin (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3172?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15147563#comment-15147563
 ] 

Dany Benjamin edited comment on KAFKA-3172 at 2/15/16 4:44 PM:
---

[~gwenshap] Yes. This is the jstack dump I am getting. I started my 
multi-threaded consumer with a jmx port in it and I am seeing the same 
'kafka-producer-network-thread | producer n' via Jrockit. I am running the 
ConsumerGroupExample modified to read my topic.


was (Author: dany.benja...@gmail.com):
Yes. This is the jstack dump I am getting. I started my multi-threaded consumer 
with a jmx port in it and I am seeing the same 'kafka-producer-network-thread | 
producer n' via Jrockit. I am running the ConsumerGroupExample modified to read 
my topic.

> Consumer threads stay in 'Waiting' status and are blocked at consumer poll 
> method
> --
>
> Key: KAFKA-3172
> URL: https://issues.apache.org/jira/browse/KAFKA-3172
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.9.0.0
> Environment: linux
>Reporter: Dany Benjamin
>Assignee: Neha Narkhede
>Priority: Critical
> Fix For: 0.9.0.0
>
> Attachments: jmx_info.png, jstack.png, lagSample.png
>
>
> When running multiple consumers on same group (400 - for a 400 partitioned 
> topic), the application for all threads blocks at consumer.poll() method. The 
> timeout parameter sent in is 1.
> Stack dump:
> "pool-1-thread-198" #424 prio=5 os_prio=0 tid=0x7f6bb6d53800 nid=0xc349 
> waiting on condition [0x7f63df8f7000]
>java.lang.Thread.State: WAITING (parking)
> at sun.misc.Unsafe.park(Native Method)
> - parking to wait for  <0x000605812710> (a 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
> at 
> java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
> at 
> java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> "kafka-producer-network-thread | producer-198" #423 daemon prio=5 os_prio=0 
> tid=0x7f6bb6d52000 nid=0xc348 runnable [0x7f63df9f8000]
>java.lang.Thread.State: RUNNABLE
> at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
> at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
> at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
> at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
> - locked <0x0006058283e8> (a sun.nio.ch.Util$2)
> - locked <0x0006058283d8> (a 
> java.util.Collections$UnmodifiableSet)
> - locked <0x000605828390> (a sun.nio.ch.EPollSelectorImpl)
> at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
> at org.apache.kafka.common.network.Selector.select(Selector.java:425)
> at org.apache.kafka.common.network.Selector.poll(Selector.java:254)
> at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:270)
> at 
> org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:216)
> at 
> org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:128)
> at java.lang.Thread.run(Thread.java:745)
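For context, the WAITING frames in the dump show pool threads parked in LinkedBlockingQueue.take() inside ThreadPoolExecutor.getTask() — the normal state of an idle executor worker waiting for a task, not a deadlock (though it does not explain why the consumers receive no records). A stdlib analog of that parked-but-alive state (Python; hypothetical, for illustration only):

```python
import queue
import threading
import time

tasks, results = queue.Queue(), []

def worker():
    # Blocks in Queue.get() while idle -- the analog of a pool thread
    # parked in LinkedBlockingQueue.take() waiting for work.
    results.append(tasks.get())

t = threading.Thread(target=worker, daemon=True)
t.start()
time.sleep(0.1)
assert t.is_alive() and not results   # parked, not dead: no task yet

tasks.put("record")                   # hand the thread some work...
t.join(timeout=1)
assert results == ["record"]          # ...and it wakes and completes
```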





[jira] [Commented] (KAFKA-3218) Kafka-0.9.0.0 does not work as OSGi module

2016-02-15 Thread Rajini Sivaram (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15147565#comment-15147565
 ] 

Rajini Sivaram commented on KAFKA-3218:
---

[~lowerobert] [~joconnor] Have you tried with the PR in 
https://github.com/apache/kafka/pull/888 which loads default config using 
static classloading to overcome this issue?

> Kafka-0.9.0.0 does not work as OSGi module
> --
>
> Key: KAFKA-3218
> URL: https://issues.apache.org/jira/browse/KAFKA-3218
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.9.0.0
> Environment: Apache Felix OSGi container
> jdk_1.8.0_60
>Reporter: Joe O'Connor
>Assignee: Rajini Sivaram
> Attachments: ContextClassLoaderBug.tar.gz
>
>
> KAFKA-2295 changed all Class.forName() calls to use 
> currentThread().getContextClassLoader() instead of the default "classloader 
> that loaded the current class". 
> OSGi loads each module's classes using a separate classloader so this is now 
> broken.
> Steps to reproduce: 
> # install the kafka-clients servicemix OSGi module 0.9.0.0_1
> # attempt to initialize the Kafka producer client from Java code 
> Expected results: 
> - call to "new KafkaProducer()" succeeds
> Actual results: 
> - "new KafkaProducer()" throws ConfigException:
> {quote}Suppressed: java.lang.Exception: Error starting bundle54: 
> Activator start error in bundle com.openet.testcase.ContextClassLoaderBug 
> [54].
> at 
> org.apache.karaf.bundle.command.BundlesCommand.doExecute(BundlesCommand.java:66)
> ... 12 more
> Caused by: org.osgi.framework.BundleException: Activator start error 
> in bundle com.openet.testcase.ContextClassLoaderBug [54].
> at 
> org.apache.felix.framework.Felix.activateBundle(Felix.java:2276)
> at 
> org.apache.felix.framework.Felix.startBundle(Felix.java:2144)
> at 
> org.apache.felix.framework.BundleImpl.start(BundleImpl.java:998)
> at 
> org.apache.karaf.bundle.command.Start.executeOnBundle(Start.java:38)
> at 
> org.apache.karaf.bundle.command.BundlesCommand.doExecute(BundlesCommand.java:64)
> ... 12 more
> Caused by: java.lang.ExceptionInInitializerError
> at 
> org.apache.kafka.clients.producer.KafkaProducer.(KafkaProducer.java:156)
> at com.openet.testcase.Activator.start(Activator.java:16)
> at 
> org.apache.felix.framework.util.SecureAction.startActivator(SecureAction.java:697)
> at 
> org.apache.felix.framework.Felix.activateBundle(Felix.java:2226)
> ... 16 more
> *Caused by: org.apache.kafka.common.config.ConfigException: Invalid 
> value org.apache.kafka.clients.producer.internals.DefaultPartitioner for 
> configuration partitioner.class: Class* 
> *org.apache.kafka.clients.producer.internals.DefaultPartitioner could not be 
> found.*
> at 
> org.apache.kafka.common.config.ConfigDef.parseType(ConfigDef.java:255)
> at 
> org.apache.kafka.common.config.ConfigDef.define(ConfigDef.java:78)
> at 
> org.apache.kafka.common.config.ConfigDef.define(ConfigDef.java:94)
> at 
> org.apache.kafka.clients.producer.ProducerConfig.(ProducerConfig.java:206)
> {quote}
> Workaround is to call "currentThread().setContextClassLoader(null)" before 
> initializing the kafka producer.
> Possible fix is to catch ClassNotFoundException at ConfigDef.java:247 and 
> retry the Class.forName() call with the default classloader. However with 
> this fix there is still a problem at AbstractConfig.java:206,  where the 
> newInstance() call succeeds but "instanceof" is false because the classes 
> were loaded by different classloaders.
> Testcase attached, see README.txt for instructions.
> See also SM-2743
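
The possible fix described above (retry with the default classloader when the context classloader lookup fails) might look something like the following sketch. This is only an illustration of the idea, not the actual Kafka patch, and the helper name loadClassWithFallback is invented:

```java
// Sketch of the fallback described in this issue: try the thread context
// classloader first, then fall back to the classloader that loaded the
// calling class (Kafka's own defining loader in the OSGi case).
public class ClassLoaderFallback {

    static Class<?> loadClassWithFallback(String name, Class<?> caller)
            throws ClassNotFoundException {
        ClassLoader ccl = Thread.currentThread().getContextClassLoader();
        if (ccl != null) {
            try {
                return Class.forName(name, true, ccl);
            } catch (ClassNotFoundException e) {
                // In an OSGi container the context classloader may not see
                // Kafka's internals; fall through to the defining loader.
            }
        }
        return Class.forName(name, true, caller.getClassLoader());
    }

    public static void main(String[] args) throws Exception {
        // java.lang.String resolves through either loader, so this succeeds.
        System.out.println(
            loadClassWithFallback("java.lang.String", ClassLoaderFallback.class)
                .getName());
    }
}
```

As the comment above notes, even with such a fallback, classes loaded by two different classloaders would still fail "instanceof" checks, so this alone would not be a complete fix.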



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3172) Consumer threads stay in 'Waiting' status and are blocked at consumer poll method

2016-02-15 Thread Dany Benjamin (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3172?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15147563#comment-15147563
 ] 

Dany Benjamin commented on KAFKA-3172:
--

Yes, this is the jstack dump I am getting. I started my multi-threaded consumer 
with a JMX port and I am seeing the same 'kafka-producer-network-thread | 
producer n' threads via JRockit. I am running the ConsumerGroupExample, modified 
to read my topic.

> Consumer threads stay in 'Waiting' status and are blocked at consumer poll 
> method
> --
>
> Key: KAFKA-3172
> URL: https://issues.apache.org/jira/browse/KAFKA-3172
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.9.0.0
> Environment: linux
>Reporter: Dany Benjamin
>Assignee: Neha Narkhede
>Priority: Critical
> Fix For: 0.9.0.0
>
> Attachments: jmx_info.png, jstack.png, lagSample.png
>
>
> When running multiple consumers in the same group (400, for a 400-partition 
> topic), all of the application's threads block at the consumer.poll() method. 
> The timeout parameter sent in is 1.
> Stack dump:
> "pool-1-thread-198" #424 prio=5 os_prio=0 tid=0x7f6bb6d53800 nid=0xc349 
> waiting on condition [0x7f63df8f7000]
>java.lang.Thread.State: WAITING (parking)
> at sun.misc.Unsafe.park(Native Method)
> - parking to wait for  <0x000605812710> (a 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
> at 
> java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
> at 
> java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> "kafka-producer-network-thread | producer-198" #423 daemon prio=5 os_prio=0 
> tid=0x7f6bb6d52000 nid=0xc348 runnable [0x7f63df9f8000]
>java.lang.Thread.State: RUNNABLE
> at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
> at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
> at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
> at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
> - locked <0x0006058283e8> (a sun.nio.ch.Util$2)
> - locked <0x0006058283d8> (a 
> java.util.Collections$UnmodifiableSet)
> - locked <0x000605828390> (a sun.nio.ch.EPollSelectorImpl)
> at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
> at org.apache.kafka.common.network.Selector.select(Selector.java:425)
> at org.apache.kafka.common.network.Selector.poll(Selector.java:254)
> at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:270)
> at 
> org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:216)
> at 
> org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:128)
> at java.lang.Thread.run(Thread.java:745)





[jira] [Comment Edited] (KAFKA-3093) Keep track of connector and task status info, expose it via the REST API

2016-02-15 Thread jin xing (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3093?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15147559#comment-15147559
 ] 

jin xing edited comment on KAFKA-3093 at 2/15/16 4:43 PM:
--

[~hachikuji]
It would be great if you can work on this.
I like the idea of differentiating a connector's target state from its runtime 
state.
It makes sense to catch exceptions thrown while running tasks, mark the 
connector as failed, and keep the exception from bringing down the connector's 
JVM.
One question: do we take any action when a connector's state is 'failed'? For 
example, if a bad connector config is sent and the connector fails, do we just 
leave it there, or do we roll back the config and restart the connector?
(I was on vacation and failed to track this JIRA. I would be glad to help with 
this if possible, or do you want to split this ticket into smaller ones?)


was (Author: jinxing6...@126.com):
[~hachikuji]
It's great if you can work on this;
I like the idea to differentiate connector's target state from runtime state;
It make sense to catch exceptions thrown when running tasks and avoid 
connector's jvm from falling down and mark the connector to be failed state;
One question, do we take some measures when the connector's state is 'failed';
e.g. when send a wrong connector config, the connector failed, do we just leave 
it there or roll back the connector's config and restart the connector?

> Keep track of connector and task status info, expose it via the REST API
> 
>
> Key: KAFKA-3093
> URL: https://issues.apache.org/jira/browse/KAFKA-3093
> Project: Kafka
>  Issue Type: Improvement
>  Components: copycat
>Reporter: jin xing
>Assignee: Jason Gustafson
>
> Related to KAFKA-3054.
> We should keep track of the status of connectors and tasks during their 
> startup and execution, and handle exceptions thrown by connectors and tasks.
> Users should be able to fetch this information via the REST API and send the 
> necessary commands (reconfigure, restart, pause, unpause) to connectors and 
> tasks via the REST API.





[jira] [Commented] (KAFKA-3093) Keep track of connector and task status info, expose it via the REST API

2016-02-15 Thread jin xing (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3093?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15147559#comment-15147559
 ] 

jin xing commented on KAFKA-3093:
-

[~hachikuji]
It would be great if you can work on this.
I like the idea of differentiating a connector's target state from its runtime 
state.
It makes sense to catch exceptions thrown while running tasks, mark the 
connector as failed, and keep the exception from bringing down the connector's 
JVM.
One question: do we take any action when a connector's state is 'failed'? For 
example, if a bad connector config is sent and the connector fails, do we just 
leave it there, or do we roll back the config and restart the connector?

> Keep track of connector and task status info, expose it via the REST API
> 
>
> Key: KAFKA-3093
> URL: https://issues.apache.org/jira/browse/KAFKA-3093
> Project: Kafka
>  Issue Type: Improvement
>  Components: copycat
>Reporter: jin xing
>Assignee: Jason Gustafson
>
> Related to KAFKA-3054.
> We should keep track of the status of connectors and tasks during their 
> startup and execution, and handle exceptions thrown by connectors and tasks.
> Users should be able to fetch this information via the REST API and send the 
> necessary commands (reconfigure, restart, pause, unpause) to connectors and 
> tasks via the REST API.





[jira] [Updated] (KAFKA-3224) Add timestamp-based log deletion policy

2016-02-15 Thread Bill Warshaw (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3224?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Warshaw updated KAFKA-3224:

Description: 
One of Kafka's officially-described use cases is a distributed commit log 
(http://kafka.apache.org/documentation.html#uses_commitlog). In this case, for 
a distributed service that needed a commit log, there would be a topic with a 
single partition to guarantee log order. This service would use the commit log 
to re-sync failed nodes. Kafka is generally an excellent fit for such a system, 
but it does not expose an adequate mechanism for log cleanup in such a case. 
With a distributed commit log, data can only be deleted when the client 
application determines that it is no longer needed; this creates completely 
arbitrary ranges of time and size for messages, which the existing cleanup 
mechanisms can't handle smoothly.
A new deletion policy based on the absolute timestamp of a message would work 
perfectly for this case.  The client application will periodically update the 
minimum timestamp of messages to retain, and Kafka will delete all messages 
earlier than that timestamp using the existing log cleaner thread mechanism.
This is based on the work being done in KIP-32 - Add timestamps to Kafka 
message.

h3. Initial Approach
https://github.com/apache/kafka/compare/trunk...bill-warshaw:KAFKA-3224

  was:
One of Kafka's officially-described use cases is a distributed commit log 
(http://kafka.apache.org/documentation.html#uses_commitlog). In this case, for 
a distributed service that needed a commit log, there would be a topic with a 
single partition to guarantee log order. This service would use the commit log 
to re-sync failed nodes. Kafka is generally an excellent fit for such a system, 
but it does not expose an adequate mechanism for log cleanup in such a case. 
With a distributed commit log, data can only be deleted when the client 
application determines that it is no longer needed; this creates completely 
arbitrary ranges of time and size for messages, which the existing cleanup 
mechanisms can't handle smoothly.
A new deletion policy based on the absolute timestamp of a message would work 
perfectly for this case.  The client application will periodically update the 
minimum timestamp of messages to retain, and Kafka will delete all messages 
earlier than that timestamp using the existing log cleaner thread mechanism.
This is based off of the work being done in KIP-32 - Add timestamps to Kafka 
message.

h3. Initial Approach
https://github.com/apache/kafka/commit/2c51ae3cead99432ebf19f0303f8cc797723c939


> Add timestamp-based log deletion policy
> ---
>
> Key: KAFKA-3224
> URL: https://issues.apache.org/jira/browse/KAFKA-3224
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Bill Warshaw
>  Labels: kafka
>
> One of Kafka's officially-described use cases is a distributed commit log 
> (http://kafka.apache.org/documentation.html#uses_commitlog). In this case, 
> for a distributed service that needed a commit log, there would be a topic 
> with a single partition to guarantee log order. This service would use the 
> commit log to re-sync failed nodes. Kafka is generally an excellent fit for 
> such a system, but it does not expose an adequate mechanism for log cleanup 
> in such a case. With a distributed commit log, data can only be deleted when 
> the client application determines that it is no longer needed; this creates 
> completely arbitrary ranges of time and size for messages, which the existing 
> cleanup mechanisms can't handle smoothly.
> A new deletion policy based on the absolute timestamp of a message would work 
> perfectly for this case.  The client application will periodically update the 
> minimum timestamp of messages to retain, and Kafka will delete all messages 
> earlier than that timestamp using the existing log cleaner thread mechanism.
> This is based on the work being done in KIP-32 - Add timestamps to Kafka 
> message.
> h3. Initial Approach
> https://github.com/apache/kafka/compare/trunk...bill-warshaw:KAFKA-3224
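
The proposed policy described above could be illustrated with a sketch like the one below. This is only an illustration of the retention predicate, assuming per-message timestamps as added by KIP-32; the Message class and retain method are invented here and are not the actual log-cleaner code:

```java
// Illustration of timestamp-based deletion: keep a message only if its
// timestamp is at or after the client-supplied minimum; everything earlier
// is eligible for deletion.
import java.util.ArrayList;
import java.util.List;

public class TimestampRetention {

    static final class Message {
        final long timestamp;
        final String value;
        Message(long timestamp, String value) {
            this.timestamp = timestamp;
            this.value = value;
        }
    }

    static List<Message> retain(List<Message> log, long minTimestamp) {
        List<Message> kept = new ArrayList<>();
        for (Message m : log) {
            if (m.timestamp >= minTimestamp) {  // earlier messages are deleted
                kept.add(m);
            }
        }
        return kept;
    }

    public static void main(String[] args) {
        List<Message> log = new ArrayList<>();
        log.add(new Message(100L, "a"));
        log.add(new Message(200L, "b"));
        log.add(new Message(300L, "c"));
        // With a minimum timestamp of 200, the message at 100 is dropped.
        System.out.println(retain(log, 200L).size());
    }
}
```

In the actual proposal, the client application would periodically advance the minimum timestamp and the existing log cleaner thread would perform the deletion.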





[jira] [Commented] (KAFKA-3218) Kafka-0.9.0.0 does not work as OSGi module

2016-02-15 Thread Joe O'Connor (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15147536#comment-15147536
 ] 

Joe O'Connor commented on KAFKA-3218:
-

The problem with the workaround seems to be that the "partitioner.class" config 
value has its default value specified as a string (as opposed to a class):
{code}
//ProducerConfig.java:278
.define(PARTITIONER_CLASS_CONFIG,
        Type.CLASS,
        DefaultPartitioner.class.getName(),
        Importance.MEDIUM,
        PARTITIONER_CLASS_DOC)
{code}
ConfigDef.java then calls Class.forName() on the 
"DefaultPartitioner.class.getName()" string, which triggers the bug:

{code}
Caused by: org.apache.kafka.common.config.ConfigException: Invalid value 
org.apache.kafka.clients.producer.internals.DefaultPartitioner for 
configuration partitioner.class: Class 
org.apache.kafka.clients.producer.internals.DefaultPartitioner could not be 
found.
at 
org.apache.kafka.common.config.ConfigDef.parseType(ConfigDef.java:255)
at org.apache.kafka.common.config.ConfigDef.define(ConfigDef.java:78)
at org.apache.kafka.common.config.ConfigDef.define(ConfigDef.java:94)
at 
org.apache.kafka.clients.producer.ProducerConfig.(ProducerConfig.java:206)
... 35 more
{code}
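
The setContextClassLoader(null) workaround reported in the issue can be wrapped in a save/restore pattern. The sketch below only illustrates that pattern; the KafkaProducer construction itself is elided, and the class name here is invented:

```java
// Sketch of the workaround: clear the thread context classloader so that
// context-classloader lookups fail over to the defining classloader, then
// restore it afterwards so the rest of the bundle is unaffected.
public class ContextClassLoaderWorkaround {
    public static void main(String[] args) {
        Thread t = Thread.currentThread();
        ClassLoader saved = t.getContextClassLoader();
        t.setContextClassLoader(null);
        try {
            // producer = new KafkaProducer<>(props);  // would go here
            System.out.println(t.getContextClassLoader() == null);
        } finally {
            t.setContextClassLoader(saved);  // restore for later code
        }
    }
}
```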

> Kafka-0.9.0.0 does not work as OSGi module
> --
>
> Key: KAFKA-3218
> URL: https://issues.apache.org/jira/browse/KAFKA-3218
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.9.0.0
> Environment: Apache Felix OSGi container
> jdk_1.8.0_60
>Reporter: Joe O'Connor
>Assignee: Rajini Sivaram
> Attachments: ContextClassLoaderBug.tar.gz
>
>
> KAFKA-2295 changed all Class.forName() calls to use 
> currentThread().getContextClassLoader() instead of the default "classloader 
> that loaded the current class". 
> OSGi loads each module's classes using a separate classloader so this is now 
> broken.
> Steps to reproduce: 
> # install the kafka-clients servicemix OSGi module 0.9.0.0_1
> # attempt to initialize the Kafka producer client from Java code 
> Expected results: 
> - call to "new KafkaProducer()" succeeds
> Actual results: 
> - "new KafkaProducer()" throws ConfigException:
> {quote}Suppressed: java.lang.Exception: Error starting bundle54: 
> Activator start error in bundle com.openet.testcase.ContextClassLoaderBug 
> [54].
> at 
> org.apache.karaf.bundle.command.BundlesCommand.doExecute(BundlesCommand.java:66)
> ... 12 more
> Caused by: org.osgi.framework.BundleException: Activator start error 
> in bundle com.openet.testcase.ContextClassLoaderBug [54].
> at 
> org.apache.felix.framework.Felix.activateBundle(Felix.java:2276)
> at 
> org.apache.felix.framework.Felix.startBundle(Felix.java:2144)
> at 
> org.apache.felix.framework.BundleImpl.start(BundleImpl.java:998)
> at 
> org.apache.karaf.bundle.command.Start.executeOnBundle(Start.java:38)
> at 
> org.apache.karaf.bundle.command.BundlesCommand.doExecute(BundlesCommand.java:64)
> ... 12 more
> Caused by: java.lang.ExceptionInInitializerError
> at 
> org.apache.kafka.clients.producer.KafkaProducer.(KafkaProducer.java:156)
> at com.openet.testcase.Activator.start(Activator.java:16)
> at 
> org.apache.felix.framework.util.SecureAction.startActivator(SecureAction.java:697)
> at 
> org.apache.felix.framework.Felix.activateBundle(Felix.java:2226)
> ... 16 more
> *Caused by: org.apache.kafka.common.config.ConfigException: Invalid 
> value org.apache.kafka.clients.producer.internals.DefaultPartitioner for 
> configuration partitioner.class: Class* 
> *org.apache.kafka.clients.producer.internals.DefaultPartitioner could not be 
> found.*
> at 
> org.apache.kafka.common.config.ConfigDef.parseType(ConfigDef.java:255)
> at 
> org.apache.kafka.common.config.ConfigDef.define(ConfigDef.java:78)
> at 
> org.apache.kafka.common.config.ConfigDef.define(ConfigDef.java:94)
> at 
> org.apache.kafka.clients.producer.ProducerConfig.(ProducerConfig.java:206)
> {quote}
> Workaround is to call "currentThread().setContextClassLoader(null)" before 
> initializing the kafka producer.
> Possible fix is to catch ClassNotFoundException at ConfigDef.java:247 and 
> retry the Class.forName() call with the default classloader. However with 
> this fix there is still a problem at AbstractConfig.java:206,  where the 
> newInstance() call succeeds but "instanceof" 

Re: New Consumer Poll bug?

2016-02-15 Thread Phil Steitz
On 2/13/16 6:16 AM, Ismael Juma wrote:
> Hi Damian,
>
> KAFKA-2978, which would cause consumption to stop would happen after a
> consumer group rebalance. Was this the case for you?
>
> It would be great if you could upgrade the client to 0.9.0.1 RC1 in order
> to check if the problem still happens. There were other bugs fixed in the
> 0.9.0 branch and it simplifies the analysis if we can rule them out. See
> the following for the full list:
>
> https://github.com/apache/kafka/compare/0.9.0.0...0.9.0.1

I was having a similar problem with 0.9.0.0.  I have a very simple
setup with just one broker, clients on the same host and manual
partition assignment.  The clients use poll(100) in a loop. 
Sometimes there are periods of an hour or more when there are no
messages to consume.  With 0.9.0.0, this would result in a slew of
"marking the coordinator ... dead" messages and then the clients
would just stop consuming.  With 0.9.0.1-RC1, I still get the log
messages every 9 minutes; but the clients now seem to recover or are
not affected.  I notice that 9 minutes is the default for the new
property, connections.max.idle.ms.  Do I need to increase the value
of this property? 
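
For reference, the 9-minute interval matches the default of connections.max.idle.ms (540000 ms). A minimal sketch of overriding it on the consumer, assuming 0.9.x Properties-based configuration (the broker address, group id, and the 30-minute value are placeholders, not recommendations):

```java
// Sketch of raising connections.max.idle.ms on a consumer. The default is
// 540000 ms (9 minutes); connections idle longer than this are closed.
import java.util.Properties;

public class IdleConfigExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "example-group");
        props.put("connections.max.idle.ms", "1800000");  // 30 minutes
        // A KafkaConsumer would be constructed with these properties.
        System.out.println(props.getProperty("connections.max.idle.ms"));
    }
}
```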

Phil
>
> Thanks,
> Ismael
>
> On Sat, Feb 13, 2016 at 9:43 AM, Damian Guy  wrote:
>
>> I've been having some issues with the New Consumer. I'm aware there is a
>> bug that has been fixed for 0.9.0.1, but is this the same thing?
>> I'm using manual partition assignment due to latency issues making it near
>> impossible to work with the group management features.
>>
>> So, my consumer was going along fine for most of the day - it just consumes
>> from a topic with a single partition. However, it has just stopped receiving
>> messages and I can see there is a backlog of around 100k messages to get
>> through. Since message consumption has stopped, I get the below "Marking the
>> coordinator dead" log messages every 9 minutes. I have done multiple stack
>> dumps to see what is happening, one of which is below, and it always
>> appears to be in consumer.poll.
>>
>> So.. same bug as the one I believe is fixed in 0.9.0.1? In which case I'll
>> upgrade my client to the latest from the branch. Or is this something
>> different?
>>
>> Thanks,
>> Damian
>>
>> 2016/02/13 00:07:57 131.73 MB/1.8 GB INFO
>> org.apache.kafka.clients.consumer.internals.AbstractCoordinator-
>> Marking the coordinator 2147479630 dead.
>> 2016/02/13 00:16:57 151.75 MB/1.79 GB INFO
>> org.apache.kafka.clients.consumer.internals.AbstractCoordinator-
>> Marking the coordinator 2147479630 dead.
>> 2016/02/13 00:25:57 181.07 MB/1.76 GB INFO
>> org.apache.kafka.clients.consumer.internals.AbstractCoordinator-
>> Marking the coordinator 2147479630 dead.
>> org.apache.kafka.clients.consumer.internals.AbstractCoordinator-
>> Marking the coordinator 2147479630 dead.
>>
>> "poll-kafka-1" #45 prio=5 os_prio=0 tid=0x7f7dba9da800 nid=0x52fd
>> runnable [0x7f7cecbe3000]
>>java.lang.Thread.State: RUNNABLE
>> at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
>> at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
>> at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
>> at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
>> - locked <0xac6df4c8> (a sun.nio.ch.Util$2)
>> - locked <0xac6df4b0> (a
>> java.util.Collections$UnmodifiableSet)
>> - locked <0xac53db20> (a sun.nio.ch.EPollSelectorImpl)
>> at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
>> at
>> org.apache.kafka.common.network.Selector.select(Selector.java:425)
>> at org.apache.kafka.common.network.Selector.poll(Selector.java:254)
>> at
>> org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:270)
>> at
>>
>> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.clientPoll(ConsumerNetworkClient.java:303)
>> at
>>
>> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:197)
>> at
>>
>> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:187)
>> at
>>
>> org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:877)
>> at
>>
>> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:829)
>>




Re: 0.9.0.1 RC1

2016-02-15 Thread Phil Steitz
I get the following test failures on both OSX (10.11.3) and Ubuntu
(14.04), JDK 1.7 and 1.8.

KafkaConsumerTest. testConstructorClose
KafkaProducerTest. testConstructorFailureCloseResource
SelectorTest. testNoRouteToHost
SslSelectorTest. testNoRouteToHost

All seem to be expecting exceptions that aren't thrown.   The
tarball build succeeded, though, so maybe these are expected.  Just
thought it best to report them.

My application tests passed, other than an issue shared by 0.9.0.0.
reported else-thread.

If there is another RC, two little things that would be good to
improve are:

0) the NOTICE files still have 2015 copyright dates
1) the hash files for the tarballs use a non-standard encoding,
making it harder to verify.  The sig is standard ASCII armor and good.
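
For what it's worth, computing a SHA-256 digest for comparison against the published checksum can be done in plain Java; the artifact bytes below are a placeholder, not the real tarball:

```java
// Sketch of SHA-256 verification: hash the artifact bytes and compare the
// hex string against the value published alongside the release tarball.
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class ChecksumCheck {
    static String sha256Hex(byte[] data) throws Exception {
        byte[] digest = MessageDigest.getInstance("SHA-256").digest(data);
        StringBuilder sb = new StringBuilder();
        for (byte b : digest) {
            sb.append(String.format("%02x", b));  // lowercase hex, 2 chars/byte
        }
        return sb.toString();
    }

    public static void main(String[] args) throws Exception {
        byte[] artifact = "example artifact".getBytes(StandardCharsets.UTF_8);
        String actual = sha256Hex(artifact);
        // In practice, compare 'actual' against the published .sha2 contents.
        System.out.println(actual.length());  // 64 hex chars for SHA-256
    }
}
```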

Phil


On 2/11/16 7:55 PM, Jun Rao wrote:
> This is the first candidate for release of Apache Kafka 0.9.0.1. This is a
> bug fix release that fixes 70 issues.
>
> Release Notes for the 0.9.0.1 release
> https://home.apache.org/~junrao/kafka-0.9.0.1-candidate1/RELEASE_NOTES.html
>
> *** Please download, test and vote by Tuesday, Feb. 16, 7pm PT
>
> Kafka's KEYS file containing PGP keys we use to sign the release:
> http://kafka.apache.org/KEYS in addition to the md5, sha1
> and sha2 (SHA256) checksum.
>
> * Release artifacts to be voted upon (source and binary):
> https://home.apache.org/~junrao/kafka-0.9.0.1-candidate1/
>
> * Maven artifacts to be voted upon prior to release:
> https://repository.apache.org/content/groups/staging/org/apache/kafka/
>
> * scala-doc
> https://home.apache.org/~junrao/kafka-0.9.0.1-candidate1/scaladoc/
>
> * java-doc
> https://home.apache.org/~junrao/kafka-0.9.0.1-candidate1/javadoc/
>
> * The tag to be voted upon (off the 0.9.0 branch) is the 0.9.0.1 tag
> https://git-wip-us.apache.org/repos/asf?p=kafka.git;a=tag;h=2c17685a45efe665bf5f24c0296cb8f9e1157e89
>
> * Documentation
> http://kafka.apache.org/090/documentation.html
>
> Thanks,
>
> Jun
>




[jira] [Comment Edited] (KAFKA-3218) Kafka-0.9.0.0 does not work as OSGi module

2016-02-15 Thread Robert Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15147425#comment-15147425
 ] 

Robert Lowe edited comment on KAFKA-3218 at 2/15/16 2:45 PM:
-

Suggested workaround throws error on bundle:start
{code}
org.apache.karaf.shell.support.MultiException: Error executing command on 
bundles:
Error starting bundle66: Activator start error in bundle 
com.openet.testcase.ContextClassLoaderBug [66].
at 
org.apache.karaf.shell.support.MultiException.throwIf(MultiException.java:61)
at 
org.apache.karaf.bundle.command.BundlesCommand.doExecute(BundlesCommand.java:69)
at 
org.apache.karaf.bundle.command.BundlesCommand.execute(BundlesCommand.java:54)
at 
org.apache.karaf.shell.impl.action.command.ActionCommand.execute(ActionCommand.java:83)
at 
org.apache.karaf.shell.impl.console.osgi.secured.SecuredCommand.execute(SecuredCommand.java:67)
at 
org.apache.karaf.shell.impl.console.osgi.secured.SecuredCommand.execute(SecuredCommand.java:87)
at org.apache.felix.gogo.runtime.Closure.executeCmd(Closure.java:480)
at 
org.apache.felix.gogo.runtime.Closure.executeStatement(Closure.java:406)
at org.apache.felix.gogo.runtime.Pipe.run(Pipe.java:108)
at org.apache.felix.gogo.runtime.Closure.execute(Closure.java:182)
at org.apache.felix.gogo.runtime.Closure.execute(Closure.java:119)
at 
org.apache.felix.gogo.runtime.CommandSessionImpl.execute(CommandSessionImpl.java:94)
at 
org.apache.karaf.shell.impl.console.ConsoleSessionImpl.run(ConsoleSessionImpl.java:270)
at java.lang.Thread.run(Thread.java:745)
Suppressed: java.lang.Exception: Error starting bundle66: Activator 
start error in bundle com.openet.testcase.ContextClassLoaderBug [66].
at 
org.apache.karaf.bundle.command.BundlesCommand.doExecute(BundlesCommand.java:66)
... 12 more
Caused by: org.osgi.framework.BundleException: Activator start error in 
bundle com.openet.testcase.ContextClassLoaderBug [66].
at 
org.apache.felix.framework.Felix.activateBundle(Felix.java:2270)
at org.apache.felix.framework.Felix.startBundle(Felix.java:2138)
at 
org.apache.felix.framework.BundleImpl.start(BundleImpl.java:977)
at 
org.apache.karaf.bundle.command.Start.executeOnBundle(Start.java:38)
at 
org.apache.karaf.bundle.command.BundlesCommand.doExecute(BundlesCommand.java:64)
... 12 more
Caused by: java.lang.NoClassDefFoundError: Could not initialize class 
org.apache.kafka.clients.producer.ProducerConfig
at 
org.apache.kafka.clients.producer.KafkaProducer.(KafkaProducer.java:156)
at com.openet.testcase.Activator.start(Activator.java:18)
at 
org.apache.felix.framework.util.SecureAction.startActivator(SecureAction.java:697)
at 
org.apache.felix.framework.Felix.activateBundle(Felix.java:2220)
... 16 more

{code}


was (Author: lowerobert):
Suggested fix throws error on bundle:start
{code}
org.apache.karaf.shell.support.MultiException: Error executing command on 
bundles:
Error starting bundle66: Activator start error in bundle 
com.openet.testcase.ContextClassLoaderBug [66].
at 
org.apache.karaf.shell.support.MultiException.throwIf(MultiException.java:61)
at 
org.apache.karaf.bundle.command.BundlesCommand.doExecute(BundlesCommand.java:69)
at 
org.apache.karaf.bundle.command.BundlesCommand.execute(BundlesCommand.java:54)
at 
org.apache.karaf.shell.impl.action.command.ActionCommand.execute(ActionCommand.java:83)
at 
org.apache.karaf.shell.impl.console.osgi.secured.SecuredCommand.execute(SecuredCommand.java:67)
at 
org.apache.karaf.shell.impl.console.osgi.secured.SecuredCommand.execute(SecuredCommand.java:87)
at org.apache.felix.gogo.runtime.Closure.executeCmd(Closure.java:480)
at 
org.apache.felix.gogo.runtime.Closure.executeStatement(Closure.java:406)
at org.apache.felix.gogo.runtime.Pipe.run(Pipe.java:108)
at org.apache.felix.gogo.runtime.Closure.execute(Closure.java:182)
at org.apache.felix.gogo.runtime.Closure.execute(Closure.java:119)
at 
org.apache.felix.gogo.runtime.CommandSessionImpl.execute(CommandSessionImpl.java:94)
at 
org.apache.karaf.shell.impl.console.ConsoleSessionImpl.run(ConsoleSessionImpl.java:270)
at java.lang.Thread.run(Thread.java:745)
Suppressed: java.lang.Exception: Error starting bundle66: Activator 
start error in bundle com.openet.testcase.ContextClassLoaderBug [66].
at 
org.apache.karaf.bundle.command.BundlesCommand.doExecute(BundlesCommand.java:66)
... 12 more
Caused by: org.osgi.framework.BundleException: Activator 

[jira] [Commented] (KAFKA-3218) Kafka-0.9.0.0 does not work as OSGi module

2016-02-15 Thread Robert Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15147425#comment-15147425
 ] 

Robert Lowe commented on KAFKA-3218:


Suggested fix throws error on bundle:start
{code}
org.apache.karaf.shell.support.MultiException: Error executing command on 
bundles:
Error starting bundle66: Activator start error in bundle 
com.openet.testcase.ContextClassLoaderBug [66].
at 
org.apache.karaf.shell.support.MultiException.throwIf(MultiException.java:61)
at 
org.apache.karaf.bundle.command.BundlesCommand.doExecute(BundlesCommand.java:69)
at 
org.apache.karaf.bundle.command.BundlesCommand.execute(BundlesCommand.java:54)
at 
org.apache.karaf.shell.impl.action.command.ActionCommand.execute(ActionCommand.java:83)
at 
org.apache.karaf.shell.impl.console.osgi.secured.SecuredCommand.execute(SecuredCommand.java:67)
at 
org.apache.karaf.shell.impl.console.osgi.secured.SecuredCommand.execute(SecuredCommand.java:87)
at org.apache.felix.gogo.runtime.Closure.executeCmd(Closure.java:480)
at 
org.apache.felix.gogo.runtime.Closure.executeStatement(Closure.java:406)
at org.apache.felix.gogo.runtime.Pipe.run(Pipe.java:108)
at org.apache.felix.gogo.runtime.Closure.execute(Closure.java:182)
at org.apache.felix.gogo.runtime.Closure.execute(Closure.java:119)
at 
org.apache.felix.gogo.runtime.CommandSessionImpl.execute(CommandSessionImpl.java:94)
at 
org.apache.karaf.shell.impl.console.ConsoleSessionImpl.run(ConsoleSessionImpl.java:270)
at java.lang.Thread.run(Thread.java:745)
Suppressed: java.lang.Exception: Error starting bundle66: Activator 
start error in bundle com.openet.testcase.ContextClassLoaderBug [66].
at 
org.apache.karaf.bundle.command.BundlesCommand.doExecute(BundlesCommand.java:66)
... 12 more
Caused by: org.osgi.framework.BundleException: Activator start error in 
bundle com.openet.testcase.ContextClassLoaderBug [66].
at 
org.apache.felix.framework.Felix.activateBundle(Felix.java:2270)
at org.apache.felix.framework.Felix.startBundle(Felix.java:2138)
at 
org.apache.felix.framework.BundleImpl.start(BundleImpl.java:977)
at 
org.apache.karaf.bundle.command.Start.executeOnBundle(Start.java:38)
at 
org.apache.karaf.bundle.command.BundlesCommand.doExecute(BundlesCommand.java:64)
... 12 more
Caused by: java.lang.NoClassDefFoundError: Could not initialize class 
org.apache.kafka.clients.producer.ProducerConfig
at 
org.apache.kafka.clients.producer.KafkaProducer.(KafkaProducer.java:156)
at com.openet.testcase.Activator.start(Activator.java:18)
at 
org.apache.felix.framework.util.SecureAction.startActivator(SecureAction.java:697)
at 
org.apache.felix.framework.Felix.activateBundle(Felix.java:2220)
... 16 more

{code}

> Kafka-0.9.0.0 does not work as OSGi module
> --
>
> Key: KAFKA-3218
> URL: https://issues.apache.org/jira/browse/KAFKA-3218
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.9.0.0
> Environment: Apache Felix OSGi container
> jdk_1.8.0_60
>Reporter: Joe O'Connor
>Assignee: Rajini Sivaram
> Attachments: ContextClassLoaderBug.tar.gz
>
>
> KAFKA-2295 changed all Class.forName() calls to use 
> currentThread().getContextClassLoader() instead of the default "classloader 
> that loaded the current class". 
> OSGi loads each module's classes using a separate classloader so this is now 
> broken.
> Steps to reproduce: 
> # install the kafka-clients servicemix OSGi module 0.9.0.0_1
> # attempt to initialize the Kafka producer client from Java code 
> Expected results: 
> - call to "new KafkaProducer()" succeeds
> Actual results: 
> - "new KafkaProducer()" throws ConfigException:
> {quote}Suppressed: java.lang.Exception: Error starting bundle54: 
> Activator start error in bundle com.openet.testcase.ContextClassLoaderBug 
> [54].
> 	at org.apache.karaf.bundle.command.BundlesCommand.doExecute(BundlesCommand.java:66)
> 	... 12 more
> Caused by: org.osgi.framework.BundleException: Activator start error in bundle com.openet.testcase.ContextClassLoaderBug [54].
> 	at org.apache.felix.framework.Felix.activateBundle(Felix.java:2276)
> 	at org.apache.felix.framework.Felix.startBundle(Felix.java:2144)
> 	at org.apache.felix.framework.BundleImpl.start(BundleImpl.java:998)
> 	at org.apache.karaf.bundle.command.Start.executeOnBundle(Start.java:38)
> 	at org.apache.

[jira] [Commented] (KAFKA-3135) Unexpected delay before fetch response transmission

2016-02-15 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3135?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15147379#comment-15147379
 ] 

Ismael Juma commented on KAFKA-3135:


[~hachikuji], do you know why the receive buffer size was changed to 32k in the 
new consumer?
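For context, the consumer's `receive.buffer.bytes` setting is passed down as the socket's SO_RCVBUF hint; a small stand-alone sketch of how that hint behaves (note the OS is free to round the requested value up or down):

```java
import java.net.Socket;

public class RcvBuf {
    public static void main(String[] args) throws Exception {
        try (Socket s = new Socket()) {
            // Request a 32 KB receive buffer, the value under discussion.
            // This is only a hint; the kernel may adjust it.
            s.setReceiveBufferSize(32 * 1024);
            System.out.println("effective SO_RCVBUF = " + s.getReceiveBufferSize());
        }
    }
}
```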

> Unexpected delay before fetch response transmission
> ---
>
> Key: KAFKA-3135
> URL: https://issues.apache.org/jira/browse/KAFKA-3135
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.0
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
>
> From the user list, Krzysztof Ciesielski reports the following:
> {quote}
> Scenario description:
> First, a producer writes 50 elements into a topic
> Then, a consumer starts to read, polling in a loop.
> When "max.partition.fetch.bytes" is set to a relatively small value, each
> "consumer.poll()" returns a batch of messages.
> If this value is left as default, the output tends to look like this:
> Poll returned 13793 elements
> Poll returned 13793 elements
> Poll returned 13793 elements
> Poll returned 13793 elements
> Poll returned 0 elements
> Poll returned 0 elements
> Poll returned 0 elements
> Poll returned 0 elements
> Poll returned 13793 elements
> Poll returned 13793 elements
> As we can see, there are weird "gaps" when poll returns 0 elements for some
> time. What is the reason for that? Maybe there are some good practices
> about setting "max.partition.fetch.bytes" which I don't follow?
> {quote}
> The gist to reproduce this problem is here: 
> https://gist.github.com/kciesielski/054bb4359a318aa17561.
> After some initial investigation, the delay appears to be in the server's 
> networking layer. Basically I see a delay of 5 seconds from the time that 
> Selector.send() is invoked in SocketServer.Processor with the fetch response 
> to the time that the send is completed. Using netstat in the middle of the 
> delay shows the following output:
> {code}
> tcp4   0  0  10.191.0.30.55455  10.191.0.30.9092   ESTABLISHED
> tcp4   0 102400  10.191.0.30.9092   10.191.0.30.55454  ESTABLISHED
> {code}
> From this, it looks like the data reaches the send buffer, but needs to be 
> flushed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: MINOR: Doc of topic-level configuration was ma...

2016-02-15 Thread sasakitoa
GitHub user sasakitoa opened a pull request:

https://github.com/apache/kafka/pull/915

MINOR: Doc of topic-level configuration was made understandable.

Some broker topic-level configurations must be specified as an Int or Long value.
The current docs can mislead readers into thinking that units such as GB or days
are allowed.
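For example, `retention.ms` takes a raw Long in milliseconds, not a suffixed value like "7days"; a sketch of computing it explicitly (the topic name below is hypothetical):

```shell
# Topic-level configs take plain Int/Long values, not unit suffixes.
# Seven days expressed in milliseconds:
RETENTION_MS=$((7 * 24 * 60 * 60 * 1000))
echo "retention.ms=${RETENTION_MS}"
# Then pass the numeric value when altering the topic, e.g.:
#   bin/kafka-topics.sh --zookeeper localhost:2181 --alter --topic my-topic \
#     --config retention.ms=${RETENTION_MS}
```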

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/sasakitoa/kafka conf_unit_not_allowed

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/915.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #915


commit cf21999615e884bc89e34820a03457346d020a5d
Author: Sasaki Toru 
Date:   2016-02-15T08:24:40Z

A doc of topic-level configuration was made understandable.




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-3129) Console Producer/Consumer Issue

2016-02-15 Thread Rajini Sivaram (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15147050#comment-15147050
 ] 

Rajini Sivaram commented on KAFKA-3129:
---

At the moment {{PlaintextTransportLayer.close()}} closes the socket channel and 
invalidates the selection key without waiting for the output stream to be shut 
down gracefully. Is this intentional?
{{close()}} could either be a blocking version that shuts down gracefully or a 
non-blocking version with abrupt termination, as it is now. Implementing 
{{close()}} with timeouts would be a bigger change.
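A sketch of what a "graceful" blocking close could look like (this is not Kafka's actual implementation; the demo wiring around it is illustrative): half-close the write side so the peer can read everything already sent, drain remaining input until the peer's EOF arrives, then release the channel.

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.nio.charset.StandardCharsets;

public class GracefulClose {

    // Graceful close: send FIN but keep reading, drain until the peer closes
    // its side (read returns -1), then close the channel.
    static void closeGracefully(SocketChannel ch) throws IOException {
        ch.shutdownOutput();
        ByteBuffer sink = ByteBuffer.allocate(4096);
        while (ch.read(sink) >= 0) {
            sink.clear();
        }
        ch.close();
    }

    public static void main(String[] args) throws Exception {
        try (ServerSocketChannel server = ServerSocketChannel.open()) {
            server.bind(new InetSocketAddress("127.0.0.1", 0));
            SocketChannel client = SocketChannel.open(server.getLocalAddress());
            SocketChannel accepted = server.accept();

            client.write(ByteBuffer.wrap("bye".getBytes(StandardCharsets.UTF_8)));
            client.close(); // peer finishes first in this demo, so the drain is immediate

            ByteBuffer buf = ByteBuffer.allocate(16);
            int total = 0, n;
            while ((n = accepted.read(buf)) > 0) { // read pending bytes until EOF
                total += n;
            }
            closeGracefully(accepted);

            if (total != 3 || accepted.isOpen()) {
                throw new AssertionError("expected 3 bytes read and a closed channel");
            }
            System.out.println("read " + total + " bytes before graceful close");
        }
    }
}
```

The point of the pattern is that no data written by either side is discarded, at the cost of blocking until the peer cooperates, which is exactly why a timeout would be needed in practice.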

> Console Producer/Consumer Issue
> ---
>
> Key: KAFKA-3129
> URL: https://issues.apache.org/jira/browse/KAFKA-3129
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer, producer 
>Affects Versions: 0.9.0.0
>Reporter: Vahid Hashemian
>Assignee: Neha Narkhede
> Attachments: kafka-3129.mov, server.log.abnormal.txt, 
> server.log.normal.txt
>
>
> I have been running a simple test case in which I have a text file 
> {{messages.txt}} with 1,000,000 lines (lines contain numbers from 1 to 
> 1,000,000 in ascending order). I run the console consumer like this:
> {{$ bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic test}}
> Topic {{test}} is on 1 partition with a replication factor of 1.
> Then I run the console producer like this:
> {{$ bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test < 
> messages.txt}}
> Then the console starts receiving the messages. About half the time it 
> goes all the way to 1,000,000; in the other cases it stops short, usually 
> at 999,735.
> I tried running another console consumer on another machine and both 
> consumers behave the same way. I can't see anything related to this in the 
> logs.
> I also ran the same experiment with a similar file of 10,000 lines, and am 
> getting a similar behavior. When the consumer does not receive all the 10,000 
> messages it usually stops at 9,864.
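The input file from the scenario above can be recreated with standard tools (a sketch; the exact file is not attached to the issue):

```shell
# Recreate messages.txt: the numbers 1..1,000,000 in ascending order, one per line
seq 1 1000000 > messages.txt
wc -l < messages.txt
```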


