[jira] [Commented] (KAFKA-2779) Kafka SSL transport layer leaks file descriptors

2015-11-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2779?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14996495#comment-14996495
 ] 

ASF GitHub Bot commented on KAFKA-2779:
---

GitHub user rajinisivaram opened a pull request:

https://github.com/apache/kafka/pull/460

KAFKA-2779: Close SSL socket channel on remote connection close

Close the socket channel in a finally block to avoid a file descriptor leak 
when the remote end closes the connection
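
As a rough illustration of the fix described above, here is a minimal sketch 
of the close-in-finally pattern. The field and method names (sslEngine, 
socketChannel, netWriteBuffer) are assumptions for illustration, not the 
actual SslTransportLayer code:

    import java.io.IOException;
    import java.nio.ByteBuffer;
    import java.nio.channels.SocketChannel;
    import javax.net.ssl.SSLEngine;

    // Hypothetical sketch; names are illustrative assumptions, not the
    // actual SslTransportLayer fields.
    class SslCloseSketch {
        private final SSLEngine sslEngine;
        private final SocketChannel socketChannel;
        private final ByteBuffer netWriteBuffer = ByteBuffer.allocate(16384);

        SslCloseSketch(SSLEngine sslEngine, SocketChannel socketChannel) {
            this.sslEngine = sslEngine;
            this.socketChannel = socketChannel;
        }

        public void close() throws IOException {
            try {
                sslEngine.closeOutbound();
                // wrap() encodes the close_notify alert into netWriteBuffer
                sslEngine.wrap(ByteBuffer.allocate(0), netWriteBuffer);
                netWriteBuffer.flip();
                while (netWriteBuffer.hasRemaining()) {
                    // this write throws if the remote end already closed
                    socketChannel.write(netWriteBuffer);
                }
            } finally {
                // Always release the underlying file descriptor, even when
                // the graceful close message cannot be delivered.
                socketChannel.socket().close();
                socketChannel.close();
            }
        }
    }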

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/rajinisivaram/kafka KAFKA-2779

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/460.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #460


commit c1ddae064d38f2cda09c1aa0475c142a0937f3f3
Author: Rajini Sivaram 
Date:   2015-11-09T13:04:03Z

KAFKA-2779: Close SSL socket channel on remote connection close to avoid 
file descriptor leak




> Kafka SSL transport layer leaks file descriptors
> 
>
> Key: KAFKA-2779
> URL: https://issues.apache.org/jira/browse/KAFKA-2779
> Project: Kafka
>  Issue Type: Bug
>  Components: network
>Affects Versions: 0.9.0.0
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
>Priority: Critical
>
> There is currently no transition from read() to close() in SslTransportLayer 
> to handle graceful shutdown requests. As a result, Kafka SSL connections are 
> never shut down gracefully. In addition, close() does not handle ungraceful 
> termination of connections correctly. If flush() fails because the other end 
> has performed a close (e.g. because graceful termination was not handled), 
> Kafka prints a warning and does not close the socket. This leaks file 
> descriptors.
> We are seeing a large number of open file descriptors because our health 
> checks to Kafka result in connections that are neither terminated gracefully 
> nor closed correctly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-2779: Close SSL socket channel on remote...

2015-11-09 Thread rajinisivaram
GitHub user rajinisivaram opened a pull request:

https://github.com/apache/kafka/pull/460

KAFKA-2779: Close SSL socket channel on remote connection close

Close the socket channel in a finally block to avoid a file descriptor leak 
when the remote end closes the connection

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/rajinisivaram/kafka KAFKA-2779

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/460.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #460


commit c1ddae064d38f2cda09c1aa0475c142a0937f3f3
Author: Rajini Sivaram 
Date:   2015-11-09T13:04:03Z

KAFKA-2779: Close SSL socket channel on remote connection close to avoid 
file descriptor leak




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Build failed in Jenkins: kafka_system_tests #142

2015-11-09 Thread ewen
See 

--
Started by user benstopford
Building on master in workspace 

[kafka_system_tests] $ /bin/bash /tmp/hudson8577356696053981129.sh
Running command: which virtualenv

Running command: . venv/bin/activate; cd 
 `which pip` 
uninstall -y kafkatest; `which pip` uninstall -y ducktape
Uninstalling kafkatest-0.9.0.0.dev0:
Successfully uninstalled kafkatest-0.9.0.0.dev0
Uninstalling ducktape-0.3.8:
Successfully uninstalled ducktape-0.3.8

Running command: . venv/bin/activate; cd 
 `which 
python` setup.py develop

Running command: cd 
 git pull && 
./gradlew clean jar
error: Your local changes to the following files would be overwritten by merge:
tests/kafkatest/tests/copycat_distributed_test.py
tests/kafkatest/tests/copycat_rest_test.py
tests/kafkatest/tests/copycat_test.py
Please, commit your changes or stash them before you can merge.
Aborting
Updating b4e1bdf..f2031d4

Command failed: cd 
 git pull && 
./gradlew clean jar
Running command: cd 
 vagrant destroy 
-f

Build step 'Execute shell' marked build as failure


[jira] [Updated] (KAFKA-2779) Kafka SSL transport layer leaks file descriptors

2015-11-09 Thread Rajini Sivaram (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2779?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajini Sivaram updated KAFKA-2779:
--
Status: Patch Available  (was: Open)

Move the socket close to a finally block to avoid leaking file descriptors 
when the graceful close message cannot be sent because the remote end has 
already closed the connection.

> Kafka SSL transport layer leaks file descriptors
> 
>
> Key: KAFKA-2779
> URL: https://issues.apache.org/jira/browse/KAFKA-2779
> Project: Kafka
>  Issue Type: Bug
>  Components: network
>Affects Versions: 0.9.0.0
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
>Priority: Critical
>
> There is currently no transition from read() to close() in SslTransportLayer 
> to handle graceful shutdown requests. As a result, Kafka SSL connections are 
> never shut down gracefully. In addition, close() does not handle ungraceful 
> termination of connections correctly. If flush() fails because the other end 
> has performed a close (e.g. because graceful termination was not handled), 
> Kafka prints a warning and does not close the socket. This leaks file 
> descriptors.
> We are seeing a large number of open file descriptors because our health 
> checks to Kafka result in connections that are neither terminated gracefully 
> nor closed correctly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Vector file of logo

2015-11-09 Thread Peter LaPierre
Hello there,

Does anyone have a vector file of the Apache Kafka logo that we can use?

Kind regards
~Peter


[GitHub] kafka pull request: MINOR: remove old producer in config sections ...

2015-11-09 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/468


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] kafka pull request: MINOR: Make sure generated docs don't get chec...

2015-11-09 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/469


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Created] (KAFKA-2784) Mirror maker should swallow exceptions during shutdown.

2015-11-09 Thread Jiangjie Qin (JIRA)
Jiangjie Qin created KAFKA-2784:
---

 Summary: Mirror maker should swallow exceptions during shutdown.
 Key: KAFKA-2784
 URL: https://issues.apache.org/jira/browse/KAFKA-2784
 Project: Kafka
  Issue Type: Bug
Reporter: Jiangjie Qin
Assignee: Jiangjie Qin


Mirror maker should swallow exceptions during shutdown to make sure the 
shutdown latch is pulled.
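
As a minimal sketch of the intent (the names consumer, producer and 
shutdownLatch are hypothetical, not the actual MirrorMaker code): any 
exception thrown while closing resources is swallowed so that the latch is 
always released:

    import java.util.concurrent.CountDownLatch;

    // Hypothetical sketch; names are illustrative, not the actual MirrorMaker.
    class ShutdownSketch {
        private final CountDownLatch shutdownLatch = new CountDownLatch(1);
        private final AutoCloseable consumer;
        private final AutoCloseable producer;

        ShutdownSketch(AutoCloseable consumer, AutoCloseable producer) {
            this.consumer = consumer;
            this.producer = producer;
        }

        void shutdown() {
            try {
                consumer.close();   // either close may throw during shutdown
                producer.close();
            } catch (Exception e) {
                // Swallow: a failure here must not prevent releasing the latch.
                System.err.println("Ignoring exception during shutdown: " + e);
            } finally {
                shutdownLatch.countDown();   // threads in awaitShutdown() proceed
            }
        }

        void awaitShutdown() throws InterruptedException {
            shutdownLatch.await();
        }
    }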



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: kafka-trunk-jdk8 #120

2015-11-09 Thread Apache Jenkins Server
See 

Changes:

[cshapi] MINOR: remove old producer in config sections to align with APIs

[wangguoz] MINOR: Improve exception message that gets thrown for non-existent 
group

[wangguoz] MINOR: Make sure generated docs don't get checked in

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on ubuntu-5 (docker Ubuntu ubuntu5 ubuntu) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url 
 > https://git-wip-us.apache.org/repos/asf/kafka.git # timeout=10
Fetching upstream changes from https://git-wip-us.apache.org/repos/asf/kafka.git
 > git --version # timeout=10
 > git -c core.askpass=true fetch --tags --progress 
 > https://git-wip-us.apache.org/repos/asf/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/trunk^{commit} # timeout=10
Checking out Revision e2794a79c8d2e8ae9cf30bfe10bbed34951a004b 
(refs/remotes/origin/trunk)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f e2794a79c8d2e8ae9cf30bfe10bbed34951a004b
 > git rev-list 359be3a682951fd469d690df8d9e7a5a89a9d03b # timeout=10
Setting 
GRADLE_2_1_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.1
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
[kafka-trunk-jdk8] $ /bin/bash -xe /tmp/hudson2277902522427229263.sh
+ 
/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.1/bin/gradle
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
http://gradle.org/docs/2.1/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.5
:downloadWrapper UP-TO-DATE

BUILD SUCCESSFUL

Total time: 15.803 secs
Setting 
GRADLE_2_1_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.1
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
[kafka-trunk-jdk8] $ /bin/bash -xe /tmp/hudson284449076317262977.sh
+ ./gradlew -Dorg.gradle.project.maxParallelForks=1 clean jarAll docsJarAll 
testAll
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/2.8/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.5
:clean UP-TO-DATE
:clients:clean
:connect:clean UP-TO-DATE
:core:clean
:examples:clean UP-TO-DATE
:log4j-appender:clean UP-TO-DATE
:streams:clean UP-TO-DATE
:tools:clean UP-TO-DATE
:connect:api:clean UP-TO-DATE
:connect:file:clean UP-TO-DATE
:connect:json:clean UP-TO-DATE
:connect:runtime:clean UP-TO-DATE
:jar_core_2_10_5
Building project 'core' with Scala version 2.10.5
:kafka-trunk-jdk8:clients:compileJava
:jar_core_2_10_5 FAILED

FAILURE: Build failed with an exception.

* What went wrong:
Failed to capture snapshot of input files for task 'compileJava' during 
up-to-date check.  See stacktrace for details.
> Could not add entry 
> '
>  to cache fileHashes.bin 
> (

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug 
option to get more log output.

BUILD FAILED

Total time: 15.067 secs
Build step 'Execute shell' marked build as failure
Recording test results
Setting 
GRADLE_2_1_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.1
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
ERROR: Publisher 'Publish JUnit test result report' failed: No test report 
files were found. Configuration error?
Setting 
GRADLE_2_1_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.1
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45


[jira] [Created] (KAFKA-2790) Kafka 0.9.0 doc improvement

2015-11-09 Thread Jun Rao (JIRA)
Jun Rao created KAFKA-2790:
--

 Summary: Kafka 0.9.0 doc improvement
 Key: KAFKA-2790
 URL: https://issues.apache.org/jira/browse/KAFKA-2790
 Project: Kafka
  Issue Type: Improvement
Affects Versions: 0.9.0.0
Reporter: Jun Rao
 Fix For: 0.9.0.0


Observed a few issues after uploading the 0.9.0 docs to the Apache site 
(http://kafka.apache.org/090/documentation.html).

1. There are a few places still mentioning 0.8.2.

docs/api.html:We are in the process of rewritting the JVM clients for Kafka. As 
of 0.8.2 Kafka includes a newly rewritten Java producer. The next release will 
include an equivalent Java consumer. These new clients are meant to supplant 
the existing Scala clients, but for compatability they will co-exist for some 
time. These clients are available in a seperate jar with minimal dependencies, 
while the old Scala clients remain packaged with the server.

docs/api.html:As of the 0.8.2 release we encourage all new development to use 
the new Java producer. This client is production tested and generally both 
faster and more fully featured than the previous Scala client. You can use this 
client by adding a dependency on the client jar using the following example 
maven co-ordinates (you can change the version numbers with new releases):

docs/api.html:  <version>0.8.2.0</version>

docs/ops.html:The partition reassignment tool does not have the ability to 
automatically generate a reassignment plan for decommissioning brokers yet. As 
such, the admin has to come up with a reassignment plan to move the replica for 
all partitions hosted on the broker to be decommissioned, to the rest of the 
brokers. This can be relatively tedious as the reassignment needs to ensure 
that all the replicas are not moved from the decommissioned broker to only one 
other broker. To make this process effortless, we plan to add tooling support 
for decommissioning brokers in 0.8.2.

docs/quickstart.html:<a href="https://www.apache.org/dyn/closer.cgi?path=/kafka/0.8.2.0/kafka_2.10-0.8.2.0.tgz"
 title="Kafka downloads">Download</a> the 0.8.2.0 release and un-tar it.
docs/quickstart.html: tar -xzf kafka_2.10-0.8.2.0.tgz
docs/quickstart.html: cd kafka_2.10-0.8.2.0

2. The generated config tables (broker, producer and consumer) don't have the 
proper table frames.





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2788) allow comma when specifying principals in AclCommand

2015-11-09 Thread Sriharsha Chintalapani (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2788?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sriharsha Chintalapani updated KAFKA-2788:
--
Assignee: Parth Brahmbhatt

> allow comma when specifying principals in AclCommand
> 
>
> Key: KAFKA-2788
> URL: https://issues.apache.org/jira/browse/KAFKA-2788
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.9.0.0
>Reporter: Jun Rao
>Assignee: Parth Brahmbhatt
> Fix For: 0.9.0.0
>
>
> Currently, commas don't seem to be allowed in AclCommand when specifying 
> allow-principals and deny-principals. However, when using SSL authentication, 
> the client principal will by default look like the following, and one can't 
> pass that in through AclCommand.
> "CN=writeuser,OU=Unknown,O=Unknown,L=Unknown,ST=Unknown,C=Unknown"



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2772) Stabilize replication hard bounce test

2015-11-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2772?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14997928#comment-14997928
 ] 

ASF GitHub Bot commented on KAFKA-2772:
---

GitHub user granders opened a pull request:

https://github.com/apache/kafka/pull/481

KAFKA-2772: Stabilize failures on replication with hard bounce



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/confluentinc/kafka 
KAFKA-2772-stabilize-replicationtest

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/481.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #481


commit ff171abb0874d7dd53a5dcc8d471c876b45155c9
Author: Geoff Anderson 
Date:   2015-11-09T00:49:34Z

Stabilize failures on replication with hard bounce




> Stabilize replication hard bounce test
> --
>
> Key: KAFKA-2772
> URL: https://issues.apache.org/jira/browse/KAFKA-2772
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.0
>Reporter: Geoff Anderson
>Assignee: Geoff Anderson
>Priority: Minor
>
> There have been several spurious failures of replication tests during runs of 
> kafka system tests (see for example 
> http://testing.confluent.io/kafka/2015-11-07--001/)
> {code:title=report.txt}
> Expected producer to still be producing.
> Traceback (most recent call last):
>   File 
> "/var/lib/jenkins/workspace/kafka_system_tests/kafka/venv/local/lib/python2.7/site-packages/ducktape-0.3.8-py2.7.egg/ducktape/tests/runner.py",
>  line 101, in run_all_tests
> result.data = self.run_single_test()
>   File 
> "/var/lib/jenkins/workspace/kafka_system_tests/kafka/venv/local/lib/python2.7/site-packages/ducktape-0.3.8-py2.7.egg/ducktape/tests/runner.py",
>  line 151, in run_single_test
> return self.current_test_context.function(self.current_test)
>   File 
> "/var/lib/jenkins/workspace/kafka_system_tests/kafka/venv/local/lib/python2.7/site-packages/ducktape-0.3.8-py2.7.egg/ducktape/mark/_mark.py",
>  line 331, in wrapper
> return functools.partial(f, *args, **kwargs)(*w_args, **w_kwargs)
>   File 
> "/var/lib/jenkins/workspace/kafka_system_tests/kafka/tests/kafkatest/tests/replication_test.py",
>  line 132, in test_replication_with_broker_failure
> self.run_produce_consume_validate(core_test_action=lambda: 
> failures[failure_mode](self))
>   File 
> "/var/lib/jenkins/workspace/kafka_system_tests/kafka/tests/kafkatest/tests/produce_consume_validate.py",
>  line 65, in run_produce_consume_validate
> self.stop_producer_and_consumer()
>   File 
> "/var/lib/jenkins/workspace/kafka_system_tests/kafka/tests/kafkatest/tests/produce_consume_validate.py",
>  line 55, in stop_producer_and_consumer
> err_msg="Expected producer to still be producing.")
>   File 
> "/var/lib/jenkins/workspace/kafka_system_tests/kafka/venv/local/lib/python2.7/site-packages/ducktape-0.3.8-py2.7.egg/ducktape/utils/util.py",
>  line 36, in wait_until
> raise TimeoutError(err_msg)
> TimeoutError: Expected producer to still be producing.
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2773) Vagrant provision fails if num_brokers or num_zookeepers is nonzero

2015-11-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2773?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14997927#comment-14997927
 ] 

ASF GitHub Bot commented on KAFKA-2773:
---

Github user granders closed the pull request at:

https://github.com/apache/kafka/pull/455


> Vagrant provision fails if num_brokers or num_zookeepers is nonzero
> ---
>
> Key: KAFKA-2773
> URL: https://issues.apache.org/jira/browse/KAFKA-2773
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.0
>Reporter: Geoff Anderson
>Assignee: Geoff Anderson
>Priority: Trivial
> Fix For: 0.9.0.0
>
>
> Changes to vagrant provisioning for kafka system tests updated the default 
> path from "kafka" to "kafka-trunk" on the vagrant virtual machines.
> We neglected to update the corresponding path in vagrant/broker.sh and 
> vagrant/zk.sh. Therefore provisioning a static kafka cluster with Vagrant 
> currently fails.
> The fix here is just to update the corresponding path in vagrant/broker.sh 
> and vagrant/zk.sh



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-2772: Stabilize failures on replication ...

2015-11-09 Thread granders
GitHub user granders opened a pull request:

https://github.com/apache/kafka/pull/481

KAFKA-2772: Stabilize failures on replication with hard bounce



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/confluentinc/kafka 
KAFKA-2772-stabilize-replicationtest

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/481.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #481


commit ff171abb0874d7dd53a5dcc8d471c876b45155c9
Author: Geoff Anderson 
Date:   2015-11-09T00:49:34Z

Stabilize failures on replication with hard bounce




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-2790) Kafka 0.9.0 doc improvement

2015-11-09 Thread Jun Rao (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14997824#comment-14997824
 ] 

Jun Rao commented on KAFKA-2790:


A couple of other issues.

3. Section 7.5 ZooKeeper Authentication is missing in the outline at the 
beginning.

4. Section 7.4 Authorization and ACLs doesn't have content. We need to put the 
content from the following wiki.
https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Authorization+Command+Line+Interface

> Kafka 0.9.0 doc improvement
> ---
>
> Key: KAFKA-2790
> URL: https://issues.apache.org/jira/browse/KAFKA-2790
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.9.0.0
>Reporter: Jun Rao
> Fix For: 0.9.0.0
>
>
> Observed a few issues after uploading the 0.9.0 docs to the Apache site 
> (http://kafka.apache.org/090/documentation.html).
> 1. There are a few places still mentioning 0.8.2.
> docs/api.html:We are in the process of rewritting the JVM clients for Kafka. 
> As of 0.8.2 Kafka includes a newly rewritten Java producer. The next release 
> will include an equivalent Java consumer. These new clients are meant to 
> supplant the existing Scala clients, but for compatability they will co-exist 
> for some time. These clients are available in a seperate jar with minimal 
> dependencies, while the old Scala clients remain packaged with the server.
> docs/api.html:As of the 0.8.2 release we encourage all new development to use 
> the new Java producer. This client is production tested and generally both 
> faster and more fully featured than the previous Scala client. You can use 
> this client by adding a dependency on the client jar using the following 
> example maven co-ordinates (you can change the version numbers with new 
> releases):
> docs/api.html:<version>0.8.2.0</version>
> docs/ops.html:The partition reassignment tool does not have the ability to 
> automatically generate a reassignment plan for decommissioning brokers yet. 
> As such, the admin has to come up with a reassignment plan to move the 
> replica for all partitions hosted on the broker to be decommissioned, to the 
> rest of the brokers. This can be relatively tedious as the reassignment needs 
> to ensure that all the replicas are not moved from the decommissioned broker 
> to only one other broker. To make this process effortless, we plan to add 
> tooling support for decommissioning brokers in 0.8.2.
> docs/quickstart.html: <a href="https://www.apache.org/dyn/closer.cgi?path=/kafka/0.8.2.0/kafka_2.10-0.8.2.0.tgz"
>  title="Kafka downloads">Download</a> the 0.8.2.0 release and un-tar it.
> docs/quickstart.html: tar -xzf kafka_2.10-0.8.2.0.tgz
> docs/quickstart.html: cd kafka_2.10-0.8.2.0
> 2. The generated config tables (broker, producer and consumer) don't have the 
> proper table frames.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-2793: Use ByteArrayDeserializer instead ...

2015-11-09 Thread ewencp
GitHub user ewencp opened a pull request:

https://github.com/apache/kafka/pull/482

KAFKA-2793: Use ByteArrayDeserializer instead of StringDeserializer for 
keys in ConsoleConsumer with new consumer.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ewencp/kafka 
kafka-2793-console-consumer-new-consumer-deserializer

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/482.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #482


commit b437ffa341ff9e1c8e39b74d64ba9a51785172bc
Author: Ewen Cheslack-Postava 
Date:   2015-11-10T05:44:00Z

KAFKA-2793: Use ByteArrayDeserializer instead of StringDeserializer for 
keys in ConsoleConsumer with new consumer.




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Created] (KAFKA-2793) ConsoleConsumer crashes with new consumer when using keys because of incorrect deserializer

2015-11-09 Thread Ewen Cheslack-Postava (JIRA)
Ewen Cheslack-Postava created KAFKA-2793:


 Summary: ConsoleConsumer crashes with new consumer when using keys 
because of incorrect deserializer
 Key: KAFKA-2793
 URL: https://issues.apache.org/jira/browse/KAFKA-2793
 Project: Kafka
  Issue Type: Bug
  Components: tools
Reporter: Ewen Cheslack-Postava
Assignee: Ewen Cheslack-Postava


The ConsoleConsumer class uses Array[Byte] everywhere, but the new consumer is 
configured with a string key deserializer, resulting in a class cast exception:

{quote}
java.lang.ClassCastException: java.lang.String cannot be cast to [B
at kafka.consumer.NewShinyConsumer.receive(BaseConsumer.scala:62)
at kafka.tools.ConsoleConsumer$.process(ConsoleConsumer.scala:101)
at kafka.tools.ConsoleConsumer$.run(ConsoleConsumer.scala:64)
at kafka.tools.ConsoleConsumer$.main(ConsoleConsumer.scala:42)
at kafka.tools.ConsoleConsumer.main(ConsoleConsumer.scala)
{quote}

Note that this is an issue whether you are printing the keys or not; it will be 
triggered by any non-null key (and I'd imagine some keys should also trigger 
deserialization exceptions if they are not UTF-8 decodable).
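
A standalone sketch of the corrected configuration, using ByteArrayDeserializer 
for the key as well as the value so records come back as byte[] (the bootstrap 
server and group id below are placeholders, and this is not the actual 
ConsoleConsumer code):

    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.serialization.ByteArrayDeserializer;

    public class ByteArrayConsumerSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "console-consumer");        // placeholder
            // Deserialize keys as byte[] as well as values, so a non-null
            // key can never cause the String-to-[B class cast above.
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                      ByteArrayDeserializer.class.getName());
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                      ByteArrayDeserializer.class.getName());
            KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props);
            consumer.close();
        }
    }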



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: kafka-trunk-jdk7 #793

2015-11-09 Thread Apache Jenkins Server
See 

Changes:

[cshapi] KAFKA-2786: Only respond to SinkTask onPartitionsRevoked after the

--
[...truncated 1722 lines...]

kafka.log.CleanerTest > testCleanSegments PASSED

kafka.log.CleanerTest > testCleaningWithUnkeyedMessages PASSED

kafka.log.FileMessageSetTest > testTruncate PASSED

kafka.log.FileMessageSetTest > testIterationOverPartialAndTruncation PASSED

kafka.log.FileMessageSetTest > testRead PASSED

kafka.log.FileMessageSetTest > testFileSize PASSED

kafka.log.FileMessageSetTest > testIteratorWithLimits PASSED

kafka.log.FileMessageSetTest > testPreallocateTrue PASSED

kafka.log.FileMessageSetTest > testIteratorIsConsistent PASSED

kafka.log.FileMessageSetTest > testIterationDoesntChangePosition PASSED

kafka.log.FileMessageSetTest > testWrittenEqualsRead PASSED

kafka.log.FileMessageSetTest > testWriteTo PASSED

kafka.log.FileMessageSetTest > testPreallocateFalse PASSED

kafka.log.FileMessageSetTest > testPreallocateClearShutdown PASSED

kafka.log.FileMessageSetTest > testSearch PASSED

kafka.log.FileMessageSetTest > testSizeInBytes PASSED

kafka.log.OffsetMapTest > testClear PASSED

kafka.log.OffsetMapTest > testBasicValidation PASSED

kafka.log.OffsetIndexTest > lookupExtremeCases PASSED

kafka.log.OffsetIndexTest > appendTooMany PASSED

kafka.log.OffsetIndexTest > randomLookupTest PASSED

kafka.log.OffsetIndexTest > testReopen PASSED

kafka.log.OffsetIndexTest > appendOutOfOrder PASSED

kafka.log.OffsetIndexTest > truncate PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[0] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[1] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[2] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[3] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[4] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[5] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[6] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[7] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[8] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[9] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[10] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[11] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[12] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[13] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[14] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[15] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[16] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[17] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[18] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[19] PASSED

kafka.log.LogManagerTest > testCleanupSegmentsToMaintainSize PASSED

kafka.log.LogManagerTest > testRecoveryDirectoryMappingWithRelativeDirectory 
PASSED

kafka.log.LogManagerTest > testGetNonExistentLog PASSED

kafka.log.LogManagerTest > testTwoLogManagersUsingSameDirFails PASSED

kafka.log.LogManagerTest > testLeastLoadedAssignment PASSED

kafka.log.LogManagerTest > testCleanupExpiredSegments PASSED

kafka.log.LogManagerTest > testCheckpointRecoveryPoints PASSED

kafka.log.LogManagerTest > testTimeBasedFlush PASSED

kafka.log.LogManagerTest > testCreateLog PASSED

kafka.log.LogManagerTest > testRecoveryDirectoryMappingWithTrailingSlash PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[0] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[1] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[2] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[3] PASSED

kafka.security.auth.ZkAuthorizationTest > testIsZkSecurityEnabled PASSED

kafka.security.auth.ZkAuthorizationTest > testZkUtils PASSED

kafka.security.auth.ZkAuthorizationTest > testZkAntiMigration PASSED

kafka.security.auth.ZkAuthorizationTest > testZkMigration PASSED

kafka.security.auth.ZkAuthorizationTest > testChroot PASSED

kafka.security.auth.ZkAuthorizationTest > testDelete PASSED

kafka.security.auth.ZkAuthorizationTest > testDeleteRecursive PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testAllowAllAccess PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testNoAclFound PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testAclManagementAPIs PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testWildCardAcls PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testTopicAcl PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testSuperUserHasAccess PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testDenyTakesPrecedence PASSED

kafka.security.auth.SimpleAclAuthorizerTest > testNoAclFoundOverride PASSED

kafka.security.auth.SimpleAclAuthorizerTest > 

Build failed in Jenkins: kafka-trunk-jdk7 #791

2015-11-09 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] KAFKA-2769; Multi-consumer integration tests for consumer assignment

--
[...truncated 96 lines...]
@deprecated
 ^
:389:
 class BrokerEndPoint in object UpdateMetadataRequest is deprecated: see 
corresponding Javadoc for more information.
  new UpdateMetadataRequest.BrokerEndPoint(brokerEndPoint.id, 
brokerEndPoint.host, brokerEndPoint.port)
^
:391:
 constructor UpdateMetadataRequest in class UpdateMetadataRequest is 
deprecated: see corresponding Javadoc for more information.
new UpdateMetadataRequest(controllerId, controllerEpoch, 
liveBrokers.asJava, partitionStates.asJava)
^
:129:
 method readFromReadableChannel in class NetworkReceive is deprecated: see 
corresponding Javadoc for more information.
  response.readFromReadableChannel(channel)
   ^
there were 15 feature warning(s); re-run with -feature for details
17 warnings found
:kafka-trunk-jdk7:core:processResources UP-TO-DATE
:kafka-trunk-jdk7:core:classes
:kafka-trunk-jdk7:clients:compileTestJava
Note: 
 uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.

:kafka-trunk-jdk7:clients:processTestResources
:kafka-trunk-jdk7:clients:testClasses
:kafka-trunk-jdk7:core:copyDependantLibs
:kafka-trunk-jdk7:core:copyDependantTestLibs
:kafka-trunk-jdk7:core:jar
:jar_core_2_11_7
Building project 'core' with Scala version 2.11.7
:kafka-trunk-jdk7:clients:compileJava UP-TO-DATE
:kafka-trunk-jdk7:clients:processResources UP-TO-DATE
:kafka-trunk-jdk7:clients:classes UP-TO-DATE
:kafka-trunk-jdk7:clients:determineCommitId UP-TO-DATE
:kafka-trunk-jdk7:clients:createVersionFile
:kafka-trunk-jdk7:clients:jar UP-TO-DATE
:kafka-trunk-jdk7:core:compileJava UP-TO-DATE
:kafka-trunk-jdk7:core:compileScala
:78:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.

org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP
 ^
:36:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
 commitTimestamp: Long = 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP,

  ^
:37:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
 expireTimestamp: Long = 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP) {

  ^
:399:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
  if (value.expireTimestamp == 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP)

^
:273:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
if (offsetAndMetadata.commitTimestamp == 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP)

  ^
:293:
 a pure expression does nothing in statement position; you may be omitting 
necessary parentheses
ControllerStats.uncleanLeaderElectionRate
^

[jira] [Commented] (KAFKA-2790) Kafka 0.9.0 doc improvement

2015-11-09 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14997861#comment-14997861
 ] 

Gwen Shapira commented on KAFKA-2790:
-

If you don't mind, I'd like to take this one :)

> Kafka 0.9.0 doc improvement
> ---
>
> Key: KAFKA-2790
> URL: https://issues.apache.org/jira/browse/KAFKA-2790
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.9.0.0
>Reporter: Jun Rao
>Assignee: Gwen Shapira
> Fix For: 0.9.0.0
>
>
> Observed a few issues after uploading the 0.9.0 docs to the Apache site 
> (http://kafka.apache.org/090/documentation.html).
> 1. There are a few places still mentioning 0.8.2.
> docs/api.html:We are in the process of rewritting the JVM clients for Kafka. 
> As of 0.8.2 Kafka includes a newly rewritten Java producer. The next release 
> will include an equivalent Java consumer. These new clients are meant to 
> supplant the existing Scala clients, but for compatability they will co-exist 
> for some time. These clients are available in a seperate jar with minimal 
> dependencies, while the old Scala clients remain packaged with the server.
> docs/api.html:As of the 0.8.2 release we encourage all new development to use 
> the new Java producer. This client is production tested and generally both 
> faster and more fully featured than the previous Scala client. You can use 
> this client by adding a dependency on the client jar using the following 
> example maven co-ordinates (you can change the version numbers with new 
> releases):
> docs/api.html:<version>0.8.2.0</version>
> docs/ops.html:The partition reassignment tool does not have the ability to 
> automatically generate a reassignment plan for decommissioning brokers yet. 
> As such, the admin has to come up with a reassignment plan to move the 
> replica for all partitions hosted on the broker to be decommissioned, to the 
> rest of the brokers. This can be relatively tedious as the reassignment needs 
> to ensure that all the replicas are not moved from the decommissioned broker 
> to only one other broker. To make this process effortless, we plan to add 
> tooling support for decommissioning brokers in 0.8.2.
> docs/quickstart.html: <a href="https://www.apache.org/dyn/closer.cgi?path=/kafka/0.8.2.0/kafka_2.10-0.8.2.0.tgz"
>  title="Kafka downloads">Download</a> the 0.8.2.0 release and un-tar it.
> docs/quickstart.html: tar -xzf kafka_2.10-0.8.2.0.tgz
> docs/quickstart.html: cd kafka_2.10-0.8.2.0
> 2. The generated config tables (broker, producer and consumer) don't have the 
> proper table frames.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: MINOR: Refactor .gitignore

2015-11-09 Thread granthenke
GitHub user granthenke opened a pull request:

https://github.com/apache/kafka/pull/479

MINOR: Refactor .gitignore

Refactor .gitignore with thorough coverage compiled from 
https://github.com/github/gitignore and inline documentation of why files are 
ignored.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/granthenke/kafka gitignore

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/479.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #479


commit 5c545762b97dd0e6a96e800a4b77857247036a29
Author: Grant Henke 
Date:   2015-11-10T02:44:59Z

MINOR: Refactor .gitignore

Refactor .gitignore with thorough coverage compiled from 
https://github.com/github/gitignore and inline documentation of why files are 
ignored.




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Resolved] (KAFKA-2274) Add integration test for consumer coordinator

2015-11-09 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2274?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-2274.
--
Resolution: Fixed

Issue resolved by pull request 465
[https://github.com/apache/kafka/pull/465]

> Add integration test for consumer coordinator
> -
>
> Key: KAFKA-2274
> URL: https://issues.apache.org/jira/browse/KAFKA-2274
> Project: Kafka
>  Issue Type: Sub-task
>  Components: consumer
>Reporter: Guozhang Wang
>Assignee: Jason Gustafson
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> As discussed with Onur offline, here are some things we could test / simulate:
> - consumer kill -9 (tested in ConsumerTest)
> - broker kill -9 (tested in ConsumerTest)
> - consumer very long GC
> - broker very long GC
> - consumer network cable unplugged
> - broker network cable unplugged
> - consumer power cord unplugged
> - broker power cord unplugged
> Quoting Onur: " Another motivating factor is to verify if 
> response.wasDisconnected is good enough or if we actually need consumers to 
> detect coordinator failures with timeouts.
> GCs can be simulated with SIGSTOP and SIGCONT. I think we might be able to 
> simulate the network cable being unplugged with "ifconfig eth0 down", but I'm 
> not sure."



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2274) Add integration test for consumer coordinator

2015-11-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14997885#comment-14997885
 ] 

ASF GitHub Bot commented on KAFKA-2274:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/465


> Add integration test for consumer coordinator
> -
>
> Key: KAFKA-2274
> URL: https://issues.apache.org/jira/browse/KAFKA-2274
> Project: Kafka
>  Issue Type: Sub-task
>  Components: consumer
>Reporter: Guozhang Wang
>Assignee: Jason Gustafson
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> As discussed with Onur offline, here are some things we could test / simulate:
> - consumer kill -9 (tested in ConsumerTest)
> - broker kill -9 (tested in ConsumerTest)
> - consumer very long GC
> - broker very long GC
> - consumer network cable unplugged
> - broker network cable unplugged
> - consumer power cord unplugged
> - broker power cord unplugged
> Quoting Onur: " Another motivating factor is to verify if 
> response.wasDisconnected is good enough or if we actually need consumers to 
> detect coordinator failures with timeouts.
> GCs can be simulated with SIGSTOP and SIGCONT. I think we might be able to 
> simulate the network cable being unplugged with "ifconfig eth0 down", but I'm 
> not sure."



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: kafka-trunk-jdk7 #792

2015-11-09 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] KAFKA-2274: verifiable consumer and integration testing

--
[...truncated 96 lines...]
@deprecated
 ^
:389:
 class BrokerEndPoint in object UpdateMetadataRequest is deprecated: see 
corresponding Javadoc for more information.
  new UpdateMetadataRequest.BrokerEndPoint(brokerEndPoint.id, 
brokerEndPoint.host, brokerEndPoint.port)
^
:391:
 constructor UpdateMetadataRequest in class UpdateMetadataRequest is 
deprecated: see corresponding Javadoc for more information.
new UpdateMetadataRequest(controllerId, controllerEpoch, 
liveBrokers.asJava, partitionStates.asJava)
^
:129:
 method readFromReadableChannel in class NetworkReceive is deprecated: see 
corresponding Javadoc for more information.
  response.readFromReadableChannel(channel)
   ^
there were 15 feature warning(s); re-run with -feature for details
17 warnings found
:kafka-trunk-jdk7:core:processResources UP-TO-DATE
:kafka-trunk-jdk7:core:classes
:kafka-trunk-jdk7:clients:compileTestJava
Note: 
 uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.

:kafka-trunk-jdk7:clients:processTestResources
:kafka-trunk-jdk7:clients:testClasses
:kafka-trunk-jdk7:core:copyDependantLibs
:kafka-trunk-jdk7:core:copyDependantTestLibs
:kafka-trunk-jdk7:core:jar
:jar_core_2_11_7
Building project 'core' with Scala version 2.11.7
:kafka-trunk-jdk7:clients:compileJava UP-TO-DATE
:kafka-trunk-jdk7:clients:processResources UP-TO-DATE
:kafka-trunk-jdk7:clients:classes UP-TO-DATE
:kafka-trunk-jdk7:clients:determineCommitId UP-TO-DATE
:kafka-trunk-jdk7:clients:createVersionFile
:kafka-trunk-jdk7:clients:jar UP-TO-DATE
:kafka-trunk-jdk7:core:compileJava UP-TO-DATE
:kafka-trunk-jdk7:core:compileScala
:78:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.

org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP
 ^
:36:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
 commitTimestamp: Long = 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP,

  ^
:37:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
 expireTimestamp: Long = 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP) {

  ^
:399:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
  if (value.expireTimestamp == 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP)

^
:273:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
if (offsetAndMetadata.commitTimestamp == 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP)

  ^
:293:
 a pure expression does nothing in statement position; you may be omitting 
necessary parentheses
ControllerStats.uncleanLeaderElectionRate
^

Build failed in Jenkins: kafka-trunk-jdk8 #123

2015-11-09 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] KAFKA-2274: verifiable consumer and integration testing

--
[...truncated 125 lines...]
:jar_core_2_11_7
Building project 'core' with Scala version 2.11.7
:kafka-trunk-jdk8:clients:compileJava UP-TO-DATE
:kafka-trunk-jdk8:clients:processResources UP-TO-DATE
:kafka-trunk-jdk8:clients:classes UP-TO-DATE
:kafka-trunk-jdk8:clients:determineCommitId UP-TO-DATE
:kafka-trunk-jdk8:clients:createVersionFile
:kafka-trunk-jdk8:clients:jar UP-TO-DATE
:kafka-trunk-jdk8:core:compileJava UP-TO-DATE
:kafka-trunk-jdk8:core:compileScala
Java HotSpot(TM) 64-Bit Server VM warning: 
ignoring option MaxPermSize=512m; support was removed in 8.0

:78:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.

org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP
 ^
:36:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
 commitTimestamp: Long = 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP,

  ^
:37:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
 expireTimestamp: Long = 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP) {

  ^
:399:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
  if (value.expireTimestamp == 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP)

^
:273:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
if (offsetAndMetadata.commitTimestamp == 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP)

  ^
:293:
 a pure expression does nothing in statement position; you may be omitting 
necessary parentheses
ControllerStats.uncleanLeaderElectionRate
^
:294:
 a pure expression does nothing in statement position; you may be omitting 
necessary parentheses
ControllerStats.leaderElectionTimer
^
:115:
 value METADATA_FETCH_TIMEOUT_CONFIG in object ProducerConfig is deprecated: 
see corresponding Javadoc for more information.
props.put(ProducerConfig.METADATA_FETCH_TIMEOUT_CONFIG, 
config.metadataFetchTimeoutMs.toString)
 ^
:117:
 value TIMEOUT_CONFIG in object ProducerConfig is deprecated: see corresponding 
Javadoc for more information.
props.put(ProducerConfig.TIMEOUT_CONFIG, config.requestTimeoutMs.toString)
 ^
:121:
 value BLOCK_ON_BUFFER_FULL_CONFIG in object ProducerConfig is deprecated: see 
corresponding Javadoc for more information.
  props.put(ProducerConfig.BLOCK_ON_BUFFER_FULL_CONFIG, "false")
   ^
:74:
 value BLOCK_ON_BUFFER_FULL_CONFIG in object ProducerConfig is deprecated: see 
corresponding Javadoc for more information.
producerProps.put(ProducerConfig.BLOCK_ON_BUFFER_FULL_CONFIG, "true")
 ^

[GitHub] kafka pull request: KAFKA-2773: (0.9.0 branch)Fixed broken vagrant...

2015-11-09 Thread granders
Github user granders closed the pull request at:

https://github.com/apache/kafka/pull/455


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-2793) ConsoleConsumer crashes with new consumer when using keys because of incorrect deserializer

2015-11-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2793?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14997993#comment-14997993
 ] 

ASF GitHub Bot commented on KAFKA-2793:
---

GitHub user ewencp opened a pull request:

https://github.com/apache/kafka/pull/482

KAFKA-2793: Use ByteArrayDeserializer instead of StringDeserializer for 
keys in ConsoleConsumer with new consumer.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ewencp/kafka 
kafka-2793-console-consumer-new-consumer-deserializer

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/482.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #482


commit b437ffa341ff9e1c8e39b74d64ba9a51785172bc
Author: Ewen Cheslack-Postava 
Date:   2015-11-10T05:44:00Z

KAFKA-2793: Use ByteArrayDeserializer instead of StringDeserializer for 
keys in ConsoleConsumer with new consumer.




> ConsoleConsumer crashes with new consumer when using keys because of 
> incorrect deserializer
> ---
>
> Key: KAFKA-2793
> URL: https://issues.apache.org/jira/browse/KAFKA-2793
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Reporter: Ewen Cheslack-Postava
>Assignee: Ewen Cheslack-Postava
>
> The ConsoleConsumer class uses Array[Byte] everywhere, but the new consumer 
> is configured with a string key deserializer, resulting in a class cast 
> exception:
> {quote}
> java.lang.ClassCastException: java.lang.String cannot be cast to [B
>   at kafka.consumer.NewShinyConsumer.receive(BaseConsumer.scala:62)
>   at kafka.tools.ConsoleConsumer$.process(ConsoleConsumer.scala:101)
>   at kafka.tools.ConsoleConsumer$.run(ConsoleConsumer.scala:64)
>   at kafka.tools.ConsoleConsumer$.main(ConsoleConsumer.scala:42)
>   at kafka.tools.ConsoleConsumer.main(ConsoleConsumer.scala)
> {quote}
> Note that this is an issue whether you are printing the keys or not; it will 
> be triggered by any non-null key (and I'd imagine some keys should also 
> trigger deserialization exceptions if they are not UTF-8 decodable).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-2792) KafkaConsumer.close() can block unnecessarily due to leave group waiting for a reply

2015-11-09 Thread Ewen Cheslack-Postava (JIRA)
Ewen Cheslack-Postava created KAFKA-2792:


 Summary: KafkaConsumer.close() can block unnecessarily due to 
leave group waiting for a reply
 Key: KAFKA-2792
 URL: https://issues.apache.org/jira/browse/KAFKA-2792
 Project: Kafka
  Issue Type: Bug
  Components: consumer
Reporter: Ewen Cheslack-Postava
Assignee: Ewen Cheslack-Postava
Priority: Blocker
 Fix For: 0.9.0.0


The current implementation of close() waits for a response to LeaveGroup. 
However, if we have an outstanding rebalance in the works, this can cause the 
close() operation to have to wait for the entire rebalance process to complete, 
which is annoying since the goal is to get rid of the consumer object anyway. 
This is at best surprising and at worst can cause unexpected bugs due to 
close() taking excessively long -- this was found due to exceeding timeouts 
unexpectedly causing other operations in Kafka Connect to timeout.

Waiting for a response isn't necessary since as soon as the data is in the TCP 
buffer, it'll be delivered to the broker. The client doesn't benefit at all 
from seeing the response to the leave group request. So we can instead just 
send the request and close without waiting for the response.
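
A sketch of the fire-and-forget idea described above; the interface and method 
names below are assumptions for illustration, not Kafka's internal coordinator 
or network client API:

    // Hypothetical sketch: on close(), hand LeaveGroup to the network layer
    // and return without awaiting the reply.
    interface GroupClient {
        void send(String api, byte[] request);  // enqueue a request, non-blocking
        void pollOnce();                        // one selector pass to flush sends
    }

    class ConsumerCloseSketch {
        private final GroupClient client;

        ConsumerCloseSketch(GroupClient client) {
            this.client = client;
        }

        void close(byte[] leaveGroupRequest) {
            // Fire and forget: once the bytes reach the TCP send buffer the
            // broker will receive them, and the closing consumer gains nothing
            // from blocking on the LeaveGroup response (which an in-flight
            // rebalance could delay for a long time).
            client.send("LeaveGroup", leaveGroupRequest);
            client.pollOnce();
            // ... then release sockets and other resources immediately.
        }
    }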



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: kafka-trunk-jdk8 #124

2015-11-09 Thread Apache Jenkins Server
See 

Changes:

[cshapi] KAFKA-2786: Only respond to SinkTask onPartitionsRevoked after the

--
[...truncated 125 lines...]
:jar_core_2_11_7
Building project 'core' with Scala version 2.11.7
:kafka-trunk-jdk8:clients:compileJava UP-TO-DATE
:kafka-trunk-jdk8:clients:processResources UP-TO-DATE
:kafka-trunk-jdk8:clients:classes UP-TO-DATE
:kafka-trunk-jdk8:clients:determineCommitId UP-TO-DATE
:kafka-trunk-jdk8:clients:createVersionFile
:kafka-trunk-jdk8:clients:jar UP-TO-DATE
:kafka-trunk-jdk8:core:compileJava UP-TO-DATE
:kafka-trunk-jdk8:core:compileScala
Java HotSpot(TM) 64-Bit Server VM warning: 
ignoring option MaxPermSize=512m; support was removed in 8.0

:78:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.

org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP
 ^
:36:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
 commitTimestamp: Long = 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP,

  ^
:37:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
 expireTimestamp: Long = 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP) {

  ^
:399:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
  if (value.expireTimestamp == 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP)

^
:273:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
if (offsetAndMetadata.commitTimestamp == 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP)

  ^
:293:
 a pure expression does nothing in statement position; you may be omitting 
necessary parentheses
ControllerStats.uncleanLeaderElectionRate
^
:294:
 a pure expression does nothing in statement position; you may be omitting 
necessary parentheses
ControllerStats.leaderElectionTimer
^
:115:
 value METADATA_FETCH_TIMEOUT_CONFIG in object ProducerConfig is deprecated: 
see corresponding Javadoc for more information.
props.put(ProducerConfig.METADATA_FETCH_TIMEOUT_CONFIG, 
config.metadataFetchTimeoutMs.toString)
 ^
:117:
 value TIMEOUT_CONFIG in object ProducerConfig is deprecated: see corresponding 
Javadoc for more information.
props.put(ProducerConfig.TIMEOUT_CONFIG, config.requestTimeoutMs.toString)
 ^
:121:
 value BLOCK_ON_BUFFER_FULL_CONFIG in object ProducerConfig is deprecated: see 
corresponding Javadoc for more information.
  props.put(ProducerConfig.BLOCK_ON_BUFFER_FULL_CONFIG, "false")
   ^
:74:
 value BLOCK_ON_BUFFER_FULL_CONFIG in object ProducerConfig is deprecated: see 
corresponding Javadoc for more information.
producerProps.put(ProducerConfig.BLOCK_ON_BUFFER_FULL_CONFIG, "true")
 ^

Re: Hadoop Contrib Modules

2015-11-09 Thread Grant Henke
If everyone is onboard, I will send a jira/pr shortly.

On Mon, Nov 9, 2015 at 12:12 PM, Gwen Shapira  wrote:

> +1 for dropping them. They didn't work properly in ages, everyone uses
> Camus and will probably move to Kafka connectors anyway.
>
> On Mon, Nov 9, 2015 at 10:10 AM, Grant Henke  wrote:
>
> > Are the hadoop contrib ("hadoop-consumer", "hadoop-producer") modules
> still
> > relevant going forward? Especially with the addition of Copycat (Kafka
> > Connector)?
> >
> > I only ask because if they are going to be dropped, we may want to before
> > this major release. Otherwise, we may want to consider updating them to
> use
> > the new producers/consumers and removing the dependency on core Kafka.
> >
> > Thanks,
> > Grant
> >
> > --
> > Grant Henke
> > Software Engineer | Cloudera
> > gr...@cloudera.com | twitter.com/gchenke | linkedin.com/in/granthenke
> >
>



-- 
Grant Henke
Software Engineer | Cloudera
gr...@cloudera.com | twitter.com/gchenke | linkedin.com/in/granthenke


[GitHub] kafka pull request: HOTFIX: bug updating cache when loading group ...

2015-11-09 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/462


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] kafka pull request: KAFKA-2776: Fix lookup of schema conversion ca...

2015-11-09 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/458


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] kafka pull request: KAFKA-2274: verifiable consumer and integratio...

2015-11-09 Thread hachikuji
GitHub user hachikuji opened a pull request:

https://github.com/apache/kafka/pull/465

KAFKA-2274: verifiable consumer and integration testing



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hachikuji/kafka KAFKA-2274

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/465.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #465


commit adb51dab49d358ee31c94ddcde97fd1f5025a761
Author: Jason Gustafson 
Date:   2015-11-07T01:02:22Z

KAFKA-2274: verifiable consumer and integration testing




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Re: Hadoop Contrib Modules

2015-11-09 Thread Ashish Singh
+1 on dropping them in 0.9.

On Mon, Nov 9, 2015 at 10:13 AM, Grant Henke  wrote:

> If everyone is on board, I will send a jira/pr shortly.
>
> On Mon, Nov 9, 2015 at 12:12 PM, Gwen Shapira  wrote:
>
> > +1 for dropping them. They haven't worked properly in ages, and everyone uses
> > Camus and will probably move to Kafka connectors anyway.
> >
> > On Mon, Nov 9, 2015 at 10:10 AM, Grant Henke 
> wrote:
> >
> > > Are the hadoop contrib ("hadoop-consumer", "hadoop-producer") modules
> > still
> > > relevant going forward? Especially with the addition of Copycat (Kafka
> > > Connector)?
> > >
> > > I only ask because if they are going to be dropped, we may want to do so
> > > before this major release. Otherwise, we may want to consider updating
> > > them to use the new producers/consumers and removing the dependency on
> > > core Kafka.
> > >
> > > Thanks,
> > > Grant
> > >
> > > --
> > > Grant Henke
> > > Software Engineer | Cloudera
> > > gr...@cloudera.com | twitter.com/gchenke | linkedin.com/in/granthenke
> > >
> >
>
>
>
> --
> Grant Henke
> Software Engineer | Cloudera
> gr...@cloudera.com | twitter.com/gchenke | linkedin.com/in/granthenke
>



-- 

Regards,
Ashish


[jira] [Commented] (KAFKA-2783) Drop outdated hadoop contrib modules

2015-11-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14997074#comment-14997074
 ] 

ASF GitHub Bot commented on KAFKA-2783:
---

GitHub user granthenke opened a pull request:

https://github.com/apache/kafka/pull/466

KAFKA-2783: Drop outdated hadoop contrib modules



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/granthenke/kafka drop-contrib

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/466.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #466


commit 37837d09e003943232874a8cfc08cf7c5d9019de
Author: Grant Henke 
Date:   2015-11-09T18:29:50Z

KAFKA-2783: Drop outdated hadoop contrib modules




> Drop outdated hadoop contrib modules
> 
>
> Key: KAFKA-2783
> URL: https://issues.apache.org/jira/browse/KAFKA-2783
> Project: Kafka
>  Issue Type: Task
>Affects Versions: 0.8.2.2
>Reporter: Grant Henke
>Assignee: Grant Henke
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> The hadoop contrib modules are not functional and drastically outdated. They 
> should be dropped from the build.
> If re-implemented in the future, adding them back can be considered via the 
> KIP process. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2783) Drop outdated hadoop contrib modules

2015-11-09 Thread Grant Henke (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2783?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke updated KAFKA-2783:
---
Status: Patch Available  (was: Open)

> Drop outdated hadoop contrib modules
> 
>
> Key: KAFKA-2783
> URL: https://issues.apache.org/jira/browse/KAFKA-2783
> Project: Kafka
>  Issue Type: Task
>Affects Versions: 0.8.2.2
>Reporter: Grant Henke
>Assignee: Grant Henke
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> The hadoop contrib modules are not functional and drastically outdated. They 
> should be dropped from the build.
> If re-implemented in the future, adding them back can be considered via the 
> KIP process. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2782) Incorrect assertion in KafkaBasedLogTest.testSendAndReadToEnd

2015-11-09 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2782?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-2782:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

Issue resolved by pull request 463
[https://github.com/apache/kafka/pull/463]

> Incorrect assertion in KafkaBasedLogTest.testSendAndReadToEnd
> -
>
> Key: KAFKA-2782
> URL: https://issues.apache.org/jira/browse/KAFKA-2782
> Project: Kafka
>  Issue Type: Bug
>  Components: copycat
>Reporter: Ewen Cheslack-Postava
>Assignee: Ewen Cheslack-Postava
> Fix For: 0.9.0.0
>
>
> One of the assertions is incorrect. This is a bit hard to spot because that 
> assertion is performed in a callback on a different thread, so if the 
> assertion fails it can just kill the other thread and the test times out 
> without a clear indication of what's going wrong. We can rearrange the 
> assertion a bit to test the same thing but make the assertions execute in the 
> main test thread.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2258) Port mirrormaker_testsuite

2015-11-09 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-2258:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

Issue resolved by pull request 427
[https://github.com/apache/kafka/pull/427]

> Port mirrormaker_testsuite
> --
>
> Key: KAFKA-2258
> URL: https://issues.apache.org/jira/browse/KAFKA-2258
> Project: Kafka
>  Issue Type: Sub-task
>  Components: system tests
>Reporter: Geoff Anderson
>Assignee: Geoff Anderson
> Fix For: 0.9.0.0
>
>
> Port mirrormaker_testsuite to run on ducktape



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2258) Port mirrormaker_testsuite

2015-11-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14997110#comment-14997110
 ] 

ASF GitHub Bot commented on KAFKA-2258:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/427


> Port mirrormaker_testsuite
> --
>
> Key: KAFKA-2258
> URL: https://issues.apache.org/jira/browse/KAFKA-2258
> Project: Kafka
>  Issue Type: Sub-task
>  Components: system tests
>Reporter: Geoff Anderson
>Assignee: Geoff Anderson
> Fix For: 0.9.0.0
>
>
> Port mirrormaker_testsuite to run on ducktape



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: kafka-trunk-jdk8 #117

2015-11-09 Thread Apache Jenkins Server
See 

Changes:

[junrao] KAFKA-2779; Close SSL socket channel on remote connection close

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on ubuntu-2 (docker Ubuntu ubuntu) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url 
 > https://git-wip-us.apache.org/repos/asf/kafka.git # timeout=10
Fetching upstream changes from https://git-wip-us.apache.org/repos/asf/kafka.git
 > git --version # timeout=10
 > git -c core.askpass=true fetch --tags --progress 
 > https://git-wip-us.apache.org/repos/asf/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/trunk^{commit} # timeout=10
Checking out Revision efbebc6e843850b7ed9a1d015413c99f114a7d92 
(refs/remotes/origin/trunk)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f efbebc6e843850b7ed9a1d015413c99f114a7d92
 > git rev-list f2031d40639ef34c1591c22971394ef41c87652c # timeout=10
Setting 
GRADLE_2_1_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.1
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
[kafka-trunk-jdk8] $ /bin/bash -xe /tmp/hudson3961661058356753946.sh
+ 
/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.1/bin/gradle
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
http://gradle.org/docs/2.1/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.5
:downloadWrapper UP-TO-DATE

BUILD SUCCESSFUL

Total time: 12.774 secs
Setting 
GRADLE_2_1_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.1
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
[kafka-trunk-jdk8] $ /bin/bash -xe /tmp/hudson8663357515496337607.sh
+ ./gradlew -Dorg.gradle.project.maxParallelForks=1 clean jarAll docsJarAll 
testAll
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/2.8/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.5
:clean UP-TO-DATE
:clients:clean UP-TO-DATE
:connect:clean UP-TO-DATE
:contrib:clean UP-TO-DATE
:core:clean UP-TO-DATE
:examples:clean UP-TO-DATE
:log4j-appender:clean UP-TO-DATE
:streams:clean UP-TO-DATE
:tools:clean UP-TO-DATE
:connect:api:clean UP-TO-DATE
:connect:file:clean UP-TO-DATE
:connect:json:clean UP-TO-DATE
:connect:runtime:clean UP-TO-DATE
:contrib:hadoop-consumer:clean UP-TO-DATE
:contrib:hadoop-producer:clean UP-TO-DATE
:jar_core_2_10_5
Building project 'core' with Scala version 2.10.5
:kafka-trunk-jdk8:clients:compileJava
:jar_core_2_10_5 FAILED

FAILURE: Build failed with an exception.

* What went wrong:
Failed to capture snapshot of input files for task 'compileJava' during 
up-to-date check.  See stacktrace for details.
> Could not add entry 
> '
>  to cache fileHashes.bin 
> (

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug 
option to get more log output.

BUILD FAILED

Total time: 11.001 secs
Build step 'Execute shell' marked build as failure
Recording test results
Setting 
GRADLE_2_1_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.1
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
ERROR: Publisher 'Publish JUnit test result report' failed: No test report 
files were found. Configuration error?
Setting 
GRADLE_2_1_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.1
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45


Build failed in Jenkins: kafka-trunk-jdk8 #118

2015-11-09 Thread Apache Jenkins Server
See 

Changes:

[junrao] KAFKA-2781; Only require signing artifacts when uploading archives.

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H11 (Ubuntu ubuntu) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url 
 > https://git-wip-us.apache.org/repos/asf/kafka.git # timeout=10
Fetching upstream changes from https://git-wip-us.apache.org/repos/asf/kafka.git
 > git --version # timeout=10
 > git -c core.askpass=true fetch --tags --progress 
 > https://git-wip-us.apache.org/repos/asf/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/trunk^{commit} # timeout=10
Checking out Revision a24f9a23a6d8759538e91072e8d96d158d03bb63 
(refs/remotes/origin/trunk)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f a24f9a23a6d8759538e91072e8d96d158d03bb63
 > git rev-list efbebc6e843850b7ed9a1d015413c99f114a7d92 # timeout=10
Setting 
GRADLE_2_1_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.1
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
[kafka-trunk-jdk8] $ /bin/bash -xe /tmp/hudson1062139058854447520.sh
+ 
/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.1/bin/gradle
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
http://gradle.org/docs/2.1/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.5
:downloadWrapper UP-TO-DATE

BUILD SUCCESSFUL

Total time: 18.415 secs
Setting 
GRADLE_2_1_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.1
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
[kafka-trunk-jdk8] $ /bin/bash -xe /tmp/hudson6004471617335806534.sh
+ ./gradlew -Dorg.gradle.project.maxParallelForks=1 clean jarAll docsJarAll 
testAll
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/2.8/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.5
:clean UP-TO-DATE
:clients:clean UP-TO-DATE
:connect:clean UP-TO-DATE
:contrib:clean UP-TO-DATE
:core:clean UP-TO-DATE
:examples:clean UP-TO-DATE
:log4j-appender:clean UP-TO-DATE
:streams:clean UP-TO-DATE
:tools:clean UP-TO-DATE
:connect:api:clean UP-TO-DATE
:connect:file:clean UP-TO-DATE
:connect:json:clean UP-TO-DATE
:connect:runtime:clean UP-TO-DATE
:contrib:hadoop-consumer:clean UP-TO-DATE
:contrib:hadoop-producer:clean UP-TO-DATE
:jar_core_2_10_5
Building project 'core' with Scala version 2.10.5
:kafka-trunk-jdk8:clients:compileJava
:jar_core_2_10_5 FAILED

FAILURE: Build failed with an exception.

* What went wrong:
Failed to capture snapshot of input files for task 'compileJava' during 
up-to-date check.  See stacktrace for details.
> Could not add entry 
> '
>  to cache fileHashes.bin 
> (

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug 
option to get more log output.

BUILD FAILED

Total time: 17.106 secs
Build step 'Execute shell' marked build as failure
Recording test results
Setting 
GRADLE_2_1_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.1
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
ERROR: Publisher 'Publish JUnit test result report' failed: No test report 
files were found. Configuration error?
Setting 
GRADLE_2_1_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.1
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45


Build failed in Jenkins: kafka-trunk-jdk7 #782

2015-11-09 Thread Apache Jenkins Server
See 

Changes:

[junrao] KAFKA-2781; Only require signing artifacts when uploading archives.

--
[...truncated 110 lines...]
17 warnings found
:kafka-trunk-jdk7:core:processResources UP-TO-DATE
:kafka-trunk-jdk7:core:classes
:kafka-trunk-jdk7:clients:compileTestJava
Note: 

 uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.

:kafka-trunk-jdk7:clients:processTestResources
:kafka-trunk-jdk7:clients:testClasses
:kafka-trunk-jdk7:core:copyDependantLibs
:kafka-trunk-jdk7:core:copyDependantTestLibs
:kafka-trunk-jdk7:core:jar
:jar_core_2_11_7
Building project 'core' with Scala version 2.11.7
:kafka-trunk-jdk7:clients:compileJava UP-TO-DATE
:kafka-trunk-jdk7:clients:processResources UP-TO-DATE
:kafka-trunk-jdk7:clients:classes UP-TO-DATE
:kafka-trunk-jdk7:clients:determineCommitId UP-TO-DATE
:kafka-trunk-jdk7:clients:createVersionFile
:kafka-trunk-jdk7:clients:jar UP-TO-DATE
:kafka-trunk-jdk7:core:compileJava UP-TO-DATE
:kafka-trunk-jdk7:core:compileScala
:78:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.

org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP
 ^
:36:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
 commitTimestamp: Long = 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP,

  ^
:37:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
 expireTimestamp: Long = 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP) {

  ^
:392:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
  if (value.expireTimestamp == 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP)

^
:273:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
if (offsetAndMetadata.commitTimestamp == 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP)

  ^
:293:
 a pure expression does nothing in statement position; you may be omitting 
necessary parentheses
ControllerStats.uncleanLeaderElectionRate
^
:294:
 a pure expression does nothing in statement position; you may be omitting 
necessary parentheses
ControllerStats.leaderElectionTimer
^
:115:
 value METADATA_FETCH_TIMEOUT_CONFIG in object ProducerConfig is deprecated: 
see corresponding Javadoc for more information.
props.put(ProducerConfig.METADATA_FETCH_TIMEOUT_CONFIG, 
config.metadataFetchTimeoutMs.toString)
 ^
:117:
 value TIMEOUT_CONFIG in object ProducerConfig is deprecated: see corresponding 
Javadoc for more information.
props.put(ProducerConfig.TIMEOUT_CONFIG, config.requestTimeoutMs.toString)
 ^
:121:
 value BLOCK_ON_BUFFER_FULL_CONFIG in object ProducerConfig is deprecated: see 
corresponding Javadoc for more information.
  

[jira] [Created] (KAFKA-2783) Drop outdated hadoop contrib modules

2015-11-09 Thread Grant Henke (JIRA)
Grant Henke created KAFKA-2783:
--

 Summary: Drop outdated hadoop contrib modules
 Key: KAFKA-2783
 URL: https://issues.apache.org/jira/browse/KAFKA-2783
 Project: Kafka
  Issue Type: Task
Affects Versions: 0.8.2.2
Reporter: Grant Henke
Assignee: Grant Henke
Priority: Blocker
 Fix For: 0.9.0.0


The hadoop contrib modules are not functional and drastically outdated. They 
should be dropped from the build.

If re-implemented in the future, adding them back can be considered via the KIP 
process. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: HOTFIX: bug updating cache when loading group ...

2015-11-09 Thread hachikuji
Github user hachikuji closed the pull request at:

https://github.com/apache/kafka/pull/464


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-2783) Drop outdated hadoop contrib modules

2015-11-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14997146#comment-14997146
 ] 

ASF GitHub Bot commented on KAFKA-2783:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/466


> Drop outdated hadoop contrib modules
> 
>
> Key: KAFKA-2783
> URL: https://issues.apache.org/jira/browse/KAFKA-2783
> Project: Kafka
>  Issue Type: Task
>Affects Versions: 0.8.2.2
>Reporter: Grant Henke
>Assignee: Grant Henke
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> The hadoop contrib modules are not functional and drastically outdated. They 
> should be dropped from the build.
> If re-implemented in the future, adding them back can be considered via the 
> KIP process. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2783) Drop outdated hadoop contrib modules

2015-11-09 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2783?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-2783:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

Issue resolved by pull request 466
[https://github.com/apache/kafka/pull/466]

> Drop outdated hadoop contrib modules
> 
>
> Key: KAFKA-2783
> URL: https://issues.apache.org/jira/browse/KAFKA-2783
> Project: Kafka
>  Issue Type: Task
>Affects Versions: 0.8.2.2
>Reporter: Grant Henke
>Assignee: Grant Henke
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> The hadoop contrib modules are not functional and drastically outdated. They 
> should be dropped from the build.
> If re-implemented in the future, adding them back can be considered via the 
> KIP process. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-2674: clarify onPartitionsRevoked behavi...

2015-11-09 Thread hachikuji
GitHub user hachikuji opened a pull request:

https://github.com/apache/kafka/pull/467

KAFKA-2674: clarify onPartitionsRevoked behavior



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hachikuji/kafka KAFKA-2674

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/467.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #467


commit 7922adfe74278f7f003d4053c7b6e06f618ab1a6
Author: Jason Gustafson 
Date:   2015-11-09T19:02:44Z

KAFKA-2674: clarify onPartitionsRevoked behavior




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-2674) ConsumerRebalanceListener.onPartitionsRevoked() is not called on consumer close

2015-11-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2674?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14997148#comment-14997148
 ] 

ASF GitHub Bot commented on KAFKA-2674:
---

GitHub user hachikuji opened a pull request:

https://github.com/apache/kafka/pull/467

KAFKA-2674: clarify onPartitionsRevoked behavior



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hachikuji/kafka KAFKA-2674

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/467.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #467


commit 7922adfe74278f7f003d4053c7b6e06f618ab1a6
Author: Jason Gustafson 
Date:   2015-11-09T19:02:44Z

KAFKA-2674: clarify onPartitionsRevoked behavior




> ConsumerRebalanceListener.onPartitionsRevoked() is not called on consumer 
> close
> ---
>
> Key: KAFKA-2674
> URL: https://issues.apache.org/jira/browse/KAFKA-2674
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.9.0.0
>Reporter: Michal Turek
>Assignee: Jason Gustafson
>
> Hi, I'm investigating and testing the behavior of the new consumer from the 
> planned 0.9 release and found an inconsistency in how rebalance callbacks are 
> called.
> I noticed that ConsumerRebalanceListener.onPartitionsRevoked() is NOT called 
> during consumer close and application shutdown. Its JavaDoc contract says:
> - "This method will be called before a rebalance operation starts and after 
> the consumer stops fetching data."
> - "It is recommended that offsets should be committed in this callback to 
> either Kafka or a custom offset store to prevent duplicate data."
> I believe calling consumer.close() is the start of a rebalance operation, and 
> even the local consumer that is actually closing should be notified so it can 
> run any rebalance logic, including an offsets commit (e.g. if auto-commit is 
> disabled).
> Annotated logs of the current and expected behaviors follow.
> {noformat}
> // Application start
> 2015-10-20 15:14:02.208 INFO  o.a.k.common.utils.AppInfoParser
> [TestConsumer-worker-0]: Kafka version : 0.9.0.0-SNAPSHOT 
> (AppInfoParser.java:82)
> 2015-10-20 15:14:02.208 INFO  o.a.k.common.utils.AppInfoParser
> [TestConsumer-worker-0]: Kafka commitId : 241b9ab58dcbde0c 
> (AppInfoParser.java:83)
> // Consumer started (the first one in group), rebalance callbacks are called 
> including empty onPartitionsRevoked()
> 2015-10-20 15:14:02.333 INFO  c.a.e.kafka.newapi.TestConsumer 
> [TestConsumer-worker-0]: Rebalance callback, revoked: [] 
> (TestConsumer.java:95)
> 2015-10-20 15:14:02.343 INFO  c.a.e.kafka.newapi.TestConsumer 
> [TestConsumer-worker-0]: Rebalance callback, assigned: [testB-1, testA-0, 
> testB-0, testB-3, testA-2, testB-2, testA-1, testA-4, testB-4, testA-3] 
> (TestConsumer.java:100)
> // Another consumer joined the group, rebalancing
> 2015-10-20 15:14:17.345 INFO  o.a.k.c.c.internals.Coordinator 
> [TestConsumer-worker-0]: Attempt to heart beat failed since the group is 
> rebalancing, try to re-join group. (Coordinator.java:714)
> 2015-10-20 15:14:17.346 INFO  c.a.e.kafka.newapi.TestConsumer 
> [TestConsumer-worker-0]: Rebalance callback, revoked: [testB-1, testA-0, 
> testB-0, testB-3, testA-2, testB-2, testA-1, testA-4, testB-4, testA-3] 
> (TestConsumer.java:95)
> 2015-10-20 15:14:17.349 INFO  c.a.e.kafka.newapi.TestConsumer 
> [TestConsumer-worker-0]: Rebalance callback, assigned: [testB-3, testA-4, 
> testB-4, testA-3] (TestConsumer.java:100)
> // Consumer started closing, there SHOULD be onPartitionsRevoked() callback 
> to commit offsets like during standard rebalance, but it is missing
> 2015-10-20 15:14:39.280 INFO  c.a.e.kafka.newapi.TestConsumer [main]: 
> Closing instance (TestConsumer.java:42)
> 2015-10-20 15:14:40.264 INFO  c.a.e.kafka.newapi.TestConsumer 
> [TestConsumer-worker-0]: Worker thread stopped (TestConsumer.java:89)
> {noformat}
> A workaround is to call onPartitionsRevoked() explicitly just before calling 
> consumer.close(), but that seems dirty and error-prone to me, and it can 
> simply be forgotten by someone without such experience.
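As a concrete illustration of that workaround, here is a minimal sketch against
the new (0.9) consumer API. The broker address, group id, and topic names are
placeholders; only the two lines before close() are the point.

    import java.util.Arrays;
    import java.util.Collection;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.TopicPartition;

    public class RevokeOnCloseWorkaround {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092"); // placeholder
            props.put("group.id", "test-group");              // placeholder
            props.put("key.deserializer",
                    "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer",
                    "org.apache.kafka.common.serialization.StringDeserializer");

            ConsumerRebalanceListener listener = new ConsumerRebalanceListener() {
                public void onPartitionsAssigned(Collection<TopicPartition> partitions) {}
                public void onPartitionsRevoked(Collection<TopicPartition> partitions) {
                    // commit offsets to Kafka or a custom store here
                }
            };

            KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
            consumer.subscribe(Arrays.asList("testA", "testB"), listener);
            try {
                // ... poll loop ...
            } finally {
                // Workaround: close() does not fire the callback, so invoke it
                // manually to get the same offset-commit hook as a rebalance.
                listener.onPartitionsRevoked(consumer.assignment());
                consumer.close();
            }
        }
    }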



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-2783: Drop outdated hadoop contrib modul...

2015-11-09 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/466


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Re: Contribution

2015-11-09 Thread Jun Rao
Sylwester,

Thanks for your interest. Just added you to the contributor list.

Jun

On Mon, Nov 9, 2015 at 4:07 AM, Sylwester Klocek 
wrote:

> Hello,
>
> I would like to start contributing to the Kafka project. That is why I am
> asking for JIRA credentials. My username: Zixxy.
>
>
> I am looking forward to hearing from you.
>
> Best,
> Sylwester Klocek
>


[jira] [Commented] (KAFKA-2778) Use zero loss settings for producer in Kafka Connect

2015-11-09 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2778?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14997016#comment-14997016
 ] 

Gwen Shapira commented on KAFKA-2778:
-

https://github.com/apache/kafka/pull/459

> Use zero loss settings for producer in Kafka Connect
> 
>
> Key: KAFKA-2778
> URL: https://issues.apache.org/jira/browse/KAFKA-2778
> Project: Kafka
>  Issue Type: Bug
>  Components: copycat
>Reporter: Ewen Cheslack-Postava
>Assignee: Ewen Cheslack-Postava
> Fix For: 0.9.0.0
>
>
> The producer in Kafka Connect is not configured with zero-loss settings (and 
> the configuration was written before the client timeout patch, so it would 
> have been using outdated settings regardless). It should be updated to use 
> settings that, by default, guarantee zero loss but can be overridden. In the 
> case where we do see an error, the task already exits.
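As a rough sketch of what "zero loss by default, overridable" means in producer
terms (illustrative values only; the exact defaults Connect adopts are in the
linked pull request):

    import java.util.Map;
    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;

    public class ZeroLossProducerDefaults {
        static KafkaProducer<byte[], byte[]> createProducer(Map<String, String> overrides) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092"); // placeholder
            props.put("key.serializer",
                    "org.apache.kafka.common.serialization.ByteArraySerializer");
            props.put("value.serializer",
                    "org.apache.kafka.common.serialization.ByteArraySerializer");
            // Zero-loss-leaning defaults: wait for all in-sync replicas, retry
            // transient failures aggressively, and allow only one in-flight
            // request so retries cannot reorder records.
            props.put("acks", "all");
            props.put("retries", Integer.toString(Integer.MAX_VALUE));
            props.put("max.in.flight.requests.per.connection", "1");
            // User-supplied settings win over the defaults above.
            props.putAll(overrides);
            return new KafkaProducer<>(props);
        }
    }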



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Hadoop Contrib Modules

2015-11-09 Thread Grant Henke
Are the hadoop contrib ("hadoop-consumer", "hadoop-producer") modules still
relevant going forward? Especially with the addition of Copycat (Kafka
Connector)?

I only ask because if they are going to be dropped, we may want to do so before
this major release. Otherwise, we may want to consider updating them to use
the new producers/consumers and removing the dependency on core Kafka.

Thanks,
Grant

-- 
Grant Henke
Software Engineer | Cloudera
gr...@cloudera.com | twitter.com/gchenke | linkedin.com/in/granthenke


[jira] [Created] (KAFKA-2782) Incorrect assertion in KafkaBasedLogTest.testSendAndReadToEnd

2015-11-09 Thread Ewen Cheslack-Postava (JIRA)
Ewen Cheslack-Postava created KAFKA-2782:


 Summary: Incorrect assertion in 
KafkaBasedLogTest.testSendAndReadToEnd
 Key: KAFKA-2782
 URL: https://issues.apache.org/jira/browse/KAFKA-2782
 Project: Kafka
  Issue Type: Bug
  Components: copycat
Reporter: Ewen Cheslack-Postava
Assignee: Ewen Cheslack-Postava
 Fix For: 0.9.0.0


One of the assertions is incorrect. This is a bit hard to spot because that 
assertion is performed in a callback on a different thread, so if the assertion 
fails it can just kill the other thread and the test times out without a clear 
indication of what's going wrong. We can rearrange the assertion a bit to test 
the same thing but make the assertions execute in the main test thread.
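The general pattern, independent of this particular test: have the callback only
record what happened, and do the asserting on the main thread. A generic sketch
(the Callback interface and doAsyncWork are stand-ins, not the real test code):

    import static org.junit.Assert.assertEquals;
    import static org.junit.Assert.assertTrue;

    import java.util.concurrent.CountDownLatch;
    import java.util.concurrent.TimeUnit;
    import java.util.concurrent.atomic.AtomicReference;
    import org.junit.Test;

    public class CallbackAssertionPattern {
        interface Callback { void onCompletion(String result); }

        // Stand-in for the code under test: completes on another thread.
        static void doAsyncWork(final Callback cb) {
            new Thread(new Runnable() {
                public void run() { cb.onCompletion("expected"); }
            }).start();
        }

        @Test
        public void assertsOnMainThread() throws Exception {
            final CountDownLatch done = new CountDownLatch(1);
            final AtomicReference<String> result = new AtomicReference<String>();
            doAsyncWork(new Callback() {
                public void onCompletion(String r) {
                    // No assertions here: a failure would kill this thread and
                    // surface only as a timeout. Just record the outcome.
                    result.set(r);
                    done.countDown();
                }
            });
            assertTrue("callback never fired", done.await(5, TimeUnit.SECONDS));
            assertEquals("expected", result.get());
        }
    }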



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-2782: Fix KafkaBasedLogTest assertion an...

2015-11-09 Thread ewencp
GitHub user ewencp opened a pull request:

https://github.com/apache/kafka/pull/463

KAFKA-2782: Fix KafkaBasedLogTest assertion and move it to the main test 
thread.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ewencp/kafka 
kafka-2782-fix-kafka-based-log-test-assertion

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/463.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #463


commit 06b2ac5f5a374bd3d46926d865f951aabc16ccfb
Author: Ewen Cheslack-Postava 
Date:   2015-11-09T18:18:29Z

KAFKA-2782: Fix KafkaBasedLogTest assertion and move it to the main test 
thread.




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-2782) Incorrect assertion in KafkaBasedLogTest.testSendAndReadToEnd

2015-11-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14997050#comment-14997050
 ] 

ASF GitHub Bot commented on KAFKA-2782:
---

GitHub user ewencp opened a pull request:

https://github.com/apache/kafka/pull/463

KAFKA-2782: Fix KafkaBasedLogTest assertion and move it to the main test 
thread.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ewencp/kafka 
kafka-2782-fix-kafka-based-log-test-assertion

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/463.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #463


commit 06b2ac5f5a374bd3d46926d865f951aabc16ccfb
Author: Ewen Cheslack-Postava 
Date:   2015-11-09T18:18:29Z

KAFKA-2782: Fix KafkaBasedLogTest assertion and move it to the main test 
thread.




> Incorrect assertion in KafkaBasedLogTest.testSendAndReadToEnd
> -
>
> Key: KAFKA-2782
> URL: https://issues.apache.org/jira/browse/KAFKA-2782
> Project: Kafka
>  Issue Type: Bug
>  Components: copycat
>Reporter: Ewen Cheslack-Postava
>Assignee: Ewen Cheslack-Postava
> Fix For: 0.9.0.0
>
>
> One of the assertions is incorrect. This is a bit hard to spot because that 
> assertion is performed in a callback on a different thread, so if the 
> assertion fails it can just kill the other thread and the test times out 
> without a clear indication of what's going wrong. We can rearrange the 
> assertion a bit to test the same thing but make the assertions execute in the 
> main test thread.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2274) Add integration test for consumer coordinator

2015-11-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14997067#comment-14997067
 ] 

ASF GitHub Bot commented on KAFKA-2274:
---

GitHub user hachikuji opened a pull request:

https://github.com/apache/kafka/pull/465

KAFKA-2274: verifiable consumer and integration testing



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hachikuji/kafka KAFKA-2274

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/465.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #465


commit adb51dab49d358ee31c94ddcde97fd1f5025a761
Author: Jason Gustafson 
Date:   2015-11-07T01:02:22Z

KAFKA-2274: verifiable consumer and integration testing




> Add integration test for consumer coordinator
> -
>
> Key: KAFKA-2274
> URL: https://issues.apache.org/jira/browse/KAFKA-2274
> Project: Kafka
>  Issue Type: Sub-task
>  Components: consumer
>Reporter: Guozhang Wang
>Assignee: Jason Gustafson
>Priority: Blocker
> Fix For: 0.9.0.0
>
>
> As discussed with Onur offline, here are some things we could test / simulate:
> - consumer kill -9 (tested in ConsumerTest)
> - broker kill -9 (tested in ConsumerTest)
> - consumer very long GC
> - broker very long GC
> - consumer network cable unplugged
> - broker network cable unplugged
> - consumer power cord unplugged
> - broker power cord unplugged
> Quoting Onur: "Another motivating factor is to verify if
> response.wasDisconnected is good enough or if we actually need consumers to
> detect coordinator failures with timeouts.
> GCs can be simulated with SIGSTOP and SIGCONT. I think we might be able to
> simulate the network cable being unplugged with "ifconfig eth0 down", but I'm
> not sure."
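The SIGSTOP/SIGCONT idea quoted above is easy to script from a test harness. A
Unix-only sketch (the target pid and pause length are placeholders):

    import java.util.concurrent.TimeUnit;

    public class GcPauseSimulator {
        // Freeze a JVM to mimic a long GC pause, then let it resume.
        static void simulatePause(long pid, long pauseMillis) throws Exception {
            new ProcessBuilder("kill", "-STOP", Long.toString(pid)).start().waitFor();
            Thread.sleep(pauseMillis); // process is frozen; heartbeats stop
            new ProcessBuilder("kill", "-CONT", Long.toString(pid)).start().waitFor();
        }

        public static void main(String[] args) throws Exception {
            long pid = Long.parseLong(args[0]); // pid of the consumer/broker JVM
            simulatePause(pid, TimeUnit.SECONDS.toMillis(30));
        }
    }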



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2782) Incorrect assertion in KafkaBasedLogTest.testSendAndReadToEnd

2015-11-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14997092#comment-14997092
 ] 

ASF GitHub Bot commented on KAFKA-2782:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/463


> Incorrect assertion in KafkaBasedLogTest.testSendAndReadToEnd
> -
>
> Key: KAFKA-2782
> URL: https://issues.apache.org/jira/browse/KAFKA-2782
> Project: Kafka
>  Issue Type: Bug
>  Components: copycat
>Reporter: Ewen Cheslack-Postava
>Assignee: Ewen Cheslack-Postava
> Fix For: 0.9.0.0
>
>
> One of the assertions is incorrect. This is a bit hard to spot because that 
> assertion is performed in a callback on a different thread, so if the 
> assertion fails it can just kill the other thread and the test times out 
> without a clear indication of what's going wrong. We can rearrange the 
> assertion a bit to test the same thing but make the assertions execute in the 
> main test thread.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-2782: Fix KafkaBasedLogTest assertion an...

2015-11-09 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/463


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-2747) Message loss if mirror maker is killed with hard kill and then restarted

2015-11-09 Thread Jiangjie Qin (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14997087#comment-14997087
 ] 

Jiangjie Qin commented on KAFKA-2747:
-

[~geoffra] Not sure how this happens. I dumped the log segments from the source 
and target clusters and found that the following messages are missing:
{noformat}
offset 34291 (target) > offset 34079 (source)
offset 34292 (target) > offset 57421 (source)
{noformat}
From the mirror maker log, it does not seem there is any problem:
{noformat}
[2015-11-09 00:18:43,870] INFO Starting mirror maker (kafka.tools.MirrorMaker$)

[2015-11-09 00:19:17,744] DEBUG Committed offset 28549 for partition topic-0 
(org.apache.kafka.clients.consumer.internals.ConsumerCoordinator)
. (hard kill)
[2015-11-09 00:19:23,772] INFO Starting mirror maker (kafka.tools.MirrorMaker$)

[2015-11-09 00:19:17,744] DEBUG Committed offset 28549 for partition topic-0 
(org.apache.kafka.clients.consumer.internals.ConsumerCoordinator)
...
[2015-11-09 00:19:49,099] DEBUG Committed offset 57421 for partition topic-0 
(org.apache.kafka.clients.consumer.internals.ConsumerCoordinator)
... (hard kill)
[2015-11-09 00:19:55,351] INFO Starting mirror maker (kafka.tools.MirrorMaker$)

[2015-11-09 00:20:16,608] DEBUG Resetting offset for partition topic-0 to the 
committed offset 57421 (org.apache.kafka.clients.consumer.internals.Fetcher)

[2015-11-09 00:20:19,076] DEBUG Committed offset 87423 for partition topic-0 
(org.apache.kafka.clients.consumer.internals.ConsumerCoordinator)
(Another hard kill)
[2015-11-09 00:20:26,268] INFO Starting mirror maker (kafka.tools.MirrorMaker$)

[2015-11-09 00:20:46,612] DEBUG Resetting offset for partition topic-0 to the 
committed offset 87423 (org.apache.kafka.clients.consumer.internals.Fetcher)
...
[2015-11-09 00:21:52,803] DEBUG Committed offset 120567 for partition topic-0 
(org.apache.kafka.clients.consumer.internals.ConsumerCoordinator)
...
[2015-11-09 00:21:52,829] INFO [mirrormaker-thread-0] Mirror maker thread 
shutdown complete (kafka.tools.MirrorMaker$MirrorMakerThread)
{noformat}

It looks like mirror maker committed offset 57421 when the messages between 
34079 and 57421 had not been sent successfully. I am not sure how this happened, 
because in mirror maker we always call producer.flush() before committing 
offsets. Can you turn on trace-level logging on MirrorMaker and the broker for 
further debugging?
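For reference, the flush-then-commit ordering being relied on looks roughly like
this (a sketch, not MirrorMaker's actual code; topic, partition, and offset are
placeholders):

    import java.util.Collections;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.clients.consumer.OffsetAndMetadata;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.TopicPartition;

    public class FlushThenCommit {
        static void mirrorAndCommit(KafkaProducer<byte[], byte[]> producer,
                                    KafkaConsumer<byte[], byte[]> consumer,
                                    byte[] key, byte[] value, long nextOffset) {
            producer.send(new ProducerRecord<byte[], byte[]>("topic", key, value));
            // flush() blocks until everything sent so far is acknowledged, so a
            // commit issued afterwards should never run ahead of unsent messages.
            producer.flush();
            consumer.commitSync(Collections.singletonMap(
                    new TopicPartition("topic", 0),
                    new OffsetAndMetadata(nextOffset)));
        }
    }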

> Message loss if mirror maker is killed with hard kill and then restarted
> 
>
> Key: KAFKA-2747
> URL: https://issues.apache.org/jira/browse/KAFKA-2747
> Project: Kafka
>  Issue Type: Bug
>Reporter: Geoff Anderson
> Attachments: 2015-11-08--004-dataloss-newconsumer.tar.gz
>
>
> I recently added simple failover to the existing mirror maker test 
> (https://github.com/apache/kafka/pull/427) and found that killing mirror 
> maker process with a hard kill resulted in message loss.
> The test here has two single-node broker clusters, one producer producing to 
> the source cluster, one consumer consuming from the target cluster, and a 
> single mirror maker instance mirroring data between the two clusters.
> mirror maker is using old consumer, zookeeper for offset storage



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-2258: add failover to mirrormaker test

2015-11-09 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/427


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-2747) Message loss if mirror maker is killed with hard kill and then restarted

2015-11-09 Thread Jason Gustafson (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14997108#comment-14997108
 ] 

Jason Gustafson commented on KAFKA-2747:


It would also be helpful to add some logging to 
o.a.k.clients.consumer.internals.Fetcher, in particular for each offset fetched 
by the client and each offset returned to the user.

> Message loss if mirror maker is killed with hard kill and then restarted
> 
>
> Key: KAFKA-2747
> URL: https://issues.apache.org/jira/browse/KAFKA-2747
> Project: Kafka
>  Issue Type: Bug
>Reporter: Geoff Anderson
> Attachments: 2015-11-08--004-dataloss-newconsumer.tar.gz
>
>
> I recently added simple failover to the existing mirror maker test 
> (https://github.com/apache/kafka/pull/427) and found that killing mirror 
> maker process with a hard kill resulted in message loss.
> The test here has two single-node broker clusters, one producer producing to 
> the source cluster, one consumer consuming from the target cluster, and a 
> single mirror maker instance mirroring data between the two clusters.
> mirror maker is using old consumer, zookeeper for offset storage



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Hadoop Contrib Modules

2015-11-09 Thread Neha Narkhede
They aren't relevant and are also not functional. I'd recommend removing
those and agree that we can do that as part of 0.9.

On Mon, Nov 9, 2015 at 10:10 AM, Grant Henke  wrote:

> Are the hadoop contrib ("hadoop-consumer", "hadoop-producer") modules still
> relevant going forward? Especially with the addition of Copycat (Kafka
> Connector)?
>
> I only ask because if they are going to be dropped, we may want to do so before
> this major release. Otherwise, we may want to consider updating them to use
> the new producers/consumers and removing the dependency on core Kafka.
>
> Thanks,
> Grant
>
> --
> Grant Henke
> Software Engineer | Cloudera
> gr...@cloudera.com | twitter.com/gchenke | linkedin.com/in/granthenke
>



-- 
Thanks,
Neha


[jira] [Commented] (KAFKA-2776) JsonConverter uses wrong key to look up schema conversion cache size configuration

2015-11-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2776?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14997052#comment-14997052
 ] 

ASF GitHub Bot commented on KAFKA-2776:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/458


> JsonConverter uses wrong key to look up schema conversion cache size 
> configuration
> --
>
> Key: KAFKA-2776
> URL: https://issues.apache.org/jira/browse/KAFKA-2776
> Project: Kafka
>  Issue Type: Bug
>  Components: copycat
>Reporter: Ewen Cheslack-Postava
>Assignee: Ewen Cheslack-Postava
> Fix For: 0.9.0.0
>
>
> There's a typo where the wrong variable is used as the key when looking up 
> this config. Instead of SCHEMAS_CACHE_CONFIG, we have 
> SCHEMAS_CACHE_SIZE_DEFAULT. We should a) fix the naming of 
> SCHEMAS_CACHE_CONFIG to SCHEMAS_CACHE_SIZE_CONFIG and b) fix the key used 
> when looking up that configuration option. The current code results in 
> always using the default cache size.
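The bug pattern is worth spelling out, since Map.get(Object) lets it compile
silently. A miniature reproduction with hypothetical names (not the actual
JsonConverter code):

    import java.util.HashMap;
    import java.util.Map;

    public class WrongKeyLookup {
        static final String SCHEMAS_CACHE_SIZE_CONFIG = "schemas.cache.size";
        static final int SCHEMAS_CACHE_SIZE_DEFAULT = 1000;

        static int cacheSize(Map<String, Object> configs) {
            // Buggy version: the default-value constant is used as the lookup
            // key; it autoboxes to Integer, get() returns null, and the user's
            // setting is silently ignored:
            //   Object v = configs.get(SCHEMAS_CACHE_SIZE_DEFAULT);
            Object v = configs.get(SCHEMAS_CACHE_SIZE_CONFIG); // fixed lookup
            return v == null ? SCHEMAS_CACHE_SIZE_DEFAULT : ((Integer) v).intValue();
        }

        public static void main(String[] args) {
            Map<String, Object> configs = new HashMap<String, Object>();
            configs.put(SCHEMAS_CACHE_SIZE_CONFIG, 64);
            System.out.println(cacheSize(configs)); // 64; the buggy version prints 1000
        }
    }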



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2782) Incorrect assertion in KafkaBasedLogTest.testSendAndReadToEnd

2015-11-09 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2782?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava updated KAFKA-2782:
-
Status: Patch Available  (was: Open)

> Incorrect assertion in KafkaBasedLogTest.testSendAndReadToEnd
> -
>
> Key: KAFKA-2782
> URL: https://issues.apache.org/jira/browse/KAFKA-2782
> Project: Kafka
>  Issue Type: Bug
>  Components: copycat
>Reporter: Ewen Cheslack-Postava
>Assignee: Ewen Cheslack-Postava
> Fix For: 0.9.0.0
>
>
> One of the assertions is incorrect. This is a bit hard to spot because that 
> assertion is performed in a callback on a different thread, so if the 
> assertion fails it can just kill the other thread and the test times out 
> without a clear indication of what's going wrong. We can rearrange the 
> assertion a bit to test the same thing but make the assertions execute in the 
> main test thread.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-2783: Drop outdated hadoop contrib modul...

2015-11-09 Thread granthenke
GitHub user granthenke opened a pull request:

https://github.com/apache/kafka/pull/466

KAFKA-2783: Drop outdated hadoop contrib modules



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/granthenke/kafka drop-contrib

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/466.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #466


commit 37837d09e003943232874a8cfc08cf7c5d9019de
Author: Grant Henke 
Date:   2015-11-09T18:29:50Z

KAFKA-2783: Drop outdated hadoop contrib modules




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Updated] (KAFKA-2778) Use zero loss settings for producer in Kafka Connect

2015-11-09 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2778?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-2778:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

Issue resolved by pull request 459
[https://github.com/apache/kafka/pull/459]

> Use zero loss settings for producer in Kafka Connect
> 
>
> Key: KAFKA-2778
> URL: https://issues.apache.org/jira/browse/KAFKA-2778
> Project: Kafka
>  Issue Type: Bug
>  Components: copycat
>Reporter: Ewen Cheslack-Postava
>Assignee: Ewen Cheslack-Postava
> Fix For: 0.9.0.0
>
>
> The producer in Kafka Connect is not configured with zero-loss settings (and 
> the configuration was written before the client timeout patch, so it would 
> have been using outdated settings regardless). It should be updated to use 
> settings that, by default, guarantee zero loss but can be overridden. In the 
> case where we do see an error, the task already exits.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: kafka-trunk-jdk7 #783

2015-11-09 Thread Apache Jenkins Server
See 

Changes:

[cshapi] KAFKA-2776: Fix lookup of schema conversion cache size in 
JsonConverter.

[wangguoz] HOTFIX: bug updating cache when loading group metadata

[cshapi] KAFKA-2775: Move exceptions into API package for Kafka Connect.

[cshapi] KAFKA-2778: Use zero loss settings by default for Connect source

--
[...truncated 110 lines...]
17 warnings found
:kafka-trunk-jdk7:core:processResources UP-TO-DATE
:kafka-trunk-jdk7:core:classes
:kafka-trunk-jdk7:clients:compileTestJava
Note: 

 uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.

:kafka-trunk-jdk7:clients:processTestResources
:kafka-trunk-jdk7:clients:testClasses
:kafka-trunk-jdk7:core:copyDependantLibs
:kafka-trunk-jdk7:core:copyDependantTestLibs
:kafka-trunk-jdk7:core:jar
:jar_core_2_11_7
Building project 'core' with Scala version 2.11.7
:kafka-trunk-jdk7:clients:compileJava UP-TO-DATE
:kafka-trunk-jdk7:clients:processResources UP-TO-DATE
:kafka-trunk-jdk7:clients:classes UP-TO-DATE
:kafka-trunk-jdk7:clients:determineCommitId UP-TO-DATE
:kafka-trunk-jdk7:clients:createVersionFile
:kafka-trunk-jdk7:clients:jar UP-TO-DATE
:kafka-trunk-jdk7:core:compileJava UP-TO-DATE
:kafka-trunk-jdk7:core:compileScala
:78:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.

org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP
 ^
:36:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
 commitTimestamp: Long = 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP,

  ^
:37:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
 expireTimestamp: Long = 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP) {

  ^
:399:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
  if (value.expireTimestamp == 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP)

^
:273:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
if (offsetAndMetadata.commitTimestamp == 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP)

  ^
:293:
 a pure expression does nothing in statement position; you may be omitting 
necessary parentheses
ControllerStats.uncleanLeaderElectionRate
^
:294:
 a pure expression does nothing in statement position; you may be omitting 
necessary parentheses
ControllerStats.leaderElectionTimer
^
:115:
 value METADATA_FETCH_TIMEOUT_CONFIG in object ProducerConfig is deprecated: 
see corresponding Javadoc for more information.
props.put(ProducerConfig.METADATA_FETCH_TIMEOUT_CONFIG, 
config.metadataFetchTimeoutMs.toString)
 ^
:117:
 value TIMEOUT_CONFIG in object ProducerConfig is deprecated: see corresponding 
Javadoc for more information.
props.put(ProducerConfig.TIMEOUT_CONFIG, config.requestTimeoutMs.toString)
 ^

[GitHub] kafka pull request: HOTFIX: bug updating cache when loading group ...

2015-11-09 Thread hachikuji
GitHub user hachikuji opened a pull request:

https://github.com/apache/kafka/pull/462

HOTFIX: bug updating cache when loading group metadata

The bug causes only the first instance of group metadata in the topic to be 
written to the cache (because of the putIfNotExists in addGroup). Coordinator 
fail-over won't work properly unless the cache is loaded with the right 
metadata.
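A miniature illustration of why put-if-absent is the wrong primitive when
replaying a log (hypothetical types; the real fix is in the coordinator's group
loading path): replay semantics require that the last record for a key wins.

    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.ConcurrentMap;

    public class ReplayCacheSketch {
        static final ConcurrentMap<String, String> cache =
                new ConcurrentHashMap<String, String>();

        // Buggy: later records for the same group are silently dropped, so the
        // cache keeps whatever instance appeared first in the topic.
        static void loadBuggy(String groupId, String metadata) {
            cache.putIfAbsent(groupId, metadata);
        }

        // Fixed: overwrite, so the newest record in the log wins.
        static void loadFixed(String groupId, String metadata) {
            cache.put(groupId, metadata);
        }

        public static void main(String[] args) {
            loadBuggy("group-1", "generation=1");
            loadBuggy("group-1", "generation=2"); // dropped by putIfAbsent
            System.out.println(cache.get("group-1")); // prints generation=1 (stale)
        }
    }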

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hachikuji/kafka hotfix-group-loading

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/462.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #462


commit fcd943584cc458ec9c775d39b95f738d083a007a
Author: Jason Gustafson 
Date:   2015-11-09T17:24:55Z

HOTFIX: bug updating cache when loading group metadata




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] kafka pull request: HOTFIX: bug updating cache when loading group ...

2015-11-09 Thread hachikuji
GitHub user hachikuji opened a pull request:

https://github.com/apache/kafka/pull/464

HOTFIX: bug updating cache when loading group metadata (0.9.0)



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hachikuji/kafka hotfix-group-loading-0.9

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/464.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #464


commit 3c1e7a372aaf1451651aa6bbb483dba5e0648c7d
Author: Jason Gustafson 
Date:   2015-11-09T17:24:55Z

HOTFIX: bug updating cache when loading group metadata




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Updated] (KAFKA-2776) JsonConverter uses wrong key to look up schema conversion cache size configuration

2015-11-09 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2776?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-2776:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

Issue resolved by pull request 458
[https://github.com/apache/kafka/pull/458]

> JsonConverter uses wrong key to look up schema conversion cache size 
> configuration
> --
>
> Key: KAFKA-2776
> URL: https://issues.apache.org/jira/browse/KAFKA-2776
> Project: Kafka
>  Issue Type: Bug
>  Components: copycat
>Reporter: Ewen Cheslack-Postava
>Assignee: Ewen Cheslack-Postava
> Fix For: 0.9.0.0
>
>
> There's a typo where the wrong variable is used as the key when looking up 
> this config. Instead of SCHEMAS_CACHE_CONFIG, we have 
> SCHEMAS_CACHE_SIZE_DEFAULT. We should a) fix the naming of 
> SCHEMAS_CACHE_CONFIG to SCHEMAS_CACHE_SIZE_CONFIG and b) fix the key used 
> when looking up that configuration option. The current code results in 
> always using the default cache size.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2775) Copycat exceptions should be in api package so they can be caught by user code without any dependencies other than api

2015-11-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2775?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14997063#comment-14997063
 ] 

ASF GitHub Bot commented on KAFKA-2775:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/457


> Copycat exceptions should be in api package so they can be caught by user 
> code without any dependencies other than api
> --
>
> Key: KAFKA-2775
> URL: https://issues.apache.org/jira/browse/KAFKA-2775
> Project: Kafka
>  Issue Type: Bug
>  Components: copycat
>Reporter: Ewen Cheslack-Postava
>Assignee: Ewen Cheslack-Postava
> Fix For: 0.9.0.0
>
>
> Some of these were created in runtime because that's where the code that used 
> them originated, but locating them there requires depending on the entire 
> runtime jar. Instead, we should have a set of exceptions defined in the API 
> that users can both rely on and use in their own code without additional 
> dependencies.
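The practical payoff is that user code like the sketch below should compile and
run against the api artifact alone (assuming the exception types, e.g.
org.apache.kafka.connect.errors.ConnectException in the released package layout,
live there):

    import org.apache.kafka.connect.errors.ConnectException;

    public class ApiOnlyCatch {
        static void taskStep() {
            throw new ConnectException("illustrative failure");
        }

        public static void main(String[] args) {
            try {
                taskStep();
            } catch (ConnectException e) {
                // Catchable with only the api jar on the classpath; no
                // dependency on the runtime jar is required.
                System.err.println("task failed: " + e.getMessage());
            }
        }
    }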



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-2775: Move exceptions into API package f...

2015-11-09 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/457


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Updated] (KAFKA-2775) Copycat exceptions should be in api package so they can be caught by user code without any dependencies other than api

2015-11-09 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2775?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-2775:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

Issue resolved by pull request 457
[https://github.com/apache/kafka/pull/457]

> Copycat exceptions should be in api package so they can be caught by user 
> code without any dependencies other than api
> --
>
> Key: KAFKA-2775
> URL: https://issues.apache.org/jira/browse/KAFKA-2775
> Project: Kafka
>  Issue Type: Bug
>  Components: copycat
>Reporter: Ewen Cheslack-Postava
>Assignee: Ewen Cheslack-Postava
> Fix For: 0.9.0.0
>
>
> Some of these exceptions were created in the runtime module because that's 
> where the code that used them originated, but locating them there requires 
> depending on the entire runtime jar. Instead, we should have a set of 
> exceptions defined in the API that users can both rely on and use in their 
> own code without additional dependencies.





[jira] [Commented] (KAFKA-2778) Use zero loss settings for producer in Kafka Connect

2015-11-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2778?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14997081#comment-14997081
 ] 

ASF GitHub Bot commented on KAFKA-2778:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/459


> Use zero loss settings for producer in Kafka Connect
> 
>
> Key: KAFKA-2778
> URL: https://issues.apache.org/jira/browse/KAFKA-2778
> Project: Kafka
>  Issue Type: Bug
>  Components: copycat
>Reporter: Ewen Cheslack-Postava
>Assignee: Ewen Cheslack-Postava
> Fix For: 0.9.0.0
>
>
> The producer settings in Kafka Connect are not zero-loss settings (and were 
> written before the client timeout patch, so would have been outdated 
> regardless). They should be updated to settings that guarantee zero loss by 
> default but can be overridden. In the case that we do see an error, the task 
> already exits.
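As a hedged sketch of what "zero loss" settings typically mean for the Java producer (the exact set chosen by this patch is an assumption, as are the broker address and topic): require acks from all in-sync replicas, retry transient failures aggressively, and allow only one in-flight request per connection so retries cannot reorder records.

{code:java}
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class ZeroLossProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed local broker
        props.put("key.serializer", "org.apache.kafka.common.serialization.ByteArraySerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.ByteArraySerializer");
        props.put("acks", "all");                                  // wait for all in-sync replicas
        props.put("retries", Integer.toString(Integer.MAX_VALUE)); // retry transient send failures
        props.put("max.in.flight.requests.per.connection", "1");   // retries cannot reorder records

        try (KafkaProducer<byte[], byte[]> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("connect-test", "hello".getBytes()));
        }
    }
}
{code}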





[GitHub] kafka pull request: KAFKA-2778: Use zero loss settings by default ...

2015-11-09 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/459




[jira] [Commented] (KAFKA-2674) ConsumerRebalanceListener.onPartitionsRevoked() is not called on consumer close

2015-11-09 Thread Jason Gustafson (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2674?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14997155#comment-14997155
 ] 

Jason Gustafson commented on KAFKA-2674:


[~guozhang] [~becket_qin] I added a commit to clarify the behavior. I think the 
documentation was already fairly clear, so I just added a comment to emphasize 
that onPartitionsRevoked() is not called before close(). I also reordered the 
methods to suggest the order in which they are actually invoked. I don't think 
this is a blocker for 0.9.0.

> ConsumerRebalanceListener.onPartitionsRevoked() is not called on consumer 
> close
> ---
>
> Key: KAFKA-2674
> URL: https://issues.apache.org/jira/browse/KAFKA-2674
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.9.0.0
>Reporter: Michal Turek
>Assignee: Jason Gustafson
>
> Hi, I'm investigating and testing the behavior of the new consumer from the 
> planned 0.9 release and found an inconsistency in the calling of rebalance 
> callbacks.
> I noticed that ConsumerRebalanceListener.onPartitionsRevoked() is NOT called 
> during consumer close and application shutdown. Its JavaDoc contract says:
> - "This method will be called before a rebalance operation starts and after 
> the consumer stops fetching data."
> - "It is recommended that offsets should be committed in this callback to 
> either Kafka or a custom offset store to prevent duplicate data."
> I believe calling consumer.close() is the start of a rebalance operation, and 
> even the local consumer that is actually closing should be notified so it can 
> process any rebalance logic, including an offsets commit (e.g. if auto-commit 
> is disabled).
> Below are commented logs of the current and expected behaviors.
> {noformat}
> // Application start
> 2015-10-20 15:14:02.208 INFO  o.a.k.common.utils.AppInfoParser
> [TestConsumer-worker-0]: Kafka version : 0.9.0.0-SNAPSHOT 
> (AppInfoParser.java:82)
> 2015-10-20 15:14:02.208 INFO  o.a.k.common.utils.AppInfoParser
> [TestConsumer-worker-0]: Kafka commitId : 241b9ab58dcbde0c 
> (AppInfoParser.java:83)
> // Consumer started (the first one in group), rebalance callbacks are called 
> including empty onPartitionsRevoked()
> 2015-10-20 15:14:02.333 INFO  c.a.e.kafka.newapi.TestConsumer 
> [TestConsumer-worker-0]: Rebalance callback, revoked: [] 
> (TestConsumer.java:95)
> 2015-10-20 15:14:02.343 INFO  c.a.e.kafka.newapi.TestConsumer 
> [TestConsumer-worker-0]: Rebalance callback, assigned: [testB-1, testA-0, 
> testB-0, testB-3, testA-2, testB-2, testA-1, testA-4, testB-4, testA-3] 
> (TestConsumer.java:100)
> // Another consumer joined the group, rebalancing
> 2015-10-20 15:14:17.345 INFO  o.a.k.c.c.internals.Coordinator 
> [TestConsumer-worker-0]: Attempt to heart beat failed since the group is 
> rebalancing, try to re-join group. (Coordinator.java:714)
> 2015-10-20 15:14:17.346 INFO  c.a.e.kafka.newapi.TestConsumer 
> [TestConsumer-worker-0]: Rebalance callback, revoked: [testB-1, testA-0, 
> testB-0, testB-3, testA-2, testB-2, testA-1, testA-4, testB-4, testA-3] 
> (TestConsumer.java:95)
> 2015-10-20 15:14:17.349 INFO  c.a.e.kafka.newapi.TestConsumer 
> [TestConsumer-worker-0]: Rebalance callback, assigned: [testB-3, testA-4, 
> testB-4, testA-3] (TestConsumer.java:100)
> // Consumer started closing, there SHOULD be onPartitionsRevoked() callback 
> to commit offsets like during standard rebalance, but it is missing
> 2015-10-20 15:14:39.280 INFO  c.a.e.kafka.newapi.TestConsumer [main]: 
> Closing instance (TestConsumer.java:42)
> 2015-10-20 15:14:40.264 INFO  c.a.e.kafka.newapi.TestConsumer 
> [TestConsumer-worker-0]: Worker thread stopped (TestConsumer.java:89)
> {noformat}
> A workaround is to call onPartitionsRevoked() explicitly just before calling 
> consumer.close(), but that seems dirty and error-prone to me. It can easily 
> be forgotten by someone without such experience.
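The workaround described above, as a minimal hedged sketch (broker address, group id, and topic are assumptions): commit offsets explicitly before close(), since no revocation callback fires on close.

{code:java}
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class CloseWithCommit {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed local broker
        props.put("group.id", "test-group");              // assumed group id
        props.put("enable.auto.commit", "false");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer");

        KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Collections.singletonList("testA")); // assumed topic
        try {
            consumer.poll(1000); // consume as usual
        } finally {
            try {
                consumer.commitSync(); // do by hand what onPartitionsRevoked() would do
            } finally {
                consumer.close();      // no revocation callback fires here
            }
        }
    }
}
{code}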





Build failed in Jenkins: kafka-trunk-jdk7 #781

2015-11-09 Thread Apache Jenkins Server
See 

Changes:

[junrao] KAFKA-2779; Close SSL socket channel on remote connection close

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on ubuntu-2 (docker Ubuntu ubuntu) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url 
 > https://git-wip-us.apache.org/repos/asf/kafka.git # timeout=10
Fetching upstream changes from https://git-wip-us.apache.org/repos/asf/kafka.git
 > git --version # timeout=10
 > git -c core.askpass=true fetch --tags --progress 
 > https://git-wip-us.apache.org/repos/asf/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/trunk^{commit} # timeout=10
Checking out Revision efbebc6e843850b7ed9a1d015413c99f114a7d92 
(refs/remotes/origin/trunk)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f efbebc6e843850b7ed9a1d015413c99f114a7d92
 > git rev-list f2031d40639ef34c1591c22971394ef41c87652c # timeout=10
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
[kafka-trunk-jdk7] $ /bin/bash -xe /tmp/hudson5391396605357171575.sh
+ 
/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2/bin/gradle
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
http://gradle.org/docs/2.4-rc-2/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.5
:downloadWrapper UP-TO-DATE

BUILD SUCCESSFUL

Total time: 19.87 secs
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
[kafka-trunk-jdk7] $ /bin/bash -xe /tmp/hudson2455080715651071832.sh
+ ./gradlew -Dorg.gradle.project.maxParallelForks=1 clean jarAll docsJarAll 
testAll
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/2.8/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.5
:clean UP-TO-DATE
:clients:clean UP-TO-DATE
:connect:clean UP-TO-DATE
:contrib:clean UP-TO-DATE
:core:clean UP-TO-DATE
:examples:clean UP-TO-DATE
:log4j-appender:clean UP-TO-DATE
:streams:clean UP-TO-DATE
:tools:clean UP-TO-DATE
:connect:api:clean UP-TO-DATE
:connect:file:clean UP-TO-DATE
:connect:json:clean UP-TO-DATE
:connect:runtime:clean UP-TO-DATE
:contrib:hadoop-consumer:clean UP-TO-DATE
:contrib:hadoop-producer:clean UP-TO-DATE
:jar_core_2_10_5
Building project 'core' with Scala version 2.10.5
:kafka-trunk-jdk7:clients:compileJava
:jar_core_2_10_5 FAILED

FAILURE: Build failed with an exception.

* What went wrong:
Failed to capture snapshot of input files for task 'compileJava' during 
up-to-date check.  See stacktrace for details.
> Could not add entry 
> '
>  to cache fileHashes.bin 
> (

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug 
option to get more log output.

BUILD FAILED

Total time: 17.775 secs
Build step 'Execute shell' marked build as failure
Recording test results
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
ERROR: Publisher 'Publish JUnit test result report' failed: No test report 
files were found. Configuration error?
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51


[jira] [Updated] (KAFKA-2779) Kafka SSL transport layer leaks file descriptors

2015-11-09 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2779?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-2779:
---
   Resolution: Fixed
Fix Version/s: 0.9.0.0
   Status: Resolved  (was: Patch Available)

Issue resolved by pull request 460
[https://github.com/apache/kafka/pull/460]

> Kafka SSL transport layer leaks file descriptors
> 
>
> Key: KAFKA-2779
> URL: https://issues.apache.org/jira/browse/KAFKA-2779
> Project: Kafka
>  Issue Type: Bug
>  Components: network
>Affects Versions: 0.9.0.0
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
>Priority: Critical
> Fix For: 0.9.0.0
>
>
> There is currently no transition from read() to close() in SslTransportLayer 
> to handle graceful shutdown requests. As a result, Kafka SSL connections are 
> never shut down gracefully. In addition, close() does not handle ungraceful 
> termination of connections correctly. If flush() fails because the other end 
> has performed a close (e.g. because graceful termination was not handled), 
> Kafka prints a warning and does not close the socket. This leaks file 
> descriptors.
> We are seeing a large number of open file descriptors because our health 
> checks to Kafka result in connections that are neither terminated gracefully 
> nor closed correctly.
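A hedged sketch of the shape of the fix (not the actual SslTransportLayer code; the helper below is a hypothetical placeholder): even when flushing the SSL close messages fails because the peer already dropped the connection, the channel must still be closed, or its file descriptor leaks.

{code:java}
import java.io.IOException;
import java.nio.channels.SocketChannel;

public class GracefulClose {
    // Shape of the fix: flushing the SSL close_notify may fail when the peer
    // has already dropped the connection, but the channel must close regardless.
    static void close(SocketChannel channel) throws IOException {
        try {
            flushCloseNotify(channel); // may throw if the remote end is gone
        } finally {
            channel.close();           // always release the file descriptor
        }
    }

    // Hypothetical placeholder for writing pending SSL close messages.
    static void flushCloseNotify(SocketChannel channel) throws IOException {
        // ... wrap and write outbound close_notify records ...
    }
}
{code}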





[jira] [Updated] (KAFKA-2781) Signing jars shouldn't be required for install task

2015-11-09 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2781?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava updated KAFKA-2781:
-
Status: Patch Available  (was: Open)

> Signing jars shouldn't be required for install task
> ---
>
> Key: KAFKA-2781
> URL: https://issues.apache.org/jira/browse/KAFKA-2781
> Project: Kafka
>  Issue Type: Bug
>Reporter: Ewen Cheslack-Postava
>Assignee: Ewen Cheslack-Postava
> Fix For: 0.9.0.0
>
>
> This was a regression when we added support for a flag that allows skipping 
> signing. That patch unintentionally broadened the default tasks that require 
> signing to Gradle's normal defaults, which include install when the build is 
> not a -SNAPSHOT. However, we really only want to sign when uploading archives.





[jira] [Commented] (KAFKA-2781) Signing jars shouldn't be required for install task

2015-11-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14996890#comment-14996890
 ] 

ASF GitHub Bot commented on KAFKA-2781:
---

GitHub user ewencp opened a pull request:

https://github.com/apache/kafka/pull/461

KAFKA-2781: Only require signing artifacts when uploading archives.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ewencp/kafka kafka-2781-no-signing-for-install

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/461.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #461


commit 9ffbf55dcb1f0a38f62debb7225fd3a84829a3da
Author: Ewen Cheslack-Postava 
Date:   2015-11-09T16:57:34Z

KAFKA-2781: Only require signing artifacts when uploading archives.




> Signing jars shouldn't be required for install task
> ---
>
> Key: KAFKA-2781
> URL: https://issues.apache.org/jira/browse/KAFKA-2781
> Project: Kafka
>  Issue Type: Bug
>Reporter: Ewen Cheslack-Postava
>Assignee: Ewen Cheslack-Postava
> Fix For: 0.9.0.0
>
>
> This was a regression when we added support for a flag that allows skipping 
> signing. That patch unintentionally broadened the default tasks that require 
> signing to Gradle's normal defaults, which include install when the build is 
> not a -SNAPSHOT. However, we really only want to sign when uploading archives.





[jira] [Updated] (KAFKA-2781) Signing jars shouldn't be required for install task

2015-11-09 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2781?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-2781:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Issue resolved by pull request 461
[https://github.com/apache/kafka/pull/461]

> Signing jars shouldn't be required for install task
> ---
>
> Key: KAFKA-2781
> URL: https://issues.apache.org/jira/browse/KAFKA-2781
> Project: Kafka
>  Issue Type: Bug
>Reporter: Ewen Cheslack-Postava
>Assignee: Ewen Cheslack-Postava
> Fix For: 0.9.0.0
>
>
> This was a regression when we added support for a flag that allows skipping 
> signing. That patch unintentionally broadened the default tasks that require 
> signing to Gradle's normal defaults, which include install when the build is 
> not a -SNAPSHOT. However, we really only want to sign when uploading archives.





[jira] [Created] (KAFKA-2780) kafka-server-start.sh globbing issue

2015-11-09 Thread Jon Bringhurst (JIRA)
Jon Bringhurst created KAFKA-2780:
-

 Summary: kafka-server-start.sh globbing issue
 Key: KAFKA-2780
 URL: https://issues.apache.org/jira/browse/KAFKA-2780
 Project: Kafka
  Issue Type: Bug
Reporter: Jon Bringhurst


When spaces are present in the path leading up to kafka-server-start.sh, 
kafka-server-start.sh has trouble finding kafka-run-class.sh.

The path that is executed should be quoted.

{noformat}
jbringhu@jbringhu-mn3 /V/N/L/t/t/m/M/kafka_2.11-0.8.2.2> /Volumes/NO\ 
NAME/LISA\ 
15/training_materials/training-program/materials/M4/kafka_2.11-0.8.2.2/bin/kafka-server-start.sh
 ./config/server.properties 
usage: dirname path
/Volumes/NO NAME/LISA 
15/training_materials/training-program/materials/M4/kafka_2.11-0.8.2.2/bin/kafka-server-start.sh:
 line 44: /kafka-run-class.sh: No such file or directory
/Volumes/NO NAME/LISA 
15/training_materials/training-program/materials/M4/kafka_2.11-0.8.2.2/bin/kafka-server-start.sh:
 line 44: exec: /kafka-run-class.sh: cannot execute: No such file or directory
jbringhu@jbringhu-mn3 /V/N/L/t/t/m/M/kafka_2.11-0.8.2.2>
{noformat}





[jira] [Created] (KAFKA-2781) Signing jars shouldn't be required for install task

2015-11-09 Thread Ewen Cheslack-Postava (JIRA)
Ewen Cheslack-Postava created KAFKA-2781:


 Summary: Signing jars shouldn't be required for install task
 Key: KAFKA-2781
 URL: https://issues.apache.org/jira/browse/KAFKA-2781
 Project: Kafka
  Issue Type: Bug
Reporter: Ewen Cheslack-Postava
Assignee: Ewen Cheslack-Postava
 Fix For: 0.9.0.0


This was a regression when we added support for a flag that allows skipping 
signing. That patch unintentionally broadened the default tasks that require 
signing to Gradle's normal defaults, which include install when the build is 
not a -SNAPSHOT. However, we really only want to sign when uploading archives.





[GitHub] kafka pull request: KAFKA-2781: Only require signing artifacts whe...

2015-11-09 Thread ewencp
GitHub user ewencp opened a pull request:

https://github.com/apache/kafka/pull/461

KAFKA-2781: Only require signing artifacts when uploading archives.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ewencp/kafka kafka-2781-no-signing-for-install

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/461.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #461


commit 9ffbf55dcb1f0a38f62debb7225fd3a84829a3da
Author: Ewen Cheslack-Postava 
Date:   2015-11-09T16:57:34Z

KAFKA-2781: Only require signing artifacts when uploading archives.






[GitHub] kafka pull request: KAFKA-2781: Only require signing artifacts whe...

2015-11-09 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/461




[jira] [Commented] (KAFKA-2779) Kafka SSL transport layer leaks file descriptors

2015-11-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2779?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14996702#comment-14996702
 ] 

ASF GitHub Bot commented on KAFKA-2779:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/460


> Kafka SSL transport layer leaks file descriptors
> 
>
> Key: KAFKA-2779
> URL: https://issues.apache.org/jira/browse/KAFKA-2779
> Project: Kafka
>  Issue Type: Bug
>  Components: network
>Affects Versions: 0.9.0.0
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
>Priority: Critical
> Fix For: 0.9.0.0
>
>
> There is currently no transition from read() to close() in SslTransportLayer 
> to handle graceful shutdown requests. As a result, Kafka SSL connections are 
> never shut down gracefully. In addition, close() does not handle ungraceful 
> termination of connections correctly. If flush() fails because the other end 
> has performed a close (e.g. because graceful termination was not handled), 
> Kafka prints a warning and does not close the socket. This leaks file 
> descriptors.
> We are seeing a large number of open file descriptors because our health 
> checks to Kafka result in connections that are neither terminated gracefully 
> nor closed correctly.





Logging System

2015-11-09 Thread Nathan Christie
Do you have any plans to migrate from log4j 1 to log4j 2?


Thanks,

Nathan Christie  /  Software Developer
AdTheorent  /  'The Intelligent Impression'
155 Avenue of the Americas  /  6th Floor  /  New York, NY 10013
Mobile: 615-727-3099
Skype: nathan.christie_4

www.adtheorent.com




[GitHub] kafka pull request: KAFKA-2779: Close SSL socket channel on remote...

2015-11-09 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/460




[GitHub] kafka pull request: KAFKA-2674: clarify onPartitionsRevoked behavi...

2015-11-09 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/467




[GitHub] kafka pull request: KAFKA-2770: Catch and ignore WakeupException f...

2015-11-09 Thread guozhangwang
GitHub user guozhangwang opened a pull request:

https://github.com/apache/kafka/pull/470

KAFKA-2770: Catch and ignore WakeupException for commit upon closing



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/guozhangwang/kafka K2770

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/470.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #470








[jira] [Commented] (KAFKA-2770) Race condition causes Mirror Maker to hang during shutdown (new consumer)

2015-11-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2770?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14997287#comment-14997287
 ] 

ASF GitHub Bot commented on KAFKA-2770:
---

GitHub user guozhangwang opened a pull request:

https://github.com/apache/kafka/pull/470

KAFKA-2770: Catch and ignore WakeupException for commit upon closing



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/guozhangwang/kafka K2770

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/470.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #470






> Race condition causes Mirror Maker to hang during shutdown (new consumer)
> -
>
> Key: KAFKA-2770
> URL: https://issues.apache.org/jira/browse/KAFKA-2770
> Project: Kafka
>  Issue Type: Bug
>Reporter: Geoff Anderson
>Assignee: Guozhang Wang
> Fix For: 0.9.0.1
>
>
> I recently added a clean-bounce test with the new consumer to the mirror 
> maker tests (https://github.com/apache/kafka/pull/427), and noticed that in 
> this case the mirror maker process (with the new consumer) sometimes hangs 
> and fails to exit when stopped with kill -15.
> {code:title=mirror_maker.log|borderStyle=solid}
> [2015-11-06 22:06:04,213] INFO Start clean shutdown. 
> (kafka.tools.MirrorMaker$)
> [2015-11-06 22:06:04,221] INFO Shutting down consumer threads. 
> (kafka.tools.MirrorMaker$)
> [2015-11-06 22:06:04,239] INFO [mirrormaker-thread-0] mirrormaker-thread-0 
> shutting down (kafka.tools.MirrorMaker$MirrorMakerThread)
> [2015-11-06 22:06:04,253] INFO [mirrormaker-thread-0] Flushing producer. 
> (kafka.tools.MirrorMaker$MirrorMakerThread)
> [2015-11-06 22:06:04,254] INFO [mirrormaker-thread-0] Committing consumer 
> offsets. (kafka.tools.MirrorMaker$MirrorMakerThread)
> Exception in thread "mirrormaker-thread-0" 
> org.apache.kafka.common.errors.WakeupException
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.clientPoll(ConsumerNetworkClient.java:304)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:194)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:184)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:154)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.commitOffsetsSync(ConsumerCoordinator.java:347)
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumer.commitSync(KafkaConsumer.java:895)
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumer.commitSync(KafkaConsumer.java:869)
>   at 
> kafka.tools.MirrorMaker$MirrorMakerNewConsumer.commit(MirrorMaker.scala:522)
>   at kafka.tools.MirrorMaker$.commitOffsets(MirrorMaker.scala:338)
>   at kafka.tools.MirrorMaker$MirrorMakerThread.run(MirrorMaker.scala:406)
> [2015-11-06 22:06:29,448] DEBUG Connection with worker4/192.168.50.104 
> disconnected (org.apache.kafka.common.network.Selector)
> java.io.EOFException
>   at 
> org.apache.kafka.common.network.NetworkReceive.readFromReadableChannel(NetworkReceive.java:83)
>   at 
> org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:71)
>   at 
> org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:160)
>   at 
> org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:141)
>   at org.apache.kafka.common.network.Selector.poll(Selector.java:288)
>   at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:270)
>   at 
> org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:216)
>   at 
> org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:128)
>   at java.lang.Thread.run(Thread.java:745)
> {code}
> The current working hypothesis is this:
> a WakeupException is being triggered during the finally block in mirror maker 
> by the call to commitOffsets, and the mirror maker thread dies before the 
> call to shutdownLatch.countDown(). Therefore the shutdownLatch.await() call 
> in awaitShutdown() blocks forever and the process never exits.
> Why can commitOffsets trigger a wakeup exception?
> The shutdown hook is triggered in another thread, and does this:
> shuttingDown = true
> mirrorMakerConsumer.stop()  # Calls consumer.wakeup()
> If the timing is right (wrong), the wakeup flag is set, but the mirrormaker 
> produce/consume loop exits without triggering the WakeupException, and the 
> WakeupException isn't thrown until commitOffsets() is called in the finally 
> block.
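A hedged Java sketch of the hypothesized race and the fix direction (simplified; MirrorMaker itself is Scala, and the names here are assumptions): wakeup() sets a flag, and if the poll loop exits on the flag before a poll notices the wakeup, the WakeupException surfaces from the commit in the finally block, so the shutdown path must catch and ignore it there to reach countDown().

{code:java}
import java.util.concurrent.CountDownLatch;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.errors.WakeupException;

public class ShutdownSafeLoop {
    private final KafkaConsumer<byte[], byte[]> consumer;
    private final CountDownLatch shutdownLatch = new CountDownLatch(1);
    private volatile boolean shuttingDown = false;

    ShutdownSafeLoop(KafkaConsumer<byte[], byte[]> consumer) {
        this.consumer = consumer;
    }

    public void run() {
        try {
            while (!shuttingDown) {
                consumer.poll(100); // throws WakeupException if wakeup() interrupts a poll
            }
        } catch (WakeupException e) {
            // expected path when stop() races with an in-flight poll
        } finally {
            try {
                // If the loop exited on the flag before poll() noticed the wakeup,
                // the pending WakeupException surfaces here instead.
                consumer.commitSync();
            } catch (WakeupException e) {
                // ignore: we are shutting down anyway
            } finally {
                consumer.close();
                shutdownLatch.countDown(); // now always reached, so awaitShutdown() returns
            }
        }
    }

    public void stop() { // called from the shutdown hook thread
        shuttingDown = true;
        consumer.wakeup();
    }

    public void awaitShutdown() throws InterruptedException {
        shutdownLatch.await();
    }
}
{code}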




Build failed in Jenkins: kafka-trunk-jdk8 #119

2015-11-09 Thread Apache Jenkins Server
See 

Changes:

[cshapi] KAFKA-2776: Fix lookup of schema conversion cache size in 
JsonConverter.

[wangguoz] HOTFIX: bug updating cache when loading group metadata

[cshapi] KAFKA-2775: Move exceptions into API package for Kafka Connect.

[cshapi] KAFKA-2778: Use zero loss settings by default for Connect source

[cshapi] KAFKA-2782: Fix KafkaBasedLogTest assertion and move it to the main 
test

[cshapi] KAFKA-2258: add failover to mirrormaker test

[cshapi] KAFKA-2783; Drop outdated hadoop contrib modules

[wangguoz] KAFKA-2674: clarify onPartitionsRevoked behavior

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H11 (Ubuntu ubuntu) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url 
 > https://git-wip-us.apache.org/repos/asf/kafka.git # timeout=10
Fetching upstream changes from https://git-wip-us.apache.org/repos/asf/kafka.git
 > git --version # timeout=10
 > git -c core.askpass=true fetch --tags --progress 
 > https://git-wip-us.apache.org/repos/asf/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/trunk^{commit} # timeout=10
Checking out Revision 359be3a682951fd469d690df8d9e7a5a89a9d03b 
(refs/remotes/origin/trunk)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 359be3a682951fd469d690df8d9e7a5a89a9d03b
 > git rev-list a24f9a23a6d8759538e91072e8d96d158d03bb63 # timeout=10
Setting 
GRADLE_2_1_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.1
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
[kafka-trunk-jdk8] $ /bin/bash -xe /tmp/hudson7007485757860032672.sh
+ 
/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.1/bin/gradle
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
http://gradle.org/docs/2.1/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.5
:downloadWrapper UP-TO-DATE

BUILD SUCCESSFUL

Total time: 12.335 secs
Setting 
GRADLE_2_1_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.1
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
[kafka-trunk-jdk8] $ /bin/bash -xe /tmp/hudson1655375939896570564.sh
+ ./gradlew -Dorg.gradle.project.maxParallelForks=1 clean jarAll docsJarAll 
testAll
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/2.8/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.5
:clean UP-TO-DATE
:clients:clean UP-TO-DATE
:connect:clean UP-TO-DATE
:core:clean UP-TO-DATE
:examples:clean UP-TO-DATE
:log4j-appender:clean UP-TO-DATE
:streams:clean UP-TO-DATE
:tools:clean UP-TO-DATE
:connect:api:clean UP-TO-DATE
:connect:file:clean UP-TO-DATE
:connect:json:clean UP-TO-DATE
:connect:runtime:clean UP-TO-DATE
:jar_core_2_10_5
Building project 'core' with Scala version 2.10.5
:kafka-trunk-jdk8:clients:compileJava
:jar_core_2_10_5 FAILED

FAILURE: Build failed with an exception.

* What went wrong:
Failed to capture snapshot of input files for task 'compileJava' during 
up-to-date check.  See stacktrace for details.
> Could not add entry 
> '
>  to cache fileHashes.bin 
> (

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug 
option to get more log output.

BUILD FAILED

Total time: 10.803 secs
Build step 'Execute shell' marked build as failure
Recording test results
Setting 
GRADLE_2_1_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.1
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
ERROR: Publisher 'Publish JUnit test result report' failed: No test report 
files were found. Configuration error?
Setting 
GRADLE_2_1_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.1
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45


[jira] [Updated] (KAFKA-2773) Vagrant provision fails if num_brokers or num_zookeepers is nonzero

2015-11-09 Thread Geoff Anderson (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2773?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoff Anderson updated KAFKA-2773:
--
Status: Patch Available  (was: Open)

> Vagrant provision fails if num_brokers or num_zookeepers is nonzero
> ---
>
> Key: KAFKA-2773
> URL: https://issues.apache.org/jira/browse/KAFKA-2773
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.0
>Reporter: Geoff Anderson
>Assignee: Geoff Anderson
>Priority: Trivial
>
> Changes to vagrant provisioning for kafka system tests updated the default 
> path from "kafka" to "kafka-trunk" on the vagrant virtual machines.
> We neglected to update the corresponding path in vagrant/broker.sh and 
> vagrant/zk.sh. Therefore provisioning a static kafka cluster with Vagrant 
> currently fails.
> The fix here is just to update the corresponding path in vagrant/broker.sh 
> and vagrant/zk.sh.





[GitHub] kafka pull request: MINOR: Improve exception message that gets thr...

2015-11-09 Thread SinghAsDev
GitHub user SinghAsDev opened a pull request:

https://github.com/apache/kafka/pull/471

MINOR: Improve exception message that gets thrown for non-existent group



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/SinghAsDev/kafka ExceptionMessage

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/471.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #471


commit 4bc9167fba19d9dc538fb090ef0e58ca241365e4
Author: Ashish Singh 
Date:   2015-11-09T20:46:20Z

MINOR: Improve exception message that gets thrown for non-existent group






Build failed in Jenkins: kafka-trunk-jdk7 #784

2015-11-09 Thread Apache Jenkins Server
See 

Changes:

[cshapi] KAFKA-2782: Fix KafkaBasedLogTest assertion and move it to the main 
test

[cshapi] KAFKA-2258: add failover to mirrormaker test

[cshapi] KAFKA-2783; Drop outdated hadoop contrib modules

[wangguoz] KAFKA-2674: clarify onPartitionsRevoked behavior

--
[...truncated 97 lines...]
 ^
:389:
 class BrokerEndPoint in object UpdateMetadataRequest is deprecated: see 
corresponding Javadoc for more information.
  new UpdateMetadataRequest.BrokerEndPoint(brokerEndPoint.id, 
brokerEndPoint.host, brokerEndPoint.port)
^
:391:
 constructor UpdateMetadataRequest in class UpdateMetadataRequest is 
deprecated: see corresponding Javadoc for more information.
new UpdateMetadataRequest(controllerId, controllerEpoch, 
liveBrokers.asJava, partitionStates.asJava)
^
:129:
 method readFromReadableChannel in class NetworkReceive is deprecated: see 
corresponding Javadoc for more information.
  response.readFromReadableChannel(channel)
   ^
there were 15 feature warning(s); re-run with -feature for details
17 warnings found
:kafka-trunk-jdk7:core:processResources UP-TO-DATE
:kafka-trunk-jdk7:core:classes
:kafka-trunk-jdk7:clients:compileTestJavaNote: 

 uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.

:kafka-trunk-jdk7:clients:processTestResources
:kafka-trunk-jdk7:clients:testClasses
:kafka-trunk-jdk7:core:copyDependantLibs
:kafka-trunk-jdk7:core:copyDependantTestLibs
:kafka-trunk-jdk7:core:jar
:jar_core_2_11_7
Building project 'core' with Scala version 2.11.7
:kafka-trunk-jdk7:clients:compileJava UP-TO-DATE
:kafka-trunk-jdk7:clients:processResources UP-TO-DATE
:kafka-trunk-jdk7:clients:classes UP-TO-DATE
:kafka-trunk-jdk7:clients:determineCommitId UP-TO-DATE
:kafka-trunk-jdk7:clients:createVersionFile
:kafka-trunk-jdk7:clients:jar UP-TO-DATE
:kafka-trunk-jdk7:core:compileJava UP-TO-DATE
:kafka-trunk-jdk7:core:compileScala
:78:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.

org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP
 ^
:36:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
 commitTimestamp: Long = 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP,

  ^
:37:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
 expireTimestamp: Long = 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP) {

  ^
:399:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
  if (value.expireTimestamp == 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP)

^
:273:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
if (offsetAndMetadata.commitTimestamp == 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP)

  ^
:293:
 a pure expression does nothing in statement position; you 

Build failed in Jenkins: kafka-trunk-jdk7 #785

2015-11-09 Thread Apache Jenkins Server
See 

--
[...truncated 2400 lines...]

kafka.integration.PlaintextTopicMetadataTest > testAutoCreateTopic PASSED

kafka.integration.PlaintextTopicMetadataTest > testGetAllTopicMetadata PASSED

kafka.integration.PlaintextTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterNewBrokerStartup PASSED

kafka.integration.PlaintextTopicMetadataTest > testBasicTopicMetadata PASSED

kafka.integration.PlaintextTopicMetadataTest > testTopicMetadataRequest PASSED

kafka.integration.PlaintextTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterABrokerShutdown PASSED

kafka.integration.RollingBounceTest > testRollingBounce PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testIsrAfterBrokerShutDownAndJoinsBack PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAutoCreateTopicWithCollision PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAliveBrokerListWithNoTopics PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > testAutoCreateTopic PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > testGetAllTopicMetadata 
PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterNewBrokerStartup PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > testBasicTopicMetadata PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > testTopicMetadataRequest 
PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterABrokerShutdown PASSED

kafka.integration.PrimitiveApiTest > testMultiProduce PASSED

kafka.integration.PrimitiveApiTest > testDefaultEncoderProducerAndFetch PASSED

kafka.integration.PrimitiveApiTest > testFetchRequestCanProperlySerialize PASSED

kafka.integration.PrimitiveApiTest > testPipelinedProduceRequests PASSED

kafka.integration.PrimitiveApiTest > testProduceAndMultiFetch PASSED

kafka.integration.PrimitiveApiTest > 
testDefaultEncoderProducerAndFetchWithCompression PASSED

kafka.integration.PrimitiveApiTest > testConsumerEmptyTopic PASSED

kafka.integration.PrimitiveApiTest > testEmptyFetchRequest PASSED

kafka.integration.SslTopicMetadataTest > testIsrAfterBrokerShutDownAndJoinsBack 
PASSED

kafka.integration.SslTopicMetadataTest > testAutoCreateTopicWithCollision PASSED

kafka.integration.SslTopicMetadataTest > testAliveBrokerListWithNoTopics PASSED

kafka.integration.SslTopicMetadataTest > testAutoCreateTopic PASSED

kafka.integration.SslTopicMetadataTest > testGetAllTopicMetadata PASSED

kafka.integration.SslTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterNewBrokerStartup PASSED

kafka.integration.SslTopicMetadataTest > testBasicTopicMetadata PASSED

kafka.integration.SslTopicMetadataTest > testTopicMetadataRequest PASSED

kafka.integration.SslTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterABrokerShutdown PASSED

kafka.integration.FetcherTest > testFetcher PASSED

kafka.integration.UncleanLeaderElectionTest > testUncleanLeaderElectionEnabled 
PASSED

kafka.integration.UncleanLeaderElectionTest > 
testCleanLeaderElectionDisabledByTopicOverride PASSED

kafka.integration.UncleanLeaderElectionTest > testUncleanLeaderElectionDisabled 
PASSED

kafka.integration.UncleanLeaderElectionTest > 
testUncleanLeaderElectionInvalidTopicOverride PASSED

kafka.integration.UncleanLeaderElectionTest > 
testUncleanLeaderElectionEnabledByTopicOverride PASSED

kafka.zk.ZKEphemeralTest > testOverlappingSessions[0] PASSED

kafka.zk.ZKEphemeralTest > testEphemeralNodeCleanup[0] PASSED

kafka.zk.ZKEphemeralTest > testZkWatchedEphemeral[0] PASSED

kafka.zk.ZKEphemeralTest > testSameSession[0] PASSED

kafka.zk.ZKEphemeralTest > testOverlappingSessions[1] PASSED

kafka.zk.ZKEphemeralTest > testEphemeralNodeCleanup[1] PASSED

kafka.zk.ZKEphemeralTest > testZkWatchedEphemeral[1] PASSED

kafka.zk.ZKEphemeralTest > testSameSession[1] PASSED

kafka.zk.ZKPathTest > testCreatePersistentSequentialThrowsException PASSED

kafka.zk.ZKPathTest > testCreatePersistentSequentialExists PASSED

kafka.zk.ZKPathTest > testCreateEphemeralPathExists PASSED

kafka.zk.ZKPathTest > testCreatePersistentPath PASSED

kafka.zk.ZKPathTest > testMakeSurePersistsPathExistsThrowsException PASSED

kafka.zk.ZKPathTest > testCreateEphemeralPathThrowsException PASSED

kafka.zk.ZKPathTest > testCreatePersistentPathThrowsException PASSED

kafka.zk.ZKPathTest > testMakeSurePersistsPathExists PASSED

kafka.metrics.KafkaTimerTest > testKafkaTimer PASSED

kafka.message.MessageCompressionTest > testSimpleCompressDecompress PASSED

kafka.message.MessageWriterTest > testWithNoCompressionAttribute PASSED

kafka.message.MessageWriterTest > testWithCompressionAttribute PASSED

kafka.message.MessageWriterTest > testBufferingOutputStream PASSED

kafka.message.MessageWriterTest > testWithKey PASSED

kafka.message.ByteBufferMessageSetTest > testOffsetAssignment PASSED

kafka.message.ByteBufferMessageSetTest > testValidBytes PASSED


[GitHub] kafka pull request: MINOR: remove old producer in config sections ...

2015-11-09 Thread guozhangwang
GitHub user guozhangwang opened a pull request:

https://github.com/apache/kafka/pull/468

MINOR: remove old producer in config sections to align with APIs



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/guozhangwang/kafka WikiUpdate

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/468.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #468


commit 2507edc967a74c2328d803d129d7d6834463944a
Author: Guozhang Wang 
Date:   2015-11-09T20:09:34Z

v1






[jira] [Updated] (KAFKA-2780) kafka-server-start.sh globbing issue

2015-11-09 Thread Jon Bringhurst (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2780?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jon Bringhurst updated KAFKA-2780:
--
Description: 
When spaces are present in the path leading up to kafka-server-start.sh, 
kafka-server-start.sh has trouble finding kafka-run-class.sh.

The path that is executed should be quoted.

{noformat}
jbringhu@jbringhu-mn3 /V/N/L/t/t/m/M/kafka_2.11-0.8.2.2> /Volumes/NO\ 
NAME/LISA\ 
15/training_materials/training-program/materials/M4/kafka_2.11-0.8.2.2/bin/kafka-server-start.sh
 ./config/server.properties 
usage: dirname path
/Volumes/NO NAME/LISA 
15/training_materials/training-program/materials/M4/kafka_2.11-0.8.2.2/bin/kafka-server-start.sh:
 line 44: /kafka-run-class.sh: No such file or directory
/Volumes/NO NAME/LISA 
15/training_materials/training-program/materials/M4/kafka_2.11-0.8.2.2/bin/kafka-server-start.sh:
 line 44: exec: /kafka-run-class.sh: cannot execute: No such file or directory
jbringhu@jbringhu-mn3 /V/N/L/t/t/m/M/kafka_2.11-0.8.2.2>
{noformat}

This was discovered during LISA15. After I'm back from the conference, I'll try 
to edit this and add simple instructions to reproduce (and maybe a patch).

  was:
When spaces are present in the path leading up to kafka-server-start.sh, 
kafka-server-start.sh has trouble finding kafka-run-class.sh.

The path that is executed should be quoted.

{noformat}
jbringhu@jbringhu-mn3 /V/N/L/t/t/m/M/kafka_2.11-0.8.2.2> /Volumes/NO\ 
NAME/LISA\ 
15/training_materials/training-program/materials/M4/kafka_2.11-0.8.2.2/bin/kafka-server-start.sh
 ./config/server.properties 
usage: dirname path
/Volumes/NO NAME/LISA 
15/training_materials/training-program/materials/M4/kafka_2.11-0.8.2.2/bin/kafka-server-start.sh:
 line 44: /kafka-run-class.sh: No such file or directory
/Volumes/NO NAME/LISA 
15/training_materials/training-program/materials/M4/kafka_2.11-0.8.2.2/bin/kafka-server-start.sh:
 line 44: exec: /kafka-run-class.sh: cannot execute: No such file or directory
jbringhu@jbringhu-mn3 /V/N/L/t/t/m/M/kafka_2.11-0.8.2.2>
{noformat}


> kafka-server-start.sh globbing issue
> 
>
> Key: KAFKA-2780
> URL: https://issues.apache.org/jira/browse/KAFKA-2780
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jon Bringhurst
>
> When spaces are present in the path leading up to kafka-server-start.sh, 
> kafka-server-start.sh has trouble finding kafka-run-class.sh.
> The path that is executed should be quoted.
> {noformat}
> jbringhu@jbringhu-mn3 /V/N/L/t/t/m/M/kafka_2.11-0.8.2.2> /Volumes/NO\ 
> NAME/LISA\ 
> 15/training_materials/training-program/materials/M4/kafka_2.11-0.8.2.2/bin/kafka-server-start.sh
>  ./config/server.properties 
> usage: dirname path
> /Volumes/NO NAME/LISA 
> 15/training_materials/training-program/materials/M4/kafka_2.11-0.8.2.2/bin/kafka-server-start.sh:
>  line 44: /kafka-run-class.sh: No such file or directory
> /Volumes/NO NAME/LISA 
> 15/training_materials/training-program/materials/M4/kafka_2.11-0.8.2.2/bin/kafka-server-start.sh:
>  line 44: exec: /kafka-run-class.sh: cannot execute: No such file or directory
> jbringhu@jbringhu-mn3 /V/N/L/t/t/m/M/kafka_2.11-0.8.2.2>
> {noformat}
> This was discovered during LISA15. After I'm back from the conference, I'll 
> try to edit this and add simple instructions to reproduce (and maybe a patch).





[jira] [Resolved] (KAFKA-2674) ConsumerRebalanceListener.onPartitionsRevoked() is not called on consumer close

2015-11-09 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2674?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-2674.
--
   Resolution: Fixed
Fix Version/s: 0.9.0.0

Issue resolved by pull request 467
[https://github.com/apache/kafka/pull/467]

> ConsumerRebalanceListener.onPartitionsRevoked() is not called on consumer 
> close
> ---
>
> Key: KAFKA-2674
> URL: https://issues.apache.org/jira/browse/KAFKA-2674
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.9.0.0
>Reporter: Michal Turek
>Assignee: Jason Gustafson
> Fix For: 0.9.0.0
>
>
> Hi, I'm investigating and testing the behavior of the new consumer from the 
> planned 0.9 release and found an inconsistency in the calling of rebalance 
> callbacks.
> I noticed that ConsumerRebalanceListener.onPartitionsRevoked() is NOT called 
> during consumer close and application shutdown. Its JavaDoc contract says:
> - "This method will be called before a rebalance operation starts and after 
> the consumer stops fetching data."
> - "It is recommended that offsets should be committed in this callback to 
> either Kafka or a custom offset store to prevent duplicate data."
> I believe calling consumer.close() is the start of a rebalance operation, and 
> even the local consumer that is actually closing should be notified so it can 
> process any rebalance logic, including an offsets commit (e.g. if auto-commit 
> is disabled).
> Below are commented logs of the current and expected behaviors.
> {noformat}
> // Application start
> 2015-10-20 15:14:02.208 INFO  o.a.k.common.utils.AppInfoParser
> [TestConsumer-worker-0]: Kafka version : 0.9.0.0-SNAPSHOT 
> (AppInfoParser.java:82)
> 2015-10-20 15:14:02.208 INFO  o.a.k.common.utils.AppInfoParser
> [TestConsumer-worker-0]: Kafka commitId : 241b9ab58dcbde0c 
> (AppInfoParser.java:83)
> // Consumer started (the first one in group), rebalance callbacks are called 
> including empty onPartitionsRevoked()
> 2015-10-20 15:14:02.333 INFO  c.a.e.kafka.newapi.TestConsumer 
> [TestConsumer-worker-0]: Rebalance callback, revoked: [] 
> (TestConsumer.java:95)
> 2015-10-20 15:14:02.343 INFO  c.a.e.kafka.newapi.TestConsumer 
> [TestConsumer-worker-0]: Rebalance callback, assigned: [testB-1, testA-0, 
> testB-0, testB-3, testA-2, testB-2, testA-1, testA-4, testB-4, testA-3] 
> (TestConsumer.java:100)
> // Another consumer joined the group, rebalancing
> 2015-10-20 15:14:17.345 INFO  o.a.k.c.c.internals.Coordinator 
> [TestConsumer-worker-0]: Attempt to heart beat failed since the group is 
> rebalancing, try to re-join group. (Coordinator.java:714)
> 2015-10-20 15:14:17.346 INFO  c.a.e.kafka.newapi.TestConsumer 
> [TestConsumer-worker-0]: Rebalance callback, revoked: [testB-1, testA-0, 
> testB-0, testB-3, testA-2, testB-2, testA-1, testA-4, testB-4, testA-3] 
> (TestConsumer.java:95)
> 2015-10-20 15:14:17.349 INFO  c.a.e.kafka.newapi.TestConsumer 
> [TestConsumer-worker-0]: Rebalance callback, assigned: [testB-3, testA-4, 
> testB-4, testA-3] (TestConsumer.java:100)
> // Consumer started closing, there SHOULD be onPartitionsRevoked() callback 
> to commit offsets like during standard rebalance, but it is missing
> 2015-10-20 15:14:39.280 INFO  c.a.e.kafka.newapi.TestConsumer [main]: 
> Closing instance (TestConsumer.java:42)
> 2015-10-20 15:14:40.264 INFO  c.a.e.kafka.newapi.TestConsumer 
> [TestConsumer-worker-0]: Worker thread stopped (TestConsumer.java:89)
> {noformat}
> A workaround is to call onPartitionsRevoked() explicitly just before calling 
> consumer.close(), but that seems dirty and error-prone to me. It can easily 
> be forgotten by someone without such experience.





[jira] [Commented] (KAFKA-2674) ConsumerRebalanceListener.onPartitionsRevoked() is not called on consumer close

2015-11-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2674?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14997165#comment-14997165
 ] 

ASF GitHub Bot commented on KAFKA-2674:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/467


> ConsumerRebalanceListener.onPartitionsRevoked() is not called on consumer 
> close
> ---
>
> Key: KAFKA-2674
> URL: https://issues.apache.org/jira/browse/KAFKA-2674
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.9.0.0
>Reporter: Michal Turek
>Assignee: Jason Gustafson
> Fix For: 0.9.0.0
>
>
> Hi, I'm investigating and testing the behavior of the new consumer from the 
> planned 0.9 release and found an inconsistency in the calling of rebalance 
> callbacks.
> I noticed that ConsumerRebalanceListener.onPartitionsRevoked() is NOT called 
> during consumer close and application shutdown. Its JavaDoc contract says:
> - "This method will be called before a rebalance operation starts and after 
> the consumer stops fetching data."
> - "It is recommended that offsets should be committed in this callback to 
> either Kafka or a custom offset store to prevent duplicate data."
> I believe calling consumer.close() is the start of a rebalance operation, and 
> even the local consumer that is actually closing should be notified so it can 
> process any rebalance logic, including an offsets commit (e.g. if auto-commit 
> is disabled).
> Below are commented logs of the current and expected behaviors.
> {noformat}
> // Application start
> 2015-10-20 15:14:02.208 INFO  o.a.k.common.utils.AppInfoParser
> [TestConsumer-worker-0]: Kafka version : 0.9.0.0-SNAPSHOT 
> (AppInfoParser.java:82)
> 2015-10-20 15:14:02.208 INFO  o.a.k.common.utils.AppInfoParser
> [TestConsumer-worker-0]: Kafka commitId : 241b9ab58dcbde0c 
> (AppInfoParser.java:83)
> // Consumer started (the first one in group), rebalance callbacks are called 
> including empty onPartitionsRevoked()
> 2015-10-20 15:14:02.333 INFO  c.a.e.kafka.newapi.TestConsumer 
> [TestConsumer-worker-0]: Rebalance callback, revoked: [] 
> (TestConsumer.java:95)
> 2015-10-20 15:14:02.343 INFO  c.a.e.kafka.newapi.TestConsumer 
> [TestConsumer-worker-0]: Rebalance callback, assigned: [testB-1, testA-0, 
> testB-0, testB-3, testA-2, testB-2, testA-1, testA-4, testB-4, testA-3] 
> (TestConsumer.java:100)
> // Another consumer joined the group, rebalancing
> 2015-10-20 15:14:17.345 INFO  o.a.k.c.c.internals.Coordinator 
> [TestConsumer-worker-0]: Attempt to heart beat failed since the group is 
> rebalancing, try to re-join group. (Coordinator.java:714)
> 2015-10-20 15:14:17.346 INFO  c.a.e.kafka.newapi.TestConsumer 
> [TestConsumer-worker-0]: Rebalance callback, revoked: [testB-1, testA-0, 
> testB-0, testB-3, testA-2, testB-2, testA-1, testA-4, testB-4, testA-3] 
> (TestConsumer.java:95)
> 2015-10-20 15:14:17.349 INFO  c.a.e.kafka.newapi.TestConsumer 
> [TestConsumer-worker-0]: Rebalance callback, assigned: [testB-3, testA-4, 
> testB-4, testA-3] (TestConsumer.java:100)
> // Consumer started closing, there SHOULD be onPartitionsRevoked() callback 
> to commit offsets like during standard rebalance, but it is missing
> 2015-10-20 15:14:39.280 INFO  c.a.e.kafka.newapi.TestConsumer [main]: 
> Closing instance (TestConsumer.java:42)
> 2015-10-20 15:14:40.264 INFO  c.a.e.kafka.newapi.TestConsumer 
> [TestConsumer-worker-0]: Worker thread stopped (TestConsumer.java:89)
> {noformat}
> A workaround is to call onPartitionsRevoked() explicitly just before calling 
> consumer.close(), but that seems dirty and error-prone to me. It can easily 
> be forgotten by someone without such experience.




