[jira] [Commented] (KAFKA-2455) Test Failure: kafka.consumer.MetricsTest > testMetricsLeak

2015-12-18 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2455?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15063856#comment-15063856
 ] 

ASF GitHub Bot commented on KAFKA-2455:
---

GitHub user ZoneMayor opened a pull request:

https://github.com/apache/kafka/pull/694

KAFKA-2455: Test Failure: kafka.consumer.MetricsTest > testMetricsLeak

@guozhangwang 
I reproduced this.
Every time this test failed, I found an extra "type=DelayedFetchMetrics, 
name=ExpiresPerSec, fetcherType=consumer/follower" entry inside 
Metrics.defaultRegistry().allMetrics().keySet().
That is because, for a "DelayedFetch", the DelayedFetchMetrics object is 
loaded lazily, on the first call to its "onExpiration" function.
To fix this, I load the object explicitly before the test counts metrics.
Is it appropriate?
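A minimal sketch (in Java, with made-up names, not Kafka's actual code) of why a lazily-initialized metrics holder makes registry counts nondeterministic: a static initializer only runs the first time the class is touched, so a metric snapshot taken before that point misses the entry, while a test run that happened to trigger an expiration sees one extra key.

```java
import java.util.Set;
import java.util.TreeSet;

public class LazyMetricDemo {
    // Stand-in for Metrics.defaultRegistry().allMetrics().keySet()
    static final Set<String> REGISTRY = new TreeSet<>();

    // Stand-in for DelayedFetchMetrics: registers its meter on class load,
    // which only happens on first access (e.g. inside onExpiration).
    static class DelayedFetchMetrics {
        static {
            REGISTRY.add("type=DelayedFetchMetrics,name=ExpiresPerSec");
        }
        static void touch() { /* forces class initialization */ }
    }

    public static void main(String[] args) {
        int before = REGISTRY.size();   // 0: nested class not loaded yet
        DelayedFetchMetrics.touch();    // the proposed fix: load it eagerly
        int after = REGISTRY.size();    // 1: metric now registered
        System.out.println(before + " -> " + after);
    }
}
```

Loading the holder eagerly, before the test takes its baseline count, makes the registry size stable across runs.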

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ZoneMayor/kafka trunk-KAFKA-2455

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/694.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #694


commit 34240b52e1b70aa172b65155f6042243d838b420
Author: ZoneMayor 
Date:   2015-12-18T07:22:20Z

Merge pull request #12 from apache/trunk

2015-12-18

commit 5e6284e034fc8e92745f1ebdda2fa228b8beaf4f
Author: jinxing 
Date:   2015-12-18T11:17:17Z

KAFKA-2455: Test Failure: kafka.consumer.MetricsTest > testMetricsLeak




> Test Failure: kafka.consumer.MetricsTest > testMetricsLeak 
> ---
>
> Key: KAFKA-2455
> URL: https://issues.apache.org/jira/browse/KAFKA-2455
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Gwen Shapira
>
> I've seen this failure in builds twice recently:
> kafka.consumer.MetricsTest > testMetricsLeak FAILED
> java.lang.AssertionError: expected:<174> but was:<176>
> at org.junit.Assert.fail(Assert.java:92)
> at org.junit.Assert.failNotEquals(Assert.java:689)
> at org.junit.Assert.assertEquals(Assert.java:127)
> at org.junit.Assert.assertEquals(Assert.java:514)
> at org.junit.Assert.assertEquals(Assert.java:498)
> at 
> kafka.consumer.MetricsTest$$anonfun$testMetricsLeak$1.apply$mcVI$sp(MetricsTest.scala:65)
> at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:141)
> at kafka.consumer.MetricsTest.testMetricsLeak(MetricsTest.scala:63)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2977) Transient Failure in kafka.log.LogCleanerIntegrationTest.cleanerTest

2015-12-18 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2977?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15063862#comment-15063862
 ] 

ASF GitHub Bot commented on KAFKA-2977:
---

Github user ZoneMayor closed the pull request at:

https://github.com/apache/kafka/pull/671


> Transient Failure in kafka.log.LogCleanerIntegrationTest.cleanerTest
> 
>
> Key: KAFKA-2977
> URL: https://issues.apache.org/jira/browse/KAFKA-2977
> Project: Kafka
>  Issue Type: Sub-task
>Affects Versions: 0.9.0.0
>Reporter: Guozhang Wang
>Assignee: jin xing
>  Labels: newbie
> Fix For: 0.9.0.1
>
>
> {code}
> java.lang.AssertionError: log cleaner should have processed up to offset 599
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at 
> kafka.log.LogCleanerIntegrationTest.cleanerTest(LogCleanerIntegrationTest.scala:76)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at org.junit.runners.Suite.runChild(Suite.java:128)
>   at org.junit.runners.Suite.runChild(Suite.java:27)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:105)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.execute(JUnitTestClassExecuter.java:56)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassProcessor.processTestClass(JUnitTestClassProcessor.java:64)
>   at 
> org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:50)
>   at sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> org.gradle.messaging.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32)
>   at 
> org.gradle.messaging.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
>   at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
>   at 
> org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:106)
>   at sun.reflect.GeneratedMethodAccessor10.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> 

[jira] [Commented] (KAFKA-2977) Transient Failure in kafka.log.LogCleanerIntegrationTest.cleanerTest

2015-12-18 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2977?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15063863#comment-15063863
 ] 

ASF GitHub Bot commented on KAFKA-2977:
---

GitHub user ZoneMayor reopened a pull request:

https://github.com/apache/kafka/pull/671

KAFKA-2977: Transient Failure in 
kafka.log.LogCleanerIntegrationTest.cleanerTest

Hi @guozhangwang 

Code is as below:

    val appends = writeDups(numKeys = 100, numDups = 3, log,
      CompressionCodec.getCompressionCodec(compressionCodec))
    cleaner.startup()
    val firstDirty = log.activeSegment.baseOffset
    cleaner.awaitCleaned("log", 0, firstDirty)

    val appends2 = appends ++ writeDups(numKeys = 100, numDups = 3, log,
      CompressionCodec.getCompressionCodec(compressionCodec))
    val firstDirty2 = log.activeSegment.baseOffset
    cleaner.awaitCleaned("log", 0, firstDirty2)

The log cleaner and writeDups run on two different threads.
The log cleaner does a cleaning pass every 15s; the timeout in 
"cleaner.awaitCleaned" is 60s.
There is a filtering condition for a log to be chosen as a cleaning target: 
its cleanableRatio must exceed 0.5 (the default 
log.cleaner.min.cleanable.ratio).
It may happen that, while the second writeDups call is still appending, the 
log is also undergoing a cleaning pass.
Since the segment size configured in this test is quite small (100), there 
is a possibility that before the end of 'writeDups', some 'dirty' segments 
of the log have already been cleaned.
With only a tiny dirty part left, cleanableRatio > 0.5 cannot be satisfied; 
thus firstDirty2 > lastCleaned2, which makes this test fail.

Does it make sense?
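A rough illustration (Java, with invented byte counts; not Kafka's actual cleaner code) of the cleanable-ratio filter described above: once a concurrent cleaning pass has shrunk the dirty portion of the log, the ratio drops below the 0.5 default and the log is no longer selected, so awaitCleaned times out waiting for firstDirty2.

```java
public class CleanableRatioDemo {
    // Default for log.cleaner.min.cleanable.ratio
    static final double MIN_CLEANABLE_RATIO = 0.5;

    // Simplified selection check: dirty bytes as a fraction of total bytes
    // must exceed the threshold for the log to become a cleaning target.
    static boolean isCleanable(long cleanBytes, long dirtyBytes) {
        double ratio = (double) dirtyBytes / (cleanBytes + dirtyBytes);
        return ratio > MIN_CLEANABLE_RATIO;
    }

    public static void main(String[] args) {
        // Before the concurrent pass: mostly dirty, so the log gets picked.
        System.out.println(isCleanable(100, 500));
        // After the pass: only a tiny dirty tail remains, so it is skipped
        // and the test's second awaitCleaned never sees progress.
        System.out.println(isCleanable(550, 50));
    }
}
```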

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ZoneMayor/kafka trunk-KAFKA-2977

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/671.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #671


commit 95374147a28208d4850f6e73f714bf418935fc2d
Author: ZoneMayor 
Date:   2015-11-27T03:49:34Z

Merge pull request #1 from apache/trunk

merge

commit cec5b48b651a7efd3900cfa3c1fd0ab1eeeaa3ec
Author: ZoneMayor 
Date:   2015-12-01T10:44:02Z

Merge pull request #2 from apache/trunk

2015-12-1

commit a119d547bf1741625ce0627073c7909992a20f15
Author: ZoneMayor 
Date:   2015-12-04T13:42:27Z

Merge pull request #3 from apache/trunk

2015-12-04#KAFKA-2893

commit b767a8dff85fc71c75d4cf5178c3f6f03ff81bfc
Author: ZoneMayor 
Date:   2015-12-09T10:42:30Z

Merge pull request #5 from apache/trunk

2015-12-9

commit 0070c2d71d06ee8baa1cddb3451cd5af6c6b1d4a
Author: ZoneMayor 
Date:   2015-12-11T14:50:30Z

Merge pull request #8 from apache/trunk

2015-12-11

commit 09908ac646d4c84f854dad63b8c99213b74a7063
Author: ZoneMayor 
Date:   2015-12-13T14:17:19Z

Merge pull request #9 from apache/trunk

2015-12-13

commit ff1e68bb7101d12624c189174ef1dceb21ed9798
Author: jinxing 
Date:   2015-12-13T14:31:34Z

KAFKA-2054: Transient Failure in 
kafka.log.LogCleanerIntegrationTest.cleanerTest

commit 6321ab6599cb7a981fac2a4eea64a5f2ea805dd6
Author: jinxing 
Date:   2015-12-13T14:36:11Z

removed unnecessary maven repo

commit 05cae52c72a02c0ed40fd4e3be03e1cb19f33f7a
Author: jinxing 
Date:   2015-12-17T12:21:12Z

removed the semicolon

commit 651de48663cf375ea714cdbeb34650d75f1f4d71
Author: jinxing 
Date:   2015-12-18T07:43:38Z

KAFKA-2977: Transient Failure in 
kafka.log.LogCleanerIntegrationTest.cleanerTest

commit ba03eee18045dc4aabc56ff17907036c238b1f7d
Author: jinxing 
Date:   2015-12-18T07:50:57Z

KAFKA-2977: fix




> Transient Failure in kafka.log.LogCleanerIntegrationTest.cleanerTest
> 
>
> Key: KAFKA-2977
> URL: https://issues.apache.org/jira/browse/KAFKA-2977
> Project: Kafka
>  Issue Type: Sub-task
>Affects Versions: 0.9.0.0
>Reporter: Guozhang Wang
>Assignee: jin xing
>  Labels: newbie
> Fix For: 0.9.0.1
>
>
> {code}
> java.lang.AssertionError: log cleaner should have processed up to offset 599
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at 
> kafka.log.LogCleanerIntegrationTest.cleanerTest(LogCleanerIntegrationTest.scala:76)
>   at 

[jira] [Commented] (KAFKA-2547) Make DynamicConfigManager to use the ZkNodeChangeNotificationListener introduced as part of KAFKA-2211

2015-12-15 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2547?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15058846#comment-15058846
 ] 

ASF GitHub Bot commented on KAFKA-2547:
---

GitHub user Parth-Brahmbhatt opened a pull request:

https://github.com/apache/kafka/pull/679

KAFKA-2547: Make DynamicConfigManager to use the ZkNodeChangeNotifica…

…tionListener introduced as part of KAFKA-2211

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/Parth-Brahmbhatt/kafka KAFKA-2547

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/679.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #679


commit 10071378d38bab1bcf00aefa0ef678fdb77c8f5b
Author: Parth Brahmbhatt 
Date:   2015-12-15T21:15:22Z

KAFKA-2547: Make DynamicConfigManager to use the 
ZkNodeChangeNotificationListener introduced as part of KAFKA-2211




> Make DynamicConfigManager to use the ZkNodeChangeNotificationListener 
> introduced as part of KAFKA-2211
> --
>
> Key: KAFKA-2547
> URL: https://issues.apache.org/jira/browse/KAFKA-2547
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Parth Brahmbhatt
>Assignee: Parth Brahmbhatt
>
> As part of KAFKA-2211 (https://github.com/apache/kafka/pull/195/files) we 
> introduced a reusable ZkNodeChangeNotificationListener to ensure node changes 
> can be processed in a lossless way. This was pretty much the same code as in 
> DynamicConfigManager, with a little bit of refactoring so it could be reused. We 
> now need to make DynamicConfigManager itself use this new class once 
> KAFKA-2211 is committed, to avoid code duplication.





[jira] [Commented] (KAFKA-2509) Replace LeaderAndIsr{Request,Response} with org.apache.kafka.common.network.requests equivalent

2015-12-15 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2509?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15058897#comment-15058897
 ] 

ASF GitHub Bot commented on KAFKA-2509:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/647


> Replace LeaderAndIsr{Request,Response} with 
> org.apache.kafka.common.network.requests equivalent
> ---
>
> Key: KAFKA-2509
> URL: https://issues.apache.org/jira/browse/KAFKA-2509
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Ismael Juma
>Assignee: Grant Henke
> Fix For: 0.9.1.0
>
>






[jira] [Commented] (KAFKA-2979) Enable authorizer and ACLs in ducktape tests

2015-12-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2979?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1506#comment-1506
 ] 

ASF GitHub Bot commented on KAFKA-2979:
---

GitHub user fpj opened a pull request:

https://github.com/apache/kafka/pull/683

KAFKA-2979: Enable authorizer and ACLs in ducktape tests

Patch by @fpj and @benstopford.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/fpj/kafka KAFKA-2979

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/683.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #683


commit 5586e3950442060e5c5dc19b89381e86c6d4a04f
Author: flavio junqueira 
Date:   2015-11-27T20:14:35Z

First cut of the ducktape test.

commit 15c23475ae440c0b77048510b12c69f4941950e1
Author: flavio junqueira 
Date:   2015-11-28T00:00:05Z

Fixes to references in zookeeper.py.

commit b0ff7f97fee552c6266cc3f5ce09f7e99db97d23
Author: flavio junqueira 
Date:   2015-11-28T04:00:45Z

Test case passes.

commit 885b42a7e1a8b42b00ed1c8cbb76ba2dc8930757
Author: flavio junqueira 
Date:   2015-11-28T12:35:44Z

KAFKA-2905: Make zookeeper replicated.

commit ff4e8f75845259d755cc0a0a11008115f5aff7e3
Author: flavio junqueira 
Date:   2015-11-30T15:24:58Z

KAFKA-2905: Clean up - moved config file, removed warns, moved jaas 
generation.

commit d78656e6faed82f2c8616f00c1ed1cfed97d2f3f
Author: flavio junqueira 
Date:   2015-12-01T16:29:11Z

KAFKA-2905: jaas reference and generation improvements.

commit 2628db2223818143c1509397aa6c384484525ff4
Author: flavio junqueira 
Date:   2015-12-01T16:38:11Z

KAFKA-2905: Changes to kafka.properties.

commit e78c9b4f3a5bd30bc8cd501076618f0642c5972a
Author: flavio junqueira 
Date:   2015-12-01T16:39:18Z

KAFKA-2905: Increased timeout for producer to get it to pass in my local 
machine.

commit abb09c007aaf3144853060efdb65cab74a0bd790
Author: flavio junqueira 
Date:   2015-12-01T16:41:50Z

Merge remote-tracking branch 'upstream/trunk' into KAFKA-2905

commit b9d3be240743c0541aaa9369d381562f5dd2969c
Author: flavio junqueira 
Date:   2015-12-01T17:01:43Z

KAFKA-2905: Adding plain_jaas.conf.

commit 85aa0713d86fb6783cbd29709834d2013aa61822
Author: flavio junqueira 
Date:   2015-12-01T23:14:55Z

KAFKA-2905: Removing commented code.

commit 70a21a4c10e474ae5f7996ee3badcfc448494917
Author: flavio junqueira 
Date:   2015-12-02T00:14:43Z

KAFKA-2905: Removed unnecessary sleep.

commit 21fb8ec5ce6711704dfe2217c47040fae7bad323
Author: flavio junqueira 
Date:   2015-12-02T00:41:26Z

KAFKA-2905: Removing PLAIN.

commit dcf76bf3d49680bbd2a07d102d7855d2b08ee6d1
Author: flavio junqueira 
Date:   2015-12-02T01:45:52Z

KAFKA-2905: Removed missing instance of PLAIN.

commit d66dae448a61bc6c12a7c61f9ae9bdf6b75057c2
Author: flavio junqueira 
Date:   2015-12-02T09:14:15Z

KAFKA-2905: Corrected the min isr configuration.

commit de068a2bbfe863ee4be3799fffdcfadff00ba67e
Author: flavio junqueira 
Date:   2015-12-02T13:28:50Z

KAFKA-2905: Changed to Kerberos auth.

commit 95bc8a938af8078fa907c64f1c1983402f19ad48
Author: flavio junqueira 
Date:   2015-12-02T18:14:08Z

KAFKA-2905: Moving system properties to zookeeper.py.

commit fc6ff2eb0578767ea278742a12f26c675b6cfc28
Author: flavio junqueira 
Date:   2015-12-02T18:22:05Z

KAFKA-2905: Remove changes to timeouts in produce_consume_validate.py.

commit 755959504cae1441046c131e5c921cc18c5d5b4b
Author: flavio junqueira 
Date:   2015-12-03T00:06:24Z

KAFKA-2905: Removed change in minikdc.py.

commit 548043593a2ff5711a631452f9e0732420a22dd6
Author: flavio junqueira 
Date:   2015-12-03T00:09:04Z

KAFKA-2905: Missing colon in zookeeper.py.

commit 0d200b7700924e059083e9d8d70d0f9ad7339bd1
Author: flavio junqueira 
Date:   2015-12-03T00:13:46Z

Merge remote-tracking branch 'upstream/trunk' into KAFKA-2905

commit a2b710c97d8bb50d4f7e4336bf81296b7a08
Author: flavio junqueira 
Date:   2015-12-03T01:19:12Z

KAFKA-2905: Fixed bug in the generation of the principals string.

commit e820d0cd6ff34e7bbf481bcab4fe371f44110828
Author: flavio junqueira 
Date:   2015-12-03T11:41:13Z

KAFKA-2905: Increased zk connect time out and made zk jaas config 
conditional.

commit 6b279ba4202ad339bc9575c7e5c1a3f9e6085c8f
Author: flavio junqueira 
Date:   2015-12-03T18:25:49Z


[jira] [Commented] (KAFKA-2653) Stateful operations in the KStream DSL layer

2015-12-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2653?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15051830#comment-15051830
 ] 

ASF GitHub Bot commented on KAFKA-2653:
---

GitHub user guozhangwang opened a pull request:

https://github.com/apache/kafka/pull/665

KAFKA-2653 Phase I: Stateful Operation API Design



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/guozhangwang/kafka K2653r

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/665.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #665


commit e46d649c2e40078ed161c83fdc1690456f09f43a
Author: Guozhang Wang 
Date:   2015-12-10T04:31:25Z

v1

commit 2167f29ff630577fe63abc93fd8a58aa6c7d3c1c
Author: Guozhang Wang 
Date:   2015-12-10T19:32:34Z

option 1 of windowing opeartions




> Stateful operations in the KStream DSL layer
> 
>
> Key: KAFKA-2653
> URL: https://issues.apache.org/jira/browse/KAFKA-2653
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Guozhang Wang
>Assignee: Guozhang Wang
>
> This includes the interface design and the implementation for stateful 
> operations including:
> 0. table representation in KStream.
> 1. stream-stream join.
> 2. stream-table join.
> 3. table-table join.
> 4. stream / table aggregations.
> With 0 and 3 being tackled in KAFKA-2856 and KAFKA-2962 separately, this 
> ticket is going to only focus on windowing definition and 1 / 2 / 4 above.





[jira] [Commented] (KAFKA-2928) system tests: failures in version-related sanity checks

2015-12-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2928?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15051849#comment-15051849
 ] 

ASF GitHub Bot commented on KAFKA-2928:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/656


> system tests: failures in version-related sanity checks
> ---
>
> Key: KAFKA-2928
> URL: https://issues.apache.org/jira/browse/KAFKA-2928
> Project: Kafka
>  Issue Type: Bug
>Reporter: Geoff Anderson
>Assignee: Geoff Anderson
> Fix For: 0.9.1.0
>
>
> There have been a few consecutive failures of version-related sanity checks 
> in nightly system test runs:
> kafkatest.sanity_checks.test_verifiable_producer
> kafkatest.sanity_checks.test_kafka_version
> assert is_version(...) is failing
> utils.util.is_version is a fairly rough heuristic, so most likely this needs 
> to be updated.
> E.g., see
> http://testing.confluent.io/kafka/2015-12-01--001/
> (if this is broken, use 
> http://testing.confluent.io/kafka/2015-12-01--001.tar.gz)





[jira] [Commented] (KAFKA-2896) System test for partition re-assignment

2015-12-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2896?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15051954#comment-15051954
 ] 

ASF GitHub Bot commented on KAFKA-2896:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/655


> System test for partition re-assignment
> ---
>
> Key: KAFKA-2896
> URL: https://issues.apache.org/jira/browse/KAFKA-2896
> Project: Kafka
>  Issue Type: Task
>Reporter: Gwen Shapira
>Assignee: Anna Povzner
> Fix For: 0.9.1.0
>
>
> Lots of users depend on the partition re-assignment tool to manage their 
> cluster. It would be nice to have a simple system test that creates a topic 
> with a few partitions and a few replicas, reassigns everything, and validates 
> the ISR afterwards. 
> Just to make sure we are not breaking anything, especially since we have 
> plans to improve (read: modify) this area.





[jira] [Commented] (KAFKA-2893) Add Negative Partition Seek Check

2015-12-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15051449#comment-15051449
 ] 

ASF GitHub Bot commented on KAFKA-2893:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/628


> Add Negative Partition Seek Check
> -
>
> Key: KAFKA-2893
> URL: https://issues.apache.org/jira/browse/KAFKA-2893
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.9.0.0
>Reporter: Jesse Anderson
>Assignee: jin xing
> Fix For: 0.9.0.1
>
>
> When seeking to an offset that is a negative number, there isn't a check. When 
> you do give a negative number, you get the following output:
> {{2015-11-25 13:54:16 INFO  Fetcher:567 - Fetch offset null is out of range, 
> resetting offset}}
> Code to replicate:
> KafkaConsumer<String, String> consumer = new KafkaConsumer<String, 
> String>(props);
> TopicPartition partition = new TopicPartition(topic, 0);
> consumer.assign(Arrays.asList(partition));
> consumer.seek(partition, -1);
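A sketch of the kind of argument check the ticket asks for (illustrative only; the names here are hypothetical and the actual fix is whatever landed in PR #628): fail fast with a descriptive exception instead of letting the fetcher later log a confusing "Fetch offset null is out of range".

```java
public class SeekGuardDemo {
    // Hypothetical stand-in for the validation a consumer's seek could do.
    static void seek(long offset) {
        if (offset < 0)
            throw new IllegalArgumentException(
                "seek offset must not be negative: " + offset);
        // ... proceed with normal seek logic ...
    }

    public static void main(String[] args) {
        try {
            seek(-1);
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}
```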





[jira] [Commented] (KAFKA-2837) FAILING TEST: kafka.api.ProducerBounceTest > testBrokerFailure

2015-12-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15051284#comment-15051284
 ] 

ASF GitHub Bot commented on KAFKA-2837:
---

Github user ZoneMayor closed the pull request at:

https://github.com/apache/kafka/pull/648


> FAILING TEST: kafka.api.ProducerBounceTest > testBrokerFailure 
> ---
>
> Key: KAFKA-2837
> URL: https://issues.apache.org/jira/browse/KAFKA-2837
> Project: Kafka
>  Issue Type: Sub-task
>Affects Versions: 0.9.0.0
>Reporter: Gwen Shapira
>Assignee: jin xing
>  Labels: newbie
> Fix For: 0.9.0.1, 0.9.1.0
>
>
> {code}
> java.lang.AssertionError
>   at org.junit.Assert.fail(Assert.java:86)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.junit.Assert.assertTrue(Assert.java:52)
>   at 
> kafka.api.ProducerBounceTest.testBrokerFailure(ProducerBounceTest.scala:117)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:105)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.execute(JUnitTestClassExecuter.java:56)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassProcessor.processTestClass(JUnitTestClassProcessor.java:64)
>   at 
> org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:50)
>   at sun.reflect.GeneratedMethodAccessor12.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> org.gradle.messaging.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32)
>   at 
> org.gradle.messaging.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
>   at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
>   at 
> org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:106)
>   at sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> org.gradle.messaging.remote.internal.hub.MessageHub$Handler.run(MessageHub.java:360)
>   at 
> org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:54)
>   at 
> org.gradle.internal.concurrent.StoppableExecutorImpl$1.run(StoppableExecutorImpl.java:40)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)

[jira] [Commented] (KAFKA-2837) FAILING TEST: kafka.api.ProducerBounceTest > testBrokerFailure

2015-12-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15051285#comment-15051285
 ] 

ASF GitHub Bot commented on KAFKA-2837:
---

GitHub user ZoneMayor reopened a pull request:

https://github.com/apache/kafka/pull/648

KAFKA-2837: fix transient failure of kafka.api.ProducerBounceTest > 
testBrokerFailure

I can reproduce this transient failure, though it seldom happens;
the code is as below:
    // rolling bounce brokers
    for (i <- 0 until numServers) {
      for (server <- servers) {
        server.shutdown()
        server.awaitShutdown()
        server.startup()
        Thread.sleep(2000)
      }

      // Make sure the producer does not see any exception
      // in returned metadata due to broker failures
      assertTrue(scheduler.failed == false)

      // Make sure the leader still exists after bouncing brokers
      (0 until numPartitions).foreach(partition =>
        TestUtils.waitUntilLeaderIsElectedOrChanged(zkUtils, topic1, partition))
    }
Brokers keep rolling-restarting while the producer keeps sending messages.
In every loop, the test waits for the election of a partition leader;
but if the election is slow, more messages get buffered in the 
RecordAccumulator's BufferPool.
The limit for the buffer is set to 3;
a TimeoutException("Failed to allocate memory within the configured max 
blocking time") shows up when the pool runs out of memory.
Since every restart of a broker sleeps for 2000 ms, this transient failure 
seldom happens; but if I reduce the sleeping period, there is a bigger 
chance the failure happens.
For example, if the broker acting as controller is restarted, it takes time 
to elect a controller first and then elect a leader, which leaves more 
messages blocked in KafkaProducer's RecordAccumulator BufferPool.
In this fix, I just enlarge the producer's buffer size to 1MB.
@guozhangwang, could you give some comments?
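A toy model (Java; class and field names are invented, this is not Kafka's real BufferPool) of the failure mode described above: while no leader exists, nothing drains the pool, so an allocation blocks until the configured max block time and then fails with the same message as in the stack trace. Enlarging the total memory, as the PR does, gives slow elections more headroom.

```java
import java.util.concurrent.TimeoutException;

public class BufferPoolDemo {
    private long available;
    private final long maxBlockMs;

    BufferPoolDemo(long totalMemory, long maxBlockMs) {
        this.available = totalMemory;
        this.maxBlockMs = maxBlockMs;
    }

    // Simplified allocate: wait up to maxBlockMs for memory to be freed,
    // then fail the way the producer does during a slow leader election.
    synchronized void allocate(long size)
            throws TimeoutException, InterruptedException {
        long deadline = System.currentTimeMillis() + maxBlockMs;
        while (available < size) {
            long remaining = deadline - System.currentTimeMillis();
            if (remaining <= 0)
                throw new TimeoutException(
                    "Failed to allocate memory within the configured max blocking time");
            wait(remaining);
        }
        available -= size;
    }

    // Called when a batch completes, freeing memory for blocked allocators.
    synchronized void deallocate(long size) {
        available += size;
        notifyAll();
    }

    public static void main(String[] args) throws Exception {
        BufferPoolDemo pool = new BufferPoolDemo(100, 10);
        pool.allocate(50);
        try {
            pool.allocate(100);   // cannot be satisfied: blocks, then times out
        } catch (TimeoutException e) {
            System.out.println(e.getMessage());
        }
    }
}
```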

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ZoneMayor/kafka trunk-KAFKA-2837

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/648.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #648


commit 95374147a28208d4850f6e73f714bf418935fc2d
Author: ZoneMayor 
Date:   2015-11-27T03:49:34Z

Merge pull request #1 from apache/trunk

merge

commit cec5b48b651a7efd3900cfa3c1fd0ab1eeeaa3ec
Author: ZoneMayor 
Date:   2015-12-01T10:44:02Z

Merge pull request #2 from apache/trunk

2015-12-1

commit a119d547bf1741625ce0627073c7909992a20f15
Author: ZoneMayor 
Date:   2015-12-04T13:42:27Z

Merge pull request #3 from apache/trunk

2015-12-04#KAFKA-2893

commit b767a8dff85fc71c75d4cf5178c3f6f03ff81bfc
Author: ZoneMayor 
Date:   2015-12-09T10:42:30Z

Merge pull request #5 from apache/trunk

2015-12-9

commit cd5e6f4700a4387f9383b84aca0ee9c4639b1033
Author: jinxing 
Date:   2015-12-09T13:49:07Z

KAFKA-2837: fix transient failure kafka.api.ProducerBounceTest > 
testBrokerFailure

commit 8ded9104a04861f789a7a990c2ddd4fc38a899cd
Author: ZoneMayor 
Date:   2015-12-10T04:47:06Z

Merge pull request #6 from apache/trunk

2015-12-10

commit 2bcf010c73923bb24bbd9cece7e39983b2bdce0c
Author: jinxing 
Date:   2015-12-10T04:47:39Z

KAFKA-2837: WIP

commit dae4a3cc0b564bb25121d54e65b5ad363c3e866d
Author: jinxing 
Date:   2015-12-10T04:48:21Z

Merge branch 'trunk-KAFKA-2837' of https://github.com/ZoneMayor/kafka into 
trunk-KAFKA-2837

commit 7118e11813e445bca3eab65a23028e76138b136a
Author: jinxing 
Date:   2015-12-10T04:51:43Z

KAFKA-2837: WIP




> FAILING TEST: kafka.api.ProducerBounceTest > testBrokerFailure 
> ---
>
> Key: KAFKA-2837
> URL: https://issues.apache.org/jira/browse/KAFKA-2837
> Project: Kafka
>  Issue Type: Sub-task
>Affects Versions: 0.9.0.0
>Reporter: Gwen Shapira
>Assignee: jin xing
>  Labels: newbie
> Fix For: 0.9.0.1, 0.9.1.0
>
>
> {code}
> java.lang.AssertionError
>   at org.junit.Assert.fail(Assert.java:86)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.junit.Assert.assertTrue(Assert.java:52)
>   at 
> kafka.api.ProducerBounceTest.testBrokerFailure(ProducerBounceTest.scala:117)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> 

[jira] [Commented] (KAFKA-2927) System tests: reduce storage footprint of collected logs

2015-12-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2927?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15051987#comment-15051987
 ] 

ASF GitHub Bot commented on KAFKA-2927:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/657


> System tests: reduce storage footprint of collected logs
> 
>
> Key: KAFKA-2927
> URL: https://issues.apache.org/jira/browse/KAFKA-2927
> Project: Kafka
>  Issue Type: Bug
>Reporter: Geoff Anderson
>Assignee: Geoff Anderson
> Fix For: 0.9.1.0
>
>
> Looking at recent nightly test runs (testing.confluent.io/kafka), the storage 
> requirements for log output from the various services have increased 
> significantly, up to 7-10G for a single test run, up from hundreds of MB.
> Current breakdown:
> 23M   Benchmark
> 3.2M  ClientCompatibilityTest
> 613M  ConnectDistributedTest
> 1.1M  ConnectRestApiTest
> 1.5M  ConnectStandaloneFileTest
> 2.0M  ConsoleConsumerTest
> 440K  KafkaVersionTest
> 744K  Log4jAppenderTest
> 49M   QuotaTest
> 3.0G  ReplicationTest
> 1.2G  TestMirrorMakerService
> 185M  TestUpgrade
> 372K  TestVerifiableProducer
> 2.3G  VerifiableConsumerTest
> The biggest contributors in these test suites:
> ReplicationTest:
> verifiable_producer.log (currently TRACE level)
> VerifiableConsumerTest:
> kafka server.log
> TestMirrorMakerService:
> verifiable_producer.log
> ConnectDistributedTest:
> kafka server.log
> The worst offenders are therefore verifiable_producer.log, which logs at 
> TRACE level, and kafka server.log, which logs at DEBUG level.
> One solution is to:
> 1) Update the log4j configs to log separately to both an INFO-level file and 
> another file for DEBUG, at least for the worst offenders.
> 2) Don't collect these DEBUG (and below) logs by default; only mark them for 
> collection on failure.
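The two-file approach in (1) can be sketched as a log4j 1.x configuration; the appender and file names here are illustrative, not the actual test-harness config:

```properties
# INFO and above goes to a small file that is always collected
log4j.rootLogger=DEBUG, infoFile, debugFile

log4j.appender.infoFile=org.apache.log4j.FileAppender
log4j.appender.infoFile.File=logs/server-info.log
log4j.appender.infoFile.Threshold=INFO
log4j.appender.infoFile.layout=org.apache.log4j.PatternLayout
log4j.appender.infoFile.layout.ConversionPattern=[%d] %p %m (%c)%n

# DEBUG and below goes to a separate file, collected only on test failure
log4j.appender.debugFile=org.apache.log4j.FileAppender
log4j.appender.debugFile.File=logs/server-debug.log
log4j.appender.debugFile.Threshold=DEBUG
log4j.appender.debugFile.layout=org.apache.log4j.PatternLayout
log4j.appender.debugFile.layout.ConversionPattern=[%d] %p %m (%c)%n
```

The per-appender `Threshold` is what splits one logger hierarchy into an always-collected INFO file and an on-failure DEBUG file.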



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2772) Stabilize replication hard bounce test

2015-12-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2772?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15049261#comment-15049261
 ] 

ASF GitHub Bot commented on KAFKA-2772:
---

Github user granders closed the pull request at:

https://github.com/apache/kafka/pull/481


> Stabilize replication hard bounce test
> --
>
> Key: KAFKA-2772
> URL: https://issues.apache.org/jira/browse/KAFKA-2772
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.0
>Reporter: Geoff Anderson
>Assignee: Geoff Anderson
>Priority: Minor
>
> There have been several spurious failures of replication tests during runs of 
> kafka system tests (see for example 
> http://testing.confluent.io/kafka/2015-11-07--001/)
> {code:title=report.txt}
> Expected producer to still be producing.
> Traceback (most recent call last):
>   File 
> "/var/lib/jenkins/workspace/kafka_system_tests/kafka/venv/local/lib/python2.7/site-packages/ducktape-0.3.8-py2.7.egg/ducktape/tests/runner.py",
>  line 101, in run_all_tests
> result.data = self.run_single_test()
>   File 
> "/var/lib/jenkins/workspace/kafka_system_tests/kafka/venv/local/lib/python2.7/site-packages/ducktape-0.3.8-py2.7.egg/ducktape/tests/runner.py",
>  line 151, in run_single_test
> return self.current_test_context.function(self.current_test)
>   File 
> "/var/lib/jenkins/workspace/kafka_system_tests/kafka/venv/local/lib/python2.7/site-packages/ducktape-0.3.8-py2.7.egg/ducktape/mark/_mark.py",
>  line 331, in wrapper
> return functools.partial(f, *args, **kwargs)(*w_args, **w_kwargs)
>   File 
> "/var/lib/jenkins/workspace/kafka_system_tests/kafka/tests/kafkatest/tests/replication_test.py",
>  line 132, in test_replication_with_broker_failure
> self.run_produce_consume_validate(core_test_action=lambda: 
> failures[failure_mode](self))
>   File 
> "/var/lib/jenkins/workspace/kafka_system_tests/kafka/tests/kafkatest/tests/produce_consume_validate.py",
>  line 65, in run_produce_consume_validate
> self.stop_producer_and_consumer()
>   File 
> "/var/lib/jenkins/workspace/kafka_system_tests/kafka/tests/kafkatest/tests/produce_consume_validate.py",
>  line 55, in stop_producer_and_consumer
> err_msg="Expected producer to still be producing.")
>   File 
> "/var/lib/jenkins/workspace/kafka_system_tests/kafka/venv/local/lib/python2.7/site-packages/ducktape-0.3.8-py2.7.egg/ducktape/utils/util.py",
>  line 36, in wait_until
> raise TimeoutError(err_msg)
> TimeoutError: Expected producer to still be producing.
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2972) ControlledShutdownResponse always serialises `partitionsRemaining` as empty

2015-12-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15049295#comment-15049295
 ] 

ASF GitHub Bot commented on KAFKA-2972:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/649


> ControlledShutdownResponse always serialises `partitionsRemaining` as empty
> ---
>
> Key: KAFKA-2972
> URL: https://issues.apache.org/jira/browse/KAFKA-2972
> Project: Kafka
>  Issue Type: Bug
>  Components: network
>Affects Versions: 0.9.0.0
>Reporter: Ismael Juma
>Assignee: Ismael Juma
> Fix For: 0.9.0.1
>
>
> This only affects the Java response class, which is not used for 
> serialisation in 0.9.0 but will be in 0.9.1.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2978) Topic partition is not sometimes consumed after rebalancing of consumer group

2015-12-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2978?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15052149#comment-15052149
 ] 

ASF GitHub Bot commented on KAFKA-2978:
---

GitHub user hachikuji opened a pull request:

https://github.com/apache/kafka/pull/666

KAFKA-2978: consumer stops fetching when consumed and fetch positions get 
out of sync



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hachikuji/kafka KAFKA-2978

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/666.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #666


commit 8a441def79cc8fa21da97759068c0caf7b7b425a
Author: Jason Gustafson 
Date:   2015-12-11T04:30:43Z

KAFKA-2978: consumer stops fetching when consumed and fetch positions get 
out of sync




> Topic partition is not sometimes consumed after rebalancing of consumer group
> -
>
> Key: KAFKA-2978
> URL: https://issues.apache.org/jira/browse/KAFKA-2978
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer, core
>Affects Versions: 0.9.0.0
>Reporter: Michal Turek
>Assignee: Jason Gustafson
>Priority: Critical
> Fix For: 0.9.0.1
>
>
> Hi there, we are evaluating Kafka 0.9 to find out whether it is stable enough 
> and ready for production. We wrote a tool that basically verifies that each 
> produced message is also properly consumed. We found the issue described 
> below while stressing Kafka using this tool.
> Adding more and more consumers to a consumer group may result in unsuccessful 
> rebalancing. Data from one or more partitions are then not consumed and are 
> effectively unavailable to the client application (e.g. for 15 minutes). The 
> situation can be resolved externally by touching the consumer group again 
> (adding or removing a consumer), which forces another rebalancing that may or 
> may not be successful.
> Significantly higher CPU utilization was observed in such cases (from about 
> 3% to 17%). The CPU utilization occurs in both the affected consumer and the 
> Kafka broker, according to htop and profiling with jvisualvm.
> Jvisualvm indicates the issue may be related to KAFKA-2936 (see its 
> screenshots in the GitHub repo below), but I'm very unsure. I also don't know 
> whether the issue is in the consumer or the broker, because both are affected 
> and I don't know Kafka internals.
> The issue is not deterministic, but it can be easily reproduced after a few 
> minutes just by starting more and more consumers. More parallelism with 
> multiple CPUs probably gives the issue more chances to appear.
> The tool itself, together with very detailed instructions for quite reliable 
> reproduction of the issue and an initial analysis, is available here:
> - https://github.com/avast/kafka-tests
> - https://github.com/avast/kafka-tests/tree/issue1/issues/1_rebalancing
> - Prefer the fixed tag {{issue1}} to the branch {{master}}, which may change.
> - Note there are also various screenshots of jvisualvm together with full 
> logs from all components of the tool.
> My colleague was able to independently reproduce the issue by following the 
> instructions above. If you have any questions or if you need any help with 
> the tool, just let us know. This issue is a blocker for us.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2837) FAILING TEST: kafka.api.ProducerBounceTest > testBrokerFailure

2015-12-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15051021#comment-15051021
 ] 

ASF GitHub Bot commented on KAFKA-2837:
---

GitHub user ZoneMayor reopened a pull request:

https://github.com/apache/kafka/pull/648

KAFKA-2837: fix transient failure of kafka.api.ProducerBounceTest > 
testBrokerFailure

I can reproduce this transient failure; it seldom happens.
The relevant test code is:

    // rolling bounce brokers
    for (i <- 0 until numServers) {
      for (server <- servers) {
        server.shutdown()
        server.awaitShutdown()
        server.startup()
        Thread.sleep(2000)
      }

      // Make sure the producer does not see any exception
      // in returned metadata due to broker failures
      assertTrue(scheduler.failed == false)

      // Make sure the leader still exists after bouncing brokers
      (0 until numPartitions).foreach(partition =>
        TestUtils.waitUntilLeaderIsElectedOrChanged(zkUtils, topic1, partition))
    }

The brokers go through a rolling restart while the producer keeps sending 
messages. In every loop iteration, the test waits for the election of the 
partition leader; but if the election is slow, more messages pile up in the 
RecordAccumulator's BufferPool. The buffer limit is set to 3, so a 
TimeoutException("Failed to allocate memory within the configured max 
blocking time") shows up when the pool runs out of memory.
Since every broker restart is followed by a 2000 ms sleep, this transient 
failure seldom happens; but the shorter I make the sleep, the more likely the 
failure becomes. For example, if the restarted broker held the controller 
role, a new controller must be elected before the partition leader can be 
elected, which leaves even more messages blocked in the KafkaProducer's 
RecordAccumulator BufferPool.
In this fix, I simply enlarge the producer's buffer size to 1 MB.
@guozhangwang, could you give some comments?
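The failure mode described above — the producer blocking on BufferPool allocation until the configured max block time elapses — can be sketched with a bounded pool. `SimpleBufferPool` is a hypothetical stand-in for illustration, not Kafka's actual `BufferPool` implementation:

```java
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;

// Hypothetical sketch of a bounded buffer pool: allocation blocks until
// enough memory is free or the max block time elapses, mirroring the
// producer's "Failed to allocate memory within the configured max blocking
// time" TimeoutException.
class SimpleBufferPool {
    private final Semaphore freeBytes;

    SimpleBufferPool(int totalBytes) {
        this.freeBytes = new Semaphore(totalBytes);
    }

    byte[] allocate(int size, long maxBlockMs) {
        try {
            if (!freeBytes.tryAcquire(size, maxBlockMs, TimeUnit.MILLISECONDS)) {
                throw new RuntimeException(
                    "Failed to allocate memory within the configured max blocking time");
            }
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
        return new byte[size];
    }

    // Called when a batch is sent (i.e. once a leader is available again).
    void deallocate(int size) {
        freeBytes.release(size);
    }

    public static void main(String[] args) {
        SimpleBufferPool pool = new SimpleBufferPool(64);
        pool.allocate(64, 10); // fills the pool while leader election drags on
        try {
            pool.allocate(1, 50); // nothing is freed, so this times out
        } catch (RuntimeException e) {
            System.out.println("timeout: " + e.getMessage());
        }
    }
}
```

Enlarging the pool (as the fix does by raising the producer's buffer size to 1 MB) widens the window the producer can absorb while leader election is still in progress.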

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ZoneMayor/kafka trunk-KAFKA-2837

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/648.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #648


commit 95374147a28208d4850f6e73f714bf418935fc2d
Author: ZoneMayor 
Date:   2015-11-27T03:49:34Z

Merge pull request #1 from apache/trunk

merge

commit cec5b48b651a7efd3900cfa3c1fd0ab1eeeaa3ec
Author: ZoneMayor 
Date:   2015-12-01T10:44:02Z

Merge pull request #2 from apache/trunk

2015-12-1

commit a119d547bf1741625ce0627073c7909992a20f15
Author: ZoneMayor 
Date:   2015-12-04T13:42:27Z

Merge pull request #3 from apache/trunk

2015-12-04#KAFKA-2893

commit b767a8dff85fc71c75d4cf5178c3f6f03ff81bfc
Author: ZoneMayor 
Date:   2015-12-09T10:42:30Z

Merge pull request #5 from apache/trunk

2015-12-9

commit cd5e6f4700a4387f9383b84aca0ee9c4639b1033
Author: jinxing 
Date:   2015-12-09T13:49:07Z

KAFKA-2837: fix transient failure kafka.api.ProducerBounceTest > 
testBrokerFailure

commit 8ded9104a04861f789a7a990c2ddd4fc38a899cd
Author: ZoneMayor 
Date:   2015-12-10T04:47:06Z

Merge pull request #6 from apache/trunk

2015-12-10

commit 2bcf010c73923bb24bbd9cece7e39983b2bdce0c
Author: jinxing 
Date:   2015-12-10T04:47:39Z

KAFKA-2837: WIP

commit dae4a3cc0b564bb25121d54e65b5ad363c3e866d
Author: jinxing 
Date:   2015-12-10T04:48:21Z

Merge branch 'trunk-KAFKA-2837' of https://github.com/ZoneMayor/kafka into 
trunk-KAFKA-2837

commit 7118e11813e445bca3eab65a23028e76138b136a
Author: jinxing 
Date:   2015-12-10T04:51:43Z

KAFKA-2837: WIP




> FAILING TEST: kafka.api.ProducerBounceTest > testBrokerFailure 
> ---
>
> Key: KAFKA-2837
> URL: https://issues.apache.org/jira/browse/KAFKA-2837
> Project: Kafka
>  Issue Type: Sub-task
>Affects Versions: 0.9.0.0
>Reporter: Gwen Shapira
>Assignee: jin xing
>  Labels: newbie
> Fix For: 0.9.0.1, 0.9.1.0
>
>
> {code}
> java.lang.AssertionError
>   at org.junit.Assert.fail(Assert.java:86)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.junit.Assert.assertTrue(Assert.java:52)
>   at 
> kafka.api.ProducerBounceTest.testBrokerFailure(ProducerBounceTest.scala:117)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> 

[jira] [Commented] (KAFKA-2837) FAILING TEST: kafka.api.ProducerBounceTest > testBrokerFailure

2015-12-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15051020#comment-15051020
 ] 

ASF GitHub Bot commented on KAFKA-2837:
---

Github user ZoneMayor closed the pull request at:

https://github.com/apache/kafka/pull/648


> FAILING TEST: kafka.api.ProducerBounceTest > testBrokerFailure 
> ---
>
> Key: KAFKA-2837
> URL: https://issues.apache.org/jira/browse/KAFKA-2837
> Project: Kafka
>  Issue Type: Sub-task
>Affects Versions: 0.9.0.0
>Reporter: Gwen Shapira
>Assignee: jin xing
>  Labels: newbie
> Fix For: 0.9.0.1, 0.9.1.0
>
>
> {code}
> java.lang.AssertionError
>   at org.junit.Assert.fail(Assert.java:86)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.junit.Assert.assertTrue(Assert.java:52)
>   at 
> kafka.api.ProducerBounceTest.testBrokerFailure(ProducerBounceTest.scala:117)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:105)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.execute(JUnitTestClassExecuter.java:56)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassProcessor.processTestClass(JUnitTestClassProcessor.java:64)
>   at 
> org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:50)
>   at sun.reflect.GeneratedMethodAccessor12.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> org.gradle.messaging.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32)
>   at 
> org.gradle.messaging.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
>   at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
>   at 
> org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:106)
>   at sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> org.gradle.messaging.remote.internal.hub.MessageHub$Handler.run(MessageHub.java:360)
>   at 
> org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:54)
>   at 
> org.gradle.internal.concurrent.StoppableExecutorImpl$1.run(StoppableExecutorImpl.java:40)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)

[jira] [Commented] (KAFKA-2984) KTable should send old values along with new values to downstreams

2015-12-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15056429#comment-15056429
 ] 

ASF GitHub Bot commented on KAFKA-2984:
---

GitHub user ymatsuda opened a pull request:

https://github.com/apache/kafka/pull/672

KAFKA-2984: KTable sends old values when required

@guozhangwang 

At the DAG level, `KTable` sends (key, (new value, old value)) downstream. 
This is done by wrapping the new value and the old value in an instance of 
the `Change` class and sending that as the "value" part of the stream. The 
old value is omitted (set to null) by default as an optimization. When a 
downstream processor needs the old value, the framework enables it (see 
`KTableImpl.enableSendingOldValues()` and implementations of 
`KTableProcessorSupplier.enableSendingOldValues()`).

NOTE: This is meant to be used by aggregations, but if there is a use case 
such as a SQL database trigger, we can add a new KTable method to expose it.
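
The (new value, old value) pairing can be sketched as a minimal `Change` wrapper; the field and method names here are illustrative, and the actual Kafka Streams class may differ:

```java
// Minimal sketch of wrapping new and old values as the "value" part of a
// KTable stream. oldValue stays null unless a downstream processor has
// enabled sending old values.
class Change<V> {
    final V newValue;
    final V oldValue; // null when sending old values is not enabled

    Change(V newValue, V oldValue) {
        this.newValue = newValue;
        this.oldValue = oldValue;
    }

    // Example of why aggregations need the old value: an incremental count
    // retracts the old record before adding the new one.
    static <V> int applyToCount(int count, Change<V> change) {
        if (change.oldValue != null) count--; // retract old
        if (change.newValue != null) count++; // add new
        return count;
    }

    @Override
    public String toString() {
        return "(" + newValue + " <- " + oldValue + ")";
    }
}
```

With only new values, an update would be double-counted; the retraction step is what keeps the aggregate consistent.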

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ymatsuda/kafka trigger

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/672.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #672


commit 41bbef2c1ba6e170403f7b48fcb618bbe49d9b6c
Author: Yasuhiro Matsuda 
Date:   2015-12-11T22:41:55Z

KAFKA-2984: KTable should send old values along with new values to 
downstreams

commit 91a9fad6e729bf631d82a88db5bb6ec483ae2062
Author: Yasuhiro Matsuda 
Date:   2015-12-14T18:08:15Z

Merge branch 'trunk' of github.com:apache/kafka into trigger

commit 7a1b689c594d2e859454c28a2367df793178d3b9
Author: Yasuhiro Matsuda 
Date:   2015-12-14T18:08:50Z

method names




> KTable should send old values along with new values to downstreams
> --
>
> Key: KAFKA-2984
> URL: https://issues.apache.org/jira/browse/KAFKA-2984
> Project: Kafka
>  Issue Type: Sub-task
>  Components: kafka streams
>Affects Versions: 0.9.0.1
>Reporter: Yasuhiro Matsuda
>Assignee: Yasuhiro Matsuda
>
> Old values are necessary for implementing aggregate functions. KTable should 
> augment an event with its old value. Basically, a KTable stream is internally 
> a stream of (key, (new value, old value)). The old value may be omitted when 
> it is not used in the topology.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2978) Topic partition is not sometimes consumed after rebalancing of consumer group

2015-12-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2978?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15056886#comment-15056886
 ] 

ASF GitHub Bot commented on KAFKA-2978:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/666


> Topic partition is not sometimes consumed after rebalancing of consumer group
> -
>
> Key: KAFKA-2978
> URL: https://issues.apache.org/jira/browse/KAFKA-2978
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer, core
>Affects Versions: 0.9.0.0
>Reporter: Michal Turek
>Assignee: Jason Gustafson
>Priority: Critical
> Fix For: 0.9.0.1
>
>
> Hi there, we are evaluating Kafka 0.9 to find out whether it is stable enough 
> and ready for production. We wrote a tool that basically verifies that each 
> produced message is also properly consumed. We found the issue described 
> below while stressing Kafka using this tool.
> Adding more and more consumers to a consumer group may result in unsuccessful 
> rebalancing. Data from one or more partitions are then not consumed and are 
> effectively unavailable to the client application (e.g. for 15 minutes). The 
> situation can be resolved externally by touching the consumer group again 
> (adding or removing a consumer), which forces another rebalancing that may or 
> may not be successful.
> Significantly higher CPU utilization was observed in such cases (from about 
> 3% to 17%). The CPU utilization occurs in both the affected consumer and the 
> Kafka broker, according to htop and profiling with jvisualvm.
> Jvisualvm indicates the issue may be related to KAFKA-2936 (see its 
> screenshots in the GitHub repo below), but I'm very unsure. I also don't know 
> whether the issue is in the consumer or the broker, because both are affected 
> and I don't know Kafka internals.
> The issue is not deterministic, but it can be easily reproduced after a few 
> minutes just by starting more and more consumers. More parallelism with 
> multiple CPUs probably gives the issue more chances to appear.
> The tool itself, together with very detailed instructions for quite reliable 
> reproduction of the issue and an initial analysis, is available here:
> - https://github.com/avast/kafka-tests
> - https://github.com/avast/kafka-tests/tree/issue1/issues/1_rebalancing
> - Prefer the fixed tag {{issue1}} to the branch {{master}}, which may change.
> - Note there are also various screenshots of jvisualvm together with full 
> logs from all components of the tool.
> My colleague was able to independently reproduce the issue by following the 
> instructions above. If you have any questions or if you need any help with 
> the tool, just let us know. This issue is a blocker for us.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2837) FAILING TEST: kafka.api.ProducerBounceTest > testBrokerFailure

2015-12-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15056936#comment-15056936
 ] 

ASF GitHub Bot commented on KAFKA-2837:
---

GitHub user guozhangwang opened a pull request:

https://github.com/apache/kafka/pull/674

KAFKA-2837 Follow-up: Default max block to 60 seconds



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/guozhangwang/kafka K2837

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/674.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #674


commit 232036383c805bfc732cfb62b50eec0eabf583ef
Author: Guozhang Wang 
Date:   2015-12-14T23:09:57Z

default to 60 seconds, set to 10 seconds in ProducerFailureHandlingTest




> FAILING TEST: kafka.api.ProducerBounceTest > testBrokerFailure 
> ---
>
> Key: KAFKA-2837
> URL: https://issues.apache.org/jira/browse/KAFKA-2837
> Project: Kafka
>  Issue Type: Sub-task
>Affects Versions: 0.9.0.0
>Reporter: Gwen Shapira
>Assignee: jin xing
>  Labels: newbie
> Fix For: 0.9.1.0
>
>
> {code}
> java.lang.AssertionError
>   at org.junit.Assert.fail(Assert.java:86)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.junit.Assert.assertTrue(Assert.java:52)
>   at 
> kafka.api.ProducerBounceTest.testBrokerFailure(ProducerBounceTest.scala:117)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:105)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.execute(JUnitTestClassExecuter.java:56)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassProcessor.processTestClass(JUnitTestClassProcessor.java:64)
>   at 
> org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:50)
>   at sun.reflect.GeneratedMethodAccessor12.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> org.gradle.messaging.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32)
>   at 
> org.gradle.messaging.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
>   at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
>   at 
> org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:106)
>   at sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> 

[jira] [Commented] (KAFKA-2981) Fix javadoc in KafkaConsumer

2015-12-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15053418#comment-15053418
 ] 

ASF GitHub Bot commented on KAFKA-2981:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/668


> Fix javadoc in KafkaConsumer
> 
>
> Key: KAFKA-2981
> URL: https://issues.apache.org/jira/browse/KAFKA-2981
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.9.0.0
>Reporter: Xin Wang
>Priority: Minor
> Fix For: 0.9.0.1
>
>
> Erroneous javadoc:
> {code}consumer.subscribe("topic");{code}
> Fix:
> {code}consumer.subscribe(Arrays.asList("topic"));{code}
> Since the KafkaConsumer.subscribe() method takes a List as its input type, 
> passing the string "topic" results in an error.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2070) Replace OffsetRequest/response with ListOffsetRequest/response from org.apache.kafka.common.requests

2015-12-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15053187#comment-15053187
 ] 

ASF GitHub Bot commented on KAFKA-2070:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/663


> Replace OffsetRequest/response with ListOffsetRequest/response from 
> org.apache.kafka.common.requests
> 
>
> Key: KAFKA-2070
> URL: https://issues.apache.org/jira/browse/KAFKA-2070
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Gwen Shapira
>Assignee: Grant Henke
> Fix For: 0.9.1.0
>
>
> Replace OffsetRequest/response with ListOffsetRequest/response from 
> org.apache.kafka.common.requests



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2973) Fix leak of child sensors on remove

2015-12-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15048996#comment-15048996
 ] 

ASF GitHub Bot commented on KAFKA-2973:
---

GitHub user ijuma opened a pull request:

https://github.com/apache/kafka/pull/650

KAFKA-2973; Fix issue where `childrenSensors` is incorrectly updated



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ijuma/kafka 
kafka-2973-fix-leak-child-sensors-on-remove

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/650.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #650


commit ef6a543edd4c14e44b8dd660b936a7efa8aeaee0
Author: Ismael Juma 
Date:   2015-12-09T16:39:49Z

Fix issue where `childrenSensors` was incorrectly updated




> Fix leak of child sensors on remove
> ---
>
> Key: KAFKA-2973
> URL: https://issues.apache.org/jira/browse/KAFKA-2973
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.9.0.0
>Reporter: Ismael Juma
>Assignee: Ismael Juma
> Fix For: 0.9.0.1
>
>
> We added the ability to remove sensors from Kafka Metrics in 0.9.0.0. There 
> is, however, a bug in how we populate the `childrenSensors` map, causing us to 
> leak some child sensors (all but the last one added).
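The leak described above can be illustrated with a minimal sketch. This is not Kafka's actual `Metrics` code; the map and method names are simplified stand-ins that show the general pattern: overwriting the parent's entry on each registration keeps only the last child, while appending to the existing list tracks them all.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of the leak pattern: a childrenSensors-style map where the buggy
// variant replaces the child list on every registration, so only the last
// child can ever be found (and removed) through its parent.
public class ChildSensorLeakSketch {
    static final Map<String, List<String>> childrenSensors = new HashMap<>();

    // Buggy variant: each put replaces the previously stored child list.
    static void registerBuggy(String parent, String child) {
        childrenSensors.put(parent, new ArrayList<>(Collections.singletonList(child)));
    }

    // Fixed variant: append to the parent's existing child list.
    static void registerFixed(String parent, String child) {
        childrenSensors.computeIfAbsent(parent, k -> new ArrayList<>()).add(child);
    }

    public static void main(String[] args) {
        registerBuggy("buggy-parent", "child-1");
        registerBuggy("buggy-parent", "child-2");
        // Only the last child is tracked; child-1 is leaked.
        System.out.println(childrenSensors.get("buggy-parent").size()); // prints 1

        registerFixed("fixed-parent", "child-1");
        registerFixed("fixed-parent", "child-2");
        // Both children are tracked and can be removed with the parent.
        System.out.println(childrenSensors.get("fixed-parent").size()); // prints 2
    }
}
```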





[jira] [Commented] (KAFKA-2973) Fix leak of child sensors on remove

2015-12-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15049066#comment-15049066
 ] 

ASF GitHub Bot commented on KAFKA-2973:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/650


> Fix leak of child sensors on remove
> ---
>
> Key: KAFKA-2973
> URL: https://issues.apache.org/jira/browse/KAFKA-2973
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.9.0.0
>Reporter: Ismael Juma
>Assignee: Ismael Juma
> Fix For: 0.9.0.1, 0.9.1.0
>
>
> We added the ability to remove sensors from Kafka Metrics in 0.9.0.0. There 
> is, however, a bug in how we populate the `childrenSensors` map, causing us to 
> leak some child sensors (all but the last one added).





[jira] [Commented] (KAFKA-2578) Client Metadata internal state should be synchronized

2015-12-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15051203#comment-15051203
 ] 

ASF GitHub Bot commented on KAFKA-2578:
---

GitHub user ijuma opened a pull request:

https://github.com/apache/kafka/pull/659

KAFKA-2578; Client Metadata internal state should be synchronized



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ijuma/kafka KAFKA-2578

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/659.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #659


commit 938a19a7a09238d7f6089a7a75d613fc33dd7ce3
Author: Edward Ribeiro 
Date:   2015-09-29T21:23:19Z

KAFKA-2578; Client Metadata internal state should be synchronized




> Client Metadata internal state should be synchronized
> -
>
> Key: KAFKA-2578
> URL: https://issues.apache.org/jira/browse/KAFKA-2578
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jason Gustafson
>Assignee: Edward Ribeiro
>Priority: Trivial
>
> Some recent patches introduced a couple of new fields in 
> o.a.k.clients.Metadata: 'listeners' and 'needMetadataForAllTopics'. Accessor 
> methods for these fields should be synchronized like the rest of the internal 
> Metadata state.
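A minimal sketch of the fix being described (this is not the real `o.a.k.clients.Metadata` class; the field names are borrowed from the issue, and the listener type is an illustrative stand-in): the newly added fields get `synchronized` accessors so they are guarded by the same monitor as the rest of the metadata state.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: all reads and writes of the new fields go through synchronized
// methods, so they are protected by the object's monitor like the existing
// internal Metadata state.
public class MetadataSketch {
    private final List<Runnable> listeners = new ArrayList<>();
    private boolean needMetadataForAllTopics = false;

    public synchronized void addListener(Runnable listener) {
        listeners.add(listener);
    }

    public synchronized int listenerCount() {
        return listeners.size();
    }

    public synchronized boolean needMetadataForAllTopics() {
        return needMetadataForAllTopics;
    }

    public synchronized void needMetadataForAllTopics(boolean needed) {
        this.needMetadataForAllTopics = needed;
    }

    public static void main(String[] args) {
        MetadataSketch metadata = new MetadataSketch();
        metadata.addListener(() -> { });
        metadata.needMetadataForAllTopics(true);
        System.out.println(metadata.listenerCount());            // prints 1
        System.out.println(metadata.needMetadataForAllTopics()); // prints true
    }
}
```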





[jira] [Commented] (KAFKA-2949) Make EndToEndAuthorizationTest replicated

2016-01-03 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2949?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15080573#comment-15080573
 ] 

ASF GitHub Bot commented on KAFKA-2949:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/631


> Make EndToEndAuthorizationTest replicated
> -
>
> Key: KAFKA-2949
> URL: https://issues.apache.org/jira/browse/KAFKA-2949
> Project: Kafka
>  Issue Type: Test
>Reporter: Flavio Junqueira
>Assignee: Flavio Junqueira
> Fix For: 0.9.1.0
>
>
> The call to create a topic in the setup method sets the replication factor 
> to 1; we should make it 3.





[jira] [Commented] (KAFKA-3051) security.protocol documentation is inaccurate

2016-01-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15081780#comment-15081780
 ] 

ASF GitHub Bot commented on KAFKA-3051:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/724


> security.protocol documentation is inaccurate
> -
>
> Key: KAFKA-3051
> URL: https://issues.apache.org/jira/browse/KAFKA-3051
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients
>Affects Versions: 0.9.0.0
>Reporter: Jun Rao
>Assignee: Ismael Juma
>  Labels: security
> Fix For: 0.9.0.1
>
>
> In CommonClientConfigs, the doc for security.protocol says "Currently only 
> PLAINTEXT and SSL are supported.". This is inaccurate since we support SASL 
> as well.





[jira] [Commented] (KAFKA-2422) Allow copycat connector plugins to be aliased to simpler names

2016-01-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2422?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15081693#comment-15081693
 ] 

ASF GitHub Bot commented on KAFKA-2422:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/687


> Allow copycat connector plugins to be aliased to simpler names
> --
>
> Key: KAFKA-2422
> URL: https://issues.apache.org/jira/browse/KAFKA-2422
> Project: Kafka
>  Issue Type: Sub-task
>  Components: copycat
>Affects Versions: 0.9.0.0
>Reporter: Ewen Cheslack-Postava
>Assignee: Gwen Shapira
>Priority: Minor
> Fix For: 0.9.1.0
>
>
> Configurations of connectors can get quite verbose when you have to specify 
> the full class name, e.g. 
> connector.class=org.apache.kafka.copycat.file.FileStreamSinkConnector
> It would be nice to allow connector classes to provide shorter aliases, e.g. 
> something like "file-sink", to make this config less verbose. Flume does 
> this, so we can use it as an example.
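One simple way to realize the aliasing idea is a lookup table that maps short aliases to fully qualified class names, falling back to treating the configured value as a class name. This is only an illustrative sketch, not the design eventually adopted in Connect; the alias names and the resolution strategy here are assumptions.

```java
import java.util.Map;

// Sketch: resolve a configured connector.class value, which may be either a
// short alias or a fully qualified class name.
public class ConnectorAliasSketch {
    private static final Map<String, String> ALIASES = Map.of(
        "file-sink", "org.apache.kafka.copycat.file.FileStreamSinkConnector",
        "file-source", "org.apache.kafka.copycat.file.FileStreamSourceConnector"
    );

    static String resolveConnectorClass(String configured) {
        // Known aliases expand to the full class name; anything else is
        // assumed to already be a class name.
        return ALIASES.getOrDefault(configured, configured);
    }

    public static void main(String[] args) {
        System.out.println(resolveConnectorClass("file-sink"));
        // prints org.apache.kafka.copycat.file.FileStreamSinkConnector
        System.out.println(resolveConnectorClass("com.example.MySinkConnector"));
        // prints com.example.MySinkConnector
    }
}
```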





[jira] [Commented] (KAFKA-3051) security.protocol documentation is inaccurate

2016-01-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15081028#comment-15081028
 ] 

ASF GitHub Bot commented on KAFKA-3051:
---

GitHub user ijuma opened a pull request:

https://github.com/apache/kafka/pull/724

KAFKA-3051 KAFKA-3048; Security config docs improvements



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ijuma/kafka minor-security-fixes

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/724.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #724


commit 9156fa3fb2dd90c3cf871d23a49b6ead149e29ad
Author: Ismael Juma 
Date:   2016-01-04T10:59:28Z

KAFKA-3051: security.protocol documentation is inaccurate

commit a9bae91039c63a32ec1c84c9a556a051a68d9373
Author: Ismael Juma 
Date:   2016-01-04T11:00:05Z

KAFKA-3048: incorrect property name ssl.want.client.auth

commit 6b3f87ee82266363584498df4a6fa99dc63eb47d
Author: Ismael Juma 
Date:   2016-01-04T11:17:44Z

Remove redundant default information in security-related config docs

I removed the cases where the value was the same as the one
that appears in the HTML table and left the cases where it adds
useful information.

commit 670363ec7846716c58d77ef266bea6a4e146084f
Author: Ismael Juma 
Date:   2016-01-04T11:18:41Z

Use `{` and `}` instead of `<` and `>` as placeholder to fix HTML rendering

commit b82ae369bd90fb63445efb6a022b5d4e5dbd93d6
Author: Ismael Juma 
Date:   2016-01-04T11:19:58Z

Fix `KafkaChannel.principal()` documentation

Also include minor `KafkaChannel` clean-ups.

commit 0fa85e5ab4acc7322fe07f79b9d2218fe2d08d2a
Author: Ismael Juma 
Date:   2016-01-04T11:20:36Z

Fix `SslTransportLayer` reference in `SslSelectorTest` comment




> security.protocol documentation is inaccurate
> -
>
> Key: KAFKA-3051
> URL: https://issues.apache.org/jira/browse/KAFKA-3051
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients
>Affects Versions: 0.9.0.0
>Reporter: Jun Rao
>Assignee: Ismael Juma
>  Labels: security
> Fix For: 0.9.0.1
>
>
> In CommonClientConfigs, the doc for security.protocol says "Currently only 
> PLAINTEXT and SSL are supported.". This is inaccurate since we support SASL 
> as well.





[jira] [Commented] (KAFKA-2937) Topics marked for delete in Zookeeper may become undeletable

2016-01-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15081958#comment-15081958
 ] 

ASF GitHub Bot commented on KAFKA-2937:
---

GitHub user MayureshGharat opened a pull request:

https://github.com/apache/kafka/pull/729

KAFKA-2937 : Disable the leaderIsr check if the topic is to be deleted.

The check was implemented in KAFKA-340: If we are shutting down a broker 
when the ISR of a partition includes only that broker, we could lose some 
messages that have been previously committed. For clean shutdown, we need to 
guarantee that there is at least 1 other broker in ISR after the broker is shut 
down.

When we are deleting the topic, this check can be avoided.
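The decision logic in the PR description can be sketched as a single predicate. This is not Kafka's controller code; the method and parameter names are illustrative. It captures the two rules above: topics queued for deletion skip the clean-shutdown guarantee, and otherwise the broker may only go offline if it is not the last ISR member.

```java
import java.util.Set;

// Sketch of the controlled-shutdown check: a replica may be taken offline if
// its topic is marked for deletion, or if at least one other broker remains
// in the partition's ISR afterwards.
public class ControlledShutdownCheckSketch {
    static boolean canTakeReplicaOffline(Set<Integer> isr, int brokerId,
                                         Set<String> topicsToBeDeleted, String topic) {
        // Topics marked for delete skip the "at least one other ISR member" rule.
        if (topicsToBeDeleted.contains(topic)) {
            return true;
        }
        // Otherwise require another ISR member so committed messages survive.
        return !(isr.size() == 1 && isr.contains(brokerId));
    }

    public static void main(String[] args) {
        // Broker 3 is the only ISR member, but the topic is being deleted.
        System.out.println(canTakeReplicaOffline(
            Set.of(3), 3, Set.of("testtopic"), "testtopic")); // prints true
        // Broker 3 is the only ISR member and the topic is live.
        System.out.println(canTakeReplicaOffline(
            Set.of(3), 3, Set.of(), "testtopic"));            // prints false
    }
}
```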


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/MayureshGharat/kafka kafka-2937

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/729.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #729


commit 9d5afd0f29f2f4311e534eb375e1c9ddb23b33dd
Author: Mayuresh Gharat 
Date:   2016-01-04T22:56:01Z

Disable the leaderIsr check if the topic is to be deleted.




> Topics marked for delete in Zookeeper may become undeletable
> 
>
> Key: KAFKA-2937
> URL: https://issues.apache.org/jira/browse/KAFKA-2937
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.9.0.0
>Reporter: Rajini Sivaram
>Assignee: Mayuresh Gharat
>
> In our clusters, we occasionally see topics marked for delete, but never 
> actually deleted. It may be due to brokers being restarted while tests were 
> running, but further restarts of Kafka don't fix the problem. The topics 
> remain marked for delete in Zookeeper.
> Topic describe shows:
> {quote}
> Topic:testtopic   PartitionCount:1ReplicationFactor:3 Configs:
>   Topic: testtopicPartition: 0Leader: noneReplicas: 3,4,0 
> Isr: 
> {quote}
> Kafka logs show:
> {quote}
> 2015-12-02 15:53:30,152] ERROR Controller 2 epoch 213 initiated state change 
> of replica 3 for partition [testtopic,0] from OnlineReplica to OfflineReplica 
> failed (state.change.logger)
> kafka.common.StateChangeFailedException: Failed to change state of replica 3 
> for partition [testtopic,0] since the leader and isr path in zookeeper is 
> empty
> at 
> kafka.controller.ReplicaStateMachine.handleStateChange(ReplicaStateMachine.scala:269)
> at 
> kafka.controller.ReplicaStateMachine$$anonfun$handleStateChanges$2.apply(ReplicaStateMachine.scala:114)
> at 
> kafka.controller.ReplicaStateMachine$$anonfun$handleStateChanges$2.apply(ReplicaStateMachine.scala:114)
> at 
> scala.collection.immutable.HashSet$HashSet1.foreach(HashSet.scala:322)
> at 
> scala.collection.immutable.HashSet$HashTrieSet.foreach(HashSet.scala:978)
> at 
> kafka.controller.ReplicaStateMachine.handleStateChanges(ReplicaStateMachine.scala:114)
> at 
> kafka.controller.TopicDeletionManager$$anonfun$startReplicaDeletion$2.apply(TopicDeletionManager.scala:342)
> at 
> kafka.controller.TopicDeletionManager$$anonfun$startReplicaDeletion$2.apply(TopicDeletionManager.scala:334)
> at scala.collection.immutable.Map$Map1.foreach(Map.scala:116)
> at 
> kafka.controller.TopicDeletionManager.startReplicaDeletion(TopicDeletionManager.scala:334)
> at 
> kafka.controller.TopicDeletionManager.kafka$controller$TopicDeletionManager$$onPartitionDeletion(TopicDeletionManager.scala:367)
> at 
> kafka.controller.TopicDeletionManager$$anonfun$kafka$controller$TopicDeletionManager$$onTopicDeletion$2.apply(TopicDeletionManager.scala:313)
> at 
> kafka.controller.TopicDeletionManager$$anonfun$kafka$controller$TopicDeletionManager$$onTopicDeletion$2.apply(TopicDeletionManager.scala:312)
> at scala.collection.immutable.Set$Set1.foreach(Set.scala:79)
> at 
> kafka.controller.TopicDeletionManager.kafka$controller$TopicDeletionManager$$onTopicDeletion(TopicDeletionManager.scala:312)
> at 
> kafka.controller.TopicDeletionManager$DeleteTopicsThread$$anonfun$doWork$1$$anonfun$apply$mcV$sp$4.apply(TopicDeletionManager.scala:431)
> at 
> kafka.controller.TopicDeletionManager$DeleteTopicsThread$$anonfun$doWork$1$$anonfun$apply$mcV$sp$4.apply(TopicDeletionManager.scala:403)
> at scala.collection.immutable.Set$Set2.foreach(Set.scala:111)
> at 
> kafka.controller.TopicDeletionManager$DeleteTopicsThread$$anonfun$doWork$1.apply$mcV$sp(TopicDeletionManager.scala:403)
> at 
> 

[jira] [Commented] (KAFKA-2653) Stateful operations in the KStream DSL layer

2016-01-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2653?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15082183#comment-15082183
 ] 

ASF GitHub Bot commented on KAFKA-2653:
---

GitHub user guozhangwang opened a pull request:

https://github.com/apache/kafka/pull/730

KAFKA-2653: Alternative Kafka Streams Stateful API Design

ping @ymatsuda for reviews.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/guozhangwang/kafka K2653r

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/730.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #730


commit e46d649c2e40078ed161c83fdc1690456f09f43a
Author: Guozhang Wang 
Date:   2015-12-10T04:31:25Z

v1

commit 2167f29ff630577fe63abc93fd8a58aa6c7d3c1c
Author: Guozhang Wang 
Date:   2015-12-10T19:32:34Z

option 1 of windowing opeartions

commit fb92b2b20f7be6f17c006de6e48cb04065808477
Author: Guozhang Wang 
Date:   2015-12-11T05:47:51Z

v1

commit 0862ec2b4ecb151ea1b3395c74787e4de99891fe
Author: Guozhang Wang 
Date:   2015-12-11T22:15:02Z

v1

commit 9558891bdaccc0b8861f882b957b5131556f896c
Author: Guozhang Wang 
Date:   2015-12-15T00:30:20Z

address Yasu's comments

commit e6373cbc4229637100c97bbb440555c2f0719d03
Author: Guozhang Wang 
Date:   2015-12-15T01:50:17Z

add built-in aggregates

commit 66e122adc8911334e924921bc7fa67275445bd71
Author: Guozhang Wang 
Date:   2015-12-15T03:17:59Z

add built-in aggregates in KTable

commit 13c15ada1edbff51e34022484bcde3955cdf99cd
Author: Guozhang Wang 
Date:   2015-12-15T19:28:12Z

Merge branch 'trunk' of https://git-wip-us.apache.org/repos/asf/kafka into 
K2653r

commit 1f360a25022d0286f6ebbf1a6735201ba8fdab53
Author: Guozhang Wang 
Date:   2015-12-15T19:43:53Z

address Yasu's comments

commit 2b027bf8614026cbec05404dffd5e9c2598db6f4
Author: Guozhang Wang 
Date:   2015-12-15T20:58:11Z

add missing files

commit 5214b12fcd66eb4cfa9af4258ca2146c11aa2e89
Author: Guozhang Wang 
Date:   2015-12-15T23:11:27Z

address Yasu's comments

commit a603a9afde8a86906d085b6cf942df67d2082fb9
Author: Guozhang Wang 
Date:   2015-12-15T23:15:29Z

rename aggregateSupplier to aggregatorSupplier

commit e186710bc3b66e88148ab81087276cedffa2bad3
Author: Guozhang Wang 
Date:   2015-12-16T22:20:59Z

modify built-in aggregates

commit 5bb1e8c95e0c1ab131d5212d1a7d793ce8b49414
Author: Guozhang Wang 
Date:   2015-12-16T22:24:10Z

add missing files

commit 4570dd0d98526f8388c13ef5fe4af12d372f73c6
Author: Guozhang Wang 
Date:   2015-12-17T00:01:58Z

further comments addressed

commit d1ce4b85eff2364a34593cefbe0e8db4070d7fb9
Author: Guozhang Wang 
Date:   2016-01-05T00:53:43Z

Merge branch 'trunk' of https://git-wip-us.apache.org/repos/asf/kafka into 
K2653r

commit 13b6b995dff5fef42690852cd5cbb1d3f3b589d4
Author: Guozhang Wang 
Date:   2016-01-05T01:34:41Z

revision v1




> Stateful operations in the KStream DSL layer
> 
>
> Key: KAFKA-2653
> URL: https://issues.apache.org/jira/browse/KAFKA-2653
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Guozhang Wang
>Assignee: Guozhang Wang
>
> This includes the interface design the implementation for stateful operations 
> including:
> 0. table representation in KStream.
> 1. stream-stream join.
> 2. stream-table join.
> 3. table-table join.
> 4. stream / table aggregations.
> With 0 and 3 being tackled in KAFKA-2856 and KAFKA-2962 separately, this 
> ticket is going to only focus on windowing definition and 1 / 2 / 4 above.





[jira] [Commented] (KAFKA-3052) broker properties get logged twice if acl is enabled

2016-01-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15081217#comment-15081217
 ] 

ASF GitHub Bot commented on KAFKA-3052:
---

GitHub user ijuma opened a pull request:

https://github.com/apache/kafka/pull/725

KAFKA-3052; Broker properties get logged twice if acl enabled

Fix it by making it possible to pass the `doLog` parameter to 
`AbstractConfig`. As explained in the code comments, this means that we can 
continue to benefit from ZK default settings as specified in `KafkaConfig` 
without having to duplicate code.

Also:
* Removed unused private methods from `KafkaConfig`
* Removed `case` modifier from `KafkaConfig` so that `hashCode`, `equals`
and `toString` from `AbstractConfig` are used.
* Made `props` a `val` and added `apply` method to `KafkaConfig` to
remain binary compatible.
* Call authorizer.close even if an exception is thrown during `configure`.
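The `doLog` idea can be shown with a minimal sketch (this is not the real `AbstractConfig`; the class shape is simplified and the constructor signature is an assumption): the base config class logs its parsed values only when the caller asks it to, so a second parse of the same properties, such as the one in `SimpleAclAuthorizer.configure()`, does not log everything twice.

```java
import java.util.Map;
import java.util.TreeMap;

// Sketch: a config base class that logs parsed values once, unless the
// caller opts out via doLog, avoiding duplicate logging when the same
// properties are parsed in more than one place.
public class AbstractConfigSketch {
    private final Map<String, ?> values;

    public AbstractConfigSketch(Map<String, ?> props, boolean doLog) {
        // TreeMap gives deterministic ordering for the logged output.
        this.values = new TreeMap<>(props);
        if (doLog) {
            values.forEach((k, v) -> System.out.println(k + " = " + v));
        }
    }

    public Object get(String key) {
        return values.get(key);
    }

    public static void main(String[] args) {
        Map<String, String> props = Map.of("broker.id", "0");
        // Broker startup: log the values once.
        AbstractConfigSketch first = new AbstractConfigSketch(props, true);
        // Authorizer re-parse: same properties, no second round of logging.
        AbstractConfigSketch second = new AbstractConfigSketch(props, false);
        System.out.println(first.get("broker.id").equals(second.get("broker.id")));
    }
}
```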


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ijuma/kafka 
kafka-3052-broker-properties-get-logged-twice-if-acl-enabled

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/725.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #725


commit 7f2b2ec9a9f333bdbb042c68338d5f437c1df5af
Author: Ismael Juma 
Date:   2016-01-04T14:27:20Z

Fix duplicate logging of broker properties when ACL is enabled

Also:
* Removed unused private methods from `KafkaConfig`
* Removed `case` modifier so that `hashCode`, `equals`
and `toString` from `AbstractConfig` are used.
* Made `props` a `val` and added `apply` method to
remain binary compatible.

commit c4f3ab3013a79ca22a3bda954d54b1226c094220
Author: Ismael Juma 
Date:   2016-01-04T14:34:53Z

Call authorizer close even if an exception is thrown during `configure`




> broker properties get logged twice if acl is enabled
> 
>
> Key: KAFKA-3052
> URL: https://issues.apache.org/jira/browse/KAFKA-3052
> Project: Kafka
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 0.9.0.0
>Reporter: Jun Rao
>Assignee: Ismael Juma
>  Labels: newbie, security
> Fix For: 0.9.0.1
>
>
> This is because in SimpleAclAuthorizer.configure(), there is the following 
> statement which triggers the logging of all broker properties.
> val kafkaConfig = KafkaConfig.fromProps(props)





[jira] [Commented] (KAFKA-2944) NullPointerException in KafkaConfigStorage when config storage starts right before shutdown request

2016-01-01 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15076457#comment-15076457
 ] 

ASF GitHub Bot commented on KAFKA-2944:
---

GitHub user ZoneMayor reopened a pull request:

https://github.com/apache/kafka/pull/723

KAFKA-2944: fix NullPointerException in KafkaConfigStorage

Loss of config messages can affect the logic of KafkaConfigStorage;
calling readToEnd after sending each message to KafkaBasedLog ensures that 
all config messages are flushed to Kafka.
Since the config messages sent to KafkaBasedLog are metadata, this should 
not noticeably affect performance.
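The write-then-flush pattern described above can be sketched at a very small scale. This is not the real `KafkaBasedLog` API; the in-memory list stands in for the Kafka topic and the method names are simplified. The point is the ordering: after appending a config record, the log is read back to its end before the write is considered complete.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: each config write is followed by a readToEnd so the record is
// known to be durable and locally visible before the caller proceeds.
public class ConfigLogSketch {
    private final List<String> log = new ArrayList<>();      // stand-in for the Kafka topic
    private final List<String> consumed = new ArrayList<>(); // stand-in for local state

    void send(String record) {
        log.add(record);
    }

    // Replays everything up to the current end of the log, mirroring the
    // role of KafkaBasedLog.readToEnd() at miniature scale.
    void readToEnd() {
        consumed.clear();
        consumed.addAll(log);
    }

    void putTaskConfig(String record) {
        send(record);
        readToEnd(); // ensure the record is flushed/visible before returning
    }

    int consumedCount() {
        return consumed.size();
    }

    public static void main(String[] args) {
        ConfigLogSketch storage = new ConfigLogSketch();
        storage.putTaskConfig("connector-0-task-0");
        storage.putTaskConfig("connector-0-task-1");
        System.out.println(storage.consumedCount()); // prints 2
    }
}
```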

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ZoneMayor/kafka trunk-KAFKA-2944

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/723.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #723


commit 34240b52e1b70aa172b65155f6042243d838b420
Author: ZoneMayor 
Date:   2015-12-18T07:22:20Z

Merge pull request #12 from apache/trunk

2015-12-18

commit 52d02f333e86d06cfa8fff5facd18999b3db6d83
Author: ZoneMayor 
Date:   2015-12-30T03:08:08Z

Merge pull request #13 from apache/trunk

2015-12-30

commit 82150dccc59ac0e436acc5186d2b8fb66c9df671
Author: jinxing 
Date:   2016-01-01T15:21:41Z

KAFKA-2944: fix NullPointerException in KafkaConfigStorage




> NullPointerException in KafkaConfigStorage when config storage starts right 
> before shutdown request
> ---
>
> Key: KAFKA-2944
> URL: https://issues.apache.org/jira/browse/KAFKA-2944
> Project: Kafka
>  Issue Type: Bug
>  Components: copycat
>Affects Versions: 0.9.0.0
>Reporter: Ewen Cheslack-Postava
>Assignee: Ewen Cheslack-Postava
>
> Relevant log where you can see a config update starting, then the request to 
> shutdown happens and we end up with a NullPointerException:
> {quote}
> [2015-12-03 09:12:55,712] DEBUG Change in connector task count from 2 to 3, 
> writing updated task configurations 
> (org.apache.kafka.connect.runtime.distributed.DistributedHerder)
> [2015-12-03 09:12:56,224] INFO Kafka Connect stopping 
> (org.apache.kafka.connect.runtime.Connect)
> [2015-12-03 09:12:56,224] INFO Stopping REST server 
> (org.apache.kafka.connect.runtime.rest.RestServer)
> [2015-12-03 09:12:56,227] INFO Stopped 
> ServerConnector@10cb550e{HTTP/1.1}{0.0.0.0:8083} 
> (org.eclipse.jetty.server.ServerConnector)
> [2015-12-03 09:12:56,234] INFO Stopped 
> o.e.j.s.ServletContextHandler@3f8a24d5{/,null,UNAVAILABLE} 
> (org.eclipse.jetty.server.handler.ContextHandler)
> [2015-12-03 09:12:56,235] INFO REST server stopped 
> (org.apache.kafka.connect.runtime.rest.RestServer)
> [2015-12-03 09:12:56,235] INFO Herder stopping 
> (org.apache.kafka.connect.runtime.distributed.DistributedHerder)
> [2015-12-03 09:12:58,209] ERROR Unexpected exception in KafkaBasedLog's work 
> thread (org.apache.kafka.connect.util.KafkaBasedLog)
> java.lang.NullPointerException
>   at 
> org.apache.kafka.connect.storage.KafkaConfigStorage.completeTaskIdSet(KafkaConfigStorage.java:558)
>   at 
> org.apache.kafka.connect.storage.KafkaConfigStorage.access$1200(KafkaConfigStorage.java:143)
>   at 
> org.apache.kafka.connect.storage.KafkaConfigStorage$1.onCompletion(KafkaConfigStorage.java:476)
>   at 
> org.apache.kafka.connect.storage.KafkaConfigStorage$1.onCompletion(KafkaConfigStorage.java:372)
>   at 
> org.apache.kafka.connect.util.KafkaBasedLog.poll(KafkaBasedLog.java:235)
>   at 
> org.apache.kafka.connect.util.KafkaBasedLog.readToLogEnd(KafkaBasedLog.java:275)
>   at 
> org.apache.kafka.connect.util.KafkaBasedLog.access$300(KafkaBasedLog.java:70)
>   at 
> org.apache.kafka.connect.util.KafkaBasedLog$WorkThread.run(KafkaBasedLog.java:307)
> [2015-12-03 09:13:26,704] ERROR Failed to write root configuration to Kafka:  
> (org.apache.kafka.connect.storage.KafkaConfigStorage)
> java.util.concurrent.TimeoutException: Timed out waiting for future
>   at 
> org.apache.kafka.connect.util.ConvertingFutureCallback.get(ConvertingFutureCallback.java:74)
>   at 
> org.apache.kafka.connect.storage.KafkaConfigStorage.putTaskConfigs(KafkaConfigStorage.java:352)
>   at 
> org.apache.kafka.connect.runtime.distributed.DistributedHerder.reconfigureConnector(DistributedHerder.java:737)
>   at 
> org.apache.kafka.connect.runtime.distributed.DistributedHerder.reconfigureConnectorTasksWithRetry(DistributedHerder.java:677)
>   at 
> 

[jira] [Commented] (KAFKA-2944) NullPointerException in KafkaConfigStorage when config storage starts right before shutdown request

2016-01-01 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15076454#comment-15076454
 ] 

ASF GitHub Bot commented on KAFKA-2944:
---

GitHub user ZoneMayor reopened a pull request:

https://github.com/apache/kafka/pull/723

KAFKA-2944: fix NullPointerException in KafkaConfigStorage

Loss of config messages can affect the logic of KafkaConfigStorage;
calling readToEnd after sending each message to KafkaBasedLog ensures that 
all config messages are flushed to Kafka.
Since the config messages sent to KafkaBasedLog are metadata, this should 
not noticeably affect performance.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ZoneMayor/kafka trunk-KAFKA-2944

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/723.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #723


commit 34240b52e1b70aa172b65155f6042243d838b420
Author: ZoneMayor 
Date:   2015-12-18T07:22:20Z

Merge pull request #12 from apache/trunk

2015-12-18

commit 52d02f333e86d06cfa8fff5facd18999b3db6d83
Author: ZoneMayor 
Date:   2015-12-30T03:08:08Z

Merge pull request #13 from apache/trunk

2015-12-30

commit 82150dccc59ac0e436acc5186d2b8fb66c9df671
Author: jinxing 
Date:   2016-01-01T15:21:41Z

KAFKA-2944: fix NullPointerException in KafkaConfigStorage




> NullPointerException in KafkaConfigStorage when config storage starts right 
> before shutdown request
> ---
>
> Key: KAFKA-2944
> URL: https://issues.apache.org/jira/browse/KAFKA-2944
> Project: Kafka
>  Issue Type: Bug
>  Components: copycat
>Affects Versions: 0.9.0.0
>Reporter: Ewen Cheslack-Postava
>Assignee: Ewen Cheslack-Postava
>
> Relevant log where you can see a config update starting, then the request to 
> shutdown happens and we end up with a NullPointerException:
> {quote}
> [2015-12-03 09:12:55,712] DEBUG Change in connector task count from 2 to 3, 
> writing updated task configurations 
> (org.apache.kafka.connect.runtime.distributed.DistributedHerder)
> [2015-12-03 09:12:56,224] INFO Kafka Connect stopping 
> (org.apache.kafka.connect.runtime.Connect)
> [2015-12-03 09:12:56,224] INFO Stopping REST server 
> (org.apache.kafka.connect.runtime.rest.RestServer)
> [2015-12-03 09:12:56,227] INFO Stopped 
> ServerConnector@10cb550e{HTTP/1.1}{0.0.0.0:8083} 
> (org.eclipse.jetty.server.ServerConnector)
> [2015-12-03 09:12:56,234] INFO Stopped 
> o.e.j.s.ServletContextHandler@3f8a24d5{/,null,UNAVAILABLE} 
> (org.eclipse.jetty.server.handler.ContextHandler)
> [2015-12-03 09:12:56,235] INFO REST server stopped 
> (org.apache.kafka.connect.runtime.rest.RestServer)
> [2015-12-03 09:12:56,235] INFO Herder stopping 
> (org.apache.kafka.connect.runtime.distributed.DistributedHerder)
> [2015-12-03 09:12:58,209] ERROR Unexpected exception in KafkaBasedLog's work 
> thread (org.apache.kafka.connect.util.KafkaBasedLog)
> java.lang.NullPointerException
>   at 
> org.apache.kafka.connect.storage.KafkaConfigStorage.completeTaskIdSet(KafkaConfigStorage.java:558)
>   at 
> org.apache.kafka.connect.storage.KafkaConfigStorage.access$1200(KafkaConfigStorage.java:143)
>   at 
> org.apache.kafka.connect.storage.KafkaConfigStorage$1.onCompletion(KafkaConfigStorage.java:476)
>   at 
> org.apache.kafka.connect.storage.KafkaConfigStorage$1.onCompletion(KafkaConfigStorage.java:372)
>   at 
> org.apache.kafka.connect.util.KafkaBasedLog.poll(KafkaBasedLog.java:235)
>   at 
> org.apache.kafka.connect.util.KafkaBasedLog.readToLogEnd(KafkaBasedLog.java:275)
>   at 
> org.apache.kafka.connect.util.KafkaBasedLog.access$300(KafkaBasedLog.java:70)
>   at 
> org.apache.kafka.connect.util.KafkaBasedLog$WorkThread.run(KafkaBasedLog.java:307)
> [2015-12-03 09:13:26,704] ERROR Failed to write root configuration to Kafka:  
> (org.apache.kafka.connect.storage.KafkaConfigStorage)
> java.util.concurrent.TimeoutException: Timed out waiting for future
>   at 
> org.apache.kafka.connect.util.ConvertingFutureCallback.get(ConvertingFutureCallback.java:74)
>   at 
> org.apache.kafka.connect.storage.KafkaConfigStorage.putTaskConfigs(KafkaConfigStorage.java:352)
>   at 
> org.apache.kafka.connect.runtime.distributed.DistributedHerder.reconfigureConnector(DistributedHerder.java:737)
>   at 
> org.apache.kafka.connect.runtime.distributed.DistributedHerder.reconfigureConnectorTasksWithRetry(DistributedHerder.java:677)
>   at 
> 

[jira] [Commented] (KAFKA-2944) NullPointerException in KafkaConfigStorage when config storage starts right before shutdown request

2016-01-01 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15076451#comment-15076451
 ] 

ASF GitHub Bot commented on KAFKA-2944:
---

Github user ZoneMayor closed the pull request at:

https://github.com/apache/kafka/pull/723


> NullPointerException in KafkaConfigStorage when config storage starts right 
> before shutdown request
> ---
>
> Key: KAFKA-2944
> URL: https://issues.apache.org/jira/browse/KAFKA-2944
> Project: Kafka
>  Issue Type: Bug
>  Components: copycat
>Affects Versions: 0.9.0.0
>Reporter: Ewen Cheslack-Postava
>Assignee: Ewen Cheslack-Postava
>
> Relevant log where you can see a config update starting, then the request to 
> shutdown happens and we end up with a NullPointerException:
> {quote}
> [2015-12-03 09:12:55,712] DEBUG Change in connector task count from 2 to 3, 
> writing updated task configurations 
> (org.apache.kafka.connect.runtime.distributed.DistributedHerder)
> [2015-12-03 09:12:56,224] INFO Kafka Connect stopping 
> (org.apache.kafka.connect.runtime.Connect)
> [2015-12-03 09:12:56,224] INFO Stopping REST server 
> (org.apache.kafka.connect.runtime.rest.RestServer)
> [2015-12-03 09:12:56,227] INFO Stopped 
> ServerConnector@10cb550e{HTTP/1.1}{0.0.0.0:8083} 
> (org.eclipse.jetty.server.ServerConnector)
> [2015-12-03 09:12:56,234] INFO Stopped 
> o.e.j.s.ServletContextHandler@3f8a24d5{/,null,UNAVAILABLE} 
> (org.eclipse.jetty.server.handler.ContextHandler)
> [2015-12-03 09:12:56,235] INFO REST server stopped 
> (org.apache.kafka.connect.runtime.rest.RestServer)
> [2015-12-03 09:12:56,235] INFO Herder stopping 
> (org.apache.kafka.connect.runtime.distributed.DistributedHerder)
> [2015-12-03 09:12:58,209] ERROR Unexpected exception in KafkaBasedLog's work 
> thread (org.apache.kafka.connect.util.KafkaBasedLog)
> java.lang.NullPointerException
>   at 
> org.apache.kafka.connect.storage.KafkaConfigStorage.completeTaskIdSet(KafkaConfigStorage.java:558)
>   at 
> org.apache.kafka.connect.storage.KafkaConfigStorage.access$1200(KafkaConfigStorage.java:143)
>   at 
> org.apache.kafka.connect.storage.KafkaConfigStorage$1.onCompletion(KafkaConfigStorage.java:476)
>   at 
> org.apache.kafka.connect.storage.KafkaConfigStorage$1.onCompletion(KafkaConfigStorage.java:372)
>   at 
> org.apache.kafka.connect.util.KafkaBasedLog.poll(KafkaBasedLog.java:235)
>   at 
> org.apache.kafka.connect.util.KafkaBasedLog.readToLogEnd(KafkaBasedLog.java:275)
>   at 
> org.apache.kafka.connect.util.KafkaBasedLog.access$300(KafkaBasedLog.java:70)
>   at 
> org.apache.kafka.connect.util.KafkaBasedLog$WorkThread.run(KafkaBasedLog.java:307)
> [2015-12-03 09:13:26,704] ERROR Failed to write root configuration to Kafka:  
> (org.apache.kafka.connect.storage.KafkaConfigStorage)
> java.util.concurrent.TimeoutException: Timed out waiting for future
>   at 
> org.apache.kafka.connect.util.ConvertingFutureCallback.get(ConvertingFutureCallback.java:74)
>   at 
> org.apache.kafka.connect.storage.KafkaConfigStorage.putTaskConfigs(KafkaConfigStorage.java:352)
>   at 
> org.apache.kafka.connect.runtime.distributed.DistributedHerder.reconfigureConnector(DistributedHerder.java:737)
>   at 
> org.apache.kafka.connect.runtime.distributed.DistributedHerder.reconfigureConnectorTasksWithRetry(DistributedHerder.java:677)
>   at 
> org.apache.kafka.connect.runtime.distributed.DistributedHerder.startConnector(DistributedHerder.java:673)
>   at 
> org.apache.kafka.connect.runtime.distributed.DistributedHerder.startWork(DistributedHerder.java:640)
>   at 
> org.apache.kafka.connect.runtime.distributed.DistributedHerder.handleRebalanceCompleted(DistributedHerder.java:598)
>   at 
> org.apache.kafka.connect.runtime.distributed.DistributedHerder.tick(DistributedHerder.java:184)
>   at 
> org.apache.kafka.connect.runtime.distributed.DistributedHerder.run(DistributedHerder.java:159)
>   at java.lang.Thread.run(Thread.java:745)
> [2015-12-03 09:13:26,704] ERROR Failed to reconfigure connector's tasks, 
> retrying after backoff: 
> (org.apache.kafka.connect.runtime.distributed.DistributedHerder)
> org.apache.kafka.connect.errors.ConnectException: Error writing root 
> configuration to Kafka
>   at 
> org.apache.kafka.connect.storage.KafkaConfigStorage.putTaskConfigs(KafkaConfigStorage.java:355)
>   at 
> org.apache.kafka.connect.runtime.distributed.DistributedHerder.reconfigureConnector(DistributedHerder.java:737)
>   at 
> org.apache.kafka.connect.runtime.distributed.DistributedHerder.reconfigureConnectorTasksWithRetry(DistributedHerder.java:677)
>   at 
> org.apache.kafka.connect.runtime.distributed.DistributedHerder.startConnector(DistributedHerder.java:673)
> 

[jira] [Commented] (KAFKA-2944) NullPointerException in KafkaConfigStorage when config storage starts right before shutdown request

2016-01-01 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15076452#comment-15076452
 ] 

ASF GitHub Bot commented on KAFKA-2944:
---

GitHub user ZoneMayor reopened a pull request:

https://github.com/apache/kafka/pull/723

KAFKA-2944: fix NullPointerException in KafkaConfigStorage

Loss of "config messages" can affect the logic of KafkaConfigStorage.
Call readToEnd after sending each message to KafkaBasedLog to ensure that 
all config messages are flushed to Kafka.
Since the "config messages" sent to KafkaBasedLog are metadata, this will 
not affect performance too much.
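
The send-then-readToEnd pattern described above can be sketched as follows. This is a minimal illustration, not the actual Kafka Connect code: `InMemoryLog` is a hypothetical stand-in for `KafkaBasedLog`, where writes land in a backing list (the "topic") but are only visible to the reader after `readToEnd()` catches the consumer up.

```java
import java.util.ArrayList;
import java.util.List;

public class ReadToEndSketch {
    // Hypothetical stand-in for KafkaBasedLog: send() appends to the
    // "topic", readToEnd() advances the consumer to the current log end.
    static class InMemoryLog {
        private final List<String> topic = new ArrayList<>();    // messages produced
        private final List<String> consumed = new ArrayList<>(); // messages the reader has seen

        void send(String msg) {            // asynchronous produce; not yet consumed
            topic.add(msg);
        }

        void readToEnd() {                 // catch the consumer up to the log end
            while (consumed.size() < topic.size()) {
                consumed.add(topic.get(consumed.size()));
            }
        }

        List<String> consumed() {
            return consumed;
        }
    }

    public static void main(String[] args) {
        InMemoryLog log = new InMemoryLog();

        // A config message that has been sent but not read back is invisible
        // to the storage logic -- the kind of gap the patch closes.
        log.send("task-config-1");
        if (!log.consumed().isEmpty()) {
            throw new AssertionError("message should not be visible before readToEnd");
        }

        // The fix: read to the log end after each config message, so later
        // logic never observes a partially written config snapshot.
        log.send("task-config-2");
        log.readToEnd();
        if (log.consumed().size() != 2) {
            throw new AssertionError("all config messages should be flushed");
        }
        System.out.println("consumed=" + log.consumed());
    }
}
```

Because the config messages are small metadata records, the extra read-to-end round trip per write is cheap relative to the consistency it buys.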

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ZoneMayor/kafka trunk-KAFKA-2944

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/723.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #723


commit 34240b52e1b70aa172b65155f6042243d838b420
Author: ZoneMayor 
Date:   2015-12-18T07:22:20Z

Merge pull request #12 from apache/trunk

2015-12-18

commit 52d02f333e86d06cfa8fff5facd18999b3db6d83
Author: ZoneMayor 
Date:   2015-12-30T03:08:08Z

Merge pull request #13 from apache/trunk

2015-12-30

commit 82150dccc59ac0e436acc5186d2b8fb66c9df671
Author: jinxing 
Date:   2016-01-01T15:21:41Z

KAFKA-2944: fix NullPointerException in KafkaConfigStorage





[jira] [Commented] (KAFKA-2944) NullPointerException in KafkaConfigStorage when config storage starts right before shutdown request

2016-01-01 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15076453#comment-15076453
 ] 

ASF GitHub Bot commented on KAFKA-2944:
---

Github user ZoneMayor closed the pull request at:

https://github.com/apache/kafka/pull/723



[jira] [Commented] (KAFKA-2944) NullPointerException in KafkaConfigStorage when config storage starts right before shutdown request

2016-01-01 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15076455#comment-15076455
 ] 

ASF GitHub Bot commented on KAFKA-2944:
---

Github user ZoneMayor closed the pull request at:

https://github.com/apache/kafka/pull/723



[jira] [Commented] (KAFKA-2944) NullPointerException in KafkaConfigStorage when config storage starts right before shutdown request

2016-01-03 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15080445#comment-15080445
 ] 

ASF GitHub Bot commented on KAFKA-2944:
---

GitHub user ZoneMayor reopened a pull request:

https://github.com/apache/kafka/pull/723

KAFKA-2944: fix NullPointerException in KafkaConfigStorage

Loss of "config messages" can affect the logic of KafkaConfigStorage.
Call readToEnd after sending each message to KafkaBasedLog to ensure that 
all config messages are flushed to Kafka.
Since the "config messages" sent to KafkaBasedLog are metadata, this will 
not affect performance too much.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ZoneMayor/kafka trunk-KAFKA-2944

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/723.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #723


commit 34240b52e1b70aa172b65155f6042243d838b420
Author: ZoneMayor 
Date:   2015-12-18T07:22:20Z

Merge pull request #12 from apache/trunk

2015-12-18

commit 52d02f333e86d06cfa8fff5facd18999b3db6d83
Author: ZoneMayor 
Date:   2015-12-30T03:08:08Z

Merge pull request #13 from apache/trunk

2015-12-30

commit 82150dccc59ac0e436acc5186d2b8fb66c9df671
Author: jinxing 
Date:   2016-01-01T15:21:41Z

KAFKA-2944: fix NullPointerException in KafkaConfigStorage

commit 320386d2c484dac7eed9b7fe1584ca376e4ad897
Author: jinxing 
Date:   2016-01-03T06:09:00Z

fix

commit 67d1b21661886b204bbb86bc472a2ecce57613dc
Author: jinxing 
Date:   2016-01-03T07:45:53Z

small fix





[jira] [Commented] (KAFKA-2944) NullPointerException in KafkaConfigStorage when config storage starts right before shutdown request

2016-01-03 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15080427#comment-15080427
 ] 

ASF GitHub Bot commented on KAFKA-2944:
---

Github user ZoneMayor closed the pull request at:

https://github.com/apache/kafka/pull/723



[jira] [Commented] (KAFKA-2944) NullPointerException in KafkaConfigStorage when config storage starts right before shutdown request

2016-01-03 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15080428#comment-15080428
 ] 

ASF GitHub Bot commented on KAFKA-2944:
---

GitHub user ZoneMayor reopened a pull request:

https://github.com/apache/kafka/pull/723

KAFKA-2944: fix NullPointerException in KafkaConfigStorage

Loss of "config messages" can affect the logic of KafkaConfigStorage.
Call readToEnd after sending each message to KafkaBasedLog to ensure that 
all config messages are flushed to Kafka.
Since the "config messages" sent to KafkaBasedLog are metadata, this will 
not affect performance too much.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ZoneMayor/kafka trunk-KAFKA-2944

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/723.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #723


commit 34240b52e1b70aa172b65155f6042243d838b420
Author: ZoneMayor 
Date:   2015-12-18T07:22:20Z

Merge pull request #12 from apache/trunk

2015-12-18

commit 52d02f333e86d06cfa8fff5facd18999b3db6d83
Author: ZoneMayor 
Date:   2015-12-30T03:08:08Z

Merge pull request #13 from apache/trunk

2015-12-30

commit 82150dccc59ac0e436acc5186d2b8fb66c9df671
Author: jinxing 
Date:   2016-01-01T15:21:41Z

KAFKA-2944: fix NullPointerException in KafkaConfigStorage

commit 320386d2c484dac7eed9b7fe1584ca376e4ad897
Author: jinxing 
Date:   2016-01-03T06:09:00Z

fix

commit 67d1b21661886b204bbb86bc472a2ecce57613dc
Author: jinxing 
Date:   2016-01-03T07:45:53Z

small fix





[jira] [Commented] (KAFKA-2944) NullPointerException in KafkaConfigStorage when config storage starts right before shutdown request

2016-01-03 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15080444#comment-15080444
 ] 

ASF GitHub Bot commented on KAFKA-2944:
---

Github user ZoneMayor closed the pull request at:

https://github.com/apache/kafka/pull/723


>   at 
> org.apache.kafka.connect.util.KafkaBasedLog$WorkThread.run(KafkaBasedLog.java:307)
> [2015-12-03 09:13:26,704] ERROR Failed to write root configuration to Kafka:  
> (org.apache.kafka.connect.storage.KafkaConfigStorage)
> java.util.concurrent.TimeoutException: Timed out waiting for future
>   at 
> org.apache.kafka.connect.util.ConvertingFutureCallback.get(ConvertingFutureCallback.java:74)
>   at 
> org.apache.kafka.connect.storage.KafkaConfigStorage.putTaskConfigs(KafkaConfigStorage.java:352)
>   at 
> org.apache.kafka.connect.runtime.distributed.DistributedHerder.reconfigureConnector(DistributedHerder.java:737)
>   at 
> org.apache.kafka.connect.runtime.distributed.DistributedHerder.reconfigureConnectorTasksWithRetry(DistributedHerder.java:677)
>   at 
> org.apache.kafka.connect.runtime.distributed.DistributedHerder.startConnector(DistributedHerder.java:673)
>   at 
> org.apache.kafka.connect.runtime.distributed.DistributedHerder.startWork(DistributedHerder.java:640)
>   at 
> org.apache.kafka.connect.runtime.distributed.DistributedHerder.handleRebalanceCompleted(DistributedHerder.java:598)
>   at 
> org.apache.kafka.connect.runtime.distributed.DistributedHerder.tick(DistributedHerder.java:184)
>   at 
> org.apache.kafka.connect.runtime.distributed.DistributedHerder.run(DistributedHerder.java:159)
>   at java.lang.Thread.run(Thread.java:745)
> [2015-12-03 09:13:26,704] ERROR Failed to reconfigure connector's tasks, 
> retrying after backoff: 
> (org.apache.kafka.connect.runtime.distributed.DistributedHerder)
> org.apache.kafka.connect.errors.ConnectException: Error writing root 
> configuration to Kafka
>   at 
> org.apache.kafka.connect.storage.KafkaConfigStorage.putTaskConfigs(KafkaConfigStorage.java:355)
>   at 
> org.apache.kafka.connect.runtime.distributed.DistributedHerder.reconfigureConnector(DistributedHerder.java:737)
>   at 
> org.apache.kafka.connect.runtime.distributed.DistributedHerder.reconfigureConnectorTasksWithRetry(DistributedHerder.java:677)
>   at 
> org.apache.kafka.connect.runtime.distributed.DistributedHerder.startConnector(DistributedHerder.java:673)
> 

[jira] [Commented] (KAFKA-3065) Prevent unbounded growth of RecordAccumulator#batches in Kafka producer

2016-01-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3065?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15085694#comment-15085694
 ] 

ASF GitHub Bot commented on KAFKA-3065:
---

GitHub user rajinisivaram opened a pull request:

https://github.com/apache/kafka/pull/735

KAFKA-3065: Remove unused topic partitions from RecordAccumulator

Removes unused topic partitions from RecordAccumulator#batches to prevent 
the map from growing indefinitely. Replaces CopyOnWriteMap with ConcurrentHashMap to 
support deletes without double locking. Producer performance tests show no 
significant difference.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/rajinisivaram/kafka KAFKA-3065

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/735.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #735


commit 321aab4207673de26593dcdc00246fe0ca6b6c59
Author: Rajini Sivaram 
Date:   2016-01-05T20:24:27Z

KAFKA-3065: Remove unused topic partitions from RecordAccumulator




> Prevent unbounded growth of RecordAccumulator#batches in Kafka producer
> ---
>
> Key: KAFKA-3065
> URL: https://issues.apache.org/jira/browse/KAFKA-3065
> Project: Kafka
>  Issue Type: Bug
>  Components: producer 
>Affects Versions: 0.9.0.0
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
>
> Topic partitions added to RecordAccumulator#batches are never removed from 
> the map. In a dynamic environment where topics are created and deleted 
> frequently, this can result in unbounded growth of the map with unused 
> entries. This is particularly an issue for the Kafka REST service where a 
> producer object is used for the lifetime of the service, and the service 
> itself does not control the topics to which clients publish messages.
> This is a follow-on defect from KAFKA-2948.
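The map-replacement approach described in the pull request above can be sketched in isolation. This is a hedged stand-in, not the actual RecordAccumulator code: partition keys are plain strings, batches are strings, and `BatchMapSketch` is a hypothetical name. The point it illustrates is why `ConcurrentHashMap` supports deletion without double locking: `computeIfAbsent` creates the per-partition queue atomically, and `remove(key, value)` deletes an entry only if it still maps to the exact queue we inspected.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Hypothetical sketch (not the real RecordAccumulator): partition keys and
// batches are plain strings, just to show the map operations involved.
public class BatchMapSketch {
    private final ConcurrentMap<String, Deque<String>> batches = new ConcurrentHashMap<>();

    // computeIfAbsent is atomic on ConcurrentHashMap, so no explicit lock is
    // needed to create the per-partition queue exactly once.
    public Deque<String> dequeFor(String partition) {
        return batches.computeIfAbsent(partition, k -> new ArrayDeque<>());
    }

    // Remove the entry only if its queue is empty and still the one we saw;
    // remove(key, value) here compares by reference (ArrayDeque does not
    // override equals), so a concurrently re-created queue is left alone.
    public boolean removeIfEmpty(String partition) {
        Deque<String> d = batches.get(partition);
        if (d == null) return false;
        synchronized (d) {
            return d.isEmpty() && batches.remove(partition, d);
        }
    }

    public int size() {
        return batches.size();
    }
}
```

A caller that drains a partition's queue can then call `removeIfEmpty` to drop the map entry, which is what keeps the map bounded in a dynamic topic environment.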



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2975) The networkClient should request a metadata update after it gets an error in the handleResponse()

2016-01-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2975?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15085926#comment-15085926
 ] 

ASF GitHub Bot commented on KAFKA-2975:
---

Github user MayureshGharat closed the pull request at:

https://github.com/apache/kafka/pull/658


> The networkClient should request a metadata update after it gets an error in 
> the handleResponse()
> -
>
> Key: KAFKA-2975
> URL: https://issues.apache.org/jira/browse/KAFKA-2975
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Reporter: Mayuresh Gharat
>Assignee: Mayuresh Gharat
>
> Currently in the data pipeline:
> 1) Let's say the Mirror Maker requestTimeout is set to 2 min and metadataExpiry 
> is set to 5 min.
> 2) We delete a topic; the Mirror Maker gets UNKNOWN_TOPIC_PARTITION and tries 
> to refresh its metadata.
> 3) It gets LeaderNotAvailableException, maybe because the topic is not 
> created yet.
> 4) Now its metadata does not have any information about that topic.
> 5) It will wait for 5 min to do the next refresh.
> 6) In the meantime the batches sitting in the accumulator will expire and 
> the mirror makers die to avoid data loss.
> To overcome this we need to refresh the metadata after 3).
> There is an alternative solution: set metadataExpiry to be less than 
> requestTimeout, but that would mean making more metadata requests over the 
> wire in the normal scenario as well.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3016) Add KStream-KStream window joins

2016-01-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3016?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15085915#comment-15085915
 ] 

ASF GitHub Bot commented on KAFKA-3016:
---

GitHub user ymatsuda opened a pull request:

https://github.com/apache/kafka/pull/737

KAFKA-3016: phase-2. stream join implementations

@guozhangwang 


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ymatsuda/kafka windowed_join2

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/737.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #737


commit 5494ec5b816a86d3414b84172385c46f4c2603c5
Author: Yasuhiro Matsuda 
Date:   2016-01-06T17:39:35Z

KAFKA-3016: phase-2. stream join implementations




> Add KStream-KStream window joins
> 
>
> Key: KAFKA-3016
> URL: https://issues.apache.org/jira/browse/KAFKA-3016
> Project: Kafka
>  Issue Type: Sub-task
>  Components: kafka streams
>Affects Versions: 0.9.0.1
>Reporter: Yasuhiro Matsuda
>Assignee: Yasuhiro Matsuda
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3069) Fix recursion in ZkSecurityMigrator

2016-01-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15085853#comment-15085853
 ] 

ASF GitHub Bot commented on KAFKA-3069:
---

GitHub user fpj opened a pull request:

https://github.com/apache/kafka/pull/736

KAFKA-3069: Fix recursion in ZkSecurityMigrator

I'm also fixing a bug in the testChroot test case.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/fpj/kafka KAFKA-3069

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/736.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #736


commit 81028a160f3653b53331f71c4ee988b4945a0a0f
Author: Flavio Junqueira 
Date:   2016-01-06T17:11:46Z

KAFKA-3069: Fixed issue in the migrator tool and bug in the testChroot case.




> Fix recursion in ZkSecurityMigrator
> ---
>
> Key: KAFKA-3069
> URL: https://issues.apache.org/jira/browse/KAFKA-3069
> Project: Kafka
>  Issue Type: Bug
>  Components: security
>Affects Versions: 0.9.0.0
>Reporter: Flavio Junqueira
>Assignee: Flavio Junqueira
> Fix For: 0.9.0.1
>
>
> The zk migrator tool recursively sets ACLs starting with the root, which we 
> initially assumed was either the root of a dedicated ensemble or a chroot. 
> However, there are at least two reasons for not doing it this way. First, 
> shared ensembles might not really follow the practice of separating 
> applications into branches, essentially creating a chroot for each. Second, 
> there are paths we don't want to secure, like the ConsumersPath.
> To fix this, we simply need to set the root ACL separately and start the 
> recursion on each of the persistent paths to secure.  
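The scoped traversal described above can be sketched over a plain in-memory tree (the paths and class below are stand-ins for ZooKeeper znodes and the migrator tool, not its actual code): the root ACL is set once, non-recursively, and recursion runs only over an explicit list of persistent paths, so siblings such as the ConsumersPath are never touched.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// In-memory stand-in for a znode tree: each path maps to its child paths.
public class AclWalkSketch {
    private final Map<String, List<String>> children;
    private final List<String> secured = new ArrayList<>();

    public AclWalkSketch(Map<String, List<String>> children) {
        this.children = children;
    }

    // "Set the ACL" on a single node (recorded here instead of calling ZK).
    private void setAcl(String path) {
        secured.add(path);
    }

    private void setAclRecursively(String path) {
        setAcl(path);
        for (String child : children.getOrDefault(path, List.of())) {
            setAclRecursively(child);
        }
    }

    // Root is handled separately and non-recursively; recursion starts only
    // at the listed persistent paths, leaving other root children alone.
    public List<String> migrate(List<String> persistentPaths) {
        setAcl("/");
        for (String p : persistentPaths) {
            setAclRecursively(p);
        }
        return secured;
    }
}
```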



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2934) Offset storage file configuration in Connect standalone mode is not included in StandaloneConfig

2016-01-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15083117#comment-15083117
 ] 

ASF GitHub Bot commented on KAFKA-2934:
---

GitHub user ZoneMayor opened a pull request:

https://github.com/apache/kafka/pull/734

KAFKA-2934: Offset storage file configuration in Connect standalone mode is 
not included in StandaloneConfig

Added offsetBackingStore config to StandaloneConfig and DistributedConfig;
Added config for offset.storage.topic and config.storage.topic into 
DistributedConfig;

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ZoneMayor/kafka trunk-KAFKA-2934

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/734.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #734


commit 34240b52e1b70aa172b65155f6042243d838b420
Author: ZoneMayor 
Date:   2015-12-18T07:22:20Z

Merge pull request #12 from apache/trunk

2015-12-18

commit 52d02f333e86d06cfa8fff5facd18999b3db6d83
Author: ZoneMayor 
Date:   2015-12-30T03:08:08Z

Merge pull request #13 from apache/trunk

2015-12-30

commit dd90fb07a073de2f58ba6d868348aeaf9e70a6a3
Author: jinxing 
Date:   2016-01-02T12:49:18Z

KAFKA-2934: Offset storage file configuration in Connect standalone mode is 
not included in StandaloneConfig

commit ff572f805de3eb7de94046f6e29aee7254262d4c
Author: jinxing 
Date:   2016-01-02T14:21:19Z

fix




> Offset storage file configuration in Connect standalone mode is not included 
> in StandaloneConfig
> 
>
> Key: KAFKA-2934
> URL: https://issues.apache.org/jira/browse/KAFKA-2934
> Project: Kafka
>  Issue Type: Bug
>  Components: copycat
>Affects Versions: 0.9.0.0
>Reporter: Ewen Cheslack-Postava
>Assignee: Ewen Cheslack-Postava
>
> The config is coded directly in FileOffsetBackingStore rather than being 
> listed (and validated) in StandaloneConfig. This also means it wouldn't be 
> included if we autogenerated docs from the config classes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2072) Add StopReplica request/response to o.a.k.common.requests and replace the usage in core module

2016-01-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15086542#comment-15086542
 ] 

ASF GitHub Bot commented on KAFKA-2072:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/196


> Add StopReplica request/response to o.a.k.common.requests and replace the 
> usage in core module
> --
>
> Key: KAFKA-2072
> URL: https://issues.apache.org/jira/browse/KAFKA-2072
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Gwen Shapira
>Assignee: David Jacot
> Fix For: 0.9.1.0
>
>
> Add StopReplica request/response to o.a.k.common.requests and replace the 
> usage in core module



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3022) Deduplicate common project configurations

2016-01-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3022?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15086561#comment-15086561
 ] 

ASF GitHub Bot commented on KAFKA-3022:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/712


> Deduplicate common project configurations
> -
>
> Key: KAFKA-3022
> URL: https://issues.apache.org/jira/browse/KAFKA-3022
> Project: Kafka
>  Issue Type: Sub-task
>  Components: build
>Reporter: Grant Henke
>Assignee: Grant Henke
> Fix For: 0.9.1.0
>
>
> Many of the configurations for subproject artifacts, tests, CheckStyle, etc. 
> are and should be exactly the same. We can reduce duplicate code by moving 
> this configuration to the sub-projects section.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2937) Topics marked for delete in Zookeeper may become undeletable

2016-01-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15086378#comment-15086378
 ] 

ASF GitHub Bot commented on KAFKA-2937:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/729


> Topics marked for delete in Zookeeper may become undeletable
> 
>
> Key: KAFKA-2937
> URL: https://issues.apache.org/jira/browse/KAFKA-2937
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.9.0.0
>Reporter: Rajini Sivaram
>Assignee: Mayuresh Gharat
> Fix For: 0.9.0.1
>
>
> In our clusters, we occasionally see topics marked for delete, but never 
> actually deleted. It may be due to brokers being restarted while tests were 
> running, but further restarts of Kafka don't fix the problem. The topics 
> remain marked for delete in Zookeeper.
> Topic describe shows:
> {quote}
> Topic:testtopic   PartitionCount:1ReplicationFactor:3 Configs:
>   Topic: testtopicPartition: 0Leader: noneReplicas: 3,4,0 
> Isr: 
> {quote}
> Kafka logs show:
> {quote}
> 2015-12-02 15:53:30,152] ERROR Controller 2 epoch 213 initiated state change 
> of replica 3 for partition [testtopic,0] from OnlineReplica to OfflineReplica 
> failed (state.change.logger)
> kafka.common.StateChangeFailedException: Failed to change state of replica 3 
> for partition [testtopic,0] since the leader and isr path in zookeeper is 
> empty
> at 
> kafka.controller.ReplicaStateMachine.handleStateChange(ReplicaStateMachine.scala:269)
> at 
> kafka.controller.ReplicaStateMachine$$anonfun$handleStateChanges$2.apply(ReplicaStateMachine.scala:114)
> at 
> kafka.controller.ReplicaStateMachine$$anonfun$handleStateChanges$2.apply(ReplicaStateMachine.scala:114)
> at 
> scala.collection.immutable.HashSet$HashSet1.foreach(HashSet.scala:322)
> at 
> scala.collection.immutable.HashSet$HashTrieSet.foreach(HashSet.scala:978)
> at 
> kafka.controller.ReplicaStateMachine.handleStateChanges(ReplicaStateMachine.scala:114)
> at 
> kafka.controller.TopicDeletionManager$$anonfun$startReplicaDeletion$2.apply(TopicDeletionManager.scala:342)
> at 
> kafka.controller.TopicDeletionManager$$anonfun$startReplicaDeletion$2.apply(TopicDeletionManager.scala:334)
> at scala.collection.immutable.Map$Map1.foreach(Map.scala:116)
> at 
> kafka.controller.TopicDeletionManager.startReplicaDeletion(TopicDeletionManager.scala:334)
> at 
> kafka.controller.TopicDeletionManager.kafka$controller$TopicDeletionManager$$onPartitionDeletion(TopicDeletionManager.scala:367)
> at 
> kafka.controller.TopicDeletionManager$$anonfun$kafka$controller$TopicDeletionManager$$onTopicDeletion$2.apply(TopicDeletionManager.scala:313)
> at 
> kafka.controller.TopicDeletionManager$$anonfun$kafka$controller$TopicDeletionManager$$onTopicDeletion$2.apply(TopicDeletionManager.scala:312)
> at scala.collection.immutable.Set$Set1.foreach(Set.scala:79)
> at 
> kafka.controller.TopicDeletionManager.kafka$controller$TopicDeletionManager$$onTopicDeletion(TopicDeletionManager.scala:312)
> at 
> kafka.controller.TopicDeletionManager$DeleteTopicsThread$$anonfun$doWork$1$$anonfun$apply$mcV$sp$4.apply(TopicDeletionManager.scala:431)
> at 
> kafka.controller.TopicDeletionManager$DeleteTopicsThread$$anonfun$doWork$1$$anonfun$apply$mcV$sp$4.apply(TopicDeletionManager.scala:403)
> at scala.collection.immutable.Set$Set2.foreach(Set.scala:111)
> at 
> kafka.controller.TopicDeletionManager$DeleteTopicsThread$$anonfun$doWork$1.apply$mcV$sp(TopicDeletionManager.scala:403)
> at 
> kafka.controller.TopicDeletionManager$DeleteTopicsThread$$anonfun$doWork$1.apply(TopicDeletionManager.scala:397)
> at 
> kafka.controller.TopicDeletionManager$DeleteTopicsThread$$anonfun$doWork$1.apply(TopicDeletionManager.scala:397)
> at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:262)
> at 
> kafka.controller.TopicDeletionManager$DeleteTopicsThread.doWork(TopicDeletionManager.scala:397)
> at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:63)
> {quote}  
>  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3016) Add KStream-KStream window joins

2016-01-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3016?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15086416#comment-15086416
 ] 

ASF GitHub Bot commented on KAFKA-3016:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/737


> Add KStream-KStream window joins
> 
>
> Key: KAFKA-3016
> URL: https://issues.apache.org/jira/browse/KAFKA-3016
> Project: Kafka
>  Issue Type: Sub-task
>  Components: kafka streams
>Affects Versions: 0.9.0.1
>Reporter: Yasuhiro Matsuda
>Assignee: Yasuhiro Matsuda
> Fix For: 0.9.1.0
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3021) Centralize dependency version management

2016-01-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3021?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15088367#comment-15088367
 ] 

ASF GitHub Bot commented on KAFKA-3021:
---

GitHub user granthenke opened a pull request:

https://github.com/apache/kafka/pull/741

KAFKA-3021: Centralize dependency version management



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/granthenke/kafka central-deps

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/741.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #741


commit 2e7e5bda1bc4801e17441c7ede6c523f69500e15
Author: Grant Henke 
Date:   2015-12-24T05:02:58Z

KAFKA-3021: Centralize dependency version management




> Centralize dependency version management
> ---
>
> Key: KAFKA-3021
> URL: https://issues.apache.org/jira/browse/KAFKA-3021
> Project: Kafka
>  Issue Type: Sub-task
>  Components: build
>Reporter: Grant Henke
>Assignee: Grant Henke
> Fix For: 0.9.1.0
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2653) Stateful operations in the KStream DSL layer

2016-01-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2653?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15088533#comment-15088533
 ] 

ASF GitHub Bot commented on KAFKA-2653:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/730


> Stateful operations in the KStream DSL layer
> 
>
> Key: KAFKA-2653
> URL: https://issues.apache.org/jira/browse/KAFKA-2653
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Guozhang Wang
>Assignee: Guozhang Wang
> Fix For: 0.9.1.0
>
>
> This includes the interface design the implementation for stateful operations 
> including:
> 0. table representation in KStream.
> 1. stream-stream join.
> 2. stream-table join.
> 3. table-table join.
> 4. stream / table aggregations.
> With 0 and 3 being tackled in KAFKA-2856 and KAFKA-2962 separately, this 
> ticket is going to only focus on windowing definition and 1 / 2 / 4 above.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2979) Enable authorizer and ACLs in ducktape tests

2016-01-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2979?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15088663#comment-15088663
 ] 

ASF GitHub Bot commented on KAFKA-2979:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/683


> Enable authorizer and ACLs in ducktape tests
> 
>
> Key: KAFKA-2979
> URL: https://issues.apache.org/jira/browse/KAFKA-2979
> Project: Kafka
>  Issue Type: Test
>  Components: system tests
>Affects Versions: 0.9.0.0
>Reporter: Flavio Junqueira
>Assignee: Flavio Junqueira
> Fix For: 0.9.1.0
>
>
> Add some support to test ACLs with ducktape tests and enable some test cases 
> to use it. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2874) zookeeper-server-stop.sh may fail to shutdown ZK and/or may stop unrelated processes

2016-01-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2874?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15086480#comment-15086480
 ] 

ASF GitHub Bot commented on KAFKA-2874:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/573


> zookeeper-server-stop.sh may fail to shutdown ZK and/or may stop unrelated 
> processes
> 
>
> Key: KAFKA-2874
> URL: https://issues.apache.org/jira/browse/KAFKA-2874
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.2.1, 0.9.0.0
>Reporter: Michael Noll
> Fix For: 0.9.1.0
>
>
> We have run into the problem of ZK not shutting down properly when the 
> included {{bin/zookeeper-server-stop.sh}} is being used.  In a nutshell, ZK 
> may not shut down when you send only a SIGINT;  instead, there are certain 
> situations (which unfortunately are a bit hard to pin down) where for some 
> reason only a SIGTERM will shut ZK down.
> Similarly, the current 
> [zookeeper-server-stop|https://github.com/apache/kafka/blob/trunk/bin/zookeeper-server-stop.sh#L16]
>  script uses a very broad grep statement (`grep -i zookeeper`) that might 
> cause the script to shut down other processes on the machine as well, think: 
> collateral damage.
> For reference this is the current command to stop ZK:
> {code}
> ps ax | grep -i 'zookeeper' | grep -v grep | awk '{print $1}' | xargs kill 
> -SIGINT
> {code}
> Disclaimer: I don't know whether there are any unwanted side effects of 
> switching from SIGINT to SIGTERM.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3049) VerifiableProperties does not respect 'default' properties of underlying java.util.Properties instance

2015-12-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15074312#comment-15074312
 ] 

ASF GitHub Bot commented on KAFKA-3049:
---

GitHub user jeffreyolchovy opened a pull request:

https://github.com/apache/kafka/pull/719

KAFKA-3049: VerifiableProperties does not respect 'default' properties of 
underlying java.util.Properties instance

See https://issues.apache.org/jira/browse/KAFKA-3049 for more information.

An alternative solution can be found at: 
https://github.com/apache/kafka/compare/trunk...jeffreyolchovy:KAFKA-3049_immutable-property-names.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/jeffreyolchovy/kafka KAFKA-3049_null-check

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/719.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #719


commit 70360a1030d078fc8d7cd14fd2da129c3b425151
Author: Jeffrey Olchovy 
Date:   2015-12-29T21:06:50Z

add test scaffolding and unit tests for VerifiableProperties

commit 6ded4d8720756977b31e7e20b0d1c3c2ae020997
Author: Jeffrey Olchovy 
Date:   2015-12-29T21:08:55Z

replace containsKey with a null check on getProperty to support default 
properties




> VerifiableProperties does not respect 'default' properties of underlying 
> java.util.Properties instance
> --
>
> Key: KAFKA-3049
> URL: https://issues.apache.org/jira/browse/KAFKA-3049
> Project: Kafka
>  Issue Type: Bug
>  Components: config, core
>Affects Versions: 0.7, 0.8.1.1, 0.8.2.1, 0.9.0.0
>Reporter: Jeffrey Olchovy
>Priority: Minor
>  Labels: easyfix
>
> When retrieving values from the underlying {{Properties}} instance with the 
> {{getString}}, {{get}}, etc. methods of a {{VerifiableProperties}} 
> instance, a call to the underlying {{Properties.containsKey}} method is made. 
> This method will not search the default properties values of the instance, 
> rendering any default properties defined on the {{Properties}} instance 
> useless.
> A practical example is shown below:
> {noformat}
> // suppose we have a base, default set of properties to supply to all 
> consumer groups
> val baseProps = new Properties
> baseProps.setProperty("zookeeper.connect", "localhost:2181/kafka")
> baseProps.setProperty("zookeeper.connection.timeout.ms", "2000")
> // additionally we have discrete properties instances for each consumer group 
> that utilize these defaults
> val groupProps1 = new Properties(baseProps)
> groupProps1.setProperty("group.id", "test-1")
> val groupProps2 = new Properties(baseProps)
> groupProps2.setProperty("group.id", "test-2")
> // when attempting to create an instance of a high-level Consumer with the 
> above properties an exception will be thrown due to the aforementioned 
> problem description
> java.lang.IllegalArgumentException: requirement failed: Missing required 
> property 'zookeeper.connect'
> at scala.Predef$.require(Predef.scala:233)
> at 
> kafka.utils.VerifiableProperties.getString(VerifiableProperties.scala:177)
> at kafka.utils.ZKConfig.(ZkUtils.scala:879)
> at kafka.consumer.ConsumerConfig.(ConsumerConfig.scala:100)
> at kafka.consumer.ConsumerConfig.(ConsumerConfig.scala:104)
> // however, the groupProps instances will return the correct value for 
> "zookeeper.connect" when using `Properties.getProperty`
> assert(groupProps1.getProperty("zookeeper.connect") == "localhost:2181/kafka")
> assert(groupProps2.getProperty("zookeeper.connect") == "localhost:2181/kafka")
> {noformat}
> I believe it is worthwhile for Kafka to respect the default properties 
> feature of {{java.util.Properties}}, and further, that Kafka should 
> discourage the use of the methods on {{Properties}} that are inherited from 
> {{Hashtable}} (e.g. {{containsKey}}). One can argue that 
> {{VerifiableProperties}} is not 'correct' due to this behavior, but a user 
> can always workaround this by creating discrete instances of {{Properties}} 
> with a set of default properties manually added to each instance. However, 
> this is inconvenient and may only encourage the use of the discouraged 
> {{Hashtable}} methods like {{putAll}}.
> Two proposed solutions follow:
> 1. Do not delegate to the {{Properties.containsKey}} method during the 
> invocation of {{VerifiableProperties.containsKey}}. One can use a null check 
> in conjunction with {{getProperty}} in its place.
> 2. Treat the underlying {{Properties}} instance as immutable and assign the 
> result of 

[jira] [Commented] (KAFKA-3055) JsonConverter mangles schema during serialization (fromConnectData)

2015-12-31 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15076221#comment-15076221
 ] 

ASF GitHub Bot commented on KAFKA-3055:
---

GitHub user ksenji opened a pull request:

https://github.com/apache/kafka/pull/722

Fix for KAFKA-3055



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ksenji/kafka trunk

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/722.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #722


commit 60a5b5cecd39ccdd4ff2f977a6bfdef123cadb44
Author: ksenji 
Date:   2016-01-01T02:01:44Z

Fix for KAFKA-3055




> JsonConverter mangles schema during serialization (fromConnectData)
> ---
>
> Key: KAFKA-3055
> URL: https://issues.apache.org/jira/browse/KAFKA-3055
> Project: Kafka
>  Issue Type: Bug
>  Components: copycat
>Affects Versions: 0.9.0.0
>Reporter: Kishore Senji
>Assignee: Ewen Cheslack-Postava
>
> Test case is here: 
> https://github.com/ksenji/kafka-connect-test/tree/master/src/test/java/org/apache/kafka/connect/json
> If Caching is disabled, it behaves correctly and JsonConverterWithNoCacheTest 
> runs successfully. Otherwise the test JsonConverterTest fails.
> The reason is that the JsonConverter has a bug where it mangles the schema as 
> it assigns all String fields with the same name (and similar for all Int32 
> fields)
> This is how the schema & payload gets serialized for the Person Struct (with 
> caching disabled):
> {code}
> {"schema":{"type":"struct","fields":[{"type":"string","optional":false,"field":"firstName"},{"type":"string","optional":false,"field":"lastName"},{"type":"string","optional":false,"field":"email"},{"type":"int32","optional":false,"field":"age"},{"type":"int32","optional":false,"field":"weightInKgs"}],"optional":false,"name":"Person"},"payload":{"firstName":"Eric","lastName":"Cartman","email":"eric.cart...@southpark.com","age":10,"weightInKgs":40}}
> {code}
> whereas when caching is enabled the same Struct gets serialized as (with 
> caching enabled):
> {code}
> {"schema":{"type":"struct","fields":[{"type":"string","optional":false,"field":"email"},{"type":"string","optional":false,"field":"email"},{"type":"string","optional":false,"field":"email"},{"type":"int32","optional":false,"field":"weightInKgs"},{"type":"int32","optional":false,"field":"weightInKgs"}],"optional":false,"name":"Person"},"payload":{"firstName":"Eric","lastName":"Cartman","email":"eric.cart...@southpark.com","age":10,"weightInKgs":40}}
> {code}
> As we can see all String fields became "email" and all int32 fields became 
> "weightInKgs". 
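The symptom above — every field of a given type ending up with the name of the *last* field of that type (all strings become "email", all int32s become "weightInKgs") — is consistent with a converter cache that keeps one mutable schema node per type and mutates it in place. A minimal, hypothetical reconstruction of that failure mode (this is not the actual JsonConverter code, just the shape of the bug):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class SchemaCacheSketch {
    // Convert a struct's fields to schema nodes, caching one mutable node
    // per *type*. Because every field of a type aliases the same node, the
    // last field's name overwrites all earlier ones.
    static List<Map<String, String>> convert(String[][] fields) {
        Map<String, Map<String, String>> cacheByType = new HashMap<>();
        List<Map<String, String>> out = new ArrayList<>();
        for (String[] f : fields) {
            Map<String, String> node =
                    cacheByType.computeIfAbsent(f[1], t -> new HashMap<>());
            node.put("type", f[1]);
            node.put("field", f[0]);  // last write wins for every alias
            out.add(node);            // all fields of a type share one node
        }
        return out;
    }

    public static void main(String[] args) {
        List<Map<String, String>> fields = convert(new String[][]{
                {"firstName", "string"}, {"lastName", "string"}, {"email", "string"},
                {"age", "int32"}, {"weightInKgs", "int32"}});
        // Every string field now reports "email"; every int32, "weightInKgs".
        System.out.println(fields.get(0).get("field") + " " + fields.get(3).get("field"));
    }
}
```

The fix direction is the obvious one: cache immutable nodes, or key the cache on field identity rather than field type.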



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2944) NullPointerException in KafkaConfigStorage when config storage starts right before shutdown request

2016-01-01 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15076316#comment-15076316
 ] 

ASF GitHub Bot commented on KAFKA-2944:
---

GitHub user ZoneMayor opened a pull request:

https://github.com/apache/kafka/pull/723

KAFKA-2944: fix NullPointerException in KafkaConfigStorage

Loss of "config messages" can affect the logic of KafkaConfigStorage.
Call readToEnd after sending each message to KafkaBasedLog to ensure that 
all config messages are flushed to Kafka.
Since the "config messages" sent to KafkaBasedLog are metadata, this will not 
affect performance too much.
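The flush-after-send idea can be sketched with a toy stand-in for KafkaBasedLog; the class and method names below are illustrative, not Connect's actual API:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

// Toy stand-in for KafkaBasedLog: send() only queues the record, and
// readToEnd() is what guarantees every queued record has been read back
// before the caller proceeds.
class ToyLog {
    private final Queue<String> inFlight = new ArrayDeque<>();
    final List<String> readBack = new ArrayList<>();

    void send(String record) { inFlight.add(record); }   // async append
    void readToEnd() {                                   // blocking catch-up
        while (!inFlight.isEmpty()) readBack.add(inFlight.poll());
    }
}

public class ConfigFlushSketch {
    public static void main(String[] args) {
        ToyLog log = new ToyLog();
        for (String cfg : new String[]{"task-0", "task-1", "commit"}) {
            log.send(cfg);
            log.readToEnd();  // the proposed fix: flush after every send
        }
        System.out.println(log.readBack);  // all three records are visible
    }
}
```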

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ZoneMayor/kafka trunk-KAFKA-2944

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/723.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #723


commit 34240b52e1b70aa172b65155f6042243d838b420
Author: ZoneMayor 
Date:   2015-12-18T07:22:20Z

Merge pull request #12 from apache/trunk

2015-12-18

commit 52d02f333e86d06cfa8fff5facd18999b3db6d83
Author: ZoneMayor 
Date:   2015-12-30T03:08:08Z

Merge pull request #13 from apache/trunk

2015-12-30

commit 82150dccc59ac0e436acc5186d2b8fb66c9df671
Author: jinxing 
Date:   2016-01-01T15:21:41Z

KAFKA-2944: fix NullPointerException in KafkaConfigStorage




> NullPointerException in KafkaConfigStorage when config storage starts right 
> before shutdown request
> ---
>
> Key: KAFKA-2944
> URL: https://issues.apache.org/jira/browse/KAFKA-2944
> Project: Kafka
>  Issue Type: Bug
>  Components: copycat
>Affects Versions: 0.9.0.0
>Reporter: Ewen Cheslack-Postava
>Assignee: Ewen Cheslack-Postava
>
> Relevant log where you can see a config update starting, then the request to 
> shutdown happens and we end up with a NullPointerException:
> {quote}
> [2015-12-03 09:12:55,712] DEBUG Change in connector task count from 2 to 3, 
> writing updated task configurations 
> (org.apache.kafka.connect.runtime.distributed.DistributedHerder)
> [2015-12-03 09:12:56,224] INFO Kafka Connect stopping 
> (org.apache.kafka.connect.runtime.Connect)
> [2015-12-03 09:12:56,224] INFO Stopping REST server 
> (org.apache.kafka.connect.runtime.rest.RestServer)
> [2015-12-03 09:12:56,227] INFO Stopped 
> ServerConnector@10cb550e{HTTP/1.1}{0.0.0.0:8083} 
> (org.eclipse.jetty.server.ServerConnector)
> [2015-12-03 09:12:56,234] INFO Stopped 
> o.e.j.s.ServletContextHandler@3f8a24d5{/,null,UNAVAILABLE} 
> (org.eclipse.jetty.server.handler.ContextHandler)
> [2015-12-03 09:12:56,235] INFO REST server stopped 
> (org.apache.kafka.connect.runtime.rest.RestServer)
> [2015-12-03 09:12:56,235] INFO Herder stopping 
> (org.apache.kafka.connect.runtime.distributed.DistributedHerder)
> [2015-12-03 09:12:58,209] ERROR Unexpected exception in KafkaBasedLog's work 
> thread (org.apache.kafka.connect.util.KafkaBasedLog)
> java.lang.NullPointerException
>   at 
> org.apache.kafka.connect.storage.KafkaConfigStorage.completeTaskIdSet(KafkaConfigStorage.java:558)
>   at 
> org.apache.kafka.connect.storage.KafkaConfigStorage.access$1200(KafkaConfigStorage.java:143)
>   at 
> org.apache.kafka.connect.storage.KafkaConfigStorage$1.onCompletion(KafkaConfigStorage.java:476)
>   at 
> org.apache.kafka.connect.storage.KafkaConfigStorage$1.onCompletion(KafkaConfigStorage.java:372)
>   at 
> org.apache.kafka.connect.util.KafkaBasedLog.poll(KafkaBasedLog.java:235)
>   at 
> org.apache.kafka.connect.util.KafkaBasedLog.readToLogEnd(KafkaBasedLog.java:275)
>   at 
> org.apache.kafka.connect.util.KafkaBasedLog.access$300(KafkaBasedLog.java:70)
>   at 
> org.apache.kafka.connect.util.KafkaBasedLog$WorkThread.run(KafkaBasedLog.java:307)
> [2015-12-03 09:13:26,704] ERROR Failed to write root configuration to Kafka:  
> (org.apache.kafka.connect.storage.KafkaConfigStorage)
> java.util.concurrent.TimeoutException: Timed out waiting for future
>   at 
> org.apache.kafka.connect.util.ConvertingFutureCallback.get(ConvertingFutureCallback.java:74)
>   at 
> org.apache.kafka.connect.storage.KafkaConfigStorage.putTaskConfigs(KafkaConfigStorage.java:352)
>   at 
> org.apache.kafka.connect.runtime.distributed.DistributedHerder.reconfigureConnector(DistributedHerder.java:737)
>   at 
> org.apache.kafka.connect.runtime.distributed.DistributedHerder.reconfigureConnectorTasksWithRetry(DistributedHerder.java:677)
>   at 
> 

[jira] [Commented] (KAFKA-3076) BrokerChangeListener should log the brokers in order

2016-01-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3076?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15090555#comment-15090555
 ] 

ASF GitHub Bot commented on KAFKA-3076:
---

GitHub user konradkalita opened a pull request:

https://github.com/apache/kafka/pull/749

KAFKA-3076: BrokerChangeListener should log the brokers in order



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/konradkalita/kafka kafka-3076

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/749.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #749


commit acbc58b4fed89ff36bbf530c1cf4c3e8f8dd7469
Author: Konrad 
Date:   2016-01-09T10:14:33Z

BrokerChangeListener  logging the brokers in order




> BrokerChangeListener should log the brokers in order
> 
>
> Key: KAFKA-3076
> URL: https://issues.apache.org/jira/browse/KAFKA-3076
> Project: Kafka
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 0.9.0.0
>Reporter: Jun Rao
>Assignee: Konrad Kalita
>  Labels: newbie
>
> Currently, in BrokerChangeListener, we log the full, new and deleted broker 
> set in random order. It would be better if we log them in sorted order.
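A minimal sketch of the proposed change: render each broker-id set through a sorted collection before it reaches the log line (the helper name is hypothetical):

```java
import java.util.Set;
import java.util.TreeSet;

public class SortedBrokerLog {
    // Render a broker-id set in ascending order so log lines are
    // deterministic and easy to diff across controller failovers.
    static String format(Set<Integer> brokerIds) {
        return new TreeSet<>(brokerIds).toString();
    }

    public static void main(String[] args) {
        // Regardless of the set's iteration order, output is sorted.
        System.out.println(format(Set.of(3, 1, 2)));  // prints [1, 2, 3]
    }
}
```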



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3084) Topic existence checks in topic commands (create, alter, delete)

2016-01-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15089874#comment-15089874
 ] 

ASF GitHub Bot commented on KAFKA-3084:
---

GitHub user granthenke opened a pull request:

https://github.com/apache/kafka/pull/744

KAFKA-3084: Topic existence checks in topic commands (create, alter, …

…delete)

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/granthenke/kafka exists-checks

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/744.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #744


commit d85b01e84cf3c5aadd0eaeba696e309873ab5bb7
Author: Grant Henke 
Date:   2016-01-08T20:21:12Z

KAFKA-3084: Topic existence checks in topic commands (create, alter, delete)




> Topic existence checks in topic commands (create, alter, delete)
> 
>
> Key: KAFKA-3084
> URL: https://issues.apache.org/jira/browse/KAFKA-3084
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Grant Henke
>Assignee: Grant Henke
>
> In Kafka 0.9.0 error codes were added to the topic commands. However, users 
> often only want to perform an action based on the existence of a topic, and 
> they don't want an error if the topic does or does not exist.
> Adding an if-exists option for the topic delete and alter commands and an 
> if-not-exists option for the create command allows users to build scripts 
> that handle this expected state without error codes.
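The guard logic itself is small; a sketch of the if-exists behavior for delete (names and return values are illustrative, not the actual TopicCommand code):

```java
import java.util.Set;

public class TopicCommandSketch {
    // if-exists / if-not-exists guards: skip quietly instead of failing
    // when the topic's existence already matches what the caller expects.
    static String delete(Set<String> topics, String topic, boolean ifExists) {
        if (!topics.contains(topic)) {
            return ifExists ? "skipped (no such topic)" : "error: topic not found";
        }
        topics.remove(topic);
        return "deleted";
    }

    public static void main(String[] args) {
        Set<String> topics = new java.util.HashSet<>(Set.of("orders"));
        System.out.println(delete(topics, "payments", true));   // skipped, exit 0
        System.out.println(delete(topics, "payments", false));  // hard error
        System.out.println(delete(topics, "orders", true));     // deleted
    }
}
```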



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3038) Speeding up partition reassignment after broker failure

2016-01-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15090765#comment-15090765
 ] 

ASF GitHub Bot commented on KAFKA-3038:
---

GitHub user enothereska opened a pull request:

https://github.com/apache/kafka/pull/750

KAFKA-3038: use async ZK calls to speed up leader reassignment

Updated the failure code path to deal specifically with the issue identified as 
affecting latency most. 
@fpj could you have a look please?

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/enothereska/kafka kafka-3038

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/750.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #750


commit 3be8bb68c6ccb37b77ed527cf4ff05bc80ee8e99
Author: Eno Thereska 
Date:   2016-01-08T16:09:38Z

Asynchronous implementation of failure path when updating Zookeeper

commit e288c5e35d151e6e8ce06eaa1076ebb2ceb2db13
Author: Eno Thereska 
Date:   2016-01-08T16:10:07Z

Merge remote-tracking branch 'apache-kafka/trunk' into kafka-3038

commit 3913ab76707a6ad125b4252d88bc3cdf091702ee
Author: Eno Thereska 
Date:   2016-01-09T18:23:33Z

Implemented top method using a CountDownLatch. Minor code cleanup

commit a40ad4e768f1c626fc6c818c28d22f0a91d33eaf
Author: Eno Thereska 
Date:   2016-01-09T18:24:25Z

Merge remote-tracking branch 'apache-kafka/trunk' into kafka-3038




> Speeding up partition reassignment after broker failure
> ---
>
> Key: KAFKA-3038
> URL: https://issues.apache.org/jira/browse/KAFKA-3038
> Project: Kafka
>  Issue Type: Improvement
>  Components: controller, core
>Affects Versions: 0.9.0.0
>Reporter: Eno Thereska
>Assignee: Eno Thereska
> Fix For: 0.9.0.0
>
>
> After a broker failure the controller does several writes to Zookeeper for 
> each partition on the failed broker. Writes are done one at a time, in closed 
> loop, which is slow especially under high latency networks. Zookeeper has 
> support for batching operations (the "multi" API). It is expected that 
> substituting serial writes with batched ones should reduce failure handling 
> time by an order of magnitude.
> This is identified as an issue in 
> https://cwiki.apache.org/confluence/display/KAFKA/kafka+Detailed+Replication+Design+V3
>  (section End-to-end latency during a broker failure)
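A back-of-envelope model shows why batching matters under high-latency networks: if each write costs one round trip, N serial writes cost N round trips, while ZooKeeper's "multi" API pays roughly one round trip per batch. A sketch of that estimate (pure round-trip model that ignores server-side processing, so it gives an upper-bound intuition only; the numbers are illustrative):

```java
public class BatchingEstimate {
    // Serial: one round trip per write.
    static double serialMs(int writes, double rttMs) {
        return writes * rttMs;
    }

    // Batched "multi": one round trip per batch of up to batchSize ops.
    static double batchedMs(int writes, int batchSize, double rttMs) {
        return Math.ceil((double) writes / batchSize) * rttMs;
    }

    public static void main(String[] args) {
        int partitions = 1000;   // partitions on the failed broker
        double rtt = 2.0;        // assumed network round trip, ms
        int batch = 128;         // assumed ops per multi() call
        System.out.printf("serial=%.0fms batched=%.0fms%n",
                serialMs(partitions, rtt), batchedMs(partitions, batch, rtt));
        // 2000ms serial vs 16ms batched: roughly the batch factor.
    }
}
```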



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3044) Consumer.poll does not return messages when poll interval is less

2016-01-09 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15090892#comment-15090892
 ] 

ASF GitHub Bot commented on KAFKA-3044:
---

GitHub user praveend opened a pull request:

https://github.com/apache/kafka/pull/751

KAFKA-3044: Re-word consumer.poll behaviour

https://issues.apache.org/jira/browse/KAFKA-3044

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/praveend/kafka poll_doc_changes

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/751.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #751


commit 331a5ecc6a65212e27e8a4551ce4af3321598d1a
Author: Praveen Devarao 
Date:   2016-01-10T04:54:00Z

Doc change as per KAFKA-3044




> Consumer.poll does not return messages when poll interval is less
> 
>
> Key: KAFKA-3044
> URL: https://issues.apache.org/jira/browse/KAFKA-3044
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.9.0.0
>Reporter: Praveen Devarao
>Assignee: Jason Gustafson
> Fix For: 0.9.0.1
>
>
> When seeking to a particular position in the consumer and starting poll with 
> timeout param 0, the consumer does not come back with data even though data 
> has already been published via a producer. If the timeout is increased slowly 
> in chunks of 100ms, then at 700ms the consumer returns the record on the 
> first call to poll.
> Docs 
> [http://kafka.apache.org/090/javadoc/org/apache/kafka/clients/consumer/KafkaConsumer.html#poll(long)]
>  for poll reads if timeout is 0 then data will be returned immediately but 
> the behaviour seen is that data is not returned.
> The test code I am using can be found here 
> https://gist.github.com/praveend/013dcab01ebb8c7e2f2d
> I have created a topic with data published as below and then running the test 
> program [ConsumerPollTest.java]
> $ bin/kafka-topics.sh --create --zookeeper localhost:2181 
> --replication-factor 1 --partitions 1 --topic mytopic
> $ bin/kafka-console-producer.sh --broker-list localhost:9092 --topic mytopic
> Hello
> Hai
> bye
> $ java ConsumerPollTest
> I have published these 3 lines of data to Kafka only once; later on I just 
> use the above program with different poll intervals.
> Let me know if I am missing anything or interpreting it wrongly.
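The reported behaviour matches the consumer's fetching being asynchronous: the first poll may only dispatch the fetch request and return before any data has come back, so poll(0) can legitimately return empty. A toy model of that interaction and the usual retry loop (this stands in for, and is not, the real KafkaConsumer):

```java
import java.util.ArrayList;
import java.util.List;

public class PollLoopSketch {
    // Toy consumer: the first poll() only dispatches the fetch; records
    // become available on a subsequent poll, mirroring the report above.
    static class ToyConsumer {
        private boolean fetchInFlight = false;
        private final List<String> broker =
                new ArrayList<>(List.of("Hello", "Hai", "bye"));

        List<String> poll() {
            if (!fetchInFlight) {
                fetchInFlight = true;  // fetch request sent, nothing back yet
                return List.of();
            }
            List<String> out = new ArrayList<>(broker);
            broker.clear();
            return out;
        }
    }

    public static void main(String[] args) {
        ToyConsumer c = new ToyConsumer();
        List<String> records = c.poll();              // empty: fetch just sent
        while (records.isEmpty()) records = c.poll(); // retry until data lands
        System.out.println(records);                  // the three records
    }
}
```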



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3045) ZkNodeChangeNotificationListener shouldn't log interrupted exception as error

2016-01-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3045?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15083691#comment-15083691
 ] 

ASF GitHub Bot commented on KAFKA-3045:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/731


> ZkNodeChangeNotificationListener shouldn't log interrupted exception as error
> -
>
> Key: KAFKA-3045
> URL: https://issues.apache.org/jira/browse/KAFKA-3045
> Project: Kafka
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 0.9.0.0
>Reporter: Jun Rao
>Assignee: Dong Lin
>  Labels: security
> Fix For: 0.9.0.1
>
>
> Saw the following when running /opt/kafka/bin/kafka-acls.sh --authorizer 
> kafka.security.auth.SimpleAclAuthorizer.
> [2015-12-28 08:04:39,382] ERROR Error processing notification change for path 
> = /kafka-acl-changes and notification= [acl_changes_04, 
> acl_changes_03, acl_changes_02, acl_changes_01, 
> acl_changes_00] : (kafka.common.ZkNodeChangeNotificationListener)
> org.I0Itec.zkclient.exception.ZkInterruptedException: 
> java.lang.InterruptedException
> at org.I0Itec.zkclient.ZkClient.retryUntilConnected(ZkClient.java:997)
> at org.I0Itec.zkclient.ZkClient.readData(ZkClient.java:1090)
> at org.I0Itec.zkclient.ZkClient.readData(ZkClient.java:1085)
> at kafka.utils.ZkUtils.readDataMaybeNull(ZkUtils.scala:525)
> at 
> kafka.security.auth.SimpleAclAuthorizer.kafka$security$auth$SimpleAclAuthorizer$$getAclsFromZk(SimpleAclAuthorizer.scala:213)
> at 
> kafka.security.auth.SimpleAclAuthorizer$AclChangedNotificaitonHandler$.processNotification(SimpleAclAuthorizer.scala:273)
> at 
> kafka.common.ZkNodeChangeNotificationListener$$anonfun$kafka$common$ZkNodeChangeNotificationListener$$processNotifications$2$$anonfun$apply$2.apply(ZkNodeChangeNotificationListener.scala:84)
> at 
> kafka.common.ZkNodeChangeNotificationListener$$anonfun$kafka$common$ZkNodeChangeNotificationListener$$processNotifications$2$$anonfun$apply$2.apply(ZkNodeChangeNotificationListener.scala:84)
> at scala.Option.map(Option.scala:146)
> at 
> kafka.common.ZkNodeChangeNotificationListener$$anonfun$kafka$common$ZkNodeChangeNotificationListener$$processNotifications$2.apply(ZkNodeChangeNotificationListener.scala:84)
> at 
> kafka.common.ZkNodeChangeNotificationListener$$anonfun$kafka$common$ZkNodeChangeNotificationListener$$processNotifications$2.apply(ZkNodeChangeNotificationListener.scala:79)
> at 
> scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
> at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
> at 
> kafka.common.ZkNodeChangeNotificationListener.kafka$common$ZkNodeChangeNotificationListener$$processNotifications(ZkNodeChangeNotificationListener.scala:79)
> at 
> kafka.common.ZkNodeChangeNotificationListener$NodeChangeListener$.handleChildChange(ZkNodeChangeNotificationListener.scala:121)
> at org.I0Itec.zkclient.ZkClient$10.run(ZkClient.java:842)
> at org.I0Itec.zkclient.ZkEventThread.run(ZkEventThread.java:71)
> Caused by: java.lang.InterruptedException
> at java.lang.Object.wait(Native Method)
> at java.lang.Object.wait(Object.java:502)
> at org.apache.zookeeper.ClientCnxn.submitRequest(ClientCnxn.java:1342)
> at org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.java:1153)
> at org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.java:1184)
> at org.I0Itec.zkclient.ZkConnection.readData(ZkConnection.java:119)
> at org.I0Itec.zkclient.ZkClient$12.call(ZkClient.java:1094)
> at org.I0Itec.zkclient.ZkClient$12.call(ZkClient.java:1090)
> at org.I0Itec.zkclient.ZkClient.retryUntilConnected(ZkClient.java:985)
> When SimpleAclAuthorizer terminates, we close zkclient, which interrupts the 
> watcher processor thread. Since this is expected, we shouldn't log this as an 
> error.
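The fix amounts to treating an interruption that happens during an orderly shutdown as expected rather than as an error. A hedged sketch of that decision (method and level names are illustrative):

```java
public class ShutdownAwareLogging {
    // An interrupt observed while we are shutting down is expected (we
    // interrupted the watcher thread ourselves by closing zkclient), so it
    // should be logged quietly rather than at ERROR.
    static String logLevelFor(Throwable t, boolean shuttingDown) {
        boolean interrupted = t instanceof InterruptedException
                || t.getCause() instanceof InterruptedException;
        return (interrupted && shuttingDown) ? "DEBUG" : "ERROR";
    }

    public static void main(String[] args) {
        // ZkInterruptedException wraps InterruptedException as its cause.
        RuntimeException zkEx = new RuntimeException(new InterruptedException());
        System.out.println(logLevelFor(zkEx, true));   // prints DEBUG
        System.out.println(logLevelFor(zkEx, false));  // prints ERROR
    }
}
```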



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3058) remove the usage of deprecated config properties

2016-01-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3058?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15083695#comment-15083695
 ] 

ASF GitHub Bot commented on KAFKA-3058:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/732


> remove the usage of deprecated config properties
> 
>
> Key: KAFKA-3058
> URL: https://issues.apache.org/jira/browse/KAFKA-3058
> Project: Kafka
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 0.9.0.0
>Reporter: Jun Rao
>Assignee: Konrad Kalita
>  Labels: newbie
> Fix For: 0.9.1.0
>
>
> There are compilation warnings like the following, which can be avoided.
> core/src/main/scala/kafka/tools/EndToEndLatency.scala:74: value 
> BLOCK_ON_BUFFER_FULL_CONFIG in object ProducerConfig is deprecated: see 
> corresponding Javadoc for more information.
> producerProps.put(ProducerConfig.BLOCK_ON_BUFFER_FULL_CONFIG, "true")
>  ^
> kafka/core/src/main/scala/kafka/tools/MirrorMaker.scala:195: value 
> BLOCK_ON_BUFFER_FULL_CONFIG in object ProducerConfig is deprecated: see 
> corresponding Javadoc for more information.
>   maybeSetDefaultProperty(producerProps, 
> ProducerConfig.BLOCK_ON_BUFFER_FULL_CONFIG, "true")
> ^
> /Users/junrao/intellij/kafka/core/src/main/scala/kafka/tools/ProducerPerformance.scala:40:
>  @deprecated now takes two arguments; see the scaladoc.
> @deprecated
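For the producer cases, the deprecation notes point at replacing block.on.buffer.full with an explicit max.block.ms bound. If memory serves, an effectively block-forever equivalent looks like this (string keys are used here to keep the snippet library-free; check the ProducerConfig Javadoc for the exact replacement):

```java
import java.util.Properties;

public class ProducerPropsSketch {
    public static void main(String[] args) {
        Properties producerProps = new Properties();
        // Before (deprecated in 0.9): block.on.buffer.full=true
        // After: bound blocking explicitly via max.block.ms; a very large
        // value approximates the old block-forever behaviour.
        producerProps.put("max.block.ms", String.valueOf(Long.MAX_VALUE));
        System.out.println(producerProps.getProperty("max.block.ms"));
    }
}
```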



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3016) Add KStream-KStream window joins

2016-01-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3016?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15081461#comment-15081461
 ] 

ASF GitHub Bot commented on KAFKA-3016:
---

GitHub user ymatsuda opened a pull request:

https://github.com/apache/kafka/pull/726

KAFKA-3016: phase-1. A local store for join window

@guozhangwang 
An implementation of a local store for the join window. This implementation uses 
"rolling" of RocksDB instances for timestamp-based truncation.
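The rolling idea can be sketched with plain maps standing in for RocksDB instances: each segment covers a fixed time range, and retention drops whole segments at once instead of deleting record by record. The constants and structure below are illustrative, not the PR's actual code:

```java
import java.util.TreeMap;

public class RollingSegmentsSketch {
    static final long SEGMENT_MS = 60_000;    // one "RocksDB instance" per minute
    static final long RETENTION_MS = 180_000; // keep roughly three segments

    // segmentId -> (timestamp -> value); TreeMap stands in for a DB instance.
    final TreeMap<Long, TreeMap<Long, String>> segments = new TreeMap<>();

    void put(long ts, String value) {
        segments.computeIfAbsent(ts / SEGMENT_MS, k -> new TreeMap<>())
                .put(ts, value);
        // Truncation: drop every segment wholly older than the retention
        // window, which is far cheaper than per-record deletes.
        long minSegment = (ts - RETENTION_MS) / SEGMENT_MS;
        segments.headMap(minSegment).clear();
    }

    public static void main(String[] args) {
        RollingSegmentsSketch store = new RollingSegmentsSketch();
        store.put(0, "a");
        store.put(250_000, "b");  // expires the whole segment holding "a"
        System.out.println(store.segments.size());  // prints 1
    }
}
```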

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ymatsuda/kafka windowed_join

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/726.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #726


commit 87734776268d8f9d7315cc7552cdfb1fe86ecb69
Author: Yasuhiro Matsuda 
Date:   2016-01-04T17:34:57Z

join window store

commit 096a83941f97af5da8192a71c8d5bb6e66130a45
Author: Yasuhiro Matsuda 
Date:   2016-01-04T17:42:53Z

Merge branch 'trunk' of github.com:apache/kafka into windowed_join




> Add KStream-KStream window joins
> 
>
> Key: KAFKA-3016
> URL: https://issues.apache.org/jira/browse/KAFKA-3016
> Project: Kafka
>  Issue Type: Sub-task
>  Components: kafka streams
>Affects Versions: 0.9.0.1
>Reporter: Yasuhiro Matsuda
>Assignee: Yasuhiro Matsuda
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3055) JsonConverter mangles schema during serialization (fromConnectData)

2016-01-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15081345#comment-15081345
 ] 

ASF GitHub Bot commented on KAFKA-3055:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/722


> JsonConverter mangles schema during serialization (fromConnectData)
> ---
>
> Key: KAFKA-3055
> URL: https://issues.apache.org/jira/browse/KAFKA-3055
> Project: Kafka
>  Issue Type: Bug
>  Components: copycat
>Affects Versions: 0.9.0.0
>Reporter: Kishore Senji
>Assignee: Ewen Cheslack-Postava
> Fix For: 0.9.0.1, 0.9.1.0
>
>
> Test case is here: 
> https://github.com/ksenji/kafka-connect-test/tree/master/src/test/java/org/apache/kafka/connect/json
> If Caching is disabled, it behaves correctly and JsonConverterWithNoCacheTest 
> runs successfully. Otherwise the test JsonConverterTest fails.
> The reason is that the JsonConverter has a bug where it mangles the schema: 
> it assigns all String fields the same name (and similarly for all Int32 
> fields).
> This is how the schema & payload gets serialized for the Person Struct (with 
> caching disabled):
> {code}
> {"schema":{"type":"struct","fields":[{"type":"string","optional":false,"field":"firstName"},{"type":"string","optional":false,"field":"lastName"},{"type":"string","optional":false,"field":"email"},{"type":"int32","optional":false,"field":"age"},{"type":"int32","optional":false,"field":"weightInKgs"}],"optional":false,"name":"Person"},"payload":{"firstName":"Eric","lastName":"Cartman","email":"eric.cart...@southpark.com","age":10,"weightInKgs":40}}
> {code}
> whereas when caching is enabled the same Struct gets serialized as (with 
> caching enabled):
> {code}
> {"schema":{"type":"struct","fields":[{"type":"string","optional":false,"field":"email"},{"type":"string","optional":false,"field":"email"},{"type":"string","optional":false,"field":"email"},{"type":"int32","optional":false,"field":"weightInKgs"},{"type":"int32","optional":false,"field":"weightInKgs"}],"optional":false,"name":"Person"},"payload":{"firstName":"Eric","lastName":"Cartman","email":"eric.cart...@southpark.com","age":10,"weightInKgs":40}}
> {code}
> As we can see all String fields became "email" and all int32 fields became 
> "weightInKgs". 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3052) broker properties get logged twice if acl is enabled

2016-01-04 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15081349#comment-15081349
 ] 

ASF GitHub Bot commented on KAFKA-3052:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/725


> broker properties get logged twice if acl is enabled
> 
>
> Key: KAFKA-3052
> URL: https://issues.apache.org/jira/browse/KAFKA-3052
> Project: Kafka
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 0.9.0.0
>Reporter: Jun Rao
>Assignee: Ismael Juma
>  Labels: newbie, security
> Fix For: 0.9.0.1
>
>
> This is because in SimpleAclAuthorizer.configure(), there is the following 
> statement, which triggers the logging of all broker properties.
> val kafkaConfig = KafkaConfig.fromProps(props)
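One way to avoid the duplicate dump is to let KafkaConfig construction opt out of logging its values. A sketch of that shape (the boolean-flag signature is illustrative, not necessarily the committed fix):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Properties;

public class ConfigLogOnceSketch {
    static final List<String> LOG = new ArrayList<>();

    // Callers that re-parse the broker config (e.g. the authorizer's
    // configure()) pass doLog=false so the startup dump is not repeated.
    static Properties fromProps(Properties props, boolean doLog) {
        if (doLog) LOG.add("KafkaConfig values: " + props);
        return props;
    }

    public static void main(String[] args) {
        Properties p = new Properties();
        p.setProperty("broker.id", "0");
        fromProps(p, true);   // broker startup: log once
        fromProps(p, false);  // authorizer re-parse: stay quiet
        System.out.println(LOG.size());  // prints 1
    }
}
```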



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2988) Change default configuration of the log cleaner

2016-01-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1508#comment-1508
 ] 

ASF GitHub Bot commented on KAFKA-2988:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/686


> Change default configuration of the log cleaner
> ---
>
> Key: KAFKA-2988
> URL: https://issues.apache.org/jira/browse/KAFKA-2988
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.9.0.0
>Reporter: Grant Henke
>Assignee: Grant Henke
>
> Since 0.9.0 the internal "__consumer_offsets" topic has been used more 
> heavily. Because this is a compacted topic, "log.cleaner.enable" needs to be 
> "true" in order for it to be compacted.
> Since this is critical for core Kafka functionality, we should change the 
> default to true and potentially consider removing the option to disable it 
> altogether.
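Until the default changes, operators have to set the property themselves; the broker configuration involved is a single line:

```
# server.properties: must be true for compacted topics
# (including the internal __consumer_offsets topic) to be cleaned
log.cleaner.enable=true
```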



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1901) Move Kafka version to be generated in code by build (instead of in manifest)

2016-01-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1901?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15086254#comment-15086254
 ] 

ASF GitHub Bot commented on KAFKA-1901:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/209


> Move Kafka version to be generated in code by build (instead of in manifest)
> 
>
> Key: KAFKA-1901
> URL: https://issues.apache.org/jira/browse/KAFKA-1901
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.2.0
>Reporter: Jason Rosenberg
>Assignee: Manikumar Reddy
> Fix For: 0.9.0.0
>
> Attachments: KAFKA-1901.patch, KAFKA-1901_2015-06-26_13:16:29.patch, 
> KAFKA-1901_2015-07-10_16:42:53.patch, KAFKA-1901_2015-07-14_17:59:56.patch, 
> KAFKA-1901_2015-08-09_15:04:39.patch, KAFKA-1901_2015-08-20_12:35:00.patch
>
>
> With 0.8.2 (rc2), I've started seeing this warning in the logs of apps 
> deployed to our staging (both server and client):
> {code}
> 2015-01-23 00:55:25,273  WARN [async-message-sender-0] common.AppInfo$ - 
> Can't read Kafka version from MANIFEST.MF. Possible cause: 
> java.lang.NullPointerException
> {code}
> The issue is that in our deployment, apps are deployed as single 'shaded' 
> jars (e.g. using the maven shade plugin). This means the MANIFEST.MF file 
> won't have a Kafka version. Instead, I suggest the Kafka build generate the 
> proper version in code, as part of the build.
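The contrast between the two approaches can be sketched directly: a build-generated constant survives shading, whereas Package#getImplementationVersion depends on MANIFEST.MF entries that a shaded ("uber") jar typically loses. The class and constant here are illustrative:

```java
public class VersionSketch {
    // A constant the build would emit into a generated source file.
    static final String GENERATED_VERSION = "0.9.0.0";

    static String version() {
        // In a shaded jar (or when run from loose classes) the package's
        // implementation version is null, which is what produced the
        // "Can't read Kafka version from MANIFEST.MF" warning.
        Package pkg = VersionSketch.class.getPackage();
        String manifest = (pkg == null) ? null : pkg.getImplementationVersion();
        return manifest != null ? manifest : GENERATED_VERSION;
    }

    public static void main(String[] args) {
        System.out.println(version());  // falls back to the generated constant
    }
}
```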



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3070) SASL unit tests don't work with IBM JDK

2016-01-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15086252#comment-15086252
 ] 

ASF GitHub Bot commented on KAFKA-3070:
---

GitHub user rajinisivaram opened a pull request:

https://github.com/apache/kafka/pull/738

KAFKA-3070: Use IBM Kerberos module for SASL tests if running on IBM JDK

Use IBM Kerberos module and properties for SASL tests if using IBM JRE

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/rajinisivaram/kafka KAFKA-3070

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/738.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #738


commit f9fc14d09c1d5329773d6e4fe4800515a998a282
Author: Rajini Sivaram 
Date:   2016-01-06T20:43:43Z

KAFKA-3070: Use IBM Kerberos module for SASL tests if running on IBM JDK




> SASL unit tests don't work with IBM JDK
> --
>
> Key: KAFKA-3070
> URL: https://issues.apache.org/jira/browse/KAFKA-3070
> Project: Kafka
>  Issue Type: Bug
>  Components: security
>Affects Versions: 0.9.0.0
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
>
> The jaas.conf used for SASL tests in core uses the Kerberos module 
> com.sun.security.auth.module.Krb5LoginModule and hence does not work with the 
> IBM JDK. The IBM JDK Kerberos module com.ibm.security.auth.module.Krb5LoginModule 
> should be used, along with properties corresponding to this module, when tests 
> are run with the IBM JDK.
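Beyond the module class name, the two modules also take different option names, so the jaas.conf stanza has to change as a whole. An illustration of the either/or choice (only one stanza would appear in a given file; the keytab path is hypothetical, and the IBM option names shown, credsType and useKeytab, should be verified against IBM's JGSS documentation):

```
// Oracle/Sun JDK module (current test configuration)
KafkaServer {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  storeKey=true
  keyTab="/path/to/kafka.keytab"
  principal="kafka/localhost";
};

// IBM JDK module: class name and option names differ
KafkaServer {
  com.ibm.security.auth.module.Krb5LoginModule required
  credsType=both
  useKeytab="file:/path/to/kafka.keytab"
  principal="kafka/localhost";
};
```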



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3077) Enable KafkaLog4jAppender to work with SASL enabled brokers.

2016-01-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3077?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15088284#comment-15088284
 ] 

ASF GitHub Bot commented on KAFKA-3077:
---

GitHub user SinghAsDev opened a pull request:

https://github.com/apache/kafka/pull/740

KAFKA-3077: Enable KafkaLog4jAppender to work with SASL enabled brokers



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/SinghAsDev/kafka KAFKA-3077

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/740.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #740


commit 52a9e37f7cee7b6d565dbf9da28595ce6de85c74
Author: Ashish Singh 
Date:   2016-01-07T22:32:51Z

KAFKA-3077: Enable KafkaLog4jAppender to work with SASL enabled brokers




> Enable KafkaLog4jAppender to work with SASL enabled brokers.
> 
>
> Key: KAFKA-3077
> URL: https://issues.apache.org/jira/browse/KAFKA-3077
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients
>Reporter: Ashish K Singh
>Assignee: Ashish K Singh
>
> KafkaLog4jAppender cannot currently talk to a SASL-enabled cluster. This JIRA 
> aims to add that support, enabling users of the log4j appender to publish to 
> a SASL-enabled Kafka cluster.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2649) Add support for custom partitioner in sink nodes

2016-01-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15088312#comment-15088312
 ] 

ASF GitHub Bot commented on KAFKA-2649:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/309


> Add support for custom partitioner in sink nodes
> 
>
> Key: KAFKA-2649
> URL: https://issues.apache.org/jira/browse/KAFKA-2649
> Project: Kafka
>  Issue Type: Sub-task
>  Components: kafka streams
>Reporter: Randall Hauch
>Assignee: Randall Hauch
> Fix For: 0.9.1.0
>
>
> The only way for Processor implementations to control partitioning of 
> forwarded messages is to set the partitioner class as property 
> {{ProducerConfig.PARTITIONER_CLASS_CONFIG}} in the StreamsConfig, which 
> should be set to the name of a 
> {{org.apache.kafka.clients.producer.Partitioner}} implementation. However, 
> doing this requires the partitioner knowing how to properly partition *all* 
> topics, not just the one or few topics used by the Processor.
> Instead, Kafka Streams should make it easy to optionally add a partitioning 
> function for each sink used in a topology. Each sink represents a single 
> output topic, and thus is far simpler to implement. Additionally, the sink is 
> already typed with the key and value types (via serdes for the keys and 
> values), so the partitioner can also be typed with the key and value 
> types. Finally, this also keeps the logic of choosing partitioning strategies 
> where it belongs, as part of building the topology.
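The per-sink idea above can be sketched as a small typed interface. The interface name and signature below are assumptions for illustration, not the API that was eventually merged into Kafka Streams:

```java
import java.nio.charset.StandardCharsets;

// Hypothetical typed per-sink partitioner, sketching the proposal above.
// Unlike a global Partitioner, it only has to understand one topic's types.
interface SinkPartitioner<K, V> {
    int partition(K key, V value, int numPartitions);
}

public class SinkPartitionerDemo {
    // A partitioner for a single sink topic keyed by String: deterministic
    // hash of the key, reduced to a non-negative partition index.
    static final SinkPartitioner<String, Long> BY_KEY_HASH =
        (key, value, numPartitions) -> {
            byte[] bytes = key.getBytes(StandardCharsets.UTF_8);
            int hash = 0;
            for (byte b : bytes) hash = 31 * hash + b; // simple deterministic hash
            return (hash & 0x7fffffff) % numPartitions; // non-negative index
        };

    public static void main(String[] args) {
        System.out.println(BY_KEY_HASH.partition("user-42", 1L, 4));
    }
}
```

Because the sink already knows its key/value serdes, such a partitioner can be checked at compile time against the sink's types.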





[jira] [Commented] (KAFKA-2058) ProducerTest.testSendWithDeadBroker transient failure

2015-12-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2058?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15066072#comment-15066072
 ] 

ASF GitHub Bot commented on KAFKA-2058:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/689


> ProducerTest.testSendWithDeadBroker transient failure
> -
>
> Key: KAFKA-2058
> URL: https://issues.apache.org/jira/browse/KAFKA-2058
> Project: Kafka
>  Issue Type: Sub-task
>Affects Versions: 0.9.0.0
>Reporter: Guozhang Wang
>Assignee: jin xing
>  Labels: newbie
> Fix For: 0.9.0.1
>
>
> {code}
> kafka.producer.ProducerTest > testSendWithDeadBroker FAILED
> java.lang.AssertionError: Message set should have 1 message
> at org.junit.Assert.fail(Assert.java:92)
> at org.junit.Assert.assertTrue(Assert.java:44)
> at 
> kafka.producer.ProducerTest.testSendWithDeadBroker(ProducerTest.scala:260)
> {code}





[jira] [Commented] (KAFKA-2977) Transient Failure in kafka.log.LogCleanerIntegrationTest.cleanerTest

2015-12-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2977?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15066076#comment-15066076
 ] 

ASF GitHub Bot commented on KAFKA-2977:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/671


> Transient Failure in kafka.log.LogCleanerIntegrationTest.cleanerTest
> 
>
> Key: KAFKA-2977
> URL: https://issues.apache.org/jira/browse/KAFKA-2977
> Project: Kafka
>  Issue Type: Sub-task
>Affects Versions: 0.9.0.0
>Reporter: Guozhang Wang
>Assignee: jin xing
>  Labels: newbie
> Fix For: 0.9.0.1
>
>
> {code}
> java.lang.AssertionError: log cleaner should have processed up to offset 599
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at 
> kafka.log.LogCleanerIntegrationTest.cleanerTest(LogCleanerIntegrationTest.scala:76)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at org.junit.runners.Suite.runChild(Suite.java:128)
>   at org.junit.runners.Suite.runChild(Suite.java:27)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:105)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.execute(JUnitTestClassExecuter.java:56)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassProcessor.processTestClass(JUnitTestClassProcessor.java:64)
>   at 
> org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:50)
>   at sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> org.gradle.messaging.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32)
>   at 
> org.gradle.messaging.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
>   at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
>   at 
> org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:106)
>   at sun.reflect.GeneratedMethodAccessor10.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> 

[jira] [Commented] (KAFKA-3024) Remove old patch review tools

2015-12-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3024?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15067031#comment-15067031
 ] 

ASF GitHub Bot commented on KAFKA-3024:
---

GitHub user granthenke opened a pull request:

https://github.com/apache/kafka/pull/705

KAFKA-3024: Remove old patch review tools



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/granthenke/kafka review-tools-cleanup

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/705.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #705


commit 4ae0d0b51dcd0bbf57cbbdedea6736480a344eca
Author: Grant Henke 
Date:   2015-12-21T20:19:55Z

KAFKA-3024: Remove old patch review tools




> Remove old patch review tools
> -
>
> Key: KAFKA-3024
> URL: https://issues.apache.org/jira/browse/KAFKA-3024
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.9.0.0
>Reporter: Grant Henke
>Assignee: Grant Henke
>
> Kafka has been using the new GitHub PR and Jenkins build process for a while 
> now. No new patches have been added to Review Board for some time. We should 
> remove the old patch review tools, and any new functionality should be added 
> to the new PR build and merge script.





[jira] [Commented] (KAFKA-2989) Verify all partitions consumed after rebalancing in system tests

2015-12-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15066845#comment-15066845
 ] 

ASF GitHub Bot commented on KAFKA-2989:
---

GitHub user hachikuji opened a pull request:

https://github.com/apache/kafka/pull/702

KAFKA-2989: system tests should verify partitions consumed after rebalancing



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hachikuji/kafka KAFKA-2989

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/702.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #702


commit f6640867eecd3106c635fc08a0abefa2c2cabc8e
Author: Jason Gustafson 
Date:   2015-12-16T01:49:36Z

KAFKA-2989: system tests should verify partitions consumed after rebalancing




> Verify all partitions consumed after rebalancing in system tests
> 
>
> Key: KAFKA-2989
> URL: https://issues.apache.org/jira/browse/KAFKA-2989
> Project: Kafka
>  Issue Type: Test
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
>
> In KAFKA-2978, we found a bug which prevented the consumer from fetching some 
> assigned partitions. Our system tests didn't catch the bug because we only 
> assert that some messages from any topic are consumed after rebalancing. We 
> should strengthen these assertions to ensure that each partition is consumed.
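The strengthened assertion can be sketched as a per-partition coverage check (a simplification for illustration; the real system tests are Python/ducktape services):

```java
import java.util.Map;
import java.util.Set;

public class PartitionCoverage {
    // Returns true only if every assigned partition had at least one record
    // consumed after the rebalance. The weaker original check passed as soon
    // as records arrived from *any* partition.
    static boolean allPartitionsConsumed(Set<Integer> assigned,
                                         Map<Integer, Integer> consumedCounts) {
        for (int p : assigned) {
            if (consumedCounts.getOrDefault(p, 0) == 0) {
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        Set<Integer> assigned = Set.of(0, 1, 2);
        // Partition 2 got no records: the old "some messages consumed" check
        // would pass here, but the per-partition check fails.
        System.out.println(allPartitionsConsumed(assigned, Map.of(0, 10, 1, 7)));
    }
}
```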





[jira] [Commented] (KAFKA-3020) Ensure Checkstyle runs on all Java code

2015-12-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15066984#comment-15066984
 ] 

ASF GitHub Bot commented on KAFKA-3020:
---

GitHub user granthenke opened a pull request:

https://github.com/apache/kafka/pull/703

KAFKA-3020: Ensure CheckStyle runs on all Java code

- Adds CheckStyle to core and examples modules
- Fixes any existing CheckStyle issues

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/granthenke/kafka checkstyle-core

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/703.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #703


commit adc050940054947cc8a9a7396ec70a70a01f3e5f
Author: Grant Henke 
Date:   2015-11-10T22:46:38Z

KAFKA-3020: Ensure CheckStyle runs on all Java code

- Adds CheckStyle to core and examples modules
- Fixes any existing CheckStyle issues




> Ensure Checkstyle runs on all Java code
> ---
>
> Key: KAFKA-3020
> URL: https://issues.apache.org/jira/browse/KAFKA-3020
> Project: Kafka
>  Issue Type: Sub-task
>  Components: build
>Reporter: Grant Henke
>Assignee: Grant Henke
> Fix For: 0.9.1.0
>
>






[jira] [Commented] (KAFKA-2000) Delete consumer offsets from kafka once the topic is deleted

2015-12-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2000?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15067028#comment-15067028
 ] 

ASF GitHub Bot commented on KAFKA-2000:
---

GitHub user Parth-Brahmbhatt opened a pull request:

https://github.com/apache/kafka/pull/704

KAFKA-2000: Delete topic should also delete consumer offsets.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/Parth-Brahmbhatt/kafka KAFKA-2000

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/704.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #704


commit ceae0b7031d297a7db6664b435bb3cdc55228646
Author: Parth Brahmbhatt 
Date:   2015-12-18T20:35:32Z

KAFKA-2000: Delete topic should also delete consumer offsets.




> Delete consumer offsets from kafka once the topic is deleted
> 
>
> Key: KAFKA-2000
> URL: https://issues.apache.org/jira/browse/KAFKA-2000
> Project: Kafka
>  Issue Type: Bug
>Reporter: Sriharsha Chintalapani
>Assignee: Sriharsha Chintalapani
>  Labels: newbie++
> Fix For: 0.9.1.0
>
> Attachments: KAFKA-2000.patch, KAFKA-2000_2015-05-03_10:39:11.patch
>
>






[jira] [Commented] (KAFKA-3009) Disallow star imports

2015-12-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15067095#comment-15067095
 ] 

ASF GitHub Bot commented on KAFKA-3009:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/700


> Disallow star imports
> -
>
> Key: KAFKA-3009
> URL: https://issues.apache.org/jira/browse/KAFKA-3009
> Project: Kafka
>  Issue Type: Bug
>Reporter: Gwen Shapira
>Assignee: Manasvi Gupta
>  Labels: newbie
> Fix For: 0.9.1.0
>
> Attachments: main.xml
>
>
> Looks like we don't want star imports in our code (java.util.*).
> So, let's add this rule to Checkstyle and fix existing violations.
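The corresponding Checkstyle rule is a one-line module. This is a sketch of what the attached main.xml presumably adds; placement inside an existing TreeWalker section is assumed:

```xml
<module name="TreeWalker">
  <!-- Reject wildcard imports such as "import java.util.*;" -->
  <module name="AvoidStarImport"/>
</module>
```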





[jira] [Commented] (KAFKA-2455) Test Failure: kafka.consumer.MetricsTest > testMetricsLeak

2015-12-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2455?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15066855#comment-15066855
 ] 

ASF GitHub Bot commented on KAFKA-2455:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/694


> Test Failure: kafka.consumer.MetricsTest > testMetricsLeak 
> ---
>
> Key: KAFKA-2455
> URL: https://issues.apache.org/jira/browse/KAFKA-2455
> Project: Kafka
>  Issue Type: Sub-task
>Affects Versions: 0.9.0.0
>Reporter: Gwen Shapira
>Assignee: jin xing
>  Labels: newbie
> Fix For: 0.9.0.1
>
>
> I've seen this failure in builds twice recently:
> kafka.consumer.MetricsTest > testMetricsLeak FAILED
> java.lang.AssertionError: expected:<174> but was:<176>
> at org.junit.Assert.fail(Assert.java:92)
> at org.junit.Assert.failNotEquals(Assert.java:689)
> at org.junit.Assert.assertEquals(Assert.java:127)
> at org.junit.Assert.assertEquals(Assert.java:514)
> at org.junit.Assert.assertEquals(Assert.java:498)
> at 
> kafka.consumer.MetricsTest$$anonfun$testMetricsLeak$1.apply$mcVI$sp(MetricsTest.scala:65)
> at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:141)
> at kafka.consumer.MetricsTest.testMetricsLeak(MetricsTest.scala:63)





[jira] [Commented] (KAFKA-3014) Integer overflow causes incorrect node iteration in leastLoadedNode()

2015-12-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3014?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15066875#comment-15066875
 ] 

ASF GitHub Bot commented on KAFKA-3014:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/696


> Integer overflow causes incorrect node iteration in leastLoadedNode() 
> --
>
> Key: KAFKA-3014
> URL: https://issues.apache.org/jira/browse/KAFKA-3014
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
> Fix For: 0.9.0.1
>
>
> The leastLoadedNode() implementation iterates over all the known nodes to 
> find a suitable candidate for sending metadata. The loop looks like this:
> {code}
> for (int i = 0; i < nodes.size(); i++) {
>   int idx = Utils.abs((this.nodeIndexOffset + i) % nodes.size());
>   Node node = nodes.get(idx);
>   ...
> }
> {code}
> Unfortunately, this doesn't handle integer overflow correctly, which can 
> result in some nodes in the list being passed over. For example, if the size 
> of the node list is 5 and the random offset is Integer.MAX_VALUE, then the 
> loop will iterate over the following indices: 2, 3, 2, 1, 0. 
> In pathological cases, this can prevent the client from being able to connect 
> to good nodes in order to refresh metadata.
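The failure mode and one possible repair can be reproduced in isolation. The `safeOrder` variant is an illustration, not necessarily the committed fix: it reduces the offset into `[0, size)` once so the subsequent addition cannot overflow.

```java
import java.util.ArrayList;
import java.util.List;

public class LeastLoadedNodeOverflow {
    // The problematic pattern from the report: offset + i can overflow int,
    // so abs() of the remainder revisits some indices and skips others.
    static List<Integer> buggyOrder(int offset, int size) {
        List<Integer> order = new ArrayList<>();
        for (int i = 0; i < size; i++) {
            order.add(Math.abs((offset + i) % size));
        }
        return order;
    }

    // One way to repair it: fold the offset into [0, size) first; then
    // base + i stays small and every index is visited exactly once.
    static List<Integer> safeOrder(int offset, int size) {
        int base = Math.floorMod(offset, size);
        List<Integer> order = new ArrayList<>();
        for (int i = 0; i < size; i++) {
            order.add((base + i) % size);
        }
        return order;
    }

    public static void main(String[] args) {
        // With 5 nodes and offset Integer.MAX_VALUE, the buggy loop visits
        // 2, 3, 2, 1, 0 -- index 4 is never tried, matching the report above.
        System.out.println(buggyOrder(Integer.MAX_VALUE, 5)); // [2, 3, 2, 1, 0]
        System.out.println(safeOrder(Integer.MAX_VALUE, 5));  // [2, 3, 4, 0, 1]
    }
}
```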





[jira] [Commented] (KAFKA-2989) Verify all partitions consumed after rebalancing in system tests

2015-12-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15068946#comment-15068946
 ] 

ASF GitHub Bot commented on KAFKA-2989:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/702


> Verify all partitions consumed after rebalancing in system tests
> 
>
> Key: KAFKA-2989
> URL: https://issues.apache.org/jira/browse/KAFKA-2989
> Project: Kafka
>  Issue Type: Test
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
> Fix For: 0.9.1.0
>
>
> In KAFKA-2978, we found a bug which prevented the consumer from fetching some 
> assigned partitions. Our system tests didn't catch the bug because we only 
> assert that some messages from any topic are consumed after rebalancing. We 
> should strengthen these assertions to ensure that each partition is consumed.





[jira] [Commented] (KAFKA-3029) Make class org.apache.kafka.common.TopicPartition Serializable

2015-12-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15068363#comment-15068363
 ] 

ASF GitHub Bot commented on KAFKA-3029:
---

GitHub user praveend opened a pull request:

https://github.com/apache/kafka/pull/711

KAFKA-3029: Marking class org.apache.kafka.common.TopicPartition as 
serializable

Patch for issue KAFKA-3029

Given that the fix is trivial, no new test case is needed. I have run the 
test suite using gradle (as described at 
https://github.com/apache/kafka/blob/trunk/README.md) and the suite runs clean.
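The change is essentially a marker-interface addition. A minimal stand-in class (fields assumed from TopicPartition's public API, serialVersionUID chosen arbitrarily for illustration) shows the round-trip that becomes possible:

```java
import java.io.*;

public class SerializableRoundTrip {
    // Stand-in for org.apache.kafka.common.TopicPartition after the patch:
    // implementing Serializable is the whole change.
    static class TopicPartition implements Serializable {
        private static final long serialVersionUID = 1L; // assumed value
        final String topic;
        final int partition;
        TopicPartition(String topic, int partition) {
            this.topic = topic;
            this.partition = partition;
        }
    }

    // Serialize to bytes and back; before the patch writeObject would throw
    // NotSerializableException.
    static TopicPartition roundTrip(TopicPartition tp) {
        try {
            ByteArrayOutputStream buf = new ByteArrayOutputStream();
            try (ObjectOutputStream out = new ObjectOutputStream(buf)) {
                out.writeObject(tp);
            }
            try (ObjectInputStream in =
                     new ObjectInputStream(new ByteArrayInputStream(buf.toByteArray()))) {
                return (TopicPartition) in.readObject();
            }
        } catch (IOException | ClassNotFoundException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        TopicPartition copy = roundTrip(new TopicPartition("events", 3));
        System.out.println(copy.topic + "-" + copy.partition); // events-3
    }
}
```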

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/praveend/kafka tp_serializable_branch

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/711.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #711


commit 66bda1b1d9e1e073efcd1c9caa166e6817874a47
Author: Praveen Devarao 
Date:   2015-12-22T15:28:10Z

Marking class org.apache.kafka.common.TopicPartition as serializable




> Make class org.apache.kafka.common.TopicPartition Serializable
> --
>
> Key: KAFKA-3029
> URL: https://issues.apache.org/jira/browse/KAFKA-3029
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients
>Affects Versions: 0.9.0.0
>Reporter: Praveen Devarao
>  Labels: easyfix, patch
> Fix For: 0.9.0.1
>
>
> The client class TopicPartition is exposed to and used by consumer applications 
> directly. When an application needs to checkpoint its state, it is 
> difficult to serialize app classes that use TopicPartition, as 
> TopicPartition is not serializable.
> For instance, consider the Spark use case where RDDs have to be 
> checkpointed: the KafkaInputDStream (which we are currently modifying to 
> use the new Kafka API rather than the high-level APIs of the previous version) 
> cannot be serialized due to the above limitation.
> I have created a patch that makes the TopicPartition class serializable by 
> having it implement the Serializable interface, and have issued a pull request.
> Can this be merged for the next release (0.9.0.1)?
> Thanks
> Praveen





[jira] [Commented] (KAFKA-3022) Deduplicate common project configurations

2015-12-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3022?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15068380#comment-15068380
 ] 

ASF GitHub Bot commented on KAFKA-3022:
---

GitHub user granthenke opened a pull request:

https://github.com/apache/kafka/pull/712

KAFKA-3022: Deduplicate common project configurations

- Move testJar to subprojects config
- Move CheckStyle to subprojects config
- Move testLogging to subprojects config
- Add testSourceJar in subprojects config
- Minor cleanup
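In build.gradle terms, the deduplication looks roughly like this (a sketch of the shape, not the exact committed configuration):

```groovy
// Hoist configuration shared by every module into one subprojects block.
subprojects {
  apply plugin: 'checkstyle'

  // Publish each module's test classes as a module-test.jar.
  task testJar(type: Jar, dependsOn: testClasses) {
    classifier = 'test'
    from sourceSets.test.output
  }

  test {
    testLogging {
      events "passed", "failed", "skipped"
    }
  }
}
```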

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/granthenke/kafka build-dedupe

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/712.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #712


commit eb469093f9fe83416a1ace421a9fb9ab17b936ee
Author: Grant Henke 
Date:   2015-12-22T16:20:26Z

KAFKA-3022: Deduplicate common project configurations

- Move testJar to subprojects config
- Move CheckStyle to subprojects config
- Move testLogging to subprojects config
- Add testSourceJar in subprojects config
- Minor cleanup




> Deduplicate common project configurations
> -
>
> Key: KAFKA-3022
> URL: https://issues.apache.org/jira/browse/KAFKA-3022
> Project: Kafka
>  Issue Type: Sub-task
>  Components: build
>Reporter: Grant Henke
>Assignee: Grant Henke
> Fix For: 0.9.1.0
>
>
> Many of the configurations for subproject artifacts, tests, CheckStyle, etc. 
> are and should be exactly the same. We can reduce duplicate code by moving 
> this configuration to the sub-projects section.





[jira] [Commented] (KAFKA-3043) Replace request.required.acks with acks in docs

2015-12-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3043?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15071365#comment-15071365
 ] 

ASF GitHub Bot commented on KAFKA-3043:
---

GitHub user sasakitoa opened a pull request:

https://github.com/apache/kafka/pull/716

KAFKA-3043: Replace request.required.acks with acks in docs.

In Kafka 0.9, the producer configuration request.required.acks=-1 was 
replaced by acks=all, 
but the old config name remains in the docs.
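For reference, the new-producer spelling of the setting (a minimal sketch; the broker address is a placeholder):

```java
import java.util.Properties;

public class AcksConfigDemo {
    // Build producer properties using the new config name. The old docs said
    // request.required.acks=-1; the equivalent new-producer setting is acks=all.
    static Properties producerProps() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder address
        props.put("acks", "all"); // wait for the full ISR to acknowledge
        return props;
    }

    public static void main(String[] args) {
        System.out.println(producerProps().getProperty("acks")); // all
    }
}
```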


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/sasakitoa/kafka acks_doc

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/716.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #716


commit ab7d1b041f7c9472757672c7d2e62ac533390c85
Author: Sasaki Toru 
Date:   2015-12-25T05:27:44Z

Replace request.required.acks with acks in docs.




> Replace request.required.acks with acks in docs
> ---
>
> Key: KAFKA-3043
> URL: https://issues.apache.org/jira/browse/KAFKA-3043
> Project: Kafka
>  Issue Type: Improvement
>  Components: website
>Affects Versions: 0.9.0.1, 0.9.1.0
>Reporter: Sasaki Toru
> Fix For: 0.9.0.0
>
>
> In Kafka 0.9, the producer configuration request.required.acks=-1 was replaced 
> by acks=all, but the old config name remains in the docs.





[jira] [Commented] (KAFKA-3043) Replace request.required.acks with acks in docs

2015-12-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3043?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15072074#comment-15072074
 ] 

ASF GitHub Bot commented on KAFKA-3043:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/716


> Replace request.required.acks with acks in docs
> ---
>
> Key: KAFKA-3043
> URL: https://issues.apache.org/jira/browse/KAFKA-3043
> Project: Kafka
>  Issue Type: Improvement
>  Components: website
>Affects Versions: 0.9.0.1, 0.9.1.0
>Reporter: Sasaki Toru
> Fix For: 0.9.1.0
>
>
> In Kafka 0.9, the producer configuration request.required.acks=-1 was replaced 
> by acks=all, but the old config name remains in the docs.





[jira] [Commented] (KAFKA-2977) Transient Failure in kafka.log.LogCleanerIntegrationTest.cleanerTest

2015-12-19 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2977?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15065630#comment-15065630
 ] 

ASF GitHub Bot commented on KAFKA-2977:
---

GitHub user ZoneMayor reopened a pull request:

https://github.com/apache/kafka/pull/671

KAFKA-2977: Transient Failure in 
kafka.log.LogCleanerIntegrationTest.cleanerTest

Code is as below:
val appends = writeDups(numKeys = 100, numDups = 3, log, 
CompressionCodec.getCompressionCodec(compressionCodec))
cleaner.startup()
val firstDirty = log.activeSegment.baseOffset
cleaner.awaitCleaned("log", 0, firstDirty)

val appends2 = appends ++ writeDups(numKeys = 100, numDups = 3, log, 
CompressionCodec.getCompressionCodec(compressionCodec))
val firstDirty2 = log.activeSegment.baseOffset
cleaner.awaitCleaned("log", 0, firstDirty2)

The log cleaner and writeDups run on two different threads;
the log cleaner runs a cleaning pass every 15s, and the timeout in "cleaner.awaitCleaned" is 60s;
there is a filtering condition for a log to be chosen as a cleaning 
target: cleanableRatio > 0.5 (the default value of log.cleaner.min.cleanable.ratio);
It may happen that, during "val appends2 = appends ++ writeDups(numKeys = 
100, numDups = 3, log, 
CompressionCodec.getCompressionCodec(compressionCodec))", the log is also 
undergoing a cleaning pass; 
Since the segment size configured in this test is quite small (100), it is 
possible that before 'writeDups' finishes, some 'dirty segments' of the 
log have already been cleaned;
With only a tiny dirty part left, cleanableRatio > 0.5 cannot be satisfied;
thus firstDirty2 > lastCleaned2, which causes the test to fail;
To fix, make MinCleanableDirtyRatioProp configurable in makeCleaner;
Also removed minDirtyMessages.
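The selection filter this explanation hinges on can be modeled in a few lines (toy byte counts; the real cleaner derives these from segment metadata):

```java
public class CleanableRatioDemo {
    // A log is picked for cleaning only if its dirty fraction exceeds the
    // configured minimum (0.5 by default for log.cleaner.min.cleanable.ratio).
    static boolean isCleanable(long cleanBytes, long dirtyBytes, double minDirtyRatio) {
        double ratio = (double) dirtyBytes / (cleanBytes + dirtyBytes);
        return ratio > minDirtyRatio;
    }

    public static void main(String[] args) {
        // Fresh log: everything is dirty, so it is selected.
        System.out.println(isCleanable(0, 1000, 0.5));   // true
        // After a mid-test cleaning pass only a small dirty tail remains,
        // so the log is skipped and awaitCleaned eventually times out.
        System.out.println(isCleanable(900, 100, 0.5));  // false
    }
}
```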

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ZoneMayor/kafka trunk-KAFKA-2977

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/671.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #671


commit 95374147a28208d4850f6e73f714bf418935fc2d
Author: ZoneMayor 
Date:   2015-11-27T03:49:34Z

Merge pull request #1 from apache/trunk

merge

commit cec5b48b651a7efd3900cfa3c1fd0ab1eeeaa3ec
Author: ZoneMayor 
Date:   2015-12-01T10:44:02Z

Merge pull request #2 from apache/trunk

2015-12-1

commit a119d547bf1741625ce0627073c7909992a20f15
Author: ZoneMayor 
Date:   2015-12-04T13:42:27Z

Merge pull request #3 from apache/trunk

2015-12-04#KAFKA-2893

commit b767a8dff85fc71c75d4cf5178c3f6f03ff81bfc
Author: ZoneMayor 
Date:   2015-12-09T10:42:30Z

Merge pull request #5 from apache/trunk

2015-12-9

commit 0070c2d71d06ee8baa1cddb3451cd5af6c6b1d4a
Author: ZoneMayor 
Date:   2015-12-11T14:50:30Z

Merge pull request #8 from apache/trunk

2015-12-11

commit 09908ac646d4c84f854dad63b8c99213b74a7063
Author: ZoneMayor 
Date:   2015-12-13T14:17:19Z

Merge pull request #9 from apache/trunk

2015-12-13

commit ff1e68bb7101d12624c189174ef1dceb21ed9798
Author: jinxing 
Date:   2015-12-13T14:31:34Z

KAFKA-2054: Transient Failure in 
kafka.log.LogCleanerIntegrationTest.cleanerTest

commit 6321ab6599cb7a981fac2a4eea64a5f2ea805dd6
Author: jinxing 
Date:   2015-12-13T14:36:11Z

removed unnecessary maven repo

commit 05cae52c72a02c0ed40fd4e3be03e1cb19f33f7a
Author: jinxing 
Date:   2015-12-17T12:21:12Z

removed the semicolon

commit 34240b52e1b70aa172b65155f6042243d838b420
Author: ZoneMayor 
Date:   2015-12-18T07:22:20Z

Merge pull request #12 from apache/trunk

2015-12-18

commit 651de48663cf375ea714cdbeb34650d75f1f4d71
Author: jinxing 
Date:   2015-12-18T07:43:38Z

KAFKA-2977: Transient Failure in 
kafka.log.LogCleanerIntegrationTest.cleanerTest

commit ba03eee18045dc4aabc56ff17907036c238b1f7d
Author: jinxing 
Date:   2015-12-18T07:50:57Z

KAFKA-2977: fix

commit 341c0f7d9d8a6910bc81a1cc9da6608c48323201
Author: jinxing 
Date:   2015-12-18T17:08:43Z

Merge branch 'trunk-KAFKA-2977' of https://github.com/ZoneMayor/kafka into 
trunk-KAFKA-2977

commit 734d1d5ee179cf8671426ec770c8cab5f6228112
Author: jinxing 
Date:   2015-12-18T17:23:15Z

KAFKA-2977: using String interpolation

commit 55430ed040354279f06bd59ff1a62c2a50c8882c
Author: jinxing 
Date:   2015-12-19T13:39:39Z

make MinCleanableDirtyRatioProp configurable in makeCleaner and removed 
minDirtyMessages


[jira] [Commented] (KAFKA-2977) Transient Failure in kafka.log.LogCleanerIntegrationTest.cleanerTest

2015-12-19 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2977?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15065629#comment-15065629
 ] 

ASF GitHub Bot commented on KAFKA-2977:
---

Github user ZoneMayor closed the pull request at:

https://github.com/apache/kafka/pull/671


> Transient Failure in kafka.log.LogCleanerIntegrationTest.cleanerTest
> 
>
> Key: KAFKA-2977
> URL: https://issues.apache.org/jira/browse/KAFKA-2977
> Project: Kafka
>  Issue Type: Sub-task
>Affects Versions: 0.9.0.0
>Reporter: Guozhang Wang
>Assignee: jin xing
>  Labels: newbie
> Fix For: 0.9.0.1
>
>
> {code}
> java.lang.AssertionError: log cleaner should have processed up to offset 599
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at 
> kafka.log.LogCleanerIntegrationTest.cleanerTest(LogCleanerIntegrationTest.scala:76)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at org.junit.runners.Suite.runChild(Suite.java:128)
>   at org.junit.runners.Suite.runChild(Suite.java:27)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:105)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.execute(JUnitTestClassExecuter.java:56)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassProcessor.processTestClass(JUnitTestClassProcessor.java:64)
>   at 
> org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:50)
>   at sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> org.gradle.messaging.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32)
>   at 
> org.gradle.messaging.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
>   at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
>   at 
> org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:106)
>   at sun.reflect.GeneratedMethodAccessor10.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> 

[jira] [Commented] (KAFKA-3013) Display the topic-partition in the exception message for expired batches in recordAccumulator

2015-12-18 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15064422#comment-15064422
 ] 

ASF GitHub Bot commented on KAFKA-3013:
---

GitHub user MayureshGharat opened a pull request:

https://github.com/apache/kafka/pull/695

KAFKA-3013

Added topic-partition information to the exception message on batch expiry 
in RecordAccumulator

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/MayureshGharat/kafka kafka-3013

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/695.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #695


commit 5c7f2e749fd8674bae66b6698319181a0f3e9251
Author: Mayuresh Gharat 
Date:   2015-12-18T18:28:32Z

Added topic-partition information to the exception message on batch expiry 
in RecordAccumulator




> Display the topic-partition in the exception message for expired batches in 
> recordAccumulator 
> --
>
> Key: KAFKA-3013
> URL: https://issues.apache.org/jira/browse/KAFKA-3013
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients
>Reporter: Mayuresh Gharat
>Assignee: Mayuresh Gharat
>Priority: Trivial
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1860) File system errors are not detected unless Kafka tries to write

2015-12-18 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1860?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15064922#comment-15064922
 ] 

ASF GitHub Bot commented on KAFKA-1860:
---

GitHub user MayureshGharat opened a pull request:

https://github.com/apache/kafka/pull/697

KAFKA-1860

The JVM should stop if the underlying file system goes into read-only mode

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/MayureshGharat/kafka kafka-1860

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/697.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #697


commit 5c7f2e749fd8674bae66b6698319181a0f3e9251
Author: Mayuresh Gharat 
Date:   2015-12-18T18:28:32Z

Added topic-partition information to the exception message on batch expiry 
in RecordAccumulator

commit 140d89f33171d665ec27839e8589f2055dc2a34b
Author: Mayuresh Gharat 
Date:   2015-12-18T19:02:49Z

Made the exception message more clear explaining why the batches expired




> File system errors are not detected unless Kafka tries to write
> ---
>
> Key: KAFKA-1860
> URL: https://issues.apache.org/jira/browse/KAFKA-1860
> Project: Kafka
>  Issue Type: Bug
>Reporter: Guozhang Wang
>Assignee: Mayuresh Gharat
> Fix For: 0.10.0.0
>
> Attachments: KAFKA-1860.patch
>
>
> When the disk (raid with caches dir) dies on a Kafka broker, typically the 
> filesystem gets mounted into read-only mode, and hence when Kafka tries to 
> read the disk, they'll get a FileNotFoundException with the read-only errno 
> set (EROFS).
> However, as long as there is no produce request received, hence no writes 
> attempted on the disks, Kafka will not exit on such FATAL error (when the 
> disk starts working again, Kafka might think some files are gone while they 
> will reappear later as raid comes back online). Instead it keeps spilling 
> exceptions like:
> {code}
> 2015/01/07 09:47:41.543 ERROR [KafkaScheduler] [kafka-scheduler-1] 
> [kafka-server] [] Uncaught exception in scheduled task 
> 'kafka-recovery-point-checkpoint'
> java.io.FileNotFoundException: 
> /export/content/kafka/i001_caches/recovery-point-offset-checkpoint.tmp 
> (Read-only file system)
>   at java.io.FileOutputStream.open(Native Method)
>   at java.io.FileOutputStream.(FileOutputStream.java:206)
>   at java.io.FileOutputStream.(FileOutputStream.java:156)
>   at kafka.server.OffsetCheckpoint.write(OffsetCheckpoint.scala:37)
> {code}
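The behavior this ticket asks for, treating a read-only file system as fatal instead of logging the same exception forever, can be sketched as below. This is an illustrative sketch, not Kafka's actual code: the class and method names are hypothetical, and matching on the exception message is a heuristic that relies on HotSpot appending the OS error text (e.g. "(Read-only file system)") to the message.

```java
import java.io.IOException;

// Hypothetical sketch: halt the broker when a background write fails
// because the file system was remounted read-only (EROFS).
public class FatalDiskErrorHandler {

    // HotSpot appends the OS error text, e.g. "(Read-only file system)",
    // to the message of FileNotFoundException/IOException.
    static boolean isReadOnlyFs(IOException e) {
        String msg = e.getMessage();
        return msg != null && msg.contains("Read-only file system");
    }

    static void onCheckpointFailure(IOException e) {
        if (isReadOnlyFs(e)) {
            System.err.println("FATAL: file system is read-only, halting: " + e);
            // halt() rather than exit(): shutdown hooks would try to write too.
            Runtime.getRuntime().halt(1);
        }
        // Any other IOException: log it and let the scheduler retry.
    }
}
```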





[jira] [Commented] (KAFKA-1860) File system errors are not detected unless Kafka tries to write

2015-12-18 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1860?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15064942#comment-15064942
 ] 

ASF GitHub Bot commented on KAFKA-1860:
---

Github user MayureshGharat closed the pull request at:

https://github.com/apache/kafka/pull/697


> File system errors are not detected unless Kafka tries to write
> ---
>
> Key: KAFKA-1860
> URL: https://issues.apache.org/jira/browse/KAFKA-1860
> Project: Kafka
>  Issue Type: Bug
>Reporter: Guozhang Wang
>Assignee: Mayuresh Gharat
> Fix For: 0.10.0.0
>
> Attachments: KAFKA-1860.patch
>
>
> When the disk (raid with caches dir) dies on a Kafka broker, typically the 
> filesystem gets mounted into read-only mode, and hence when Kafka tries to 
> read the disk, they'll get a FileNotFoundException with the read-only errno 
> set (EROFS).
> However, as long as there is no produce request received, hence no writes 
> attempted on the disks, Kafka will not exit on such FATAL error (when the 
> disk starts working again, Kafka might think some files are gone while they 
> will reappear later as raid comes back online). Instead it keeps spilling 
> exceptions like:
> {code}
> 2015/01/07 09:47:41.543 ERROR [KafkaScheduler] [kafka-scheduler-1] 
> [kafka-server] [] Uncaught exception in scheduled task 
> 'kafka-recovery-point-checkpoint'
> java.io.FileNotFoundException: 
> /export/content/kafka/i001_caches/recovery-point-offset-checkpoint.tmp 
> (Read-only file system)
>   at java.io.FileOutputStream.open(Native Method)
>   at java.io.FileOutputStream.(FileOutputStream.java:206)
>   at java.io.FileOutputStream.(FileOutputStream.java:156)
>   at kafka.server.OffsetCheckpoint.write(OffsetCheckpoint.scala:37)
> {code}





[jira] [Commented] (KAFKA-1860) File system errors are not detected unless Kafka tries to write

2015-12-18 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1860?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15064951#comment-15064951
 ] 

ASF GitHub Bot commented on KAFKA-1860:
---

GitHub user MayureshGharat opened a pull request:

https://github.com/apache/kafka/pull/698

KAFKA-1860

The JVM should stop if the underlying file system goes into read-only mode

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/MayureshGharat/kafka KAFKA-1860

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/698.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #698


commit 4ba2186fbf422395387254d3201f53bc6707
Author: Mayuresh Gharat 
Date:   2015-12-18T23:21:01Z

The JVM should stop if the underlying file system goes into read-only mode




> File system errors are not detected unless Kafka tries to write
> ---
>
> Key: KAFKA-1860
> URL: https://issues.apache.org/jira/browse/KAFKA-1860
> Project: Kafka
>  Issue Type: Bug
>Reporter: Guozhang Wang
>Assignee: Mayuresh Gharat
> Fix For: 0.10.0.0
>
> Attachments: KAFKA-1860.patch
>
>
> When the disk (raid with caches dir) dies on a Kafka broker, typically the 
> filesystem gets mounted into read-only mode, and hence when Kafka tries to 
> read the disk, they'll get a FileNotFoundException with the read-only errno 
> set (EROFS).
> However, as long as there is no produce request received, hence no writes 
> attempted on the disks, Kafka will not exit on such FATAL error (when the 
> disk starts working again, Kafka might think some files are gone while they 
> will reappear later as raid comes back online). Instead it keeps spilling 
> exceptions like:
> {code}
> 2015/01/07 09:47:41.543 ERROR [KafkaScheduler] [kafka-scheduler-1] 
> [kafka-server] [] Uncaught exception in scheduled task 
> 'kafka-recovery-point-checkpoint'
> java.io.FileNotFoundException: 
> /export/content/kafka/i001_caches/recovery-point-offset-checkpoint.tmp 
> (Read-only file system)
>   at java.io.FileOutputStream.open(Native Method)
>   at java.io.FileOutputStream.(FileOutputStream.java:206)
>   at java.io.FileOutputStream.(FileOutputStream.java:156)
>   at kafka.server.OffsetCheckpoint.write(OffsetCheckpoint.scala:37)
> {code}





[jira] [Commented] (KAFKA-3014) Integer overflow causes incorrect node iteration in leastLoadedNode()

2015-12-18 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3014?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15064887#comment-15064887
 ] 

ASF GitHub Bot commented on KAFKA-3014:
---

GitHub user hachikuji opened a pull request:

https://github.com/apache/kafka/pull/696

KAFKA-3014: fix integer overflow problem in leastLoadedNode



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hachikuji/kafka KAFKA-3014

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/696.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #696


commit d513df5a3e732cde7576a4a126ac137045e1046c
Author: Jason Gustafson 
Date:   2015-12-18T22:18:07Z

KAFKA-3014: fix integer overflow problem in leastLoadedNode




> Integer overflow causes incorrect node iteration in leastLoadedNode() 
> --
>
> Key: KAFKA-3014
> URL: https://issues.apache.org/jira/browse/KAFKA-3014
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
>
> The leastLoadedNode() implementation iterates over all the known nodes to 
> find a suitable candidate for sending metadata. The loop looks like this:
> {code}
> for (int i = 0; i < nodes.size(); i++) {
>   int idx = Utils.abs((this.nodeIndexOffset + i) % nodes.size());
>   Node node = nodes.get(idx);
>   ...
> }
> {code}
> Unfortunately, this doesn't handle integer overflow correctly, which can 
> result in some nodes in the list being passed over. For example, if the size 
> of the node list is 5 and the random offset is Integer.MAX_VALUE, then the 
> loop will iterate over the following indices: 2, 3, 2, 1, 0. 
> In pathological cases, this can prevent the client from being able to connect 
> to good nodes in order to refresh metadata.
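The iteration order claimed above can be reproduced with a small self-contained sketch. Utils.abs is modeled here with Math.abs (consistent with the 2, 3, 2, 1, 0 sequence quoted in the description), and the "fixed" variant, normalizing the offset once with Math.floorMod, is one possible remedy, not necessarily the committed patch.

```java
import java.util.Arrays;

// Self-contained reproduction of the KAFKA-3014 overflow.
public class LeastLoadedNodeDemo {

    // Buggy: abs() runs only after the addition has already wrapped negative.
    static int[] buggyOrder(int offset, int size) {
        int[] order = new int[size];
        for (int i = 0; i < size; i++)
            order[i] = Math.abs((offset + i) % size);
        return order;
    }

    // Fixed: normalize once, then start + i < 2 * size and cannot overflow.
    static int[] fixedOrder(int offset, int size) {
        int start = Math.floorMod(offset, size); // always in [0, size)
        int[] order = new int[size];
        for (int i = 0; i < size; i++)
            order[i] = (start + i) % size;
        return order;
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(buggyOrder(Integer.MAX_VALUE, 5))); // [2, 3, 2, 1, 0]
        System.out.println(Arrays.toString(fixedOrder(Integer.MAX_VALUE, 5))); // [2, 3, 4, 0, 1]
    }
}
```

With 5 nodes and an offset of Integer.MAX_VALUE, the buggy loop visits 2, 3, 2, 1, 0: node 4 is never tried and node 2 is tried twice, which is exactly the skipped-node behavior the description warns about.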





[jira] [Commented] (KAFKA-3009) Disallow star imports

2015-12-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15065697#comment-15065697
 ] 

ASF GitHub Bot commented on KAFKA-3009:
---

GitHub user manasvigupta opened a pull request:

https://github.com/apache/kafka/pull/700

KAFKA-3009 : Disallow star imports

Summary of code changes

1) Added a new Checkstyle rule to flag any code using star imports
2) Fixed all existing star-import violations

Testing
---
Local build was successful.
All JUnit tests ran successfully locally.

@ewencp - Please review the changes. Thank you!

I state that the contribution is my original work and I license the work to 
the project under the project's open source license.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/manasvigupta/kafka KAFKA-3009

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/700.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #700


commit fbe5a941a2aefc6507ef4d2eb515f8279d93313b
Author: manasvigupta 
Date:   2015-12-20T09:40:30Z

code changes to fix issue - KAFKA-3009




> Disallow star imports
> -
>
> Key: KAFKA-3009
> URL: https://issues.apache.org/jira/browse/KAFKA-3009
> Project: Kafka
>  Issue Type: Bug
>Reporter: Gwen Shapira
>Assignee: Manasvi Gupta
>  Labels: newbie
>
> Looks like we don't want star imports in our code (java.util.*)
> So, let's add this rule to checkstyle and fix existing violations.
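The rule in question is a one-line Checkstyle module. A minimal fragment might look like the following (its placement inside the project's existing checkstyle.xml is assumed, not shown in this ticket):

```xml
<!-- Flags any import ending in .*; AvoidStarImport lives under TreeWalker. -->
<module name="TreeWalker">
    <module name="AvoidStarImport"/>
</module>
```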





[jira] [Commented] (KAFKA-2929) Migrate server side error mapping functionality

2015-12-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15072726#comment-15072726
 ] 

ASF GitHub Bot commented on KAFKA-2929:
---

Github user granthenke closed the pull request at:

https://github.com/apache/kafka/pull/616


> Migrate server side error mapping functionality
> ---
>
> Key: KAFKA-2929
> URL: https://issues.apache.org/jira/browse/KAFKA-2929
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.9.0.0
>Reporter: Grant Henke
>Assignee: Grant Henke
>
> Kafka common and core both have a class that maps error codes and exceptions. 
> To prevent errors and issues with consistency we should migrate from 
> ErrorMapping.scala in core in favor of Errors.java in common.
> When the old clients are removed ErrorMapping.scala and the old exceptions 
> should be removed.
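The "single source of truth" style of Errors.java can be illustrated with a self-contained miniature. This is a toy enum with made-up codes, not the real Errors class: the point is that one enum owns both directions of the code/exception mapping, so core and clients cannot drift apart.

```java
import java.util.HashMap;
import java.util.Map;

// Toy version of the Errors.java pattern: codes and exceptions in one place.
enum DemoErrors {
    NONE((short) 0, null),
    OFFSET_OUT_OF_RANGE((short) 1, new IllegalArgumentException("offset out of range")),
    UNKNOWN_TOPIC_OR_PARTITION((short) 3, new IllegalStateException("unknown topic or partition"));

    private static final Map<Short, DemoErrors> CODE_TO_ERROR = new HashMap<>();
    static {
        for (DemoErrors error : values())
            CODE_TO_ERROR.put(error.code, error);
    }

    private final short code;
    private final RuntimeException exception;

    DemoErrors(short code, RuntimeException exception) {
        this.code = code;
        this.exception = exception;
    }

    short code() { return code; }
    RuntimeException exception() { return exception; }

    // Wire code -> error; unknown codes fall back to NONE in this sketch.
    static DemoErrors forCode(short code) {
        DemoErrors error = CODE_TO_ERROR.get(code);
        return error == null ? NONE : error;
    }
}
```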





[jira] [Commented] (KAFKA-3046) add ByteBuffer Serializer

2015-12-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15073536#comment-15073536
 ] 

ASF GitHub Bot commented on KAFKA-3046:
---

GitHub user vesense opened a pull request:

https://github.com/apache/kafka/pull/718

KAFKA-3046: add ByteBuffer Serializer

https://issues.apache.org/jira/browse/KAFKA-3046

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/vesense/kafka patch-3

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/718.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #718


commit 0f39db647e2a1a3189aba6d2fa96aa4c1156bf86
Author: Xin Wang 
Date:   2015-12-29T05:57:13Z

add bytebuffer ser

commit 711a8b594bd7a2a5eacb958b74ab85ba10d97158
Author: Xin Wang 
Date:   2015-12-29T05:58:49Z

add bytebuffer deser




> add ByteBuffer Serializer
> --
>
> Key: KAFKA-3046
> URL: https://issues.apache.org/jira/browse/KAFKA-3046
> Project: Kafka
>  Issue Type: New Feature
>  Components: clients
>Reporter: Xin Wang
>
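A serializer/deserializer pair of this kind can be sketched as below. The local Serializer/Deserializer interfaces stand in for org.apache.kafka.common.serialization so the sketch stays self-contained; this is an illustration of the idea, not the code from the pull request.

```java
import java.nio.ByteBuffer;

// Self-contained sketch of a ByteBuffer serializer/deserializer pair.
public class ByteBufferSerdeDemo {
    interface Serializer<T> { byte[] serialize(String topic, T data); }
    interface Deserializer<T> { T deserialize(String topic, byte[] data); }

    // Copy the remaining bytes; duplicate() keeps the caller's position intact.
    static final Serializer<ByteBuffer> SER = (topic, data) -> {
        if (data == null)
            return null;
        byte[] out = new byte[data.remaining()];
        data.duplicate().get(out);
        return out;
    };

    static final Deserializer<ByteBuffer> DESER = (topic, bytes) ->
        bytes == null ? null : ByteBuffer.wrap(bytes);
}
```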






[jira] [Commented] (KAFKA-3020) Ensure Checkstyle runs on all Java code

2015-12-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15067541#comment-15067541
 ] 

ASF GitHub Bot commented on KAFKA-3020:
---

GitHub user granthenke reopened a pull request:

https://github.com/apache/kafka/pull/703

KAFKA-3020: Ensure CheckStyle runs on all Java code

- Adds CheckStyle to core and examples modules
- Fixes any existing CheckStyle issues

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/granthenke/kafka checkstyle-core

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/703.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #703


commit adc050940054947cc8a9a7396ec70a70a01f3e5f
Author: Grant Henke 
Date:   2015-11-10T22:46:38Z

KAFKA-3020: Ensure CheckStyle runs on all Java code

- Adds CheckStyle to core and examples modules
- Fixes any existing CheckStyle issues




> Ensure Checkstyle runs on all Java code
> ---
>
> Key: KAFKA-3020
> URL: https://issues.apache.org/jira/browse/KAFKA-3020
> Project: Kafka
>  Issue Type: Sub-task
>  Components: build
>Reporter: Grant Henke
>Assignee: Grant Henke
> Fix For: 0.9.1.0
>
>






[jira] [Commented] (KAFKA-3020) Ensure Checkstyle runs on all Java code

2015-12-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15067540#comment-15067540
 ] 

ASF GitHub Bot commented on KAFKA-3020:
---

Github user granthenke closed the pull request at:

https://github.com/apache/kafka/pull/703


> Ensure Checkstyle runs on all Java code
> ---
>
> Key: KAFKA-3020
> URL: https://issues.apache.org/jira/browse/KAFKA-3020
> Project: Kafka
>  Issue Type: Sub-task
>  Components: build
>Reporter: Grant Henke
>Assignee: Grant Henke
> Fix For: 0.9.1.0
>
>






[jira] [Commented] (KAFKA-3010) include error code when logging an error when ack = 0

2015-12-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15067937#comment-15067937
 ] 

ASF GitHub Bot commented on KAFKA-3010:
---

GitHub user ijuma opened a pull request:

https://github.com/apache/kafka/pull/709

KAFKA-3010; Include error in log when ack 0

I verified this by trying to produce to __consumer_offsets and the logged 
message looks like:

[2015-12-22 10:34:40,897] INFO [KafkaApi-0] Closing connection due to error 
during produce request with correlation id 1 from client id console-producer 
with ack=0
Topic and partition to exceptions: [__consumer_offsets,43] -> 
kafka.common.InvalidTopicException (kafka.server.KafkaApis)

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ijuma/kafka 
kafka-3010-include-error-in-log-when-ack-0

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/709.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #709


commit 9e794eef6f207b7a1beb102a868d4ccd1fedcced
Author: Ismael Juma 
Date:   2015-12-19T16:32:36Z

Include error code when logging an error when ack = 0

commit 5e60620d5007af85e4f123e02846d026855402be
Author: Ismael Juma 
Date:   2015-12-22T10:39:59Z

Remove redundant and inaccurate comment




> include error code when logging an error when ack = 0
> -
>
> Key: KAFKA-3010
> URL: https://issues.apache.org/jira/browse/KAFKA-3010
> Project: Kafka
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 0.9.0.0
>Reporter: Jun Rao
>Assignee: Ismael Juma
>
> In handleProducerRequest.produceResponseCallback(), we have the following 
> logging.
>  "Close connection due to error handling produce request with 
> correlation id %d from client id %s with ack=0".format(
> We should log the error code or exception as well.




