[jira] [Updated] (KAFKA-1482) Transient test failures for kafka.admin.DeleteTopicTest.testPartitionReassignmentDuringDeleteTopic

2017-01-11 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1482?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-1482:
-
Component/s: unit tests

> Transient test failures for  
> kafka.admin.DeleteTopicTest.testPartitionReassignmentDuringDeleteTopic
> ---
>
> Key: KAFKA-1482
> URL: https://issues.apache.org/jira/browse/KAFKA-1482
> Project: Kafka
>  Issue Type: Bug
>  Components: unit tests
>Reporter: Guozhang Wang
>Assignee: Sriharsha Chintalapani
>  Labels: newbie
> Fix For: 0.10.3.0
>
> Attachments: kafka_delete_topic_test.log, kafka_tests.log
>
>
> {code}
> Stacktrace
> kafka.common.TopicAlreadyMarkedForDeletionException: topic test is already 
> marked for deletion
>   at kafka.admin.AdminUtils$.deleteTopic(AdminUtils.scala:324)
>   at 
> kafka.admin.DeleteTopicTest.testPartitionReassignmentDuringDeleteTopic(DeleteTopicTest.scala:117)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:114)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.execute(JUnitTestClassExecuter.java:57)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassProcessor.processTestClass(JUnitTestClassProcessor.java:66)
>   at 
> org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
>   at sun.reflect.GeneratedMethodAccessor16.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32)
>   at 
> org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
>   at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
>   at 
> org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:109)
>   at sun.reflect.GeneratedMethodAccessor15.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> org.gradle.internal.remote.internal.hub.MessageHub$Handler.run(MessageHub.java:377)
>   at 
> org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:54)
>   at 
> org.gradle.internal.concurrent.StoppableExecutorImpl$1.run(StoppableExecutorImpl.java:40)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> {code}
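
The exception above is raised when a delete request finds the topic already queued under ZooKeeper's admin delete path. As a minimal Java sketch of how a caller can tolerate that race (illustrative only, assuming the AdminUtils/ZkUtils signatures of this era; not the committed fix):

{code}
import kafka.admin.AdminUtils;
import kafka.common.TopicAlreadyMarkedForDeletionException;
import kafka.utils.ZkUtils;

public final class DeleteTopicGuard {
    // Issue the delete, tolerating a prior request that already marked the
    // topic for deletion; the controller will finish the pending deletion.
    public static void deleteQuietly(ZkUtils zkUtils, String topic) {
        try {
            AdminUtils.deleteTopic(zkUtils, topic);
        } catch (TopicAlreadyMarkedForDeletionException e) {
            // Already marked -- nothing more to request.
        }
    }
}
{code}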

[jira] [Reopened] (KAFKA-1482) Transient test failures for kafka.admin.DeleteTopicTest.testPartitionReassignmentDuringDeleteTopic

2017-01-11 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1482?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang reopened KAFKA-1482:
--

> Transient test failures for  
> kafka.admin.DeleteTopicTest.testPartitionReassignmentDuringDeleteTopic
> ---
>
> Key: KAFKA-1482
> URL: https://issues.apache.org/jira/browse/KAFKA-1482
> Project: Kafka
>  Issue Type: Bug
>Reporter: Guozhang Wang
>Assignee: Sriharsha Chintalapani
>  Labels: newbie
> Fix For: 0.10.3.0
>
> Attachments: kafka_delete_topic_test.log, kafka_tests.log

[jira] [Updated] (KAFKA-1482) Transient test failures for kafka.admin.DeleteTopicTest.testPartitionReassignmentDuringDeleteTopic

2017-01-11 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1482?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-1482:
-
Fix Version/s: (was: 0.8.2.0)
   0.10.3.0

> Transient test failures for  
> kafka.admin.DeleteTopicTest.testPartitionReassignmentDuringDeleteTopic
> ---
>
> Key: KAFKA-1482
> URL: https://issues.apache.org/jira/browse/KAFKA-1482
> Project: Kafka
>  Issue Type: Bug
>Reporter: Guozhang Wang
>Assignee: Sriharsha Chintalapani
>  Labels: newbie
> Fix For: 0.10.3.0
>
> Attachments: kafka_delete_topic_test.log, kafka_tests.log

[jira] [Updated] (KAFKA-1482) Transient test failures for kafka.admin.DeleteTopicTest.testPartitionReassignmentDuringDeleteTopic

2017-01-11 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1482?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-1482:
-
Summary: Transient test failures for  
kafka.admin.DeleteTopicTest.testPartitionReassignmentDuringDeleteTopic  (was: 
Transient test failures for kafka.admin.DeleteTopicTest)

> Transient test failures for  
> kafka.admin.DeleteTopicTest.testPartitionReassignmentDuringDeleteTopic
> ---
>
> Key: KAFKA-1482
> URL: https://issues.apache.org/jira/browse/KAFKA-1482
> Project: Kafka
>  Issue Type: Bug
>Reporter: Guozhang Wang
>Assignee: Sriharsha Chintalapani
>  Labels: newbie
> Fix For: 0.8.2.0
>
> Attachments: kafka_delete_topic_test.log, kafka_tests.log

[jira] [Commented] (KAFKA-1482) Transient test failures for kafka.admin.DeleteTopicTest.testPartitionReassignmentDuringDeleteTopic

2017-01-11 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1482?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15820321#comment-15820321
 ] 

Guozhang Wang commented on KAFKA-1482:
--

Saw this test failure in another example: 
https://builds.apache.org/job/kafka-pr-jdk8-scala2.12/776/testReport/junit/kafka.admin/DeleteTopicTest/testPartitionReassignmentDuringDeleteTopic/

> Transient test failures for  
> kafka.admin.DeleteTopicTest.testPartitionReassignmentDuringDeleteTopic
> ---
>
> Key: KAFKA-1482
> URL: https://issues.apache.org/jira/browse/KAFKA-1482
> Project: Kafka
>  Issue Type: Bug
>Reporter: Guozhang Wang
>Assignee: Sriharsha Chintalapani
>  Labels: newbie
> Fix For: 0.8.2.0
>
> Attachments: kafka_delete_topic_test.log, kafka_tests.log

[jira] [Updated] (KAFKA-1482) Transient test failures for kafka.admin.DeleteTopicTest

2017-01-11 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1482?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-1482:
-
Description: 


{code}
Stacktrace

kafka.common.TopicAlreadyMarkedForDeletionException: topic test is already 
marked for deletion
at kafka.admin.AdminUtils$.deleteTopic(AdminUtils.scala:324)
at 
kafka.admin.DeleteTopicTest.testPartitionReassignmentDuringDeleteTopic(DeleteTopicTest.scala:117)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:114)
at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.execute(JUnitTestClassExecuter.java:57)
at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassProcessor.processTestClass(JUnitTestClassProcessor.java:66)
at 
org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
at sun.reflect.GeneratedMethodAccessor16.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
at 
org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32)
at 
org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
at 
org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:109)
at sun.reflect.GeneratedMethodAccessor15.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
at 
org.gradle.internal.remote.internal.hub.MessageHub$Handler.run(MessageHub.java:377)
at 
org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:54)
at 
org.gradle.internal.concurrent.StoppableExecutorImpl$1.run(StoppableExecutorImpl.java:40)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
{code}

  was:
A couple of test cases have timing-related transient test failures:

kafka.admin.DeleteTopicTest > testPartitionReassignmentDuringDeleteTopic FAILED
junit.framework.AssertionFailedError: Admin path /admin/delete_topic/test 
path not deleted even after a replica is restarted
at junit.framework.Assert.fail(Assert.java:47)
at kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:578)
at 
kafka.admin.DeleteTopicTest.verifyTopicDeletion(DeleteTopicTest.scala:333)
at 
kafka.admin.DeleteTopicTest.testPartitionReassignmentDuringDeleteTopic(DeleteTopicTe
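
The assertion comes from a poll-until-true helper. A minimal Java sketch of the pattern (a hypothetical stand-in for kafka.utils.TestUtils.waitUntilTrue, not the actual implementation):

{code}
import java.util.function.BooleanSupplier;

public final class WaitUntil {
    // Poll the condition until it holds or the timeout elapses; timing-related
    // transient failures show up when the timeout is too tight for slow CI hosts.
    public static void waitUntilTrue(BooleanSupplier condition, String message, long timeoutMs)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (!condition.getAsBoolean()) {
            if (System.currentTimeMillis() > deadline) {
                throw new AssertionError(message);
            }
            Thread.sleep(100L); // poll interval
        }
    }
}
{code}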

Build failed in Jenkins: kafka-trunk-jdk8 #1172

2017-01-11 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] KAFKA-4114: Allow different offset reset strategies

[wangguoz] KAFKA-3452; Follow-up: Add SessionWindows

--
[...truncated 18378 lines...]

org.apache.kafka.streams.KafkaStreamsTest > testCannotStartTwice STARTED

org.apache.kafka.streams.KafkaStreamsTest > testCannotStartTwice PASSED

org.apache.kafka.streams.integration.KStreamKTableJoinIntegrationTest > 
shouldCountClicksPerRegion[0] STARTED

org.apache.kafka.streams.integration.KStreamKTableJoinIntegrationTest > 
shouldCountClicksPerRegion[0] PASSED

org.apache.kafka.streams.integration.KStreamKTableJoinIntegrationTest > 
shouldCountClicksPerRegion[1] STARTED

org.apache.kafka.streams.integration.KStreamKTableJoinIntegrationTest > 
shouldCountClicksPerRegion[1] PASSED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldBeAbleToQueryState[0] STARTED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldBeAbleToQueryState[0] PASSED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldNotMakeStoreAvailableUntilAllStoresAvailable[0] STARTED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldNotMakeStoreAvailableUntilAllStoresAvailable[0] PASSED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
queryOnRebalance[0] STARTED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
queryOnRebalance[0] FAILED
java.lang.AssertionError: Condition not met within timeout 6. Did not 
receive 1 number of records
at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:259)
at 
org.apache.kafka.streams.integration.utils.IntegrationTestUtils.waitUntilMinValuesRecordsReceived(IntegrationTestUtils.java:253)
at 
org.apache.kafka.streams.integration.QueryableStateIntegrationTest.waitUntilAtLeastNumRecordProcessed(QueryableStateIntegrationTest.java:668)
at 
org.apache.kafka.streams.integration.QueryableStateIntegrationTest.queryOnRebalance(QueryableStateIntegrationTest.java:349)

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
concurrentAccesses[0] STARTED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
concurrentAccesses[0] PASSED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldBeAbleToQueryState[1] STARTED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldBeAbleToQueryState[1] PASSED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldNotMakeStoreAvailableUntilAllStoresAvailable[1] STARTED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldNotMakeStoreAvailableUntilAllStoresAvailable[1] PASSED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
queryOnRebalance[1] STARTED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
queryOnRebalance[1] PASSED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
concurrentAccesses[1] STARTED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
concurrentAccesses[1] PASSED

org.apache.kafka.streams.integration.KStreamsFineGrainedAutoResetIntegrationTest
 > shouldThrowExceptionOverlappingPattern STARTED

org.apache.kafka.streams.integration.KStreamsFineGrainedAutoResetIntegrationTest
 > shouldThrowExceptionOverlappingPattern PASSED

org.apache.kafka.streams.integration.KStreamsFineGrainedAutoResetIntegrationTest
 > shouldThrowExceptionOverlappingTopic STARTED

org.apache.kafka.streams.integration.KStreamsFineGrainedAutoResetIntegrationTest
 > shouldThrowExceptionOverlappingTopic PASSED

org.apache.kafka.streams.integration.KStreamsFineGrainedAutoResetIntegrationTest
 > shouldThrowStreamsExceptionNoResetSpecified STARTED

org.apache.kafka.streams.integration.KStreamsFineGrainedAutoResetIntegrationTest
 > shouldThrowStreamsExceptionNoResetSpecified PASSED

org.apache.kafka.streams.integration.KStreamsFineGrainedAutoResetIntegrationTest
 > shouldOnlyReadRecordsWhereEarliestSpecified STARTED

org.apache.kafka.streams.integration.KStreamsFineGrainedAutoResetIntegrationTest
 > shouldOnlyReadRecordsWhereEarliestSpecified PASSED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testShouldReadFromRegexAndNamedTopics STARTED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testShouldReadFromRegexAndNamedTopics PASSED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testRegexMatchesTopicsAWhenCreated STARTED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testRegexMatchesTopicsAWhenCreated PASSED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testMultipleConsumersCanReadFromPartitionedTopic STARTED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testMultipleConsumersCanReadFromPartitionedTopic PASSED

or

[jira] [Resolved] (KAFKA-2543) facing test failure while building apache-kafka from source

2017-01-11 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2543?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-2543.
--
Resolution: Cannot Reproduce

> facing test failure while building apache-kafka from source
> ---
>
> Key: KAFKA-2543
> URL: https://issues.apache.org/jira/browse/KAFKA-2543
> Project: Kafka
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.8.2.1
> Environment: Linux ppc 64 le
>Reporter: naresh gundu
> Fix For: 0.8.1.2
>
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
> I have run the steps below from github https://github.com/apache/kafka to 
> build the apache-kafka branch 0.8.1.2:
>  cd source-code
>  gradle
>  ./gradlew jar
>  ./gradlew srcJar
>  ./gradlew test 
> Error:
> org.apache.kafka.common.record.MemoryRecordsTest > testIterator[2] FAILED
>  org.apache.kafka.common.KafkaException: 
> java.lang.reflect.InvocationTargetException
>  at 
> org.apache.kafka.common.record.Compressor.wrapForOutput(Compressor.java:217)
>  at org.apache.kafka.common.record.Compressor.<init>(Compressor.java:73)
>  at org.apache.kafka.common.record.Compressor.<init>(Compressor.java:77)
>  at org.apache.kafka.common.record.MemoryRecords.<init>(MemoryRecords.java:43)
>  at 
> org.apache.kafka.common.record.MemoryRecords.emptyRecords(MemoryRecords.java:51)
>  at 
> org.apache.kafka.common.record.MemoryRecords.emptyRecords(MemoryRecords.java:55)
>  at 
> org.apache.kafka.common.record.MemoryRecordsTest.testIterator(MemoryRecordsTest.java:42)
> Caused by:
>  java.lang.reflect.InvocationTargetException
>  at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>  at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
>  at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>  at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
>  at 
> org.apache.kafka.common.record.Compressor.wrapForOutput(Compressor.java:213)
>  ... 6 more
> Caused by:
>  java.lang.UnsatisfiedLinkError: 
> /tmp/snappy-unknown-fe798961-3b66-41f3-808a-68ebd27cc82d-libsnappyjava.so: 
> /tmp/snappy-unknown-fe798961-3b66-41f3-808a-68ebd27cc82d-libsnappyjava.so: 
> cannot open shared object file: No such file or directory (Possible cause: 
> endianness mismatch)
>  at java.lang.ClassLoader$NativeLibrary.load(Native Method)
>  at java.lang.ClassLoader.loadLibrary1(ClassLoader.java:1965)
>  at java.lang.ClassLoader.loadLibrary0(ClassLoader.java:1890)
>  at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1851)
>  at java.lang.Runtime.load0(Runtime.java:795)
>  at java.lang.System.load(System.java:1062)
>  at org.xerial.snappy.SnappyLoader.loadNativeLibrary(SnappyLoader.java:166)
>  at org.xerial.snappy.SnappyLoader.load(SnappyLoader.java:145)
>  at org.xerial.snappy.Snappy.<clinit>(Snappy.java:47)
>  at org.xerial.snappy.SnappyOutputStream.<init>(SnappyOutputStream.java:90)
>  at org.xerial.snappy.SnappyOutputStream.<init>(SnappyOutputStream.java:83)
>  ... 11 more
> 267 tests completed, 1 failed
>  :clients:test FAILED
> Please help me fix the failing test case.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [VOTE] KIP-107 Add purgeDataBefore() API in AdminClient

2017-01-11 Thread radai
LGTM, +1

On Wed, Jan 11, 2017 at 1:01 PM, Dong Lin  wrote:

> Hi all,
>
> It seems that there is no further concern with the KIP-107. At this point
> we would like to start the voting process. The KIP can be found at
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-107
> %3A+Add+purgeDataBefore%28%29+API+in+AdminClient.
>
> Thanks,
> Dong
>


[jira] [Commented] (KAFKA-4616) Message log is seen when kafka-producer-perf-test.sh is running and any broker restarted in middle in-between

2017-01-11 Thread huxi (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15820267#comment-15820267
 ] 

huxi commented on KAFKA-4616:
-

Append the acks option to the command like this: --producer-props 
bootstrap.servers=x.x.x.x:,x.x.x.x:,x.x.x.x: acks=-1
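
For reference, acks=-1 is equivalent to acks=all: the leader waits for the full set of in-sync replicas before acknowledging a write. A minimal Java producer sketch with the same setting (broker address and topic name are placeholders taken from the report):

{code}
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class AcksAllProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "x.x.x.x:9092"); // placeholder
        props.put(ProducerConfig.ACKS_CONFIG, "all"); // same effect as acks=-1
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("test3R3P3", "key", "value"));
        }
    }
}
{code}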

> Message log is seen when kafka-producer-perf-test.sh is running and any 
> broker restarted in middle in-between 
> --
>
> Key: KAFKA-4616
> URL: https://issues.apache.org/jira/browse/KAFKA-4616
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.10.0.0
> Environment: Apache mesos
>Reporter: sandeep kumar singh
>
> If any broker is restarted while the kafka-producer-perf-test.sh command is 
> running, we see message loss.
> Commands I run:
> **perf command:
> $ bin/kafka-producer-perf-test.sh --num-records 10 --record-size 4096  
> --throughput 1000 --topic test3R3P3 --producer-props 
> bootstrap.servers=x.x.x.x:,x.x.x.x:,x.x.x.x:
> I am sending 10 messages, each of size 4096 bytes
> error thrown by perf command:
> 4944 records sent, 988.6 records/sec (3.86 MB/sec), 31.5 ms avg latency, 
> 433.0 max latency.
> 5061 records sent, 1012.0 records/sec (3.95 MB/sec), 67.7 ms avg latency, 
> 798.0 max latency.
> 5001 records sent, 1000.0 records/sec (3.91 MB/sec), 49.0 ms avg latency, 
> 503.0 max latency.
> 5001 records sent, 1000.2 records/sec (3.91 MB/sec), 37.3 ms avg latency, 
> 594.0 max latency.
> 5001 records sent, 1000.2 records/sec (3.91 MB/sec), 32.6 ms avg latency, 
> 501.0 max latency.
> 5000 records sent, 999.8 records/sec (3.91 MB/sec), 49.4 ms avg latency, 
> 516.0 max latency.
> org.apache.kafka.common.errors.NetworkException: The server disconnected 
> before a response was received.
> org.apache.kafka.common.errors.NetworkException: The server disconnected 
> before a response was received.
> org.apache.kafka.common.errors.NetworkException: The server disconnected 
> before a response was received.
> truncated
> 5001 records sent, 1000.2 records/sec (3.91 MB/sec), 33.9 ms avg latency, 
> 497.0 max latency.
> 4928 records sent, 985.6 records/sec (3.85 MB/sec), 42.1 ms avg latency, 
> 521.0 max latency.
> 5073 records sent, 1014.4 records/sec (3.96 MB/sec), 39.4 ms avg latency, 
> 418.0 max latency.
> 10 records sent, 999.950002 records/sec (3.91 MB/sec), 37.65 ms avg 
> latency, 798.00 ms max latency, 1 ms 50th, 260 ms 95th, 411 ms 99th, 571 ms 
> 99.9th.
> **consumer command:
> $ bin/kafka-console-consumer.sh --zookeeper 
> x.x.x.x:2181/dcos-service-kafka-framework --topic  test3R3P3  
> 1>~/kafka_output.log
> message stored:
> $ wc -l ~/kafka_output.log
> 99932 /home/montana/kafka_output.log
> I found that only 99932 messages were stored and 68 messages were lost.
> **topic describe command:
>  $ bin/kafka-topics.sh  --zookeeper x.x.x.x:2181/dcos-service-kafka-framework 
> --describe |grep test3R3
> Topic:test3R3P3 PartitionCount:3ReplicationFactor:3 Configs:
> Topic: test3R3P3Partition: 0Leader: 2   Replicas: 
> 1,2,0 Isr: 2,0,1
> Topic: test3R3P3Partition: 1Leader: 2   Replicas: 
> 2,0,1 Isr: 2,0,1
> Topic: test3R3P3Partition: 2Leader: 0   Replicas: 
> 0,1,2 Isr: 2,0,1
> **consumer group command:
> $  bin/kafka-consumer-groups.sh --zookeeper 
> x.x.x.x:2181/dcos-service-kafka-framework --describe --group 
> console-consumer-9926
> GROUP  TOPIC  PARTITION  
> CURRENT-OFFSET  LOG-END-OFFSET  LAG OWNER
> console-consumer-9926  test3R3P3  0  
> 33265   33265   0   
> console-consumer-9926_node-44a8422fe1a0-1484127474935-c795478e-0
> console-consumer-9926  test3R3P3  1  
> 4   4   0   
> console-consumer-9926_node-44a8422fe1a0-1484127474935-c795478e-0
> console-consumer-9926  test3R3P3  2  
> 3   3   0   
> console-consumer-9926_node-44a8422fe1a0-1484127474935-c795478e-0
> Could you please help me understand what this error means: "err - 
> org.apache.kafka.common.errors.NetworkException: The server disconnected 
> before a response was received."?
> Could you please provide a suggestion to fix this issue?
> We are seeing this behavior every time we perform the above test scenario.
> My understanding is that there should not be any data loss while n-1 brokers 
> are alive. Is message loss expected behavior in the above case?
> thanks
> Sandeep



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-4616) Message log is seen when kafka-producer-perf-test.sh is running and any broker restarted in middle in-between

2017-01-11 Thread sandeep kumar singh (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15820237#comment-15820237
 ] 

sandeep kumar singh commented on KAFKA-4616:


Is there any way to add this to the perf-test command? I can add this option to 
the producer command, but I am not sure how to add it to the perf-test command.


> Message log is seen when kafka-producer-perf-test.sh is running and any 
> broker restarted in middle in-between 
> --
>
> Key: KAFKA-4616
> URL: https://issues.apache.org/jira/browse/KAFKA-4616
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.10.0.0
> Environment: Apache mesos
>Reporter: sandeep kumar singh



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: kafka-trunk-jdk8 #1171

2017-01-11 Thread Apache Jenkins Server
See 

Changes:

[ismael] KAFKA-4180; Support clients with different authentication credentials 
in

--
[...truncated 34351 lines...]

org.apache.kafka.streams.KafkaStreamsTest > testCleanup PASSED

org.apache.kafka.streams.KafkaStreamsTest > testNumberDefaultMetrics STARTED

org.apache.kafka.streams.KafkaStreamsTest > testNumberDefaultMetrics PASSED

org.apache.kafka.streams.KafkaStreamsTest > testStartAndClose STARTED

org.apache.kafka.streams.KafkaStreamsTest > testStartAndClose PASSED

org.apache.kafka.streams.KafkaStreamsTest > testCloseIsIdempotent STARTED

org.apache.kafka.streams.KafkaStreamsTest > testCloseIsIdempotent PASSED

org.apache.kafka.streams.KafkaStreamsTest > testCannotCleanupWhileRunning 
STARTED

org.apache.kafka.streams.KafkaStreamsTest > testCannotCleanupWhileRunning PASSED

org.apache.kafka.streams.KafkaStreamsTest > testCannotStartTwice STARTED

org.apache.kafka.streams.KafkaStreamsTest > testCannotStartTwice PASSED

org.apache.kafka.streams.integration.KStreamKTableJoinIntegrationTest > 
shouldCountClicksPerRegion[0] STARTED

org.apache.kafka.streams.integration.KStreamKTableJoinIntegrationTest > 
shouldCountClicksPerRegion[0] PASSED

org.apache.kafka.streams.integration.KStreamKTableJoinIntegrationTest > 
shouldCountClicksPerRegion[1] STARTED

org.apache.kafka.streams.integration.KStreamKTableJoinIntegrationTest > 
shouldCountClicksPerRegion[1] PASSED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldBeAbleToQueryState[0] STARTED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldBeAbleToQueryState[0] PASSED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldNotMakeStoreAvailableUntilAllStoresAvailable[0] STARTED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldNotMakeStoreAvailableUntilAllStoresAvailable[0] PASSED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
queryOnRebalance[0] STARTED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
queryOnRebalance[0] PASSED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
concurrentAccesses[0] STARTED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
concurrentAccesses[0] PASSED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldBeAbleToQueryState[1] STARTED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldBeAbleToQueryState[1] PASSED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldNotMakeStoreAvailableUntilAllStoresAvailable[1] STARTED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldNotMakeStoreAvailableUntilAllStoresAvailable[1] FAILED
java.lang.AssertionError: Condition not met within timeout 3. waiting 
for store count-by-key
at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:259)
at 
org.apache.kafka.streams.integration.QueryableStateIntegrationTest.shouldNotMakeStoreAvailableUntilAllStoresAvailable(QueryableStateIntegrationTest.java:501)

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
queryOnRebalance[1] STARTED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
queryOnRebalance[1] PASSED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
concurrentAccesses[1] STARTED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
concurrentAccesses[1] PASSED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testShouldReadFromRegexAndNamedTopics STARTED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testShouldReadFromRegexAndNamedTopics PASSED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testRegexMatchesTopicsAWhenCreated STARTED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testRegexMatchesTopicsAWhenCreated PASSED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testMultipleConsumersCanReadFromPartitionedTopic STARTED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testMultipleConsumersCanReadFromPartitionedTopic PASSED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testRegexMatchesTopicsAWhenDeleted STARTED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testRegexMatchesTopicsAWhenDeleted PASSED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testNoMessagesSentExceptionFromOverlappingPatterns STARTED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testNoMessagesSentExceptionFromOverlappingPatterns PASSED

org.apache.kafka.streams.integration.KStreamRepartitionJoinTest > 
shouldCorrectlyRepartitionOnJoinOperations[0] STARTED

org.apache.kafka.streams.integration.KStreamRepartitionJoinTest > 
shouldCorrectlyRepartitionOnJoin

[jira] [Commented] (KAFKA-4604) Gradle Eclipse plugin creates projects for non project subfolders

2017-01-11 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4604?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15820176#comment-15820176
 ] 

Guozhang Wang commented on KAFKA-4604:
--

PR link: https://github.com/apache/kafka/pull/2324

> Gradle Eclipse plugin creates projects for non project subfolders
> -
>
> Key: KAFKA-4604
> URL: https://issues.apache.org/jira/browse/KAFKA-4604
> Project: Kafka
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.11.0.0
> Environment: Tried with Gradle 3.2.1 and Eclipse Neon
>Reporter: Dhwani Katagade
>Assignee: Dhwani Katagade
>Priority: Minor
>  Labels: build, easyfix, patch
>
> Running the command *./gradlew eclipse* generates .project and .classpath 
> files for all projects. But it also generates these files for the root 
> project folder and the connect subfolder that holds the 4 connector projects 
> even though these folders are not actual project folders.
> The unnecessary connect project is benign, but the unnecessary kafka project 
> created for the root folder has a side effect. The root folder has a bin 
> directory that holds some scripts. When a _Clean all projects_ is done in 
> Eclipse, it cleans up the scripts in the bin directory. These have to be 
> restored by running *git checkout \-\- *. The same could become a 
> problem for the connect project as well if tomorrow we place some files under 
> connect/bin.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3452) Support session windows

2017-01-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3452?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15820151#comment-15820151
 ] 

ASF GitHub Bot commented on KAFKA-3452:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2342


> Support session windows
> ---
>
> Key: KAFKA-3452
> URL: https://issues.apache.org/jira/browse/KAFKA-3452
> Project: Kafka
>  Issue Type: New Feature
>  Components: streams
>Reporter: Guozhang Wang
>Assignee: Damian Guy
>  Labels: api, kip
> Fix For: 0.10.2.0
>
>
> The Streams DSL currently does not provide session windows as in the Dataflow 
> model. We have seen some common use cases for this feature, so it's worth 
> adding this support asap.
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-94+Session+Windows
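
As a rough illustration of the DSL addition, a hedged Java sketch of counting events per session (the SessionWindows.with(...) and count(...) signatures are assumed from KIP-94, not quoted from the merged code):

{code}
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KStreamBuilder;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.SessionWindows;
import org.apache.kafka.streams.kstream.Windowed;

public class SessionWindowExample {
    public static void main(String[] args) {
        KStreamBuilder builder = new KStreamBuilder();
        KStream<String, String> clicks = builder.stream("user-clicks");
        // A session for a key closes after 5 minutes with no new events.
        KTable<Windowed<String>, Long> clicksPerSession = clicks
                .groupByKey()
                .count(SessionWindows.with(5 * 60 * 1000L), "clicks-per-session-store");
    }
}
{code}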



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request #2342: KAFKA-3452: follow-up -- introduce SesssionWindows

2017-01-11 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2342


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-4114) Allow for different "auto.offset.reset" strategies for different input streams

2017-01-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15820146#comment-15820146
 ] 

ASF GitHub Bot commented on KAFKA-4114:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2007


> Allow for different "auto.offset.reset" strategies for different input streams
> --
>
> Key: KAFKA-4114
> URL: https://issues.apache.org/jira/browse/KAFKA-4114
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Matthias J. Sax
>Assignee: Bill Bejeck
> Fix For: 0.10.2.0
>
>
> Today we only have one consumer config "auto.offset.reset" to control that 
> behavior, which means all streams are read either from "earliest" or "latest".
> However, it would be useful to improve this setting to allow users to have 
> finer control over different input streams. For example, with two input 
> streams, one of them always reading from offset 0 upon (re)starting, and the 
> other reading from the log end offset.
> This JIRA requires extending the {{KStreamBuilder}} API methods 
> {{.stream(...)}} and {{.table(...)}} to add a new parameter that indicates the 
> initial offset to be used.
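
As a rough sketch of the per-stream override this introduced (Java; the AutoOffsetReset enum and stream(...) overload names are assumptions based on the feature description, not quoted from pull request 2007):

{code}
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KStreamBuilder;
import org.apache.kafka.streams.processor.TopologyBuilder;

public class PerStreamResetExample {
    public static void main(String[] args) {
        KStreamBuilder builder = new KStreamBuilder();
        // One input resets to the earliest offset when no committed offset
        // exists; the other jumps to the log end.
        KStream<String, String> replay =
                builder.stream(TopologyBuilder.AutoOffsetReset.EARLIEST, "replay-input");
        KStream<String, String> live =
                builder.stream(TopologyBuilder.AutoOffsetReset.LATEST, "live-input");
    }
}
{code}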



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request #2007: KAFKA-4114: allow different offset reset strategie...

2017-01-11 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2007


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Updated] (KAFKA-4114) Allow for different "auto.offset.reset" strategies for different input streams

2017-01-11 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4114?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-4114:
-
   Resolution: Fixed
Fix Version/s: 0.10.2.0
   Status: Resolved  (was: Patch Available)

Issue resolved by pull request 2007
[https://github.com/apache/kafka/pull/2007]

> Allow for different "auto.offset.reset" strategies for different input streams
> --
>
> Key: KAFKA-4114
> URL: https://issues.apache.org/jira/browse/KAFKA-4114
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Matthias J. Sax
>Assignee: Bill Bejeck
> Fix For: 0.10.2.0
>
>
> Today we only have one consumer config "auto.offset.reset" to control that 
> behavior, which means all streams are read either from "earliest" or "latest".
> However, it would be useful to improve this setting to give users finer 
> control over each input stream. For example, with two input streams, one of 
> them could always read from offset 0 upon (re)start, and the other from the 
> log end offset.
> This JIRA requires extending the {{KStreamBuilder}} API methods 
> {{.stream(...)}} and {{.table(...)}} with a new parameter that indicates the 
> initial offset to be used.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [ANNOUNCE] New committer: Grant Henke

2017-01-11 Thread Jun Rao
Grant,

Thanks for all your contribution! Congratulations!

Jun

On Wed, Jan 11, 2017 at 2:51 PM, Gwen Shapira  wrote:

> The PMC for Apache Kafka has invited Grant Henke to join as a
> committer and we are pleased to announce that he has accepted!
>
> Grant contributed 88 patches, 90 code reviews, countless great
> comments on discussions, a much-needed cleanup to our protocol and the
> on-going and critical work on the Admin protocol. Throughout this, he
> displayed great technical judgment, high-quality work and willingness
> to contribute where needed to make Apache Kafka awesome.
>
> Thank you for your contributions, Grant :)
>
> --
> Gwen Shapira
> Product Manager | Confluent
> 650.450.2760 | @gwenshap
> Follow us: Twitter | blog
>


Jenkins build is back to normal : kafka-trunk-jdk7 #1825

2017-01-11 Thread Apache Jenkins Server
See 



Re: [DISCUSS] KIP-110: Add Codec for ZStandard Compression

2017-01-11 Thread Dongjin Lee
Okay, I will give it a try.
Thanks, Ewen, for the guidance!

Best,
Dongjin

On Thu, Jan 12, 2017 at 6:44 AM, Ismael Juma  wrote:

> That's a good point, Ewen. Dongjin, you could use the branch that Ewen
> linked for the performance testing. It would also help validate the PR.
>
> Ismael
>
> On Wed, Jan 11, 2017 at 9:38 PM, Ewen Cheslack-Postava 
> wrote:
>
> > FYI, there's an outstanding patch for getting some JMH benchmarking
> > setup: https://github.com/apache/kafka/pull/1712 I haven't found time
> > to review it (and don't really know JMH well anyway) but it might be
> > worth getting that landed so we can use it for this as well.
> >
> > -Ewen
> >
> > On Wed, Jan 11, 2017 at 6:35 AM, Dongjin Lee  wrote:
> >
> > > Hi Ismael,
> > >
> > > 1. In the case of compression output, yes, LZ4 is producing smaller
> > > output than gzip. In fact, my benchmark was inspired by the
> > > MessageCompressionTest#testCompressSize unit test, and the result is
> > > the same - 396 bytes for gzip and 387 bytes for LZ4.
> > > 2. I agree that my (former) approach can result in unreliable output.
> > > However, I am having difficulty working out how to acquire the
> > > benchmark metrics from Kafka. As for JMH, which you recommended, I
> > > have just started googling it. If possible, could you give an example
> > > of how to use JMH against Kafka? That would be a great help.
> > > Regards, Dongjin
> > >
> > > _
> > > From: Ismael Juma 
> > > Sent: Wednesday, January 11, 2017 7:33 PM
> > > Subject: Re: [DISCUSS] KIP-110: Add Codec for ZStandard Compression
> > > To:  
> > >
> > >
> > > Thanks, Dongjin. I highly recommend using JMH for the benchmark; the
> > > existing one has a few problems that could result in unreliable
> > > results. Also, it's a bit surprising that LZ4 is producing smaller
> > > output than gzip. Is that right?
> > >
> > > Ismael
> > >
> > > On Wed, Jan 11, 2017 at 10:20 AM, Dongjin Lee 
> > > wrote:
> > >
> > > > Ismael,
> > > >
> > > > I pushed the benchmark code I used, with some updates (iterations:
> > > > 20 -> 1000). I also updated the KIP page with the updated benchmark
> > > > results. Please take a look when you are free. The attached
> > > > screenshot shows how to run the benchmarker.
> > > >
> > > > Thanks,
> > > > Dongjin
> > > >
> > > > On Tue, Jan 10, 2017 at 8:03 PM, Dongjin Lee 
> > > > wrote:
> > > >
> > > >> Ismael,
> > > >>
> > > >> I see. Then, I will share the benchmark code I used by tomorrow.
> > > >> Thanks for your guidance.
> > > >>
> > > >> Best,
> > > >> Dongjin
> > > >>
> > > >> -
> > > >>
> > > >> Dongjin Lee
> > > >>
> > > >> Software developer in Line+.
> > > >> So interested in massive-scale machine learning.
> > > >>
> > > >> facebook: www.facebook.com/dongjin.lee.kr
> > > >> linkedin: kr.linkedin.com/in/dongjinleekr
> > > >> github: github.com/dongjinleekr
> > > >> twitter: www.twitter.com/dongjinleekr
> > > >>
> > > >>
> > > >> On Tue, Jan 10, 2017 at 7:24 PM +0900, "Ismael Juma"
> > > >>  wrote:
> > > >>
> > > >>> Dongjin,
> > > >>>
> > > >>> The KIP states:
> > > >>>
> > > >>> "I compared the compressed size and compression time of 3 1kb-sized
> > > >>> messages (3102 bytes in total), with the Draft-implementation of
> > > >>> ZStandard Compression Codec and all currently available
> > > >>> CompressionCodecs. All elapsed times are the average of 20 trials."
> > > >>>
> > > >>> But it doesn't give any details of how this was implemented. Is the
> > > >>> source code available somewhere? Micro-benchmarking on the JVM is
> > > >>> pretty tricky, so it needs verification before the numbers can be
> > > >>> trusted. A performance test with kafka-producer-perf-test.sh would
> > > >>> be nice to have as well, if possible.
> > > >>>
> > > >>> Thanks,
> > > >>> Ismael
> > > >>>
> > > >>> On Tue, Jan 10, 2017 at 7:44 AM, Dongjin Lee  wrote:
> > > >>>
> > > >>> > Ismael,
> > > >>> >
> > > >>> > 1. Is the benchmark on the KIP page not enough? Do you mean we
> > > >>> > need a full performance test using kafka-producer-perf-test.sh?
> > > >>> >
> > > >>> > 2. It seems like no major project is relying on it currently.
> > > >>> > However, after reviewing the code, I concluded that at least
> > > >>> > this project has good test coverage. As for the problem of
> > > >>> > upstream tracking - although there is no significant update to
> > > >>> > ZStandard by which to judge this, it seems not bad. If required,
> > > >>> > I can take responsibility for tracking this library.
> > > >>> >
> > > >>> > Thanks,
> > > >>> > Dongjin
> > > >>> >
> > > >>> > On Tue, Jan 10, 2017 at 7:09 AM, Ismael Juma  wrote:
> > > >>> >
> > > >>> > > Thanks for posting the KIP, ZStandard looks like a nice
> > > >>> > > improvement over the existing compression algorithms. A couple
> > > >>> > > of questions:
> > > >>> > >
> > > >>> > > 1. Can you plea

[jira] [Created] (KAFKA-4622) KafkaConsumer does not properly handle authorization errors from offset fetches

2017-01-11 Thread Jason Gustafson (JIRA)
Jason Gustafson created KAFKA-4622:
--

 Summary: KafkaConsumer does not properly handle authorization 
errors from offset fetches
 Key: KAFKA-4622
 URL: https://issues.apache.org/jira/browse/KAFKA-4622
 Project: Kafka
  Issue Type: Bug
  Components: consumer
Affects Versions: 0.10.1.1, 0.10.0.1, 0.9.0.1
Reporter: Jason Gustafson
Assignee: Jason Gustafson


It's possible to receive both group and topic authorization exceptions from an 
OffsetFetch, but the consumer currently treats these as generic unexpected 
errors. We should probably return {{GroupAuthorizationException}} and 
{{TopicAuthorizationException}} to be consistent with the other consumer APIs. 
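By way of illustration, caller-side handling could look like the following sketch once the specific errors are surfaced (it assumes {{consumer}} is an initialized {{KafkaConsumer}} and that {{poll}} propagates the new exception types):

{code:java}
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.common.errors.GroupAuthorizationException;
import org.apache.kafka.common.errors.TopicAuthorizationException;

try {
    ConsumerRecords<String, String> records = consumer.poll(1000);
    // ... process records ...
} catch (GroupAuthorizationException e) {
    // ACLs deny access to the consumer group.
    System.err.println("Not authorized for group: " + e.groupId());
} catch (TopicAuthorizationException e) {
    // ACLs deny access to one or more of the subscribed topics.
    System.err.println("Not authorized for topics: " + e.unauthorizedTopics());
}
{code}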



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-4622) KafkaConsumer does not properly handle authorization errors from offset fetches

2017-01-11 Thread Jason Gustafson (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4622?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson updated KAFKA-4622:
---
Fix Version/s: 0.10.2.0

> KafkaConsumer does not properly handle authorization errors from offset 
> fetches
> ---
>
> Key: KAFKA-4622
> URL: https://issues.apache.org/jira/browse/KAFKA-4622
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.9.0.1, 0.10.0.1, 0.10.1.1
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
> Fix For: 0.10.2.0
>
>
> It's possible to receive both group and topic authorization exceptions from 
> an OffsetFetch, but the consumer currently treats these as generic unexpected 
> errors. We should probably return {{GroupAuthorizationException}} and 
> {{TopicAuthorizationException}} to be consistent with the other consumer 
> APIs. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2944) NullPointerException in KafkaConfigStorage when config storage starts right before shutdown request

2017-01-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15819890#comment-15819890
 ] 

ASF GitHub Bot commented on KAFKA-2944:
---

Github user jinxing64 closed the pull request at:

https://github.com/apache/kafka/pull/723


> NullPointerException in KafkaConfigStorage when config storage starts right 
> before shutdown request
> ---
>
> Key: KAFKA-2944
> URL: https://issues.apache.org/jira/browse/KAFKA-2944
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 0.9.0.0
>Reporter: Ewen Cheslack-Postava
>Assignee: Gwen Shapira
> Fix For: 0.10.0.0
>
>
> Relevant log where you can see a config update starting, then the shutdown 
> request arriving, and finally the resulting NullPointerException:
> {quote}
> [2015-12-03 09:12:55,712] DEBUG Change in connector task count from 2 to 3, 
> writing updated task configurations 
> (org.apache.kafka.connect.runtime.distributed.DistributedHerder)
> [2015-12-03 09:12:56,224] INFO Kafka Connect stopping 
> (org.apache.kafka.connect.runtime.Connect)
> [2015-12-03 09:12:56,224] INFO Stopping REST server 
> (org.apache.kafka.connect.runtime.rest.RestServer)
> [2015-12-03 09:12:56,227] INFO Stopped 
> ServerConnector@10cb550e{HTTP/1.1}{0.0.0.0:8083} 
> (org.eclipse.jetty.server.ServerConnector)
> [2015-12-03 09:12:56,234] INFO Stopped 
> o.e.j.s.ServletContextHandler@3f8a24d5{/,null,UNAVAILABLE} 
> (org.eclipse.jetty.server.handler.ContextHandler)
> [2015-12-03 09:12:56,235] INFO REST server stopped 
> (org.apache.kafka.connect.runtime.rest.RestServer)
> [2015-12-03 09:12:56,235] INFO Herder stopping 
> (org.apache.kafka.connect.runtime.distributed.DistributedHerder)
> [2015-12-03 09:12:58,209] ERROR Unexpected exception in KafkaBasedLog's work 
> thread (org.apache.kafka.connect.util.KafkaBasedLog)
> java.lang.NullPointerException
>   at 
> org.apache.kafka.connect.storage.KafkaConfigStorage.completeTaskIdSet(KafkaConfigStorage.java:558)
>   at 
> org.apache.kafka.connect.storage.KafkaConfigStorage.access$1200(KafkaConfigStorage.java:143)
>   at 
> org.apache.kafka.connect.storage.KafkaConfigStorage$1.onCompletion(KafkaConfigStorage.java:476)
>   at 
> org.apache.kafka.connect.storage.KafkaConfigStorage$1.onCompletion(KafkaConfigStorage.java:372)
>   at 
> org.apache.kafka.connect.util.KafkaBasedLog.poll(KafkaBasedLog.java:235)
>   at 
> org.apache.kafka.connect.util.KafkaBasedLog.readToLogEnd(KafkaBasedLog.java:275)
>   at 
> org.apache.kafka.connect.util.KafkaBasedLog.access$300(KafkaBasedLog.java:70)
>   at 
> org.apache.kafka.connect.util.KafkaBasedLog$WorkThread.run(KafkaBasedLog.java:307)
> [2015-12-03 09:13:26,704] ERROR Failed to write root configuration to Kafka:  
> (org.apache.kafka.connect.storage.KafkaConfigStorage)
> java.util.concurrent.TimeoutException: Timed out waiting for future
>   at 
> org.apache.kafka.connect.util.ConvertingFutureCallback.get(ConvertingFutureCallback.java:74)
>   at 
> org.apache.kafka.connect.storage.KafkaConfigStorage.putTaskConfigs(KafkaConfigStorage.java:352)
>   at 
> org.apache.kafka.connect.runtime.distributed.DistributedHerder.reconfigureConnector(DistributedHerder.java:737)
>   at 
> org.apache.kafka.connect.runtime.distributed.DistributedHerder.reconfigureConnectorTasksWithRetry(DistributedHerder.java:677)
>   at 
> org.apache.kafka.connect.runtime.distributed.DistributedHerder.startConnector(DistributedHerder.java:673)
>   at 
> org.apache.kafka.connect.runtime.distributed.DistributedHerder.startWork(DistributedHerder.java:640)
>   at 
> org.apache.kafka.connect.runtime.distributed.DistributedHerder.handleRebalanceCompleted(DistributedHerder.java:598)
>   at 
> org.apache.kafka.connect.runtime.distributed.DistributedHerder.tick(DistributedHerder.java:184)
>   at 
> org.apache.kafka.connect.runtime.distributed.DistributedHerder.run(DistributedHerder.java:159)
>   at java.lang.Thread.run(Thread.java:745)
> [2015-12-03 09:13:26,704] ERROR Failed to reconfigure connector's tasks, 
> retrying after backoff: 
> (org.apache.kafka.connect.runtime.distributed.DistributedHerder)
> org.apache.kafka.connect.errors.ConnectException: Error writing root 
> configuration to Kafka
>   at 
> org.apache.kafka.connect.storage.KafkaConfigStorage.putTaskConfigs(KafkaConfigStorage.java:355)
>   at 
> org.apache.kafka.connect.runtime.distributed.DistributedHerder.reconfigureConnector(DistributedHerder.java:737)
>   at 
> org.apache.kafka.connect.runtime.distributed.DistributedHerder.reconfigureConnectorTasksWithRetry(DistributedHerder.java:677)
>   at 
> org.apache.kafka.connect.runtime.distributed.DistributedHerder.st

[jira] [Commented] (KAFKA-2875) Class path contains multiple SLF4J bindings warnings when using scripts under bin

2017-01-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15819893#comment-15819893
 ] 

ASF GitHub Bot commented on KAFKA-2875:
---

Github user jinxing64 closed the pull request at:

https://github.com/apache/kafka/pull/693


> Class path contains multiple SLF4J bindings warnings when using scripts under 
> bin
> -
>
> Key: KAFKA-2875
> URL: https://issues.apache.org/jira/browse/KAFKA-2875
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 0.9.0.0
>Reporter: Ismael Juma
>Assignee: jin xing
>Priority: Minor
>  Labels: patch
> Fix For: 0.10.2.0
>
>
> This adds a lot of noise when running the scripts; see this example from 
> running kafka-console-producer.sh:
> {code}
> ~/D/s/kafka-0.9.0.0-src ❯❯❯ ./bin/kafka-console-producer.sh --topic topic 
> --broker-list localhost:9092 ⏎
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in 
> [jar:file:/Users/ijuma/Downloads/scala-releases/kafka-0.9.0.0-src/core/build/dependant-libs-2.10.5/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/Users/ijuma/Downloads/scala-releases/kafka-0.9.0.0-src/core/build/dependant-libs-2.10.5/slf4j-log4j12-1.7.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/Users/ijuma/Downloads/scala-releases/kafka-0.9.0.0-src/tools/build/dependant-libs-2.10.5/slf4j-log4j12-1.7.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/Users/ijuma/Downloads/scala-releases/kafka-0.9.0.0-src/connect/api/build/dependant-libs/slf4j-log4j12-1.7.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/Users/ijuma/Downloads/scala-releases/kafka-0.9.0.0-src/connect/runtime/build/dependant-libs/slf4j-log4j12-1.7.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/Users/ijuma/Downloads/scala-releases/kafka-0.9.0.0-src/connect/file/build/dependant-libs/slf4j-log4j12-1.7.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/Users/ijuma/Downloads/scala-releases/kafka-0.9.0.0-src/connect/json/build/dependant-libs/slf4j-log4j12-1.7.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an 
> explanation.
> SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request #693: KAFKA-2875: remove slf4j multi binding warnings whe...

2017-01-11 Thread jinxing64
Github user jinxing64 closed the pull request at:

https://github.com/apache/kafka/pull/693


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] kafka pull request #723: KAFKA-2944: fix NullPointerException in KafkaConfig...

2017-01-11 Thread jinxing64
Github user jinxing64 closed the pull request at:

https://github.com/apache/kafka/pull/723


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] kafka pull request #783: KAFKA-3106: Fix updating an existing connector conf...

2017-01-11 Thread jinxing64
Github user jinxing64 closed the pull request at:

https://github.com/apache/kafka/pull/783


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-3106) After PUT a connector config from REST API, GET a connector config will fail

2017-01-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15819889#comment-15819889
 ] 

ASF GitHub Bot commented on KAFKA-3106:
---

Github user jinxing64 closed the pull request at:

https://github.com/apache/kafka/pull/783


> After  PUT a connector config from REST API, GET a connector config will fail
> -
>
> Key: KAFKA-3106
> URL: https://issues.apache.org/jira/browse/KAFKA-3106
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Reporter: jin xing
>Assignee: jin xing
>Priority: Minor
>
> If there is already a connector in Connect and we PUT a connector config via 
> the REST API, the assignment offset of the DistributedHerder will fall below 
> the configState offset, so a subsequent GET of the connector config through 
> the REST API will fail because it cannot pass "checkConfigSynced".
> The failure message is "Cannot get config data because config is not in sync 
> and this is not the leader".
> There needs to be a rebalance after the PUT so that the assignment offset is 
> updated.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: kafka-trunk-jdk8 #1170

2017-01-11 Thread Apache Jenkins Server
See 

Changes:

[jason] KAFKA-4426; Add close with timeout for KafkaConsumer (KIP-102)

--
[...truncated 18310 lines...]

org.apache.kafka.streams.KafkaStreamsTest > testNumberDefaultMetrics STARTED

org.apache.kafka.streams.KafkaStreamsTest > testNumberDefaultMetrics PASSED

org.apache.kafka.streams.KafkaStreamsTest > testStartAndClose STARTED

org.apache.kafka.streams.KafkaStreamsTest > testStartAndClose PASSED

org.apache.kafka.streams.KafkaStreamsTest > testCloseIsIdempotent STARTED

org.apache.kafka.streams.KafkaStreamsTest > testCloseIsIdempotent PASSED

org.apache.kafka.streams.KafkaStreamsTest > testCannotCleanupWhileRunning 
STARTED

org.apache.kafka.streams.KafkaStreamsTest > testCannotCleanupWhileRunning PASSED

org.apache.kafka.streams.KafkaStreamsTest > testCannotStartTwice STARTED

org.apache.kafka.streams.KafkaStreamsTest > testCannotStartTwice PASSED

org.apache.kafka.streams.integration.KStreamKTableJoinIntegrationTest > 
shouldCountClicksPerRegion[0] STARTED

org.apache.kafka.streams.integration.KStreamKTableJoinIntegrationTest > 
shouldCountClicksPerRegion[0] PASSED

org.apache.kafka.streams.integration.KStreamKTableJoinIntegrationTest > 
shouldCountClicksPerRegion[1] STARTED

org.apache.kafka.streams.integration.KStreamKTableJoinIntegrationTest > 
shouldCountClicksPerRegion[1] PASSED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldBeAbleToQueryState[0] STARTED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldBeAbleToQueryState[0] PASSED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldNotMakeStoreAvailableUntilAllStoresAvailable[0] STARTED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldNotMakeStoreAvailableUntilAllStoresAvailable[0] PASSED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
queryOnRebalance[0] STARTED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
queryOnRebalance[0] FAILED
java.lang.AssertionError: Condition not met within timeout 6. Did not 
receive 1 number of records
at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:259)
at 
org.apache.kafka.streams.integration.utils.IntegrationTestUtils.waitUntilMinValuesRecordsReceived(IntegrationTestUtils.java:253)
at 
org.apache.kafka.streams.integration.QueryableStateIntegrationTest.waitUntilAtLeastNumRecordProcessed(QueryableStateIntegrationTest.java:668)
at 
org.apache.kafka.streams.integration.QueryableStateIntegrationTest.queryOnRebalance(QueryableStateIntegrationTest.java:349)

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
concurrentAccesses[0] STARTED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
concurrentAccesses[0] PASSED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldBeAbleToQueryState[1] STARTED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldBeAbleToQueryState[1] PASSED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldNotMakeStoreAvailableUntilAllStoresAvailable[1] STARTED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldNotMakeStoreAvailableUntilAllStoresAvailable[1] PASSED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
queryOnRebalance[1] STARTED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
queryOnRebalance[1] PASSED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
concurrentAccesses[1] STARTED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
concurrentAccesses[1] PASSED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testShouldReadFromRegexAndNamedTopics STARTED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testShouldReadFromRegexAndNamedTopics PASSED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testRegexMatchesTopicsAWhenCreated STARTED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testRegexMatchesTopicsAWhenCreated PASSED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testMultipleConsumersCanReadFromPartitionedTopic STARTED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testMultipleConsumersCanReadFromPartitionedTopic PASSED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testRegexMatchesTopicsAWhenDeleted STARTED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testRegexMatchesTopicsAWhenDeleted PASSED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testNoMessagesSentExceptionFromOverlappingPatterns STARTED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testNoMessagesSentExceptionFromOverlappingPatterns PASSED

org.apache.kafka.streams.integr

[jira] [Commented] (KAFKA-4621) Fetch Request V3 docs list max_bytes twice

2017-01-11 Thread Jeff Widman (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4621?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15819844#comment-15819844
 ] 

Jeff Widman commented on KAFKA-4621:


Yeah, I saw that... perhaps the descriptions could also be cleaned up slightly 
to make it clearer what they're referring to... I'll try to put together a 
PR at some point here... 

> Fetch Request V3 docs list max_bytes twice
> --
>
> Key: KAFKA-4621
> URL: https://issues.apache.org/jira/browse/KAFKA-4621
> Project: Kafka
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 0.10.2.0
>Reporter: Jeff Widman
>Priority: Minor
>
> http://kafka.apache.org/protocol.html
> Fetch Request (Version: 3)  lists "max_bytes" twice, but with different 
> descriptions. This is confusing, as it's not apparent if this is an 
> accidental mistake or a purposeful inclusion... if purposeful, it's not clear 
> why.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (KAFKA-4180) Shared authentication with multiple active Kafka producers/consumers

2017-01-11 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma resolved KAFKA-4180.

   Resolution: Fixed
Fix Version/s: 0.10.2.0

Issue resolved by pull request 2293
[https://github.com/apache/kafka/pull/2293]

> Shared authentication with multiple active Kafka producers/consumers
> 
>
> Key: KAFKA-4180
> URL: https://issues.apache.org/jira/browse/KAFKA-4180
> Project: Kafka
>  Issue Type: Bug
>  Components: producer , security
>Affects Versions: 0.10.0.1
>Reporter: Guillaume Grossetie
>Assignee: Mickael Maison
>  Labels: authentication, jaas, loginmodule, plain, producer, 
> sasl, user
> Fix For: 0.10.2.0
>
>
> I'm using Kafka 0.10.0.1 with SASL authentication on the client:
> {code:title=kafka_client_jaas.conf|borderStyle=solid}
> KafkaClient {
> org.apache.kafka.common.security.plain.PlainLoginModule required
> username="guillaume"
> password="secret";
> };
> {code}
> When using multiple Kafka producers, the authentication is shared [1]. In 
> other words, it's not currently possible to have multiple Kafka producers 
> with different credentials in one JVM process.
> Am I missing something? How can I have multiple active Kafka producers with 
> different credentials?
> My use case is an application that sends messages to multiple clusters (one 
> cluster for logs, one cluster for metrics, one cluster for business data).
> [1] 
> https://github.com/apache/kafka/blob/69ebf6f7be2fc0e471ebd5b7a166468017ff2651/clients/src/main/java/org/apache/kafka/common/security/authenticator/LoginManager.java#L35
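For reference, the resolution (pull request 2293, fixed for 0.10.2.0) makes per-client JAAS configuration possible. A sketch of what that enables via the client-side {{sasl.jaas.config}} property (property name per KIP-85; serializers and the rest of the producer configuration are omitted here):

{code:java}
import java.util.Properties;

Properties logsProps = new Properties();
logsProps.put("bootstrap.servers", "logs-cluster:9092");
logsProps.put("security.protocol", "SASL_PLAINTEXT");
logsProps.put("sasl.mechanism", "PLAIN");
// Credentials scoped to this client, instead of the JVM-wide static JAAS file:
logsProps.put("sasl.jaas.config",
    "org.apache.kafka.common.security.plain.PlainLoginModule required "
        + "username=\"guillaume\" password=\"secret\";");

Properties metricsProps = new Properties();
metricsProps.put("bootstrap.servers", "metrics-cluster:9092");
metricsProps.put("security.protocol", "SASL_PLAINTEXT");
metricsProps.put("sasl.mechanism", "PLAIN");
// A second producer in the same JVM with different credentials:
metricsProps.put("sasl.jaas.config",
    "org.apache.kafka.common.security.plain.PlainLoginModule required "
        + "username=\"another-user\" password=\"another-secret\";");
{code}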



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-4180) Shared authentication with multiple active Kafka producers/consumers

2017-01-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15819830#comment-15819830
 ] 

ASF GitHub Bot commented on KAFKA-4180:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2293


> Shared authentication with multiple active Kafka producers/consumers
> 
>
> Key: KAFKA-4180
> URL: https://issues.apache.org/jira/browse/KAFKA-4180
> Project: Kafka
>  Issue Type: Bug
>  Components: producer , security
>Affects Versions: 0.10.0.1
>Reporter: Guillaume Grossetie
>Assignee: Mickael Maison
>  Labels: authentication, jaas, loginmodule, plain, producer, 
> sasl, user
> Fix For: 0.10.2.0
>
>
> I'm using Kafka 0.10.0.1 with SASL authentication on the client:
> {code:title=kafka_client_jaas.conf|borderStyle=solid}
> KafkaClient {
> org.apache.kafka.common.security.plain.PlainLoginModule required
> username="guillaume"
> password="secret";
> };
> {code}
> When using multiple Kafka producers, the authentication is shared [1]. In 
> other words, it's not currently possible to have multiple Kafka producers 
> with different credentials in one JVM process.
> Am I missing something? How can I have multiple active Kafka producers with 
> different credentials?
> My use case is an application that sends messages to multiple clusters (one 
> cluster for logs, one cluster for metrics, one cluster for business data).
> [1] 
> https://github.com/apache/kafka/blob/69ebf6f7be2fc0e471ebd5b7a166468017ff2651/clients/src/main/java/org/apache/kafka/common/security/authenticator/LoginManager.java#L35



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request #2293: KAFKA-4180 : Authentication with multiple actives ...

2017-01-11 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2293


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] kafka pull request #2348: MINOR: Minor improvements in consumer close timeou...

2017-01-11 Thread hachikuji
GitHub user hachikuji opened a pull request:

https://github.com/apache/kafka/pull/2348

MINOR: Minor improvements in consumer close timeout handling



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hachikuji/kafka minor-cleanup-for-kip-102

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2348.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2348


commit c3708b0ee3bb7837b296780e4485970e7d13b0f5
Author: Jason Gustafson 
Date:   2017-01-12T01:18:02Z

MINOR: Minor improvements in consumer close timeout handling




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-4621) Fetch Request V3 docs list max_bytes twice

2017-01-11 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4621?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15819790#comment-15819790
 ] 

Ismael Juma commented on KAFKA-4621:


One of them is for the whole request and the other is for the partition. If you 
look at the definition of the fetch request instead of just the table, it 
should be clear. Looks like we need to improve the table generation to handle 
such cases better.
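For reference, an abridged view of the two levels (field names follow the published protocol BNF; this summary is for orientation only, not the authoritative schema):

Fetch Request (Version: 3) => replica_id max_wait_time min_bytes max_bytes [topics]
  max_bytes                 -- added in v3: limits the total size of the response
  topics => topic [partitions]
    partitions => partition fetch_offset max_bytes
      max_bytes             -- pre-existing: limits the data returned per partition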

> Fetch Request V3 docs list max_bytes twice
> --
>
> Key: KAFKA-4621
> URL: https://issues.apache.org/jira/browse/KAFKA-4621
> Project: Kafka
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 0.10.2.0
>Reporter: Jeff Widman
>Priority: Minor
>
> http://kafka.apache.org/protocol.html
> Fetch Request (Version: 3)  lists "max_bytes" twice, but with different 
> descriptions. This is confusing, as it's not apparent if this is an 
> accidental mistake or a purposeful inclusion... if purposeful, it's not clear 
> why.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: kafka-trunk-jdk7 #1824

2017-01-11 Thread Apache Jenkins Server
See 

--
[...truncated 18225 lines...]

org.apache.kafka.streams.KafkaStreamsTest > testStartAndClose PASSED

org.apache.kafka.streams.KafkaStreamsTest > testCloseIsIdempotent STARTED

org.apache.kafka.streams.KafkaStreamsTest > testCloseIsIdempotent PASSED

org.apache.kafka.streams.KafkaStreamsTest > testCannotCleanupWhileRunning 
STARTED

org.apache.kafka.streams.KafkaStreamsTest > testCannotCleanupWhileRunning PASSED

org.apache.kafka.streams.KafkaStreamsTest > testCannotStartTwice STARTED

org.apache.kafka.streams.KafkaStreamsTest > testCannotStartTwice PASSED

org.apache.kafka.streams.integration.KStreamKTableJoinIntegrationTest > 
shouldCountClicksPerRegion[0] STARTED

org.apache.kafka.streams.integration.KStreamKTableJoinIntegrationTest > 
shouldCountClicksPerRegion[0] PASSED

org.apache.kafka.streams.integration.KStreamKTableJoinIntegrationTest > 
shouldCountClicksPerRegion[1] STARTED

org.apache.kafka.streams.integration.KStreamKTableJoinIntegrationTest > 
shouldCountClicksPerRegion[1] PASSED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldBeAbleToQueryState[0] STARTED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldBeAbleToQueryState[0] PASSED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldNotMakeStoreAvailableUntilAllStoresAvailable[0] STARTED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldNotMakeStoreAvailableUntilAllStoresAvailable[0] FAILED
java.lang.AssertionError: Condition not met within timeout 3. waiting 
for store count-by-key
at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:259)
at 
org.apache.kafka.streams.integration.QueryableStateIntegrationTest.shouldNotMakeStoreAvailableUntilAllStoresAvailable(QueryableStateIntegrationTest.java:501)

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
queryOnRebalance[0] STARTED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
queryOnRebalance[0] FAILED
java.lang.AssertionError: Condition not met within timeout 6. Did not 
receive 1 number of records
at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:259)
at 
org.apache.kafka.streams.integration.utils.IntegrationTestUtils.waitUntilMinValuesRecordsReceived(IntegrationTestUtils.java:253)
at 
org.apache.kafka.streams.integration.QueryableStateIntegrationTest.waitUntilAtLeastNumRecordProcessed(QueryableStateIntegrationTest.java:668)
at 
org.apache.kafka.streams.integration.QueryableStateIntegrationTest.queryOnRebalance(QueryableStateIntegrationTest.java:349)

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
concurrentAccesses[0] STARTED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
concurrentAccesses[0] PASSED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldBeAbleToQueryState[1] STARTED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldBeAbleToQueryState[1] PASSED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldNotMakeStoreAvailableUntilAllStoresAvailable[1] STARTED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
shouldNotMakeStoreAvailableUntilAllStoresAvailable[1] FAILED
java.lang.AssertionError: Condition not met within timeout 3. waiting 
for store count-by-key
at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:259)
at 
org.apache.kafka.streams.integration.QueryableStateIntegrationTest.shouldNotMakeStoreAvailableUntilAllStoresAvailable(QueryableStateIntegrationTest.java:501)

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
queryOnRebalance[1] STARTED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
queryOnRebalance[1] PASSED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
concurrentAccesses[1] STARTED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
concurrentAccesses[1] PASSED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testShouldReadFromRegexAndNamedTopics STARTED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testShouldReadFromRegexAndNamedTopics PASSED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testRegexMatchesTopicsAWhenCreated STARTED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testRegexMatchesTopicsAWhenCreated PASSED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testMultipleConsumersCanReadFromPartitionedTopic STARTED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testMultipleConsumersCanReadFromPartitionedTopic PASSED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > 
testRegexMatchesTopicsAWhenDeleted STARTED

org.a

[jira] [Created] (KAFKA-4621) Fetch Request V3 docs list max_bytes twice

2017-01-11 Thread Jeff Widman (JIRA)
Jeff Widman created KAFKA-4621:
--

 Summary: Fetch Request V3 docs list max_bytes twice
 Key: KAFKA-4621
 URL: https://issues.apache.org/jira/browse/KAFKA-4621
 Project: Kafka
  Issue Type: Bug
  Components: documentation
Affects Versions: 0.10.2.0
Reporter: Jeff Widman
Priority: Minor


http://kafka.apache.org/protocol.html

Fetch Request (Version: 3)  lists "max_bytes" twice, but with different 
descriptions. This is confusing, as it's not apparent if this is an accidental 
mistake or a purposeful inclusion... if purposeful, it's not clear why.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request #2347: Initial commit of partition assignment check in co...

2017-01-11 Thread apurvam
GitHub user apurvam opened a pull request:

https://github.com/apache/kafka/pull/2347

Initial commit of partition assignment check in console consumer.

With this patch, the consumer will be considered initialized in the 
ProduceConsumeValidate tests only if it has partitions assigned.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/apurvam/kafka 
KAFKA-4588-fix-race-between-producer-consumer-start

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2347.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2347


commit ea68caf42c7e8811163583a44b0d7fb1d0150514
Author: Apurva Mehta 
Date:   2017-01-10T23:02:11Z

Initial commit of partition assignment check in console consumer as a
proxy for the consumer being initialized. With this patch, the consumer
will be considered initialized in the ProduceConsumeValidate tests only if
it has partitions assigned.




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Updated] (KAFKA-4426) Add consumer.close(timeout, unit) for graceful close with timeout

2017-01-11 Thread Jason Gustafson (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4426?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson updated KAFKA-4426:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Issue resolved by pull request 2285
[https://github.com/apache/kafka/pull/2285]

> Add consumer.close(timeout, unit) for graceful close with timeout
> -
>
> Key: KAFKA-4426
> URL: https://issues.apache.org/jira/browse/KAFKA-4426
> Project: Kafka
>  Issue Type: Improvement
>  Components: consumer
>Affects Versions: 0.10.1.0
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
> Fix For: 0.10.2.0
>
>
> KAFKA-3703 implements graceful close of consumers with a hard-coded timeout 
> of 5 seconds. For consistency with the producer, add a close method with 
> configurable timeout for Consumer.
> {quote}
> public void close(long timeout, TimeUnit unit);
> {quote}
> Since this is a public interface change, this change requires a KIP.
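As a usage sketch (assuming {{consumer}} is a {{KafkaConsumer}} on a release with the KIP-102 API):

{code:java}
import java.util.concurrent.TimeUnit;

// Wait up to 30 seconds for offset commits and a clean group leave;
// after the timeout the close is forced (KIP-102 semantics):
consumer.close(30, TimeUnit.SECONDS);
{code}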



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request #2285: KAFKA-4426: Add close with timeout for KafkaConsum...

2017-01-11 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2285


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-4426) Add consumer.close(timeout, unit) for graceful close with timeout

2017-01-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4426?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15819632#comment-15819632
 ] 

ASF GitHub Bot commented on KAFKA-4426:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/2285


> Add consumer.close(timeout, unit) for graceful close with timeout
> -
>
> Key: KAFKA-4426
> URL: https://issues.apache.org/jira/browse/KAFKA-4426
> Project: Kafka
>  Issue Type: Improvement
>  Components: consumer
>Affects Versions: 0.10.1.0
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
> Fix For: 0.10.2.0
>
>
> KAFKA-3703 implements graceful close of consumers with a hard-coded timeout 
> of 5 seconds. For consistency with the producer, add a close method with 
> configurable timeout for Consumer.
> {quote}
> public void close(long timeout, TimeUnit unit);
> {quote}
> Since this is a public interface change, this change requires a KIP.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [VOTE] KIP-109: Old Consumer Deprecation

2017-01-11 Thread Jason Gustafson
+1 from me. I'm in favor of deprecating in 0.10.2 if possible, or in the
next release at the latest. As Ewen and Stevo have pointed out, it is
already effectively deprecated.

-Jason

On Wed, Jan 11, 2017 at 4:01 PM, Stevo Slavić  wrote:

> +1 (non-binding), and for deprecating it ASAP. It's already effectively
> deprecated: it is not supported, and new features and bug fixes end up only
> in the new clients API. So it would be fair to communicate clearly to users
> of the old consumer API that it is deprecated, that further or new use is
> discouraged, and that anyone who continues to use it - or especially decides
> to start using it - does so at their own risk. Deprecation is just a
> recommendation.
>
> Wish SimpleConsumer was never part of the public API.
>
> On Thu, Jan 12, 2017 at 12:24 AM, Ismael Juma  wrote:
>
> > Ewen,
> >
> > I think a policy of giving it a minimum of one year between deprecation
> > and removal for this case seems reasonable.
> >
> > Ismael
> >
> > On Wed, Jan 11, 2017 at 5:45 AM, Ewen Cheslack-Postava 
> > wrote:
> >
> > > Ismael,
> > >
> > > Is that regardless of whether it ends up being a major/minor version?
> > > i.e. given the way we've phrased (and I think started to follow
> > > through on) deprecations, if the next releases were 0.10.3.0 and then
> > > 0.11.0.0, the deprecation period would only be one release. That would
> > > be a tiny window for a huge deprecation. If the next release ended up
> > > 0.11.0.0, then we'd wait (presumably multiple releases) until
> > > 0.12.0.0, which could be something like a year.
> > >
> > > I think we should deprecate the APIs ASAP since they are effectively
> > > unmaintained (or very minimally maintained at best). And I'd actually
> > > even like to do so in 0.10.2.0.
> > >
> > > Perhaps we should consider a slightly customized policy instead? Major
> > > deprecations like this might require something slightly different. For
> > > example, I think a KIP + release notes that explain we're marking the
> > > consumer as deprecated now, but that it will continue to exist for at
> > > least 1 year (regardless of release versions) and will be removed in
> > > the next major release *after* 1 year, would give users plenty of
> > > warning and not result in any weirdness if a major version bump
> > > happens relatively soon.
> > >
> > > (Sorry to drag this into the VOTE thread... If we can agree on that
> > > deprecation/removal schedule, I'd love to still get this in by feature
> > > freeze, especially since the patch is presumably trivial.)
> > >
> > > -Ewen
> > >
> > > On Tue, Jan 10, 2017 at 11:58 AM, Gwen Shapira  wrote:
> > >
> > > > +1
> > > >
> > > > On Mon, Jan 9, 2017 at 8:58 AM, Vahid S Hashemian
> > > >  wrote:
> > > > > Happy Monday,
> > > > >
> > > > > I'd like to thank everyone who participated in the discussion
> > > > > around this KIP and shared their opinion.
> > > > >
> > > > > The only concern that was raised was not having a defined
> > > > > migration plan yet for existing users of the old consumer.
> > > > > I hope that responses to this concern (on the discussion thread)
> > > > > have been satisfactory.
> > > > >
> > > > > Given the short time we have until the 0.10.2.0 cut-off date, I'd
> > > > > like to start voting on this KIP.
> > > > >
> > > > > KIP:
> > > > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-109%3A+Old+Consumer+Deprecation
> > > > > Discussion thread:
> > > > > https://www.mail-archive.com/dev@kafka.apache.org/msg63427.html
> > > > >
> > > > > Thanks.
> > > > > --Vahid
> > > > >
> > > >
> > > > --
> > > > Gwen Shapira
> > > > Product Manager | Confluent
> > > > 650.450.2760 | @gwenshap
> > > > Follow us: Twitter | blog
> > > >
> > >
> >
>


[jira] [Created] (KAFKA-4620) Connection exceptions in JMXTool do not make it to the top level

2017-01-11 Thread Apurva Mehta (JIRA)
Apurva Mehta created KAFKA-4620:
---

 Summary: Connection exceptions in JMXTool do not make it to the 
top level
 Key: KAFKA-4620
 URL: https://issues.apache.org/jira/browse/KAFKA-4620
 Project: Kafka
  Issue Type: Bug
Reporter: Apurva Mehta
Assignee: Apurva Mehta


If you run JMXTool before the target process is initialized, the JMX connection 
is refused and the tool quits. 

Adding the following retry:

{code:scala}
// Assumes the surrounding declarations elided from this report:
//   var connected = false; var retries = 0; val maxNumRetries = 10
//   var jmxc: JMXConnector = null; var mbsc: MBeanServerConnection = null
while (retries < maxNumRetries && !connected) {
  try {
    System.err.println("Trying to connect to JMX url: %s".format(url))
    jmxc = JMXConnectorFactory.connect(url, null)
    mbsc = jmxc.getMBeanServerConnection()
    connected = true
  } catch {
    case e: Exception =>
      System.err.println("Could not connect to JMX url: %s. Exception %s".format(url, e.getMessage))
      retries += 1
      Thread.sleep(500)
  }
}
{code}

does not work, because the exceptions do not make it to the top level. Running 
the above code results in the following output on stderr:

{noformat}
Trying to connect to JMX url: 
service:jmx:rmi:///jndi/rmi://127.0.0.1:9192/jmxrmi
Jan 11, 2017 8:20:33 PM ClientCommunicatorAdmin restart
WARNING: Failed to restart: java.io.IOException: Failed to get a RMI stub: 
javax.naming.ServiceUnavailableException [Root exception is 
java.rmi.ConnectException: Connection refused to host: 127.0.0.1; nested 
exception is:
java.net.ConnectException: Connection refused]
Jan 11, 2017 8:20:33 PM RMIConnector RMIClientCommunicatorAdmin-doStop
WARNING: Failed to call the method close():java.rmi.ConnectException: 
Connection refused to host: 172.31.15.109; nested exception is:
java.net.ConnectException: Connection refused
Jan 11, 2017 8:20:33 PM ClientCommunicatorAdmin Checker-run
WARNING: Failed to check connection: java.net.ConnectException: Connection 
refused
Jan 11, 2017 8:20:33 PM ClientCommunicatorAdmin Checker-run
WARNING: stopping
{noformat}

We need to add working retry logic to JMXTool so that it can start correctly 
even if the target process is not ready initially. 
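A sketch of what working retry logic could look like (in Java, since the JMX API is Java, although JMXTool itself is Scala; the helper name here is hypothetical). The key point is to validate the connection with a real round trip, so a refused connection surfaces as a catchable exception in the caller rather than as warnings inside the JMX internals:

{code:java}
import java.io.IOException;
import javax.management.MBeanServerConnection;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public final class JmxConnectSketch {
    static MBeanServerConnection connectWithRetry(String urlString, int maxRetries)
            throws IOException, InterruptedException {
        JMXServiceURL url = new JMXServiceURL(urlString);
        for (int attempt = 1; attempt <= maxRetries; attempt++) {
            try {
                JMXConnector jmxc = JMXConnectorFactory.connect(url, null);
                MBeanServerConnection mbsc = jmxc.getMBeanServerConnection();
                mbsc.getMBeanCount(); // force an RMI round trip to validate the stub
                return mbsc;
            } catch (IOException e) {
                System.err.printf("Attempt %d/%d failed: %s%n", attempt, maxRetries, e.getMessage());
                Thread.sleep(500);
            }
        }
        throw new IOException("Could not connect to JMX url: " + urlString);
    }
}
{code}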



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3875) Transient test failure: kafka.api.SslProducerSendTest.testSendNonCompressedMessageWithCreateTime

2017-01-11 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15819602#comment-15819602
 ] 

Guozhang Wang commented on KAFKA-3875:
--

Another failure, with a different error message:

{code}
Stacktrace

java.lang.AssertionError: No request is complete.
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.assertTrue(Assert.java:41)
at 
kafka.api.BaseProducerSendTest$$anonfun$testCloseWithZeroTimeoutFromCallerThread$1.apply$mcVI$sp(BaseProducerSendTest.scala:398)
at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:160)
at 
kafka.api.BaseProducerSendTest.testCloseWithZeroTimeoutFromCallerThread(BaseProducerSendTest.scala:395)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:114)
at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.execute(JUnitTestClassExecuter.java:57)
at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassProcessor.processTestClass(JUnitTestClassProcessor.java:66)
at 
org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
at sun.reflect.GeneratedMethodAccessor17.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
at 
org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32)
at 
org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
at 
org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:109)
at sun.reflect.GeneratedMethodAccessor16.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
at 
org.gradle.internal.remote.internal.hub.MessageHub$Handler.run(MessageHub.java:377)
at 
org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:54)
at 
org.gradle.internal.concurrent.StoppableExecutorImpl$1.run(StoppableExecutorImpl.java:40)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
{code}

> Transient test failure: 
> kafka.api.SslProducerSendTest.testSendNonCompressedMessageWithCreateTime
> 
>
> Key: KAFKA-3875
> URL: https://issues.apache.org/jira/brows

Re: [VOTE] KIP-109: Old Consumer Deprecation

2017-01-11 Thread Stevo Slavić
+1 (non-binding), and for deprecating it ASAP. It's already effectively
deprecated: it is not supported, and new features and bug fixes end up only
in the new clients API. So it would be fair to communicate clearly to users
of the old consumer API that it is deprecated, that further or new use is
discouraged, and that anyone who continues to use it - or especially decides
to start using it - does so at their own risk. Deprecation is just a
recommendation.

Wish SimpleConsumer was never part of the public API.

On Thu, Jan 12, 2017 at 12:24 AM, Ismael Juma  wrote:

> Ewen,
>
> I think a policy of giving it a minimum of one year between deprecation and
> removal for this case seems reasonable.
>
> Ismael
>
> On Wed, Jan 11, 2017 at 5:45 AM, Ewen Cheslack-Postava 
> wrote:
>
> > Ismael,
> >
> > Is that regardless of whether it ends up being a major/minor version?
> > i.e. given the way we've phrased (and I think started to follow through
> > on) deprecations, if the next releases were 0.10.3.0 and then 0.11.0.0,
> > the deprecation period would only be one release. That would be a tiny
> > window for a huge deprecation. If the next release ended up 0.11.0.0,
> > then we'd wait (presumably multiple releases) until 0.12.0.0, which
> > could be something like a year.
> >
> > I think we should deprecate the APIs ASAP since they are effectively
> > unmaintained (or very minimally maintained at best). And I'd actually
> > even like to do so in 0.10.2.0.
> >
> > Perhaps we should consider a slightly customized policy instead? Major
> > deprecations like this might require something slightly different. For
> > example, I think a KIP + release notes that explain we're marking the
> > consumer as deprecated now, but that it will continue to exist for at
> > least 1 year (regardless of release versions) and will be removed in the
> > next major release *after* 1 year, would give users plenty of warning
> > and not result in any weirdness if a major version bump happens
> > relatively soon.
> >
> > (Sorry to drag this into the VOTE thread... If we can agree on that
> > deprecation/removal schedule, I'd love to still get this in by feature
> > freeze, especially since the patch is presumably trivial.)
> >
> > -Ewen
> >
> > On Tue, Jan 10, 2017 at 11:58 AM, Gwen Shapira  wrote:
> >
> > > +1
> > >
> > > On Mon, Jan 9, 2017 at 8:58 AM, Vahid S Hashemian
> > >  wrote:
> > > > Happy Monday,
> > > >
> > > > I'd like to thank everyone who participated in the discussion around
> > > > this KIP and shared their opinion.
> > > >
> > > > The only concern that was raised was not having a defined migration
> > > > plan yet for existing users of the old consumer.
> > > > I hope that responses to this concern (on the discussion thread)
> > > > have been satisfactory.
> > > >
> > > > Given the short time we have until the 0.10.2.0 cut-off date, I'd
> > > > like to start voting on this KIP.
> > > >
> > > > KIP:
> > > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-109%3A+Old+Consumer+Deprecation
> > > > Discussion thread:
> > > > https://www.mail-archive.com/dev@kafka.apache.org/msg63427.html
> > > >
> > > > Thanks.
> > > > --Vahid
> > > >
> > >
> > > --
> > > Gwen Shapira
> > > Product Manager | Confluent
> > > 650.450.2760 | @gwenshap
> > > Follow us: Twitter | blog
> > >
> >
>


[jira] [Updated] (KAFKA-4588) QueryableStateIntegrationTest.shouldNotMakeStoreAvailableUntilAllStoresAvailable is occasionally failing on jenkins

2017-01-11 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4588?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-4588:
-
Issue Type: Sub-task  (was: Bug)
Parent: KAFKA-2054

> QueryableStateIntegrationTest.shouldNotMakeStoreAvailableUntilAllStoresAvailable
>  is occasionally failing on jenkins
> ---
>
> Key: KAFKA-4588
> URL: https://issues.apache.org/jira/browse/KAFKA-4588
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Affects Versions: 0.10.2.0
>Reporter: Damian Guy
>Assignee: Damian Guy
> Fix For: 0.10.2.0
>
>
> org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
> shouldNotMakeStoreAvailableUntilAllStoresAvailable[1] FAILED
> java.lang.AssertionError: Condition not met within timeout 3. waiting 
> for store count-by-key
> at 
> org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:259)
> at 
> org.apache.kafka.streams.integration.QueryableStateIntegrationTest.shouldNotMakeStoreAvailableUntilAllStoresAvailable(QueryableStateIntegrationTest.java:502)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
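
Failures like the one above typically boil down to the same polling pattern: the
integration test waits for a condition via
org.apache.kafka.test.TestUtils.waitForCondition and fails once a fixed timeout
elapses. A minimal sketch of that pattern (the storeIsAvailable helper, store
name, and timeout value below are illustrative, not taken from the actual test):

{code}
// Illustrative fragment: poll until the condition holds or the timeout hits.
TestUtils.waitForCondition(new TestCondition() {
    @Override
    public boolean conditionMet() {
        return storeIsAvailable("count-by-key"); // hypothetical helper
    }
}, 30000L, "waiting for store count-by-key");
{code}

When the condition never becomes true within the window (e.g. because a
rebalance or store restoration is still in flight on a slow Jenkins box), the
AssertionError shown above is thrown.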


[jira] [Updated] (KAFKA-4222) Transient failure in QueryableStateIntegrationTest.queryOnRebalance

2017-01-11 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4222?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-4222:
-
Issue Type: Sub-task  (was: Bug)
Parent: KAFKA-2054

> Transient failure in QueryableStateIntegrationTest.queryOnRebalance
> ---
>
> Key: KAFKA-4222
> URL: https://issues.apache.org/jira/browse/KAFKA-4222
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams, unit tests
>Reporter: Jason Gustafson
>Assignee: Matthias J. Sax
> Fix For: 0.10.1.0
>
>
> Seen here: https://builds.apache.org/job/kafka-trunk-jdk8/915/console
> {code}
> org.apache.kafka.streams.integration.QueryableStateIntegrationTest > 
> queryOnRebalance[1] FAILED
> java.lang.AssertionError: Condition not met within timeout 3. waiting 
> for metadata, store and value to be non null
> at 
> org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:276)
> at 
> org.apache.kafka.streams.integration.QueryableStateIntegrationTest.verifyAllKVKeys(QueryableStateIntegrationTest.java:263)
> at 
> org.apache.kafka.streams.integration.QueryableStateIntegrationTest.queryOnRebalance(QueryableStateIntegrationTest.java:342)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3875) Transient test failure: kafka.api.SslProducerSendTest.testSendNonCompressedMessageWithCreateTime

2017-01-11 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15819596#comment-15819596
 ] 

Guozhang Wang commented on KAFKA-3875:
--

Another failure case of this: 
https://builds.apache.org/job/kafka-pr-jdk8-scala2.12/758/testReport/junit/kafka.api/SslProducerSendTest/testSendCompressedMessageWithCreateTime/

{code}
Stacktrace

java.lang.AssertionError: Should have offset 100 but only successfully sent 0 
expected:<100> but was:<0>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:834)
at org.junit.Assert.assertEquals(Assert.java:645)
at 
kafka.api.BaseProducerSendTest.sendAndVerifyTimestamp(BaseProducerSendTest.scala:219)
at 
kafka.api.BaseProducerSendTest.testSendCompressedMessageWithCreateTime(BaseProducerSendTest.scala:169)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:114)
at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.execute(JUnitTestClassExecuter.java:57)
at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassProcessor.processTestClass(JUnitTestClassProcessor.java:66)
at 
org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
at sun.reflect.GeneratedMethodAccessor16.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
at 
org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32)
at 
org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
at 
org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:109)
at sun.reflect.GeneratedMethodAccessor15.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
at 
org.gradle.internal.remote.internal.hub.MessageHub$Handler.run(MessageHub.java:377)
at 
org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:54)
at 
org.gradle.internal.concurrent.StoppableExecutorImpl$1.run(StoppableExecutorImpl.java:40)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
{code}

Seems this is not resolved yet.

> Transient test failure: 
> kafka.api.SslProducerSendTest.testSendNonCompressedMessageWithCreateTime
> --

[jira] [Updated] (KAFKA-3896) Unstable test KStreamRepartitionJoinTest.shouldCorrectlyRepartitionOnJoinOperations

2017-01-11 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3896?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-3896:
-
Issue Type: Sub-task  (was: Bug)
Parent: KAFKA-2054

> Unstable test 
> KStreamRepartitionJoinTest.shouldCorrectlyRepartitionOnJoinOperations
> ---
>
> Key: KAFKA-3896
> URL: https://issues.apache.org/jira/browse/KAFKA-3896
> Project: Kafka
>  Issue Type: Sub-task
>  Components: unit tests
>Reporter: Ashish K Singh
>Assignee: Damian Guy
> Fix For: 0.10.1.0
>
>
> {{KStreamRepartitionJoinTest.shouldCorrectlyRepartitionOnJoinOperations}} 
> seems to be unstable. A failure can be found 
> [here|https://builds.apache.org/job/kafka-trunk-git-pr-jdk7/4363/]. Could not 
> reproduce the test failure locally though.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3896) Unstable test KStreamRepartitionJoinTest.shouldCorrectlyRepartitionOnJoinOperations

2017-01-11 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3896?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15819555#comment-15819555
 ] 

Guozhang Wang commented on KAFKA-3896:
--

{code}
java.lang.AssertionError: Condition not met within timeout 6. Did not 
receive 5 number of records
at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:259)
at 
org.apache.kafka.streams.integration.utils.IntegrationTestUtils.waitUntilMinValuesRecordsReceived(IntegrationTestUtils.java:253)
at 
org.apache.kafka.streams.integration.KStreamRepartitionJoinTest.receiveMessages(KStreamRepartitionJoinTest.java:401)
at 
org.apache.kafka.streams.integration.KStreamRepartitionJoinTest.verifyCorrectOutput(KStreamRepartitionJoinTest.java:311)
at 
org.apache.kafka.streams.integration.KStreamRepartitionJoinTest.shouldCorrectlyRepartitionOnJoinOperations(KStreamRepartitionJoinTest.java:145)
{code}

Seeing another occurrence of this failure; re-opening.

> Unstable test 
> KStreamRepartitionJoinTest.shouldCorrectlyRepartitionOnJoinOperations
> ---
>
> Key: KAFKA-3896
> URL: https://issues.apache.org/jira/browse/KAFKA-3896
> Project: Kafka
>  Issue Type: Bug
>  Components: unit tests
>Reporter: Ashish K Singh
>Assignee: Damian Guy
> Fix For: 0.10.1.0
>
>
> {{KStreamRepartitionJoinTest.shouldCorrectlyRepartitionOnJoinOperations}} 
> seems to be unstable. A failure can be found 
> [here|https://builds.apache.org/job/kafka-trunk-git-pr-jdk7/4363/]. Could not 
> reproduce the test failure locally though.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Reopened] (KAFKA-3896) Unstable test KStreamRepartitionJoinTest.shouldCorrectlyRepartitionOnJoinOperations

2017-01-11 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3896?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang reopened KAFKA-3896:
--

> Unstable test 
> KStreamRepartitionJoinTest.shouldCorrectlyRepartitionOnJoinOperations
> ---
>
> Key: KAFKA-3896
> URL: https://issues.apache.org/jira/browse/KAFKA-3896
> Project: Kafka
>  Issue Type: Bug
>  Components: unit tests
>Reporter: Ashish K Singh
>Assignee: Damian Guy
> Fix For: 0.10.1.0
>
>
> {{KStreamRepartitionJoinTest.shouldCorrectlyRepartitionOnJoinOperations}} 
> seems to be unstable. A failure can be found 
> [here|https://builds.apache.org/job/kafka-trunk-git-pr-jdk7/4363/]. Could not 
> reproduce the test failure locally though.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: kafka-trunk-jdk7 #1823

2017-01-11 Thread Apache Jenkins Server
See 

Changes:

[ismael] KAFKA-4507; Clients should support older brokers (KIP-97)

[wangguoz] KAFKA-3715: Add granular metrics to Kafka Streams and add hierarchical

--
[...truncated 15590 lines...]

org.apache.kafka.common.security.JaasUtilsTest > testControlFlag PASSED
:clients:determineCommitId UP-TO-DATE
:clients:createVersionFile
:clients:jar UP-TO-DATE
:core:compileJava UP-TO-DATE
:core:compileScala
:79:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.

org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP
 ^
:36:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
 commitTimestamp: Long = 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP,

  ^
:37:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
 expireTimestamp: Long = 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP) {

  ^
:492:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
  if (offsetAndMetadata.expireTimestamp == 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP)

^
:319:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
  if (partitionData.timestamp == 
OffsetCommitRequest.DEFAULT_TIMESTAMP)
 ^
:93:
 class ProducerConfig in package producer is deprecated: This class has been 
deprecated and will be removed in a future release. Please use 
org.apache.kafka.clients.producer.ProducerConfig instead.
val producerConfig = new ProducerConfig(props)
 ^
:94:
 method fetchTopicMetadata in object ClientUtils is deprecated: This method has 
been deprecated and will be removed in a future release.
fetchTopicMetadata(topics, brokers, producerConfig, correlationId)
^
:187:
 object ProducerRequestStatsRegistry in package producer is deprecated: This 
object has been deprecated and will be removed in a future release.
ProducerRequestStatsRegistry.removeProducerRequestStats(clientId)
^
:129:
 method readFromReadableChannel in class NetworkReceive is deprecated: see 
corresponding Javadoc for more information.
  response.readFromReadableChannel(channel)
   ^
:319:
 value timestamp in class PartitionData is deprecated: see corresponding 
Javadoc for more information.
  if (partitionData.timestamp == 
OffsetCommitRequest.DEFAULT_TIMESTAMP)
^
:322:
 value timestamp in class PartitionData is deprecated: see corresponding 
Javadoc for more information.
offsetRetention + partitionData.timestamp
^
:576:
 method offsetData in class ListOffsetRequest is deprecated: see corresponding 
Javadoc for more information.
val (authorizedRequestInfo, unauthorizedRequestInfo) = 
offsetRequest.offsetData.asScala.pa

Re: [VOTE] KIP-109: Old Consumer Deprecation

2017-01-11 Thread Ismael Juma
Ewen,

I think a policy of giving it a minimum of one year between deprecation and
removal for this case seems reasonable.

Ismael

On Wed, Jan 11, 2017 at 5:45 AM, Ewen Cheslack-Postava 
wrote:

> Ismael,
>
> Is that regardless of whether it ends up being a major/minor version? i.e.
> given the way we've phrased (and I think started to follow through on)
> deprecations, if the next releases were 0.10.3.0 and then 0.11.0.0, the
> deprecation period would only be one release. That would be a tiny window
> for a huge deprecation. If the next release ended up 0.11.0.0, then we'd
> wait (presumably multiple releases until) 0.12.0.0 which could be something
> like a year.
>
> I think we should deprecate the APIs ASAP since they are effectively
> unmaintained (or very minimally maintained at best). And I'd actually even
> like to do so in 0.10.2.0.
>
> Perhaps we should consider a slightly customized policy instead? Major
> deprecations like this might require something slightly different. For
> example, I think a KIP + release notes that explain we're marking the
> consumer as deprecated now but it will continue to exist for at least 1
> year (regardless of release versions) and will be removed in the next major
> release *after* 1 year would give users plenty of warning and not result in
> any weirdness if a major version bump happens relatively soon.
>
> (Sorry to drag this into the VOTE thread... If we can agree on that
> deprecation/removal schedule, I'd love to still get this in by feature
> freeze, especially since the patch is presumably trivial.)
>
> -Ewen
>
> On Tue, Jan 10, 2017 at 11:58 AM, Gwen Shapira  wrote:
>
> > +1
> >
> > On Mon, Jan 9, 2017 at 8:58 AM, Vahid S Hashemian wrote:
> > > Happy Monday,
> > >
> > > I'd like to thank everyone who participated in the discussion around
> > > this KIP and shared their opinion.
> > >
> > > The only concern that was raised was not having a defined migration
> > > plan yet for existing users of the old consumer.
> > > I hope that responses to this concern (on the discussion thread) have
> > > been satisfactory.
> > >
> > > Given the short time we have until the 0.10.2.0 cut-off date I'd like
> > > to start voting on this KIP.
> > >
> > > KIP:
> > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-109%3A+Old+Consumer+Deprecation
> > > Discussion thread:
> > > https://www.mail-archive.com/dev@kafka.apache.org/msg63427.html
> > >
> > > Thanks.
> > > --Vahid
> > >
> > >
> >
> >
> >
> > --
> > Gwen Shapira
> > Product Manager | Confluent
> > 650.450.2760 | @gwenshap
> > Follow us: Twitter | blog
> >
>
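
For illustration, a minimal sketch of what the deprecation itself could look
like. The class name below is hypothetical (the real old consumer lives in the
Scala kafka.consumer package; KIP-109 defines the actual set of classes to
annotate):

{code}
/**
 * @deprecated Since 0.10.2.0. Use
 * {@link org.apache.kafka.clients.consumer.KafkaConsumer} instead. Per the
 * schedule discussed above, removal would come no earlier than one year
 * after deprecation, in the next major release after that point.
 */
@Deprecated
public class OldConsumerConnector {
    // existing implementation left untouched; only the annotation and the
    // doc pointer to the new consumer are added
}
{code}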


Re: [VOTE] KIP-108: Create Topic Policy

2017-01-11 Thread Ismael Juma
Thanks to everyone who voted and provided feedback.

The vote has passed with 4 binding +1s (Joel, Gwen, Neha, Sriram) and 4
non-binding +1s (Edoardo, Roger, Jon, Apurva).

I have updated the relevant wiki pages.

Ismael

On Tue, Jan 10, 2017 at 1:00 AM, Sriram Subramanian 
wrote:

> +1
>
> On Mon, Jan 9, 2017 at 3:29 PM, Apurva Mehta  wrote:
>
> > (hit send too soon)
> >
> > +1 (non-binding).. that is a very well written KIP!
> >
> > On Mon, Jan 9, 2017 at 3:28 PM, Apurva Mehta 
> wrote:
> >
> > > +1, that1
> > >
> > > On Mon, Jan 9, 2017 at 2:47 PM, Neha Narkhede 
> wrote:
> > >
> > >> +1 - thanks Ismael!
> > >>
> > >> On Mon, Jan 9, 2017 at 2:30 PM Gwen Shapira 
> wrote:
> > >>
> > >> > +1 - thanks for the proposal, I think it will be super useful for
> > >> admins.
> > >> >
> > >> > On Sun, Jan 8, 2017 at 6:50 AM, Ismael Juma 
> > wrote:
> > >> > > Hi all,
> > >> > >
> > >> > > As the discussion seems to have settled down, I would like to
> > initiate
> > >> > the
> > >> > > voting process for KIP-108: Create Topic Policy:
> > >> > >
> > >> > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-108%3A+Create+Topic+Policy
> > >> > >
> > >> > > The vote will run for a minimum of 72 hours.
> > >> > >
> > >> > > Thanks,
> > >> > > Ismael
> > >> >
> > >> >
> > >> >
> > >> > --
> > >> > Gwen Shapira
> > >> > Product Manager | Confluent
> > >> > 650.450.2760 | @gwenshap
> > >> > Follow us: Twitter | blog
> > >> >
> > >> --
> > >> Thanks,
> > >> Neha
> > >>
> > >
> > >
> >
>
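
For context, a minimal sketch of a policy implementation, assuming the
interface lands as described in the KIP (a validate(RequestMetadata) hook that
may throw PolicyViolationException); the minimum-replication rule is purely
illustrative:

{code}
import java.util.Map;
import org.apache.kafka.common.errors.PolicyViolationException;
import org.apache.kafka.server.policy.CreateTopicPolicy;

// Illustrative policy: reject topics whose replication factor is too low.
public class MinReplicationFactorPolicy implements CreateTopicPolicy {
    private static final int MIN_REPLICATION_FACTOR = 2; // assumed threshold

    @Override
    public void configure(Map<String, ?> configs) {
        // a real policy could read its threshold from the broker config here
    }

    @Override
    public void validate(RequestMetadata requestMetadata) throws PolicyViolationException {
        if (requestMetadata.replicationFactor() < MIN_REPLICATION_FACTOR)
            throw new PolicyViolationException("Topic " + requestMetadata.topic()
                    + " must have replication factor >= " + MIN_REPLICATION_FACTOR);
    }

    @Override
    public void close() {}
}
{code}

Per the KIP, such a class would be enabled on the broker via the
create.topic.policy.class.name configuration.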


Re: [VOTE] KIP-106 - Default unclean.leader.election.enabled True => False

2017-01-11 Thread Tom Crayford
+1 (non-binding)

On Wed, Jan 11, 2017 at 11:12 PM, Stevo Slavić  wrote:

> +1 (non-binding)
>
> On Thu, Jan 12, 2017 at 12:11 AM, Guozhang Wang 
> wrote:
>
> > +1
> >
> > On Wed, Jan 11, 2017 at 12:09 PM, Jeff Widman  wrote:
> >
> > > +1 nonbinding. We were bit by this in a production environment.
> > >
> > > On Wed, Jan 11, 2017 at 11:42 AM, Ian Wrigley 
> wrote:
> > >
> > > > +1 (non-binding)
> > > >
> > > > > On Jan 11, 2017, at 11:33 AM, Jay Kreps  wrote:
> > > > >
> > > > > +1
> > > > >
> > > > > On Wed, Jan 11, 2017 at 10:56 AM, Ben Stopford wrote:
> > > > >
> > > > >> Looks like there was a good consensus on the discuss thread for
> > > > >> KIP-106 so let's move to a vote.
> > > > >>
> > > > >> Please chime in if you would like to change the default for
> > > > >> unclean.leader.election.enabled from true to false.
> > > > >>
> > > > >> https://cwiki.apache.org/confluence/display/KAFKA/%5BWIP%5D+KIP-106+-+Change+Default+unclean.leader.election.enabled+from+True+to+False
> > > > >>
> > > > >> B
> > > > >>
> > > >
> > > >
> > >
> >
> >
> >
> > --
> > -- Guozhang
> >
>
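
For operators who depend on today's behaviour, the change is opt-out: once the
new default ships, the old availability-over-durability trade-off can be
restored explicitly in the broker config. A sketch (note the actual broker
property is spelled unclean.leader.election.enable):

{code}
# server.properties: explicitly restore the pre-KIP-106 behaviour.
# This trades durability for availability; the proposed new default is false.
unclean.leader.election.enable=true
{code}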


[GitHub] kafka-site pull request #39: New europe meetup links

2017-01-11 Thread derrickdoo
GitHub user derrickdoo opened a pull request:

https://github.com/apache/kafka-site/pull/39

New europe meetup links

Added 2 new meetups to the Events page

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/derrickdoo/kafka-site 0117events

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka-site/pull/39.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #39


commit 1f64e7abe981754f0173350dccaa9fa0ed78fe11
Author: Derrick Or 
Date:   2017-01-11T23:12:42Z

new europe meetup links




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Re: [VOTE] KIP-106 - Default unclean.leader.election.enabled True => False

2017-01-11 Thread Stevo Slavić
+1 (non-binding)

On Thu, Jan 12, 2017 at 12:11 AM, Guozhang Wang  wrote:

> +1
>
> On Wed, Jan 11, 2017 at 12:09 PM, Jeff Widman  wrote:
>
> > +1 nonbinding. We were bit by this in a production environment.
> >
> > On Wed, Jan 11, 2017 at 11:42 AM, Ian Wrigley  wrote:
> >
> > > +1 (non-binding)
> > >
> > > > On Jan 11, 2017, at 11:33 AM, Jay Kreps  wrote:
> > > >
> > > > +1
> > > >
> > > > On Wed, Jan 11, 2017 at 10:56 AM, Ben Stopford wrote:
> > > >
> > > >> Looks like there was a good consensus on the discuss thread for
> > > >> KIP-106 so let's move to a vote.
> > > >>
> > > >> Please chime in if you would like to change the default for
> > > >> unclean.leader.election.enabled from true to false.
> > > >>
> > > >> https://cwiki.apache.org/confluence/display/KAFKA/%5BWIP%5D+KIP-106+-+Change+Default+unclean.leader.election.enabled+from+True+to+False
> > > >>
> > > >> B
> > > >>
> > >
> > >
> >
>
>
>
> --
> -- Guozhang
>


Re: [VOTE] KIP-106 - Default unclean.leader.election.enabled True => False

2017-01-11 Thread Guozhang Wang
+1

On Wed, Jan 11, 2017 at 12:09 PM, Jeff Widman  wrote:

> +1 nonbinding. We were bit by this in a production environment.
>
> On Wed, Jan 11, 2017 at 11:42 AM, Ian Wrigley  wrote:
>
> > +1 (non-binding)
> >
> > > On Jan 11, 2017, at 11:33 AM, Jay Kreps  wrote:
> > >
> > > +1
> > >
> > > On Wed, Jan 11, 2017 at 10:56 AM, Ben Stopford wrote:
> > >
> > >> Looks like there was a good consensus on the discuss thread for
> > >> KIP-106 so let's move to a vote.
> > >>
> > >> Please chime in if you would like to change the default for
> > >> unclean.leader.election.enabled from true to false.
> > >>
> > >> https://cwiki.apache.org/confluence/display/KAFKA/%5BWIP%5D+KIP-106+-+Change+Default+unclean.leader.election.enabled+from+True+to+False
> > >>
> > >> B
> > >>
> >
> >
>



-- 
-- Guozhang


Re: [ANNOUNCE] New committer: Grant Henke

2017-01-11 Thread James Cheng
Congrats, Grant!!

-James

> On Jan 11, 2017, at 11:51 AM, Gwen Shapira  wrote:
> 
> The PMC for Apache Kafka has invited Grant Henke to join as a
> committer and we are pleased to announce that he has accepted!
> 
> Grant contributed 88 patches, 90 code reviews, countless great
> comments on discussions, a much-needed cleanup to our protocol and the
> on-going and critical work on the Admin protocol. Throughout this, he
> displayed great technical judgment, high-quality work and willingness
> to contribute where needed to make Apache Kafka awesome.
> 
> Thank you for your contributions, Grant :)
> 
> -- 
> Gwen Shapira
> Product Manager | Confluent
> 650.450.2760 | @gwenshap
> Follow us: Twitter | blog



Re: [ANNOUNCE] New committer: Grant Henke

2017-01-11 Thread Becket Qin
Congrats Grant!

On Wed, Jan 11, 2017 at 2:17 PM, Kaufman Ng  wrote:

> Congrats Grant!
>
> On Wed, Jan 11, 2017 at 4:28 PM, Jay Kreps  wrote:
>
> > Congrats Grant!
> >
> > -Jay
> >
> > On Wed, Jan 11, 2017 at 11:51 AM, Gwen Shapira 
> wrote:
> >
> > > The PMC for Apache Kafka has invited Grant Henke to join as a
> > > committer and we are pleased to announce that he has accepted!
> > >
> > > Grant contributed 88 patches, 90 code reviews, countless great
> > > comments on discussions, a much-needed cleanup to our protocol and the
> > > on-going and critical work on the Admin protocol. Throughout this, he
> > > displayed great technical judgment, high-quality work and willingness
> > > to contribute where needed to make Apache Kafka awesome.
> > >
> > > Thank you for your contributions, Grant :)
> > >
> > > --
> > > Gwen Shapira
> > > Product Manager | Confluent
> > > 650.450.2760 | @gwenshap
> > > Follow us: Twitter | blog
> > >
> >
>
>
>
> --
> Kaufman Ng
> +1 646 961 8063
> Solutions Architect | Confluent | www.confluent.io
>


Build failed in Jenkins: kafka-trunk-jdk8 #1169

2017-01-11 Thread Apache Jenkins Server
See 

Changes:

[ismael] KAFKA-4507; Clients should support older brokers (KIP-97)

[wangguoz] KAFKA-3715: Add granular metrics to Kafka Streams and add hierarchical

--
[...truncated 3936 lines...]

kafka.integration.SaslPlaintextTopicMetadataTest > 
testIsrAfterBrokerShutDownAndJoinsBack STARTED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testIsrAfterBrokerShutDownAndJoinsBack PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAutoCreateTopicWithCollision STARTED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAutoCreateTopicWithCollision PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAliveBrokerListWithNoTopics STARTED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAliveBrokerListWithNoTopics PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > testAutoCreateTopic STARTED

kafka.integration.SaslPlaintextTopicMetadataTest > testAutoCreateTopic PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > testGetAllTopicMetadata 
STARTED

kafka.integration.SaslPlaintextTopicMetadataTest > testGetAllTopicMetadata 
PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterNewBrokerStartup STARTED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterNewBrokerStartup PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > testBasicTopicMetadata 
STARTED

kafka.integration.SaslPlaintextTopicMetadataTest > testBasicTopicMetadata PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAutoCreateTopicWithInvalidReplication STARTED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAutoCreateTopicWithInvalidReplication PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterABrokerShutdown STARTED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterABrokerShutdown PASSED

kafka.integration.PrimitiveApiTest > testMultiProduce STARTED

kafka.integration.PrimitiveApiTest > testMultiProduce PASSED

kafka.integration.PrimitiveApiTest > testDefaultEncoderProducerAndFetch STARTED

kafka.integration.PrimitiveApiTest > testDefaultEncoderProducerAndFetch PASSED

kafka.integration.PrimitiveApiTest > testFetchRequestCanProperlySerialize 
STARTED

kafka.integration.PrimitiveApiTest > testFetchRequestCanProperlySerialize PASSED

kafka.integration.PrimitiveApiTest > testPipelinedProduceRequests STARTED

kafka.integration.PrimitiveApiTest > testPipelinedProduceRequests PASSED

kafka.integration.PrimitiveApiTest > testProduceAndMultiFetch STARTED

kafka.integration.PrimitiveApiTest > testProduceAndMultiFetch PASSED

kafka.integration.PrimitiveApiTest > 
testDefaultEncoderProducerAndFetchWithCompression STARTED

kafka.integration.PrimitiveApiTest > 
testDefaultEncoderProducerAndFetchWithCompression PASSED

kafka.integration.PrimitiveApiTest > testConsumerEmptyTopic STARTED

kafka.integration.PrimitiveApiTest > testConsumerEmptyTopic PASSED

kafka.integration.PrimitiveApiTest > testEmptyFetchRequest STARTED

kafka.integration.PrimitiveApiTest > testEmptyFetchRequest PASSED

kafka.integration.PlaintextTopicMetadataTest > 
testIsrAfterBrokerShutDownAndJoinsBack STARTED

kafka.integration.PlaintextTopicMetadataTest > 
testIsrAfterBrokerShutDownAndJoinsBack PASSED

kafka.integration.PlaintextTopicMetadataTest > testAutoCreateTopicWithCollision 
STARTED

kafka.integration.PlaintextTopicMetadataTest > testAutoCreateTopicWithCollision 
PASSED

kafka.integration.PlaintextTopicMetadataTest > testAliveBrokerListWithNoTopics 
STARTED

kafka.integration.PlaintextTopicMetadataTest > testAliveBrokerListWithNoTopics 
PASSED

kafka.integration.PlaintextTopicMetadataTest > testAutoCreateTopic STARTED

kafka.integration.PlaintextTopicMetadataTest > testAutoCreateTopic PASSED

kafka.integration.PlaintextTopicMetadataTest > testGetAllTopicMetadata STARTED

kafka.integration.PlaintextTopicMetadataTest > testGetAllTopicMetadata PASSED

kafka.integration.PlaintextTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterNewBrokerStartup STARTED

kafka.integration.PlaintextTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterNewBrokerStartup PASSED

kafka.integration.PlaintextTopicMetadataTest > testBasicTopicMetadata STARTED

kafka.integration.PlaintextTopicMetadataTest > testBasicTopicMetadata PASSED

kafka.integration.PlaintextTopicMetadataTest > 
testAutoCreateTopicWithInvalidReplication STARTED

kafka.integration.PlaintextTopicMetadataTest > 
testAutoCreateTopicWithInvalidReplication PASSED

kafka.integration.PlaintextTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterABrokerShutdown STARTED

kafka.integration.PlaintextTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterABrokerShutdown PASSED

kafka.integration.AutoOffsetResetTest > testResetToLatestWhenOffsetTooLow 
STARTED

kafka.integration.AutoOffsetRes

[jira] [Updated] (KAFKA-1894) Avoid long or infinite blocking in the consumer

2017-01-11 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1894?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-1894:
---
Fix Version/s: (was: 0.10.2.0)
   0.10.3.0

> Avoid long or infinite blocking in the consumer
> ---
>
> Key: KAFKA-1894
> URL: https://issues.apache.org/jira/browse/KAFKA-1894
> Project: Kafka
>  Issue Type: Sub-task
>  Components: consumer
>Reporter: Jay Kreps
>Assignee: Jason Gustafson
> Fix For: 0.10.3.0
>
>
> The new consumer has a lot of loops that look something like
> {code}
>   while(!isThingComplete())
> client.poll();
> {code}
> This occurs both in KafkaConsumer but also in NetworkClient.completeAll. 
> These retry loops are actually mostly the behavior we want but there are 
> several cases where they may cause problems:
>  - In the case of a hard failure we may hang for a long time or indefinitely 
> before realizing the connection is lost.
>  - In the case where the cluster is malfunctioning or down we may retry 
> forever.
> It would probably be better to give a timeout to these. The proposed approach 
> would be to add something like retry.time.ms=6 and only continue retrying 
> for that period of time.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
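
A minimal sketch of the bounded variant proposed in the ticket; the
retry.time.ms value and the isThingComplete()/client names come from the
proposal itself and are illustrative:

{code}
// Sketch: bound the retry loop with a deadline instead of looping forever.
long retryTimeMs = 60000L;                    // a retry.time.ms-style config
long deadline = System.currentTimeMillis() + retryTimeMs;
while (!isThingComplete()) {
    if (System.currentTimeMillis() >= deadline)
        throw new org.apache.kafka.common.errors.TimeoutException(
                "Operation did not complete within " + retryTimeMs + " ms");
    client.poll();
}
{code}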


[jira] [Updated] (KAFKA-4507) The client should send older versions of requests to the broker if necessary

2017-01-11 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4507?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-4507:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> The client should send older versions of requests to the broker if necessary
> 
>
> Key: KAFKA-4507
> URL: https://issues.apache.org/jira/browse/KAFKA-4507
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients
>Reporter: Colin P. McCabe
>Assignee: Colin P. McCabe
>Priority: Blocker
> Fix For: 0.10.2.0
>
>
> The client should send older versions of requests to the broker if necessary.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
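
The gist of the fix, per KIP-97, is that the client first learns each broker's
supported version range (via the ApiVersions request from KIP-35) and then
picks the newest mutually supported version for every subsequent request. A
sketch of the selection rule only (not the actual NetworkClient code):

{code}
// Sketch: choose the newest API version both client and broker support.
static short usableVersion(short clientMin, short clientMax,
                           short brokerMin, short brokerMax) {
    short candidate = (short) Math.min(clientMax, brokerMax);
    if (candidate < clientMin || candidate < brokerMin)
        throw new IllegalStateException("No mutually supported API version");
    return candidate;
}
{code}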


Re: [DISCUSS] KIP-110: Add Codec for ZStandard Compression

2017-01-11 Thread Ismael Juma
That's a good point Ewen. Dongjin, you could use the branch that Ewen
linked for the performance testing. It would also help validate the PR.

Ismael

On Wed, Jan 11, 2017 at 9:38 PM, Ewen Cheslack-Postava 
wrote:

> FYI, there's an outstanding patch for getting some JMH benchmarking setup:
> https://github.com/apache/kafka/pull/1712 I haven't found time to review
> it
> (and don't really know JMH well anyway) but it might be worth getting that
> landed so we can use it for this as well.
>
> -Ewen
>
> On Wed, Jan 11, 2017 at 6:35 AM, Dongjin Lee  wrote:
>
> > Hi Ismael,
> >
> > 1. In the case of compression output, yes, lz4 produces smaller output
> > than gzip. In fact, my benchmark was inspired by the
> > MessageCompressionTest#testCompressSize unit test, and the result is the
> > same - 396 bytes for gzip and 387 bytes for lz4.
> > 2. I agree that my (former) approach can produce unreliable results.
> > However, I am having difficulty figuring out how to acquire the benchmark
> > metrics from Kafka. As for the JMH you recommended, I have just started
> > googling it. If possible, could you give an example of how to use JMH
> > against Kafka? That would be a great help.
> > Regards, Dongjin
> >
> > _
> > From: Ismael Juma 
> > Sent: Wednesday, January 11, 2017 7:33 PM
> > Subject: Re: [DISCUSS] KIP-110: Add Codec for ZStandard Compression
> > To:  
> >
> >
> > Thanks Dongjin. I highly recommend using JMH for the benchmark, the
> > existing one has a few problems that could result in unreliable results.
> > Also, it's a bit surprising that LZ4 is producing smaller output than
> gzip.
> > Is that right?
> >
> > Ismael
> >
> > On Wed, Jan 11, 2017 at 10:20 AM, Dongjin Lee 
> wrote:
> >
> > > Ismael,
> > >
> > > I pushed the benchmark code I used, with some updates (iteration: 20 ->
> > > 1000). I also updated the KIP page with the updated benchmark results.
> > > Please take a review when you are free. The attached screenshot shows
> how
> > > to run the benchmarker.
> > >
> > > Thanks,
> > > Dongjin
> > >
> > > On Tue, Jan 10, 2017 at 8:03 PM, Dongjin Lee 
> wrote:
> > >
> > >> Ismael,
> > >>
> > >> I see. Then, I will share the benchmark code I used by tomorrow.
> Thanks
> > >> for your guidance.
> > >>
> > >> Best,
> > >> Dongjin
> > >>
> > >> -
> > >>
> > >> Dongjin Lee
> > >>
> > >> Software developer in Line+.
> > >> So interested in massive-scale machine learning.
> > >>
> > >> facebook: www.facebook.com/dongjin.lee.kr
> > >> linkedin: kr.linkedin.com/in/dongjinleekr
> > >> github: github.com/dongjinleekr
> > >> twitter: www.twitter.com/dongjinleekr
> > >>
> > >>
> > >>
> > >>
> > >> On Tue, Jan 10, 2017 at 7:24 PM +0900, "Ismael Juma" <
> ism...@juma.me.uk
> > >
> > >> wrote:
> > >>
> > >> Dongjin,
> > >>>
> > >>> The KIP states:
> > >>>
> > >>> "I compared the compressed size and compression time of 3 1kb-sized
> > >>> messages (3102 bytes in total), with the Draft-implementation of
> > ZStandard
> > >>> Compression Codec and all currently available CompressionCodecs. All
> > >>> elapsed times are the average of 20 trials."
> > >>>
> > >>> But doesn't give any details of how this was implemented. Is the
> source
> > >>> code available somewhere? Micro-benchmarking in the JVM is pretty
> > tricky so
> > >>> it needs verification before numbers can be trusted. A performance
> test
> > >>> with kafka-producer-perf-test.sh would be nice to have as well, if
> > possible.
> > >>>
> > >>> Thanks,
> > >>> Ismael
> > >>>
> > >>> On Tue, Jan 10, 2017 at 7:44 AM, Dongjin Lee  wrote:
> > >>>
> > >>> > Ismael,
> > >>> >
> > >>> > 1. Is the benchmark in the KIP page not enough? You mean we need a
> > whole
> > >>> > performance test using kafka-producer-perf-test.sh?
> > >>> >
> > >>> > 2. It seems like no major project is relying on it currently.
> > However,
> > >>> > after reviewing the code, I concluded that at least this project
> has
> > a good
> > >>> > test coverage. And for the problem of upstream tracking - although
> > there is
> > >>> > no significant update on ZStandard to judge this problem, it seems
> > not bad.
> > >>> > If required, I can take responsibility of the tracking for this
> > library.
> > >>> >
> > >>> > Thanks,
> > >>> > Dongjin
> > >>> >
> > >>> > On Tue, Jan 10, 2017 at 7:09 AM, Ismael Juma  wrote:
> > >>> >
> > >>> > > Thanks for posting the KIP, ZStandard looks like a nice
> > improvement over
> > >>> > > the existing compression algorithms. A couple of questions:
> > >>> > >
> > >>> > > 1. Can you please elaborate on the details of the benchmark?
> > >>> > > 2. About https://github.com/luben/zstd-jni, can we rely on it? A
> > few
> > >>> > > things
> > >>> > > to consider: are there other projects using it, does it have good
> > test
> > >>> > > coverage, are there performance tests, does it track upstream
> > closely?
> > >>> > >
> > >>> > > Thanks,
> > >>> > > Ismael
> > >>> > >
> > >>> > > On Fri, Jan 6, 2017 at 2:40 AM, 

Re: [DISCUSS] KIP-110: Add Codec for ZStandard Compression

2017-01-11 Thread Ewen Cheslack-Postava
FYI, there's an outstanding patch for getting some JMH benchmarking setup:
https://github.com/apache/kafka/pull/1712 I haven't found time to review it
(and don't really know JMH well anyway) but it might be worth getting that
landed so we can use it for this as well.

-Ewen

On Wed, Jan 11, 2017 at 6:35 AM, Dongjin Lee  wrote:

> Hi Ismael,
>
> 1. In the case of compression output, yes, lz4 produces smaller output
> than gzip. In fact, my benchmark was inspired by the
> MessageCompressionTest#testCompressSize unit test, and the result is the
> same - 396 bytes for gzip and 387 bytes for lz4.
> 2. I agree that my (former) approach can produce unreliable results.
> However, I am having difficulty figuring out how to acquire the benchmark
> metrics from Kafka. As for the JMH you recommended, I have just started
> googling it. If possible, could you give an example of how to use JMH
> against Kafka? That would be a great help.
> Regards, Dongjin
>
> _
> From: Ismael Juma 
> Sent: Wednesday, January 11, 2017 7:33 PM
> Subject: Re: [DISCUSS] KIP-110: Add Codec for ZStandard Compression
> To:  
>
>
> Thanks Dongjin. I highly recommend using JMH for the benchmark, the
> existing one has a few problems that could result in unreliable results.
> Also, it's a bit surprising that LZ4 is producing smaller output than gzip.
> Is that right?
>
> Ismael
>
> On Wed, Jan 11, 2017 at 10:20 AM, Dongjin Lee  wrote:
>
> > Ismael,
> >
> > I pushed the benchmark code I used, with some updates (iteration: 20 ->
> > 1000). I also updated the KIP page with the updated benchmark results.
> > Please take a review when you are free. The attached screenshot shows how
> > to run the benchmarker.
> >
> > Thanks,
> > Dongjin
> >
> > On Tue, Jan 10, 2017 at 8:03 PM, Dongjin Lee  wrote:
> >
> >> Ismael,
> >>
> >> I see. Then, I will share the benchmark code I used by tomorrow. Thanks
> >> for your guidance.
> >>
> >> Best,
> >> Dongjin
> >>
> >> -
> >>
> >> Dongjin Lee
> >>
> >> Software developer in Line+.
> >> So interested in massive-scale machine learning.
> >>
> >> facebook: www.facebook.com/dongjin.lee.kr
> >> linkedin: kr.linkedin.com/in/dongjinleekr
> >> github: github.com/dongjinleekr
> >> twitter: www.twitter.com/dongjinleekr
> >>
> >>
> >>
> >>
> >> On Tue, Jan 10, 2017 at 7:24 PM +0900, "Ismael Juma"  >
> >> wrote:
> >>
> >> Dongjin,
> >>>
> >>> The KIP states:
> >>>
> >>> "I compared the compressed size and compression time of 3 1kb-sized
> >>> messages (3102 bytes in total), with the Draft-implementation of
> ZStandard
> >>> Compression Codec and all currently available CompressionCodecs. All
> >>> elapsed times are the average of 20 trials."
> >>>
> >>> But doesn't give any details of how this was implemented. Is the source
> >>> code available somewhere? Micro-benchmarking in the JVM is pretty
> tricky so
> >>> it needs verification before numbers can be trusted. A performance test
> >>> with kafka-producer-perf-test.sh would be nice to have as well, if
> possible.
> >>>
> >>> Thanks,
> >>> Ismael
> >>>
> >>> On Tue, Jan 10, 2017 at 7:44 AM, Dongjin Lee  wrote:
> >>>
> >>> > Ismael,
> >>> >
> >>> > 1. Is the benchmark in the KIP page not enough? You mean we need a
> whole
> >>> > performance test using kafka-producer-perf-test.sh?
> >>> >
> >>> > 2. It seems like no major project is relying on it currently.
> However,
> >>> > after reviewing the code, I concluded that at least this project has
> a good
> >>> > test coverage. And for the problem of upstream tracking - although
> there is
> >>> > no significant update on ZStandard to judge this problem, it seems
> not bad.
> >>> > If required, I can take responsibility of the tracking for this
> library.
> >>> >
> >>> > Thanks,
> >>> > Dongjin
> >>> >
> >>> > On Tue, Jan 10, 2017 at 7:09 AM, Ismael Juma  wrote:
> >>> >
> >>> > > Thanks for posting the KIP, ZStandard looks like a nice
> improvement over
> >>> > > the existing compression algorithms. A couple of questions:
> >>> > >
> >>> > > 1. Can you please elaborate on the details of the benchmark?
> >>> > > 2. About https://github.com/luben/zstd-jni, can we rely on it? A
> few
> >>> > > things
> >>> > > to consider: are there other projects using it, does it have good
> test
> >>> > > coverage, are there performance tests, does it track upstream
> closely?
> >>> > >
> >>> > > Thanks,
> >>> > > Ismael
> >>> > >
> >>> > > On Fri, Jan 6, 2017 at 2:40 AM, Dongjin Lee  wrote:
> >>> > >
> >>> > > > Hi all,
> >>> > > >
> >>> > > > I've just posted a new KIP "KIP-110: Add Codec for ZStandard
> >>> > Compression"
> >>> > > > for
> >>> > > > discussion:
> >>> > > >
> >>> > > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-110%3A+Add+Codec+for+ZStandard+Compression
> >>> > > >
> >>> > > > Please have a look when you are free.
> >>> > > >
> >>> > > > Best,
> >>> > > > Dongjin
> >>> > > >
> >>> > > > --
> >>> > > > *Dongjin Lee*
> >>> > > >
> >>> > > >
> >>> > > 
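
Since the thread asks for a concrete JMH example, here is a minimal,
self-contained sketch of how such a compression micro-benchmark could be
structured. The class, payload, and choice of only gzip and zstd are
illustrative; this is not the benchmark referenced in the KIP:

{code}
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.Random;
import java.util.zip.GZIPOutputStream;

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.Setup;
import org.openjdk.jmh.annotations.State;

import com.github.luben.zstd.Zstd;

@State(Scope.Thread)
public class CompressionBenchmark {
    private byte[] payload;

    @Setup
    public void setup() {
        // Roughly the 3 x 1 kB messages from the KIP. Real message bodies
        // (which compress far better than random bytes) would be more
        // representative.
        payload = new byte[3 * 1024];
        new Random(42).nextBytes(payload);
    }

    @Benchmark
    public byte[] zstd() {
        return Zstd.compress(payload);
    }

    @Benchmark
    public byte[] gzip() throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        try (GZIPOutputStream gzip = new GZIPOutputStream(out)) {
            gzip.write(payload);
        }
        return out.toByteArray();
    }
}
{code}

JMH then takes care of warmup, forking, iteration, and reporting, which
addresses the reliability concerns raised about hand-rolled timing loops.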

[jira] [Comment Edited] (KAFKA-2610) Metrics for SSL handshake

2017-01-11 Thread Felix A Mercado (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15819238#comment-15819238
 ] 

Felix A Mercado edited comment on KAFKA-2610 at 1/11/17 9:31 PM:
-

So, I guess I will take a stab at this:

secureConnectionStart  <--- We definitely want to capture this time
connectionEnd <--- We definitely want to capture this time
Obviously, connectionEnd - secureConnectionStart will give us the SSL handshake 
time.

Do we want to capture the time for the ConnectionStart? Are we already 
capturing this? I guess with this we can get an idea of the percent of time 
that the SSL handshake is taking for the connection time.

P.S. I can take on this task, though I will need some questions answered. It 
would be my first time contributing to this project. Cheers.

Do we care about SSL connection tear down time?


was (Author: fmercado):
So, i guess I will take a stab at this:

secureConnectionStart  <--- We definitely want to capture this time
connectionEnd <--- We definitely want to capture this time
Obviously, connectionEnd - secureConnectionStart will give us the SSL handshake 
time.

Do we want to capture the time for the ConnectionStart? Are we already 
capturing this? I guess with this we can get an idea of the percent of time 
that the SSL handshake is taking for the connection time.

Do we care about SSL connection tear down time?

> Metrics for SSL handshake
> -
>
> Key: KAFKA-2610
> URL: https://issues.apache.org/jira/browse/KAFKA-2610
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Reporter: Ismael Juma
>
> It would be useful to report some metrics on SSL handshakes to make sure that 
> they are taking a reasonable amount of time and that they are only happening 
> when we expect them to happen. As part of this ticket, we should define the 
> metrics to be implemented and implement them.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2610) Metrics for SSL handshake

2017-01-11 Thread Felix A Mercado (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15819238#comment-15819238
 ] 

Felix A Mercado commented on KAFKA-2610:


So, I guess I will take a stab at this:

secureConnectionStart  <--- We definitely want to capture this time
connectionEnd <--- We definitely want to capture this time
Obviously, connectionEnd - secureConnectionStart will give us the SSL handshake 
time.

Do we want to capture the time for the ConnectionStart? Are we already 
capturing this? I guess with this we can get an idea of the percent of time 
that the SSL handshake is taking for the connection time.

Do we care about SSL connection tear down time?

> Metrics for SSL handshake
> -
>
> Key: KAFKA-2610
> URL: https://issues.apache.org/jira/browse/KAFKA-2610
> Project: Kafka
>  Issue Type: Sub-task
>  Components: security
>Reporter: Ismael Juma
>
> It would be useful to report some metrics on SSL handshakes to make sure that 
> they are taking a reasonable amount of time and that they are only happening 
> when we expect them to happen. As part of this ticket, we should define the 
> metrics to be implemented and implement them.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
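
One way the two timestamps called out above could be turned into metrics using
Kafka's own metrics library; the sensor and metric names are illustrative, not
a committed design:

{code}
import java.util.concurrent.TimeUnit;
import org.apache.kafka.common.metrics.Metrics;
import org.apache.kafka.common.metrics.Sensor;
import org.apache.kafka.common.metrics.stats.Avg;
import org.apache.kafka.common.metrics.stats.Max;

public class SslHandshakeMetrics {
    private final Sensor handshakeTime;

    public SslHandshakeMetrics(Metrics metrics) {
        handshakeTime = metrics.sensor("ssl-handshake-time");
        handshakeTime.add(metrics.metricName("ssl-handshake-time-avg", "ssl-metrics",
                "Average SSL handshake time in ms"), new Avg());
        handshakeTime.add(metrics.metricName("ssl-handshake-time-max", "ssl-metrics",
                "Maximum SSL handshake time in ms"), new Max());
    }

    // record connectionEnd - secureConnectionStart, both taken via nanoTime()
    public void recordHandshake(long secureConnectionStartNanos, long connectionEndNanos) {
        handshakeTime.record(
                TimeUnit.NANOSECONDS.toMillis(connectionEndNanos - secureConnectionStartNanos));
    }
}
{code}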


Re: [ANNOUNCE] New committer: Grant Henke

2017-01-11 Thread Jay Kreps
Congrats Grant!

-Jay

On Wed, Jan 11, 2017 at 11:51 AM, Gwen Shapira  wrote:

> The PMC for Apache Kafka has invited Grant Henke to join as a
> committer and we are pleased to announce that he has accepted!
>
> Grant contributed 88 patches, 90 code reviews, countless great
> comments on discussions, a much-needed cleanup to our protocol and the
> on-going and critical work on the Admin protocol. Throughout this, he
> displayed great technical judgment, high-quality work and willingness
> to contribute where needed to make Apache Kafka awesome.
>
> Thank you for your contributions, Grant :)
>
> --
> Gwen Shapira
> Product Manager | Confluent
> 650.450.2760 | @gwenshap
> Follow us: Twitter | blog
>


[jira] [Updated] (KAFKA-4619) Dissallow to output records with unknown keys in TransformValues

2017-01-11 Thread Matthias J. Sax (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4619?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-4619:
---
Status: Patch Available  (was: In Progress)

> Dissallow to output records with unknown keys in TransformValues
> 
>
> Key: KAFKA-4619
> URL: https://issues.apache.org/jira/browse/KAFKA-4619
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.1.1, 0.10.0.1, 0.10.0.0, 0.10.1.0
>Reporter: Matthias J. Sax
>Assignee: Matthias J. Sax
> Fix For: 0.10.2.0
>
>
> {{KStream#transformValues}} allows the user to return a new value in 
> {{punctuate}} and it also allows the user to return any new key-value pair 
> using {{ProcessorContext#forward}}. For {{punctuate}} the key gets set to 
> {{null}} under the hood, and for {{forward}} the user can put any new key they 
> want. However, Kafka Streams assumes that using {{transformValues}} does not 
> change the key -- this assumption does not hold right now, potentially 
> resulting in incorrectly partitioned data.
> Thus, it should not be possible to return any data in {{punctuate}} and 
> {{forward}}, and we should raise an exception.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
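
A sketch of what the guard could look like; this is not the actual patch, it
only illustrates raising an exception when forward() is used from within
transformValues():

{code}
import org.apache.kafka.streams.errors.StreamsException;

// Illustrative wrapper that would be handed to the user's ValueTransformer:
// any attempt to emit a new key-value pair via forward() is rejected, since
// transformValues() must not change the record key.
final class ForwardDisabledContext {
    public void forward(Object key, Object value) {
        throw new StreamsException(
                "ProcessorContext#forward() is not supported within transformValues(); "
                        + "transformValues() must not change the record key");
    }
}
{code}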


Jenkins build is back to normal : kafka-trunk-jdk8 #1168

2017-01-11 Thread Apache Jenkins Server
See 



[jira] [Commented] (KAFKA-4612) Simple Kafka Streams app causes "ClassCastException: java.lang.String cannot be cast to [B"

2017-01-11 Thread Kurt Ostfeld (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4612?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15819196#comment-15819196
 ] 

Kurt Ostfeld commented on KAFKA-4612:
-

Thank you so much! That all makes perfect sense.

> Simple Kafka Streams app causes "ClassCastException: java.lang.String cannot 
> be cast to [B"
> ---
>
> Key: KAFKA-4612
> URL: https://issues.apache.org/jira/browse/KAFKA-4612
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.1.1
> Environment: Virtual Machine using Debian 8 + Confluent Platform 
> 3.1.1.
>Reporter: Kurt Ostfeld
> Attachments: KafkaIsolatedBug.tar.gz
>
>
> I've attached a minimal single source file project that reliably reproduces 
> this issue.
> This project does the following:
> 1) Create test input data. Produces a single random (String,String) record 
> into two different topics "topicInput" and "topicTable"
> 2) Creates and runs a Kafka Streams application:
> val kafkaTable: KTable[String, String] = builder.table(Serdes.String, 
> Serdes.String, "topicTable", "topicTable")
> val incomingRecords: KStream[String, String] = 
> builder.stream(Serdes.String, Serdes.String, "topicInput")
> val reKeyedRecords: KStream[String, String] = 
> incomingRecords.selectKey((k, _) => k)
> val joinedRecords: KStream[String, String] = 
> reKeyedRecords.leftJoin(kafkaTable, (s1: String, _: String) => s1)
> joinedRecords.to(Serdes.String, Serdes.String, "topicOutput")
> This reliably generates the following error:
> [error] (StreamThread-1) java.lang.ClassCastException: java.lang.String 
> cannot be cast to [B
> java.lang.ClassCastException: java.lang.String cannot be cast to [B
>   at 
> org.apache.kafka.common.serialization.ByteArraySerializer.serialize(ByteArraySerializer.java:18)
>   at 
> org.apache.kafka.streams.processor.internals.RecordCollectorImpl.send(RecordCollectorImpl.java:63)
>   at 
> org.apache.kafka.streams.processor.internals.SinkNode.process(SinkNode.java:82)
>   at 
> org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:202)
>   at 
> org.apache.kafka.streams.kstream.internals.KStreamFilter$KStreamFilterProcessor.process(KStreamFilter.java:44)
>   at 
> org.apache.kafka.streams.processor.internals.ProcessorNode.process(ProcessorNode.java:82)
>   at 
> org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:202)
>   at 
> org.apache.kafka.streams.kstream.internals.KStreamMap$KStreamMapProcessor.process(KStreamMap.java:43)
>   at 
> org.apache.kafka.streams.processor.internals.ProcessorNode.process(ProcessorNode.java:82)
>   at 
> org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:202)
>   at 
> org.apache.kafka.streams.processor.internals.SourceNode.process(SourceNode.java:66)
>   at 
> org.apache.kafka.streams.processor.internals.StreamTask.process(StreamTask.java:180)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:436)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:242)
> One caveat: I'm running this on a Confluent Platform 3.1.1 instance which 
> uses Kafka 0.10.1.0 since there is no newer Confluent Platform available. The 
> Kafka Streams project is built using "kafka-clients" and "kafka-streams" 
> version 0.10.1.1. If I use 0.10.1.0, I reliably hit bug 
> https://issues.apache.org/jira/browse/KAFKA-4355. I am not sure if there is 
> any issue using 0.10.1.1 libraries with a Confluent Platform running Kafka 
> 0.10.1.0. I will obviously try the next Confluent Platform binary when it is 
> available.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-4619) Dissallow to output records with unknown keys in TransformValues

2017-01-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15819185#comment-15819185
 ] 

ASF GitHub Bot commented on KAFKA-4619:
---

GitHub user mjsax opened a pull request:

https://github.com/apache/kafka/pull/2346

KAFKA-4619: Dissallow to output records with unknown keys in TransformValues



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/mjsax/kafka kafka-4619-fixTransformValues

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2346.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2346


commit c988d4015dc678a86b3fa48d06169d4aad7d2504
Author: Matthias J. Sax 
Date:   2017-01-11T21:10:35Z

KAFKA-4619: Dissallow to output records with unknown keys in TransformValues




> Dissallow to output records with unknown keys in TransformValues
> 
>
> Key: KAFKA-4619
> URL: https://issues.apache.org/jira/browse/KAFKA-4619
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.1.0, 0.10.0.0, 0.10.0.1, 0.10.1.1
>Reporter: Matthias J. Sax
>Assignee: Matthias J. Sax
> Fix For: 0.10.2.0
>
>
> {{KStream#transformValues}} allows the user to return a new value in 
> {{punctuate}} and it also allows the user to return any new key-value pair 
> using {{ProcessorContext#forward}}. For {{punctuate}} the key gets set to 
> {{null}} under the hood, and for {{forward}} the user can put any new key they 
> want. However, Kafka Streams assumes that using {{transformValues}} does not 
> change the key -- this assumption does not hold right now, potentially 
> resulting in incorrectly partitioned data.
> Thus, it should not be possible to return any data in {{punctuate}} and 
> {{forward}}, and we should raise an exception.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request #2346: KAFKA-4619: Dissallow to output records with unkno...

2017-01-11 Thread mjsax
GitHub user mjsax opened a pull request:

https://github.com/apache/kafka/pull/2346

KAFKA-4619: Dissallow to output records with unknown keys in TransformValues



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/mjsax/kafka kafka-4619-fixTransformValues

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/2346.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2346


commit c988d4015dc678a86b3fa48d06169d4aad7d2504
Author: Matthias J. Sax 
Date:   2017-01-11T21:10:35Z

KAFKA-4619: Dissallow to output records with unknown keys in TransformValues




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-4612) Simple Kafka Streams app causes "ClassCastException: java.lang.String cannot be cast to [B"

2017-01-11 Thread Matthias J. Sax (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4612?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15819174#comment-15819174
 ] 

Matthias J. Sax commented on KAFKA-4612:


If you add {{.through()}}, there will be no internally created repartitioning 
topic, because that topic is not created by {{selectKey}} (it only labels its 
output as {{requiresRepartitioning}}) but by the following {{leftJoin}}. If you 
add {{through}}, the {{requiresRepartitioning}} flag will be {{false}} for the 
{{leftJoin}} input and thus no internal topic is created.

Yes, {{map}}, {{flatMap}}, and {{transform}} have the same issue.

> Simple Kafka Streams app causes "ClassCastException: java.lang.String cannot 
> be cast to [B"
> ---
>
> Key: KAFKA-4612
> URL: https://issues.apache.org/jira/browse/KAFKA-4612
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.1.1
> Environment: Virtual Machine using Debian 8 + Confluent Platform 
> 3.1.1.
>Reporter: Kurt Ostfeld
> Attachments: KafkaIsolatedBug.tar.gz
>
>
> I've attached a minimal single source file project that reliably reproduces 
> this issue.
> This project does the following:
> 1) Create test input data. Produces a single random (String,String) record 
> into two different topics "topicInput" and "topicTable"
> 2) Creates and runs a Kafka Streams application:
> val kafkaTable: KTable[String, String] = builder.table(Serdes.String, 
> Serdes.String, "topicTable", "topicTable")
> val incomingRecords: KStream[String, String] = 
> builder.stream(Serdes.String, Serdes.String, "topicInput")
> val reKeyedRecords: KStream[String, String] = 
> incomingRecords.selectKey((k, _) => k)
> val joinedRecords: KStream[String, String] = 
> reKeyedRecords.leftJoin(kafkaTable, (s1: String, _: String) => s1)
> joinedRecords.to(Serdes.String, Serdes.String, "topicOutput")
> This reliably generates the following error:
> [error] (StreamThread-1) java.lang.ClassCastException: java.lang.String 
> cannot be cast to [B
> java.lang.ClassCastException: java.lang.String cannot be cast to [B
>   at 
> org.apache.kafka.common.serialization.ByteArraySerializer.serialize(ByteArraySerializer.java:18)
>   at 
> org.apache.kafka.streams.processor.internals.RecordCollectorImpl.send(RecordCollectorImpl.java:63)
>   at 
> org.apache.kafka.streams.processor.internals.SinkNode.process(SinkNode.java:82)
>   at 
> org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:202)
>   at 
> org.apache.kafka.streams.kstream.internals.KStreamFilter$KStreamFilterProcessor.process(KStreamFilter.java:44)
>   at 
> org.apache.kafka.streams.processor.internals.ProcessorNode.process(ProcessorNode.java:82)
>   at 
> org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:202)
>   at 
> org.apache.kafka.streams.kstream.internals.KStreamMap$KStreamMapProcessor.process(KStreamMap.java:43)
>   at 
> org.apache.kafka.streams.processor.internals.ProcessorNode.process(ProcessorNode.java:82)
>   at 
> org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:202)
>   at 
> org.apache.kafka.streams.processor.internals.SourceNode.process(SourceNode.java:66)
>   at 
> org.apache.kafka.streams.processor.internals.StreamTask.process(StreamTask.java:180)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:436)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:242)
> One caveat: I'm running this on a Confluent Platform 3.1.1 instance which 
> uses Kafka 0.10.1.0 since there is no newer Confluent Platform available. The 
> Kafka Streams project is built using "kafka-clients" and "kafka-streams" 
> version 0.10.1.1. If I use 0.10.1.0, I reliably hit bug 
> https://issues.apache.org/jira/browse/KAFKA-4355. I am not sure if there is 
> any issue using 0.10.1.1 libraries with a Confluent Platform running Kafka 
> 0.10.1.0. I will obviously try the next Confluent Platform binary when it is 
> available.





Re: [VOTE] KIP-107: Add purgeDataBefore() API in AdminClient

2017-01-11 Thread Dong Lin
Sorry for the duplicated email. It seems that Gmail will put the voting
email in this thread if I simply replace DISCUSS with VOTE in the subject.

On Wed, Jan 11, 2017 at 12:57 PM, Dong Lin  wrote:

> Hi all,
>
> It seems that there is no further concern with the KIP-107. At this point
> we would like to start the voting process. The KIP can be found at
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-107
> %3A+Add+purgeDataBefore%28%29+API+in+AdminClient.
>
> Thanks,
> Dong
>


[VOTE] KIP-107: Add purgeDataBefore() API in AdminClient

2017-01-11 Thread Dong Lin
Hi all,

It seems that there is no further concern with the KIP-107. At this point
we would like to start the voting process. The KIP can be found at
https://cwiki.apache.org/confluence/display/KAFKA/KIP-107
%3A+Add+purgeDataBefore%28%29+API+in+AdminClient.

Thanks,
Dong


[jira] [Updated] (KAFKA-4064) Add support for infinite endpoints for range queries in Kafka Streams KV stores

2017-01-11 Thread Matthias J. Sax (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4064?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-4064:
---
Fix Version/s: (was: 0.10.2.0)
   0.10.3.0

> Add support for infinite endpoints for range queries in Kafka Streams KV 
> stores
> ---
>
> Key: KAFKA-4064
> URL: https://issues.apache.org/jira/browse/KAFKA-4064
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Affects Versions: 0.10.1.0, 0.10.2.0
>Reporter: Roger Hoover
>Assignee: Xavier Léauté
>Priority: Minor
>  Labels: needs-kip
> Fix For: 0.10.3.0
>
>
> In some applications, it's useful to iterate over the key-value store either:
> 1. from the beginning up to a certain key
> 2. from a certain key to the end
> We can add two new methods rangeUntil() and rangeFrom() easily to support this.
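
A sketch of what the two methods might look like (the names come from this 
ticket; the exact placement on the store interface and the signatures are 
assumptions pending the KIP):

{code}
import org.apache.kafka.streams.state.KeyValueIterator

// Hypothetical extension of the read-only KV store interface:
trait OpenEndedRangeQueries[K, V] {
  // iterate from `from` (inclusive) to the end of the store
  def rangeFrom(from: K): KeyValueIterator[K, V]
  // iterate from the beginning of the store up to `to` (inclusive)
  def rangeUntil(to: K): KeyValueIterator[K, V]
}
{code}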





[jira] [Updated] (KAFKA-4064) Add support for infinite endpoints for range queries in Kafka Streams KV stores

2017-01-11 Thread Matthias J. Sax (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4064?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-4064:
---
Affects Version/s: 0.10.2.0

> Add support for infinite endpoints for range queries in Kafka Streams KV 
> stores
> ---
>
> Key: KAFKA-4064
> URL: https://issues.apache.org/jira/browse/KAFKA-4064
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Affects Versions: 0.10.1.0, 0.10.2.0
>Reporter: Roger Hoover
>Assignee: Xavier Léauté
>Priority: Minor
>  Labels: needs-kip
> Fix For: 0.10.3.0
>
>
> In some applications, it's useful to iterate over the key-value store either:
> 1. from the beginning up to a certain key
> 2. from a certain key to the end
> We can add two new methods rangeUntil() and rangeFrom() easily to support this.





[jira] [Updated] (KAFKA-4064) Add support for infinite endpoints for range queries in Kafka Streams KV stores

2017-01-11 Thread Matthias J. Sax (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4064?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-4064:
---
Labels: needs-kip  (was: )

> Add support for infinite endpoints for range queries in Kafka Streams KV 
> stores
> ---
>
> Key: KAFKA-4064
> URL: https://issues.apache.org/jira/browse/KAFKA-4064
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Affects Versions: 0.10.1.0
>Reporter: Roger Hoover
>Assignee: Xavier Léauté
>Priority: Minor
>  Labels: needs-kip
> Fix For: 0.10.2.0
>
>
> In some applications, it's useful to iterate over the key-value store either:
> 1. from the beginning up to a certain key
> 2. from a certain key to the end
> We can add two new methods rangeUntil() and rangeFrom() easily to support this.





[jira] [Updated] (KAFKA-4064) Add support for infinite endpoints for range queries in Kafka Streams KV stores

2017-01-11 Thread Matthias J. Sax (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4064?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-4064:
---
Assignee: Xavier Léauté  (was: Roger Hoover)

> Add support for infinite endpoints for range queries in Kafka Streams KV 
> stores
> ---
>
> Key: KAFKA-4064
> URL: https://issues.apache.org/jira/browse/KAFKA-4064
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Affects Versions: 0.10.1.0
>Reporter: Roger Hoover
>Assignee: Xavier Léauté
>Priority: Minor
>  Labels: needs-kip
> Fix For: 0.10.2.0
>
>
> In some applications, it's useful to iterate over the key-value store either:
> 1. from the beginning up to a certain key
> 2. from a certain key to the end
> We can add two new methods rangeUntil() and rangeFrom() easily to support this.






[jira] [Commented] (KAFKA-4612) Simple Kafka Streams app causes "ClassCastException: java.lang.String cannot be cast to [B"

2017-01-11 Thread Kurt Ostfeld (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4612?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15819134#comment-15819134
 ] 

Kurt Ostfeld commented on KAFKA-4612:
-

Thank you! You explained the issue, it makes perfect sense, and adding a 
{{.through}} call after {{.selectKey}} is a working workaround. No, I am not 
setting a custom global serdes class.

The {{.selectKey}} API really should require a new key Serdes rather than 
falling back to the global default. I presume {{map}} and {{flatMap}} have the 
same issue? Using {{selectKey}} apparently leads to an internal repartition 
topic, so it seems wasteful to go {{through}} another topic just to assign the 
correct Serdes classes.

> Simple Kafka Streams app causes "ClassCastException: java.lang.String cannot 
> be cast to [B"
> ---
>
> Key: KAFKA-4612
> URL: https://issues.apache.org/jira/browse/KAFKA-4612
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.1.1
> Environment: Virtual Machine using Debian 8 + Confluent Platform 
> 3.1.1.
>Reporter: Kurt Ostfeld
> Attachments: KafkaIsolatedBug.tar.gz
>
>
> I've attached a minimal single source file project that reliably reproduces 
> this issue.
> This project does the following:
> 1) Creates test input data: produces a single random (String,String) record 
> into two different topics, "topicInput" and "topicTable"
> 2) Creates and runs a Kafka Streams application:
> val kafkaTable: KTable[String, String] = builder.table(Serdes.String, 
> Serdes.String, "topicTable", "topicTable")
> val incomingRecords: KStream[String, String] = 
> builder.stream(Serdes.String, Serdes.String, "topicInput")
> val reKeyedRecords: KStream[String, String] = 
> incomingRecords.selectKey((k, _) => k)
> val joinedRecords: KStream[String, String] = 
> reKeyedRecords.leftJoin(kafkaTable, (s1: String, _: String) => s1)
> joinedRecords.to(Serdes.String, Serdes.String, "topicOutput")
> This reliably generates the following error:
> [error] (StreamThread-1) java.lang.ClassCastException: java.lang.String 
> cannot be cast to [B
> java.lang.ClassCastException: java.lang.String cannot be cast to [B
>   at 
> org.apache.kafka.common.serialization.ByteArraySerializer.serialize(ByteArraySerializer.java:18)
>   at 
> org.apache.kafka.streams.processor.internals.RecordCollectorImpl.send(RecordCollectorImpl.java:63)
>   at 
> org.apache.kafka.streams.processor.internals.SinkNode.process(SinkNode.java:82)
>   at 
> org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:202)
>   at 
> org.apache.kafka.streams.kstream.internals.KStreamFilter$KStreamFilterProcessor.process(KStreamFilter.java:44)
>   at 
> org.apache.kafka.streams.processor.internals.ProcessorNode.process(ProcessorNode.java:82)
>   at 
> org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:202)
>   at 
> org.apache.kafka.streams.kstream.internals.KStreamMap$KStreamMapProcessor.process(KStreamMap.java:43)
>   at 
> org.apache.kafka.streams.processor.internals.ProcessorNode.process(ProcessorNode.java:82)
>   at 
> org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:202)
>   at 
> org.apache.kafka.streams.processor.internals.SourceNode.process(SourceNode.java:66)
>   at 
> org.apache.kafka.streams.processor.internals.StreamTask.process(StreamTask.java:180)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:436)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:242)
> One caveat: I'm running this on a Confluent Platform 3.1.1 instance which 
> uses Kafka 0.10.1.0 since there is no newer Confluent Platform available. The 
> Kafka Streams project is built using "kafka-clients" and "kafka-streams" 
> version 0.10.1.1. If I use 0.10.1.0, I reliably hit bug 
> https://issues.apache.org/jira/browse/KAFKA-4355. I am not sure if there is 
> any issue using 0.10.1.1 libraries with a Confluent Platform running Kafka 
> 0.10.1.0. I will obviously try the next Confluent Platform binary when it is 
> available.





[jira] [Created] (KAFKA-4619) Disallow outputting records with unknown keys in TransformValues

2017-01-11 Thread Matthias J. Sax (JIRA)
Matthias J. Sax created KAFKA-4619:
--

 Summary: Disallow outputting records with unknown keys in 
TransformValues
 Key: KAFKA-4619
 URL: https://issues.apache.org/jira/browse/KAFKA-4619
 Project: Kafka
  Issue Type: Bug
  Components: streams
Affects Versions: 0.10.1.1, 0.10.0.1, 0.10.0.0, 0.10.1.0
Reporter: Matthias J. Sax
Assignee: Matthias J. Sax
 Fix For: 0.10.2.0


{{KStream#transformValues}} allows the user to return a new value in 
{{punctuate}}, and it also allows the user to forward any new key-value pair 
using {{ProcessorContext#forward}}. For {{punctuate}} the key gets set to 
{{null}} under the hood, and for {{forward}} the user can put in any new key 
they want. However, Kafka Streams assumes that {{transformValues}} does not 
change the key -- right now this assumption might not hold, potentially 
resulting in incorrectly partitioned data.

Thus, it should not be possible to return any data in {{punctuate}} or to 
forward a new key via {{forward}}, and we should raise an exception if the 
user does.
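
For illustration, a sketch of the two problematic paths (the transformer class 
is made up; API shapes as of the 0.10.x {{ValueTransformer}} interface):

{code}
import org.apache.kafka.streams.kstream.ValueTransformer
import org.apache.kafka.streams.processor.ProcessorContext

// Hypothetical transformer showing both misuses described above.
class LeakyTransformer extends ValueTransformer[String, String] {
  private var context: ProcessorContext = _

  override def init(context: ProcessorContext): Unit = this.context = context

  override def transform(value: String): String = {
    // Misuse 1: forward() emits an arbitrary new key, although downstream
    // operators assume the key is unchanged.
    context.forward("some-new-key", value)
    value
  }

  // Misuse 2: a non-null return from punctuate() is emitted with the key
  // silently set to null under the hood.
  override def punctuate(timestamp: Long): String = "scheduled-value"

  override def close(): Unit = ()
}
{code}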





[jira] [Work started] (KAFKA-4619) Disallow outputting records with unknown keys in TransformValues

2017-01-11 Thread Matthias J. Sax (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4619?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on KAFKA-4619 started by Matthias J. Sax.
--
> Disallow outputting records with unknown keys in TransformValues
> 
>
> Key: KAFKA-4619
> URL: https://issues.apache.org/jira/browse/KAFKA-4619
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.1.0, 0.10.0.0, 0.10.0.1, 0.10.1.1
>Reporter: Matthias J. Sax
>Assignee: Matthias J. Sax
> Fix For: 0.10.2.0
>
>
> {{KStream#transformValues}} allows the user to return a new value in 
> {{punctuate}}, and it also allows the user to forward any new key-value pair 
> using {{ProcessorContext#forward}}. For {{punctuate}} the key gets set to 
> {{null}} under the hood, and for {{forward}} the user can put in any new key 
> they want. However, Kafka Streams assumes that {{transformValues}} does not 
> change the key -- right now this assumption might not hold, potentially 
> resulting in incorrectly partitioned data.
> Thus, it should not be possible to return any data in {{punctuate}} or to 
> forward a new key via {{forward}}, and we should raise an exception if the 
> user does.





[jira] [Resolved] (KAFKA-4611) Support custom authentication mechanism

2017-01-11 Thread mahendiran chandrasekar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4611?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mahendiran chandrasekar resolved KAFKA-4611.

   Resolution: Workaround
Fix Version/s: 0.10.2.0

> Support custom authentication mechanism
> ---
>
> Key: KAFKA-4611
> URL: https://issues.apache.org/jira/browse/KAFKA-4611
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients
>Affects Versions: 0.10.0.0
>Reporter: mahendiran chandrasekar
> Fix For: 0.10.2.0
>
>
> Currently there are two login mechanisms supported by the Kafka client:
> 1) Default Login / Abstract Login, which uses JAAS authentication
> 2) Kerberos Login
> Supporting user-defined login mechanisms would be nice. 
> This could be achieved by removing the limitation from 
> [here](https://github.com/apache/kafka/blob/0.10.0/clients/src/main/java/org/apache/kafka/common/security/authenticator/LoginManager.java#L44)
>  ... Instead, the custom login module implemented by the user could be read 
> from the configs, giving users the option to implement a custom login 
> mechanism. 
> I am running into an issue setting the JAAS authentication system property on 
> all executors of my Spark cluster. Having a custom mechanism to authenticate 
> to Kafka would be a good improvement for me.





[jira] [Commented] (KAFKA-4611) Support custom authentication mechanism

2017-01-11 Thread mahendiran chandrasekar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15819067#comment-15819067
 ] 

mahendiran chandrasekar commented on KAFKA-4611:


I think the changes in 0.10.2.0 are pretty good... That was along the lines of 
what I was thinking... Thanks for the reply. I will close this one.
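
For the record, a sketch of the 0.10.2.0-style per-client JAAS setting (KIP-85) 
that avoids the JVM-wide {{java.security.auth.login.config}} system property; 
all values below are placeholders:

{code}
import java.util.Properties

val props = new Properties()
props.put("bootstrap.servers", "broker1:9092")   // placeholder broker address
props.put("security.protocol", "SASL_PLAINTEXT")
props.put("sasl.mechanism", "PLAIN")
// Per-client JAAS configuration instead of a JVM-wide system property:
props.put("sasl.jaas.config",
  """org.apache.kafka.common.security.plain.PlainLoginModule required username="alice" password="secret";""")
{code}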

> Support custom authentication mechanism
> ---
>
> Key: KAFKA-4611
> URL: https://issues.apache.org/jira/browse/KAFKA-4611
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients
>Affects Versions: 0.10.0.0
>Reporter: mahendiran chandrasekar
>
> Currently there are two login mechanisms supported by the Kafka client:
> 1) Default Login / Abstract Login, which uses JAAS authentication
> 2) Kerberos Login
> Supporting user-defined login mechanisms would be nice. 
> This could be achieved by removing the limitation from 
> [here](https://github.com/apache/kafka/blob/0.10.0/clients/src/main/java/org/apache/kafka/common/security/authenticator/LoginManager.java#L44)
>  ... Instead, the custom login module implemented by the user could be read 
> from the configs, giving users the option to implement a custom login 
> mechanism. 
> I am running into an issue setting the JAAS authentication system property on 
> all executors of my Spark cluster. Having a custom mechanism to authenticate 
> to Kafka would be a good improvement for me.





Re: [ANNOUNCE] New committer: Grant Henke

2017-01-11 Thread Eno Thereska
Congrats!

Eno
> On 11 Jan 2017, at 20:06, Ben Stopford  wrote:
> 
> Congrats Grant!!
> On Wed, 11 Jan 2017 at 20:01, Ismael Juma  wrote:
> 
>> Congratulations Grant, well deserved. :)
>> 
>> Ismael
>> 
>> On 11 Jan 2017 7:51 pm, "Gwen Shapira"  wrote:
>> 
>>> The PMC for Apache Kafka has invited Grant Henke to join as a
>>> committer and we are pleased to announce that he has accepted!
>>> 
>>> Grant contributed 88 patches, 90 code reviews, countless great
>>> comments on discussions, a much-needed cleanup to our protocol and the
>>> on-going and critical work on the Admin protocol. Throughout this, he
>>> displayed great technical judgment, high-quality work and willingness
>>> to contribute where needed to make Apache Kafka awesome.
>>> 
>>> Thank you for your contributions, Grant :)
>>> 
>>> --
>>> Gwen Shapira
>>> Product Manager | Confluent
>>> 650.450.2760 | @gwenshap
>>> Follow us: Twitter | blog
>>> 
>> 



Re: [VOTE] KIP-106 - Default unclean.leader.election.enabled True => False

2017-01-11 Thread Jeff Widman
+1 (non-binding). We were bitten by this in a production environment.
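
For reference, the broker setting under vote, as it would appear in 
server.properties once the KIP lands ({{false}} being the proposed new default):

{code}
# server.properties sketch: KIP-106 flips the default from true to false;
# operators who want the old behaviour must opt in explicitly.
unclean.leader.election.enabled=false
{code}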

On Wed, Jan 11, 2017 at 11:42 AM, Ian Wrigley  wrote:

> +1 (non-binding)
>
> > On Jan 11, 2017, at 11:33 AM, Jay Kreps  wrote:
> >
> > +1
> >
> > On Wed, Jan 11, 2017 at 10:56 AM, Ben Stopford  wrote:
> >
> >> Looks like there was a good consensus on the discuss thread for KIP-106,
> >> so let's move to a vote.
> >>
> >> Please chime in if you would like to change the default for
> >> unclean.leader.election.enabled from true to false.
> >>
> >> https://cwiki.apache.org/confluence/display/KAFKA/%
> >> 5BWIP%5D+KIP-106+-+Change+Default+unclean.leader.
> >> election.enabled+from+True+to+False
> >>
> >> B
> >>
>
>


Re: [VOTE] KIP-106 - Default unclean.leader.election.enabled True => False

2017-01-11 Thread Ben Stopford
Thanks all. We can consider this accepted.

B
On Wed, 11 Jan 2017 at 19:49, Apurva Mehta  wrote:

> +1 (non-binding)
>
> On Wed, Jan 11, 2017 at 11:45 AM, Gwen Shapira  wrote:
>
> > +1
> >
> > On Wed, Jan 11, 2017 at 10:56 AM, Ben Stopford  wrote:
> > > Looks like there was a good consensus on the discuss thread for KIP-106,
> > > so let's move to a vote.
> > >
> > > Please chime in if you would like to change the default for
> > > unclean.leader.election.enabled from true to false.
> > >
> > > https://cwiki.apache.org/confluence/display/KAFKA/%
> > 5BWIP%5D+KIP-106+-+Change+Default+unclean.leader.
> > election.enabled+from+True+to+False
> > >
> > > B
> >
> >
> >
> > --
> > Gwen Shapira
> > Product Manager | Confluent
> > 650.450.2760 | @gwenshap
> > Follow us: Twitter | blog
> >
>


Re: [ANNOUNCE] New committer: Grant Henke

2017-01-11 Thread Ben Stopford
Congrats Grant!!
On Wed, 11 Jan 2017 at 20:01, Ismael Juma  wrote:

> Congratulations Grant, well deserved. :)
>
> Ismael
>
> On 11 Jan 2017 7:51 pm, "Gwen Shapira"  wrote:
>
> > The PMC for Apache Kafka has invited Grant Henke to join as a
> > committer and we are pleased to announce that he has accepted!
> >
> > Grant contributed 88 patches, 90 code reviews, countless great
> > comments on discussions, a much-needed cleanup to our protocol and the
> > on-going and critical work on the Admin protocol. Throughout this, he
> > displayed great technical judgment, high-quality work and willingness
> > to contribute where needed to make Apache Kafka awesome.
> >
> > Thank you for your contributions, Grant :)
> >
> > --
> > Gwen Shapira
> > Product Manager | Confluent
> > 650.450.2760 | @gwenshap
> > Follow us: Twitter | blog
> >
>


Re: [ANNOUNCE] New committer: Grant Henke

2017-01-11 Thread Ismael Juma
Congratulations Grant, well deserved. :)

Ismael

On 11 Jan 2017 7:51 pm, "Gwen Shapira"  wrote:

> The PMC for Apache Kafka has invited Grant Henke to join as a
> committer and we are pleased to announce that he has accepted!
>
> Grant contributed 88 patches, 90 code reviews, countless great
> comments on discussions, a much-needed cleanup to our protocol and the
> on-going and critical work on the Admin protocol. Throughout this, he
> displayed great technical judgment, high-quality work and willingness
> to contribute where needed to make Apache Kafka awesome.
>
> Thank you for your contributions, Grant :)
>
> --
> Gwen Shapira
> Product Manager | Confluent
> 650.450.2760 | @gwenshap
> Follow us: Twitter | blog
>


[jira] [Resolved] (KAFKA-3715) Higher granularity streams metrics

2017-01-11 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3715?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-3715.
--
Resolution: Fixed

> Higher granularity streams metrics 
> ---
>
> Key: KAFKA-3715
> URL: https://issues.apache.org/jira/browse/KAFKA-3715
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Jeff Klukas
>Assignee: aarti gupta
>  Labels: api
> Fix For: 0.10.2.0
>
>
> Originally proposed by [~guozhang] in 
> https://github.com/apache/kafka/pull/1362#issuecomment-218326690
> We can consider adding metrics for process / punctuate / commit rate at the 
> granularity of each processor node in addition to the global rate mentioned 
> above. This is very helpful in debugging.
> We can consider adding rate / cumulative total metrics for context.forward, 
> indicating how many records were forwarded downstream from this processor 
> node as well. This is helpful in debugging.
> We can consider adding metrics for each stream partition's timestamp. This is 
> helpful in debugging.
> Besides the latency metrics, we can also add throughput metrics in terms of 
> source records consumed.
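
As a rough sketch of how such metrics surface to users, they would show up in 
the existing {{KafkaStreams#metrics()}} registry (the group/name filter below 
is illustrative, not the final naming):

{code}
import scala.collection.JavaConverters._
import org.apache.kafka.streams.KafkaStreams

// Dump all stream-related metrics from a running instance.
def dumpStreamsMetrics(streams: KafkaStreams): Unit =
  streams.metrics().asScala.foreach { case (name, metric) =>
    if (name.group().startsWith("stream"))
      println(s"${name.group()}/${name.name()} = ${metric.value()}")
  }
{code}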





Re: [ANNOUNCE] New committer: Grant Henke

2017-01-11 Thread Jason Gustafson
Congrats!

On Wed, Jan 11, 2017 at 11:57 AM, Sriram Subramanian 
wrote:

> Congratulations Grant!
>
> On Wed, Jan 11, 2017 at 11:51 AM, Gwen Shapira  wrote:
>
> > The PMC for Apache Kafka has invited Grant Henke to join as a
> > committer and we are pleased to announce that he has accepted!
> >
> > Grant contributed 88 patches, 90 code reviews, countless great
> > comments on discussions, a much-needed cleanup to our protocol and the
> > on-going and critical work on the Admin protocol. Throughout this, he
> > displayed great technical judgment, high-quality work and willingness
> > to contribute where needed to make Apache Kafka awesome.
> >
> > Thank you for your contributions, Grant :)
> >
> > --
> > Gwen Shapira
> > Product Manager | Confluent
> > 650.450.2760 | @gwenshap
> > Follow us: Twitter | blog
> >
>


Re: [ANNOUNCE] New committer: Grant Henke

2017-01-11 Thread Vahid S Hashemian
Congrats Grant!

--Vahid



From:   Sriram Subramanian 
To: us...@kafka.apache.org
Cc: dev@kafka.apache.org, priv...@kafka.apache.org
Date:   01/11/2017 11:58 AM
Subject:Re: [ANNOUNCE] New committer: Grant Henke



Congratulations Grant!

On Wed, Jan 11, 2017 at 11:51 AM, Gwen Shapira  wrote:

> The PMC for Apache Kafka has invited Grant Henke to join as a
> committer and we are pleased to announce that he has accepted!
>
> Grant contributed 88 patches, 90 code reviews, countless great
> comments on discussions, a much-needed cleanup to our protocol and the
> on-going and critical work on the Admin protocol. Throughout this, he
> displayed great technical judgment, high-quality work and willingness
> to contribute where needed to make Apache Kafka awesome.
>
> Thank you for your contributions, Grant :)
>
> --
> Gwen Shapira
> Product Manager | Confluent
> 650.450.2760 | @gwenshap
> Follow us: Twitter | blog
>






Re: [ANNOUNCE] New committer: Grant Henke

2017-01-11 Thread Sriram Subramanian
Congratulations Grant!

On Wed, Jan 11, 2017 at 11:51 AM, Gwen Shapira  wrote:

> The PMC for Apache Kafka has invited Grant Henke to join as a
> committer and we are pleased to announce that he has accepted!
>
> Grant contributed 88 patches, 90 code reviews, countless great
> comments on discussions, a much-needed cleanup to our protocol and the
> on-going and critical work on the Admin protocol. Throughout this, he
> displayed great technical judgment, high-quality work and willingness
> to contribute where needed to make Apache Kafka awesome.
>
> Thank you for your contributions, Grant :)
>
> --
> Gwen Shapira
> Product Manager | Confluent
> 650.450.2760 | @gwenshap
> Follow us: Twitter | blog
>


Re: [DISCUSS] KIP-107: Add purgeDataBefore() API in AdminClient

2017-01-11 Thread Dong Lin
Hi Mayuresh,

low_watermark will be updated when log retention fires on the broker. It
may also be updated on a follower when the follower receives a FetchResponse
from the leader, and on the leader when it receives a PurgeRequest from the
admin client.
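
To make the flow concrete, a hypothetical usage sketch (the method name comes 
from the KIP; the entry point and exact signature here are assumptions, not 
the final design):

{code}
import org.apache.kafka.common.TopicPartition

val admin = kafka.admin.AdminClient.createSimplePlaintext("broker1:9092")
// Ask the brokers to purge all messages with offset < 1000 in this partition;
// once the purge completes, low_watermark advances to at least 1000 and
// offsets below it are no longer exposed to consumers.
admin.purgeDataBefore(Map(new TopicPartition("topicInput", 0) -> 1000L))
{code}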

Thanks,
Dong

On Wed, Jan 11, 2017 at 7:37 AM, Mayuresh Gharat  wrote:

> Hi Dong,
>
> As per  "If the message's offset is below low_watermark,
> then it should have been deleted by log retention policy."
> ---> I am not sure if I understand this correctly. Do you mean to say that
> the low_watermark will be updated only when the log retention fires on the
> broker?
>
> Thanks,
>
> Mayuresh
>
> On Tue, Jan 10, 2017 at 2:56 PM, Dong Lin  wrote:
>
> > Bump up. I am going to initiate the vote if there is no further concern
> > with the KIP.
> >
> > On Fri, Jan 6, 2017 at 11:23 PM, Dong Lin  wrote:
> >
> > > Hey Mayuresh,
> > >
> > > Thanks for the comment. If the message's offset is below low_watermark,
> > > then it should have been deleted by the log retention policy. Thus it is
> > > OK not to expose this message to the consumer. Does this answer your
> > > question?
> > >
> > > Thanks,
> > > Dong
> > >
> > > On Fri, Jan 6, 2017 at 4:21 PM, Mayuresh Gharat <
> > > gharatmayures...@gmail.com> wrote:
> > >
> > >> Hi Dong,
> > >>
> > >> Thanks for the KIP.
> > >>
> > >> I had a question (which might have been answered before).
> > >>
> > >> 1) The KIP says that the low_water_mark will be updated periodically by
> > >> the broker, like high_water_mark.
> > >> Essentially we want to use low_water_mark for cases where an entire
> > >> segment cannot be deleted because maybe segment_start_offset <
> > >> PurgeOffset < segment_end_offset, in which case we will set the
> > >> low_water_mark to PurgeOffset+1.
> > >>
> > >> 2) The KIP also says that messages below low_water_mark will not be
> > >> exposed to consumers, which does make sense since we want to say that
> > >> data below low_water_mark is purged.
> > >>
> > >> Looking at the above conditions, does it make sense not to update the
> > >> low_water_mark periodically but only on a PurgeRequest?
> > >> The reason being, if we update it periodically then as per 2) we will
> > >> not be allowing consumers to re-consume data that is not purged but is
> > >> below low_water_mark.
> > >>
> > >> Thanks,
> > >>
> > >> Mayuresh
> > >>
> > >>
> > >> On Fri, Jan 6, 2017 at 11:18 AM, Dong Lin 
> wrote:
> > >>
> > >> > Hey Jun,
> > >> >
> > >> > Thanks for reviewing the KIP!
> > >> >
> > >> > 1. The low_watermark will be checkpointed in a new file named
> > >> > "replication-low-watermark-checkpoint". It will have the same format
> > >> > as the existing replication-offset-checkpoint file. This allows us to
> > >> > keep the existing format of checkpoint files, which maps
> > >> > TopicPartition to Long. I just updated the "Public Interface" section
> > >> > in the KIP wiki to explain this file.
> > >> >
> > >> > 2. I think using low_watermark from the leader to trigger log
> > >> > retention in the follower will work correctly, in the sense that all
> > >> > messages with offset < low_watermark can be deleted. But I am not
> > >> > sure that the efficiency is the same, i.e. the offset of messages
> > >> > which should be deleted (e.g. due to the time- or size-based log
> > >> > retention policy) will be smaller than the low_watermark from the
> > >> > leader.
> > >> >
> > >> > For example, say both the follower and the leader have messages with
> > >> > offsets in the range [0, 2000]. If the follower does log rolling
> > >> > slightly later than the leader, the segments on the follower would be
> > >> > [0, 1001], [1002, 2000] and the segments on the leader would be
> > >> > [0, 1000], [1001, 2000]. After the leader deletes the first segment,
> > >> > the low_watermark would be 1001. Thus the first segment would stay on
> > >> > the follower's disk unnecessarily, which may double disk usage at
> > >> > worst.
> > >> >
> > >> > Since this approach doesn't save us much, I am inclined not to
> > >> > include this change, to keep the KIP simple.
> > >> >
> > >> > Dong
> > >> >
> > >> >
> > >> >
> > >> > On Fri, Jan 6, 2017 at 10:05 AM, Jun Rao  wrote:
> > >> >
> > >> > > Hi, Dong,
> > >> > >
> > >> > > Thanks for the proposal. Looks good overall. A couple of comments.
> > >> > >
> > >> > > 1. Where is the low_watermark checkpointed? Is that in
> > >> > > replication-offset-checkpoint? If so, do we need to bump up the
> > >> > > version? Could you also describe the format change?
> > >> > >
> > >> > > 2. For topics with "delete" retention, currently we let each
> > >> > > replica delete old segments independently. With low_watermark, we
> > >> > > could just let leaders delete old segments through the deletion
> > >> > > policy and the followers will simply delete old segments based on
> > >> > > low_watermark. Not sure i

[ANNOUNCE] New committer: Grant Henke

2017-01-11 Thread Gwen Shapira
The PMC for Apache Kafka has invited Grant Henke to join as a
committer and we are pleased to announce that he has accepted!

Grant contributed 88 patches, 90 code reviews, countless great
comments on discussions, a much-needed cleanup to our protocol and the
on-going and critical work on the Admin protocol. Throughout this, he
displayed great technical judgment, high-quality work and willingness
to contribute where needed to make Apache Kafka awesome.

Thank you for your contributions, Grant :)

-- 
Gwen Shapira
Product Manager | Confluent
650.450.2760 | @gwenshap
Follow us: Twitter | blog


Re: [VOTE] KIP-106 - Default unclean.leader.election.enabled True => False

2017-01-11 Thread Apurva Mehta
+1 (non-binding)

On Wed, Jan 11, 2017 at 11:45 AM, Gwen Shapira  wrote:

> +1
>
> On Wed, Jan 11, 2017 at 10:56 AM, Ben Stopford  wrote:
> > Looks like there was a good consensus on the discuss thread for KIP-106,
> > so let's move to a vote.
> >
> > Please chime in if you would like to change the default for
> > unclean.leader.election.enabled from true to false.
> >
> > https://cwiki.apache.org/confluence/display/KAFKA/%
> 5BWIP%5D+KIP-106+-+Change+Default+unclean.leader.
> election.enabled+from+True+to+False
> >
> > B
>
>
>
> --
> Gwen Shapira
> Product Manager | Confluent
> 650.450.2760 | @gwenshap
> Follow us: Twitter | blog
>

