[jira] [Created] (KAFKA-8060) The Kafka protocol generator should allow null default values for strings

2019-03-07 Thread Colin P. McCabe (JIRA)
Colin P. McCabe created KAFKA-8060:
--

 Summary: The Kafka protocol generator should allow null default 
values for strings
 Key: KAFKA-8060
 URL: https://issues.apache.org/jira/browse/KAFKA-8060
 Project: Kafka
  Issue Type: Improvement
Reporter: Colin P. McCabe
Assignee: Colin P. McCabe


The Kafka protocol generator should allow null default values for strings.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-8060) The Kafka protocol generator should allow null default values for strings

2019-03-07 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16786489#comment-16786489
 ] 

ASF GitHub Bot commented on KAFKA-8060:
---

cmccabe commented on pull request #6387: KAFKA-8060: The Kafka protocol 
generator should allow null defaults
URL: https://github.com/apache/kafka/pull/6387
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> The Kafka protocol generator should allow null default values for strings
> -
>
> Key: KAFKA-8060
> URL: https://issues.apache.org/jira/browse/KAFKA-8060
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Colin P. McCabe
>Assignee: Colin P. McCabe
>Priority: Major
>
> The Kafka protocol generator should allow null default values for strings.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-7974) KafkaAdminClient loses worker thread/enters zombie state when initial DNS lookup fails

2019-03-07 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7974?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16786494#comment-16786494
 ] 

ASF GitHub Bot commented on KAFKA-7974:
---

cmccabe commented on pull request #6305: Fix for KAFKA-7974: Avoid zombie 
AdminClient when node host isn't resolvable
URL: https://github.com/apache/kafka/pull/6305
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> KafkaAdminClient loses worker thread/enters zombie state when initial DNS 
> lookup fails
> --
>
> Key: KAFKA-7974
> URL: https://issues.apache.org/jira/browse/KAFKA-7974
> Project: Kafka
>  Issue Type: Bug
>Reporter: Nicholas Parker
>Priority: Major
>
> Version: kafka-clients-2.1.0
> I have some code that creates a KafkaAdminClient instance and then 
> invokes listTopics(). I was seeing the following stacktrace in the logs, 
> after which the KafkaAdminClient instance became unresponsive:
> {code:java}
> ERROR [kafka-admin-client-thread | adminclient-1] 2019-02-18 01:00:45,597 
> KafkaThread.java:51 - Uncaught exception in thread 'kafka-admin-client-thread 
> | adminclient-1':
> java.lang.IllegalStateException: No entry found for connection 0
>     at 
> org.apache.kafka.clients.ClusterConnectionStates.nodeState(ClusterConnectionStates.java:330)
>     at 
> org.apache.kafka.clients.ClusterConnectionStates.disconnected(ClusterConnectionStates.java:134)
>     at 
> org.apache.kafka.clients.NetworkClient.initiateConnect(NetworkClient.java:921)
>     at org.apache.kafka.clients.NetworkClient.ready(NetworkClient.java:287)
>     at 
> org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.sendEligibleCalls(KafkaAdminClient.java:898)
>     at 
> org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.run(KafkaAdminClient.java:1113)
>     at java.lang.Thread.run(Thread.java:748){code}
> From looking at the code I was able to trace down a possible cause:
>  * NetworkClient.ready() invokes this.initiateConnect() as seen in the above 
> stacktrace
>  * NetworkClient.initiateConnect() invokes 
> ClusterConnectionStates.connecting(), which internally invokes 
> ClientUtils.resolve() to resolve the host when creating an entry for the 
> connection.
>  * If this host lookup fails, an UnknownHostException can be thrown back to 
> NetworkClient.initiateConnect() and the connection entry is not created in 
> ClusterConnectionStates. This exception doesn't get logged so this is a guess 
> on my part.
>  * NetworkClient.initiateConnect() catches the exception and attempts to call 
> ClusterConnectionStates.disconnected(), which throws an IllegalStateException 
> because no entry had yet been created due to the lookup failure.
>  * This IllegalStateException ends up killing the worker thread and 
> KafkaAdminClient gets stuck, never returning from listTopics().
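
For illustration, here is a compact Java sketch of the sequence described above. The class and method names are simplified stand-ins for NetworkClient and ClusterConnectionStates, not the actual client code; it only shows how a DNS failure before the connection entry is created can turn into an IllegalStateException in the error-handling path.

{code:java}
import java.net.InetAddress;
import java.net.UnknownHostException;
import java.util.HashMap;
import java.util.Map;

// Stand-in for ClusterConnectionStates: throws if asked about an unknown connection.
class ConnectionStates {
    private final Map<String, String> states = new HashMap<>();

    void connecting(String id, String host) throws UnknownHostException {
        InetAddress.getAllByName(host);   // the DNS lookup happens before the entry exists
        states.put(id, "CONNECTING");
    }

    void disconnected(String id) {
        if (!states.containsKey(id))
            throw new IllegalStateException("No entry found for connection " + id);
        states.put(id, "DISCONNECTED");
    }
}

public class InitiateConnectSketch {
    private final ConnectionStates states = new ConnectionStates();

    // Mirrors the reported sequence: the lookup fails, no entry is created, and the
    // catch block's disconnected() call throws IllegalStateException, which would
    // kill the calling worker thread if left unhandled.
    void initiateConnect(String id, String host) {
        try {
            states.connecting(id, host);
            // ... open the socket, register it with the selector, etc.
        } catch (UnknownHostException e) {
            states.disconnected(id);      // throws: the entry was never created
        }
    }

    public static void main(String[] args) {
        new InitiateConnectSketch().initiateConnect("0", "no-such-broker.invalid");
    }
}
{code}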



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-7974) KafkaAdminClient loses worker thread/enters zombie state when initial DNS lookup fails

2019-03-07 Thread Colin P. McCabe (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7974?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin P. McCabe resolved KAFKA-7974.

   Resolution: Fixed
Fix Version/s: 2.2.0

> KafkaAdminClient loses worker thread/enters zombie state when initial DNS 
> lookup fails
> --
>
> Key: KAFKA-7974
> URL: https://issues.apache.org/jira/browse/KAFKA-7974
> Project: Kafka
>  Issue Type: Bug
>Reporter: Nicholas Parker
>Priority: Major
> Fix For: 2.2.0
>
>
> Version: kafka-clients-2.1.0
> I have some code that creates a KafkaAdminClient instance and then 
> invokes listTopics(). I was seeing the following stacktrace in the logs, 
> after which the KafkaAdminClient instance became unresponsive:
> {code:java}
> ERROR [kafka-admin-client-thread | adminclient-1] 2019-02-18 01:00:45,597 
> KafkaThread.java:51 - Uncaught exception in thread 'kafka-admin-client-thread 
> | adminclient-1':
> java.lang.IllegalStateException: No entry found for connection 0
>     at 
> org.apache.kafka.clients.ClusterConnectionStates.nodeState(ClusterConnectionStates.java:330)
>     at 
> org.apache.kafka.clients.ClusterConnectionStates.disconnected(ClusterConnectionStates.java:134)
>     at 
> org.apache.kafka.clients.NetworkClient.initiateConnect(NetworkClient.java:921)
>     at org.apache.kafka.clients.NetworkClient.ready(NetworkClient.java:287)
>     at 
> org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.sendEligibleCalls(KafkaAdminClient.java:898)
>     at 
> org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.run(KafkaAdminClient.java:1113)
>     at java.lang.Thread.run(Thread.java:748){code}
> From looking at the code I was able to trace down a possible cause:
>  * NetworkClient.ready() invokes this.initiateConnect() as seen in the above 
> stacktrace
>  * NetworkClient.initiateConnect() invokes 
> ClusterConnectionStates.connecting(), which internally invokes 
> ClientUtils.resolve() to resolve the host when creating an entry for the 
> connection.
>  * If this host lookup fails, an UnknownHostException can be thrown back to 
> NetworkClient.initiateConnect() and the connection entry is not created in 
> ClusterConnectionStates. This exception doesn't get logged so this is a guess 
> on my part.
>  * NetworkClient.initiateConnect() catches the exception and attempts to call 
> ClusterConnectionStates.disconnected(), which throws an IllegalStateException 
> because no entry had yet been created due to the lookup failure.
>  * This IllegalStateException ends up killing the worker thread and 
> KafkaAdminClient gets stuck, never returning from listTopics().



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-7974) KafkaAdminClient loses worker thread/enters zombie state when initial DNS lookup fails

2019-03-07 Thread Colin P. McCabe (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7974?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin P. McCabe updated KAFKA-7974:
---
Fix Version/s: (was: 2.2.0)
   2.3.0

> KafkaAdminClient loses worker thread/enters zombie state when initial DNS 
> lookup fails
> --
>
> Key: KAFKA-7974
> URL: https://issues.apache.org/jira/browse/KAFKA-7974
> Project: Kafka
>  Issue Type: Bug
>Reporter: Nicholas Parker
>Priority: Major
> Fix For: 2.3.0
>
>
> Version: kafka-clients-2.1.0
> I have some code that creates a KafkaAdminClient instance and then 
> invokes listTopics(). I was seeing the following stacktrace in the logs, 
> after which the KafkaAdminClient instance became unresponsive:
> {code:java}
> ERROR [kafka-admin-client-thread | adminclient-1] 2019-02-18 01:00:45,597 
> KafkaThread.java:51 - Uncaught exception in thread 'kafka-admin-client-thread 
> | adminclient-1':
> java.lang.IllegalStateException: No entry found for connection 0
>     at 
> org.apache.kafka.clients.ClusterConnectionStates.nodeState(ClusterConnectionStates.java:330)
>     at 
> org.apache.kafka.clients.ClusterConnectionStates.disconnected(ClusterConnectionStates.java:134)
>     at 
> org.apache.kafka.clients.NetworkClient.initiateConnect(NetworkClient.java:921)
>     at org.apache.kafka.clients.NetworkClient.ready(NetworkClient.java:287)
>     at 
> org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.sendEligibleCalls(KafkaAdminClient.java:898)
>     at 
> org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.run(KafkaAdminClient.java:1113)
>     at java.lang.Thread.run(Thread.java:748){code}
> From looking at the code I was able to trace down a possible cause:
>  * NetworkClient.ready() invokes this.initiateConnect() as seen in the above 
> stacktrace
>  * NetworkClient.initiateConnect() invokes 
> ClusterConnectionStates.connecting(), which internally invokes 
> ClientUtils.resolve() to resolve the host when creating an entry for the 
> connection.
>  * If this host lookup fails, an UnknownHostException can be thrown back to 
> NetworkClient.initiateConnect() and the connection entry is not created in 
> ClusterConnectionStates. This exception doesn't get logged so this is a guess 
> on my part.
>  * NetworkClient.initiateConnect() catches the exception and attempts to call 
> ClusterConnectionStates.disconnected(), which throws an IllegalStateException 
> because no entry had yet been created due to the lookup failure.
>  * This IllegalStateException ends up killing the worker thread and 
> KafkaAdminClient gets stuck, never returning from listTopics().



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-8061) Use forceClose flag to check in the loop inside maybeWaitForProducerId.

2019-03-07 Thread Manikumar (JIRA)
Manikumar created KAFKA-8061:


 Summary: Use forceClose flag to check in the loop inside 
maybeWaitForProducerId.
 Key: KAFKA-8061
 URL: https://issues.apache.org/jira/browse/KAFKA-8061
 Project: Kafka
  Issue Type: Bug
Affects Versions: 2.1.1
Reporter: Manikumar
Assignee: Manikumar


In KAFKA-5503, we added a check 
(https://github.com/apache/kafka/pull/5881) for the `running` flag in the loop 
inside maybeWaitForProducerId. This avoids blocking the sender thread's shutdown 
call while we attempt to get the ProducerId.

This created a corner case where the sender thread can still get blocked if a 
producerId reset and a shutdown call happen concurrently. The proposed fix is to 
also check the forceClose flag in the loop inside maybeWaitForProducerId.
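
For illustration, here is a minimal Java sketch of the loop shape after the proposed change. The field names and the tryObtainProducerId() helper are simplified stand-ins for the actual Sender internals, not the real implementation.

{code:java}
import java.util.concurrent.atomic.AtomicBoolean;

// Simplified stand-in for the Sender's wait loop (illustrative names only).
class ProducerIdWaiter {
    private volatile boolean running = true;                       // cleared on graceful close
    private final AtomicBoolean forceClose = new AtomicBoolean();  // set on forced close

    private boolean hasProducerId = false;

    // The loop now also checks forceClose, so a forced shutdown issued while a
    // producerId reset is in progress interrupts the wait instead of blocking forever.
    void maybeWaitForProducerId() {
        while (!hasProducerId && running && !forceClose.get()) {
            hasProducerId = tryObtainProducerId();  // placeholder for the InitProducerId round trip
        }
    }

    private boolean tryObtainProducerId() {
        return false;  // stub: returns false until a producer id would be received
    }
}
{code}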

 

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-8061) Use forceClose flag to check in the loop inside maybeWaitForProducerId.

2019-03-07 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16786562#comment-16786562
 ] 

ASF GitHub Bot commented on KAFKA-8061:
---

omkreddy commented on pull request #6388: KAFKA-8061: Use forceClose flag to 
check in the loop inside maybeWaitForProducerId.
URL: https://github.com/apache/kafka/pull/6388
 
 
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Use forceClose flag to check in the loop inside maybeWaitForProducerId.
> ---
>
> Key: KAFKA-8061
> URL: https://issues.apache.org/jira/browse/KAFKA-8061
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 2.1.1
>Reporter: Manikumar
>Assignee: Manikumar
>Priority: Major
>
> In KAFKA-5503, we added a check 
> (https://github.com/apache/kafka/pull/5881) for the `running` flag in the loop 
> inside maybeWaitForProducerId. This avoids blocking the sender thread's 
> shutdown call while we attempt to get the ProducerId.
> This created a corner case where the sender thread can still get blocked if a 
> producerId reset and a shutdown call happen concurrently. The proposed fix is 
> to also check the forceClose flag in the loop inside maybeWaitForProducerId.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-8044) System Test Failure: ReassignPartitionsTest.test_reassign_partitions

2019-03-07 Thread Manikumar (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16786563#comment-16786563
 ] 

Manikumar commented on KAFKA-8044:
--

The underlying issue for this test failure is KAFKA-8061. Since I have seen a few 
timeout failures for this, I will use this JIRA to increase the wait timeout for 
VerifiableProducer in ReassignPartitionsTest.


> System Test Failure: ReassignPartitionsTest.test_reassign_partitions
> 
>
> Key: KAFKA-8044
> URL: https://issues.apache.org/jira/browse/KAFKA-8044
> Project: Kafka
>  Issue Type: Bug
>  Components: system tests
>Affects Versions: 2.1.0
>Reporter: Manikumar
>Assignee: Manikumar
>Priority: Major
>
> {quote}
> Node ubuntu@worker10: did not stop within the specified timeout of 150 
> seconds Traceback (most recent call last): File 
> "/home/jenkins/workspace/system-test-kafka_2.2-WCMO537TEDQFHHUF4SVK33SPUR5AG4KSITCCNJIYFPKDZQWBEJOA/kafka/venv/lib/python2.7/site-packages/ducktape-0.7.5-py2.7.egg/ducktape/tests/runner_client.py",
>  line 132, in run data = self.run_test() File 
> "/home/jenkins/workspace/system-test-kafka_2.2-WCMO537TEDQFHHUF4SVK33SPUR5AG4KSITCCNJIYFPKDZQWBEJOA/kafka/venv/lib/python2.7/site-packages/ducktape-0.7.5-py2.7.egg/ducktape/tests/runner_client.py",
>  line 189, in run_test return self.test_context.function(self.test) File 
> "/home/jenkins/workspace/system-test-kafka_2.2-WCMO537TEDQFHHUF4SVK33SPUR5AG4KSITCCNJIYFPKDZQWBEJOA/kafka/venv/lib/python2.7/site-packages/ducktape-0.7.5-py2.7.egg/ducktape/mark/_mark.py",
>  line 428, in wrapper return functools.partial(f, *args, **kwargs)(*w_args, 
> **w_kwargs) File 
> "/home/jenkins/workspace/system-test-kafka_2.2-WCMO537TEDQFHHUF4SVK33SPUR5AG4KSITCCNJIYFPKDZQWBEJOA/kafka/tests/kafkatest/tests/core/reassign_partitions_test.py",
>  line 148, in test_reassign_partitions self.move_start_offset() File 
> "/home/jenkins/workspace/system-test-kafka_2.2-WCMO537TEDQFHHUF4SVK33SPUR5AG4KSITCCNJIYFPKDZQWBEJOA/kafka/tests/kafkatest/tests/core/reassign_partitions_test.py",
>  line 121, in move_start_offset producer.stop() File 
> "/home/jenkins/workspace/system-test-kafka_2.2-WCMO537TEDQFHHUF4SVK33SPUR5AG4KSITCCNJIYFPKDZQWBEJOA/kafka/venv/lib/python2.7/site-packages/ducktape-0.7.5-py2.7.egg/ducktape/services/background_thread.py",
>  line 82, in stop super(BackgroundThreadService, self).stop() File 
> "/home/jenkins/workspace/system-test-kafka_2.2-WCMO537TEDQFHHUF4SVK33SPUR5AG4KSITCCNJIYFPKDZQWBEJOA/kafka/venv/lib/python2.7/site-packages/ducktape-0.7.5-py2.7.egg/ducktape/services/service.py",
>  line 279, in stop self.stop_node(node) File 
> "/home/jenkins/workspace/system-test-kafka_2.2-WCMO537TEDQFHHUF4SVK33SPUR5AG4KSITCCNJIYFPKDZQWBEJOA/kafka/tests/kafkatest/services/verifiable_producer.py",
>  line 285, in stop_node (str(node.account), str(self.stop_timeout_sec)) 
> AssertionError: Node ubuntu@worker10: did not stop within the specified 
> timeout of 150 seconds
> {quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-8044) System Test Failure: ReassignPartitionsTest.test_reassign_partitions

2019-03-07 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16786568#comment-16786568
 ] 

ASF GitHub Bot commented on KAFKA-8044:
---

omkreddy commented on pull request #6389: KAFKA-8044: Increase stop timeout for 
VerifiableProducer in ReassignPartitionsTest
URL: https://github.com/apache/kafka/pull/6389
 
 
   - increase wait timeout from 150 to 250 seconds
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> System Test Failure: ReassignPartitionsTest.test_reassign_partitions
> 
>
> Key: KAFKA-8044
> URL: https://issues.apache.org/jira/browse/KAFKA-8044
> Project: Kafka
>  Issue Type: Bug
>  Components: system tests
>Affects Versions: 2.1.0
>Reporter: Manikumar
>Assignee: Manikumar
>Priority: Major
>
> {quote}
> Node ubuntu@worker10: did not stop within the specified timeout of 150 
> seconds Traceback (most recent call last): File 
> "/home/jenkins/workspace/system-test-kafka_2.2-WCMO537TEDQFHHUF4SVK33SPUR5AG4KSITCCNJIYFPKDZQWBEJOA/kafka/venv/lib/python2.7/site-packages/ducktape-0.7.5-py2.7.egg/ducktape/tests/runner_client.py",
>  line 132, in run data = self.run_test() File 
> "/home/jenkins/workspace/system-test-kafka_2.2-WCMO537TEDQFHHUF4SVK33SPUR5AG4KSITCCNJIYFPKDZQWBEJOA/kafka/venv/lib/python2.7/site-packages/ducktape-0.7.5-py2.7.egg/ducktape/tests/runner_client.py",
>  line 189, in run_test return self.test_context.function(self.test) File 
> "/home/jenkins/workspace/system-test-kafka_2.2-WCMO537TEDQFHHUF4SVK33SPUR5AG4KSITCCNJIYFPKDZQWBEJOA/kafka/venv/lib/python2.7/site-packages/ducktape-0.7.5-py2.7.egg/ducktape/mark/_mark.py",
>  line 428, in wrapper return functools.partial(f, *args, **kwargs)(*w_args, 
> **w_kwargs) File 
> "/home/jenkins/workspace/system-test-kafka_2.2-WCMO537TEDQFHHUF4SVK33SPUR5AG4KSITCCNJIYFPKDZQWBEJOA/kafka/tests/kafkatest/tests/core/reassign_partitions_test.py",
>  line 148, in test_reassign_partitions self.move_start_offset() File 
> "/home/jenkins/workspace/system-test-kafka_2.2-WCMO537TEDQFHHUF4SVK33SPUR5AG4KSITCCNJIYFPKDZQWBEJOA/kafka/tests/kafkatest/tests/core/reassign_partitions_test.py",
>  line 121, in move_start_offset producer.stop() File 
> "/home/jenkins/workspace/system-test-kafka_2.2-WCMO537TEDQFHHUF4SVK33SPUR5AG4KSITCCNJIYFPKDZQWBEJOA/kafka/venv/lib/python2.7/site-packages/ducktape-0.7.5-py2.7.egg/ducktape/services/background_thread.py",
>  line 82, in stop super(BackgroundThreadService, self).stop() File 
> "/home/jenkins/workspace/system-test-kafka_2.2-WCMO537TEDQFHHUF4SVK33SPUR5AG4KSITCCNJIYFPKDZQWBEJOA/kafka/venv/lib/python2.7/site-packages/ducktape-0.7.5-py2.7.egg/ducktape/services/service.py",
>  line 279, in stop self.stop_node(node) File 
> "/home/jenkins/workspace/system-test-kafka_2.2-WCMO537TEDQFHHUF4SVK33SPUR5AG4KSITCCNJIYFPKDZQWBEJOA/kafka/tests/kafkatest/services/verifiable_producer.py",
>  line 285, in stop_node (str(node.account), str(self.stop_timeout_sec)) 
> AssertionError: Node ubuntu@worker10: did not stop within the specified 
> timeout of 150 seconds
> {quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-8018) Flaky Test SaslSslAdminClientIntegrationTest#testLegacyAclOpsNeverAffectOrReturnPrefixed

2019-03-07 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16786584#comment-16786584
 ] 

ASF GitHub Bot commented on KAFKA-8018:
---

omkreddy commented on pull request #6354: KAFKA-8018: Flaky Test 
SaslSslAdminClientIntegrationTest#testLegacyAclOpsNeverAffectOrReturnPrefixed
URL: https://github.com/apache/kafka/pull/6354
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Flaky Test 
> SaslSslAdminClientIntegrationTest#testLegacyAclOpsNeverAffectOrReturnPrefixed
> 
>
> Key: KAFKA-8018
> URL: https://issues.apache.org/jira/browse/KAFKA-8018
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.2.0
>Reporter: Matthias J. Sax
>Assignee: Jun Rao
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0, 2.2.1
>
>
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/35/]
> {quote}org.apache.zookeeper.KeeperException$SessionExpiredException: 
> KeeperErrorCode = Session expired for 
> /brokers/topics/__consumer_offsets/partitions/3/state at 
> org.apache.zookeeper.KeeperException.create(KeeperException.java:130) at 
> org.apache.zookeeper.KeeperException.create(KeeperException.java:54) at 
> kafka.zookeeper.AsyncResponse.resultException(ZooKeeperClient.scala:534) at 
> kafka.zk.KafkaZkClient.getTopicPartitionState(KafkaZkClient.scala:891) at 
> kafka.zk.KafkaZkClient.getLeaderForPartition(KafkaZkClient.scala:901) at 
> kafka.utils.TestUtils$.waitUntilLeaderIsElectedOrChanged(TestUtils.scala:669) 
> at kafka.utils.TestUtils$.$anonfun$createTopic$1(TestUtils.scala:304) at 
> kafka.utils.TestUtils$.$anonfun$createTopic$1$adapted(TestUtils.scala:302) at 
> scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:237) at 
> scala.collection.immutable.Range.foreach(Range.scala:158) at 
> scala.collection.TraversableLike.map(TraversableLike.scala:237) at 
> scala.collection.TraversableLike.map$(TraversableLike.scala:230) at 
> scala.collection.AbstractTraversable.map(Traversable.scala:108) at 
> kafka.utils.TestUtils$.createTopic(TestUtils.scala:302) at 
> kafka.utils.TestUtils$.createOffsetsTopic(TestUtils.scala:350) at 
> kafka.api.IntegrationTestHarness.doSetup(IntegrationTestHarness.scala:95) at 
> kafka.api.IntegrationTestHarness.setUp(IntegrationTestHarness.scala:73) at 
> kafka.api.AdminClientIntegrationTest.setUp(AdminClientIntegrationTest.scala:78)
>  at 
> kafka.api.SaslSslAdminClientIntegrationTest.setUp(SaslSslAdminClientIntegrationTest.scala:64){quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-8018) Flaky Test SaslSslAdminClientIntegrationTest#testLegacyAclOpsNeverAffectOrReturnPrefixed

2019-03-07 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8018?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-8018.
--
Resolution: Fixed

> Flaky Test 
> SaslSslAdminClientIntegrationTest#testLegacyAclOpsNeverAffectOrReturnPrefixed
> 
>
> Key: KAFKA-8018
> URL: https://issues.apache.org/jira/browse/KAFKA-8018
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.2.0
>Reporter: Matthias J. Sax
>Assignee: Jun Rao
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0, 2.2.1
>
>
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/35/]
> {quote}org.apache.zookeeper.KeeperException$SessionExpiredException: 
> KeeperErrorCode = Session expired for 
> /brokers/topics/__consumer_offsets/partitions/3/state at 
> org.apache.zookeeper.KeeperException.create(KeeperException.java:130) at 
> org.apache.zookeeper.KeeperException.create(KeeperException.java:54) at 
> kafka.zookeeper.AsyncResponse.resultException(ZooKeeperClient.scala:534) at 
> kafka.zk.KafkaZkClient.getTopicPartitionState(KafkaZkClient.scala:891) at 
> kafka.zk.KafkaZkClient.getLeaderForPartition(KafkaZkClient.scala:901) at 
> kafka.utils.TestUtils$.waitUntilLeaderIsElectedOrChanged(TestUtils.scala:669) 
> at kafka.utils.TestUtils$.$anonfun$createTopic$1(TestUtils.scala:304) at 
> kafka.utils.TestUtils$.$anonfun$createTopic$1$adapted(TestUtils.scala:302) at 
> scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:237) at 
> scala.collection.immutable.Range.foreach(Range.scala:158) at 
> scala.collection.TraversableLike.map(TraversableLike.scala:237) at 
> scala.collection.TraversableLike.map$(TraversableLike.scala:230) at 
> scala.collection.AbstractTraversable.map(Traversable.scala:108) at 
> kafka.utils.TestUtils$.createTopic(TestUtils.scala:302) at 
> kafka.utils.TestUtils$.createOffsetsTopic(TestUtils.scala:350) at 
> kafka.api.IntegrationTestHarness.doSetup(IntegrationTestHarness.scala:95) at 
> kafka.api.IntegrationTestHarness.setUp(IntegrationTestHarness.scala:73) at 
> kafka.api.AdminClientIntegrationTest.setUp(AdminClientIntegrationTest.scala:78)
>  at 
> kafka.api.SaslSslAdminClientIntegrationTest.setUp(SaslSslAdminClientIntegrationTest.scala:64){quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-7288) Transient failure in SslSelectorTest.testCloseConnectionInClosingState

2019-03-07 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16786612#comment-16786612
 ] 

ASF GitHub Bot commented on KAFKA-7288:
---

rajinisivaram commented on pull request #6390: KAFKA-7288 - Make sure no bytes 
buffered when relying on idle timeout in channel close test
URL: https://github.com/apache/kafka/pull/6390
 
 
   SelectorTest.testCloseConnectionInClosingState sends and receives messages 
to get the channel into a state with staged receives and then waits for idle 
timeout to close the channel. When run with SSL, the channel may have buffered 
bytes that prevent the channel from being closed. Updated the test to wait 
until there are no buffered bytes as well. I have left the number of retries to 
achieve this state at 100, since my local runs always succeed the first time.
   
   I wasn't able to recreate the failure with or without the fix.
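   
   As a rough sketch of the retry idea described above (generic helpers only; the 
actual test uses its own send/receive plumbing and channel checks):
   
{code:java}
import java.util.function.BooleanSupplier;

// Sketch: repeat the send/receive setup until the channel is in the desired state
// (staged receives present, no buffered bytes), up to a retry limit. Both
// parameters are stand-ins for the real test steps.
final class ChannelStateRetry {
    static void achieveState(Runnable sendAndReceive,
                             BooleanSupplier stagedReceivesAndNoBufferedBytes,
                             int maxRetries) {
        for (int attempt = 1; attempt <= maxRetries; attempt++) {
            sendAndReceive.run();
            if (stagedReceivesAndNoBufferedBytes.getAsBoolean())
                return;   // safe to rely on the idle timeout to close the channel
        }
        throw new AssertionError("Could not reach the desired channel state within " + maxRetries + " retries");
    }
}
{code}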
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Transient failure in SslSelectorTest.testCloseConnectionInClosingState
> --
>
> Key: KAFKA-7288
> URL: https://issues.apache.org/jira/browse/KAFKA-7288
> Project: Kafka
>  Issue Type: Bug
>  Components: unit tests
>Affects Versions: 2.3.0
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.1.0, 2.3.0
>
>
> Noticed this failure in SslSelectorTest.testCloseConnectionInClosingState a 
> few times in unit tests in Jenkins:
> {quote}
> java.lang.AssertionError: Channel not expired expected null, but 
> was: at 
> org.junit.Assert.fail(Assert.java:88) at 
> org.junit.Assert.failNotNull(Assert.java:755) at 
> org.junit.Assert.assertNull(Assert.java:737) at 
> org.apache.kafka.common.network.SelectorTest.testCloseConnectionInClosingState(SelectorTest.java:341)
> {quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (KAFKA-8037) KTable restore may load bad data

2019-03-07 Thread Patrik Kleindl (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8037?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Patrik Kleindl reassigned KAFKA-8037:
-

Assignee: Patrik Kleindl

> KTable restore may load bad data
> 
>
> Key: KAFKA-8037
> URL: https://issues.apache.org/jira/browse/KAFKA-8037
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Matthias J. Sax
>Assignee: Patrik Kleindl
>Priority: Minor
>
> If an input topic contains bad data, users can specify a 
> `deserialization.exception.handler` to drop corrupted records on read. 
> However, this mechanism may be by-passed on restore. Assume a 
> `builder.table()` call reads and drops a corrupted record. If the table state 
> is lost and restored from the changelog topic, the corrupted record may be 
> copied into the store, because on restore plain bytes are copied.
> If the KTable is used in a join, an internal `store.get()` call to lookup the 
> record would fail with a deserialization exception if the value part cannot 
> be deserialized.
> GlobalKTables are affected, too (cf. KAFKA-7663, which may allow a fix for the 
> GlobalKTable case). It's unclear to me atm how this issue could be addressed 
> for KTables, though.
> Note, that user state stores are not affected, because they always have a 
> dedicated changelog topic (and don't reuse an input topic) and thus the 
> corrupted record would not be written into the changelog.
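
For context, a minimal Java sketch of the read-path configuration the issue refers to, with illustrative application id and bootstrap servers. As described above, this handler applies on normal consumption but is effectively bypassed when the store is rebuilt from its changelog.

{code:java}
import java.util.Properties;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.errors.LogAndContinueExceptionHandler;

public class DeserializationHandlerConfig {
    public static Properties streamsProps() {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "ktable-restore-example");  // illustrative id
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");       // illustrative broker
        // Drop corrupted records on read; per the report, this is not applied when a
        // KTable store is restored from the changelog, because restore copies raw bytes.
        props.put(StreamsConfig.DEFAULT_DESERIALIZATION_EXCEPTION_HANDLER_CLASS_CONFIG,
                  LogAndContinueExceptionHandler.class);
        return props;
    }
}
{code}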



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-7980) Flaky Test SocketServerTest#testConnectionRateLimit

2019-03-07 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7980?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16786665#comment-16786665
 ] 

ASF GitHub Bot commented on KAFKA-7980:
---

rajinisivaram commented on pull request #6391: KAFKA-7980 - Fix timing issue in 
SocketServerTest.testConnectionRateLimit
URL: https://github.com/apache/kafka/pull/6391
 
 
   The test currently checks that there were at least 5 polls when 5 connections 
are established with connectionQueueSize=1. But the check may run just after the 
5th connection and before the 5th poll, so the check was updated to verify that 
there were at least 4 polls.
   
   I could not recreate the failure with or without the fix.
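   
   In other words (a tiny illustrative calculation, not the actual SocketServerTest code):
   
{code:java}
// With 5 connections and connectionQueueSize=1, the check may run just after the
// 5th connection is established but before the 5th poll has happened, so only
// 4 polls are guaranteed to be observable at that point.
public class PollCountBound {
    public static void main(String[] args) {
        int connections = 5;
        int guaranteedPolls = connections - 1;  // the relaxed lower bound used by the updated check
        System.out.println("assert pollCount >= " + guaranteedPolls);
    }
}
{code}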
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Flaky Test SocketServerTest#testConnectionRateLimit
> ---
>
> Key: KAFKA-7980
> URL: https://issues.apache.org/jira/browse/KAFKA-7980
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.2.0
>Reporter: Matthias J. Sax
>Assignee: Rajini Sivaram
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0, 2.2.1
>
>
> To get stable nightly builds for `2.2` release, I create tickets for all 
> observed test failures.
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/25/]
> {quote}java.lang.AssertionError: Connections created too quickly: 4 at 
> org.junit.Assert.fail(Assert.java:88) at 
> org.junit.Assert.assertTrue(Assert.java:41) at 
> kafka.network.SocketServerTest.testConnectionRateLimit(SocketServerTest.scala:1122){quote}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-8039) Flaky Test SaslAuthenticatorTest#testCannotReauthenticateAgainFasterThanOneSecond

2019-03-07 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8039?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16786757#comment-16786757
 ] 

ASF GitHub Bot commented on KAFKA-8039:
---

rajinisivaram commented on pull request #6383: KAFKA-8039 - Use MockTime in 
fast reauth test to avoid transient failures
URL: https://github.com/apache/kafka/pull/6383
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Flaky Test 
> SaslAuthenticatorTest#testCannotReauthenticateAgainFasterThanOneSecond
> -
>
> Key: KAFKA-8039
> URL: https://issues.apache.org/jira/browse/KAFKA-8039
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.2.0
>Reporter: Matthias J. Sax
>Assignee: Rajini Sivaram
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0, 2.2.1
>
>
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/46/testReport/junit/org.apache.kafka.common.security.authenticator/SaslAuthenticatorTest/testCannotReauthenticateAgainFasterThanOneSecond/]
> {quote}java.lang.AssertionError: Should have received the 
> SaslHandshakeRequest bytes back since we re-authenticated too quickly, but 
> instead we got our generated message echoed back, implying re-auth succeeded 
> when it should not have at org.junit.Assert.fail(Assert.java:88) at 
> org.junit.Assert.assertTrue(Assert.java:41) at 
> org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest.testCannotReauthenticateAgainFasterThanOneSecond(SaslAuthenticatorTest.java:1503){quote}
> STDOUT
> {quote}[2019-03-04 19:33:46,222] ERROR Extensions provided in login context 
> without a token 
> (org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule:318) 
> java.io.IOException: Extensions provided in login context without a token at 
> org.apache.kafka.common.security.oauthbearer.internals.unsecured.OAuthBearerUnsecuredLoginCallbackHandler.handle(OAuthBearerUnsecuredLoginCallbackHandler.java:164)
>  at 
> org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule.identifyToken(OAuthBearerLoginModule.java:316)
>  at 
> org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule.login(OAuthBearerLoginModule.java:301)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498) at 
> javax.security.auth.login.LoginContext.invoke(LoginContext.java:755) at 
> javax.security.auth.login.LoginContext.access$000(LoginContext.java:195) at 
> javax.security.auth.login.LoginContext$4.run(LoginContext.java:682) at 
> javax.security.auth.login.LoginContext$4.run(LoginContext.java:680) at 
> java.security.AccessController.doPrivileged(Native Method) at 
> javax.security.auth.login.LoginContext.invokePriv(LoginContext.java:680) at 
> javax.security.auth.login.LoginContext.login(LoginContext.java:587) at 
> org.apache.kafka.common.security.authenticator.AbstractLogin.login(AbstractLogin.java:60)
>  at 
> org.apache.kafka.common.security.authenticator.LoginManager.<init>(LoginManager.java:61)
>  at 
> org.apache.kafka.common.security.authenticator.LoginManager.acquireLoginManager(LoginManager.java:104)
>  at 
> org.apache.kafka.common.network.SaslChannelBuilder.configure(SaslChannelBuilder.java:149)
>  at 
> org.apache.kafka.common.network.ChannelBuilders.create(ChannelBuilders.java:146)
>  at 
> org.apache.kafka.common.network.ChannelBuilders.serverChannelBuilder(ChannelBuilders.java:85)
>  at 
> org.apache.kafka.common.network.NioEchoServer.<init>(NioEchoServer.java:120) 
> at 
> org.apache.kafka.common.network.NioEchoServer.<init>(NioEchoServer.java:96) 
> at 
> org.apache.kafka.common.network.NetworkTestUtils.createEchoServer(NetworkTestUtils.java:49)
>  at 
> org.apache.kafka.common.network.NetworkTestUtils.createEchoServer(NetworkTestUtils.java:43)
>  at 
> org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest.createEchoServer(SaslAuthenticatorTest.java:1842)
>  at 
> org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest.createEchoServer(SaslAuthenticatorTest.java:1838)
>  at 
> org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest.testValidSaslOauthBearerMechanismWithoutServerTokens(SaslAuthenticatorTest.java:1578)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
> sun.reflect.NativeM

[jira] [Resolved] (KAFKA-8039) Flaky Test SaslAuthenticatorTest#testCannotReauthenticateAgainFasterThanOneSecond

2019-03-07 Thread Rajini Sivaram (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8039?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajini Sivaram resolved KAFKA-8039.
---
Resolution: Fixed
  Reviewer: Ron Dagostino

> Flaky Test 
> SaslAuthenticatorTest#testCannotReauthenticateAgainFasterThanOneSecond
> -
>
> Key: KAFKA-8039
> URL: https://issues.apache.org/jira/browse/KAFKA-8039
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.2.0
>Reporter: Matthias J. Sax
>Assignee: Rajini Sivaram
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0, 2.2.1
>
>
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/46/testReport/junit/org.apache.kafka.common.security.authenticator/SaslAuthenticatorTest/testCannotReauthenticateAgainFasterThanOneSecond/]
> {quote}java.lang.AssertionError: Should have received the 
> SaslHandshakeRequest bytes back since we re-authenticated too quickly, but 
> instead we got our generated message echoed back, implying re-auth succeeded 
> when it should not have at org.junit.Assert.fail(Assert.java:88) at 
> org.junit.Assert.assertTrue(Assert.java:41) at 
> org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest.testCannotReauthenticateAgainFasterThanOneSecond(SaslAuthenticatorTest.java:1503){quote}
> STDOUT
> {quote}[2019-03-04 19:33:46,222] ERROR Extensions provided in login context 
> without a token 
> (org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule:318) 
> java.io.IOException: Extensions provided in login context without a token at 
> org.apache.kafka.common.security.oauthbearer.internals.unsecured.OAuthBearerUnsecuredLoginCallbackHandler.handle(OAuthBearerUnsecuredLoginCallbackHandler.java:164)
>  at 
> org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule.identifyToken(OAuthBearerLoginModule.java:316)
>  at 
> org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule.login(OAuthBearerLoginModule.java:301)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498) at 
> javax.security.auth.login.LoginContext.invoke(LoginContext.java:755) at 
> javax.security.auth.login.LoginContext.access$000(LoginContext.java:195) at 
> javax.security.auth.login.LoginContext$4.run(LoginContext.java:682) at 
> javax.security.auth.login.LoginContext$4.run(LoginContext.java:680) at 
> java.security.AccessController.doPrivileged(Native Method) at 
> javax.security.auth.login.LoginContext.invokePriv(LoginContext.java:680) at 
> javax.security.auth.login.LoginContext.login(LoginContext.java:587) at 
> org.apache.kafka.common.security.authenticator.AbstractLogin.login(AbstractLogin.java:60)
>  at 
> org.apache.kafka.common.security.authenticator.LoginManager.<init>(LoginManager.java:61)
>  at 
> org.apache.kafka.common.security.authenticator.LoginManager.acquireLoginManager(LoginManager.java:104)
>  at 
> org.apache.kafka.common.network.SaslChannelBuilder.configure(SaslChannelBuilder.java:149)
>  at 
> org.apache.kafka.common.network.ChannelBuilders.create(ChannelBuilders.java:146)
>  at 
> org.apache.kafka.common.network.ChannelBuilders.serverChannelBuilder(ChannelBuilders.java:85)
>  at 
> org.apache.kafka.common.network.NioEchoServer.<init>(NioEchoServer.java:120) 
> at 
> org.apache.kafka.common.network.NioEchoServer.<init>(NioEchoServer.java:96) 
> at 
> org.apache.kafka.common.network.NetworkTestUtils.createEchoServer(NetworkTestUtils.java:49)
>  at 
> org.apache.kafka.common.network.NetworkTestUtils.createEchoServer(NetworkTestUtils.java:43)
>  at 
> org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest.createEchoServer(SaslAuthenticatorTest.java:1842)
>  at 
> org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest.createEchoServer(SaslAuthenticatorTest.java:1838)
>  at 
> org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest.testValidSaslOauthBearerMechanismWithoutServerTokens(SaslAuthenticatorTest.java:1578)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498) at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>  at 
> org.juni

[jira] [Created] (KAFKA-8062) StateListener is not notified when StreamThread dies

2019-03-07 Thread Andrey Volkov (JIRA)
Andrey Volkov created KAFKA-8062:


 Summary: StateListener is not notified when StreamThread dies
 Key: KAFKA-8062
 URL: https://issues.apache.org/jira/browse/KAFKA-8062
 Project: Kafka
  Issue Type: Bug
  Components: streams
Affects Versions: 2.1.1
 Environment: Kafka 2.1.1, kafka-streams-scala version 2.1.1
Reporter: Andrey Volkov


I want my application to react when streams die. I am trying to use 
KafkaStreams.setStateListener and also checking KafkaStreams.state() from time to 
time.

The test scenario: Kafka is available, but there are no topics that my Topology 
is supposed to use.

I expect the streams to crash and the state listener to be notified about that, 
with the new state ERROR. KafkaStreams.state() should also return ERROR.

In reality the streams crash, but the KafkaStreams.state() method always 
returns REBALANCING, and the last time the StateListener was called the new 
state was also REBALANCING.

I believe the reason for this lies in two methods:

org.apache.kafka.streams.KafkaStreams.StreamStateListener.onChange(), which does 
not react to the state StreamThread.State.PENDING_SHUTDOWN,

and

org.apache.kafka.streams.processor.internals.StreamThread.RebalanceListener.onPartitionsAssigned,
which calls shutdown(), setting the state to PENDING_SHUTDOWN, and then calls

streamThread.setStateListener(null), effectively removing the state listener, so 
that the DEAD state of the thread never reaches the KafkaStreams object.

Here is an extract from the logs:

{{14:57:03.272 [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] ERROR 
o.a.k.s.p.i.StreamsPartitionAssignor - stream-thread 
[Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1-consumer] 
test-input-topic is unknown yet during rebalance, please make sure they have 
been pre-created before starting the Streams application.}}
{{14:57:03.283 [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] INFO 
o.a.k.c.c.i.AbstractCoordinator - [Consumer 
clientId=Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1-consumer, 
groupId=Test] Successfully joined group with generation 1}}
{{14:57:03.284 [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] INFO 
o.a.k.c.c.i.ConsumerCoordinator - [Consumer 
clientId=Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1-consumer, 
groupId=Test] Setting newly assigned partitions []}}
{{14:57:03.285 [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] INFO 
o.a.k.s.p.i.StreamThread - stream-thread 
[Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] Informed to shut 
down}}
{{14:57:03.285 [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] INFO 
o.a.k.s.p.i.StreamThread - stream-thread 
[Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] State transition 
from PARTITIONS_REVOKED to PENDING_SHUTDOWN}}
{{14:57:03.316 [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] INFO 
o.a.k.s.p.i.StreamThread - stream-thread 
[Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] Shutting down}}
{{14:57:03.317 [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] INFO 
o.a.k.c.c.KafkaConsumer - [Consumer 
clientId=Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1-restore-consumer,
 groupId=] Unsubscribed all topics or patterns and assigned partitions}}
{{14:57:03.317 [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] INFO 
o.a.k.c.p.KafkaProducer - [Producer 
clientId=Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1-producer] 
Closing the Kafka producer with timeoutMillis = 9223372036854775807 ms.}}
{{14:57:03.332 [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] INFO 
o.a.k.s.p.i.StreamThread - stream-thread 
[Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] State transition 
from PENDING_SHUTDOWN to DEAD}}
{{14:57:03.332 [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] INFO 
o.a.k.s.p.i.StreamThread - stream-thread 
[Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] Shutdown complete}}

After this, calls to KafkaStreams.state() still return REBALANCING.

There is a workaround with requesting KafkaStreams.localThreadsMetadata() and 
checking each thread's state manually, but that seems very wrong.
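
A hedged Java sketch of that workaround, polling localThreadsMetadata() and comparing each thread's state string against the DEAD state seen in the logs above:

{code:java}
import java.util.Set;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.processor.ThreadMetadata;

// Workaround sketch: KafkaStreams.state() may stay in REBALANCING after the stream
// threads have died, so inspect the per-thread states directly.
public class DeadThreadCheck {
    static boolean allThreadsDead(KafkaStreams streams) {
        Set<ThreadMetadata> threads = streams.localThreadsMetadata();
        if (threads.isEmpty())
            return false;                        // nothing to check yet
        for (ThreadMetadata thread : threads) {
            if (!"DEAD".equals(thread.threadState()))
                return false;                    // at least one thread is still alive
        }
        return true;                             // every thread reported the DEAD state
    }
}
{code}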



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-8062) StateListener is not notified when StreamThread dies

2019-03-07 Thread Bill Bejeck (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8062?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16786882#comment-16786882
 ] 

Bill Bejeck commented on KAFKA-8062:


Hi [~andrey.v.volkov]

Thanks for reporting this. Can you share your topology so we can re-create the 
error locally?

Thanks,

Bill

> StateListener is not notified when StreamThread dies
> 
>
> Key: KAFKA-8062
> URL: https://issues.apache.org/jira/browse/KAFKA-8062
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.1.1
> Environment: Kafka 2.1.1, kafka-streams-scala version 2.1.1
>Reporter: Andrey Volkov
>Priority: Minor
>
> I want my application to react when streams die. I am trying to use 
> KafkaStreams.setStateListener and also checking KafkaStreams.state() from time 
> to time.
> The test scenario: Kafka is available, but there are no topics that my 
> Topology is supposed to use.
> I expect the streams to crash and the state listener to be notified about that, 
> with the new state ERROR. KafkaStreams.state() should also return ERROR.
> In reality the streams crash, but the KafkaStreams.state() method always 
> returns REBALANCING, and the last time the StateListener was called the new 
> state was also REBALANCING.
>  
> I believe the reason for this lies in two methods:
> org.apache.kafka.streams.KafkaStreams.StreamStateListener.onChange(), which 
> does not react to the state StreamThread.State.PENDING_SHUTDOWN,
> and
> org.apache.kafka.streams.processor.internals.StreamThread.RebalanceListener.onPartitionsAssigned,
>  which calls shutdown(), setting the state to PENDING_SHUTDOWN, and then calls
> streamThread.setStateListener(null), effectively removing the state listener, 
> so that the DEAD state of the thread never reaches the KafkaStreams object.
> Here is an extract from the logs:
> {{14:57:03.272 [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] 
> ERROR o.a.k.s.p.i.StreamsPartitionAssignor - stream-thread 
> [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1-consumer] 
> test-input-topic is unknown yet during rebalance, please make sure they have 
> been pre-created before starting the Streams application.}}
> {{14:57:03.283 [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] 
> INFO o.a.k.c.c.i.AbstractCoordinator - [Consumer 
> clientId=Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1-consumer, 
> groupId=Test] Successfully joined group with generation 1}}
> {{14:57:03.284 [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] 
> INFO o.a.k.c.c.i.ConsumerCoordinator - [Consumer 
> clientId=Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1-consumer, 
> groupId=Test] Setting newly assigned partitions []}}
> {{14:57:03.285 [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] 
> INFO o.a.k.s.p.i.StreamThread - stream-thread 
> [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] Informed to shut 
> down}}
> {{14:57:03.285 [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] 
> INFO o.a.k.s.p.i.StreamThread - stream-thread 
> [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] State transition 
> from PARTITIONS_REVOKED to PENDING_SHUTDOWN}}
> {{14:57:03.316 [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] 
> INFO o.a.k.s.p.i.StreamThread - stream-thread 
> [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] Shutting down}}
> {{14:57:03.317 [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] 
> INFO o.a.k.c.c.KafkaConsumer - [Consumer 
> clientId=Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1-restore-consumer,
>  groupId=] Unsubscribed all topics or patterns and assigned partitions}}
> {{14:57:03.317 [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] 
> INFO o.a.k.c.p.KafkaProducer - [Producer 
> clientId=Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1-producer] 
> Closing the Kafka producer with timeoutMillis = 9223372036854775807 ms.}}
> {{14:57:03.332 [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] 
> INFO o.a.k.s.p.i.StreamThread - stream-thread 
> [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] State transition 
> from PENDING_SHUTDOWN to DEAD}}
> {{14:57:03.332 [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] 
> INFO o.a.k.s.p.i.StreamThread - stream-thread 
> [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] Shutdown complete}}
> After this, calls to KafkaStreams.state() still return REBALANCING.
> There is a workaround with requesting KafkaStreams.localThreadsMetadata() and 
> checking each thread's state manually, but that seems very wrong.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-8036) Log dir reassignment on followers fails with FileNotFoundException for the leader epoch cache on leader election

2019-03-07 Thread Stanislav Kozlovski (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8036?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stanislav Kozlovski resolved KAFKA-8036.

Resolution: Fixed

I was wrong. After writing a test to verify the behavior 
([https://github.com/stanislavkozlovski/kafka/commit/b93a866e2318c47ff7538ddecd2fa38f29abd188]),
 I found that KAFKA-7897 fixed this issue across all versions it affected.

> Log dir reassignment on followers fails with FileNotFoundException for the 
> leader epoch cache on leader election
> 
>
> Key: KAFKA-8036
> URL: https://issues.apache.org/jira/browse/KAFKA-8036
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 1.1.1, 2.0.1
>Reporter: Stanislav Kozlovski
>Assignee: Stanislav Kozlovski
>Priority: Major
>
> When changing a partition's log directories for a follower broker, we move 
> all the data related to that partition to the other log dir (as per 
> [KIP-113|https://cwiki.apache.org/confluence/display/KAFKA/KIP-113:+Support+replicas+movement+between+log+directories]).
>  On a successful move, we rename the original directory by adding a suffix 
> consisting of a UUID and `-delete` (e.g. `test_log_dir` would be renamed to 
> `test_log_dir-0.32e77c96939140f9a56a49b75ad8ec8d-delete`).
> We copy every log file and [initialize a new leader epoch file 
> cache|https://github.com/apache/kafka/blob/0d56f1413557adabc736cae2dffcdc56a620403e/core/src/main/scala/kafka/log/Log.scala#L768].
>  The problem is that we do not update the associated `Replica` class' leader 
> epoch cache - it still points to the old `LeaderEpochFileCache` instance.
> This results in a FileNotFoundException when the broker is [elected as a 
> leader for the 
> partition|https://github.com/apache/kafka/blob/255f4a6effdc71c273691859cd26c4138acad778/core/src/main/scala/kafka/cluster/Partition.scala#L312].
>  This has the unintended side effect of marking the log directory as offline, 
> resulting in all partitions from that log directory becoming unavailable for 
> the specific broker.
> h2. Exception and logs
>  I reproduced this locally by running two brokers. The steps to reproduce: 
> {code:java}
> Create partition replicated across two brokers (A, B) with leader A
> Move partition leadership to B
> Alter log dirs on A
> Move partition leadership back to A{code}
> This results in a log directory structure on broker B similar to this:
> {code:java}
> ├── new_dir
> │   ├── cleaner-offset-checkpoint
> │   ├── log-start-offset-checkpoint
> │   ├── meta.properties
> │   ├── recovery-point-offset-checkpoint
> │   ├── replication-offset-checkpoint
> │   └── test_log_dir-0
> │   ├── .index
> │   ├── .log
> │   ├── .timeindex
> │   └── leader-epoch-checkpoint
> └── old_dir
>   ├── cleaner-offset-checkpoint
>   ├── log-start-offset-checkpoint
>   ├── meta.properties
>   ├── recovery-point-offset-checkpoint
>   ├── replication-offset-checkpoint
>   └── test_log_dir-0.32e77c96939140f9a56a49b75ad8ec8d-delete
> ├── .index
> ├── .log
> ├── .timeindex
> ├── 0009.snapshot
> └── leader-epoch-checkpoint
> {code}
>  
>  
> {code:java}
> [2019-03-04 15:36:56,854] INFO [Partition test_log_dir-0 broker=0] 
> test_log_dir-0 starts at Leader Epoch 3 from offset 9. Previous Leader Epoch 
> was: 2 (kafka.cluster.Partition) [2019-03-04 15:36:56,855] WARN 
> [LeaderEpochCache test_log_dir-0] New epoch entry EpochEntry(epoch=3, 
> startOffset=9) caused truncation of conflicting entries 
> ListBuffer(EpochEntry(epoch=1, startOffset=9)). Cache now contains 2 entries. 
> (kafka.server.epoch.LeaderEpochFileCache) [2019-03-04 15:36:56,857] ERROR 
> Error while writing to checkpoint file 
> /logs/old_dir/test_log_dir-0/leader-epoch-checkpoint 
> (kafka.server.LogDirFailureChannel) java.io.FileNotFoundException: 
> /logs/old_dir/test_log_dir-0/leader-epoch-checkpoint.tmp (No such file or 
> directory) at java.base/java.io.FileOutputStream.open0(Native Method) at 
> java.base/java.io.FileOutputStream.open(FileOutputStream.java:299) at 
> java.base/java.io.FileOutputStream.<init>(FileOutputStream.java:238) at 
> java.base/java.io.FileOutputStream.<init>(FileOutputStream.java:188) at 
> kafka.server.checkpoints.CheckpointFile.liftedTree1$1(CheckpointFile.scala:52)
>  at kafka.server.checkpoints.CheckpointFile.write(CheckpointFile.scala:50) at 
> kafka.server.checkpoints.LeaderEpochCheckpointFile.write(LeaderEpochCheckpointFile.scala:64)
>  at 
> kafka.server.epoch.LeaderEpochFileCache.kafka$server$epoch$LeaderEpochFileCache$$flush(LeaderEpochFileCache.scala:219)
>  at 
> kafka.server.epoch.LeaderEpochFileCache$$anonfun$assi

[jira] [Comment Edited] (KAFKA-8036) Log dir reassignment on followers fails with FileNotFoundException for the leader epoch cache on leader election

2019-03-07 Thread Stanislav Kozlovski (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8036?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16786939#comment-16786939
 ] 

Stanislav Kozlovski edited comment on KAFKA-8036 at 3/7/19 4:23 PM:


I was wrong. After writing a test to verify the behavior 
([https://github.com/stanislavkozlovski/kafka/commit/b93a866e2318c47ff7538ddecd2fa38f29abd188]),
 I found that KAFKA-7897 fixed this issue across all versions it affected.


was (Author: enether):
I was wrong. After writing a test to verify the behavior 
([https://github.com/stanislavkozlovski/kafka/commit/b93a866e2318c47ff7538ddecd2fa38f29abd188),]
 I found that KAFKA-7897 fixed this issue across all versions it affected.

> Log dir reassignment on followers fails with FileNotFoundException for the 
> leader epoch cache on leader election
> 
>
> Key: KAFKA-8036
> URL: https://issues.apache.org/jira/browse/KAFKA-8036
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 1.1.1, 2.0.1
>Reporter: Stanislav Kozlovski
>Assignee: Stanislav Kozlovski
>Priority: Major
>
> When changing a partition's log directories for a follower broker, we move 
> all the data related to that partition to the other log dir (as per 
> [KIP-113|https://cwiki.apache.org/confluence/display/KAFKA/KIP-113:+Support+replicas+movement+between+log+directories]).
>  On a successful move, we rename the original directory by adding a suffix 
> consisting of a UUID and `-delete`. (e.g. `test_log_dir` would be renamed to 
> `test_log_dir-0.32e77c96939140f9a56a49b75ad8ec8d-delete`)
> We copy every log file and [initialize a new leader epoch file 
> cache|https://github.com/apache/kafka/blob/0d56f1413557adabc736cae2dffcdc56a620403e/core/src/main/scala/kafka/log/Log.scala#L768].
>  The problem is that we do not update the associated `Replica` class' leader 
> epoch cache - it still points to the old `LeaderEpochFileCache` instance.
> This results in a FileNotFoundException when the broker is [elected as a 
> leader for the 
> partition|https://github.com/apache/kafka/blob/255f4a6effdc71c273691859cd26c4138acad778/core/src/main/scala/kafka/cluster/Partition.scala#L312].
>  This has the unintended side effect of marking the log directory as offline, 
> resulting in all partitions from that log directory becoming unavailable for 
> the specific broker.
> h2. Exception and logs
>  I reproduced this locally by running two brokers. The steps to reproduce: 
> {code:java}
> Create partition replicated across two brokers (A, B) with leader A
> Move partition leadership to B
> Alter log dirs on A
> Move partition leadership back to A{code}
> This results in a log directory structure on broker B similar to this:
> {code:java}
> ├── new_dir
> │   ├── cleaner-offset-checkpoint
> │   ├── log-start-offset-checkpoint
> │   ├── meta.properties
> │   ├── recovery-point-offset-checkpoint
> │   ├── replication-offset-checkpoint
> │   └── test_log_dir-0
> │   ├── .index
> │   ├── .log
> │   ├── .timeindex
> │   └── leader-epoch-checkpoint
> └── old_dir
>   ├── cleaner-offset-checkpoint
>   ├── log-start-offset-checkpoint
>   ├── meta.properties
>   ├── recovery-point-offset-checkpoint
>   ├── replication-offset-checkpoint
>   └── test_log_dir-0.32e77c96939140f9a56a49b75ad8ec8d-delete
> ├── .index
> ├── .log
> ├── .timeindex
> ├── 0009.snapshot
> └── leader-epoch-checkpoint
> {code}
>  
>  
> {code:java}
> [2019-03-04 15:36:56,854] INFO [Partition test_log_dir-0 broker=0] 
> test_log_dir-0 starts at Leader Epoch 3 from offset 9. Previous Leader Epoch 
> was: 2 (kafka.cluster.Partition) [2019-03-04 15:36:56,855] WARN 
> [LeaderEpochCache test_log_dir-0] New epoch entry EpochEntry(epoch=3, 
> startOffset=9) caused truncation of conflicting entries 
> ListBuffer(EpochEntry(epoch=1, startOffset=9)). Cache now contains 2 entries. 
> (kafka.server.epoch.LeaderEpochFileCache) [2019-03-04 15:36:56,857] ERROR 
> Error while writing to checkpoint file 
> /logs/old_dir/test_log_dir-0/leader-epoch-checkpoint 
> (kafka.server.LogDirFailureChannel) java.io.FileNotFoundException: 
> /logs/old_dir/test_log_dir-0/leader-epoch-checkpoint.tmp (No such file or 
> directory) at java.base/java.io.FileOutputStream.open0(Native Method) at 
> java.base/java.io.FileOutputStream.open(FileOutputStream.java:299) at 
> java.base/java.io.FileOutputStream.(FileOutputStream.java:238) at 
> java.base/java.io.FileOutputStream.(FileOutputStream.java:188) at 
> kafka.server.che

[jira] [Commented] (KAFKA-8062) StateListener is not notified when StreamThread dies

2019-03-07 Thread Andrey Volkov (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8062?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16786945#comment-16786945
 ] 

Andrey Volkov commented on KAFKA-8062:
--

Hi [~bbejeck], here it is:

 

{{object Application extends App {}}
{{  val props = new Properties()}}
{{  props.put(StreamsConfig.APPLICATION_ID_CONFIG, "Test")}}
{{  props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, 
"test-kafka-server:9092")}}

{{  implicit val consumed = Consumed.`with`(new StringSerde, new StringSerde)}}
{{  implicit val produced = Produced.`with`(new StringSerde, new StringSerde)}}

{{  val builder = new StreamsBuilder()}}
{{  builder.stream("test-input-topic").to("test-output-topic")}}
{{  val topology = builder.build()}}

{{  val streams = new KafkaStreams(topology, props)}}

{{  streams.setStateListener((oldState, newState) => {}}
{{    println(s"State change: old = $oldState, new = $newState")}}
{{  })}}

{{  streams.start()}}

{{  while (true) {}}
{{    println(s"State: ${streams.state()}")}}
{{    Thread.sleep(5000)}}
{{  }}}
{{}}}
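
For reference, a self-contained version of the snippet above (a minimal 
sketch, assuming kafka-streams-scala 2.1.1 on Scala 2.12, with the reporter's 
placeholder bootstrap server and topic names). It also registers an uncaught 
exception handler; that handler only fires when a StreamThread dies from an 
uncaught exception, so it may not fire in this scenario where the thread is 
informed to shut down. Note that KafkaStreams.StateListener.onChange passes 
the new state as its first parameter.

{code:java}
import java.util.Properties

import org.apache.kafka.common.serialization.Serdes.StringSerde
import org.apache.kafka.streams.kstream.{Consumed, Produced}
import org.apache.kafka.streams.scala.StreamsBuilder
import org.apache.kafka.streams.{KafkaStreams, StreamsConfig}

object Application extends App {
  val props = new Properties()
  props.put(StreamsConfig.APPLICATION_ID_CONFIG, "Test")
  props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "test-kafka-server:9092")

  // Explicit implicits, as in the original snippet
  implicit val consumed: Consumed[String, String] =
    Consumed.`with`(new StringSerde, new StringSerde)
  implicit val produced: Produced[String, String] =
    Produced.`with`(new StringSerde, new StringSerde)

  val builder = new StreamsBuilder()
  builder.stream[String, String]("test-input-topic").to("test-output-topic")
  val streams = new KafkaStreams(builder.build(), props)

  // onChange(newState, oldState): the new state is the first parameter
  streams.setStateListener(new KafkaStreams.StateListener {
    override def onChange(newState: KafkaStreams.State,
                          oldState: KafkaStreams.State): Unit =
      println(s"State change: old = $oldState, new = $newState")
  })

  // Fires only if a StreamThread dies from an uncaught exception
  streams.setUncaughtExceptionHandler(new Thread.UncaughtExceptionHandler {
    override def uncaughtException(t: Thread, e: Throwable): Unit =
      println(s"Thread ${t.getName} died: $e")
  })

  streams.start()

  while (true) {
    println(s"State: ${streams.state()}")
    Thread.sleep(5000)
  }
}
{code}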

> StateListener is not notified when StreamThread dies
> 
>
> Key: KAFKA-8062
> URL: https://issues.apache.org/jira/browse/KAFKA-8062
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.1.1
> Environment: Kafka 2.1.1, kafka-streams-scala version 2.1.1
>Reporter: Andrey Volkov
>Priority: Minor
>
> I want my application to react when streams die. Trying to use 
> KafkaStreams.setStateListener. Also checking KafkaStreams.state() from time 
> to time.
> The test scenario: Kafka is available, but there are no topics that my 
> Topology is supposed to use.
> I expect streams to crash and the state listener to be notified about that, 
> with the new state ERROR. KafkaStreams.state() should also return ERROR.
> In reality the streams crash, but the KafkaStreams.state() method always 
> returns REBALANCING and the last time the StateListener was called, the new 
> state was also REBALANCING. 
>  
> I believe the reason for this lies in two methods:
> org.apache.kafka.streams.KafkaStreams.StreamStateListener.onChange(), which 
> does not react to the state StreamsThread.State.PENDING_SHUTDOWN,
> and
> org.apache.kafka.streams.processor.internals.StreamThread.RebalanceListener.onPartitionsAssigned,
>  which calls shutdown(), setting the state to PENDING_SHUTDOWN, and then
> streamThread.setStateListener(null), effectively removing the state listener, 
> so that the thread's DEAD state never reaches the KafkaStreams object.
> Here is an extract from the logs:
> {{14:57:03.272 [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] 
> ERROR o.a.k.s.p.i.StreamsPartitionAssignor - stream-thread 
> [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1-consumer] 
> test-input-topic is unknown yet during rebalance, please make sure they have 
> been pre-created before starting the Streams application.}}
> {{14:57:03.283 [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] 
> INFO o.a.k.c.c.i.AbstractCoordinator - [Consumer 
> clientId=Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1-consumer, 
> groupId=Test] Successfully joined group with generation 1}}
> {{14:57:03.284 [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] 
> INFO o.a.k.c.c.i.ConsumerCoordinator - [Consumer 
> clientId=Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1-consumer, 
> groupId=Test] Setting newly assigned partitions []}}
> {{14:57:03.285 [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] 
> INFO o.a.k.s.p.i.StreamThread - stream-thread 
> [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] Informed to shut 
> down}}
> {{14:57:03.285 [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] 
> INFO o.a.k.s.p.i.StreamThread - stream-thread 
> [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] State transition 
> from PARTITIONS_REVOKED to PENDING_SHUTDOWN}}
> {{14:57:03.316 [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] 
> INFO o.a.k.s.p.i.StreamThread - stream-thread 
> [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] Shutting down}}
> {{14:57:03.317 [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] 
> INFO o.a.k.c.c.KafkaConsumer - [Consumer 
> clientId=Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1-restore-consumer,
>  groupId=] Unsubscribed all topics or patterns and assigned partitions}}
> {{14:57:03.317 [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] 
> INFO o.a.k.c.p.KafkaProducer - [Producer 
> clientId=Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1-producer] 
> Closing the Kafka producer with timeoutMillis = 9223372036854775807 ms.}}
> {{14:57:03.332 [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] 
> INFO o.a.k.s.p.i.StreamThread - stre

[jira] [Comment Edited] (KAFKA-8062) StateListener is not notified when StreamThread dies

2019-03-07 Thread Andrey Volkov (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8062?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16786945#comment-16786945
 ] 

Andrey Volkov edited comment on KAFKA-8062 at 3/7/19 4:26 PM:
--

Hi [~bbejeck], here it is:

 

{{object Application extends App {}}
 {{  val props = new Properties()}}
 {{  props.put(StreamsConfig.APPLICATION_ID_CONFIG, "Test")}}
 {{  props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, 
"test-kafka-server:9092")}}

{{  implicit val consumed = Consumed.`with`(new StringSerde, new StringSerde)}}
 {{  implicit val produced = Produced.`with`(new StringSerde, new StringSerde)}}

{{  val builder = new StreamsBuilder()}}
 {{  builder.stream("test-input-topic").to("test-output-topic")}}
 {{  val topology = builder.build()}}

{{  val streams = new KafkaStreams(topology, props)}}

{{  streams.setStateListener((oldState, newState) => {}}
 {{    println(s"State change: old = $oldState, new = $newState")}}
 {{  })}}

{{  streams.start()}}

{{  while (true) {}}
{{    println(s"State: ${streams.state()}")}}
{{    Thread.sleep(5000)}}
{{  }}}
{{}}}


was (Author: andrey.v.volkov):
Hi [~bbejeck], here it is:

 

{{object Application extends App {}}
{{  val props = new Properties()}}
{{  props.put(StreamsConfig.APPLICATION_ID_CONFIG, "Test")}}
{{  props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, 
"test-kafka-server:9092")}}

{{  implicit val consumed = Consumed.`with`(new StringSerde, new StringSerde)}}
{{  implicit val produced = Produced.`with`(new StringSerde, new StringSerde)}}

{{  val builder = new StreamsBuilder()}}
{{  builder.stream("test-input-topic").to("test-output-topic")}}
{{  val topology = builder.build()}}

{{  val streams = new KafkaStreams(topology, props)}}

{{  streams.setStateListener((oldState, newState) => {}}
{{    println(s"State change: old = $oldState, new = $newState")}}
{{  })}}

{{  streams.start()}}

{{  while (true) {}}
{{    println(s"State: ${streams.state()}")}}
{{    Thread.sleep(5000)}}
{{  }}}
{{}}}

> StateListener is not notified when StreamThread dies
> 
>
> Key: KAFKA-8062
> URL: https://issues.apache.org/jira/browse/KAFKA-8062
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.1.1
> Environment: Kafka 2.1.1, kafka-streams-scala version 2.1.1
>Reporter: Andrey Volkov
>Priority: Minor
>
> I want my application to react when streams die. Trying to use 
> KafkaStreams.setStateListener. Also checking KafkaStreams.state() from time 
> to time.
> The test scenario: Kafka is available, but there are no topics that my 
> Topology is supposed to use.
> I expect streams to crash and the state listener to be notified about that, 
> with the new state ERROR. KafkaStreams.state() should also return ERROR.
> In reality the streams crash, but the KafkaStreams.state() method always 
> returns REBALANCING and the last time the StateListener was called, the new 
> state was also REBALANCING. 
>  
> I believe the reason for this lies in two methods:
> org.apache.kafka.streams.KafkaStreams.StreamStateListener.onChange(), which 
> does not react to the state StreamsThread.State.PENDING_SHUTDOWN,
> and
> org.apache.kafka.streams.processor.internals.StreamThread.RebalanceListener.onPartitionsAssigned,
>  which calls shutdown(), setting the state to PENDING_SHUTDOWN, and then
> streamThread.setStateListener(null), effectively removing the state listener, 
> so that the thread's DEAD state never reaches the KafkaStreams object.
> Here is an extract from the logs:
> {{14:57:03.272 [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] 
> ERROR o.a.k.s.p.i.StreamsPartitionAssignor - stream-thread 
> [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1-consumer] 
> test-input-topic is unknown yet during rebalance, please make sure they have 
> been pre-created before starting the Streams application.}}
> {{14:57:03.283 [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] 
> INFO o.a.k.c.c.i.AbstractCoordinator - [Consumer 
> clientId=Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1-consumer, 
> groupId=Test] Successfully joined group with generation 1}}
> {{14:57:03.284 [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] 
> INFO o.a.k.c.c.i.ConsumerCoordinator - [Consumer 
> clientId=Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1-consumer, 
> groupId=Test] Setting newly assigned partitions []}}
> {{14:57:03.285 [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] 
> INFO o.a.k.s.p.i.StreamThread - stream-thread 
> [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] Informed to shut 
> down}}
> {{14:57:03.285 [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] 
> INFO o.a.k.s.p.i.StreamThread - stream-thread 
> [Test-749d56c7-505d-46a4-8dbd-2c756b353a

[jira] [Comment Edited] (KAFKA-8062) StateListener is not notified when StreamThread dies

2019-03-07 Thread Andrey Volkov (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8062?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16786945#comment-16786945
 ] 

Andrey Volkov edited comment on KAFKA-8062 at 3/7/19 4:27 PM:
--

Hi [~bbejeck], here is the whole code:

 

{{object Application extends App {}}
 {{  val props = new Properties()}}
 {{  props.put(StreamsConfig.APPLICATION_ID_CONFIG, "Test")}}
 {{  props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, 
"test-kafka-server:9092")}}

{{  implicit val consumed = Consumed.`with`(new StringSerde, new StringSerde)}}
 {{  implicit val produced = Produced.`with`(new StringSerde, new StringSerde)}}

{{  val builder = new StreamsBuilder()}}
 {{  builder.stream("test-input-topic").to("test-output-topic")}}
 {{  val topology = builder.build()}}

{{  val streams = new KafkaStreams(topology, props)}}

{{  streams.setStateListener((oldState, newState) => {}}
 {{    println(s"State change: old = $oldState, new = $newState")}}
 {{  })}}

{{  streams.start()}}

{{  while (true) {}}
 {{    println(s"State: ${streams.state()}")}}
 {{    Thread.sleep(5000)}}
 }}


was (Author: andrey.v.volkov):
Hi [~bbejeck], here it is:

 

{{object Application extends App {}}
 {{  val props = new Properties()}}
 {{  props.put(StreamsConfig.APPLICATION_ID_CONFIG, "Test")}}
 {{  props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, 
"test-kafka-server:9092")}}

{{  implicit val consumed = Consumed.`with`(new StringSerde, new StringSerde)}}
 {{  implicit val produced = Produced.`with`(new StringSerde, new StringSerde)}}

{{  val builder = new StreamsBuilder()}}
 {{  builder.stream("test-input-topic").to("test-output-topic")}}
 {{  val topology = builder.build()}}

{{  val streams = new KafkaStreams(topology, props)}}

{{  streams.setStateListener((oldState, newState) => {}}
 {{    println(s"State change: old = $oldState, new = $newState")}}
 {{  })}}

{{  streams.start()}}

{{  while (true) {}}
 {{    println(s"State: ${streams.state()}")}}
 {{    Thread.sleep(5000)}}
}}

> StateListener is not notified when StreamThread dies
> 
>
> Key: KAFKA-8062
> URL: https://issues.apache.org/jira/browse/KAFKA-8062
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.1.1
> Environment: Kafka 2.1.1, kafka-streams-scala version 2.1.1
>Reporter: Andrey Volkov
>Priority: Minor
>
> I want my application to react when streams die. Trying to use 
> KafkaStreams.setStateListener. Also checking KafkaStreams.state() from time 
> to time.
> The test scenario: Kafka is available, but there are no topics that my 
> Topology is supposed to use.
> I expect streams to crash and the state listener to be notified about that, 
> with the new state ERROR. KafkaStreams.state() should also return ERROR.
> In reality the streams crash, but the KafkaStreams.state() method always 
> returns REBALANCING and the last time the StateListener was called, the new 
> state was also REBALANCING. 
>  
> I believe the reason for this lies in two methods:
> org.apache.kafka.streams.KafkaStreams.StreamStateListener.onChange(), which 
> does not react to the state StreamsThread.State.PENDING_SHUTDOWN,
> and
> org.apache.kafka.streams.processor.internals.StreamThread.RebalanceListener.onPartitionsAssigned,
>  which calls shutdown(), setting the state to PENDING_SHUTDOWN, and then
> streamThread.setStateListener(null), effectively removing the state listener, 
> so that the thread's DEAD state never reaches the KafkaStreams object.
> Here is an extract from the logs:
> {{14:57:03.272 [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] 
> ERROR o.a.k.s.p.i.StreamsPartitionAssignor - stream-thread 
> [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1-consumer] 
> test-input-topic is unknown yet during rebalance, please make sure they have 
> been pre-created before starting the Streams application.}}
> {{14:57:03.283 [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] 
> INFO o.a.k.c.c.i.AbstractCoordinator - [Consumer 
> clientId=Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1-consumer, 
> groupId=Test] Successfully joined group with generation 1}}
> {{14:57:03.284 [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] 
> INFO o.a.k.c.c.i.ConsumerCoordinator - [Consumer 
> clientId=Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1-consumer, 
> groupId=Test] Setting newly assigned partitions []}}
> {{14:57:03.285 [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] 
> INFO o.a.k.s.p.i.StreamThread - stream-thread 
> [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] Informed to shut 
> down}}
> {{14:57:03.285 [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] 
> INFO o.a.k.s.p.i.StreamThread - stream-thread 
> [Test-749d56c7-505d-46a4-8dbd-2c756b3

[jira] [Commented] (KAFKA-7950) Kafka tools GetOffsetShell -time description

2019-03-07 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7950?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16786946#comment-16786946
 ] 

ASF GitHub Bot commented on KAFKA-7950:
---

omkreddy commented on pull request #6357: KAFKA-7950: Kafka tools 
GetOffsetShell -time description
URL: https://github.com/apache/kafka/pull/6357
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Kafka tools GetOffsetShell -time description 
> -
>
> Key: KAFKA-7950
> URL: https://issues.apache.org/jira/browse/KAFKA-7950
> Project: Kafka
>  Issue Type: Wish
>  Components: tools
>Affects Versions: 2.1.0
>Reporter: Kartik
>Assignee: Kartik
>Priority: Trivial
> Fix For: 2.3.0
>
>
> In the Kafka GetOffsetShell tool, the --time option's description should 
> explain what happens when the given timestamp is greater than the most 
> recently committed timestamp.
>  
> Expected: "If timestamp value provided is greater than recently committed 
> timestamp then no offset is returned. "
>  
>  
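
As an illustration of the behaviour this ticket wants documented (a sketch 
for clarity only; the broker address and topic name are placeholders): a 
closely related lookup can be reproduced with the Java consumer's 
offsetsForTimes call, which returns null for a partition when no message has 
a timestamp at or after the requested one.

{code:java}
import java.util.{Collections, Properties}

import org.apache.kafka.clients.consumer.{ConsumerConfig, KafkaConsumer}
import org.apache.kafka.common.TopicPartition
import org.apache.kafka.common.serialization.StringDeserializer

object OffsetForFutureTimestamp extends App {
  val props = new Properties()
  props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092")
  props.put(ConsumerConfig.GROUP_ID_CONFIG, "offset-probe")

  val consumer = new KafkaConsumer[String, String](
    props, new StringDeserializer, new StringDeserializer)
  val tp = new TopicPartition("test", 0)

  // Ask for an offset at a timestamp one hour in the future: the map value is
  // null because no message has a timestamp at or after the requested one,
  // which is exactly the case the --time help text should mention.
  val future = java.lang.Long.valueOf(System.currentTimeMillis() + 3600 * 1000L)
  val result = consumer.offsetsForTimes(
    Collections.singletonMap[TopicPartition, java.lang.Long](tp, future))
  println(s"Offset for future timestamp: ${result.get(tp)}") // prints "null"

  consumer.close()
}
{code}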



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (KAFKA-8062) StateListener is not notified when StreamThread dies

2019-03-07 Thread Andrey Volkov (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8062?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16786945#comment-16786945
 ] 

Andrey Volkov edited comment on KAFKA-8062 at 3/7/19 4:26 PM:
--

Hi [~bbejeck], here it is:

 

{{object Application extends App {}}
 {{  val props = new Properties()}}
 {{  props.put(StreamsConfig.APPLICATION_ID_CONFIG, "Test")}}
 {{  props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, 
"test-kafka-server:9092")}}

{{  implicit val consumed = Consumed.`with`(new StringSerde, new StringSerde)}}
 {{  implicit val produced = Produced.`with`(new StringSerde, new StringSerde)}}

{{  val builder = new StreamsBuilder()}}
 {{  builder.stream("test-input-topic").to("test-output-topic")}}
 {{  val topology = builder.build()}}

{{  val streams = new KafkaStreams(topology, props)}}

{{  streams.setStateListener((oldState, newState) => {}}
 {{    println(s"State change: old = $oldState, new = $newState")}}
 {{  })}}

{{  streams.start()}}

{{  while (true) {}}
 {{    println(s"State: ${streams.state()}")}}
 {{    Thread.sleep(5000)}}
}}


was (Author: andrey.v.volkov):
Hi [~bbejeck], here it is:

 

{{object Application extends App {}}
 {{  val props = new Properties()}}
 {{  props.put(StreamsConfig.APPLICATION_ID_CONFIG, "Test")}}
 {{  props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, 
"test-kafka-server:9092")}}

{{  implicit val consumed = Consumed.`with`(new StringSerde, new StringSerde)}}
 {{  implicit val produced = Produced.`with`(new StringSerde, new StringSerde)}}

{{  val builder = new StreamsBuilder()}}
 {{  builder.stream("test-input-topic").to("test-output-topic")}}
 {{  val topology = builder.build()}}

{{  val streams = new KafkaStreams(topology, props)}}

{{  streams.setStateListener((oldState, newState) => {}}
 {{    println(s"State change: old = $oldState, new = $newState")}}
 {{  })}}

{{  streams.start()}}

{{  while (true) {}}
{{    println(s"State: ${streams.state()}")}}
{{    Thread.sleep(5000)}}
{{  }}}
{{}}}

> StateListener is not notified when StreamThread dies
> 
>
> Key: KAFKA-8062
> URL: https://issues.apache.org/jira/browse/KAFKA-8062
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.1.1
> Environment: Kafka 2.1.1, kafka-streams-scala version 2.1.1
>Reporter: Andrey Volkov
>Priority: Minor
>
> I want my application to react when streams die. Trying to use 
> KafkaStreams.setStateListener. Also checking KafkaStreams.state() from time 
> to time.
> The test scenario: Kafka is available, but there are no topics that my 
> Topology is supposed to use.
> I expect streams to crash and the state listener to be notified about that, 
> with the new state ERROR. KafkaStreams.state() should also return ERROR.
> In reality the streams crash, but the KafkaStreams.state() method always 
> returns REBALANCING and the last time the StateListener was called, the new 
> state was also REBALANCING. 
>  
> I believe the reason for this lies in two methods:
> org.apache.kafka.streams.KafkaStreams.StreamStateListener.onChange(), which 
> does not react to the state StreamsThread.State.PENDING_SHUTDOWN,
> and
> org.apache.kafka.streams.processor.internals.StreamThread.RebalanceListener.onPartitionsAssigned,
>  which calls shutdown(), setting the state to PENDING_SHUTDOWN, and then
> streamThread.setStateListener(null), effectively removing the state listener, 
> so that the thread's DEAD state never reaches the KafkaStreams object.
> Here is an extract from the logs:
> {{14:57:03.272 [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] 
> ERROR o.a.k.s.p.i.StreamsPartitionAssignor - stream-thread 
> [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1-consumer] 
> test-input-topic is unknown yet during rebalance, please make sure they have 
> been pre-created before starting the Streams application.}}
> {{14:57:03.283 [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] 
> INFO o.a.k.c.c.i.AbstractCoordinator - [Consumer 
> clientId=Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1-consumer, 
> groupId=Test] Successfully joined group with generation 1}}
> {{14:57:03.284 [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] 
> INFO o.a.k.c.c.i.ConsumerCoordinator - [Consumer 
> clientId=Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1-consumer, 
> groupId=Test] Setting newly assigned partitions []}}
> {{14:57:03.285 [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] 
> INFO o.a.k.s.p.i.StreamThread - stream-thread 
> [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] Informed to shut 
> down}}
> {{14:57:03.285 [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] 
> INFO o.a.k.s.p.i.StreamThread - stream-thread 
> [Test-749d56c7-505d-46a4-8dbd-2c756b353ad

[jira] [Comment Edited] (KAFKA-8036) Log dir reassignment on followers fails with FileNotFoundException for the leader epoch cache on leader election

2019-03-07 Thread Stanislav Kozlovski (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8036?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16786939#comment-16786939
 ] 

Stanislav Kozlovski edited comment on KAFKA-8036 at 3/7/19 4:23 PM:


I was wrong. After writing a test to verify the behavior 
([https://github.com/stanislavkozlovski/kafka/commit/b93a866e2318c47ff7538ddecd2fa38f29abd188]),
 I found that KAFKA-7897 fixed this issue across all versions it affected.

Resolving this...


was (Author: enether):
I was wrong. After writing a test to verify the behavior 
([https://github.com/stanislavkozlovski/kafka/commit/b93a866e2318c47ff7538ddecd2fa38f29abd188]),
 I found that KAFKA-7897 fixed this issue across all versions it affected.

> Log dir reassignment on followers fails with FileNotFoundException for the 
> leader epoch cache on leader election
> 
>
> Key: KAFKA-8036
> URL: https://issues.apache.org/jira/browse/KAFKA-8036
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 1.1.1, 2.0.1
>Reporter: Stanislav Kozlovski
>Assignee: Stanislav Kozlovski
>Priority: Major
>
> When changing a partition's log directories for a follower broker, we move 
> all the data related to that partition to the other log dir (as per 
> [KIP-113|https://cwiki.apache.org/confluence/display/KAFKA/KIP-113:+Support+replicas+movement+between+log+directories]).
>  On a successful move, we rename the original directory by adding a suffix 
> consisting of a UUID and `-delete`. (e.g. `test_log_dir` would be renamed to 
> `test_log_dir-0.32e77c96939140f9a56a49b75ad8ec8d-delete`)
> We copy every log file and [initialize a new leader epoch file 
> cache|https://github.com/apache/kafka/blob/0d56f1413557adabc736cae2dffcdc56a620403e/core/src/main/scala/kafka/log/Log.scala#L768].
>  The problem is that we do not update the associated `Replica` class' leader 
> epoch cache - it still points to the old `LeaderEpochFileCache` instance.
> This results in a FileNotFoundException when the broker is [elected as a 
> leader for the 
> partition|https://github.com/apache/kafka/blob/255f4a6effdc71c273691859cd26c4138acad778/core/src/main/scala/kafka/cluster/Partition.scala#L312].
>  This has the unintended side effect of marking the log directory as offline, 
> resulting in all partitions from that log directory becoming unavailable for 
> the specific broker.
> h2. Exception and logs
>  I reproduced this locally by running two brokers. The steps to reproduce: 
> {code:java}
> Create partition replicated across two brokers (A, B) with leader A
> Move partition leadership to B
> Alter log dirs on A
> Move partition leadership back to A{code}
> This results in a log directory structure on broker B similar to this:
> {code:java}
> ├── new_dir
> │   ├── cleaner-offset-checkpoint
> │   ├── log-start-offset-checkpoint
> │   ├── meta.properties
> │   ├── recovery-point-offset-checkpoint
> │   ├── replication-offset-checkpoint
> │   └── test_log_dir-0
> │   ├── .index
> │   ├── .log
> │   ├── .timeindex
> │   └── leader-epoch-checkpoint
> └── old_dir
>   ├── cleaner-offset-checkpoint
>   ├── log-start-offset-checkpoint
>   ├── meta.properties
>   ├── recovery-point-offset-checkpoint
>   ├── replication-offset-checkpoint
>   └── test_log_dir-0.32e77c96939140f9a56a49b75ad8ec8d-delete
> ├── .index
> ├── .log
> ├── .timeindex
> ├── 0009.snapshot
> └── leader-epoch-checkpoint
> {code}
>  
>  
> {code:java}
> [2019-03-04 15:36:56,854] INFO [Partition test_log_dir-0 broker=0] 
> test_log_dir-0 starts at Leader Epoch 3 from offset 9. Previous Leader Epoch 
> was: 2 (kafka.cluster.Partition) [2019-03-04 15:36:56,855] WARN 
> [LeaderEpochCache test_log_dir-0] New epoch entry EpochEntry(epoch=3, 
> startOffset=9) caused truncation of conflicting entries 
> ListBuffer(EpochEntry(epoch=1, startOffset=9)). Cache now contains 2 entries. 
> (kafka.server.epoch.LeaderEpochFileCache) [2019-03-04 15:36:56,857] ERROR 
> Error while writing to checkpoint file 
> /logs/old_dir/test_log_dir-0/leader-epoch-checkpoint 
> (kafka.server.LogDirFailureChannel) java.io.FileNotFoundException: 
> /logs/old_dir/test_log_dir-0/leader-epoch-checkpoint.tmp (No such file or 
> directory) at java.base/java.io.FileOutputStream.open0(Native Method) at 
> java.base/java.io.FileOutputStream.open(FileOutputStream.java:299) at 
> java.base/java.io.FileOutputStream.(FileO

[jira] [Updated] (KAFKA-8061) Use forceClose flag to check in the loop inside maybeWaitForProducerId.

2019-03-07 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8061?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar updated KAFKA-8061:
-
Description: 
In KAFKA-5503, we have added a check 
(https://github.com/apache/kafka/pull/5881) for `running` flag in the loop 
inside maybeWaitForProducerId.  This is to handle concurrent call to Sender 
close(), while we attempt to get the ProducerId.

This created a corner case where sender thread gets blocked, if we had 
concurrent producerId reset and shutdown call. The proposed fix is to check the 
forceClose flag in the loop inside maybeWaitForProducerId.

 

 

  was:
In KAFKA-5503, we have added a check 
(https://github.com/apache/kafka/pull/5881) for `running` flag in the loop 
inside maybeWaitForProducerId.  This is to avoid blocking sender thread 
shutdown call, while we attempt to get the ProducerId.

This created a corner case where sender thread gets blocked, if we had 
concurrent producerId reset and shutdown call. The proposed fix is to check the 
forceClose flag in the loop inside maybeWaitForProducerId.

 

 


> Use forceClose flag to check in the loop inside maybeWaitForProducerId.
> ---
>
> Key: KAFKA-8061
> URL: https://issues.apache.org/jira/browse/KAFKA-8061
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 2.1.1
>Reporter: Manikumar
>Assignee: Manikumar
>Priority: Major
>
> In KAFKA-5503, we have added a check 
> (https://github.com/apache/kafka/pull/5881) for `running` flag in the loop 
> inside maybeWaitForProducerId.  This is to handle concurrent call to Sender 
> close(), while we attempt to get the ProducerId.
> This created a corner case where sender thread gets blocked, if we had 
> concurrent producerId reset and shutdown call. The proposed fix is to check 
> the forceClose flag in the loop inside maybeWaitForProducerId.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-7950) Kafka tools GetOffsetShell -time description

2019-03-07 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7950?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-7950.
--
   Resolution: Fixed
Fix Version/s: 2.3.0

Issue resolved by pull request 6357
[https://github.com/apache/kafka/pull/6357]

> Kafka tools GetOffsetShell -time description 
> -
>
> Key: KAFKA-7950
> URL: https://issues.apache.org/jira/browse/KAFKA-7950
> Project: Kafka
>  Issue Type: Wish
>  Components: tools
>Affects Versions: 2.1.0
>Reporter: Kartik
>Assignee: Kartik
>Priority: Trivial
> Fix For: 2.3.0
>
>
> In the Kafka GetOffsetShell tool, the --time option's description should 
> explain what happens when the given timestamp is greater than the most 
> recently committed timestamp.
>  
> Expected: "If timestamp value provided is greater than recently committed 
> timestamp then no offset is returned. "
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-8061) Handle ProducerId reset while closing Sender thread.

2019-03-07 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8061?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar updated KAFKA-8061:
-
Description: 
In KAFKA-5503, we have added a check 
(https://github.com/apache/kafka/pull/5881) for `running` flag in the loop 
inside maybeWaitForProducerId.  This is to handle concurrent call to Sender 
close(), while we attempt to get the ProducerId.
This avoids blocking indefinitely when the producer is shutting down.

This created a corner case, where Sender thread gets blocked, if we had 
concurrent producerId reset and call to Sender thread close. The proposed fix 
is to check the forceClose flag in the loop inside maybeWaitForProducerId.

 

 

  was:
In KAFKA-5503, we have added a check 
(https://github.com/apache/kafka/pull/5881) for `running` flag in the loop 
inside maybeWaitForProducerId.  This is to handle concurrent call to Sender 
close(), while we attempt to get the ProducerId.

This created a corner case where sender thread gets blocked, if we had 
concurrent producerId reset and shutdown call. The proposed fix is to check the 
forceClose flag in the loop inside maybeWaitForProducerId.

 

 


> Handle ProducerId reset while closing Sender thread.
> 
>
> Key: KAFKA-8061
> URL: https://issues.apache.org/jira/browse/KAFKA-8061
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 2.1.1
>Reporter: Manikumar
>Assignee: Manikumar
>Priority: Major
>
> In KAFKA-5503, we have added a check 
> (https://github.com/apache/kafka/pull/5881) for `running` flag in the loop 
> inside maybeWaitForProducerId.  This is to handle concurrent call to Sender 
> close(), while we attempt to get the ProducerId.
> This avoids blocking indefinitely when the producer is shutting down.
> This created a corner case, where Sender thread gets blocked, if we had 
> concurrent producerId reset and call to Sender thread close. The proposed fix 
> is to check the forceClose flag in the loop inside maybeWaitForProducerId.
>  
>  
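
To make the proposed fix concrete, a minimal sketch of the pattern described 
above (an illustration only, not the actual Sender.maybeWaitForProducerId 
code; the flag and method names here are stand-ins): the retry loop has to 
observe both the graceful-shutdown flag and the force-close flag, otherwise a 
ProducerId reset that races with close() can leave the Sender thread waiting 
indefinitely.

{code:java}
object SenderLoopSketch {
  @volatile var running = true       // cleared by a graceful close()
  @volatile var forceClose = false   // set when the producer is force-closed
  @volatile var hasProducerId = false

  def maybeWaitForProducerId(): Unit = {
    // The loop exits as soon as either shutdown flag is observed; without the
    // forceClose check, a concurrent ProducerId reset plus a close() call can
    // keep this loop spinning and block Sender thread shutdown.
    while (!hasProducerId && running && !forceClose) {
      requestNewProducerId()
      Thread.sleep(100) // back off before retrying
    }
  }

  private def requestNewProducerId(): Unit = {
    // placeholder: issue an InitProducerId request and, on success,
    // set hasProducerId = true
  }
}
{code}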



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-8061) Handle ProducerId reset while closing Sender thread.

2019-03-07 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8061?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar updated KAFKA-8061:
-
Summary: Handle ProducerId reset while closing Sender thread.  (was: Use 
forceClose flag to check in the loop inside maybeWaitForProducerId.)

> Handle ProducerId reset while closing Sender thread.
> 
>
> Key: KAFKA-8061
> URL: https://issues.apache.org/jira/browse/KAFKA-8061
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 2.1.1
>Reporter: Manikumar
>Assignee: Manikumar
>Priority: Major
>
> In KAFKA-5503, we have added a check 
> (https://github.com/apache/kafka/pull/5881) for `running` flag in the loop 
> inside maybeWaitForProducerId.  This is to handle concurrent call to Sender 
> close(), while we attempt to get the ProducerId.
> This created a corner case where sender thread gets blocked, if we had 
> concurrent producerId reset and shutdown call. The proposed fix is to check 
> the forceClose flag in the loop inside maybeWaitForProducerId.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-8061) Handle concurrent ProducerId reset while closing Sender thread.

2019-03-07 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8061?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar updated KAFKA-8061:
-
Summary: Handle concurrent ProducerId reset while closing Sender thread.  
(was: Handle ProducerId reset while closing Sender thread.)

> Handle concurrent ProducerId reset while closing Sender thread.
> ---
>
> Key: KAFKA-8061
> URL: https://issues.apache.org/jira/browse/KAFKA-8061
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 2.1.1
>Reporter: Manikumar
>Assignee: Manikumar
>Priority: Major
>
> In KAFKA-5503, we have added a check 
> (https://github.com/apache/kafka/pull/5881) for `running` flag in the loop 
> inside maybeWaitForProducerId.  This is to handle concurrent call to Sender 
> close(), while we attempt to get the ProducerId.
> This avoids blocking indefinitely when the producer is shutting down.
> This created a corner case, where Sender thread gets blocked, if we had 
> concurrent producerId reset and call to Sender thread close. The proposed fix 
> is to check the forceClose flag in the loop inside maybeWaitForProducerId.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-8061) Handle concurrent ProducerId reset and call to Sender thread shutdown

2019-03-07 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8061?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar updated KAFKA-8061:
-
Summary: Handle concurrent ProducerId reset and call to Sender thread 
shutdown  (was: Handle concurrent ProducerId reset while closing Sender thread.)

> Handle concurrent ProducerId reset and call to Sender thread shutdown
> -
>
> Key: KAFKA-8061
> URL: https://issues.apache.org/jira/browse/KAFKA-8061
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 2.1.1
>Reporter: Manikumar
>Assignee: Manikumar
>Priority: Major
>
> In KAFKA-5503, we have added a check 
> (https://github.com/apache/kafka/pull/5881) for `running` flag in the loop 
> inside maybeWaitForProducerId.  This is to handle concurrent call to Sender 
> close(), while we attempt to get the ProducerId.
> This avoids blocking indefinitely when the producer is shutting down.
> This created a corner case, where Sender thread gets blocked, if we had 
> concurrent producerId reset and call to Sender thread close. The proposed fix 
> is to check the forceClose flag in the loop inside maybeWaitForProducerId.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-8022) Flaky Test RequestQuotaTest#testExemptRequestTime

2019-03-07 Thread Jun Rao (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8022?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao resolved KAFKA-8022.

   Resolution: Fixed
Fix Version/s: (was: 0.11.0.4)
   2.2.1
   2.3.0

The PR is merged. Closing it for now.

> Flaky Test RequestQuotaTest#testExemptRequestTime
> -
>
> Key: KAFKA-8022
> URL: https://issues.apache.org/jira/browse/KAFKA-8022
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 0.11.0.3
>Reporter: Matthias J. Sax
>Assignee: Jun Rao
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0, 2.2.1
>
>
> [https://builds.apache.org/blue/organizations/jenkins/kafka-trunk-jdk11/detail/kafka-trunk-jdk11/328/tests]
> {quote}kafka.zookeeper.ZooKeeperClientTimeoutException: Timed out waiting for 
> connection while in state: CONNECTING
> at 
> kafka.zookeeper.ZooKeeperClient.$anonfun$waitUntilConnected$3(ZooKeeperClient.scala:242)
> at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
> at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:253)
> at 
> kafka.zookeeper.ZooKeeperClient.waitUntilConnected(ZooKeeperClient.scala:238)
> at kafka.zookeeper.ZooKeeperClient.(ZooKeeperClient.scala:96)
> at kafka.zk.KafkaZkClient$.apply(KafkaZkClient.scala:1825)
> at kafka.zk.ZooKeeperTestHarness.setUp(ZooKeeperTestHarness.scala:59)
> at 
> kafka.integration.KafkaServerTestHarness.setUp(KafkaServerTestHarness.scala:90)
> at kafka.api.IntegrationTestHarness.doSetup(IntegrationTestHarness.scala:81)
> at kafka.api.IntegrationTestHarness.setUp(IntegrationTestHarness.scala:73)
> at kafka.server.RequestQuotaTest.setUp(RequestQuotaTest.scala:81){quote}
> STDOUT:
> {quote}[2019-03-01 00:40:47,090] ERROR [KafkaApi-0] Error when handling 
> request: clientId=unauthorized-CONTROLLED_SHUTDOWN, correlationId=1, 
> api=CONTROLLED_SHUTDOWN, body=\{broker_id=0,broker_epoch=9223372036854775807} 
> (kafka.server.KafkaApis:76)
> org.apache.kafka.common.errors.ClusterAuthorizationException: Request 
> Request(processor=2, connectionId=127.0.0.1:37894-127.0.0.1:54838-2, 
> session=Session(User:Unauthorized,/127.0.0.1), 
> listenerName=ListenerName(PLAINTEXT), securityProtocol=PLAINTEXT, 
> buffer=null) is not authorized.
> [2019-03-01 00:40:47,090] ERROR [KafkaApi-0] Error when handling request: 
> clientId=unauthorized-STOP_REPLICA, correlationId=1, api=STOP_REPLICA, 
> body=\{controller_id=0,controller_epoch=2147483647,broker_epoch=9223372036854775807,delete_partitions=true,partitions=[{topic=topic-1,partition_ids=[0]}]}
>  (kafka.server.KafkaApis:76)
> org.apache.kafka.common.errors.ClusterAuthorizationException: Request 
> Request(processor=0, connectionId=127.0.0.1:37894-127.0.0.1:54822-1, 
> session=Session(User:Unauthorized,/127.0.0.1), 
> listenerName=ListenerName(PLAINTEXT), securityProtocol=PLAINTEXT, 
> buffer=null) is not authorized.
> [2019-03-01 00:40:47,091] ERROR [KafkaApi-0] Error when handling request: 
> clientId=unauthorized-UPDATE_METADATA, correlationId=1, api=UPDATE_METADATA, 
> body=\{controller_id=0,controller_epoch=2147483647,broker_epoch=9223372036854775807,topic_states=[{topic=topic-1,partition_states=[{partition=0,controller_epoch=2147483647,leader=0,leader_epoch=2147483647,isr=[0],zk_version=2,replicas=[0],offline_replicas=[]}]}],live_brokers=[\{id=0,end_points=[{port=0,host=localhost,listener_name=PLAINTEXT,security_protocol_type=0}],rack=null}]}
>  (kafka.server.KafkaApis:76)
> org.apache.kafka.common.errors.ClusterAuthorizationException: Request 
> Request(processor=1, connectionId=127.0.0.1:37894-127.0.0.1:54836-2, 
> session=Session(User:Unauthorized,/127.0.0.1), 
> listenerName=ListenerName(PLAINTEXT), securityProtocol=PLAINTEXT, 
> buffer=null) is not authorized.
> [2019-03-01 00:40:47,090] ERROR [KafkaApi-0] Error when handling request: 
> clientId=unauthorized-LEADER_AND_ISR, correlationId=1, api=LEADER_AND_ISR, 
> body=\{controller_id=0,controller_epoch=2147483647,broker_epoch=9223372036854775807,topic_states=[{topic=topic-1,partition_states=[{partition=0,controller_epoch=2147483647,leader=0,leader_epoch=2147483647,isr=[0],zk_version=2,replicas=[0],is_new=true}]}],live_leaders=[\{id=0,host=localhost,port=0}]}
>  (kafka.server.KafkaApis:76)
> org.apache.kafka.common.errors.ClusterAuthorizationException: Request 
> Request(processor=0, connectionId=127.0.0.1:37894-127.0.0.1:54834-2, 
> session=Session(User:Unauthorized,/127.0.0.1), 
> listenerName=ListenerName(PLAINTEXT), securityProtocol=PLAINTEXT, 
> buffer=null) is not authorized.
> [2019-03-01 00:40:47,106] ERROR [KafkaApi-0] Error when handling request: 
> clientId=unauthorized-WRITE_TXN_MARKERS, correlationId=1, 
> api=WRITE_TXN_MARKERS, body=\{transaction_markers=[]} 
> (kafka.server.KafkaApi

[jira] [Resolved] (KAFKA-7977) Flaky Test ReassignPartitionsClusterTest#shouldOnlyThrottleMovingReplicas

2019-03-07 Thread Jun Rao (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7977?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao resolved KAFKA-7977.

Resolution: Fixed

The PR in KAFKA-8018 is merged, which should reduce the chance for ZK session 
expiration. Closing this issue for now. If the same issue occurs again, feel 
free to reopen the ticket. 

> Flaky Test ReassignPartitionsClusterTest#shouldOnlyThrottleMovingReplicas
> -
>
> Key: KAFKA-7977
> URL: https://issues.apache.org/jira/browse/KAFKA-7977
> Project: Kafka
>  Issue Type: Bug
>  Components: admin, unit tests
>Affects Versions: 2.2.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0, 2.2.1
>
>
> To get stable nightly builds for `2.2` release, I create tickets for all 
> observed test failures.
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/25/]
> {quote}org.apache.zookeeper.KeeperException$SessionExpiredException: 
> KeeperErrorCode = Session expired for /brokers/topics/topic1 at 
> org.apache.zookeeper.KeeperException.create(KeeperException.java:130) at 
> org.apache.zookeeper.KeeperException.create(KeeperException.java:54) at 
> kafka.zookeeper.AsyncResponse.resultException(ZooKeeperClient.scala:534) at 
> kafka.zk.KafkaZkClient.$anonfun$getReplicaAssignmentForTopics$2(KafkaZkClient.scala:579)
>  at 
> scala.collection.TraversableLike.$anonfun$flatMap$1(TraversableLike.scala:244)
>  at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62) 
> at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55) 
> at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49) at 
> scala.collection.TraversableLike.flatMap(TraversableLike.scala:244) at 
> scala.collection.TraversableLike.flatMap$(TraversableLike.scala:241) at 
> scala.collection.AbstractTraversable.flatMap(Traversable.scala:108) at 
> kafka.zk.KafkaZkClient.getReplicaAssignmentForTopics(KafkaZkClient.scala:574) 
> at 
> kafka.admin.ReassignPartitionsCommand$.parseAndValidate(ReassignPartitionsCommand.scala:338)
>  at 
> kafka.admin.ReassignPartitionsCommand$.executeAssignment(ReassignPartitionsCommand.scala:209)
>  at 
> kafka.admin.ReassignPartitionsClusterTest.shouldOnlyThrottleMovingReplicas(ReassignPartitionsClusterTest.scala:343){quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-7965) Flaky Test ConsumerBounceTest#testRollingBrokerRestartsWithSmallerMaxGroupSizeConfigDisruptsBigGroup

2019-03-07 Thread Matthias J. Sax (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16787022#comment-16787022
 ] 

Matthias J. Sax commented on KAFKA-7965:


One more "Boxed Error": 
[https://builds.apache.org/job/kafka-pr-jdk11-scala2.12/2956/testReport/junit/kafka.api/ConsumerBounceTest/testRollingBrokerRestartsWithSmallerMaxGroupSizeConfigDisruptsBigGroup/]

> Flaky Test 
> ConsumerBounceTest#testRollingBrokerRestartsWithSmallerMaxGroupSizeConfigDisruptsBigGroup
> 
>
> Key: KAFKA-7965
> URL: https://issues.apache.org/jira/browse/KAFKA-7965
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, consumer, unit tests
>Affects Versions: 2.2.0, 2.3.0
>Reporter: Matthias J. Sax
>Assignee: Stanislav Kozlovski
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0, 2.2.1
>
>
> To get stable nightly builds for `2.2` release, I create tickets for all 
> observed test failures.
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/21/]
> {quote}java.lang.AssertionError: Received 0, expected at least 68 at 
> org.junit.Assert.fail(Assert.java:88) at 
> org.junit.Assert.assertTrue(Assert.java:41) at 
> kafka.api.ConsumerBounceTest.receiveAndCommit(ConsumerBounceTest.scala:557) 
> at 
> kafka.api.ConsumerBounceTest.$anonfun$testRollingBrokerRestartsWithSmallerMaxGroupSizeConfigDisruptsBigGroup$1(ConsumerBounceTest.scala:320)
>  at 
> kafka.api.ConsumerBounceTest.$anonfun$testRollingBrokerRestartsWithSmallerMaxGroupSizeConfigDisruptsBigGroup$1$adapted(ConsumerBounceTest.scala:319)
>  at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62) 
> at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55) 
> at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49) at 
> kafka.api.ConsumerBounceTest.testRollingBrokerRestartsWithSmallerMaxGroupSizeConfigDisruptsBigGroup(ConsumerBounceTest.scala:319){quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-4332) kafka.api.UserQuotaTest.testThrottledProducerConsumer transient unit test failure

2019-03-07 Thread Matthias J. Sax (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-4332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16787032#comment-16787032
 ] 

Matthias J. Sax commented on KAFKA-4332:


Failed again: 
[https://builds.apache.org/job/kafka-pr-jdk8-scala2.11/20068/testReport/junit/kafka.api/UserQuotaTest/testThrottledProducerConsumer/]
{quote}java.lang.AssertionError: Client with id=QuotasTestProducer-1 should 
have been throttled at org.junit.Assert.fail(Assert.java:89) at 
org.junit.Assert.assertTrue(Assert.java:42) at 
kafka.api.QuotaTestClients.verifyThrottleTimeMetric(BaseQuotaTest.scala:229) at 
kafka.api.QuotaTestClients.verifyProduceThrottle(BaseQuotaTest.scala:215) at 
kafka.api.BaseQuotaTest.testThrottledProducerConsumer(BaseQuotaTest.scala:82){quote}
STDOUT
{quote}[2019-03-07 02:57:47,580] WARN SASL configuration failed: 
javax.security.auth.login.LoginException: No JAAS configuration section named 
'Client' was found in specified JAAS configuration file: 
'/tmp/kafka5530647155515717240.tmp'. Will continue connection to Zookeeper 
server without SASL authentication, if Zookeeper server allows it. 
(org.apache.zookeeper.ClientCnxn:1011) [2019-03-07 02:57:47,584] ERROR 
[ZooKeeperClient] Auth failed. (kafka.zookeeper.ZooKeeperClient:74) [2019-03-07 
02:57:47,831] WARN SASL configuration failed: 
javax.security.auth.login.LoginException: No JAAS configuration section named 
'Client' was found in specified JAAS configuration file: 
'/tmp/kafka5530647155515717240.tmp'. Will continue connection to Zookeeper 
server without SASL authentication, if Zookeeper server allows it. 
(org.apache.zookeeper.ClientCnxn:1011) [2019-03-07 02:57:47,836] ERROR 
[ZooKeeperClient] Auth failed. (kafka.zookeeper.ZooKeeperClient:74) Debug is 
true storeKey true useTicketCache false useKeyTab true doNotPrompt false 
ticketCache is null isInitiator true KeyTab is 
/tmp/kafka2946892511966406188.tmp refreshKrb5Config is false principal is 
kafka/localh...@example.com tryFirstPass is false useFirstPass is false 
storePass is false clearPass is false principal is kafka/localh...@example.com 
Will use keytab Commit Succeeded [2019-03-07 02:57:48,186] WARN SASL 
configuration failed: javax.security.auth.login.LoginException: No JAAS 
configuration section named 'Client' was found in specified JAAS configuration 
file: '/tmp/kafka5530647155515717240.tmp'. Will continue connection to 
Zookeeper server without SASL authentication, if Zookeeper server allows it. 
(org.apache.zookeeper.ClientCnxn:1011) [2019-03-07 02:57:48,194] ERROR 
[ZooKeeperClient] Auth failed. (kafka.zookeeper.ZooKeeperClient:74) Debug is 
true storeKey true useTicketCache false useKeyTab true doNotPrompt false 
ticketCache is null isInitiator true KeyTab is 
/tmp/kafka4020757312096439524.tmp refreshKrb5Config is false principal is 
clie...@example.com tryFirstPass is false useFirstPass is false storePass is 
false clearPass is false principal is clie...@example.com Will use keytab 
Commit Succeeded [2019-03-07 02:58:19,504] WARN SASL configuration failed: 
javax.security.auth.login.LoginException: No JAAS configuration section named 
'Client' was found in specified JAAS configuration file: 
'/tmp/kafka5046880890559957539.tmp'. Will continue connection to Zookeeper 
server without SASL authentication, if Zookeeper server allows it. 
(org.apache.zookeeper.ClientCnxn:1011) [2019-03-07 02:58:19,507] ERROR 
[ZooKeeperClient] Auth failed. (kafka.zookeeper.ZooKeeperClient:74) [2019-03-07 
02:58:19,856] WARN SASL configuration failed: 
javax.security.auth.login.LoginException: No JAAS configuration section named 
'Client' was found in specified JAAS configuration file: 
'/tmp/kafka5046880890559957539.tmp'. Will continue connection to Zookeeper 
server without SASL authentication, if Zookeeper server allows it. 
(org.apache.zookeeper.ClientCnxn:1011) [2019-03-07 02:58:19,857] ERROR 
[ZooKeeperClient] Auth failed. (kafka.zookeeper.ZooKeeperClient:74) Debug is 
true storeKey true useTicketCache false useKeyTab true doNotPrompt false 
ticketCache is null isInitiator true KeyTab is 
/tmp/kafka3985288658603216278.tmp refreshKrb5Config is false principal is 
kafka/localh...@example.com tryFirstPass is false useFirstPass is false 
storePass is false clearPass is false principal is kafka/localh...@example.com 
Will use keytab Commit Succeeded [2019-03-07 02:58:20,382] WARN SASL 
configuration failed: javax.security.auth.login.LoginException: No JAAS 
configuration section named 'Client' was found in specified JAAS configuration 
file: '/tmp/kafka5046880890559957539.tmp'. Will continue connection to 
Zookeeper server without SASL authentication, if Zookeeper server allows it. 
(org.apache.zookeeper.ClientCnxn:1011) [2019-03-07 02:58:20,388] ERROR 
[ZooKeeperClient] Auth failed. (kafka.zookeeper.ZooKeeperClient:74) Debug is 
true storeKey true useTicketCache false useKeyTab true doNotPrompt false 
ticketCache is

[jira] [Commented] (KAFKA-8032) Flaky Test UserQuotaTest#testQuotaOverrideDelete

2019-03-07 Thread Matthias J. Sax (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16787035#comment-16787035
 ] 

Matthias J. Sax commented on KAFKA-8032:


One more: 
[https://builds.apache.org/job/kafka-pr-jdk8-scala2.11/20068/testReport/junit/kafka.api/UserQuotaTest/testQuotaOverrideDelete/]

> Flaky Test UserQuotaTest#testQuotaOverrideDelete
> 
>
> Key: KAFKA-8032
> URL: https://issues.apache.org/jira/browse/KAFKA-8032
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.3.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0
>
>
> [https://builds.apache.org/job/kafka-pr-jdk11-scala2.12/2830/testReport/junit/kafka.api/UserQuotaTest/testQuotaOverrideDelete/]
> {quote}java.lang.AssertionError: Client with id=QuotasTestProducer-1 should 
> have been throttled at org.junit.Assert.fail(Assert.java:89) at 
> org.junit.Assert.assertTrue(Assert.java:42) at 
> kafka.api.QuotaTestClients.verifyThrottleTimeMetric(BaseQuotaTest.scala:229) 
> at kafka.api.QuotaTestClients.verifyProduceThrottle(BaseQuotaTest.scala:215) 
> at 
> kafka.api.BaseQuotaTest.testQuotaOverrideDelete(BaseQuotaTest.scala:124){quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-8063) Flaky Test WorkerTest#testConverterOverrides

2019-03-07 Thread Matthias J. Sax (JIRA)
Matthias J. Sax created KAFKA-8063:
--

 Summary: Flaky Test WorkerTest#testConverterOverrides
 Key: KAFKA-8063
 URL: https://issues.apache.org/jira/browse/KAFKA-8063
 Project: Kafka
  Issue Type: Bug
  Components: KafkaConnect, unit tests
Affects Versions: 2.3.0
Reporter: Matthias J. Sax
 Fix For: 2.3.0


[https://builds.apache.org/job/kafka-pr-jdk8-scala2.11/20068/testReport/junit/org.apache.kafka.connect.runtime/WorkerTest/testConverterOverrides/]
{quote}java.lang.AssertionError: Expectation failure on verify: 
WorkerSourceTask.run(): expected: 1, actual: 1 at 
org.easymock.internal.MocksControl.verify(MocksControl.java:242){quote}
STDOUT
{quote}[2019-03-07 02:28:25,482] (Test worker) ERROR Failed to start connector 
test-connector (org.apache.kafka.connect.runtime.Worker:234) 
org.apache.kafka.connect.errors.ConnectException: Failed to find Connector at 
org.easymock.internal.MockInvocationHandler.invoke(MockInvocationHandler.java:46)
 at 
org.easymock.internal.ObjectMethodsFilter.invoke(ObjectMethodsFilter.java:101) 
at 
org.easymock.internal.ClassProxyFactory$MockMethodInterceptor.intercept(ClassProxyFactory.java:97)
 at 
org.apache.kafka.connect.runtime.isolation.Plugins$$EnhancerByCGLIB$$205db954.newConnector()
 at org.apache.kafka.connect.runtime.Worker.startConnector(Worker.java:226) at 
org.apache.kafka.connect.runtime.WorkerTest.testStartConnectorFailure(WorkerTest.java:256)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:498) at 
org.junit.internal.runners.TestMethod.invoke(TestMethod.java:68) at 
org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl$PowerMockJUnit44MethodRunner.runTestMethod(PowerMockJUnit44RunnerDelegateImpl.java:326)
 at org.junit.internal.runners.MethodRoadie$2.run(MethodRoadie.java:89) at 
org.junit.internal.runners.MethodRoadie.runBeforesThenTestThenAfters(MethodRoadie.java:97)
 at 
org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl$PowerMockJUnit44MethodRunner.executeTest(PowerMockJUnit44RunnerDelegateImpl.java:310)
 at 
org.powermock.modules.junit4.internal.impl.PowerMockJUnit47RunnerDelegateImpl$PowerMockJUnit47MethodRunner.executeTestInSuper(PowerMockJUnit47RunnerDelegateImpl.java:131)
 at 
org.powermock.modules.junit4.internal.impl.PowerMockJUnit47RunnerDelegateImpl$PowerMockJUnit47MethodRunner.access$100(PowerMockJUnit47RunnerDelegateImpl.java:59)
 at 
org.powermock.modules.junit4.internal.impl.PowerMockJUnit47RunnerDelegateImpl$PowerMockJUnit47MethodRunner$TestExecutorStatement.evaluate(PowerMockJUnit47RunnerDelegateImpl.java:147)
 at 
org.powermock.modules.junit4.internal.impl.PowerMockJUnit47RunnerDelegateImpl$PowerMockJUnit47MethodRunner.evaluateStatement(PowerMockJUnit47RunnerDelegateImpl.java:107)
 at 
org.powermock.modules.junit4.internal.impl.PowerMockJUnit47RunnerDelegateImpl$PowerMockJUnit47MethodRunner.executeTest(PowerMockJUnit47RunnerDelegateImpl.java:82)
 at 
org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl$PowerMockJUnit44MethodRunner.runBeforesThenTestThenAfters(PowerMockJUnit44RunnerDelegateImpl.java:298)
 at org.junit.internal.runners.MethodRoadie.runTest(MethodRoadie.java:87) at 
org.junit.internal.runners.MethodRoadie.run(MethodRoadie.java:50) at 
org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl.invokeTestMethod(PowerMockJUnit44RunnerDelegateImpl.java:218)
 at 
org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl.runMethods(PowerMockJUnit44RunnerDelegateImpl.java:160)
 at 
org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl$1.run(PowerMockJUnit44RunnerDelegateImpl.java:134)
 at org.junit.internal.runners.ClassRoadie.runUnprotected(ClassRoadie.java:34) 
at org.junit.internal.runners.ClassRoadie.runProtected(ClassRoadie.java:44) at 
org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl.run(PowerMockJUnit44RunnerDelegateImpl.java:136)
 at 
org.powermock.modules.junit4.common.internal.impl.JUnit4TestSuiteChunkerImpl.run(JUnit4TestSuiteChunkerImpl.java:117)
 at 
org.powermock.modules.junit4.common.internal.impl.AbstractCommonPowerMockRunner.run(AbstractCommonPowerMockRunner.java:57)
 at org.powermock.modules.junit4.PowerMockRunner.run(PowerMockRunner.java:59) 
at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.runTestClass(JUnitTestClassExecutor.java:110)
 at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:58)
 at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:38)
 at 
org.

[jira] [Commented] (KAFKA-7288) Transient failure in SslSelectorTest.testCloseConnectionInClosingState

2019-03-07 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16787050#comment-16787050
 ] 

ASF GitHub Bot commented on KAFKA-7288:
---

rajinisivaram commented on pull request #6390: KAFKA-7288 - Make sure no bytes 
buffered when relying on idle timeout in channel close test
URL: https://github.com/apache/kafka/pull/6390
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Transient failure in SslSelectorTest.testCloseConnectionInClosingState
> --
>
> Key: KAFKA-7288
> URL: https://issues.apache.org/jira/browse/KAFKA-7288
> Project: Kafka
>  Issue Type: Bug
>  Components: unit tests
>Affects Versions: 2.3.0
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.1.0, 2.3.0
>
>
> Noticed this failure in SslSelectorTest.testCloseConnectionInClosingState a 
> few times in unit tests in Jenkins:
> {quote}
> java.lang.AssertionError: Channel not expired expected null, but 
> was: at 
> org.junit.Assert.fail(Assert.java:88) at 
> org.junit.Assert.failNotNull(Assert.java:755) at 
> org.junit.Assert.assertNull(Assert.java:737) at 
> org.apache.kafka.common.network.SelectorTest.testCloseConnectionInClosingState(SelectorTest.java:341)
> {quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-7288) Transient failure in SslSelectorTest.testCloseConnectionInClosingState

2019-03-07 Thread Rajini Sivaram (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7288?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajini Sivaram resolved KAFKA-7288.
---
   Resolution: Fixed
Fix Version/s: (was: 2.1.0)
   2.2.1

> Transient failure in SslSelectorTest.testCloseConnectionInClosingState
> --
>
> Key: KAFKA-7288
> URL: https://issues.apache.org/jira/browse/KAFKA-7288
> Project: Kafka
>  Issue Type: Bug
>  Components: unit tests
>Affects Versions: 2.3.0
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0, 2.2.1
>
>
> Noticed this failure in SslSelectorTest.testCloseConnectionInClosingState a 
> few times in unit tests in Jenkins:
> {quote}
> java.lang.AssertionError: Channel not expired expected null, but 
> was: at 
> org.junit.Assert.fail(Assert.java:88) at 
> org.junit.Assert.failNotNull(Assert.java:755) at 
> org.junit.Assert.assertNull(Assert.java:737) at 
> org.apache.kafka.common.network.SelectorTest.testCloseConnectionInClosingState(SelectorTest.java:341)
> {quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-7939) Flaky Test KafkaAdminClientTest#testCreateTopicsRetryBackoff

2019-03-07 Thread Matthias J. Sax (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7939?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16787122#comment-16787122
 ] 

Matthias J. Sax commented on KAFKA-7939:


One more: 
[https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/52/testReport/junit/org.apache.kafka.clients.admin/KafkaAdminClientTest/testCreateTopicsRetryBackoff/]

> Flaky Test KafkaAdminClientTest#testCreateTopicsRetryBackoff
> 
>
> Key: KAFKA-7939
> URL: https://issues.apache.org/jira/browse/KAFKA-7939
> Project: Kafka
>  Issue Type: Bug
>  Components: admin, unit tests
>Affects Versions: 2.2.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0, 2.2.1
>
>
> To get stable nightly builds for `2.2` release, I create tickets for all 
> observed test failures.
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/12/]
> {quote}org.junit.runners.model.TestTimedOutException: test timed out after 
> 12 milliseconds at java.lang.Object.wait(Native Method) at 
> java.lang.Object.wait(Object.java:502) at 
> org.apache.kafka.common.internals.KafkaFutureImpl$SingleWaiter.await(KafkaFutureImpl.java:92)
>  at 
> org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:260)
>  at 
> org.apache.kafka.clients.admin.KafkaAdminClientTest.testCreateTopicsRetryBackoff(KafkaAdminClientTest.java:347){quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-8064) Flaky Test DeleteTopicTest #testRecreateTopicAfterDeletion

2019-03-07 Thread Matthias J. Sax (JIRA)
Matthias J. Sax created KAFKA-8064:
--

 Summary: Flaky Test DeleteTopicTest #testRecreateTopicAfterDeletion
 Key: KAFKA-8064
 URL: https://issues.apache.org/jira/browse/KAFKA-8064
 Project: Kafka
  Issue Type: Bug
  Components: admin, unit tests
Affects Versions: 2.2.0
Reporter: Matthias J. Sax
 Fix For: 2.3.0, 2.2.1


[https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/54/testReport/junit/kafka.admin/DeleteTopicTest/testRecreateTopicAfterDeletion/]
{quote}java.lang.AssertionError: Admin path /admin/delete_topic/test path not 
deleted even after a replica is restarted at 
kafka.utils.TestUtils$.fail(TestUtils.scala:381) at 
kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:791) at 
kafka.utils.TestUtils$.verifyTopicDeletion(TestUtils.scala:1056) at 
kafka.admin.DeleteTopicTest.testRecreateTopicAfterDeletion(DeleteTopicTest.scala:283){quote}
STDOUT
{quote}[2019-03-07 16:05:05,661] ERROR [ReplicaFetcher replicaId=1, leaderId=0, 
fetcherId=0] Error for partition test-0 at offset 0 
(kafka.server.ReplicaFetcherThread:76) 
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
does not host this topic-partition. [2019-03-07 16:05:26,122] WARN Unable to 
read additional data from client sessionid 0x1006f1dd1a60003, likely client has 
closed socket (org.apache.zookeeper.server.NIOServerCnxn:376) [2019-03-07 
16:05:36,511] ERROR [ReplicaFetcher replicaId=1, leaderId=0, fetcherId=0] Error 
for partition test-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
does not host this topic-partition. [2019-03-07 16:05:36,512] ERROR 
[ReplicaFetcher replicaId=2, leaderId=0, fetcherId=0] Error for partition 
test-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
does not host this topic-partition. [2019-03-07 16:05:43,418] ERROR 
[ReplicaFetcher replicaId=2, leaderId=0, fetcherId=0] Error for partition 
test-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
does not host this topic-partition. [2019-03-07 16:05:43,422] ERROR 
[ReplicaFetcher replicaId=1, leaderId=0, fetcherId=0] Error for partition 
test-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
does not host this topic-partition. [2019-03-07 16:05:47,649] ERROR 
[ReplicaFetcher replicaId=1, leaderId=0, fetcherId=0] Error for partition 
test-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
does not host this topic-partition. [2019-03-07 16:05:47,649] ERROR 
[ReplicaFetcher replicaId=2, leaderId=0, fetcherId=0] Error for partition 
test-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
does not host this topic-partition. [2019-03-07 16:05:51,668] ERROR 
[ReplicaFetcher replicaId=1, leaderId=0, fetcherId=0] Error for partition 
test-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
does not host this topic-partition. WARNING: If partitions are increased for a 
topic that has a key, the partition logic or ordering of the messages will be 
affected Adding partitions succeeded! [2019-03-07 16:05:56,135] WARN Unable to 
read additional data from client sessionid 0x1006f1e2abb0006, likely client has 
closed socket (org.apache.zookeeper.server.NIOServerCnxn:376) [2019-03-07 
16:06:00,286] ERROR [ReplicaFetcher replicaId=1, leaderId=0, fetcherId=0] Error 
for partition test-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
does not host this topic-partition. [2019-03-07 16:06:00,357] ERROR 
[ReplicaFetcher replicaId=2, leaderId=0, fetcherId=0] Error for partition 
test-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
does not host this topic-partition. [2019-03-07 16:06:01,289] ERROR 
[ReplicaFetcher replicaId=1, leaderId=0, fetcherId=0] Error for partition 
test-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
does not host this topic-partition. [2019-03-07 16:06:07,517] ERROR 
[ReplicaFetcher replicaId=2, leaderId=0, fetcherId=0] Error for partition 
test-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
does not host this topic-partition. [2019-03-07 16:06:07,519] ERROR 
[ReplicaFetcher replicaId=1, leaderId=0, fetcher

[jira] [Created] (KAFKA-8065) Forwarding modified timestamps does not reset timestamp correctly

2019-03-07 Thread Matthias J. Sax (JIRA)
Matthias J. Sax created KAFKA-8065:
--

 Summary: Forwarding modified timestamps does not reset timestamp 
correctly
 Key: KAFKA-8065
 URL: https://issues.apache.org/jira/browse/KAFKA-8065
 Project: Kafka
  Issue Type: Bug
  Components: streams
Affects Versions: 2.1.1, 2.0.1, 2.2.0
Reporter: Matthias J. Sax


Using the Processor API, users can set a new output record timestamp via 
`context.forward(..., To.all().withTimestamp(...))`. However, after the 
forward() call returns, the timestamp is not reset to the original input record 
timestamp, so a subsequent call to `context.forward(...)` without a `To` 
parameter will also use the previously set output record timestamp.
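
For illustration, a minimal sketch of a processor that runs into this (the class name here is hypothetical and not part of the report):
{code:java}
import org.apache.kafka.streams.processor.AbstractProcessor;
import org.apache.kafka.streams.processor.To;

// Hypothetical processor, only to illustrate the reported behavior.
public class DualForwardProcessor extends AbstractProcessor<String, String> {
    @Override
    public void process(String key, String value) {
        // First forward: explicitly overrides the output record timestamp.
        context().forward(key, value + "-shifted",
            To.all().withTimestamp(context().timestamp() + 1000L));

        // Second forward: no To given, so it should carry the original input
        // record timestamp. With this bug it still carries the +1000 ms
        // timestamp set by the previous forward() call.
        context().forward(key, value);
    }
}
{code}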



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (KAFKA-8065) Forwarding modified timestamps does not reset timestamp correctly

2019-03-07 Thread Matthias J. Sax (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8065?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax reassigned KAFKA-8065:
--

Assignee: Matthias J. Sax

> Forwarding modified timestamps does not reset timestamp correctly
> -
>
> Key: KAFKA-8065
> URL: https://issues.apache.org/jira/browse/KAFKA-8065
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.0.1, 2.2.0, 2.1.1
>Reporter: Matthias J. Sax
>Assignee: Matthias J. Sax
>Priority: Major
>
> Using the Processor API, users can set a new output record timestamp via 
> `context.forward(..., To.all().withTimestamp(...))`. However, after the 
> forward() call returns, the timestamp is not reset to the original input 
> record timestamp, so a subsequent call to `context.forward(...)` without a 
> `To` parameter will also use the previously set output record timestamp.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-8065) Forwarding modified timestamps does not reset timestamp correctly

2019-03-07 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8065?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16787151#comment-16787151
 ] 

ASF GitHub Bot commented on KAFKA-8065:
---

mjsax commented on pull request #6393: KAFKA-8065: restore original input 
record timestamp in forward()
URL: https://github.com/apache/kafka/pull/6393
 
 
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Forwarding modified timestamps does not reset timestamp correctly
> -
>
> Key: KAFKA-8065
> URL: https://issues.apache.org/jira/browse/KAFKA-8065
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.0.1, 2.2.0, 2.1.1
>Reporter: Matthias J. Sax
>Assignee: Matthias J. Sax
>Priority: Major
>
> Using the Processor API, users can set a new output record timestamp via 
> `context.forward(..., To.all().withTimestamp(...))`. However, after the 
> forward() call returns, the timestamp is not reset to the original input 
> record timestamp, so a subsequent call to `context.forward(...)` without a 
> `To` parameter will also use the previously set output record timestamp.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-3042) updateIsr should stop after failed several times due to zkVersion issue

2019-03-07 Thread Kessiler Rodrigues (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-3042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16787211#comment-16787211
 ] 

Kessiler Rodrigues commented on KAFKA-3042:
---

I also hit this issue. Do we have an ETA for this fix?

> updateIsr should stop after failed several times due to zkVersion issue
> ---
>
> Key: KAFKA-3042
> URL: https://issues.apache.org/jira/browse/KAFKA-3042
> Project: Kafka
>  Issue Type: Bug
>  Components: controller
>Affects Versions: 0.10.0.0
> Environment: jdk 1.7
> centos 6.4
>Reporter: Jiahongchao
>Assignee: Dong Lin
>Priority: Major
>  Labels: reliability
> Attachments: controller.log, server.log.2016-03-23-01, 
> state-change.log
>
>
> Sometimes one broker may repeatedly log
> "Cached zkVersion 54 not equal to that in zookeeper, skip updating ISR"
> I think this is because the broker considers itself the leader when in fact 
> it's a follower.
> So after several failed tries, it needs to find out who the leader is.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-8066) ReplicaFetcherThread fails to startup because of failing to register the metric.

2019-03-07 Thread Zhanxiang (Patrick) Huang (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8066?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhanxiang (Patrick) Huang updated KAFKA-8066:
-
Affects Version/s: 2.1.2
   2.0.2
   2.2.0
   1.1.2
   1.1.0
   1.1.1
   2.0.0
   2.0.1
   2.1.0
   2.1.1

> ReplicaFetcherThread fails to startup because of failing to register the 
> metric.
> 
>
> Key: KAFKA-8066
> URL: https://issues.apache.org/jira/browse/KAFKA-8066
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 1.1.0, 1.1.1, 1.1.2, 2.0.0, 2.0.1, 2.1.0, 2.2.0, 2.1.1, 
> 2.0.2, 2.1.2
>Reporter: Zhanxiang (Patrick) Huang
>Assignee: Zhanxiang (Patrick) Huang
>Priority: Major
>
> After KAFKA-6051, we close leaderEndPoint in replica fetcher thread 
> initiateShutdown to try to preempt the in-progress fetch request and 
> accelerate replica fetcher thread shutdown. However, the selector may fail to 
> close the channel and throw an Exception when the replica fetcher thread is 
> still actively fetching. In this case, the sensors will not be cleaned up: if 
> `close(id)` throws an exception in `Selector.close()`, then `sensors.close()` 
> will not be called and thus the sensors will not get unregistered (see the 
> code below).
> {code:java}
> public void close() {
>     List<String> connections = new ArrayList<>(channels.keySet());
>     for (String id : connections)
>         close(id);
>     try {
>         this.nioSelector.close();
>     } catch (IOException | SecurityException e) {
>         log.error("Exception closing nioSelector:", e);
>     }
>     sensors.close();
>     channelBuilder.close();
> }
> {code}
> If this happens, when the broker wants to start up a ReplicaFetcherThread 
> with the same fetcher id to the same destination broker again (e.g. due to 
> leadership changes or newly created partitions), the ReplicaFetcherThread 
> will fail to start up, because the selector will throw an 
> IllegalArgumentException if a metric with the same name already exists:
> {noformat}
> 2019/02/27 10:24:26.938 ERROR [KafkaApis] [kafka-request-handler-6] 
> [kafka-server] [] [KafkaApi-38031] Error when handling request {}
> java.lang.IllegalArgumentException: A metric named 'MetricName 
> [name=connection-count, group=replica-fetcher-metrics, description=The 
> current number of active connections., tags={broker-id=29712, fetcher-id=3}]' 
> already exists, can't register another one.
> at 
> org.apache.kafka.common.metrics.Metrics.registerMetric(Metrics.java:559) 
> ~[kafka-clients-2.0.0.66.jar:?]
> at 
> org.apache.kafka.common.metrics.Metrics.addMetric(Metrics.java:502) 
> ~[kafka-clients-2.0.0.66.jar:?]
> at 
> org.apache.kafka.common.metrics.Metrics.addMetric(Metrics.java:485) 
> ~[kafka-clients-2.0.0.66.jar:?]
> at 
> org.apache.kafka.common.metrics.Metrics.addMetric(Metrics.java:470) 
> ~[kafka-clients-2.0.0.66.jar:?]
> at 
> org.apache.kafka.common.network.Selector$SelectorMetrics.<init>(Selector.java:963)
>  ~[kafka-clients-2.0.0.66.jar:?]
> at org.apache.kafka.common.network.Selector.<init>(Selector.java:170) 
> ~[kafka-clients-2.0.0.66.jar:?]
> at org.apache.kafka.common.network.Selector.<init>(Selector.java:188) 
> ~[kafka-clients-2.0.0.66.jar:?]
> at 
> kafka.server.ReplicaFetcherBlockingSend.<init>(ReplicaFetcherBlockingSend.scala:61)
>  ~[kafka_2.11-2.0.0.66.jar:?]
> at 
> kafka.server.ReplicaFetcherThread$$anonfun$1.apply(ReplicaFetcherThread.scala:68)
>  ~[kafka_2.11-2.0.0.66.jar:?]
> at 
> kafka.server.ReplicaFetcherThread$$anonfun$1.apply(ReplicaFetcherThread.scala:68)
>  ~[kafka_2.11-2.0.0.66.jar:?]
> at scala.Option.getOrElse(Option.scala:121) 
> ~[scala-library-2.11.12.jar:?]
> at 
> kafka.server.ReplicaFetcherThread.<init>(ReplicaFetcherThread.scala:67) 
> ~[kafka_2.11-2.0.0.66.jar:?]
> at 
> kafka.server.ReplicaFetcherManager.createFetcherThread(ReplicaFetcherManager.scala:32)
>  ~[kafka_2.11-2.0.0.66.jar:?]
> at 
> kafka.server.AbstractFetcherManager.kafka$server$AbstractFetcherManager$$addAndStartFetcherThread$1(AbstractFetcherManager.scala:132)
>  ~[kafka_2.11-2.0.0.66.jar:?]
> at 
> kafka.server.AbstractFetcherManager$$anonfun$addFetcherForPartitions$2.apply(AbstractFetcherManager.scala:146)
>  ~[kafka_2.11-2.0.0.66.jar:?]
> at 
> kafka.server.AbstractFetcherManager$$anonfun$addFetcherForPartitions$2.apply(AbstractFetcherManager.scala:137)
>  ~[kafka_2.11-2.0.0.66.jar:?]
> at 
> scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableL

[jira] [Updated] (KAFKA-7647) Flaky test LogCleanerParameterizedIntegrationTest.testCleansCombinedCompactAndDeleteTopic

2019-03-07 Thread Matthias J. Sax (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7647?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-7647:
---
Priority: Critical  (was: Major)

> Flaky test 
> LogCleanerParameterizedIntegrationTest.testCleansCombinedCompactAndDeleteTopic
> -
>
> Key: KAFKA-7647
> URL: https://issues.apache.org/jira/browse/KAFKA-7647
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Dong Lin
>Priority: Critical
>
> {code}
> kafka.log.LogCleanerParameterizedIntegrationTest >
> testCleansCombinedCompactAndDeleteTopic[3] FAILED
>     java.lang.AssertionError: Contents of the map shouldn't change
> expected: (340,340), 5 -> (345,345), 10 -> (350,350), 14 ->
> (354,354), 1 -> (341,341), 6 -> (346,346), 9 -> (349,349), 13 -> (353,353),
> 2 -> (342,342), 17 -> (357,357), 12 -> (352,352), 7 -> (347,347), 3 ->
> (343,343), 18 -> (358,358), 16 -> (356,356), 11 -> (351,351), 8 ->
> (348,348), 19 -> (359,359), 4 -> (344,344), 15 -> (355,355))> but
> was: (340,340), 5 -> (345,345), 10 -> (350,350), 14 -> (354,354),
> 1 -> (341,341), 6 -> (346,346), 9 -> (349,349), 13 -> (353,353), 2 ->
> (342,342), 17 -> (357,357), 12 -> (352,352), 7 -> (347,347), 3 ->
> (343,343), 18 -> (358,358), 16 -> (356,356), 11 -> (351,351), 99 ->
> (299,299), 8 -> (348,348), 19 -> (359,359), 4 -> (344,344), 15 ->
> (355,355))>
>         at org.junit.Assert.fail(Assert.java:88)
>         at org.junit.Assert.failNotEquals(Assert.java:834)
>         at org.junit.Assert.assertEquals(Assert.java:118)
>         at
> kafka.log.LogCleanerParameterizedIntegrationTest.testCleansCombinedCompactAndDeleteTopic(LogCleanerParameterizedIntegrationTest.scala:129)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-7647) Flaky test LogCleanerParameterizedIntegrationTest.testCleansCombinedCompactAndDeleteTopic

2019-03-07 Thread Matthias J. Sax (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7647?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16787223#comment-16787223
 ] 

Matthias J. Sax commented on KAFKA-7647:


One more: 
[https://builds.apache.org/blue/organizations/jenkins/kafka-trunk-jdk8/detail/kafka-trunk-jdk8/3440/tests]

> Flaky test 
> LogCleanerParameterizedIntegrationTest.testCleansCombinedCompactAndDeleteTopic
> -
>
> Key: KAFKA-7647
> URL: https://issues.apache.org/jira/browse/KAFKA-7647
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Dong Lin
>Priority: Major
>
> {code}
> kafka.log.LogCleanerParameterizedIntegrationTest >
> testCleansCombinedCompactAndDeleteTopic[3] FAILED
>     java.lang.AssertionError: Contents of the map shouldn't change
> expected: (340,340), 5 -> (345,345), 10 -> (350,350), 14 ->
> (354,354), 1 -> (341,341), 6 -> (346,346), 9 -> (349,349), 13 -> (353,353),
> 2 -> (342,342), 17 -> (357,357), 12 -> (352,352), 7 -> (347,347), 3 ->
> (343,343), 18 -> (358,358), 16 -> (356,356), 11 -> (351,351), 8 ->
> (348,348), 19 -> (359,359), 4 -> (344,344), 15 -> (355,355))> but
> was: (340,340), 5 -> (345,345), 10 -> (350,350), 14 -> (354,354),
> 1 -> (341,341), 6 -> (346,346), 9 -> (349,349), 13 -> (353,353), 2 ->
> (342,342), 17 -> (357,357), 12 -> (352,352), 7 -> (347,347), 3 ->
> (343,343), 18 -> (358,358), 16 -> (356,356), 11 -> (351,351), 99 ->
> (299,299), 8 -> (348,348), 19 -> (359,359), 4 -> (344,344), 15 ->
> (355,355))>
>         at org.junit.Assert.fail(Assert.java:88)
>         at org.junit.Assert.failNotEquals(Assert.java:834)
>         at org.junit.Assert.assertEquals(Assert.java:118)
>         at
> kafka.log.LogCleanerParameterizedIntegrationTest.testCleansCombinedCompactAndDeleteTopic(LogCleanerParameterizedIntegrationTest.scala:129)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-7647) Flaky test LogCleanerParameterizedIntegrationTest.testCleansCombinedCompactAndDeleteTopic

2019-03-07 Thread Matthias J. Sax (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7647?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-7647:
---
Component/s: unit tests
 core

> Flaky test 
> LogCleanerParameterizedIntegrationTest.testCleansCombinedCompactAndDeleteTopic
> -
>
> Key: KAFKA-7647
> URL: https://issues.apache.org/jira/browse/KAFKA-7647
> Project: Kafka
>  Issue Type: Sub-task
>  Components: core, unit tests
>Affects Versions: 2.3.0
>Reporter: Dong Lin
>Priority: Critical
> Fix For: 2.3.0
>
>
> {code}
> kafka.log.LogCleanerParameterizedIntegrationTest >
> testCleansCombinedCompactAndDeleteTopic[3] FAILED
>     java.lang.AssertionError: Contents of the map shouldn't change
> expected: (340,340), 5 -> (345,345), 10 -> (350,350), 14 ->
> (354,354), 1 -> (341,341), 6 -> (346,346), 9 -> (349,349), 13 -> (353,353),
> 2 -> (342,342), 17 -> (357,357), 12 -> (352,352), 7 -> (347,347), 3 ->
> (343,343), 18 -> (358,358), 16 -> (356,356), 11 -> (351,351), 8 ->
> (348,348), 19 -> (359,359), 4 -> (344,344), 15 -> (355,355))> but
> was: (340,340), 5 -> (345,345), 10 -> (350,350), 14 -> (354,354),
> 1 -> (341,341), 6 -> (346,346), 9 -> (349,349), 13 -> (353,353), 2 ->
> (342,342), 17 -> (357,357), 12 -> (352,352), 7 -> (347,347), 3 ->
> (343,343), 18 -> (358,358), 16 -> (356,356), 11 -> (351,351), 99 ->
> (299,299), 8 -> (348,348), 19 -> (359,359), 4 -> (344,344), 15 ->
> (355,355))>
>         at org.junit.Assert.fail(Assert.java:88)
>         at org.junit.Assert.failNotEquals(Assert.java:834)
>         at org.junit.Assert.assertEquals(Assert.java:118)
>         at
> kafka.log.LogCleanerParameterizedIntegrationTest.testCleansCombinedCompactAndDeleteTopic(LogCleanerParameterizedIntegrationTest.scala:129)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-7647) Flaky test LogCleanerParameterizedIntegrationTest.testCleansCombinedCompactAndDeleteTopic

2019-03-07 Thread Matthias J. Sax (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7647?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-7647:
---
Labels: flaky-test  (was: )

> Flaky test 
> LogCleanerParameterizedIntegrationTest.testCleansCombinedCompactAndDeleteTopic
> -
>
> Key: KAFKA-7647
> URL: https://issues.apache.org/jira/browse/KAFKA-7647
> Project: Kafka
>  Issue Type: Sub-task
>  Components: core, unit tests
>Affects Versions: 2.3.0
>Reporter: Dong Lin
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0
>
>
> {code}
> kafka.log.LogCleanerParameterizedIntegrationTest >
> testCleansCombinedCompactAndDeleteTopic[3] FAILED
>     java.lang.AssertionError: Contents of the map shouldn't change
> expected: (340,340), 5 -> (345,345), 10 -> (350,350), 14 ->
> (354,354), 1 -> (341,341), 6 -> (346,346), 9 -> (349,349), 13 -> (353,353),
> 2 -> (342,342), 17 -> (357,357), 12 -> (352,352), 7 -> (347,347), 3 ->
> (343,343), 18 -> (358,358), 16 -> (356,356), 11 -> (351,351), 8 ->
> (348,348), 19 -> (359,359), 4 -> (344,344), 15 -> (355,355))> but
> was: (340,340), 5 -> (345,345), 10 -> (350,350), 14 -> (354,354),
> 1 -> (341,341), 6 -> (346,346), 9 -> (349,349), 13 -> (353,353), 2 ->
> (342,342), 17 -> (357,357), 12 -> (352,352), 7 -> (347,347), 3 ->
> (343,343), 18 -> (358,358), 16 -> (356,356), 11 -> (351,351), 99 ->
> (299,299), 8 -> (348,348), 19 -> (359,359), 4 -> (344,344), 15 ->
> (355,355))>
>         at org.junit.Assert.fail(Assert.java:88)
>         at org.junit.Assert.failNotEquals(Assert.java:834)
>         at org.junit.Assert.assertEquals(Assert.java:118)
>         at
> kafka.log.LogCleanerParameterizedIntegrationTest.testCleansCombinedCompactAndDeleteTopic(LogCleanerParameterizedIntegrationTest.scala:129)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-8066) ReplicaFetcherThread fails to startup because of failing to register the metric.

2019-03-07 Thread Zhanxiang (Patrick) Huang (JIRA)
Zhanxiang (Patrick) Huang created KAFKA-8066:


 Summary: ReplicaFetcherThread fails to startup because of failing 
to register the metric.
 Key: KAFKA-8066
 URL: https://issues.apache.org/jira/browse/KAFKA-8066
 Project: Kafka
  Issue Type: Bug
Reporter: Zhanxiang (Patrick) Huang
Assignee: Zhanxiang (Patrick) Huang


After KAFKA-6051, we close leaderEndPoint in replica fetcher thread 
initiateShutdown to try to preempt the in-progress fetch request and accelerate 
replica fetcher thread shutdown. However, the selector may fail to close the 
channel and throw an Exception when the replica fetcher thread is still 
actively fetching. In this case, the sensors will not be cleaned up: if 
`close(id)` throws an exception in `Selector.close()`, then `sensors.close()` 
will not be called and thus the sensors will not get unregistered (see the code 
below).
{code:java}
public void close() {
    List<String> connections = new ArrayList<>(channels.keySet());
    for (String id : connections)
        close(id);
    try {
        this.nioSelector.close();
    } catch (IOException | SecurityException e) {
        log.error("Exception closing nioSelector:", e);
    }
    sensors.close();
    channelBuilder.close();
}
{code}
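
As an illustration only (not the actual patch), the cleanup could be made robust by moving the sensor and channel-builder cleanup into a finally block, so that an exception from closing an individual channel cannot leave the metrics registered. This sketch assumes the same fields (`channels`, `nioSelector`, `sensors`, `channelBuilder`, `log`) as the snippet above:
{code:java}
// Sketch only: guarantee sensors/channelBuilder cleanup even if closing a
// channel or the nioSelector throws, so that re-registering the metrics
// later cannot collide with stale ones.
public void close() {
    List<String> connections = new ArrayList<>(channels.keySet());
    try {
        for (String id : connections) {
            try {
                close(id);
            } catch (RuntimeException e) {
                // One failing channel must not prevent cleanup of the rest.
                log.error("Exception closing connection {}:", id, e);
            }
        }
        this.nioSelector.close();
    } catch (IOException | SecurityException e) {
        log.error("Exception closing nioSelector:", e);
    } finally {
        sensors.close();
        channelBuilder.close();
    }
}
{code}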

If this happens, when the broker wants to start up a ReplicaFetcherThread with 
the same fetcher id to the same destination broker again (e.g. due to leadership 
changes or newly created partitions), the ReplicaFetcherThread will fail to 
start up, because the selector will throw an IllegalArgumentException if a 
metric with the same name already exists:

{noformat}
2019/02/27 10:24:26.938 ERROR [KafkaApis] [kafka-request-handler-6] 
[kafka-server] [] [KafkaApi-38031] Error when handling request {}
java.lang.IllegalArgumentException: A metric named 'MetricName 
[name=connection-count, group=replica-fetcher-metrics, description=The current 
number of active connections., tags={broker-id=29712, fetcher-id=3}]' already 
exists, can't register another one.
at 
org.apache.kafka.common.metrics.Metrics.registerMetric(Metrics.java:559) 
~[kafka-clients-2.0.0.66.jar:?]
at org.apache.kafka.common.metrics.Metrics.addMetric(Metrics.java:502) 
~[kafka-clients-2.0.0.66.jar:?]
at org.apache.kafka.common.metrics.Metrics.addMetric(Metrics.java:485) 
~[kafka-clients-2.0.0.66.jar:?]
at org.apache.kafka.common.metrics.Metrics.addMetric(Metrics.java:470) 
~[kafka-clients-2.0.0.66.jar:?]
at 
org.apache.kafka.common.network.Selector$SelectorMetrics.<init>(Selector.java:963)
 ~[kafka-clients-2.0.0.66.jar:?]
at org.apache.kafka.common.network.Selector.<init>(Selector.java:170) 
~[kafka-clients-2.0.0.66.jar:?]
at org.apache.kafka.common.network.Selector.<init>(Selector.java:188) 
~[kafka-clients-2.0.0.66.jar:?]
at 
kafka.server.ReplicaFetcherBlockingSend.<init>(ReplicaFetcherBlockingSend.scala:61)
 ~[kafka_2.11-2.0.0.66.jar:?]
at 
kafka.server.ReplicaFetcherThread$$anonfun$1.apply(ReplicaFetcherThread.scala:68)
 ~[kafka_2.11-2.0.0.66.jar:?]
at 
kafka.server.ReplicaFetcherThread$$anonfun$1.apply(ReplicaFetcherThread.scala:68)
 ~[kafka_2.11-2.0.0.66.jar:?]
at scala.Option.getOrElse(Option.scala:121) 
~[scala-library-2.11.12.jar:?]
at 
kafka.server.ReplicaFetcherThread.<init>(ReplicaFetcherThread.scala:67) 
~[kafka_2.11-2.0.0.66.jar:?]
at 
kafka.server.ReplicaFetcherManager.createFetcherThread(ReplicaFetcherManager.scala:32)
 ~[kafka_2.11-2.0.0.66.jar:?]
at 
kafka.server.AbstractFetcherManager.kafka$server$AbstractFetcherManager$$addAndStartFetcherThread$1(AbstractFetcherManager.scala:132)
 ~[kafka_2.11-2.0.0.66.jar:?]
at 
kafka.server.AbstractFetcherManager$$anonfun$addFetcherForPartitions$2.apply(AbstractFetcherManager.scala:146)
 ~[kafka_2.11-2.0.0.66.jar:?]
at 
kafka.server.AbstractFetcherManager$$anonfun$addFetcherForPartitions$2.apply(AbstractFetcherManager.scala:137)
 ~[kafka_2.11-2.0.0.66.jar:?]
at 
scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
 ~[scala-library-2.11.12.jar:?]
at scala.collection.immutable.Map$Map1.foreach(Map.scala:116) 
~[scala-library-2.11.12.jar:?]
at 
scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:732) 
~[scala-library-2.11.12.jar:?]
at 
kafka.server.AbstractFetcherManager.addFetcherForPartitions(AbstractFetcherManager.scala:137)
 ~[kafka_2.11-2.0.0.66.jar:?]
at kafka.server.ReplicaManager.makeFollowers(ReplicaManager.scala:1333) 
~[kafka_2.11-2.0.0.66.jar:?]
at 
kafka.server.ReplicaManager.becomeLeaderOrFollower(ReplicaManager.scala:1107) 
~[kafka_2.11-2.0.0.66.jar:?]
at 
kafka.server.KafkaApis.handleLeaderAndIsrRequest(KafkaApis.scala:194) 
~[kafka_2.11-2.0.0.66.jar:?]
at kafka.server.Ka

[jira] [Updated] (KAFKA-5141) WorkerTest.testCleanupTasksOnStop transient failure due to NPE

2019-03-07 Thread Matthias J. Sax (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-5141?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-5141:
---
Fix Version/s: 2.3.0

> WorkerTest.testCleanupTasksOnStop transient failure due to NPE
> --
>
> Key: KAFKA-5141
> URL: https://issues.apache.org/jira/browse/KAFKA-5141
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 2.3.0
>Reporter: Ewen Cheslack-Postava
>Assignee: Ewen Cheslack-Postava
>Priority: Critical
>  Labels: transient-unit-test-failure
> Fix For: 2.3.0
>
>
> https://builds.apache.org/job/kafka-pr-jdk8-scala2.11/3281/testReport/junit/org.apache.kafka.connect.runtime/WorkerTest/testCleanupTasksOnStop/
> Looks like the potential culprit is a NullPointerException when trying to 
> start a connector. It's likely being caught and logged via a catch 
> (Throwable). From the lines being executed it looks like the null might be 
> due to the instantiation of the Connector returning null, although I don't 
> see how that is possible given the current code. We may need more logging 
> output to track the issue down.
> {quote}
> Error Message
> java.lang.AssertionError: 
>   Expectation failure on verify:
> WorkerSourceTask.run(): expected: 1, actual: 0
> Stacktrace
> java.lang.AssertionError: 
>   Expectation failure on verify:
> WorkerSourceTask.run(): expected: 1, actual: 0
>   at org.easymock.internal.MocksControl.verify(MocksControl.java:225)
>   at 
> org.powermock.api.easymock.internal.invocationcontrol.EasyMockMethodInvocationControl.verify(EasyMockMethodInvocationControl.java:132)
>   at org.powermock.api.easymock.PowerMock.verify(PowerMock.java:1466)
>   at org.powermock.api.easymock.PowerMock.verifyAll(PowerMock.java:1405)
>   at 
> org.apache.kafka.connect.runtime.WorkerTest.testCleanupTasksOnStop(WorkerTest.java:480)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at org.junit.internal.runners.TestMethod.invoke(TestMethod.java:68)
>   at 
> org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl$PowerMockJUnit44MethodRunner.runTestMethod(PowerMockJUnit44RunnerDelegateImpl.java:310)
>   at org.junit.internal.runners.MethodRoadie$2.run(MethodRoadie.java:89)
>   at 
> org.junit.internal.runners.MethodRoadie.runBeforesThenTestThenAfters(MethodRoadie.java:97)
>   at 
> org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl$PowerMockJUnit44MethodRunner.executeTest(PowerMockJUnit44RunnerDelegateImpl.java:294)
>   at 
> org.powermock.modules.junit4.internal.impl.PowerMockJUnit47RunnerDelegateImpl$PowerMockJUnit47MethodRunner.executeTestInSuper(PowerMockJUnit47RunnerDelegateImpl.java:127)
>   at 
> org.powermock.modules.junit4.internal.impl.PowerMockJUnit47RunnerDelegateImpl$PowerMockJUnit47MethodRunner.executeTest(PowerMockJUnit47RunnerDelegateImpl.java:82)
>   at 
> org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl$PowerMockJUnit44MethodRunner.runBeforesThenTestThenAfters(PowerMockJUnit44RunnerDelegateImpl.java:282)
>   at org.junit.internal.runners.MethodRoadie.runTest(MethodRoadie.java:87)
>   at org.junit.internal.runners.MethodRoadie.run(MethodRoadie.java:50)
>   at 
> org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl.invokeTestMethod(PowerMockJUnit44RunnerDelegateImpl.java:207)
>   at 
> org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl.runMethods(PowerMockJUnit44RunnerDelegateImpl.java:146)
>   at 
> org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl$1.run(PowerMockJUnit44RunnerDelegateImpl.java:120)
>   at 
> org.junit.internal.runners.ClassRoadie.runUnprotected(ClassRoadie.java:34)
>   at 
> org.junit.internal.runners.ClassRoadie.runProtected(ClassRoadie.java:44)
>   at 
> org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl.run(PowerMockJUnit44RunnerDelegateImpl.java:122)
>   at 
> org.powermock.modules.junit4.common.internal.impl.JUnit4TestSuiteChunkerImpl.run(JUnit4TestSuiteChunkerImpl.java:106)
>   at 
> org.powermock.modules.junit4.common.internal.impl.AbstractCommonPowerMockRunner.run(AbstractCommonPowerMockRunner.java:53)
>   at 
> org.powermock.modules.junit4.PowerMockRunner.run(PowerMockRunner.java:59)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.

[jira] [Reopened] (KAFKA-5141) WorkerTest.testCleanupTasksOnStop transient failure due to NPE

2019-03-07 Thread Matthias J. Sax (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-5141?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax reopened KAFKA-5141:


This happened again: 
[https://builds.apache.org/blue/organizations/jenkins/kafka-trunk-jdk8/detail/kafka-trunk-jdk8/3442/tests]

Interesting stack trace:
{quote}java.lang.AssertionError:
Expectation failure on verify:
WorkerSourceTask.run(): expected: 1, actual: 1
at org.easymock.internal.MocksControl.verify(MocksControl.java:242)
at 
org.powermock.api.easymock.internal.invocationcontrol.EasyMockMethodInvocationControl.verify(EasyMockMethodInvocationControl.java:126)
at org.powermock.api.easymock.PowerMock.verify(PowerMock.java:1476)
at org.powermock.api.easymock.PowerMock.verifyAll(PowerMock.java:1415)
at 
org.apache.kafka.connect.runtime.WorkerTest.testCleanupTasksOnStop(WorkerTest.java:723){quote}
Might be a Long vs Integer issue?

> WorkerTest.testCleanupTasksOnStop transient failure due to NPE
> --
>
> Key: KAFKA-5141
> URL: https://issues.apache.org/jira/browse/KAFKA-5141
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Reporter: Ewen Cheslack-Postava
>Assignee: Ewen Cheslack-Postava
>Priority: Major
>  Labels: transient-unit-test-failure
>
> https://builds.apache.org/job/kafka-pr-jdk8-scala2.11/3281/testReport/junit/org.apache.kafka.connect.runtime/WorkerTest/testCleanupTasksOnStop/
> Looks like the potential culprit is a NullPointerException when trying to 
> start a connector. It's likely being caught and logged via a catch 
> (Throwable). From the lines being executed it looks like the null might be 
> due to the instantiation of the Connector returning null, although I don't 
> see how that is possible given the current code. We may need more logging 
> output to track the issue down.
> {quote}
> Error Message
> java.lang.AssertionError: 
>   Expectation failure on verify:
> WorkerSourceTask.run(): expected: 1, actual: 0
> Stacktrace
> java.lang.AssertionError: 
>   Expectation failure on verify:
> WorkerSourceTask.run(): expected: 1, actual: 0
>   at org.easymock.internal.MocksControl.verify(MocksControl.java:225)
>   at 
> org.powermock.api.easymock.internal.invocationcontrol.EasyMockMethodInvocationControl.verify(EasyMockMethodInvocationControl.java:132)
>   at org.powermock.api.easymock.PowerMock.verify(PowerMock.java:1466)
>   at org.powermock.api.easymock.PowerMock.verifyAll(PowerMock.java:1405)
>   at 
> org.apache.kafka.connect.runtime.WorkerTest.testCleanupTasksOnStop(WorkerTest.java:480)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at org.junit.internal.runners.TestMethod.invoke(TestMethod.java:68)
>   at 
> org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl$PowerMockJUnit44MethodRunner.runTestMethod(PowerMockJUnit44RunnerDelegateImpl.java:310)
>   at org.junit.internal.runners.MethodRoadie$2.run(MethodRoadie.java:89)
>   at 
> org.junit.internal.runners.MethodRoadie.runBeforesThenTestThenAfters(MethodRoadie.java:97)
>   at 
> org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl$PowerMockJUnit44MethodRunner.executeTest(PowerMockJUnit44RunnerDelegateImpl.java:294)
>   at 
> org.powermock.modules.junit4.internal.impl.PowerMockJUnit47RunnerDelegateImpl$PowerMockJUnit47MethodRunner.executeTestInSuper(PowerMockJUnit47RunnerDelegateImpl.java:127)
>   at 
> org.powermock.modules.junit4.internal.impl.PowerMockJUnit47RunnerDelegateImpl$PowerMockJUnit47MethodRunner.executeTest(PowerMockJUnit47RunnerDelegateImpl.java:82)
>   at 
> org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl$PowerMockJUnit44MethodRunner.runBeforesThenTestThenAfters(PowerMockJUnit44RunnerDelegateImpl.java:282)
>   at org.junit.internal.runners.MethodRoadie.runTest(MethodRoadie.java:87)
>   at org.junit.internal.runners.MethodRoadie.run(MethodRoadie.java:50)
>   at 
> org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl.invokeTestMethod(PowerMockJUnit44RunnerDelegateImpl.java:207)
>   at 
> org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl.runMethods(PowerMockJUnit44RunnerDelegateImpl.java:146)
>   at 
> org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl$1.run(PowerMockJUnit44RunnerDelegateImpl.java:120)
>   at 
> org.junit.internal.runners.ClassRoadie.runUnprotected(ClassRoadie.java:34)
>   at 
> org.junit.internal.runners.Cl

[jira] [Updated] (KAFKA-5141) WorkerTest.testCleanupTasksOnStop transient failure due to NPE

2019-03-07 Thread Matthias J. Sax (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-5141?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-5141:
---
Priority: Critical  (was: Major)

> WorkerTest.testCleanupTasksOnStop transient failure due to NPE
> --
>
> Key: KAFKA-5141
> URL: https://issues.apache.org/jira/browse/KAFKA-5141
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 2.3.0
>Reporter: Ewen Cheslack-Postava
>Assignee: Ewen Cheslack-Postava
>Priority: Critical
>  Labels: transient-unit-test-failure
>
> https://builds.apache.org/job/kafka-pr-jdk8-scala2.11/3281/testReport/junit/org.apache.kafka.connect.runtime/WorkerTest/testCleanupTasksOnStop/
> Looks like the potential culprit is a NullPointerException when trying to 
> start a connector. It's likely being caught and logged via a catch 
> (Throwable). From the lines being executed it looks like the null might be 
> due to the instantiation of the Connector returning null, although I don't 
> see how that is possible given the current code. We may need more logging 
> output to track the issue down.
> {quote}
> Error Message
> java.lang.AssertionError: 
>   Expectation failure on verify:
> WorkerSourceTask.run(): expected: 1, actual: 0
> Stacktrace
> java.lang.AssertionError: 
>   Expectation failure on verify:
> WorkerSourceTask.run(): expected: 1, actual: 0
>   at org.easymock.internal.MocksControl.verify(MocksControl.java:225)
>   at 
> org.powermock.api.easymock.internal.invocationcontrol.EasyMockMethodInvocationControl.verify(EasyMockMethodInvocationControl.java:132)
>   at org.powermock.api.easymock.PowerMock.verify(PowerMock.java:1466)
>   at org.powermock.api.easymock.PowerMock.verifyAll(PowerMock.java:1405)
>   at 
> org.apache.kafka.connect.runtime.WorkerTest.testCleanupTasksOnStop(WorkerTest.java:480)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at org.junit.internal.runners.TestMethod.invoke(TestMethod.java:68)
>   at 
> org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl$PowerMockJUnit44MethodRunner.runTestMethod(PowerMockJUnit44RunnerDelegateImpl.java:310)
>   at org.junit.internal.runners.MethodRoadie$2.run(MethodRoadie.java:89)
>   at 
> org.junit.internal.runners.MethodRoadie.runBeforesThenTestThenAfters(MethodRoadie.java:97)
>   at 
> org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl$PowerMockJUnit44MethodRunner.executeTest(PowerMockJUnit44RunnerDelegateImpl.java:294)
>   at 
> org.powermock.modules.junit4.internal.impl.PowerMockJUnit47RunnerDelegateImpl$PowerMockJUnit47MethodRunner.executeTestInSuper(PowerMockJUnit47RunnerDelegateImpl.java:127)
>   at 
> org.powermock.modules.junit4.internal.impl.PowerMockJUnit47RunnerDelegateImpl$PowerMockJUnit47MethodRunner.executeTest(PowerMockJUnit47RunnerDelegateImpl.java:82)
>   at 
> org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl$PowerMockJUnit44MethodRunner.runBeforesThenTestThenAfters(PowerMockJUnit44RunnerDelegateImpl.java:282)
>   at org.junit.internal.runners.MethodRoadie.runTest(MethodRoadie.java:87)
>   at org.junit.internal.runners.MethodRoadie.run(MethodRoadie.java:50)
>   at 
> org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl.invokeTestMethod(PowerMockJUnit44RunnerDelegateImpl.java:207)
>   at 
> org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl.runMethods(PowerMockJUnit44RunnerDelegateImpl.java:146)
>   at 
> org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl$1.run(PowerMockJUnit44RunnerDelegateImpl.java:120)
>   at 
> org.junit.internal.runners.ClassRoadie.runUnprotected(ClassRoadie.java:34)
>   at 
> org.junit.internal.runners.ClassRoadie.runProtected(ClassRoadie.java:44)
>   at 
> org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl.run(PowerMockJUnit44RunnerDelegateImpl.java:122)
>   at 
> org.powermock.modules.junit4.common.internal.impl.JUnit4TestSuiteChunkerImpl.run(JUnit4TestSuiteChunkerImpl.java:106)
>   at 
> org.powermock.modules.junit4.common.internal.impl.AbstractCommonPowerMockRunner.run(AbstractCommonPowerMockRunner.java:53)
>   at 
> org.powermock.modules.junit4.PowerMockRunner.run(PowerMockRunner.java:59)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:114)
>   a

[jira] [Updated] (KAFKA-5141) WorkerTest.testCleanupTasksOnStop transient failure due to NPE

2019-03-07 Thread Matthias J. Sax (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-5141?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-5141:
---
Affects Version/s: 2.3.0

> WorkerTest.testCleanupTasksOnStop transient failure due to NPE
> --
>
> Key: KAFKA-5141
> URL: https://issues.apache.org/jira/browse/KAFKA-5141
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 2.3.0
>Reporter: Ewen Cheslack-Postava
>Assignee: Ewen Cheslack-Postava
>Priority: Major
>  Labels: transient-unit-test-failure
>
> https://builds.apache.org/job/kafka-pr-jdk8-scala2.11/3281/testReport/junit/org.apache.kafka.connect.runtime/WorkerTest/testCleanupTasksOnStop/
> Looks like the potential culprit is a NullPointerException when trying to 
> start a connector. It's likely being caught and logged via a catch 
> (Throwable). From the lines being executed it looks like the null might be 
> due to the instantiation of the Connector returning null, although I don't 
> see how that is possible given the current code. We may need more logging 
> output to track the issue down.
> {quote}
> Error Message
> java.lang.AssertionError: 
>   Expectation failure on verify:
> WorkerSourceTask.run(): expected: 1, actual: 0
> Stacktrace
> java.lang.AssertionError: 
>   Expectation failure on verify:
> WorkerSourceTask.run(): expected: 1, actual: 0
>   at org.easymock.internal.MocksControl.verify(MocksControl.java:225)
>   at 
> org.powermock.api.easymock.internal.invocationcontrol.EasyMockMethodInvocationControl.verify(EasyMockMethodInvocationControl.java:132)
>   at org.powermock.api.easymock.PowerMock.verify(PowerMock.java:1466)
>   at org.powermock.api.easymock.PowerMock.verifyAll(PowerMock.java:1405)
>   at 
> org.apache.kafka.connect.runtime.WorkerTest.testCleanupTasksOnStop(WorkerTest.java:480)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at org.junit.internal.runners.TestMethod.invoke(TestMethod.java:68)
>   at 
> org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl$PowerMockJUnit44MethodRunner.runTestMethod(PowerMockJUnit44RunnerDelegateImpl.java:310)
>   at org.junit.internal.runners.MethodRoadie$2.run(MethodRoadie.java:89)
>   at 
> org.junit.internal.runners.MethodRoadie.runBeforesThenTestThenAfters(MethodRoadie.java:97)
>   at 
> org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl$PowerMockJUnit44MethodRunner.executeTest(PowerMockJUnit44RunnerDelegateImpl.java:294)
>   at 
> org.powermock.modules.junit4.internal.impl.PowerMockJUnit47RunnerDelegateImpl$PowerMockJUnit47MethodRunner.executeTestInSuper(PowerMockJUnit47RunnerDelegateImpl.java:127)
>   at 
> org.powermock.modules.junit4.internal.impl.PowerMockJUnit47RunnerDelegateImpl$PowerMockJUnit47MethodRunner.executeTest(PowerMockJUnit47RunnerDelegateImpl.java:82)
>   at 
> org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl$PowerMockJUnit44MethodRunner.runBeforesThenTestThenAfters(PowerMockJUnit44RunnerDelegateImpl.java:282)
>   at org.junit.internal.runners.MethodRoadie.runTest(MethodRoadie.java:87)
>   at org.junit.internal.runners.MethodRoadie.run(MethodRoadie.java:50)
>   at 
> org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl.invokeTestMethod(PowerMockJUnit44RunnerDelegateImpl.java:207)
>   at 
> org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl.runMethods(PowerMockJUnit44RunnerDelegateImpl.java:146)
>   at 
> org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl$1.run(PowerMockJUnit44RunnerDelegateImpl.java:120)
>   at 
> org.junit.internal.runners.ClassRoadie.runUnprotected(ClassRoadie.java:34)
>   at 
> org.junit.internal.runners.ClassRoadie.runProtected(ClassRoadie.java:44)
>   at 
> org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl.run(PowerMockJUnit44RunnerDelegateImpl.java:122)
>   at 
> org.powermock.modules.junit4.common.internal.impl.JUnit4TestSuiteChunkerImpl.run(JUnit4TestSuiteChunkerImpl.java:106)
>   at 
> org.powermock.modules.junit4.common.internal.impl.AbstractCommonPowerMockRunner.run(AbstractCommonPowerMockRunner.java:53)
>   at 
> org.powermock.modules.junit4.PowerMockRunner.run(PowerMockRunner.java:59)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:114)
>   at 
> org.gr

[jira] [Commented] (KAFKA-5141) WorkerTest.testCleanupTasksOnStop transient failure due to NPE

2019-03-07 Thread Matthias J. Sax (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-5141?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16787230#comment-16787230
 ] 

Matthias J. Sax commented on KAFKA-5141:


One more: 
[https://builds.apache.org/blue/organizations/jenkins/kafka-trunk-jdk8/detail/kafka-trunk-jdk8/3443/tests]
{quote}java.lang.AssertionError:
Expectation failure on verify:
WorkerSourceTask.run(): expected: 1, actual: 0
at org.easymock.internal.MocksControl.verify(MocksControl.java:242)
at 
org.powermock.api.easymock.internal.invocationcontrol.EasyMockMethodInvocationControl.verify(EasyMockMethodInvocationControl.java:126)
at org.powermock.api.easymock.PowerMock.verify(PowerMock.java:1476)
at org.powermock.api.easymock.PowerMock.verifyAll(PowerMock.java:1415)
at 
org.apache.kafka.connect.runtime.WorkerTest.testCleanupTasksOnStop(WorkerTest.java:723){quote}

> WorkerTest.testCleanupTasksOnStop transient failure due to NPE
> --
>
> Key: KAFKA-5141
> URL: https://issues.apache.org/jira/browse/KAFKA-5141
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 2.3.0
>Reporter: Ewen Cheslack-Postava
>Assignee: Ewen Cheslack-Postava
>Priority: Critical
>  Labels: flaky-test, transient-unit-test-failure
> Fix For: 2.3.0
>
>
> https://builds.apache.org/job/kafka-pr-jdk8-scala2.11/3281/testReport/junit/org.apache.kafka.connect.runtime/WorkerTest/testCleanupTasksOnStop/
> Looks like the potential culprit is a NullPointerException when trying to 
> start a connector. It's likely being caught and logged via a catch 
> (Throwable). From the lines being executed it looks like the null might be 
> due to the instantiation of the Connector returning null, although I don't 
> see how that is possible given the current code. We may need more logging 
> output to track the issue down.
> {quote}
> Error Message
> java.lang.AssertionError: 
>   Expectation failure on verify:
> WorkerSourceTask.run(): expected: 1, actual: 0
> Stacktrace
> java.lang.AssertionError: 
>   Expectation failure on verify:
> WorkerSourceTask.run(): expected: 1, actual: 0
>   at org.easymock.internal.MocksControl.verify(MocksControl.java:225)
>   at 
> org.powermock.api.easymock.internal.invocationcontrol.EasyMockMethodInvocationControl.verify(EasyMockMethodInvocationControl.java:132)
>   at org.powermock.api.easymock.PowerMock.verify(PowerMock.java:1466)
>   at org.powermock.api.easymock.PowerMock.verifyAll(PowerMock.java:1405)
>   at 
> org.apache.kafka.connect.runtime.WorkerTest.testCleanupTasksOnStop(WorkerTest.java:480)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at org.junit.internal.runners.TestMethod.invoke(TestMethod.java:68)
>   at 
> org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl$PowerMockJUnit44MethodRunner.runTestMethod(PowerMockJUnit44RunnerDelegateImpl.java:310)
>   at org.junit.internal.runners.MethodRoadie$2.run(MethodRoadie.java:89)
>   at 
> org.junit.internal.runners.MethodRoadie.runBeforesThenTestThenAfters(MethodRoadie.java:97)
>   at 
> org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl$PowerMockJUnit44MethodRunner.executeTest(PowerMockJUnit44RunnerDelegateImpl.java:294)
>   at 
> org.powermock.modules.junit4.internal.impl.PowerMockJUnit47RunnerDelegateImpl$PowerMockJUnit47MethodRunner.executeTestInSuper(PowerMockJUnit47RunnerDelegateImpl.java:127)
>   at 
> org.powermock.modules.junit4.internal.impl.PowerMockJUnit47RunnerDelegateImpl$PowerMockJUnit47MethodRunner.executeTest(PowerMockJUnit47RunnerDelegateImpl.java:82)
>   at 
> org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl$PowerMockJUnit44MethodRunner.runBeforesThenTestThenAfters(PowerMockJUnit44RunnerDelegateImpl.java:282)
>   at org.junit.internal.runners.MethodRoadie.runTest(MethodRoadie.java:87)
>   at org.junit.internal.runners.MethodRoadie.run(MethodRoadie.java:50)
>   at 
> org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl.invokeTestMethod(PowerMockJUnit44RunnerDelegateImpl.java:207)
>   at 
> org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl.runMethods(PowerMockJUnit44RunnerDelegateImpl.java:146)
>   at 
> org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl$1.run(PowerMockJUnit44RunnerDelegateImpl.java:120)
>   at 
> org.junit.internal.runners.ClassRoadie.runUnprotected(Clas

[jira] [Updated] (KAFKA-7647) Flaky test LogCleanerParameterizedIntegrationTest.testCleansCombinedCompactAndDeleteTopic

2019-03-07 Thread Matthias J. Sax (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7647?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-7647:
---
Affects Version/s: 2.3.0

> Flaky test 
> LogCleanerParameterizedIntegrationTest.testCleansCombinedCompactAndDeleteTopic
> -
>
> Key: KAFKA-7647
> URL: https://issues.apache.org/jira/browse/KAFKA-7647
> Project: Kafka
>  Issue Type: Sub-task
>Affects Versions: 2.3.0
>Reporter: Dong Lin
>Priority: Critical
>
> {code}
> kafka.log.LogCleanerParameterizedIntegrationTest >
> testCleansCombinedCompactAndDeleteTopic[3] FAILED
>     java.lang.AssertionError: Contents of the map shouldn't change
> expected:<Map(0 -> (340,340), 5 -> (345,345), 10 -> (350,350), 14 ->
> (354,354), 1 -> (341,341), 6 -> (346,346), 9 -> (349,349), 13 -> (353,353),
> 2 -> (342,342), 17 -> (357,357), 12 -> (352,352), 7 -> (347,347), 3 ->
> (343,343), 18 -> (358,358), 16 -> (356,356), 11 -> (351,351), 8 ->
> (348,348), 19 -> (359,359), 4 -> (344,344), 15 -> (355,355))> but
> was:<Map(0 -> (340,340), 5 -> (345,345), 10 -> (350,350), 14 -> (354,354),
> 1 -> (341,341), 6 -> (346,346), 9 -> (349,349), 13 -> (353,353), 2 ->
> (342,342), 17 -> (357,357), 12 -> (352,352), 7 -> (347,347), 3 ->
> (343,343), 18 -> (358,358), 16 -> (356,356), 11 -> (351,351), 99 ->
> (299,299), 8 -> (348,348), 19 -> (359,359), 4 -> (344,344), 15 ->
> (355,355))>
>         at org.junit.Assert.fail(Assert.java:88)
>         at org.junit.Assert.failNotEquals(Assert.java:834)
>         at org.junit.Assert.assertEquals(Assert.java:118)
>         at
> kafka.log.LogCleanerParameterizedIntegrationTest.testCleansCombinedCompactAndDeleteTopic(LogCleanerParameterizedIntegrationTest.scala:129)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-7647) Flaky test LogCleanerParameterizedIntegrationTest.testCleansCombinedCompactAndDeleteTopic

2019-03-07 Thread Matthias J. Sax (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7647?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-7647:
---
Fix Version/s: 2.3.0

> Flaky test 
> LogCleanerParameterizedIntegrationTest.testCleansCombinedCompactAndDeleteTopic
> -
>
> Key: KAFKA-7647
> URL: https://issues.apache.org/jira/browse/KAFKA-7647
> Project: Kafka
>  Issue Type: Sub-task
>Affects Versions: 2.3.0
>Reporter: Dong Lin
>Priority: Critical
> Fix For: 2.3.0
>
>
> {code}
> kafka.log.LogCleanerParameterizedIntegrationTest >
> testCleansCombinedCompactAndDeleteTopic[3] FAILED
>     java.lang.AssertionError: Contents of the map shouldn't change
> expected:<Map(0 -> (340,340), 5 -> (345,345), 10 -> (350,350), 14 ->
> (354,354), 1 -> (341,341), 6 -> (346,346), 9 -> (349,349), 13 -> (353,353),
> 2 -> (342,342), 17 -> (357,357), 12 -> (352,352), 7 -> (347,347), 3 ->
> (343,343), 18 -> (358,358), 16 -> (356,356), 11 -> (351,351), 8 ->
> (348,348), 19 -> (359,359), 4 -> (344,344), 15 -> (355,355))> but
> was:<Map(0 -> (340,340), 5 -> (345,345), 10 -> (350,350), 14 -> (354,354),
> 1 -> (341,341), 6 -> (346,346), 9 -> (349,349), 13 -> (353,353), 2 ->
> (342,342), 17 -> (357,357), 12 -> (352,352), 7 -> (347,347), 3 ->
> (343,343), 18 -> (358,358), 16 -> (356,356), 11 -> (351,351), 99 ->
> (299,299), 8 -> (348,348), 19 -> (359,359), 4 -> (344,344), 15 ->
> (355,355))>
>         at org.junit.Assert.fail(Assert.java:88)
>         at org.junit.Assert.failNotEquals(Assert.java:834)
>         at org.junit.Assert.assertEquals(Assert.java:118)
>         at
> kafka.log.LogCleanerParameterizedIntegrationTest.testCleansCombinedCompactAndDeleteTopic(LogCleanerParameterizedIntegrationTest.scala:129)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-5141) WorkerTest.testCleanupTasksOnStop transient failure due to NPE

2019-03-07 Thread Matthias J. Sax (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-5141?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-5141:
---
Labels: flaky-test transient-unit-test-failure  (was: 
transient-unit-test-failure)

> WorkerTest.testCleanupTasksOnStop transient failure due to NPE
> --
>
> Key: KAFKA-5141
> URL: https://issues.apache.org/jira/browse/KAFKA-5141
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 2.3.0
>Reporter: Ewen Cheslack-Postava
>Assignee: Ewen Cheslack-Postava
>Priority: Critical
>  Labels: flaky-test, transient-unit-test-failure
> Fix For: 2.3.0
>
>
> https://builds.apache.org/job/kafka-pr-jdk8-scala2.11/3281/testReport/junit/org.apache.kafka.connect.runtime/WorkerTest/testCleanupTasksOnStop/
> Looks like the potential culprit is a NullPointerException when trying to 
> start a connector. It's likely being caught and logged via a catch 
> (Throwable). From the lines being executed it looks like the null might be 
> due to the instantiation of the Connector returning null, although I don't 
> see how that is possible given the current code. We may need more logging 
> output to track the issue down.
> {quote}
> Error Message
> java.lang.AssertionError: 
>   Expectation failure on verify:
> WorkerSourceTask.run(): expected: 1, actual: 0
> Stacktrace
> java.lang.AssertionError: 
>   Expectation failure on verify:
> WorkerSourceTask.run(): expected: 1, actual: 0
>   at org.easymock.internal.MocksControl.verify(MocksControl.java:225)
>   at 
> org.powermock.api.easymock.internal.invocationcontrol.EasyMockMethodInvocationControl.verify(EasyMockMethodInvocationControl.java:132)
>   at org.powermock.api.easymock.PowerMock.verify(PowerMock.java:1466)
>   at org.powermock.api.easymock.PowerMock.verifyAll(PowerMock.java:1405)
>   at 
> org.apache.kafka.connect.runtime.WorkerTest.testCleanupTasksOnStop(WorkerTest.java:480)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at org.junit.internal.runners.TestMethod.invoke(TestMethod.java:68)
>   at 
> org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl$PowerMockJUnit44MethodRunner.runTestMethod(PowerMockJUnit44RunnerDelegateImpl.java:310)
>   at org.junit.internal.runners.MethodRoadie$2.run(MethodRoadie.java:89)
>   at 
> org.junit.internal.runners.MethodRoadie.runBeforesThenTestThenAfters(MethodRoadie.java:97)
>   at 
> org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl$PowerMockJUnit44MethodRunner.executeTest(PowerMockJUnit44RunnerDelegateImpl.java:294)
>   at 
> org.powermock.modules.junit4.internal.impl.PowerMockJUnit47RunnerDelegateImpl$PowerMockJUnit47MethodRunner.executeTestInSuper(PowerMockJUnit47RunnerDelegateImpl.java:127)
>   at 
> org.powermock.modules.junit4.internal.impl.PowerMockJUnit47RunnerDelegateImpl$PowerMockJUnit47MethodRunner.executeTest(PowerMockJUnit47RunnerDelegateImpl.java:82)
>   at 
> org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl$PowerMockJUnit44MethodRunner.runBeforesThenTestThenAfters(PowerMockJUnit44RunnerDelegateImpl.java:282)
>   at org.junit.internal.runners.MethodRoadie.runTest(MethodRoadie.java:87)
>   at org.junit.internal.runners.MethodRoadie.run(MethodRoadie.java:50)
>   at 
> org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl.invokeTestMethod(PowerMockJUnit44RunnerDelegateImpl.java:207)
>   at 
> org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl.runMethods(PowerMockJUnit44RunnerDelegateImpl.java:146)
>   at 
> org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl$1.run(PowerMockJUnit44RunnerDelegateImpl.java:120)
>   at 
> org.junit.internal.runners.ClassRoadie.runUnprotected(ClassRoadie.java:34)
>   at 
> org.junit.internal.runners.ClassRoadie.runProtected(ClassRoadie.java:44)
>   at 
> org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl.run(PowerMockJUnit44RunnerDelegateImpl.java:122)
>   at 
> org.powermock.modules.junit4.common.internal.impl.JUnit4TestSuiteChunkerImpl.run(JUnit4TestSuiteChunkerImpl.java:106)
>   at 
> org.powermock.modules.junit4.common.internal.impl.AbstractCommonPowerMockRunner.run(AbstractCommonPowerMockRunner.java:53)
>   at 
> org.powermock.modules.junit4.PowerMockRunner.run(PowerMockRunner.java:59)
>   at 
> org.gradle.api.internal.task

[jira] [Commented] (KAFKA-8037) KTable restore may load bad data

2019-03-07 Thread Patrik Kleindl (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8037?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16787240#comment-16787240
 ] 

Patrik Kleindl commented on KAFKA-8037:
---

[~mjsax] [~guozhang]

Added [https://github.com/apache/kafka/pull/6394] with a first working 
implementation for the global state stores. Any feedback is appreciated.

I will try to do a small performance test to see how big the impact is.

> KTable restore may load bad data
> 
>
> Key: KAFKA-8037
> URL: https://issues.apache.org/jira/browse/KAFKA-8037
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Matthias J. Sax
>Assignee: Patrik Kleindl
>Priority: Minor
>
> If an input topic contains bad data, users can specify a 
> `deserialization.exception.handler` to drop corrupted records on read. 
> However, this mechanism may be by-passed on restore. Assume a 
> `builder.table()` call reads and drops a corrupted record. If the table state 
> is lost and restored from the changelog topic, the corrupted record may be 
> copied into the store, because on restore plain bytes are copied.
> If the KTable is used in a join, an internal `store.get()` call to lookup the 
> record would fail with a deserialization exception if the value part cannot 
> be deserialized.
> GlobalKTables are affected, too (cf. KAFKA-7663 that may allow a fix for 
> GlobalKTable case). It's unclear to me atm, how this issue could be addressed 
> for KTables though.
> Note, that user state stores are not affected, because they always have a 
> dedicated changelog topic (and don't reuse an input topic) and thus the 
> corrupted record would not be written into the changelog topic.
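
For context, a minimal sketch of how the read-path handler mentioned in the description is typically configured (assuming the standard Streams config key and the built-in LogAndContinueExceptionHandler; whether the same check can be applied on restore is exactly what this ticket is about):
{code}
import java.util.Properties;

import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.errors.LogAndContinueExceptionHandler;

public class DeserializationHandlerConfigExample {
    // Sketch only: the application id and bootstrap servers are placeholders.
    static Properties streamsProps() {
        final Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "ktable-restore-example");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        // Drops records that cannot be deserialized during normal processing;
        // per this ticket, restore copies plain bytes and may bypass this handler.
        props.put(StreamsConfig.DEFAULT_DESERIALIZATION_EXCEPTION_HANDLER_CLASS_CONFIG,
                LogAndContinueExceptionHandler.class);
        return props;
    }
}
{code}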



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-3729) Auto-configure non-default SerDes passed alongside the topology builder

2019-03-07 Thread Matthias J. Sax (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-3729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16787241#comment-16787241
 ] 

Matthias J. Sax commented on KAFKA-3729:


[~yuzhih...@gmail.com] The patch seems to go in the right direction. Note that the 
global topology does not have sink nodes (we can simplify there). Additionally, we 
need to take care of stores and changelog topics that have Serdes, as well. A small 
example of the usage this ticket targets is sketched below.
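
For context, a minimal example of the kind of non-default Serde usage this ticket is about (a sketch; the built-in Serdes and topic name here stand in for user-provided ones that would need configure() to be called):
{code}
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.KStream;

public class NonDefaultSerdeExample {
    static KStream<String, Long> build(final StreamsBuilder builder) {
        // Serdes passed alongside the topology (rather than via the
        // default.key.serde / default.value.serde configs) are the ones the
        // ticket proposes to auto-configure as well.
        return builder.stream("input-topic",
                Consumed.with(Serdes.String(), Serdes.Long()));
    }
}
{code}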

>  Auto-configure non-default SerDes passed alongside the topology builder
> 
>
> Key: KAFKA-3729
> URL: https://issues.apache.org/jira/browse/KAFKA-3729
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Fred Patton
>Priority: Major
>  Labels: api, newbie
> Attachments: 3729.txt
>
>
> From Guozhang Wang:
> "Only default serdes provided through configs are auto-configured today. But 
> we could auto-configure other serdes passed alongside the topology builder as 
> well."



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-8001) Fetch from future replica stalls when local replica becomes a leader

2019-03-07 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16787263#comment-16787263
 ] 

ASF GitHub Bot commented on KAFKA-8001:
---

colinhicks commented on pull request #6395: KAFKA-8001: Reset future replica 
fetcher when local log becomes leader.
URL: https://github.com/apache/kafka/pull/6395
 
 
   Upon becoming leader, the local replica can fail with `FENCED_LEADER_EPOCH` 
on its next fetch from the future replica. In this condition, fetching is 
stalled until the next leader change. This patch avoids this scenario by 
removing and then re-adding, in `replicaAlterLogDirsManager`, the partitions for 
which the local log is the leader and a future replica exists.
   
   The test case asserts that such a partition is reset for fetching with the 
new leader epoch.
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Fetch from future replica stalls when local replica becomes a leader
> 
>
> Key: KAFKA-8001
> URL: https://issues.apache.org/jira/browse/KAFKA-8001
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 2.1.0, 2.1.1
>Reporter: Anna Povzner
>Assignee: Colin Hicks
>Priority: Critical
>
> With KIP-320, a fetch from a follower / future replica returns 
> FENCED_LEADER_EPOCH if the current leader epoch in the request is lower than the 
> leader epoch known to the leader (or to the local replica, in the case of future 
> replica fetching). In the case of a future replica fetching from the local replica, 
> if the local replica becomes the leader of the partition, the next fetch from the 
> future replica fails with FENCED_LEADER_EPOCH and fetching from the future replica 
> is stopped until the next leader change. 
> Proposed solution: on a local replica leader change, the future replica should 
> "become a follower" again and go through the truncation phase. Or we could 
> optimize it and just update the partition state of the future replica to reflect 
> the updated current leader epoch. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-8041) Flaky Test LogDirFailureTest#testIOExceptionDuringLogRoll

2019-03-07 Thread Matthias J. Sax (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8041?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16787270#comment-16787270
 ] 

Matthias J. Sax commented on KAFKA-8041:


Failed again: 
[https://builds.apache.org/job/kafka-pr-jdk10-scala2.12/4726/testReport/junit/kafka.server/LogDirFailureTest/testIOExceptionDuringLogRoll/]

> Flaky Test LogDirFailureTest#testIOExceptionDuringLogRoll
> -
>
> Key: KAFKA-8041
> URL: https://issues.apache.org/jira/browse/KAFKA-8041
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.0.1
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.0.2
>
>
> [https://builds.apache.org/blue/organizations/jenkins/kafka-2.0-jdk8/detail/kafka-2.0-jdk8/236/tests]
> {quote}java.lang.AssertionError: Expected some messages
> at kafka.utils.TestUtils$.fail(TestUtils.scala:357)
> at kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:787)
> at 
> kafka.server.LogDirFailureTest.testProduceAfterLogDirFailureOnLeader(LogDirFailureTest.scala:189)
> at 
> kafka.server.LogDirFailureTest.testIOExceptionDuringLogRoll(LogDirFailureTest.scala:63){quote}
> STDOUT
> {quote}[2019-03-05 03:44:58,614] ERROR [ReplicaFetcher replicaId=1, 
> leaderId=0, fetcherId=0] Error for partition topic-6 at offset 0 
> (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> [2019-03-05 03:44:58,614] ERROR [ReplicaFetcher replicaId=1, leaderId=0, 
> fetcherId=0] Error for partition topic-0 at offset 0 
> (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> [2019-03-05 03:44:58,615] ERROR [ReplicaFetcher replicaId=1, leaderId=0, 
> fetcherId=0] Error for partition topic-10 at offset 0 
> (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> [2019-03-05 03:44:58,615] ERROR [ReplicaFetcher replicaId=1, leaderId=0, 
> fetcherId=0] Error for partition topic-4 at offset 0 
> (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> [2019-03-05 03:44:58,615] ERROR [ReplicaFetcher replicaId=1, leaderId=0, 
> fetcherId=0] Error for partition topic-8 at offset 0 
> (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> [2019-03-05 03:44:58,615] ERROR [ReplicaFetcher replicaId=1, leaderId=0, 
> fetcherId=0] Error for partition topic-2 at offset 0 
> (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> [2019-03-05 03:45:00,248] ERROR Error while rolling log segment for topic-0 
> in dir 
> /home/jenkins/jenkins-slave/workspace/kafka-2.0-jdk8/core/data/kafka-3869208920357262216
>  (kafka.server.LogDirFailureChannel:76)
> java.io.FileNotFoundException: 
> /home/jenkins/jenkins-slave/workspace/kafka-2.0-jdk8/core/data/kafka-3869208920357262216/topic-0/.index
>  (Not a directory)
> at java.io.RandomAccessFile.open0(Native Method)
> at java.io.RandomAccessFile.open(RandomAccessFile.java:316)
> at java.io.RandomAccessFile.<init>(RandomAccessFile.java:243)
> at kafka.log.AbstractIndex.$anonfun$resize$1(AbstractIndex.scala:121)
> at scala.runtime.java8.JFunction0$mcZ$sp.apply(JFunction0$mcZ$sp.java:12)
> at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:251)
> at kafka.log.AbstractIndex.resize(AbstractIndex.scala:115)
> at kafka.log.AbstractIndex.$anonfun$trimToValidSize$1(AbstractIndex.scala:184)
> at scala.runtime.java8.JFunction0$mcZ$sp.apply(JFunction0$mcZ$sp.java:12)
> at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:251)
> at kafka.log.AbstractIndex.trimToValidSize(AbstractIndex.scala:184)
> at kafka.log.LogSegment.onBecomeInactiveSegment(LogSegment.scala:501)
> at kafka.log.Log.$anonfun$roll$8(Log.scala:1520)
> at kafka.log.Log.$anonfun$roll$8$adapted(Log.scala:1520)
> at scala.Option.foreach(Option.scala:257)
> at kafka.log.Log.$anonfun$roll$2(Log.scala:1520)
> at kafka.log.Log.maybeHandleIOException(Log.scala:1881)
> at kafka.log.Log.roll(Log.scala:1484)
> at 
> kafka.server.LogDirFailureTest.testProduceAfterLogDirFailureOnLeader(LogDirFailureTest.scala:154)
> at 
> kafka.server.LogDirFailureTest.testIOExceptionDuringLogRoll(LogDirFailureTest.scala:63)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.j

[jira] [Updated] (KAFKA-8041) Flaky Test LogDirFailureTest#testIOExceptionDuringLogRoll

2019-03-07 Thread Matthias J. Sax (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8041?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-8041:
---
Fix Version/s: 2.3.0

> Flaky Test LogDirFailureTest#testIOExceptionDuringLogRoll
> -
>
> Key: KAFKA-8041
> URL: https://issues.apache.org/jira/browse/KAFKA-8041
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.0.1, 2.3.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.0.2, 2.3.0
>
>
> [https://builds.apache.org/blue/organizations/jenkins/kafka-2.0-jdk8/detail/kafka-2.0-jdk8/236/tests]
> {quote}java.lang.AssertionError: Expected some messages
> at kafka.utils.TestUtils$.fail(TestUtils.scala:357)
> at kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:787)
> at 
> kafka.server.LogDirFailureTest.testProduceAfterLogDirFailureOnLeader(LogDirFailureTest.scala:189)
> at 
> kafka.server.LogDirFailureTest.testIOExceptionDuringLogRoll(LogDirFailureTest.scala:63){quote}
> STDOUT
> {quote}[2019-03-05 03:44:58,614] ERROR [ReplicaFetcher replicaId=1, 
> leaderId=0, fetcherId=0] Error for partition topic-6 at offset 0 
> (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> [2019-03-05 03:44:58,614] ERROR [ReplicaFetcher replicaId=1, leaderId=0, 
> fetcherId=0] Error for partition topic-0 at offset 0 
> (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> [2019-03-05 03:44:58,615] ERROR [ReplicaFetcher replicaId=1, leaderId=0, 
> fetcherId=0] Error for partition topic-10 at offset 0 
> (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> [2019-03-05 03:44:58,615] ERROR [ReplicaFetcher replicaId=1, leaderId=0, 
> fetcherId=0] Error for partition topic-4 at offset 0 
> (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> [2019-03-05 03:44:58,615] ERROR [ReplicaFetcher replicaId=1, leaderId=0, 
> fetcherId=0] Error for partition topic-8 at offset 0 
> (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> [2019-03-05 03:44:58,615] ERROR [ReplicaFetcher replicaId=1, leaderId=0, 
> fetcherId=0] Error for partition topic-2 at offset 0 
> (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> [2019-03-05 03:45:00,248] ERROR Error while rolling log segment for topic-0 
> in dir 
> /home/jenkins/jenkins-slave/workspace/kafka-2.0-jdk8/core/data/kafka-3869208920357262216
>  (kafka.server.LogDirFailureChannel:76)
> java.io.FileNotFoundException: 
> /home/jenkins/jenkins-slave/workspace/kafka-2.0-jdk8/core/data/kafka-3869208920357262216/topic-0/.index
>  (Not a directory)
> at java.io.RandomAccessFile.open0(Native Method)
> at java.io.RandomAccessFile.open(RandomAccessFile.java:316)
> at java.io.RandomAccessFile.<init>(RandomAccessFile.java:243)
> at kafka.log.AbstractIndex.$anonfun$resize$1(AbstractIndex.scala:121)
> at scala.runtime.java8.JFunction0$mcZ$sp.apply(JFunction0$mcZ$sp.java:12)
> at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:251)
> at kafka.log.AbstractIndex.resize(AbstractIndex.scala:115)
> at kafka.log.AbstractIndex.$anonfun$trimToValidSize$1(AbstractIndex.scala:184)
> at scala.runtime.java8.JFunction0$mcZ$sp.apply(JFunction0$mcZ$sp.java:12)
> at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:251)
> at kafka.log.AbstractIndex.trimToValidSize(AbstractIndex.scala:184)
> at kafka.log.LogSegment.onBecomeInactiveSegment(LogSegment.scala:501)
> at kafka.log.Log.$anonfun$roll$8(Log.scala:1520)
> at kafka.log.Log.$anonfun$roll$8$adapted(Log.scala:1520)
> at scala.Option.foreach(Option.scala:257)
> at kafka.log.Log.$anonfun$roll$2(Log.scala:1520)
> at kafka.log.Log.maybeHandleIOException(Log.scala:1881)
> at kafka.log.Log.roll(Log.scala:1484)
> at 
> kafka.server.LogDirFailureTest.testProduceAfterLogDirFailureOnLeader(LogDirFailureTest.scala:154)
> at 
> kafka.server.LogDirFailureTest.testIOExceptionDuringLogRoll(LogDirFailureTest.scala:63)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.junit.

[jira] [Commented] (KAFKA-3729) Auto-configure non-default SerDes passed alongside the topology builder

2019-03-07 Thread Ted Yu (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-3729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16787272#comment-16787272
 ] 

Ted Yu commented on KAFKA-3729:
---

The following seems to result in a compilation error:
{code}
/Users/yute/kafka/streams/src/main/java/org/apache/kafka/streams/KafkaStreams.java:667:
 warning: [unchecked] unchecked call to configure(Map<String,?>,boolean) as a 
member of the raw type Serializer
sn.getKeySer().configure(config.originals(), true);
^
/Users/yute/kafka/streams/src/main/java/org/apache/kafka/streams/KafkaStreams.java:668:
 warning: [unchecked] unchecked call to configure(Map<String,?>,boolean) as a 
member of the raw type Serializer
sn.getValueSer().configure(config.originals(), false);
  ^
{code}
I wonder how the warnings can be suppressed.
I checked existing calls to configure(), which don't give me much of a clue.

>  Auto-configure non-default SerDes passed alongside the topology builder
> 
>
> Key: KAFKA-3729
> URL: https://issues.apache.org/jira/browse/KAFKA-3729
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Fred Patton
>Priority: Major
>  Labels: api, newbie
> Attachments: 3729.txt
>
>
> From Guozhang Wang:
> "Only default serdes provided through configs are auto-configured today. But 
> we could auto-configure other serdes passed alongside the topology builder as 
> well."



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-8061) Handle concurrent ProducerId reset and call to Sender thread shutdown

2019-03-07 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16787274#comment-16787274
 ] 

ASF GitHub Bot commented on KAFKA-8061:
---

hachikuji commented on pull request #6388: KAFKA-8061: Handle concurrent 
ProducerId reset and call to Sender thread shutdown
URL: https://github.com/apache/kafka/pull/6388
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Handle concurrent ProducerId reset and call to Sender thread shutdown
> -
>
> Key: KAFKA-8061
> URL: https://issues.apache.org/jira/browse/KAFKA-8061
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 2.1.1
>Reporter: Manikumar
>Assignee: Manikumar
>Priority: Major
>
> In KAFKA-5503, we added a check 
> (https://github.com/apache/kafka/pull/5881) for the `running` flag in the loop 
> inside maybeWaitForProducerId. This handles a concurrent call to Sender 
> close() while we attempt to get the ProducerId,
> and avoids blocking indefinitely when the producer is shutting down.
> This created a corner case where the Sender thread gets blocked if there is a 
> concurrent producerId reset and a call to Sender thread close. The proposed fix 
> is to check the forceClose flag in the loop inside maybeWaitForProducerId.
>  
>  
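
For illustration, a minimal sketch of the loop shape being discussed (not the actual Sender code; the flags and the wait are simplified stand-ins):
{code}
public class ProducerIdWaitLoopSketch {
    private volatile boolean running = true;      // cleared on normal close()
    private volatile boolean forceClose = false;  // set on forced shutdown
    private volatile boolean hasProducerId = false;

    void maybeWaitForProducerId() throws InterruptedException {
        while (!hasProducerId) {
            // Checking only `running` misses the forced-close path; the proposed
            // fix is to also return when forceClose has been requested.
            if (!running || forceClose) {
                return;
            }
            Thread.sleep(100);  // stand-in for "send InitProducerIdRequest and wait"
        }
    }
}
{code}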



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-8041) Flaky Test LogDirFailureTest#testIOExceptionDuringLogRoll

2019-03-07 Thread Matthias J. Sax (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8041?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-8041:
---
Affects Version/s: 2.3.0

> Flaky Test LogDirFailureTest#testIOExceptionDuringLogRoll
> -
>
> Key: KAFKA-8041
> URL: https://issues.apache.org/jira/browse/KAFKA-8041
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.0.1, 2.3.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.0.2
>
>
> [https://builds.apache.org/blue/organizations/jenkins/kafka-2.0-jdk8/detail/kafka-2.0-jdk8/236/tests]
> {quote}java.lang.AssertionError: Expected some messages
> at kafka.utils.TestUtils$.fail(TestUtils.scala:357)
> at kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:787)
> at 
> kafka.server.LogDirFailureTest.testProduceAfterLogDirFailureOnLeader(LogDirFailureTest.scala:189)
> at 
> kafka.server.LogDirFailureTest.testIOExceptionDuringLogRoll(LogDirFailureTest.scala:63){quote}
> STDOUT
> {quote}[2019-03-05 03:44:58,614] ERROR [ReplicaFetcher replicaId=1, 
> leaderId=0, fetcherId=0] Error for partition topic-6 at offset 0 
> (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> [2019-03-05 03:44:58,614] ERROR [ReplicaFetcher replicaId=1, leaderId=0, 
> fetcherId=0] Error for partition topic-0 at offset 0 
> (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> [2019-03-05 03:44:58,615] ERROR [ReplicaFetcher replicaId=1, leaderId=0, 
> fetcherId=0] Error for partition topic-10 at offset 0 
> (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> [2019-03-05 03:44:58,615] ERROR [ReplicaFetcher replicaId=1, leaderId=0, 
> fetcherId=0] Error for partition topic-4 at offset 0 
> (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> [2019-03-05 03:44:58,615] ERROR [ReplicaFetcher replicaId=1, leaderId=0, 
> fetcherId=0] Error for partition topic-8 at offset 0 
> (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> [2019-03-05 03:44:58,615] ERROR [ReplicaFetcher replicaId=1, leaderId=0, 
> fetcherId=0] Error for partition topic-2 at offset 0 
> (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> [2019-03-05 03:45:00,248] ERROR Error while rolling log segment for topic-0 
> in dir 
> /home/jenkins/jenkins-slave/workspace/kafka-2.0-jdk8/core/data/kafka-3869208920357262216
>  (kafka.server.LogDirFailureChannel:76)
> java.io.FileNotFoundException: 
> /home/jenkins/jenkins-slave/workspace/kafka-2.0-jdk8/core/data/kafka-3869208920357262216/topic-0/.index
>  (Not a directory)
> at java.io.RandomAccessFile.open0(Native Method)
> at java.io.RandomAccessFile.open(RandomAccessFile.java:316)
> at java.io.RandomAccessFile.<init>(RandomAccessFile.java:243)
> at kafka.log.AbstractIndex.$anonfun$resize$1(AbstractIndex.scala:121)
> at scala.runtime.java8.JFunction0$mcZ$sp.apply(JFunction0$mcZ$sp.java:12)
> at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:251)
> at kafka.log.AbstractIndex.resize(AbstractIndex.scala:115)
> at kafka.log.AbstractIndex.$anonfun$trimToValidSize$1(AbstractIndex.scala:184)
> at scala.runtime.java8.JFunction0$mcZ$sp.apply(JFunction0$mcZ$sp.java:12)
> at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:251)
> at kafka.log.AbstractIndex.trimToValidSize(AbstractIndex.scala:184)
> at kafka.log.LogSegment.onBecomeInactiveSegment(LogSegment.scala:501)
> at kafka.log.Log.$anonfun$roll$8(Log.scala:1520)
> at kafka.log.Log.$anonfun$roll$8$adapted(Log.scala:1520)
> at scala.Option.foreach(Option.scala:257)
> at kafka.log.Log.$anonfun$roll$2(Log.scala:1520)
> at kafka.log.Log.maybeHandleIOException(Log.scala:1881)
> at kafka.log.Log.roll(Log.scala:1484)
> at 
> kafka.server.LogDirFailureTest.testProduceAfterLogDirFailureOnLeader(LogDirFailureTest.scala:154)
> at 
> kafka.server.LogDirFailureTest.testIOExceptionDuringLogRoll(LogDirFailureTest.scala:63)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.junit.run

[jira] [Commented] (KAFKA-3729) Auto-configure non-default SerDes passed alongside the topology builder

2019-03-07 Thread Matthias J. Sax (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-3729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16787275#comment-16787275
 ] 

Matthias J. Sax commented on KAFKA-3729:


`configure()` expects `Map<String, ?> configs`, but the serializer reference is a raw 
type, so the generic types cannot be checked and you get an "unchecked" warning (and 
warnings fail the build). You would need to add `@SuppressWarnings("unchecked")` to 
the method to suppress the warning and make the build pass.
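
For illustration, a hypothetical sketch (not the actual KafkaStreams code; the class and method names are made up) of suppressing the warning at the method level:
{code}
import java.util.Map;

import org.apache.kafka.common.serialization.Serializer;

public class RawSerializerConfigureExample {
    // Calling configure() on a raw Serializer is an unchecked call, so the
    // suppression goes on the enclosing method.
    @SuppressWarnings("unchecked")
    static void configureKeySerializer(final Serializer rawSerializer,
                                       final Map<String, Object> configs) {
        rawSerializer.configure(configs, true);  // isKey = true
    }
}
{code}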

>  Auto-configure non-default SerDes passed alongside the topology builder
> 
>
> Key: KAFKA-3729
> URL: https://issues.apache.org/jira/browse/KAFKA-3729
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Fred Patton
>Priority: Major
>  Labels: api, newbie
> Attachments: 3729.txt
>
>
> From Guozhang Wang:
> "Only default serdes provided through configs are auto-configured today. But 
> we could auto-configure other serdes passed alongside the topology builder as 
> well."



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-7965) Flaky Test ConsumerBounceTest#testRollingBrokerRestartsWithSmallerMaxGroupSizeConfigDisruptsBigGroup

2019-03-07 Thread John Roesler (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16787286#comment-16787286
 ] 

John Roesler commented on KAFKA-7965:
-

Saw this on [https://github.com/apache/kafka/pull/6382]

[https://builds.apache.org/job/kafka-pr-jdk8-scala2.11/20100/testReport/junit/kafka.api/ConsumerBounceTest/testRollingBrokerRestartsWithSmallerMaxGroupSizeConfigDisruptsBigGroup/]

 
{noformat}
Error Message
java.lang.AssertionError: Received 0, expected at least 68

Stacktrace
java.lang.AssertionError: Received 0, expected at least 68
at org.junit.Assert.fail(Assert.java:89)
at org.junit.Assert.assertTrue(Assert.java:42)
at 
kafka.api.ConsumerBounceTest.kafka$api$ConsumerBounceTest$$receiveAndCommit(ConsumerBounceTest.scala:562)
at 
kafka.api.ConsumerBounceTest$$anonfun$testRollingBrokerRestartsWithSmallerMaxGroupSizeConfigDisruptsBigGroup$2.apply(ConsumerBounceTest.scala:325)
at 
kafka.api.ConsumerBounceTest$$anonfun$testRollingBrokerRestartsWithSmallerMaxGroupSizeConfigDisruptsBigGroup$2.apply(ConsumerBounceTest.scala:324)
at 
scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at 
kafka.api.ConsumerBounceTest.testRollingBrokerRestartsWithSmallerMaxGroupSizeConfigDisruptsBigGroup(ConsumerBounceTest.scala:324)


{noformat}

> Flaky Test 
> ConsumerBounceTest#testRollingBrokerRestartsWithSmallerMaxGroupSizeConfigDisruptsBigGroup
> 
>
> Key: KAFKA-7965
> URL: https://issues.apache.org/jira/browse/KAFKA-7965
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, consumer, unit tests
>Affects Versions: 2.2.0, 2.3.0
>Reporter: Matthias J. Sax
>Assignee: Stanislav Kozlovski
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0, 2.2.1
>
>
> To get stable nightly builds for `2.2` release, I create tickets for all 
> observed test failures.
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/21/]
> {quote}java.lang.AssertionError: Received 0, expected at least 68
> at org.junit.Assert.fail(Assert.java:88)
> at org.junit.Assert.assertTrue(Assert.java:41)
> at kafka.api.ConsumerBounceTest.receiveAndCommit(ConsumerBounceTest.scala:557)
> at kafka.api.ConsumerBounceTest.$anonfun$testRollingBrokerRestartsWithSmallerMaxGroupSizeConfigDisruptsBigGroup$1(ConsumerBounceTest.scala:320)
> at kafka.api.ConsumerBounceTest.$anonfun$testRollingBrokerRestartsWithSmallerMaxGroupSizeConfigDisruptsBigGroup$1$adapted(ConsumerBounceTest.scala:319)
> at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
> at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
> at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
> at kafka.api.ConsumerBounceTest.testRollingBrokerRestartsWithSmallerMaxGroupSizeConfigDisruptsBigGroup(ConsumerBounceTest.scala:319){quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-8040) Streams needs to retry initTransactions

2019-03-07 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8040?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16787290#comment-16787290
 ] 

ASF GitHub Bot commented on KAFKA-8040:
---

vvcephei commented on pull request #6396: KAFKA-8040: Streams handle 
initTransactions timeout
URL: https://github.com/apache/kafka/pull/6396
 
 
   As of 2.0, Producer.initTransactions may throw a TimeoutException. Streams 
should log an explanatory error and throw a StreamsException to signal an 
intentional graceful shutdown in the face of an error.
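   A rough sketch of that handling (illustrative only, not the actual Streams code; 
the error message wording is made up):
{code}
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.common.errors.TimeoutException;
import org.apache.kafka.streams.errors.StreamsException;

public class InitTransactionsExample {
    static void initTransactions(final Producer<byte[], byte[]> producer) {
        try {
            // May block up to max.block.ms and then throw TimeoutException.
            producer.initTransactions();
        } catch (final TimeoutException e) {
            // Log an explanatory error (omitted here) and rethrow as a
            // StreamsException so the thread shuts down gracefully.
            throw new StreamsException("Timed out while initializing transactions; " +
                    "the brokers may be unavailable", e);
        }
    }
}
{code}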
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Streams needs to retry initTransactions
> ---
>
> Key: KAFKA-8040
> URL: https://issues.apache.org/jira/browse/KAFKA-8040
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Affects Versions: 2.0.0, 2.0.1, 2.1.0, 2.2.0, 2.1.1
>Reporter: John Roesler
>Assignee: John Roesler
>Priority: Critical
> Fix For: 2.3.0
>
>
> Following on KAFKA-6446, Streams needs to handle the new behavior.
> `initTxn` can throw TimeoutException now: default `MAX_BLOCK_MS_CONFIG` in 
> producer is 60 seconds, so I ([~guozhang]) think just wrapping it as 
> StreamsException should be reasonable, similar to what we do for 
> `producer#send`'s TimeoutException 
> ([https://github.com/apache/kafka/blob/trunk/streams/src/main/java/org/apache/kafka/streams/processor/internals/RecordCollectorImpl.java#L220-L225]
>  ).
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-8040) Streams needs to retry initTransactions

2019-03-07 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8040?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16787292#comment-16787292
 ] 

ASF GitHub Bot commented on KAFKA-8040:
---

vvcephei commented on pull request #6396: KAFKA-8040: Streams handle 
initTransactions timeout
URL: https://github.com/apache/kafka/pull/6396
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Streams needs to retry initTransactions
> ---
>
> Key: KAFKA-8040
> URL: https://issues.apache.org/jira/browse/KAFKA-8040
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Affects Versions: 2.0.0, 2.0.1, 2.1.0, 2.2.0, 2.1.1
>Reporter: John Roesler
>Assignee: John Roesler
>Priority: Critical
> Fix For: 2.3.0
>
>
> Following on KAFKA-6446, Streams needs to handle the new behavior.
> `initTxn` can throw TimeoutException now: default `MAX_BLOCK_MS_CONFIG` in 
> producer is 60 seconds, so I ([~guozhang]) think just wrapping it as 
> StreamsException should be reasonable, similar to what we do for 
> `producer#send`'s TimeoutException 
> ([https://github.com/apache/kafka/blob/trunk/streams/src/main/java/org/apache/kafka/streams/processor/internals/RecordCollectorImpl.java#L220-L225]
>  ).
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-3729) Auto-configure non-default SerDes passed alongside the topology builder

2019-03-07 Thread Guozhang Wang (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-3729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16787294#comment-16787294
 ] 

Guozhang Wang commented on KAFKA-3729:
--

I read the diff file as well and it looks promising. Btw, I think you do not need to 
send a diff before opening a PR; a PR makes the review easier and we can always 
iterate on it even if it is not headed in the right direction. :)

>  Auto-configure non-default SerDes passed alongside the topology builder
> 
>
> Key: KAFKA-3729
> URL: https://issues.apache.org/jira/browse/KAFKA-3729
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Fred Patton
>Priority: Major
>  Labels: api, newbie
> Attachments: 3729.txt
>
>
> From Guozhang Wang:
> "Only default serdes provided through configs are auto-configured today. But 
> we could auto-configure other serdes passed alongside the topology builder as 
> well."



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-8037) KTable restore may load bad data

2019-03-07 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8037?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16787301#comment-16787301
 ] 

ASF GitHub Bot commented on KAFKA-8037:
---

pkleindl commented on pull request #6394: KAFKA-8037: Check deserialization at 
global state store restoration
URL: https://github.com/apache/kafka/pull/6394
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> KTable restore may load bad data
> 
>
> Key: KAFKA-8037
> URL: https://issues.apache.org/jira/browse/KAFKA-8037
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Matthias J. Sax
>Assignee: Patrik Kleindl
>Priority: Minor
>  Labels: pull-request-available
>
> If an input topic contains bad data, users can specify a 
> `deserialization.exception.handler` to drop corrupted records on read. 
> However, this mechanism may be by-passed on restore. Assume a 
> `builder.table()` call reads and drops a corrupted record. If the table state 
> is lost and restored from the changelog topic, the corrupted record may be 
> copied into the store, because on restore plain bytes are copied.
> If the KTable is used in a join, an internal `store.get()` call to lookup the 
> record would fail with a deserialization exception if the value part cannot 
> be deserialized.
> GlobalKTables are affected, too (cf. KAFKA-7663 that may allow a fix for 
> GlobalKTable case). It's unclear to me atm, how this issue could be addressed 
> for KTables though.
> Note, that user state stores are not affected, because they always have a 
> dedicated changelog topic (and don't reuse an input topic) and thus the 
> corrupted record would not be written into the changelog.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-8037) KTable restore may load bad data

2019-03-07 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8037?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16787302#comment-16787302
 ] 

ASF GitHub Bot commented on KAFKA-8037:
---

pkleindl commented on pull request #6394: KAFKA-8037: Check deserialization at 
global state store restoration
URL: https://github.com/apache/kafka/pull/6394
 
 
   This change aims to prevent records that cannot be deserialized from going 
into a global state store during restore. Currently such records are filtered 
during normal operations but will be processed during restore and will cause an 
exception when trying to access the value in the store.
   The change copies the logic from the GlobalStateUpdateTask and builds a list 
of deserializers to use during restoration.
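   For illustration, a simplified sketch of that restore-time check (not the actual 
PR code; it only shows the try-to-deserialize-and-skip idea):
{code}
import org.apache.kafka.common.serialization.Deserializer;

public class RestoreCheckSketch {
    static <K, V> boolean isRestorable(final String topic,
                                       final byte[] keyBytes,
                                       final byte[] valueBytes,
                                       final Deserializer<K> keyDeserializer,
                                       final Deserializer<V> valueDeserializer) {
        try {
            keyDeserializer.deserialize(topic, keyBytes);
            valueDeserializer.deserialize(topic, valueBytes);
            return true;    // safe to copy into the global store
        } catch (final RuntimeException e) {
            return false;   // log-and-continue style: skip the corrupted record
        }
    }
}
{code}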
   
   The GlobalStateManagerImplTest was extended to cover the case that a 
StreamsException is expected when a record is processed that can't be 
deserialized with the default LogAndFailExceptionHandler.
   GlobalStateManagerImplLogAndContinueTest was added and contains one new test 
which uses the LogAndContinueExceptionHandler and verifies that a record can be 
ignored during restoration.
   Copying all tests is not ideal, but I found no easy way to override the 
DefaultExceptionHandler just for the one case.
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> KTable restore may load bad data
> 
>
> Key: KAFKA-8037
> URL: https://issues.apache.org/jira/browse/KAFKA-8037
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Matthias J. Sax
>Assignee: Patrik Kleindl
>Priority: Minor
>  Labels: pull-request-available
>
> If an input topic contains bad data, users can specify a 
> `deserialization.exception.handler` to drop corrupted records on read. 
> However, this mechanism may be by-passed on restore. Assume a 
> `builder.table()` call reads and drops a corrupted record. If the table state 
> is lost and restored from the changelog topic, the corrupted record may be 
> copied into the store, because on restore plain bytes are copied.
> If the KTable is used in a join, an internal `store.get()` call to lookup the 
> record would fail with a deserialization exception if the value part cannot 
> be deserialized.
> GlobalKTables are affected, too (cf. KAFKA-7663 that may allow a fix for 
> GlobalKTable case). It's unclear to me atm, how this issue could be addressed 
> for KTables though.
> Note, that user state stores are not affected, because they always have a 
> dedicated changelog topic (and don't reuse an input topic) and thus the 
> corrupted record would not be written into the changelog.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (KAFKA-3729) Auto-configure non-default SerDes passed alongside the topology builder

2019-03-07 Thread Matthias J. Sax (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-3729?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax reassigned KAFKA-3729:
--

Assignee: Ted Yu

>  Auto-configure non-default SerDes passed alongside the topology builder
> 
>
> Key: KAFKA-3729
> URL: https://issues.apache.org/jira/browse/KAFKA-3729
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Fred Patton
>Assignee: Ted Yu
>Priority: Major
>  Labels: api, newbie
> Attachments: 3729.txt
>
>
> From Guozhang Wang:
> "Only default serdes provided through configs are auto-configured today. But 
> we could auto-configure other serdes passed alongside the topology builder as 
> well."



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-8037) KTable restore may load bad data

2019-03-07 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8037?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16787305#comment-16787305
 ] 

ASF GitHub Bot commented on KAFKA-8037:
---

pkleindl commented on pull request #6394: KAFKA-8037: Check deserialization at 
global state store restoration
URL: https://github.com/apache/kafka/pull/6394
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> KTable restore may load bad data
> 
>
> Key: KAFKA-8037
> URL: https://issues.apache.org/jira/browse/KAFKA-8037
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Matthias J. Sax
>Assignee: Patrik Kleindl
>Priority: Minor
>  Labels: pull-request-available
>
> If an input topic contains bad data, users can specify a 
> `deserialization.exception.handler` to drop corrupted records on read. 
> However, this mechanism may be by-passed on restore. Assume a 
> `builder.table()` call reads and drops a corrupted record. If the table state 
> is lost and restored from the changelog topic, the corrupted record may be 
> copied into the store, because on restore plain bytes are copied.
> If the KTable is used in a join, an internal `store.get()` call to lookup the 
> record would fail with a deserialization exception if the value part cannot 
> be deserialized.
> GlobalKTables are affected, too (cf. KAFKA-7663 that may allow a fix for 
> GlobalKTable case). It's unclear to me atm, how this issue could be addressed 
> for KTables though.
> Note, that user state stores are not affected, because they always have a 
> dedicated changelog topic (and don't reuse an input topic) and thus the 
> corrupted record would not be written into the changelog.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-8067) JsonConverter missing and optional field defaults result in a null pointer

2019-03-07 Thread Eric Pheatt (JIRA)
Eric Pheatt created KAFKA-8067:
--

 Summary: JsonConverter missing and optional field defaults result 
in a null pointer
 Key: KAFKA-8067
 URL: https://issues.apache.org/jira/browse/KAFKA-8067
 Project: Kafka
  Issue Type: Improvement
  Components: KafkaConnect
Reporter: Eric Pheatt


The internal Kafka Connect JsonSchema allows for specifying an optional field, but 
the JsonConverter throws a null pointer error when trying to apply the default for a 
missing optional field in the payload. 
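
For reference, a small sketch of the kind of schema that exercises this path (illustrative only; the field names are made up):
{code}
import org.apache.kafka.connect.data.Schema;
import org.apache.kafka.connect.data.SchemaBuilder;

public class OptionalDefaultSchemaExample {
    static Schema schemaWithOptionalDefault() {
        // "middleName" is optional and has a default; per this ticket, converting
        // a payload that omits the field can NPE instead of applying the default.
        return SchemaBuilder.struct().name("person")
                .field("firstName", Schema.STRING_SCHEMA)
                .field("middleName",
                        SchemaBuilder.string().optional().defaultValue("").build())
                .build();
    }
}
{code}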



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-8037) KTable restore may load bad data

2019-03-07 Thread Patrik Kleindl (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8037?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16787319#comment-16787319
 ] 

Patrik Kleindl commented on KAFKA-8037:
---

Sorry for the mess, I thought I had to re-create the PR when I fixed something.

[https://github.com/apache/kafka/pull/6398] has the current changes.

A performance test with 1 million good records shows no difference; with 1 million 
bad records I see a 50% increase in time, but this might have been due to the 
excessive logging.

> KTable restore may load bad data
> 
>
> Key: KAFKA-8037
> URL: https://issues.apache.org/jira/browse/KAFKA-8037
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Matthias J. Sax
>Assignee: Patrik Kleindl
>Priority: Minor
>  Labels: pull-request-available
>
> If an input topic contains bad data, users can specify a 
> `deserialization.exception.handler` to drop corrupted records on read. 
> However, this mechanism may be by-passed on restore. Assume a 
> `builder.table()` call reads and drops a corrupted record. If the table state 
> is lost and restored from the changelog topic, the corrupted record may be 
> copied into the store, because on restore plain bytes are copied.
> If the KTable is used in a join, an internal `store.get()` call to lookup the 
> record would fail with a deserialization exception if the value part cannot 
> be deserialized.
> GlobalKTables are affected, too (cf. KAFKA-7663 that may allow a fix for 
> GlobalKTable case). It's unclear to me atm, how this issue could be addressed 
> for KTables though.
> Note, that user state stores are not affected, because they always have a 
> dedicated changelog topic (and don't reuse an input topic) and thus the 
> corrupted record would not be written into the changelog.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-3729) Auto-configure non-default SerDes passed alongside the topology builder

2019-03-07 Thread Ted Yu (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-3729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16787312#comment-16787312
 ] 

Ted Yu commented on KAFKA-3729:
---

https://github.com/apache/kafka/pull/6399

>  Auto-configure non-default SerDes passed alongside the topology builder
> 
>
> Key: KAFKA-3729
> URL: https://issues.apache.org/jira/browse/KAFKA-3729
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Fred Patton
>Priority: Major
>  Labels: api, newbie
> Attachments: 3729.txt
>
>
> From Guozhang Wang:
> "Only default serdes provided through configs are auto-configured today. But 
> we could auto-configure other serdes passed alongside the topology builder as 
> well."



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-8061) Handle concurrent ProducerId reset and call to Sender thread shutdown

2019-03-07 Thread Jason Gustafson (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8061?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson updated KAFKA-8061:
---
Affects Version/s: 2.0.1

> Handle concurrent ProducerId reset and call to Sender thread shutdown
> -
>
> Key: KAFKA-8061
> URL: https://issues.apache.org/jira/browse/KAFKA-8061
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 2.0.1, 2.1.1
>Reporter: Manikumar
>Assignee: Manikumar
>Priority: Major
> Fix For: 2.0.2, 2.1.2, 2.2.1
>
>
> In KAFKA-5503, we added a check 
> (https://github.com/apache/kafka/pull/5881) for the `running` flag in the loop 
> inside maybeWaitForProducerId. This handles a concurrent call to Sender 
> close() while we attempt to get the ProducerId,
> and avoids blocking indefinitely when the producer is shutting down.
> This created a corner case where the Sender thread gets blocked if there is a 
> concurrent producerId reset and a call to Sender thread close. The proposed fix 
> is to check the forceClose flag in the loop inside maybeWaitForProducerId.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-8061) Handle concurrent ProducerId reset and call to Sender thread shutdown

2019-03-07 Thread Jason Gustafson (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8061?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson resolved KAFKA-8061.

   Resolution: Fixed
Fix Version/s: 2.2.1
   2.1.2
   2.0.2

> Handle concurrent ProducerId reset and call to Sender thread shutdown
> -
>
> Key: KAFKA-8061
> URL: https://issues.apache.org/jira/browse/KAFKA-8061
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 2.1.1
>Reporter: Manikumar
>Assignee: Manikumar
>Priority: Major
> Fix For: 2.0.2, 2.1.2, 2.2.1
>
>
> In KAFKA-5503, we added a check 
> (https://github.com/apache/kafka/pull/5881) for the `running` flag in the 
> loop inside maybeWaitForProducerId. This handles a concurrent call to Sender 
> close() while we attempt to get the ProducerId, and avoids blocking 
> indefinitely when the producer is shutting down.
> This created a corner case where the Sender thread gets blocked if a 
> producerId reset and a call to Sender thread close() happen concurrently. The 
> proposed fix is to check the forceClose flag in the loop inside 
> maybeWaitForProducerId.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-8068) Flaky Test DescribeConsumerGroupTest#testDescribeMembersOfExistingGroup

2019-03-07 Thread Matthias J. Sax (JIRA)
Matthias J. Sax created KAFKA-8068:
--

 Summary: Flaky Test 
DescribeConsumerGroupTest#testDescribeMembersOfExistingGroup
 Key: KAFKA-8068
 URL: https://issues.apache.org/jira/browse/KAFKA-8068
 Project: Kafka
  Issue Type: Bug
  Components: admin, unit tests
Affects Versions: 2.2.0
Reporter: Matthias J. Sax
 Fix For: 2.3.0, 2.2.1


[https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/55/testReport/junit/kafka.admin/DescribeConsumerGroupTest/testDescribeMembersOfExistingGroup/]
{quote}java.lang.AssertionError: Partition [__consumer_offsets,0] metadata not 
propagated after 15000 ms at kafka.utils.TestUtils$.fail(TestUtils.scala:381) 
at kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:791) at 
kafka.utils.TestUtils$.waitUntilMetadataIsPropagated(TestUtils.scala:880) at 
kafka.utils.TestUtils$.$anonfun$createTopic$3(TestUtils.scala:318) at 
kafka.utils.TestUtils$.$anonfun$createTopic$3$adapted(TestUtils.scala:317) at 
scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:237) at 
scala.collection.immutable.Range.foreach(Range.scala:158) at 
scala.collection.TraversableLike.map(TraversableLike.scala:237) at 
scala.collection.TraversableLike.map$(TraversableLike.scala:230) at 
scala.collection.AbstractTraversable.map(Traversable.scala:108) at 
kafka.utils.TestUtils$.createTopic(TestUtils.scala:317) at 
kafka.utils.TestUtils$.createOffsetsTopic(TestUtils.scala:375) at 
kafka.admin.DescribeConsumerGroupTest.testDescribeMembersOfExistingGroup(DescribeConsumerGroupTest.scala:154){quote}
 

STDOUT
{quote}[2019-03-07 18:55:40,194] WARN Unable to read additional data from 
client sessionid 0x1006fb9a65f0001, likely client has closed socket 
(org.apache.zookeeper.server.NIOServerCnxn:376) TOPIC PARTITION CURRENT-OFFSET 
LOG-END-OFFSET LAG CONSUMER-ID HOST CLIENT-ID foo 0 0 0 0 - - - TOPIC PARTITION 
CURRENT-OFFSET LOG-END-OFFSET LAG CONSUMER-ID HOST CLIENT-ID foo 0 0 0 0 - - - 
COORDINATOR (ID) ASSIGNMENT-STRATEGY STATE #MEMBERS localhost:35213 (0) Empty 0 
[2019-03-07 18:58:42,206] WARN Unable to read additional data from client 
sessionid 0x1006fbc6962, likely client has closed socket 
(org.apache.zookeeper.server.NIOServerCnxn:376)
{quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-8055) Flaky Test LogCleanerParameterizedIntegrationTest#cleanerTest

2019-03-07 Thread Matthias J. Sax (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16787331#comment-16787331
 ] 

Matthias J. Sax commented on KAFKA-8055:


Failed again: https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/55/

> Flaky Test LogCleanerParameterizedIntegrationTest#cleanerTest
> -
>
> Key: KAFKA-8055
> URL: https://issues.apache.org/jira/browse/KAFKA-8055
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.2.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.2.1
>
>
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/49/]
> {quote}java.lang.AssertionError: log cleaner should have processed up to 
> offset 588, but lastCleaned=295 at org.junit.Assert.fail(Assert.java:88) at 
> org.junit.Assert.assertTrue(Assert.java:41) at 
> kafka.log.LogCleanerParameterizedIntegrationTest.checkLastCleaned(LogCleanerParameterizedIntegrationTest.scala:284)
>  at 
> kafka.log.LogCleanerParameterizedIntegrationTest.cleanerTest(LogCleanerParameterizedIntegrationTest.scala:77){quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-7980) Flaky Test SocketServerTest#testConnectionRateLimit

2019-03-07 Thread Matthias J. Sax (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7980?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16787332#comment-16787332
 ] 

Matthias J. Sax commented on KAFKA-7980:


One more: 
https://builds.apache.org/blue/organizations/jenkins/kafka-2.2-jdk8/detail/kafka-2.2-jdk8/49/

> Flaky Test SocketServerTest#testConnectionRateLimit
> ---
>
> Key: KAFKA-7980
> URL: https://issues.apache.org/jira/browse/KAFKA-7980
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.2.0
>Reporter: Matthias J. Sax
>Assignee: Rajini Sivaram
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0, 2.2.1
>
>
> To get stable nightly builds for `2.2` release, I create tickets for all 
> observed test failures.
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/25/]
> {quote}java.lang.AssertionError: Connections created too quickly: 4 at 
> org.junit.Assert.fail(Assert.java:88) at 
> org.junit.Assert.assertTrue(Assert.java:41) at 
> kafka.network.SocketServerTest.testConnectionRateLimit(SocketServerTest.scala:1122){quote}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-8067) JsonConverter missing and optional field defaults result in a null pointer

2019-03-07 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16787333#comment-16787333
 ] 

ASF GitHub Bot commented on KAFKA-8067:
---

epheatt commented on pull request #6400: KAFKA-8067 - JsonConverter Missing 
Optional field handling
URL: https://github.com/apache/kafka/pull/6400
 
 
   *More detailed description of your change,
   if necessary. The PR title and PR message become
   the squashed commit message, so use a separate
   comment to ping reviewers.*
   
   *Summary of testing strategy (including rationale)
   for the feature or bug fix. Unit and/or integration
   tests are expected for any behaviour change and
   system tests should be considered for larger changes.*
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> JsonConverter missing and optional field defaults result in a null pointer
> --
>
> Key: KAFKA-8067
> URL: https://issues.apache.org/jira/browse/KAFKA-8067
> Project: Kafka
>  Issue Type: Improvement
>  Components: KafkaConnect
>Reporter: Eric Pheatt
>Priority: Minor
>  Labels: Json, converter
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> The internal Kafka Connect JsonSchema allows for specifying an optional field 
> but the JsonConverter throws a null pointer error when trying to apply the 
> default for missing optional field in the payload. 
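For illustration, a minimal sketch (topic and field names are made up) of the kind of 
envelope that exercises this path: the schema declares an optional field with a 
default, while the payload omits it entirely, so the converter has to apply the 
default on toConnectData():

{code:java}
import java.nio.charset.StandardCharsets;
import java.util.Collections;
import org.apache.kafka.connect.json.JsonConverter;

public class OptionalFieldDefaultExample {
    public static void main(String[] args) {
        JsonConverter converter = new JsonConverter();
        converter.configure(Collections.singletonMap("schemas.enable", "true"), false);

        // Schema declares "note" as optional with a default; the payload omits it.
        String json = "{\"schema\":{\"type\":\"struct\",\"name\":\"record\",\"optional\":false,"
            + "\"fields\":["
            + "{\"type\":\"string\",\"optional\":false,\"field\":\"id\"},"
            + "{\"type\":\"string\",\"optional\":true,\"default\":\"n/a\",\"field\":\"note\"}]},"
            + "\"payload\":{\"id\":\"42\"}}";

        // The default for the missing optional "note" field is applied here (or, per
        // this ticket, currently trips a NullPointerException) while converting back.
        System.out.println(converter.toConnectData("test-topic",   // hypothetical topic
                json.getBytes(StandardCharsets.UTF_8)));
    }
}
{code}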



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-3705) Support non-key joining in KTable

2019-03-07 Thread Matthias J. Sax (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-3705?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-3705:
---
Labels: api kip  (was: api)

> Support non-key joining in KTable
> -
>
> Key: KAFKA-3705
> URL: https://issues.apache.org/jira/browse/KAFKA-3705
> Project: Kafka
>  Issue Type: New Feature
>  Components: streams
>Reporter: Guozhang Wang
>Priority: Major
>  Labels: api, kip
>
> Today in Kafka Streams DSL, KTable joins are only based on keys. If users 
> want to join a KTable A by key {{a}} with another KTable B by key {{b}} but 
> with a "foreign key" {{a}}, and assuming they are read from two topics which 
> are partitioned on {{a}} and {{b}} respectively, they need to do the 
> following pattern:
> {code}
> tableB' = tableB.groupBy(/* select on field "a" */).agg(...); // now tableB' 
> is partitioned on "a"
> tableA.join(tableB', joiner);
> {code}
> Even if these two tables are read from two topics which are already 
> partitioned on {{a}}, users still need to do the pre-aggregation in order to 
> bring the two joining streams onto the same key. This is a drawback in 
> programmability and we should fix it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-3705) Support non-key joining in KTable

2019-03-07 Thread Matthias J. Sax (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-3705?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-3705:
---
Description: 
KIP-213: 
[https://cwiki.apache.org/confluence/display/KAFKA/KIP-213+Support+non-key+joining+in+KTable]

Today in Kafka Streams DSL, KTable joins are only based on keys. If users want 
to join a KTable A by key {{a}} with another KTable B by key {{b}} but with a 
"foreign key" {{a}}, and assuming they are read from two topics which are 
partitioned on {{a}} and {{b}} respectively, they need to do the following 
pattern:
{code:java}
tableB' = tableB.groupBy(/* select on field "a" */).agg(...); // now tableB' is 
partitioned on "a"

tableA.join(tableB', joiner);
{code}
Even if these two tables are read from two topics which are already partitioned 
on {{a}}, users still need to do the pre-aggregation in order to bring the two 
joining streams onto the same key. This is a drawback in programmability and we 
should fix it.

  was:
Today in Kafka Streams DSL, KTable joins are only based on keys. If users want 
to join a KTable A by key {{a}} with another KTable B by key {{b}} but with a 
"foreign key" {{a}}, and assuming they are read from two topics which are 
partitioned on {{a}} and {{b}} respectively, they need to do the following 
pattern:

{code}
tableB' = tableB.groupBy(/* select on field "a" */).agg(...); // now tableB' is 
partitioned on "a"

tableA.join(tableB', joiner);
{code}

Even if these two tables are read from two topics which are already partitioned 
on {{a}}, users still need to do the pre-aggregation in order to bring the two 
joining streams onto the same key. This is a drawback in programmability and we 
should fix it.


> Support non-key joining in KTable
> -
>
> Key: KAFKA-3705
> URL: https://issues.apache.org/jira/browse/KAFKA-3705
> Project: Kafka
>  Issue Type: New Feature
>  Components: streams
>Reporter: Guozhang Wang
>Priority: Major
>  Labels: api, kip
>
> KIP-213: 
> [https://cwiki.apache.org/confluence/display/KAFKA/KIP-213+Support+non-key+joining+in+KTable]
> Today in Kafka Streams DSL, KTable joins are only based on keys. If users 
> want to join a KTable A by key {{a}} with another KTable B by key {{b}} but 
> with a "foreign key" {{a}}, and assuming they are read from two topics which 
> are partitioned on {{a}} and {{b}} respectively, they need to do the 
> following pattern:
> {code:java}
> tableB' = tableB.groupBy(/* select on field "a" */).agg(...); // now tableB' 
> is partitioned on "a"
> tableA.join(tableB', joiner);
> {code}
> Even if these two tables are read from two topics which are already 
> partitioned on {{a}}, users still need to do the pre-aggregation in order to 
> bring the two joining streams onto the same key. This is a drawback in 
> programmability and we should fix it.
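Spelled out as compilable code (topic names and value types are made up, and the 
value of tableB is assumed to carry the foreign key directly), the pre-aggregation 
workaround described above looks roughly like this:

{code:java}
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.Grouped;
import org.apache.kafka.streams.kstream.KTable;

public class ForeignKeyJoinWorkaround {
    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();

        // tableA is keyed on "a"; tableB is keyed on "b" and its value holds the foreign key "a".
        KTable<String, String> tableA =
            builder.table("topic-a", Consumed.with(Serdes.String(), Serdes.String()));
        KTable<String, String> tableB =
            builder.table("topic-b", Consumed.with(Serdes.String(), Serdes.String()));

        // Pre-aggregation step: re-key tableB on the foreign key so both tables are
        // co-partitioned on "a" before the key-based join.
        KTable<String, String> tableBByA = tableB
            .groupBy((b, value) -> KeyValue.pair(value, value),
                     Grouped.with(Serdes.String(), Serdes.String()))
            .reduce((agg, v) -> v, (agg, v) -> agg);   // adder / subtractor (simplified)

        tableA.join(tableBByA, (va, vb) -> va + "|" + vb)
              .toStream()
              .to("joined-topic");                     // hypothetical output topic
    }
}
{code}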



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-7831) Consumer SubscriptionState missing synchronization

2019-03-07 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16787360#comment-16787360
 ] 

ASF GitHub Bot commented on KAFKA-7831:
---

hachikuji commented on pull request #6221: KAFKA-7831; Do not modify 
subscription state from background thread
URL: https://github.com/apache/kafka/pull/6221
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Consumer SubscriptionState missing synchronization
> --
>
> Key: KAFKA-7831
> URL: https://issues.apache.org/jira/browse/KAFKA-7831
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
>Priority: Major
>
> ConsumerCoordinator installs a Metadata.Listener in order to update pattern 
> subscriptions after metadata changes. The listener is invoked from 
> NetworkClient.poll, which could happen in the heartbeat thread. Currently, 
> however, there is no synchronization in SubscriptionState to make this safe. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-7831) Consumer SubscriptionState missing synchronization

2019-03-07 Thread Jason Gustafson (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7831?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson resolved KAFKA-7831.

   Resolution: Fixed
Fix Version/s: 2.3.0

> Consumer SubscriptionState missing synchronization
> --
>
> Key: KAFKA-7831
> URL: https://issues.apache.org/jira/browse/KAFKA-7831
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
>Priority: Major
> Fix For: 2.3.0
>
>
> ConsumerCoordinator installs a Metadata.Listener in order to update pattern 
> subscriptions after metadata changes. The listener is invoked from 
> NetworkClient.poll, which could happen in the heartbeat thread. Currently, 
> however, there is no synchronization in SubscriptionState to make this safe. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-8067) JsonConverter missing and optional field defaults result in a null pointer

2019-03-07 Thread Eric Pheatt (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Pheatt updated KAFKA-8067:
---
Description: The internal Kafka Connect JsonSchema allows for specifying an 
optional field but the JsonConverter throws a null pointer error when trying to 
apply the default for missing optional fields in the payload.   (was: The 
internal Kafka Connect JsonSchema allows for specifying an optional field but 
the JsonConverter throws a null pointer error when trying to apply the default 
for missing optional field in the payload. )

> JsonConverter missing and optional field defaults result in a null pointer
> --
>
> Key: KAFKA-8067
> URL: https://issues.apache.org/jira/browse/KAFKA-8067
> Project: Kafka
>  Issue Type: Improvement
>  Components: KafkaConnect
>Reporter: Eric Pheatt
>Priority: Minor
>  Labels: Json, converter
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> The internal Kafka Connect JsonSchema allows for specifying an optional field 
> but the JsonConverter throws a null pointer error when trying to apply the 
> default for missing optional fields in the payload. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-7764) Authentication exceptions during consumer metadata updates may not get propagated

2019-03-07 Thread Jason Gustafson (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7764?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson resolved KAFKA-7764.

Resolution: Fixed

This was fixed by the patch for KAFKA-7831 which removes the metadata listener.

> Authentication exceptions during consumer metadata updates may not get 
> propagated
> -
>
> Key: KAFKA-7764
> URL: https://issues.apache.org/jira/browse/KAFKA-7764
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Reporter: Jason Gustafson
>Assignee: Dhruvil Shah
>Priority: Major
>
> The consumer should propagate authentication errors to the user. We handle 
> the common case in ConsumerNetworkClient when the exception occurs in 
> response to an explicitly provided request. However, we are missing the logic 
> to propagate exceptions during metadata updates, which are handled internally 
> by NetworkClient. This logic exists in 
> ConsumerNetworkClient.awaitMetadataUpdate, but metadata updates can occur 
> outside of this path. Probably we just need to move that logic into 
> ConsumerNetworkClient.poll() so that errors are always checked.
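From the application's point of view, the intent is that a consumer loop like the 
following hedged sketch (broker, group, and topic names are made up) sees the failure 
as an exception from poll() rather than an endless internal retry:

{code:java}
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.errors.AuthenticationException;
import org.apache.kafka.common.serialization.StringDeserializer;

public class AuthErrorHandlingExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9093");  // hypothetical secured listener
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "example-group");            // hypothetical group
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("secure-topic"));     // hypothetical topic
            while (true) {
                // Once propagation is in place, an authentication failure during a metadata
                // update surfaces here as an AuthenticationException.
                consumer.poll(Duration.ofSeconds(1));
            }
        } catch (AuthenticationException e) {
            System.err.println("Authentication failed, giving up: " + e.getMessage());
        }
    }
}
{code}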



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-7921) Instable KafkaStreamsTest

2019-03-07 Thread Matthias J. Sax (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7921?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-7921:
---
Fix Version/s: 2.3.0

> Instable KafkaStreamsTest
> -
>
> Key: KAFKA-7921
> URL: https://issues.apache.org/jira/browse/KAFKA-7921
> Project: Kafka
>  Issue Type: Bug
>  Components: streams, unit tests
>Affects Versions: 2.3.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0
>
>
> {{KafkaStreamsTest}} failed multiple times, eg,
> {quote}java.lang.AssertionError: Condition not met within timeout 15000. 
> Streams never started.
> at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:365)
> at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:325)
> at 
> org.apache.kafka.streams.KafkaStreamsTest.shouldThrowOnCleanupWhileRunning(KafkaStreamsTest.java:556){quote}
> or
> {quote}java.lang.AssertionError: Condition not met within timeout 15000. 
> Streams never started.
> at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:365)
> at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:325)
> at 
> org.apache.kafka.streams.KafkaStreamsTest.testStateThreadClose(KafkaStreamsTest.java:255){quote}
>  
> The preserved logs are as follows:
> {quote}[2019-02-12 07:02:17,198] INFO Kafka version: 2.3.0-SNAPSHOT 
> (org.apache.kafka.common.utils.AppInfoParser:109)
> [2019-02-12 07:02:17,198] INFO Kafka commitId: 08036fa4b1e5b822 
> (org.apache.kafka.common.utils.AppInfoParser:110)
> [2019-02-12 07:02:17,199] INFO stream-client [clientId] State transition from 
> CREATED to REBALANCING (org.apache.kafka.streams.KafkaStreams:263)
> [2019-02-12 07:02:17,200] INFO stream-thread [clientId-StreamThread-238] 
> Starting (org.apache.kafka.streams.processor.internals.StreamThread:767)
> [2019-02-12 07:02:17,200] INFO stream-client [clientId] State transition from 
> REBALANCING to PENDING_SHUTDOWN (org.apache.kafka.streams.KafkaStreams:263)
> [2019-02-12 07:02:17,200] INFO stream-thread [clientId-StreamThread-239] 
> Starting (org.apache.kafka.streams.processor.internals.StreamThread:767)
> [2019-02-12 07:02:17,200] INFO stream-thread [clientId-StreamThread-238] 
> State transition from CREATED to STARTING 
> (org.apache.kafka.streams.processor.internals.StreamThread:214)
> [2019-02-12 07:02:17,200] INFO stream-thread [clientId-StreamThread-239] 
> State transition from CREATED to STARTING 
> (org.apache.kafka.streams.processor.internals.StreamThread:214)
> [2019-02-12 07:02:17,200] INFO stream-thread [clientId-StreamThread-238] 
> Informed to shut down 
> (org.apache.kafka.streams.processor.internals.StreamThread:1192)
> [2019-02-12 07:02:17,201] INFO stream-thread [clientId-StreamThread-238] 
> State transition from STARTING to PENDING_SHUTDOWN 
> (org.apache.kafka.streams.processor.internals.StreamThread:214)
> [2019-02-12 07:02:17,201] INFO stream-thread [clientId-StreamThread-239] 
> Informed to shut down 
> (org.apache.kafka.streams.processor.internals.StreamThread:1192)
> [2019-02-12 07:02:17,201] INFO stream-thread [clientId-StreamThread-239] 
> State transition from STARTING to PENDING_SHUTDOWN 
> (org.apache.kafka.streams.processor.internals.StreamThread:214)
> [2019-02-12 07:02:17,205] INFO Cluster ID: J8uJhiTKQx-Y_i9LzT0iLg 
> (org.apache.kafka.clients.Metadata:365)
> [2019-02-12 07:02:17,205] INFO Cluster ID: J8uJhiTKQx-Y_i9LzT0iLg 
> (org.apache.kafka.clients.Metadata:365)
> [2019-02-12 07:02:17,205] INFO [Consumer 
> clientId=clientId-StreamThread-238-consumer, groupId=appId] Discovered group 
> coordinator localhost:36122 (id: 2147483647 rack: null) 
> (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:675)
> [2019-02-12 07:02:17,205] INFO [Consumer 
> clientId=clientId-StreamThread-239-consumer, groupId=appId] Discovered group 
> coordinator localhost:36122 (id: 2147483647 rack: null) 
> (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:675)
> [2019-02-12 07:02:17,206] INFO [Consumer 
> clientId=clientId-StreamThread-238-consumer, groupId=appId] Revoking 
> previously assigned partitions [] 
> (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:458)
> [2019-02-12 07:02:17,206] INFO [Consumer 
> clientId=clientId-StreamThread-239-consumer, groupId=appId] Revoking 
> previously assigned partitions [] 
> (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:458)
> [2019-02-12 07:02:17,206] INFO [Consumer 
> clientId=clientId-StreamThread-238-consumer, groupId=appId] (Re-)joining 
> group (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:491)
> [2019-02-12 07:02:17,206] INFO [Consumer 
> clientId=clientId-StreamThread-239-consumer, groupId=appId] (Re-)joining 
> group (org.apache.kafka.clients.consumer.internals.AbstractCoordinato

[jira] [Updated] (KAFKA-7921) Instable KafkaStreamsTest

2019-03-07 Thread Matthias J. Sax (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7921?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-7921:
---
Labels: flaky-test  (was: )

> Instable KafkaStreamsTest
> -
>
> Key: KAFKA-7921
> URL: https://issues.apache.org/jira/browse/KAFKA-7921
> Project: Kafka
>  Issue Type: Bug
>  Components: streams, unit tests
>Reporter: Matthias J. Sax
>Priority: Major
>  Labels: flaky-test
>
> {{KafkaStreamsTest}} failed multiple times, eg,
> {quote}java.lang.AssertionError: Condition not met within timeout 15000. 
> Streams never started.
> at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:365)
> at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:325)
> at 
> org.apache.kafka.streams.KafkaStreamsTest.shouldThrowOnCleanupWhileRunning(KafkaStreamsTest.java:556){quote}
> or
> {quote}java.lang.AssertionError: Condition not met within timeout 15000. 
> Streams never started.
> at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:365)
> at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:325)
> at 
> org.apache.kafka.streams.KafkaStreamsTest.testStateThreadClose(KafkaStreamsTest.java:255){quote}
>  
> The preserved logs are as follows:
> {quote}[2019-02-12 07:02:17,198] INFO Kafka version: 2.3.0-SNAPSHOT 
> (org.apache.kafka.common.utils.AppInfoParser:109)
> [2019-02-12 07:02:17,198] INFO Kafka commitId: 08036fa4b1e5b822 
> (org.apache.kafka.common.utils.AppInfoParser:110)
> [2019-02-12 07:02:17,199] INFO stream-client [clientId] State transition from 
> CREATED to REBALANCING (org.apache.kafka.streams.KafkaStreams:263)
> [2019-02-12 07:02:17,200] INFO stream-thread [clientId-StreamThread-238] 
> Starting (org.apache.kafka.streams.processor.internals.StreamThread:767)
> [2019-02-12 07:02:17,200] INFO stream-client [clientId] State transition from 
> REBALANCING to PENDING_SHUTDOWN (org.apache.kafka.streams.KafkaStreams:263)
> [2019-02-12 07:02:17,200] INFO stream-thread [clientId-StreamThread-239] 
> Starting (org.apache.kafka.streams.processor.internals.StreamThread:767)
> [2019-02-12 07:02:17,200] INFO stream-thread [clientId-StreamThread-238] 
> State transition from CREATED to STARTING 
> (org.apache.kafka.streams.processor.internals.StreamThread:214)
> [2019-02-12 07:02:17,200] INFO stream-thread [clientId-StreamThread-239] 
> State transition from CREATED to STARTING 
> (org.apache.kafka.streams.processor.internals.StreamThread:214)
> [2019-02-12 07:02:17,200] INFO stream-thread [clientId-StreamThread-238] 
> Informed to shut down 
> (org.apache.kafka.streams.processor.internals.StreamThread:1192)
> [2019-02-12 07:02:17,201] INFO stream-thread [clientId-StreamThread-238] 
> State transition from STARTING to PENDING_SHUTDOWN 
> (org.apache.kafka.streams.processor.internals.StreamThread:214)
> [2019-02-12 07:02:17,201] INFO stream-thread [clientId-StreamThread-239] 
> Informed to shut down 
> (org.apache.kafka.streams.processor.internals.StreamThread:1192)
> [2019-02-12 07:02:17,201] INFO stream-thread [clientId-StreamThread-239] 
> State transition from STARTING to PENDING_SHUTDOWN 
> (org.apache.kafka.streams.processor.internals.StreamThread:214)
> [2019-02-12 07:02:17,205] INFO Cluster ID: J8uJhiTKQx-Y_i9LzT0iLg 
> (org.apache.kafka.clients.Metadata:365)
> [2019-02-12 07:02:17,205] INFO Cluster ID: J8uJhiTKQx-Y_i9LzT0iLg 
> (org.apache.kafka.clients.Metadata:365)
> [2019-02-12 07:02:17,205] INFO [Consumer 
> clientId=clientId-StreamThread-238-consumer, groupId=appId] Discovered group 
> coordinator localhost:36122 (id: 2147483647 rack: null) 
> (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:675)
> [2019-02-12 07:02:17,205] INFO [Consumer 
> clientId=clientId-StreamThread-239-consumer, groupId=appId] Discovered group 
> coordinator localhost:36122 (id: 2147483647 rack: null) 
> (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:675)
> [2019-02-12 07:02:17,206] INFO [Consumer 
> clientId=clientId-StreamThread-238-consumer, groupId=appId] Revoking 
> previously assigned partitions [] 
> (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:458)
> [2019-02-12 07:02:17,206] INFO [Consumer 
> clientId=clientId-StreamThread-239-consumer, groupId=appId] Revoking 
> previously assigned partitions [] 
> (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:458)
> [2019-02-12 07:02:17,206] INFO [Consumer 
> clientId=clientId-StreamThread-238-consumer, groupId=appId] (Re-)joining 
> group (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:491)
> [2019-02-12 07:02:17,206] INFO [Consumer 
> clientId=clientId-StreamThread-239-consumer, groupId=appId] (Re-)joining 
> group (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:491)
> [2019-02-12 07:02:17,208] INFO [Consumer 
> cli

[jira] [Updated] (KAFKA-7921) Instable KafkaStreamsTest

2019-03-07 Thread Matthias J. Sax (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7921?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-7921:
---
Priority: Critical  (was: Major)

> Instable KafkaStreamsTest
> -
>
> Key: KAFKA-7921
> URL: https://issues.apache.org/jira/browse/KAFKA-7921
> Project: Kafka
>  Issue Type: Bug
>  Components: streams, unit tests
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
>
> {{KafkaStreamsTest}} failed multiple times, eg,
> {quote}java.lang.AssertionError: Condition not met within timeout 15000. 
> Streams never started.
> at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:365)
> at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:325)
> at 
> org.apache.kafka.streams.KafkaStreamsTest.shouldThrowOnCleanupWhileRunning(KafkaStreamsTest.java:556){quote}
> or
> {quote}java.lang.AssertionError: Condition not met within timeout 15000. 
> Streams never started.
> at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:365)
> at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:325)
> at 
> org.apache.kafka.streams.KafkaStreamsTest.testStateThreadClose(KafkaStreamsTest.java:255){quote}
>  
> The preserved logs are as follows:
> {quote}[2019-02-12 07:02:17,198] INFO Kafka version: 2.3.0-SNAPSHOT 
> (org.apache.kafka.common.utils.AppInfoParser:109)
> [2019-02-12 07:02:17,198] INFO Kafka commitId: 08036fa4b1e5b822 
> (org.apache.kafka.common.utils.AppInfoParser:110)
> [2019-02-12 07:02:17,199] INFO stream-client [clientId] State transition from 
> CREATED to REBALANCING (org.apache.kafka.streams.KafkaStreams:263)
> [2019-02-12 07:02:17,200] INFO stream-thread [clientId-StreamThread-238] 
> Starting (org.apache.kafka.streams.processor.internals.StreamThread:767)
> [2019-02-12 07:02:17,200] INFO stream-client [clientId] State transition from 
> REBALANCING to PENDING_SHUTDOWN (org.apache.kafka.streams.KafkaStreams:263)
> [2019-02-12 07:02:17,200] INFO stream-thread [clientId-StreamThread-239] 
> Starting (org.apache.kafka.streams.processor.internals.StreamThread:767)
> [2019-02-12 07:02:17,200] INFO stream-thread [clientId-StreamThread-238] 
> State transition from CREATED to STARTING 
> (org.apache.kafka.streams.processor.internals.StreamThread:214)
> [2019-02-12 07:02:17,200] INFO stream-thread [clientId-StreamThread-239] 
> State transition from CREATED to STARTING 
> (org.apache.kafka.streams.processor.internals.StreamThread:214)
> [2019-02-12 07:02:17,200] INFO stream-thread [clientId-StreamThread-238] 
> Informed to shut down 
> (org.apache.kafka.streams.processor.internals.StreamThread:1192)
> [2019-02-12 07:02:17,201] INFO stream-thread [clientId-StreamThread-238] 
> State transition from STARTING to PENDING_SHUTDOWN 
> (org.apache.kafka.streams.processor.internals.StreamThread:214)
> [2019-02-12 07:02:17,201] INFO stream-thread [clientId-StreamThread-239] 
> Informed to shut down 
> (org.apache.kafka.streams.processor.internals.StreamThread:1192)
> [2019-02-12 07:02:17,201] INFO stream-thread [clientId-StreamThread-239] 
> State transition from STARTING to PENDING_SHUTDOWN 
> (org.apache.kafka.streams.processor.internals.StreamThread:214)
> [2019-02-12 07:02:17,205] INFO Cluster ID: J8uJhiTKQx-Y_i9LzT0iLg 
> (org.apache.kafka.clients.Metadata:365)
> [2019-02-12 07:02:17,205] INFO Cluster ID: J8uJhiTKQx-Y_i9LzT0iLg 
> (org.apache.kafka.clients.Metadata:365)
> [2019-02-12 07:02:17,205] INFO [Consumer 
> clientId=clientId-StreamThread-238-consumer, groupId=appId] Discovered group 
> coordinator localhost:36122 (id: 2147483647 rack: null) 
> (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:675)
> [2019-02-12 07:02:17,205] INFO [Consumer 
> clientId=clientId-StreamThread-239-consumer, groupId=appId] Discovered group 
> coordinator localhost:36122 (id: 2147483647 rack: null) 
> (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:675)
> [2019-02-12 07:02:17,206] INFO [Consumer 
> clientId=clientId-StreamThread-238-consumer, groupId=appId] Revoking 
> previously assigned partitions [] 
> (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:458)
> [2019-02-12 07:02:17,206] INFO [Consumer 
> clientId=clientId-StreamThread-239-consumer, groupId=appId] Revoking 
> previously assigned partitions [] 
> (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:458)
> [2019-02-12 07:02:17,206] INFO [Consumer 
> clientId=clientId-StreamThread-238-consumer, groupId=appId] (Re-)joining 
> group (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:491)
> [2019-02-12 07:02:17,206] INFO [Consumer 
> clientId=clientId-StreamThread-239-consumer, groupId=appId] (Re-)joining 
> group (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:491)
> [2019-02-12 07:02:17,208] INFO [Consume

[jira] [Assigned] (KAFKA-7921) Instable KafkaStreamsTest

2019-03-07 Thread Matthias J. Sax (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7921?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax reassigned KAFKA-7921:
--

Assignee: (was: Matthias J. Sax)

> Instable KafkaStreamsTest
> -
>
> Key: KAFKA-7921
> URL: https://issues.apache.org/jira/browse/KAFKA-7921
> Project: Kafka
>  Issue Type: Bug
>  Components: streams, unit tests
>Reporter: Matthias J. Sax
>Priority: Major
>
> {{KafkaStreamsTest}} failed multiple times, eg,
> {quote}java.lang.AssertionError: Condition not met within timeout 15000. 
> Streams never started.
> at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:365)
> at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:325)
> at 
> org.apache.kafka.streams.KafkaStreamsTest.shouldThrowOnCleanupWhileRunning(KafkaStreamsTest.java:556){quote}
> or
> {quote}java.lang.AssertionError: Condition not met within timeout 15000. 
> Streams never started.
> at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:365)
> at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:325)
> at 
> org.apache.kafka.streams.KafkaStreamsTest.testStateThreadClose(KafkaStreamsTest.java:255){quote}
>  
> The preserved logs are as follows:
> {quote}[2019-02-12 07:02:17,198] INFO Kafka version: 2.3.0-SNAPSHOT 
> (org.apache.kafka.common.utils.AppInfoParser:109)
> [2019-02-12 07:02:17,198] INFO Kafka commitId: 08036fa4b1e5b822 
> (org.apache.kafka.common.utils.AppInfoParser:110)
> [2019-02-12 07:02:17,199] INFO stream-client [clientId] State transition from 
> CREATED to REBALANCING (org.apache.kafka.streams.KafkaStreams:263)
> [2019-02-12 07:02:17,200] INFO stream-thread [clientId-StreamThread-238] 
> Starting (org.apache.kafka.streams.processor.internals.StreamThread:767)
> [2019-02-12 07:02:17,200] INFO stream-client [clientId] State transition from 
> REBALANCING to PENDING_SHUTDOWN (org.apache.kafka.streams.KafkaStreams:263)
> [2019-02-12 07:02:17,200] INFO stream-thread [clientId-StreamThread-239] 
> Starting (org.apache.kafka.streams.processor.internals.StreamThread:767)
> [2019-02-12 07:02:17,200] INFO stream-thread [clientId-StreamThread-238] 
> State transition from CREATED to STARTING 
> (org.apache.kafka.streams.processor.internals.StreamThread:214)
> [2019-02-12 07:02:17,200] INFO stream-thread [clientId-StreamThread-239] 
> State transition from CREATED to STARTING 
> (org.apache.kafka.streams.processor.internals.StreamThread:214)
> [2019-02-12 07:02:17,200] INFO stream-thread [clientId-StreamThread-238] 
> Informed to shut down 
> (org.apache.kafka.streams.processor.internals.StreamThread:1192)
> [2019-02-12 07:02:17,201] INFO stream-thread [clientId-StreamThread-238] 
> State transition from STARTING to PENDING_SHUTDOWN 
> (org.apache.kafka.streams.processor.internals.StreamThread:214)
> [2019-02-12 07:02:17,201] INFO stream-thread [clientId-StreamThread-239] 
> Informed to shut down 
> (org.apache.kafka.streams.processor.internals.StreamThread:1192)
> [2019-02-12 07:02:17,201] INFO stream-thread [clientId-StreamThread-239] 
> State transition from STARTING to PENDING_SHUTDOWN 
> (org.apache.kafka.streams.processor.internals.StreamThread:214)
> [2019-02-12 07:02:17,205] INFO Cluster ID: J8uJhiTKQx-Y_i9LzT0iLg 
> (org.apache.kafka.clients.Metadata:365)
> [2019-02-12 07:02:17,205] INFO Cluster ID: J8uJhiTKQx-Y_i9LzT0iLg 
> (org.apache.kafka.clients.Metadata:365)
> [2019-02-12 07:02:17,205] INFO [Consumer 
> clientId=clientId-StreamThread-238-consumer, groupId=appId] Discovered group 
> coordinator localhost:36122 (id: 2147483647 rack: null) 
> (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:675)
> [2019-02-12 07:02:17,205] INFO [Consumer 
> clientId=clientId-StreamThread-239-consumer, groupId=appId] Discovered group 
> coordinator localhost:36122 (id: 2147483647 rack: null) 
> (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:675)
> [2019-02-12 07:02:17,206] INFO [Consumer 
> clientId=clientId-StreamThread-238-consumer, groupId=appId] Revoking 
> previously assigned partitions [] 
> (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:458)
> [2019-02-12 07:02:17,206] INFO [Consumer 
> clientId=clientId-StreamThread-239-consumer, groupId=appId] Revoking 
> previously assigned partitions [] 
> (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:458)
> [2019-02-12 07:02:17,206] INFO [Consumer 
> clientId=clientId-StreamThread-238-consumer, groupId=appId] (Re-)joining 
> group (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:491)
> [2019-02-12 07:02:17,206] INFO [Consumer 
> clientId=clientId-StreamThread-239-consumer, groupId=appId] (Re-)joining 
> group (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:491)
> [2019-02-12 07:02:17,208] INFO [Consumer 
> clientId=clientId-Str

[jira] [Updated] (KAFKA-7921) Instable KafkaStreamsTest

2019-03-07 Thread Matthias J. Sax (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7921?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-7921:
---
Affects Version/s: 2.3.0

> Instable KafkaStreamsTest
> -
>
> Key: KAFKA-7921
> URL: https://issues.apache.org/jira/browse/KAFKA-7921
> Project: Kafka
>  Issue Type: Bug
>  Components: streams, unit tests
>Affects Versions: 2.3.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
>
> {{KafkaStreamsTest}} failed multiple times, eg,
> {quote}java.lang.AssertionError: Condition not met within timeout 15000. 
> Streams never started.
> at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:365)
> at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:325)
> at 
> org.apache.kafka.streams.KafkaStreamsTest.shouldThrowOnCleanupWhileRunning(KafkaStreamsTest.java:556){quote}
> or
> {quote}java.lang.AssertionError: Condition not met within timeout 15000. 
> Streams never started.
> at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:365)
> at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:325)
> at 
> org.apache.kafka.streams.KafkaStreamsTest.testStateThreadClose(KafkaStreamsTest.java:255){quote}
>  
> The preserved logs are as follows:
> {quote}[2019-02-12 07:02:17,198] INFO Kafka version: 2.3.0-SNAPSHOT 
> (org.apache.kafka.common.utils.AppInfoParser:109)
> [2019-02-12 07:02:17,198] INFO Kafka commitId: 08036fa4b1e5b822 
> (org.apache.kafka.common.utils.AppInfoParser:110)
> [2019-02-12 07:02:17,199] INFO stream-client [clientId] State transition from 
> CREATED to REBALANCING (org.apache.kafka.streams.KafkaStreams:263)
> [2019-02-12 07:02:17,200] INFO stream-thread [clientId-StreamThread-238] 
> Starting (org.apache.kafka.streams.processor.internals.StreamThread:767)
> [2019-02-12 07:02:17,200] INFO stream-client [clientId] State transition from 
> REBALANCING to PENDING_SHUTDOWN (org.apache.kafka.streams.KafkaStreams:263)
> [2019-02-12 07:02:17,200] INFO stream-thread [clientId-StreamThread-239] 
> Starting (org.apache.kafka.streams.processor.internals.StreamThread:767)
> [2019-02-12 07:02:17,200] INFO stream-thread [clientId-StreamThread-238] 
> State transition from CREATED to STARTING 
> (org.apache.kafka.streams.processor.internals.StreamThread:214)
> [2019-02-12 07:02:17,200] INFO stream-thread [clientId-StreamThread-239] 
> State transition from CREATED to STARTING 
> (org.apache.kafka.streams.processor.internals.StreamThread:214)
> [2019-02-12 07:02:17,200] INFO stream-thread [clientId-StreamThread-238] 
> Informed to shut down 
> (org.apache.kafka.streams.processor.internals.StreamThread:1192)
> [2019-02-12 07:02:17,201] INFO stream-thread [clientId-StreamThread-238] 
> State transition from STARTING to PENDING_SHUTDOWN 
> (org.apache.kafka.streams.processor.internals.StreamThread:214)
> [2019-02-12 07:02:17,201] INFO stream-thread [clientId-StreamThread-239] 
> Informed to shut down 
> (org.apache.kafka.streams.processor.internals.StreamThread:1192)
> [2019-02-12 07:02:17,201] INFO stream-thread [clientId-StreamThread-239] 
> State transition from STARTING to PENDING_SHUTDOWN 
> (org.apache.kafka.streams.processor.internals.StreamThread:214)
> [2019-02-12 07:02:17,205] INFO Cluster ID: J8uJhiTKQx-Y_i9LzT0iLg 
> (org.apache.kafka.clients.Metadata:365)
> [2019-02-12 07:02:17,205] INFO Cluster ID: J8uJhiTKQx-Y_i9LzT0iLg 
> (org.apache.kafka.clients.Metadata:365)
> [2019-02-12 07:02:17,205] INFO [Consumer 
> clientId=clientId-StreamThread-238-consumer, groupId=appId] Discovered group 
> coordinator localhost:36122 (id: 2147483647 rack: null) 
> (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:675)
> [2019-02-12 07:02:17,205] INFO [Consumer 
> clientId=clientId-StreamThread-239-consumer, groupId=appId] Discovered group 
> coordinator localhost:36122 (id: 2147483647 rack: null) 
> (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:675)
> [2019-02-12 07:02:17,206] INFO [Consumer 
> clientId=clientId-StreamThread-238-consumer, groupId=appId] Revoking 
> previously assigned partitions [] 
> (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:458)
> [2019-02-12 07:02:17,206] INFO [Consumer 
> clientId=clientId-StreamThread-239-consumer, groupId=appId] Revoking 
> previously assigned partitions [] 
> (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:458)
> [2019-02-12 07:02:17,206] INFO [Consumer 
> clientId=clientId-StreamThread-238-consumer, groupId=appId] (Re-)joining 
> group (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:491)
> [2019-02-12 07:02:17,206] INFO [Consumer 
> clientId=clientId-StreamThread-239-consumer, groupId=appId] (Re-)joining 
> group (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:491)
> [2019-02-12 07:02:

[jira] [Commented] (KAFKA-8059) Flaky Test DynamicConnectionQuotaTest #testDynamicConnectionQuota

2019-03-07 Thread Matthias J. Sax (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16787391#comment-16787391
 ] 

Matthias J. Sax commented on KAFKA-8059:


Failed again: 
https://builds.apache.org/job/kafka-pr-jdk8-scala2.11/20106/testReport/junit/kafka.network/DynamicConnectionQuotaTest/testDynamicConnectionQuota/

> Flaky Test DynamicConnectionQuotaTest #testDynamicConnectionQuota
> -
>
> Key: KAFKA-8059
> URL: https://issues.apache.org/jira/browse/KAFKA-8059
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.2.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.2.1
>
>
> [https://builds.apache.org/blue/organizations/jenkins/kafka-2.2-jdk8/detail/kafka-2.2-jdk8/46/tests]
> {quote}org.scalatest.junit.JUnitTestFailedError: Expected exception 
> java.io.IOException to be thrown, but no exception was thrown
> at 
> org.scalatest.junit.AssertionsForJUnit$class.newAssertionFailedException(AssertionsForJUnit.scala:100)
> at 
> org.scalatest.junit.JUnitSuite.newAssertionFailedException(JUnitSuite.scala:71)
> at org.scalatest.Assertions$class.intercept(Assertions.scala:822)
> at org.scalatest.junit.JUnitSuite.intercept(JUnitSuite.scala:71)
> at 
> kafka.network.DynamicConnectionQuotaTest.testDynamicConnectionQuota(DynamicConnectionQuotaTest.scala:82){quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-8060) The Kafka protocol generator should allow null default values for strings

2019-03-07 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8060?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-8060.
--
   Resolution: Fixed
Fix Version/s: 2.3.0

Issue resolved by pull request 6387
[https://github.com/apache/kafka/pull/6387]

> The Kafka protocol generator should allow null default values for strings
> -
>
> Key: KAFKA-8060
> URL: https://issues.apache.org/jira/browse/KAFKA-8060
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Colin P. McCabe
>Assignee: Colin P. McCabe
>Priority: Major
> Fix For: 2.3.0
>
>
> The Kafka protocol generator should allow null default values for strings.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-8060) The Kafka protocol generator should allow null default values for strings

2019-03-07 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16787505#comment-16787505
 ] 

ASF GitHub Bot commented on KAFKA-8060:
---

omkreddy commented on pull request #6387: KAFKA-8060: The Kafka protocol 
generator should allow null defaults
URL: https://github.com/apache/kafka/pull/6387
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> The Kafka protocol generator should allow null default values for strings
> -
>
> Key: KAFKA-8060
> URL: https://issues.apache.org/jira/browse/KAFKA-8060
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Colin P. McCabe
>Assignee: Colin P. McCabe
>Priority: Major
> Fix For: 2.3.0
>
>
> The Kafka protocol generator should allow null default values for strings.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

