[jira] [Created] (KAFKA-8127) It may need to import scala.io

2019-03-19 Thread JieFang.He (JIRA)
JieFang.He created KAFKA-8127:
-

 Summary: It may need to import scala.io
 Key: KAFKA-8127
 URL: https://issues.apache.org/jira/browse/KAFKA-8127
 Project: Kafka
  Issue Type: Bug
Reporter: JieFang.He


I get an error when compiling Kafka, which disappears after importing scala.io.

 
{code:java}
D:\gerrit\Kafka\core\src\main\scala\kafka\tools\StateChangeLogMerger.scala:140: 
object Source is not a member of package io
val lineIterators = files.map(io.Source.fromFile(_).getLines)
^
6 warnings found
one error found
:core:compileScala FAILED

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':core:compileScala'.
> Compilation failed
{code}
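A minimal sketch of the fix (hypothetical object and method names, not the actual StateChangeLogMerger code): with an explicit `import scala.io.Source`, the reference no longer depends on resolving a relative `io` package, which can be shadowed by another `io` package in scope and produce the "object Source is not a member of package io" error above.

```scala
import scala.io.Source

// Hypothetical stand-in for the failing line: read lines from each file
// using the explicitly imported scala.io.Source.
object ReadFiles {
  def lineCount(paths: Seq[String]): Int =
    paths.map(p => Source.fromFile(p).getLines().size).sum
}
```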



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-8127) It may need to import scala.io

2019-03-19 Thread JieFang.He (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8127?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

JieFang.He updated KAFKA-8127:
--
Issue Type: Improvement  (was: Bug)

> It may need to import scala.io
> --
>
> Key: KAFKA-8127
> URL: https://issues.apache.org/jira/browse/KAFKA-8127
> Project: Kafka
>  Issue Type: Improvement
>Reporter: JieFang.He
>Priority: Major
>
> I get an error when compiling Kafka, which disappears after importing scala.io.
>  
> {code:java}
> D:\gerrit\Kafka\core\src\main\scala\kafka\tools\StateChangeLogMerger.scala:140:
>  object Source is not a member of package io
> val lineIterators = files.map(io.Source.fromFile(_).getLines)
> ^
> 6 warnings found
> one error found
> :core:compileScala FAILED
> FAILURE: Build failed with an exception.
> * What went wrong:
> Execution failed for task ':core:compileScala'.
> > Compilation failed
> {code}





[jira] [Assigned] (KAFKA-8123) Flaky Test RequestQuotaTest#testResponseThrottleTimeWhenBothProduceAndRequestQuotasViolated

2019-03-19 Thread Rajini Sivaram (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8123?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajini Sivaram reassigned KAFKA-8123:
-

Assignee: Rajini Sivaram

> Flaky Test 
> RequestQuotaTest#testResponseThrottleTimeWhenBothProduceAndRequestQuotasViolated
>  
> 
>
> Key: KAFKA-8123
> URL: https://issues.apache.org/jira/browse/KAFKA-8123
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.3.0
>Reporter: Matthias J. Sax
>Assignee: Rajini Sivaram
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0
>
>
> [https://builds.apache.org/blue/organizations/jenkins/kafka-trunk-jdk8/detail/kafka-trunk-jdk8/3474/tests]
> {quote}java.util.concurrent.ExecutionException: java.lang.AssertionError: 
> Throttle time metrics for produce quota not updated: Client 
> small-quota-producer-client apiKey PRODUCE requests 1 requestTime 
> 0.015790873650539786 throttleTime 1000.0
> at java.util.concurrent.FutureTask.report(FutureTask.java:122)
> at java.util.concurrent.FutureTask.get(FutureTask.java:206)
> at 
> kafka.server.RequestQuotaTest$$anonfun$waitAndCheckResults$1.apply(RequestQuotaTest.scala:423)
> at 
> kafka.server.RequestQuotaTest$$anonfun$waitAndCheckResults$1.apply(RequestQuotaTest.scala:421)
> at scala.collection.immutable.List.foreach(List.scala:392)
> at 
> scala.collection.generic.TraversableForwarder$class.foreach(TraversableForwarder.scala:35)
> at scala.collection.mutable.ListBuffer.foreach(ListBuffer.scala:45)
> at 
> kafka.server.RequestQuotaTest.waitAndCheckResults(RequestQuotaTest.scala:421)
> at 
> kafka.server.RequestQuotaTest.testResponseThrottleTimeWhenBothProduceAndRequestQuotasViolated(RequestQuotaTest.scala:130){quote}
> STDOUT
> {quote}[2019-03-18 21:42:16,637] ERROR [KafkaApi-0] Error when handling 
> request: clientId=unauthorized-CONTROLLED_SHUTDOWN, correlationId=1, 
> api=CONTROLLED_SHUTDOWN, body=\{broker_id=0,broker_epoch=9223372036854775807} 
> (kafka.server.KafkaApis:76)
> org.apache.kafka.common.errors.ClusterAuthorizationException: Request 
> Request(processor=2, connectionId=127.0.0.1:42118-127.0.0.1:47612-1, 
> session=Session(User:Unauthorized,/127.0.0.1), 
> listenerName=ListenerName(PLAINTEXT), securityProtocol=PLAINTEXT, 
> buffer=null) is not authorized.
> [2019-03-18 21:42:16,655] ERROR [KafkaApi-0] Error when handling request: 
> clientId=unauthorized-STOP_REPLICA, correlationId=1, api=STOP_REPLICA, 
> body=\{controller_id=0,controller_epoch=2147483647,broker_epoch=9223372036854775807,delete_partitions=true,partitions=[{topic=topic-1,partition_ids=[0]}]}
>  (kafka.server.KafkaApis:76)
> org.apache.kafka.common.errors.ClusterAuthorizationException: Request 
> Request(processor=0, connectionId=127.0.0.1:42118-127.0.0.1:47614-2, 
> session=Session(User:Unauthorized,/127.0.0.1), 
> listenerName=ListenerName(PLAINTEXT), securityProtocol=PLAINTEXT, 
> buffer=null) is not authorized.
> [2019-03-18 21:42:16,657] ERROR [KafkaApi-0] Error when handling request: 
> clientId=unauthorized-LEADER_AND_ISR, correlationId=1, api=LEADER_AND_ISR, 
> body=\{controller_id=0,controller_epoch=2147483647,broker_epoch=9223372036854775807,topic_states=[{topic=topic-1,partition_states=[{partition=0,controller_epoch=2147483647,leader=0,leader_epoch=2147483647,isr=[0],zk_version=2,replicas=[0],is_new=true}]}],live_leaders=[\{id=0,host=localhost,port=0}]}
>  (kafka.server.KafkaApis:76)
> org.apache.kafka.common.errors.ClusterAuthorizationException: Request 
> Request(processor=1, connectionId=127.0.0.1:42118-127.0.0.1:47616-2, 
> session=Session(User:Unauthorized,/127.0.0.1), 
> listenerName=ListenerName(PLAINTEXT), securityProtocol=PLAINTEXT, 
> buffer=null) is not authorized.
> [2019-03-18 21:42:16,668] ERROR [KafkaApi-0] Error when handling request: 
> clientId=unauthorized-UPDATE_METADATA, correlationId=1, api=UPDATE_METADATA, 
> body=\{controller_id=0,controller_epoch=2147483647,broker_epoch=9223372036854775807,topic_states=[{topic=topic-1,partition_states=[{partition=0,controller_epoch=2147483647,leader=0,leader_epoch=2147483647,isr=[0],zk_version=2,replicas=[0],offline_replicas=[]}]}],live_brokers=[\{id=0,end_points=[{port=0,host=localhost,listener_name=PLAINTEXT,security_protocol_type=0}],rack=null}]}
>  (kafka.server.KafkaApis:76)
> org.apache.kafka.common.errors.ClusterAuthorizationException: Request 
> Request(processor=2, connectionId=127.0.0.1:42118-127.0.0.1:47618-2, 
> session=Session(User:Unauthorized,/127.0.0.1), 
> listenerName=ListenerName(PLAINTEXT), securityProtocol=PLAINTEXT, 
> buffer=null) is not authorized.
> [2019-03-18 21:42:16,725] ERROR [KafkaApi-0] Error when handling request: 
> clientId=unauthorized-STOP_REPLICA, correlationId=2, api=STOP_REPLICA, 

[jira] [Commented] (KAFKA-8127) It may need to import scala.io

2019-03-19 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16795825#comment-16795825
 ] 

ASF GitHub Bot commented on KAFKA-8127:
---

hejiefang commented on pull request #6470: KAFKA-8127 It may need to import 
scala.io
URL: https://github.com/apache/kafka/pull/6470
 
 
   
https://issues.apache.org/jira/browse/KAFKA-8127?orderby=created+DESC%2C+priority+DESC%2C+updated+DESC
   
   Compiling Kafka may fail with an error if scala.io is not imported.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> It may need to import scala.io
> --
>
> Key: KAFKA-8127
> URL: https://issues.apache.org/jira/browse/KAFKA-8127
> Project: Kafka
>  Issue Type: Improvement
>Reporter: JieFang.He
>Priority: Major
>
> I get an error when compiling Kafka, which disappears after importing scala.io.
>  
> {code:java}
> D:\gerrit\Kafka\core\src\main\scala\kafka\tools\StateChangeLogMerger.scala:140:
>  object Source is not a member of package io
> val lineIterators = files.map(io.Source.fromFile(_).getLines)
> ^
> 6 warnings found
> one error found
> :core:compileScala FAILED
> FAILURE: Build failed with an exception.
> * What went wrong:
> Execution failed for task ':core:compileScala'.
> > Compilation failed
> {code}





[jira] [Assigned] (KAFKA-6675) Connect workers should log plugin path and available plugins more clearly

2019-03-19 Thread Valeria Vasylieva (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6675?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Valeria Vasylieva reassigned KAFKA-6675:


Assignee: Valeria Vasylieva

> Connect workers should log plugin path and available plugins more clearly
> -
>
> Key: KAFKA-6675
> URL: https://issues.apache.org/jira/browse/KAFKA-6675
> Project: Kafka
>  Issue Type: Improvement
>  Components: KafkaConnect
>Affects Versions: 0.11.0.1
>Reporter: Randall Hauch
>Assignee: Valeria Vasylieva
>Priority: Minor
>
> Users struggle with setting the plugin path and properly installing plugins. 
> If users get any of this wrong, they get strange errors only after they run 
> the worker and attempt to deploy connectors or use transformations. 
> The Connect worker should more obviously output the plugin path directories 
> and the available plugins. For example, if the {{plugin.path}} were:
> {code}
> plugin.path=/usr/local/share/java,/usr/local/plugins
> {code}
> then the worker might output something like the following information in the 
> log:
> {noformat}
> Looking for plugins on classpath and inside plugin.path directories:
>   /usr/local/share/java
>   /usr/local/plugins
>  
> Source Connector(s):
>   FileStreamSource  (org.apache.kafka.connect.file.FileStreamSourceConnector) 
>   @ classpath
>   FileStreamSink(org.apache.kafka.connect.file.FileStreamSinkConnector)   
>   @ classpath
>   JdbcSource(io.confluent.connect.jdbc.JdbcSourceConnector)   
>   @ /usr/local/share/java/kafka-connect-jdbc
>   MySql (io.debezium.connector.mysql.MySqlConnector)  
>   @ /usr/local/plugins/debezium-connector-mysql
> Converter(s):
>   JsonConverter (org.apache.kafka.connect.json.JsonConverter) 
>   @ classpath
>   ByteArrayConverter
> (org.apache.kafka.connect.converters.ByteArrayConverter)@ classpath
>   SimpleHeaderConverter 
> (org.apache.kafka.connect.converters.SimpleHeaderConverter) @ classpath
>   AvroConverter (io.confluent.connect.avro.AvroConverter) 
>   @ /usr/local/share/java/kafka-serde-tools
> Transformation(s):
>   InsertField   (org.apache.kafka.connect.transforms.InsertField) 
>   @ classpath
>   ReplaceField  (org.apache.kafka.connect.transforms.ReplaceField)
>   @ classpath
>   MaskField (org.apache.kafka.connect.transforms.MaskField)   
>   @ classpath
>   ValueToKey(org.apache.kafka.connect.transforms.ValueToKey)  
>   @ classpath
>   HoistField(org.apache.kafka.connect.transforms.HoistField)  
>   @ classpath
>   ExtractField  (org.apache.kafka.connect.transforms.ExtractField)
>   @ classpath
>   SetSchemaMetadata (org.apache.kafka.connect.transforms.SetSchemaMetadata)   
>   @ classpath
>   RegexRouter   (org.apache.kafka.connect.transforms.RegexRouter) 
>   @ classpath
>   TimestampRouter   (org.apache.kafka.connect.transforms.TimestampRouter) 
>   @ classpath
> {noformat}
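The proposed summary could be produced from information the worker's plugin scanner already collects. A minimal sketch of the formatting, where `PluginDesc` and the `plugins` argument are hypothetical stand-ins rather than the Connect worker's actual types:

```scala
// Hypothetical plugin descriptor: alias, fully qualified class name,
// and the location it was loaded from (classpath or a plugin.path dir).
case class PluginDesc(alias: String, className: String, location: String)

// Render one section of the proposed log output, e.g. "Converter(s):",
// with aliases padded into an aligned column.
def formatPlugins(kind: String, plugins: Seq[PluginDesc]): String = {
  val width = (plugins.map(_.alias.length) :+ 0).max + 2
  val rows = plugins.map { p =>
    s"  ${p.alias.padTo(width, ' ')}(${p.className})  @ ${p.location}"
  }
  (s"$kind:" +: rows).mkString("\n")
}
```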





[jira] [Commented] (KAFKA-6675) Connect workers should log plugin path and available plugins more clearly

2019-03-19 Thread Valeria Vasylieva (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-6675?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16795837#comment-16795837
 ] 

Valeria Vasylieva commented on KAFKA-6675:
--

I would like to work on it

> Connect workers should log plugin path and available plugins more clearly
> -
>
> Key: KAFKA-6675
> URL: https://issues.apache.org/jira/browse/KAFKA-6675
> Project: Kafka
>  Issue Type: Improvement
>  Components: KafkaConnect
>Affects Versions: 0.11.0.1
>Reporter: Randall Hauch
>Priority: Minor
>
> Users struggle with setting the plugin path and properly installing plugins. 
> If users get any of this wrong, they get strange errors only after they run 
> the worker and attempt to deploy connectors or use transformations. 
> The Connect worker should more obviously output the plugin path directories 
> and the available plugins. For example, if the {{plugin.path}} were:
> {code}
> plugin.path=/usr/local/share/java,/usr/local/plugins
> {code}
> then the worker might output something like the following information in the 
> log:
> {noformat}
> Looking for plugins on classpath and inside plugin.path directories:
>   /usr/local/share/java
>   /usr/local/plugins
>  
> Source Connector(s):
>   FileStreamSource  (org.apache.kafka.connect.file.FileStreamSourceConnector) 
>   @ classpath
>   FileStreamSink(org.apache.kafka.connect.file.FileStreamSinkConnector)   
>   @ classpath
>   JdbcSource(io.confluent.connect.jdbc.JdbcSourceConnector)   
>   @ /usr/local/share/java/kafka-connect-jdbc
>   MySql (io.debezium.connector.mysql.MySqlConnector)  
>   @ /usr/local/plugins/debezium-connector-mysql
> Converter(s):
>   JsonConverter (org.apache.kafka.connect.json.JsonConverter) 
>   @ classpath
>   ByteArrayConverter
> (org.apache.kafka.connect.converters.ByteArrayConverter)@ classpath
>   SimpleHeaderConverter 
> (org.apache.kafka.connect.converters.SimpleHeaderConverter) @ classpath
>   AvroConverter (io.confluent.connect.avro.AvroConverter) 
>   @ /usr/local/share/java/kafka-serde-tools
> Transformation(s):
>   InsertField   (org.apache.kafka.connect.transforms.InsertField) 
>   @ classpath
>   ReplaceField  (org.apache.kafka.connect.transforms.ReplaceField)
>   @ classpath
>   MaskField (org.apache.kafka.connect.transforms.MaskField)   
>   @ classpath
>   ValueToKey(org.apache.kafka.connect.transforms.ValueToKey)  
>   @ classpath
>   HoistField(org.apache.kafka.connect.transforms.HoistField)  
>   @ classpath
>   ExtractField  (org.apache.kafka.connect.transforms.ExtractField)
>   @ classpath
>   SetSchemaMetadata (org.apache.kafka.connect.transforms.SetSchemaMetadata)   
>   @ classpath
>   RegexRouter   (org.apache.kafka.connect.transforms.RegexRouter) 
>   @ classpath
>   TimestampRouter   (org.apache.kafka.connect.transforms.TimestampRouter) 
>   @ classpath
> {noformat}





[jira] [Commented] (KAFKA-8030) Flaky Test TopicCommandWithAdminClientTest#testDescribeUnderMinIsrPartitionsMixed

2019-03-19 Thread Viktor Somogyi-Vass (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16795965#comment-16795965
 ] 

Viktor Somogyi-Vass commented on KAFKA-8030:


Started working on this, now trying to reproduce. I don't see exactly why it 
fails, but I suspect we have to wait for metadata propagation after killing 
the broker. For now I'll try to reproduce it.
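Waiting for metadata propagation after killing a broker usually takes the shape of a condition-polling helper, like the `waitUntilTrue` utility in Kafka's test code. A simplified sketch (assumed timeout values, not the actual TestUtils implementation):

```scala
// Poll a condition until it holds or a deadline passes, instead of
// asserting immediately after killing a broker.
def waitUntilTrue(condition: () => Boolean,
                  timeoutMs: Long = 15000L,
                  pollIntervalMs: Long = 100L): Boolean = {
  val deadline = System.currentTimeMillis() + timeoutMs
  var ok = condition()
  while (!ok && System.currentTimeMillis() < deadline) {
    Thread.sleep(pollIntervalMs)
    ok = condition()
  }
  ok
}
```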

> Flaky Test 
> TopicCommandWithAdminClientTest#testDescribeUnderMinIsrPartitionsMixed
> -
>
> Key: KAFKA-8030
> URL: https://issues.apache.org/jira/browse/KAFKA-8030
> Project: Kafka
>  Issue Type: Bug
>  Components: admin, unit tests
>Affects Versions: 2.3.0
>Reporter: Matthias J. Sax
>Assignee: Viktor Somogyi-Vass
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0
>
>
> [https://builds.apache.org/job/kafka-pr-jdk11-scala2.12/2830/testReport/junit/kafka.admin/TopicCommandWithAdminClientTest/testDescribeUnderMinIsrPartitionsMixed/]
> {quote}java.lang.AssertionError at org.junit.Assert.fail(Assert.java:87) at 
> org.junit.Assert.assertTrue(Assert.java:42) at 
> org.junit.Assert.assertTrue(Assert.java:53) at 
> kafka.admin.TopicCommandWithAdminClientTest.testDescribeUnderMinIsrPartitionsMixed(TopicCommandWithAdminClientTest.scala:602){quote}
> STDERR
> {quote}Option "[replica-assignment]" can't be used with option 
> "[partitions]"{quote}





[jira] [Comment Edited] (KAFKA-8030) Flaky Test TopicCommandWithAdminClientTest#testDescribeUnderMinIsrPartitionsMixed

2019-03-19 Thread Viktor Somogyi-Vass (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16795965#comment-16795965
 ] 

Viktor Somogyi-Vass edited comment on KAFKA-8030 at 3/19/19 10:46 AM:
--

Started working on this, now trying to reproduce. I don't see exactly why it 
fails, but I suspect we have to wait for metadata propagation after killing 
the broker.


was (Author: viktorsomogyi):
Started working on this, now trying to reproduce. I don't see exactly why it 
fails, but I suspect we have to wait for metadata propagation after killing 
the broker. For now I'll try to reproduce it.

> Flaky Test 
> TopicCommandWithAdminClientTest#testDescribeUnderMinIsrPartitionsMixed
> -
>
> Key: KAFKA-8030
> URL: https://issues.apache.org/jira/browse/KAFKA-8030
> Project: Kafka
>  Issue Type: Bug
>  Components: admin, unit tests
>Affects Versions: 2.3.0
>Reporter: Matthias J. Sax
>Assignee: Viktor Somogyi-Vass
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0
>
>
> [https://builds.apache.org/job/kafka-pr-jdk11-scala2.12/2830/testReport/junit/kafka.admin/TopicCommandWithAdminClientTest/testDescribeUnderMinIsrPartitionsMixed/]
> {quote}java.lang.AssertionError at org.junit.Assert.fail(Assert.java:87) at 
> org.junit.Assert.assertTrue(Assert.java:42) at 
> org.junit.Assert.assertTrue(Assert.java:53) at 
> kafka.admin.TopicCommandWithAdminClientTest.testDescribeUnderMinIsrPartitionsMixed(TopicCommandWithAdminClientTest.scala:602){quote}
> STDERR
> {quote}Option "[replica-assignment]" can't be used with option 
> "[partitions]"{quote}





[jira] [Commented] (KAFKA-5998) /.checkpoint.tmp Not found exception

2019-03-19 Thread Jingguo Yao (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-5998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16795966#comment-16795966
 ] 

Jingguo Yao commented on KAFKA-5998:


After restarting a Kafka Streams application, we also got the following error:

 

2019-03-19 18:10:50,713 WARN [roombox-stream-client-StreamThread-1] 
o.a.k.s.p.i.ProcessorStateManager: task [0_0] Failed to write offset checkpoint 
file to /data/kafka-streams/roombox-stream/0_0/.checkpoint: {}
java.io.FileNotFoundException: 
/data/kafka-streams/roombox-stream/0_0/.checkpoint.tmp (No such file or 
directory)
 at java.io.FileOutputStream.open0(Native Method)
 at java.io.FileOutputStream.open(FileOutputStream.java:270)
 at java.io.FileOutputStream.<init>(FileOutputStream.java:213)
 at java.io.FileOutputStream.<init>(FileOutputStream.java:162)
 at 
org.apache.kafka.streams.state.internals.OffsetCheckpoint.write(OffsetCheckpoint.java:79)
 at 
org.apache.kafka.streams.processor.internals.ProcessorStateManager.checkpoint(ProcessorStateManager.java:293)
 at 
org.apache.kafka.streams.processor.internals.StreamTask.commit(StreamTask.java:446)
 at 
org.apache.kafka.streams.processor.internals.StreamTask.suspend(StreamTask.java:568)
 at 
org.apache.kafka.streams.processor.internals.StreamTask.close(StreamTask.java:691)
 at 
org.apache.kafka.streams.processor.internals.AssignedTasks.close(AssignedTasks.java:397)
 at 
org.apache.kafka.streams.processor.internals.TaskManager.shutdown(TaskManager.java:260)
 at 
org.apache.kafka.streams.processor.internals.StreamThread.completeShutdown(StreamThread.java:1181)
 at 
org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:758)
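The failure is consistent with the write-then-rename pattern OffsetCheckpoint uses: the checkpoint is written to a `.tmp` file first and then renamed into place, so if the task directory has been removed (for example during rebalance or shutdown) opening the `.tmp` file fails. A minimal sketch of that pattern, simplified and not the actual OffsetCheckpoint code:

```scala
import java.io.{File, FileOutputStream, IOException, PrintWriter}

// Simplified write-then-rename checkpoint, illustrating why a deleted
// task directory surfaces as FileNotFoundException on the ".tmp" file.
def writeCheckpoint(dir: File, offsets: Map[String, Long]): Unit = {
  val tmp = new File(dir, ".checkpoint.tmp")
  // Opening the stream throws FileNotFoundException if `dir` is gone.
  val out = new PrintWriter(new FileOutputStream(tmp))
  try offsets.foreach { case (tp, off) => out.println(s"$tp $off") }
  finally out.close()
  val target = new File(dir, ".checkpoint")
  if (!tmp.renameTo(target))
    throw new IOException(s"Failed to rename $tmp to $target")
}
```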

> /.checkpoint.tmp Not found exception
> 
>
> Key: KAFKA-5998
> URL: https://issues.apache.org/jira/browse/KAFKA-5998
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.11.0.0, 0.11.0.1, 2.1.1
>Reporter: Yogesh BG
>Priority: Major
> Attachments: 5998.v1.txt, 5998.v2.txt, Topology.txt, exc.txt, 
> props.txt, streams.txt
>
>
> I have one Kafka broker and one Kafka Streams instance running... It has been 
> running for two days under a load of around 2500 msgs per second. On the third 
> day I am getting the exception below for some of the partitions; I have 16 
> partitions, and only 0_0 and 0_1 give this error
> {{09:43:25.955 [ks_0_inst-StreamThread-6] WARN  
> o.a.k.s.p.i.ProcessorStateManager - Failed to write checkpoint file to 
> /data/kstreams/rtp-kafkastreams/0_1/.checkpoint:
> java.io.FileNotFoundException: 
> /data/kstreams/rtp-kafkastreams/0_1/.checkpoint.tmp (No such file or 
> directory)
> at java.io.FileOutputStream.open(Native Method) ~[na:1.7.0_111]
> at java.io.FileOutputStream.<init>(FileOutputStream.java:221) 
> ~[na:1.7.0_111]
> at java.io.FileOutputStream.<init>(FileOutputStream.java:171) 
> ~[na:1.7.0_111]
> at 
> org.apache.kafka.streams.state.internals.OffsetCheckpoint.write(OffsetCheckpoint.java:73)
>  ~[rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.ProcessorStateManager.checkpoint(ProcessorStateManager.java:324)
>  ~[rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamTask$1.run(StreamTask.java:267)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamsMetricsImpl.measureLatencyNs(StreamsMetricsImpl.java:201)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamTask.commit(StreamTask.java:260)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamTask.commit(StreamTask.java:254)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.AssignedTasks$1.apply(AssignedTasks.java:322)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.AssignedTasks.applyToRunningTasks(AssignedTasks.java:415)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.AssignedTasks.commit(AssignedTasks.java:314)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.commitAll(StreamThread.java:700)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.maybeCommit(StreamThread.java:683)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.i

[jira] [Comment Edited] (KAFKA-6675) Connect workers should log plugin path and available plugins more clearly

2019-03-19 Thread Valeria Vasylieva (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-6675?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16795993#comment-16795993
 ] 

Valeria Vasylieva edited comment on KAFKA-6675 at 3/19/19 11:30 AM:


[~rhauch] I have investigated the actual log:

If we have this configuration:
{code:java}
plugin.path=/etc/kafka/kafka_2.11-2.1.0/connect-plugins, 
/etc/kafka/kafka_2.11-2.1.0/connect-plugins-debezium
{code}
then the log contains the following output for plugin.path plugins:
{code:java}
[2019-03-19 13:32:47,277] INFO Registered loader: 
PluginClassLoader{pluginLocation=file:/etc/kafka/kafka_2.11-2.1.0/connect-plugins/confluentinc-kafka-connect-cassandra-1.0.2/}
 (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:243)
[2019-03-19 13:32:47,278] INFO Added plugin 
'io.confluent.connect.cassandra.CassandraSinkConnector' 
(org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:172)
[2019-03-19 13:32:47,279] INFO Loading plugin from: 
/etc/kafka/kafka_2.11-2.1.0/connect-plugins/confluentinc-kafka-connect-avro-converter-5.1.2
 (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:220)
[2019-03-19 13:32:47,694] INFO Registered loader: 
PluginClassLoader{pluginLocation=file:/etc/kafka/kafka_2.11-2.1.0/connect-plugins/confluentinc-kafka-connect-avro-converter-5.1.2/}
 (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:243)
[2019-03-19 13:32:47,694] INFO Added plugin 
'io.confluent.connect.avro.AvroConverter' 
(org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:172)
[2019-03-19 13:32:47,695] INFO Loading plugin from: 
/etc/kafka/kafka_2.11-2.1.0/connect-plugins-debezium/debezium-connector-mysql 
(org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:220)
[2019-03-19 13:32:48,083] INFO Registered loader: 
PluginClassLoader{pluginLocation=file:/etc/kafka/kafka_2.11-2.1.0/connect-plugins-debezium/debezium-connector-mysql/}
 (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:243)
[2019-03-19 13:32:48,090] INFO Added plugin 
'io.debezium.connector.mysql.MySqlConnector' 
(org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:172)
[2019-03-19 13:32:50,128] INFO Registered loader: 
sun.misc.Launcher$AppClassLoader@764c12b6 
(org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:243)
[2019-03-19 13:32:50,131] INFO Added plugin 
'org.apache.kafka.connect.tools.MockSinkConnector' 
(org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:172)
...
[2019-03-19 13:32:50,161] INFO Added aliases 'CassandraSinkConnector' and 
'CassandraSink' to plugin 
'io.confluent.connect.cassandra.CassandraSinkConnector' 
(org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:390)
[2019-03-19 13:32:50,161] INFO Added aliases 'MySqlConnector' and 'MySql' to 
plugin 'io.debezium.connector.mysql.MySqlConnector' 
(org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:390){code}
and the following output for classpath plugins:
{code:java}
[2019-03-19 13:32:50,131] INFO Added plugin 
'org.apache.kafka.connect.tools.MockSinkConnector' 
(org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:172)
[2019-03-19 13:32:50,132] INFO Added plugin 
'org.apache.kafka.connect.tools.MockConnector' 
(org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:172)
...
[2019-03-19 13:32:50,163] INFO Added aliases 'MockConnector' and 'Mock' to 
plugin 'org.apache.kafka.connect.tools.MockConnector' 
(org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:390)
[2019-03-19 13:32:50,164] INFO Added aliases 'MockSinkConnector' and 'MockSink' 
to plugin 'org.apache.kafka.connect.tools.MockSinkConnector' 
(org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:390)
{code}
 

Is the goal of this task just to make the output clearer and prettier? Almost 
all of the information you listed (including the path) is already available in 
the log.


was (Author: nimfadora):
[~rhauch] I have investigated the actual log:

If we have this configuration:

 
{code:java}
plugin.path=/etc/kafka/kafka_2.11-2.1.0/connect-plugins, 
/etc/kafka/kafka_2.11-2.1.0/connect-plugins-debezium
{code}
then the log contains the following output for plugin.path plugins:

 

 
{code:java}
[2019-03-19 13:32:47,277] INFO Registered loader: 
PluginClassLoader{pluginLocation=file:/etc/kafka/kafka_2.11-2.1.0/connect-plugins/confluentinc-kafka-connect-cassandra-1.0.2/}
 (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:243)
[2019-03-19 13:32:47,278] INFO Added plugin 
'io.confluent.connect.cassandra.CassandraSinkConnector' 
(org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:172)
[2019-03-19 13:32:47,279] INFO Loading plugin from: 
/etc/kafka/kafka_2.11-2.1.0/connect-plugins/confluentinc-kafka-connect-avro-converter-5.1.2
 (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:220)
[2019-03-19 13:32:47,694] INFO Registered loader

[jira] [Commented] (KAFKA-6675) Connect workers should log plugin path and available plugins more clearly

2019-03-19 Thread Valeria Vasylieva (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-6675?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16795993#comment-16795993
 ] 

Valeria Vasylieva commented on KAFKA-6675:
--

[~rhauch] I have investigated the actual log:

If we have this configuration:
{code:java}
plugin.path=/etc/kafka/kafka_2.11-2.1.0/connect-plugins, 
/etc/kafka/kafka_2.11-2.1.0/connect-plugins-debezium
{code}
then the log contains the following output for plugin.path plugins:
{code:java}
[2019-03-19 13:32:47,277] INFO Registered loader: 
PluginClassLoader{pluginLocation=file:/etc/kafka/kafka_2.11-2.1.0/connect-plugins/confluentinc-kafka-connect-cassandra-1.0.2/}
 (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:243)
[2019-03-19 13:32:47,278] INFO Added plugin 
'io.confluent.connect.cassandra.CassandraSinkConnector' 
(org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:172)
[2019-03-19 13:32:47,279] INFO Loading plugin from: 
/etc/kafka/kafka_2.11-2.1.0/connect-plugins/confluentinc-kafka-connect-avro-converter-5.1.2
 (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:220)
[2019-03-19 13:32:47,694] INFO Registered loader: 
PluginClassLoader{pluginLocation=file:/etc/kafka/kafka_2.11-2.1.0/connect-plugins/confluentinc-kafka-connect-avro-converter-5.1.2/}
 (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:243)
[2019-03-19 13:32:47,694] INFO Added plugin 
'io.confluent.connect.avro.AvroConverter' 
(org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:172)
[2019-03-19 13:32:47,695] INFO Loading plugin from: 
/etc/kafka/kafka_2.11-2.1.0/connect-plugins-debezium/debezium-connector-mysql 
(org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:220)
[2019-03-19 13:32:48,083] INFO Registered loader: 
PluginClassLoader{pluginLocation=file:/etc/kafka/kafka_2.11-2.1.0/connect-plugins-debezium/debezium-connector-mysql/}
 (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:243)
[2019-03-19 13:32:48,090] INFO Added plugin 
'io.debezium.connector.mysql.MySqlConnector' 
(org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:172)
[2019-03-19 13:32:50,128] INFO Registered loader: 
sun.misc.Launcher$AppClassLoader@764c12b6 
(org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:243)
[2019-03-19 13:32:50,131] INFO Added plugin 
'org.apache.kafka.connect.tools.MockSinkConnector' 
(org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:172)
...
[2019-03-19 13:32:50,161] INFO Added aliases 'CassandraSinkConnector' and 
'CassandraSink' to plugin 
'io.confluent.connect.cassandra.CassandraSinkConnector' 
(org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:390)
[2019-03-19 13:32:50,161] INFO Added aliases 'MySqlConnector' and 'MySql' to 
plugin 'io.debezium.connector.mysql.MySqlConnector' 
(org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:390){code}
and the following output for classpath plugins:
{code:java}
[2019-03-19 13:32:50,131] INFO Added plugin 
'org.apache.kafka.connect.tools.MockSinkConnector' 
(org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:172)
[2019-03-19 13:32:50,132] INFO Added plugin 
'org.apache.kafka.connect.tools.MockConnector' 
(org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:172)
...
[2019-03-19 13:32:50,163] INFO Added aliases 'MockConnector' and 'Mock' to 
plugin 'org.apache.kafka.connect.tools.MockConnector' 
(org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:390)
[2019-03-19 13:32:50,164] INFO Added aliases 'MockSinkConnector' and 'MockSink' 
to plugin 'org.apache.kafka.connect.tools.MockSinkConnector' 
(org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:390)
{code}
 

Is the goal of this task just to make the output clearer and prettier? Almost 
all of the information you listed (including the path) is already available in 
the log.

> Connect workers should log plugin path and available plugins more clearly
> -
>
> Key: KAFKA-6675
> URL: https://issues.apache.org/jira/browse/KAFKA-6675
> Project: Kafka
>  Issue Type: Improvement
>  Components: KafkaConnect
>Affects Versions: 0.11.0.1
>Reporter: Randall Hauch
>Assignee: Valeria Vasylieva
>Priority: Minor
>
> Users struggle with setting the plugin path and properly installing plugins. 
> If users get any of this wrong, they get strange errors only after they run 
> the worker and attempt to deploy connectors or use transformations. 
> The Connect worker should more obviously output the plugin path directories 
> and the available plugins. For example, if the {{plugin.path}} were:
> {code}
> plugin.path=/usr/local/share/java,/usr/local/plugins
> {code}
> then the worker might output something like the following information in the 
>

[jira] [Comment Edited] (KAFKA-6675) Connect workers should log plugin path and available plugins more clearly

2019-03-19 Thread Valeria Vasylieva (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-6675?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16795993#comment-16795993
 ] 

Valeria Vasylieva edited comment on KAFKA-6675 at 3/19/19 11:39 AM:


[~rhauch] I have investigated the actual log:

If we have the following configuration:
{code:java}
plugin.path=/etc/kafka/kafka_2.11-2.1.0/connect-plugins, 
/etc/kafka/kafka_2.11-2.1.0/connect-plugins-debezium
{code}
then we get the following output for plugin.path plugins: 
{code:java}
[2019-03-19 13:32:47,277] INFO Registered loader: 
PluginClassLoader{pluginLocation=file:/etc/kafka/kafka_2.11-2.1.0/connect-plugins/confluentinc-kafka-connect-cassandra-1.0.2/}
 (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:243)
[2019-03-19 13:32:47,278] INFO Added plugin 
'io.confluent.connect.cassandra.CassandraSinkConnector' 
(org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:172)
[2019-03-19 13:32:47,279] INFO Loading plugin from: 
/etc/kafka/kafka_2.11-2.1.0/connect-plugins/confluentinc-kafka-connect-avro-converter-5.1.2
 (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:220)
[2019-03-19 13:32:47,694] INFO Registered loader: 
PluginClassLoader{pluginLocation=file:/etc/kafka/kafka_2.11-2.1.0/connect-plugins/confluentinc-kafka-connect-avro-converter-5.1.2/}
 (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:243)
[2019-03-19 13:32:47,694] INFO Added plugin 
'io.confluent.connect.avro.AvroConverter' 
(org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:172)
[2019-03-19 13:32:47,695] INFO Loading plugin from: 
/etc/kafka/kafka_2.11-2.1.0/connect-plugins-debezium/debezium-connector-mysql 
(org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:220)
[2019-03-19 13:32:48,083] INFO Registered loader: 
PluginClassLoader{pluginLocation=file:/etc/kafka/kafka_2.11-2.1.0/connect-plugins-debezium/debezium-connector-mysql/}
 (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:243)
[2019-03-19 13:32:48,090] INFO Added plugin 
'io.debezium.connector.mysql.MySqlConnector' 
(org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:172)
[2019-03-19 13:32:50,128] INFO Registered loader: 
sun.misc.Launcher$AppClassLoader@764c12b6 
(org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:243)
[2019-03-19 13:32:50,131] INFO Added plugin 
'org.apache.kafka.connect.tools.MockSinkConnector' 
(org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:172)
...
[2019-03-19 13:32:50,161] INFO Added aliases 'CassandraSinkConnector' and 
'CassandraSink' to plugin 
'io.confluent.connect.cassandra.CassandraSinkConnector' 
(org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:390)
[2019-03-19 13:32:50,161] INFO Added aliases 'MySqlConnector' and 'MySql' to 
plugin 'io.debezium.connector.mysql.MySqlConnector' 
(org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:390){code}
and the following output for classpath plugins: 
{code:java}
[2019-03-19 13:32:50,131] INFO Added plugin 
'org.apache.kafka.connect.tools.MockSinkConnector' 
(org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:172)
[2019-03-19 13:32:50,132] INFO Added plugin 
'org.apache.kafka.connect.tools.MockConnector' 
(org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:172)
...
[2019-03-19 13:32:50,163] INFO Added aliases 'MockConnector' and 'Mock' to 
plugin 'org.apache.kafka.connect.tools.MockConnector' 
(org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:390)
[2019-03-19 13:32:50,164] INFO Added aliases 'MockSinkConnector' and 'MockSink' 
to plugin 'org.apache.kafka.connect.tools.MockSinkConnector' 
(org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:390)
{code}
 

Could you please clarify the goal of this task? Almost all the information 
(including the path) that you stated is already available in the log. Or is the 
goal to make the output clearer and prettier? 


was (Author: nimfadora):
[~rhauch] I have investigated the actual log:

if we have such configuration:
{code:java}
plugin.path=/etc/kafka/kafka_2.11-2.1.0/connect-plugins, 
/etc/kafka/kafka_2.11-2.1.0/connect-plugins-debezium
{code}
then, we have such output for plugin.path plugins: 
{code:java}
[2019-03-19 13:32:47,277] INFO Registered loader: 
PluginClassLoader{pluginLocation=file:/etc/kafka/kafka_2.11-2.1.0/connect-plugins/confluentinc-kafka-connect-cassandra-1.0.2/}
 (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:243)
[2019-03-19 13:32:47,278] INFO Added plugin 
'io.confluent.connect.cassandra.CassandraSinkConnector' 
(org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:172)
[2019-03-19 13:32:47,279] INFO Loading plugin from: 
/etc/kafka/kafka_2.11-2.1.0/connect-plugins/confluentinc-kafka-connect-avro-converter-5.1.2
 (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:220)
[2019-03-19 13:32:47,69

[jira] [Commented] (KAFKA-8127) It may need to import scala.io

2019-03-19 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16796000#comment-16796000
 ] 

ASF GitHub Bot commented on KAFKA-8127:
---

hejiefang commented on pull request #6470: KAFKA-8127 It may need to import 
scala.io
URL: https://github.com/apache/kafka/pull/6470
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> It may need to import scala.io
> --
>
> Key: KAFKA-8127
> URL: https://issues.apache.org/jira/browse/KAFKA-8127
> Project: Kafka
>  Issue Type: Improvement
>Reporter: JieFang.He
>Priority: Major
>
> I get an error when compiling Kafka, which disappears when importing scala.io
>  
> {code:java}
> D:\gerrit\Kafka\core\src\main\scala\kafka\tools\StateChangeLogMerger.scala:140:
>  object Source is not a member of package io
> val lineIterators = files.map(io.Source.fromFile(_).getLines)
> ^
> 6 warnings found
> one error found
> :core:compileScala FAILED
> FAILURE: Build failed with an exception.
> * What went wrong:
> Execution failed for task ':core:compileScala'.
> > Compilation failed
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (KAFKA-7989) Flaky Test RequestQuotaTest#testResponseThrottleTimeWhenBothFetchAndRequestQuotasViolated

2019-03-19 Thread Rajini Sivaram (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7989?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajini Sivaram reassigned KAFKA-7989:
-

Assignee: Rajini Sivaram

> Flaky Test 
> RequestQuotaTest#testResponseThrottleTimeWhenBothFetchAndRequestQuotasViolated
> -
>
> Key: KAFKA-7989
> URL: https://issues.apache.org/jira/browse/KAFKA-7989
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.2.0
>Reporter: Matthias J. Sax
>Assignee: Rajini Sivaram
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0, 2.2.1
>
>
> To get stable nightly builds for the `2.2` release, I am creating tickets for 
> all observed test failures.
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/27/]
> {quote}java.util.concurrent.ExecutionException: java.lang.AssertionError: 
> Throttle time metrics for consumer quota not updated: 
> small-quota-consumer-client at 
> java.util.concurrent.FutureTask.report(FutureTask.java:122) at 
> java.util.concurrent.FutureTask.get(FutureTask.java:206) at 
> kafka.server.RequestQuotaTest.$anonfun$waitAndCheckResults$1(RequestQuotaTest.scala:415)
>  at scala.collection.immutable.List.foreach(List.scala:392) at 
> scala.collection.generic.TraversableForwarder.foreach(TraversableForwarder.scala:38)
>  at 
> scala.collection.generic.TraversableForwarder.foreach$(TraversableForwarder.scala:38)
>  at scala.collection.mutable.ListBuffer.foreach(ListBuffer.scala:47) at 
> kafka.server.RequestQuotaTest.waitAndCheckResults(RequestQuotaTest.scala:413) 
> at 
> kafka.server.RequestQuotaTest.testResponseThrottleTimeWhenBothFetchAndRequestQuotasViolated(RequestQuotaTest.scala:134){quote}





[jira] [Comment Edited] (KAFKA-8127) It may need to import scala.io

2019-03-19 Thread Slim Ouertani (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16796005#comment-16796005
 ] 

Slim Ouertani edited comment on KAFKA-8127 at 3/19/19 12:01 PM:


What is the command used to build, and on which OS? 

>> io is part of the default scala._ implicit import


was (Author: ouertani):
What the the command used to build and the OS ? 

>> io is part of default scala imported pkg
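
As the comment notes, `scala._` is in implicit scope, so `io.Source` normally 
resolves to `scala.io.Source`. A minimal standalone sketch (not the Kafka build 
itself; file names are placeholders) of why the explicit import is the safer 
spelling:

```scala
// Sketch: scala._ is imported implicitly, so `io.Source` usually works,
// but an explicit import stays unambiguous even if another package named
// `io` shadows scala.io at the use site.
import scala.io.Source

object LineMerge {
  def main(args: Array[String]): Unit = {
    // Mirrors the failing expression from StateChangeLogMerger.scala:140
    val lineIterators = args.toList.map(Source.fromFile(_).getLines)
    lineIterators.foreach(_.foreach(println))
  }
}
```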

> It may need to import scala.io
> --
>
> Key: KAFKA-8127
> URL: https://issues.apache.org/jira/browse/KAFKA-8127
> Project: Kafka
>  Issue Type: Improvement
>Reporter: JieFang.He
>Priority: Major
>
> I get an error when compiling Kafka, which disappears when importing scala.io
>  
> {code:java}
> D:\gerrit\Kafka\core\src\main\scala\kafka\tools\StateChangeLogMerger.scala:140:
>  object Source is not a member of package io
> val lineIterators = files.map(io.Source.fromFile(_).getLines)
> ^
> 6 warnings found
> one error found
> :core:compileScala FAILED
> FAILURE: Build failed with an exception.
> * What went wrong:
> Execution failed for task ':core:compileScala'.
> > Compilation failed
> {code}





[jira] [Commented] (KAFKA-8127) It may need to import scala.io

2019-03-19 Thread Slim Ouertani (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16796005#comment-16796005
 ] 

Slim Ouertani commented on KAFKA-8127:
--

What is the command used to build, and on which OS? 

>> io is part of the default scala implicit import

> It may need to import scala.io
> --
>
> Key: KAFKA-8127
> URL: https://issues.apache.org/jira/browse/KAFKA-8127
> Project: Kafka
>  Issue Type: Improvement
>Reporter: JieFang.He
>Priority: Major
>
> I get an error when compiling Kafka, which disappears when importing scala.io
>  
> {code:java}
> D:\gerrit\Kafka\core\src\main\scala\kafka\tools\StateChangeLogMerger.scala:140:
>  object Source is not a member of package io
> val lineIterators = files.map(io.Source.fromFile(_).getLines)
> ^
> 6 warnings found
> one error found
> :core:compileScala FAILED
> FAILURE: Build failed with an exception.
> * What went wrong:
> Execution failed for task ':core:compileScala'.
> > Compilation failed
> {code}





[jira] [Assigned] (KAFKA-8062) StateListener is not notified when StreamThread dies

2019-03-19 Thread Bill Bejeck (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8062?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck reassigned KAFKA-8062:
--

Assignee: Guozhang Wang  (was: Bill Bejeck)

> StateListener is not notified when StreamThread dies
> 
>
> Key: KAFKA-8062
> URL: https://issues.apache.org/jira/browse/KAFKA-8062
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.1.1
> Environment: Kafka 2.1.1, kafka-streams-scala version 2.1.1
>Reporter: Andrey Volkov
>Assignee: Guozhang Wang
>Priority: Minor
>
> I want my application to react when streams die, so I am trying to use 
> KafkaStreams.setStateListener and also checking KafkaStreams.state() from 
> time to time.
> The test scenario: Kafka is available, but there are no topics that my 
> Topology is supposed to use.
> I expect streams to crash and the state listener to be notified about that, 
> with the new state ERROR. KafkaStreams.state() should also return ERROR.
> In reality the streams crash, but the KafkaStreams.state() method always 
> returns REBALANCING and the last time the StateListener was called, the new 
> state was also REBALANCING. 
>  
> I believe the reason for this lies in two methods:
> org.apache.kafka.streams.KafkaStreams.StreamStateListener.onChange(), which 
> does not react to the state StreamsThread.State.PENDING_SHUTDOWN,
> and
> org.apache.kafka.streams.processor.internals.StreamThread.RebalanceListener.onPartitionsAssigned,
>  which calls shutdown(), setting the state to PENDING_SHUTDOWN, and then
> streamThread.setStateListener(null), effectively removing the state listener, 
> so that the DEAD state of the thread never reaches the KafkaStreams object.
> Here is an extract from the logs:
> {{14:57:03.272 [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] 
> ERROR o.a.k.s.p.i.StreamsPartitionAssignor - stream-thread 
> [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1-consumer] 
> test-input-topic is unknown yet during rebalance, please make sure they have 
> been pre-created before starting the Streams application.}}
> {{14:57:03.283 [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] 
> INFO o.a.k.c.c.i.AbstractCoordinator - [Consumer 
> clientId=Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1-consumer, 
> groupId=Test] Successfully joined group with generation 1}}
> {{14:57:03.284 [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] 
> INFO o.a.k.c.c.i.ConsumerCoordinator - [Consumer 
> clientId=Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1-consumer, 
> groupId=Test] Setting newly assigned partitions []}}
> {{14:57:03.285 [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] 
> INFO o.a.k.s.p.i.StreamThread - stream-thread 
> [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] Informed to shut 
> down}}
> {{14:57:03.285 [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] 
> INFO o.a.k.s.p.i.StreamThread - stream-thread 
> [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] State transition 
> from PARTITIONS_REVOKED to PENDING_SHUTDOWN}}
> {{14:57:03.316 [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] 
> INFO o.a.k.s.p.i.StreamThread - stream-thread 
> [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] Shutting down}}
> {{14:57:03.317 [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] 
> INFO o.a.k.c.c.KafkaConsumer - [Consumer 
> clientId=Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1-restore-consumer,
>  groupId=] Unsubscribed all topics or patterns and assigned partitions}}
> {{14:57:03.317 [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] 
> INFO o.a.k.c.p.KafkaProducer - [Producer 
> clientId=Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1-producer] 
> Closing the Kafka producer with timeoutMillis = 9223372036854775807 ms.}}
> {{14:57:03.332 [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] 
> INFO o.a.k.s.p.i.StreamThread - stream-thread 
> [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] State transition 
> from PENDING_SHUTDOWN to DEAD}}
> {{14:57:03.332 [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] 
> INFO o.a.k.s.p.i.StreamThread - stream-thread 
> [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] Shutdown complete}}
> After this, calls to KafkaStreams.state() still return REBALANCING.
> There is a workaround: request KafkaStreams.localThreadsMetadata() and check 
> each thread's state manually, but that seems very wrong.
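
The workaround mentioned above can be sketched as follows (a hedged sketch 
against the 2.1.x public API, where ThreadMetadata.threadState() returns the 
state name as a String; the helper name is ours):

```scala
// Poll each stream thread's state directly instead of trusting
// KafkaStreams.state(), which stays stuck at REBALANCING here.
import org.apache.kafka.streams.KafkaStreams
import scala.collection.JavaConverters._

def anyThreadDead(streams: KafkaStreams): Boolean =
  streams.localThreadsMetadata().asScala.exists(_.threadState == "DEAD")
```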





[jira] [Commented] (KAFKA-8062) StateListener is not notified when StreamThread dies

2019-03-19 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8062?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16796126#comment-16796126
 ] 

ASF GitHub Bot commented on KAFKA-8062:
---

bbejeck commented on pull request #6468: KAFKA-8062: Do not remove 
StateListener when shutting down stream thread
URL: https://github.com/apache/kafka/pull/6468
 
 
   
 



> StateListener is not notified when StreamThread dies
> 
>
> Key: KAFKA-8062
> URL: https://issues.apache.org/jira/browse/KAFKA-8062
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.1.1
> Environment: Kafka 2.1.1, kafka-streams-scala version 2.1.1
>Reporter: Andrey Volkov
>Assignee: Bill Bejeck
>Priority: Minor
>
> I want my application to react when streams die, so I am trying to use 
> KafkaStreams.setStateListener and also checking KafkaStreams.state() from 
> time to time.
> The test scenario: Kafka is available, but there are no topics that my 
> Topology is supposed to use.
> I expect streams to crash and the state listener to be notified about that, 
> with the new state ERROR. KafkaStreams.state() should also return ERROR.
> In reality the streams crash, but the KafkaStreams.state() method always 
> returns REBALANCING and the last time the StateListener was called, the new 
> state was also REBALANCING. 
>  
> I believe the reason for this lies in two methods:
> org.apache.kafka.streams.KafkaStreams.StreamStateListener.onChange(), which 
> does not react to the state StreamsThread.State.PENDING_SHUTDOWN,
> and
> org.apache.kafka.streams.processor.internals.StreamThread.RebalanceListener.onPartitionsAssigned,
>  which calls shutdown(), setting the state to PENDING_SHUTDOWN, and then
> streamThread.setStateListener(null), effectively removing the state listener, 
> so that the DEAD state of the thread never reaches the KafkaStreams object.
> Here is an extract from the logs:
> {{14:57:03.272 [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] 
> ERROR o.a.k.s.p.i.StreamsPartitionAssignor - stream-thread 
> [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1-consumer] 
> test-input-topic is unknown yet during rebalance, please make sure they have 
> been pre-created before starting the Streams application.}}
> {{14:57:03.283 [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] 
> INFO o.a.k.c.c.i.AbstractCoordinator - [Consumer 
> clientId=Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1-consumer, 
> groupId=Test] Successfully joined group with generation 1}}
> {{14:57:03.284 [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] 
> INFO o.a.k.c.c.i.ConsumerCoordinator - [Consumer 
> clientId=Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1-consumer, 
> groupId=Test] Setting newly assigned partitions []}}
> {{14:57:03.285 [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] 
> INFO o.a.k.s.p.i.StreamThread - stream-thread 
> [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] Informed to shut 
> down}}
> {{14:57:03.285 [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] 
> INFO o.a.k.s.p.i.StreamThread - stream-thread 
> [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] State transition 
> from PARTITIONS_REVOKED to PENDING_SHUTDOWN}}
> {{14:57:03.316 [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] 
> INFO o.a.k.s.p.i.StreamThread - stream-thread 
> [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] Shutting down}}
> {{14:57:03.317 [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] 
> INFO o.a.k.c.c.KafkaConsumer - [Consumer 
> clientId=Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1-restore-consumer,
>  groupId=] Unsubscribed all topics or patterns and assigned partitions}}
> {{14:57:03.317 [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] 
> INFO o.a.k.c.p.KafkaProducer - [Producer 
> clientId=Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1-producer] 
> Closing the Kafka producer with timeoutMillis = 9223372036854775807 ms.}}
> {{14:57:03.332 [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] 
> INFO o.a.k.s.p.i.StreamThread - stream-thread 
> [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] State transition 
> from PENDING_SHUTDOWN to DEAD}}
> {{14:57:03.332 [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] 
> INFO o.a.k.s.p.i.StreamThread - stream-thread 
> [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] Shutdown complete}}
> After this c

[jira] [Resolved] (KAFKA-8062) StateListener is not notified when StreamThread dies

2019-03-19 Thread Bill Bejeck (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8062?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck resolved KAFKA-8062.

Resolution: Fixed

> StateListener is not notified when StreamThread dies
> 
>
> Key: KAFKA-8062
> URL: https://issues.apache.org/jira/browse/KAFKA-8062
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.1.1
> Environment: Kafka 2.1.1, kafka-streams-scala version 2.1.1
>Reporter: Andrey Volkov
>Assignee: Guozhang Wang
>Priority: Minor
>
> I want my application to react when streams die, so I am trying to use 
> KafkaStreams.setStateListener and also checking KafkaStreams.state() from 
> time to time.
> The test scenario: Kafka is available, but there are no topics that my 
> Topology is supposed to use.
> I expect streams to crash and the state listener to be notified about that, 
> with the new state ERROR. KafkaStreams.state() should also return ERROR.
> In reality the streams crash, but the KafkaStreams.state() method always 
> returns REBALANCING and the last time the StateListener was called, the new 
> state was also REBALANCING. 
>  
> I believe the reason for this lies in two methods:
> org.apache.kafka.streams.KafkaStreams.StreamStateListener.onChange(), which 
> does not react to the state StreamsThread.State.PENDING_SHUTDOWN,
> and
> org.apache.kafka.streams.processor.internals.StreamThread.RebalanceListener.onPartitionsAssigned,
>  which calls shutdown(), setting the state to PENDING_SHUTDOWN, and then
> streamThread.setStateListener(null), effectively removing the state listener, 
> so that the DEAD state of the thread never reaches the KafkaStreams object.
> Here is an extract from the logs:
> {{14:57:03.272 [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] 
> ERROR o.a.k.s.p.i.StreamsPartitionAssignor - stream-thread 
> [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1-consumer] 
> test-input-topic is unknown yet during rebalance, please make sure they have 
> been pre-created before starting the Streams application.}}
> {{14:57:03.283 [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] 
> INFO o.a.k.c.c.i.AbstractCoordinator - [Consumer 
> clientId=Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1-consumer, 
> groupId=Test] Successfully joined group with generation 1}}
> {{14:57:03.284 [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] 
> INFO o.a.k.c.c.i.ConsumerCoordinator - [Consumer 
> clientId=Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1-consumer, 
> groupId=Test] Setting newly assigned partitions []}}
> {{14:57:03.285 [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] 
> INFO o.a.k.s.p.i.StreamThread - stream-thread 
> [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] Informed to shut 
> down}}
> {{14:57:03.285 [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] 
> INFO o.a.k.s.p.i.StreamThread - stream-thread 
> [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] State transition 
> from PARTITIONS_REVOKED to PENDING_SHUTDOWN}}
> {{14:57:03.316 [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] 
> INFO o.a.k.s.p.i.StreamThread - stream-thread 
> [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] Shutting down}}
> {{14:57:03.317 [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] 
> INFO o.a.k.c.c.KafkaConsumer - [Consumer 
> clientId=Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1-restore-consumer,
>  groupId=] Unsubscribed all topics or patterns and assigned partitions}}
> {{14:57:03.317 [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] 
> INFO o.a.k.c.p.KafkaProducer - [Producer 
> clientId=Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1-producer] 
> Closing the Kafka producer with timeoutMillis = 9223372036854775807 ms.}}
> {{14:57:03.332 [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] 
> INFO o.a.k.s.p.i.StreamThread - stream-thread 
> [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] State transition 
> from PENDING_SHUTDOWN to DEAD}}
> {{14:57:03.332 [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] 
> INFO o.a.k.s.p.i.StreamThread - stream-thread 
> [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] Shutdown complete}}
> After this, calls to KafkaStreams.state() still return REBALANCING.
> There is a workaround: request KafkaStreams.localThreadsMetadata() and check 
> each thread's state manually, but that seems very wrong.
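
For reference, the listener registration the reporter describes looks roughly 
like this (a sketch against the public KafkaStreams API; with the fix above, 
the ERROR transition should now reach it):

```scala
// Register a state listener and watch for the ERROR state.
import org.apache.kafka.streams.KafkaStreams

def watchForDeath(streams: KafkaStreams): Unit =
  streams.setStateListener(new KafkaStreams.StateListener {
    override def onChange(newState: KafkaStreams.State,
                          oldState: KafkaStreams.State): Unit =
      if (newState == KafkaStreams.State.ERROR)
        println(s"Streams transitioned $oldState -> $newState")
  })
```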





[jira] [Created] (KAFKA-8128) Dynamic delegation token change possibility for consumer/producer

2019-03-19 Thread Gabor Somogyi (JIRA)
Gabor Somogyi created KAFKA-8128:


 Summary: Dynamic delegation token change possibility for 
consumer/producer
 Key: KAFKA-8128
 URL: https://issues.apache.org/jira/browse/KAFKA-8128
 Project: Kafka
  Issue Type: Improvement
Reporter: Gabor Somogyi


The re-authentication feature on the broker side is under implementation; it 
will force consumer/producer instances to re-authenticate from time to time. It 
would be good to be able to set the latest delegation token dynamically instead 
of re-creating consumer/producer instances.
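
Today the token is wired into the client statically via the JAAS configuration, 
along the lines of the standard SCRAM token setup (tokenID/HMAC values below 
are placeholders):

```properties
sasl.mechanism=SCRAM-SHA-256
security.protocol=SASL_SSL
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
  username="<tokenID>" \
  password="<token-HMAC>" \
  tokenauth="true";
```

Because sasl.jaas.config is read when the client is constructed, rotating the 
token today effectively means creating a new consumer/producer instance, which 
is what this ticket proposes to avoid.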





[jira] [Commented] (KAFKA-8128) Dynamic delegation token change possibility for consumer/producer

2019-03-19 Thread Gabor Somogyi (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8128?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16796147#comment-16796147
 ] 

Gabor Somogyi commented on KAFKA-8128:
--

cc [~viktorsomogyi] maybe interesting for you.

> Dynamic delegation token change possibility for consumer/producer
> -
>
> Key: KAFKA-8128
> URL: https://issues.apache.org/jira/browse/KAFKA-8128
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Gabor Somogyi
>Priority: Major
>
> The re-authentication feature on the broker side is under implementation; it 
> will force consumer/producer instances to re-authenticate from time to time. 
> It would be good to be able to set the latest delegation token dynamically 
> instead of re-creating consumer/producer instances.





[jira] [Updated] (KAFKA-8128) Dynamic delegation token change possibility for consumer/producer

2019-03-19 Thread Gabor Somogyi (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8128?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Somogyi updated KAFKA-8128:
-
Affects Version/s: 2.2.0

> Dynamic delegation token change possibility for consumer/producer
> -
>
> Key: KAFKA-8128
> URL: https://issues.apache.org/jira/browse/KAFKA-8128
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 2.2.0
>Reporter: Gabor Somogyi
>Priority: Major
>
> The re-authentication feature on the broker side is under implementation; it 
> will force consumer/producer instances to re-authenticate from time to time. 
> It would be good to be able to set the latest delegation token dynamically 
> instead of re-creating consumer/producer instances.





[jira] [Commented] (KAFKA-8062) StateListener is not notified when StreamThread dies

2019-03-19 Thread Bill Bejeck (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8062?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16796140#comment-16796140
 ] 

Bill Bejeck commented on KAFKA-8062:


cherry-picked to 2.2 as well

> StateListener is not notified when StreamThread dies
> 
>
> Key: KAFKA-8062
> URL: https://issues.apache.org/jira/browse/KAFKA-8062
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.1.1
> Environment: Kafka 2.1.1, kafka-streams-scala version 2.1.1
>Reporter: Andrey Volkov
>Assignee: Guozhang Wang
>Priority: Minor
>
> I want my application to react when streams die, so I am trying to use 
> KafkaStreams.setStateListener and also checking KafkaStreams.state() from 
> time to time.
> The test scenario: Kafka is available, but there are no topics that my 
> Topology is supposed to use.
> I expect streams to crash and the state listener to be notified about that, 
> with the new state ERROR. KafkaStreams.state() should also return ERROR.
> In reality the streams crash, but the KafkaStreams.state() method always 
> returns REBALANCING and the last time the StateListener was called, the new 
> state was also REBALANCING. 
>  
> I believe the reason for this lies in two methods:
> org.apache.kafka.streams.KafkaStreams.StreamStateListener.onChange(), which 
> does not react to the state StreamsThread.State.PENDING_SHUTDOWN,
> and
> org.apache.kafka.streams.processor.internals.StreamThread.RebalanceListener.onPartitionsAssigned,
>  which calls shutdown(), setting the state to PENDING_SHUTDOWN, and then
> streamThread.setStateListener(null), effectively removing the state listener, 
> so that the DEAD state of the thread never reaches the KafkaStreams object.
> Here is an extract from the logs:
> {{14:57:03.272 [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] 
> ERROR o.a.k.s.p.i.StreamsPartitionAssignor - stream-thread 
> [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1-consumer] 
> test-input-topic is unknown yet during rebalance, please make sure they have 
> been pre-created before starting the Streams application.}}
> {{14:57:03.283 [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] 
> INFO o.a.k.c.c.i.AbstractCoordinator - [Consumer 
> clientId=Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1-consumer, 
> groupId=Test] Successfully joined group with generation 1}}
> {{14:57:03.284 [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] 
> INFO o.a.k.c.c.i.ConsumerCoordinator - [Consumer 
> clientId=Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1-consumer, 
> groupId=Test] Setting newly assigned partitions []}}
> {{14:57:03.285 [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] 
> INFO o.a.k.s.p.i.StreamThread - stream-thread 
> [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] Informed to shut 
> down}}
> {{14:57:03.285 [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] 
> INFO o.a.k.s.p.i.StreamThread - stream-thread 
> [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] State transition 
> from PARTITIONS_REVOKED to PENDING_SHUTDOWN}}
> {{14:57:03.316 [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] 
> INFO o.a.k.s.p.i.StreamThread - stream-thread 
> [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] Shutting down}}
> {{14:57:03.317 [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] 
> INFO o.a.k.c.c.KafkaConsumer - [Consumer 
> clientId=Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1-restore-consumer,
>  groupId=] Unsubscribed all topics or patterns and assigned partitions}}
> {{14:57:03.317 [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] 
> INFO o.a.k.c.p.KafkaProducer - [Producer 
> clientId=Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1-producer] 
> Closing the Kafka producer with timeoutMillis = 9223372036854775807 ms.}}
> {{14:57:03.332 [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] 
> INFO o.a.k.s.p.i.StreamThread - stream-thread 
> [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] State transition 
> from PENDING_SHUTDOWN to DEAD}}
> {{14:57:03.332 [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] 
> INFO o.a.k.s.p.i.StreamThread - stream-thread 
> [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] Shutdown complete}}
> After this, calls to KafkaStreams.state() still return REBALANCING.
> There is a workaround: request KafkaStreams.localThreadsMetadata() and 
> check each thread's state manually, but that seems very wrong.
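The listener-removal pattern described above can be reproduced with a minimal stand-alone sketch. Note that StateListener and WorkerThread below are simplified stand-ins for illustration, not Kafka's real classes:

```java
import java.util.concurrent.atomic.AtomicReference;

public class ListenerRemovalDemo {
    interface StateListener { void onChange(String newState); }

    static class WorkerThread {
        private final AtomicReference<StateListener> listener = new AtomicReference<>();
        String state = "RUNNING";

        void setStateListener(StateListener l) { listener.set(l); }

        private void setState(String s) {
            state = s;
            StateListener l = listener.get();
            if (l != null) l.onChange(s); // once the listener is null, nobody hears this
        }

        void shutdown() {
            setState("PENDING_SHUTDOWN");
            setStateListener(null);  // listener removed here...
            setState("DEAD");        // ...so the DEAD transition is never reported
        }
    }

    // Returns "lastObservedState / actualThreadState"
    static String simulate() {
        AtomicReference<String> observed = new AtomicReference<>("REBALANCING");
        WorkerThread t = new WorkerThread();
        t.setStateListener(observed::set);
        t.shutdown();
        return observed.get() + " / " + t.state;
    }

    public static void main(String[] args) {
        // The outer observer is stuck one state behind the thread's actual state.
        System.out.println(simulate());
    }
}
```

The observer never learns that the thread reached DEAD, mirroring how KafkaStreams keeps reporting a stale aggregate state once the thread's listener is nulled out.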



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (KAFKA-8094) Iterating over cache with get(key) is inefficient

2019-03-19 Thread Bill Bejeck (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8094?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck reassigned KAFKA-8094:
--

Assignee: Sophie Blee-Goldman

> Iterating over cache with get(key) is inefficient 
> --
>
> Key: KAFKA-8094
> URL: https://issues.apache.org/jira/browse/KAFKA-8094
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Sophie Blee-Goldman
>Assignee: Sophie Blee-Goldman
>Priority: Major
>  Labels: streams
>
> Currently, range queries in the caching layer are implemented by creating an 
> iterator over the subset of keys in the range, and calling get() on the 
> underlying TreeMap for each key. While this protects against 
> ConcurrentModificationException, we can improve performance by replacing the 
> TreeMap with a concurrent data structure such as ConcurrentSkipListMap and 
> then just iterating over a subMap.
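The proposal can be sketched in plain Java. This is a stand-alone illustration of iterating a subMap view over a ConcurrentSkipListMap, not the actual Streams cache code; the names here are hypothetical:

```java
import java.util.concurrent.ConcurrentSkipListMap;

public class SkipListRangeDemo {
    // Sum the values in the closed key range [from, to] using a single subMap
    // view instead of calling get() once per key.
    static long rangeSum(ConcurrentSkipListMap<String, Long> cache, String from, String to) {
        long sum = 0;
        // The subMap view is weakly consistent: concurrent writers never cause
        // ConcurrentModificationException, which is what made the per-key
        // get() approach necessary with a plain TreeMap.
        for (long v : cache.subMap(from, true, to, true).values()) {
            sum += v;
        }
        return sum;
    }

    public static void main(String[] args) {
        ConcurrentSkipListMap<String, Long> cache = new ConcurrentSkipListMap<>();
        cache.put("a", 1L);
        cache.put("b", 2L);
        cache.put("c", 3L);
        cache.put("d", 4L);
        System.out.println(rangeSum(cache, "b", "d")); // prints 9
    }
}
```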





[jira] [Resolved] (KAFKA-8094) Iterating over cache with get(key) is inefficient

2019-03-19 Thread Bill Bejeck (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8094?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck resolved KAFKA-8094.

Resolution: Fixed

> Iterating over cache with get(key) is inefficient 
> --
>
> Key: KAFKA-8094
> URL: https://issues.apache.org/jira/browse/KAFKA-8094
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Sophie Blee-Goldman
>Assignee: Sophie Blee-Goldman
>Priority: Major
>  Labels: streams
>
> Currently, range queries in the caching layer are implemented by creating an 
> iterator over the subset of keys in the range, and calling get() on the 
> underlying TreeMap for each key. While this protects against 
> ConcurrentModificationException, we can improve performance by replacing the 
> TreeMap with a concurrent data structure such as ConcurrentSkipListMap and 
> then just iterating over a subMap.





[jira] [Commented] (KAFKA-8094) Iterating over cache with get(key) is inefficient

2019-03-19 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8094?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16796215#comment-16796215
 ] 

ASF GitHub Bot commented on KAFKA-8094:
---

bbejeck commented on pull request #6433: KAFKA-8094: Iterating over cache with 
get(key) is inefficient
URL: https://github.com/apache/kafka/pull/6433
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Iterating over cache with get(key) is inefficient 
> --
>
> Key: KAFKA-8094
> URL: https://issues.apache.org/jira/browse/KAFKA-8094
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Sophie Blee-Goldman
>Priority: Major
>  Labels: streams
>
> Currently, range queries in the caching layer are implemented by creating an 
> iterator over the subset of keys in the range, and calling get() on the 
> underlying TreeMap for each key. While this protects against 
> ConcurrentModificationException, we can improve performance by replacing the 
> TreeMap with a concurrent data structure such as ConcurrentSkipListMap and 
> then just iterating over a subMap.





[jira] [Assigned] (KAFKA-7989) Flaky Test RequestQuotaTest#testResponseThrottleTimeWhenBothFetchAndRequestQuotasViolated

2019-03-19 Thread Rajini Sivaram (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7989?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajini Sivaram reassigned KAFKA-7989:
-

Assignee: Anna Povzner  (was: Rajini Sivaram)

> Flaky Test 
> RequestQuotaTest#testResponseThrottleTimeWhenBothFetchAndRequestQuotasViolated
> -
>
> Key: KAFKA-7989
> URL: https://issues.apache.org/jira/browse/KAFKA-7989
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.2.0
>Reporter: Matthias J. Sax
>Assignee: Anna Povzner
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0, 2.2.1
>
>
> To get stable nightly builds for `2.2` release, I create tickets for all 
> observed test failures.
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/27/]
> {quote}java.util.concurrent.ExecutionException: java.lang.AssertionError: 
> Throttle time metrics for consumer quota not updated: 
> small-quota-consumer-client at 
> java.util.concurrent.FutureTask.report(FutureTask.java:122) at 
> java.util.concurrent.FutureTask.get(FutureTask.java:206) at 
> kafka.server.RequestQuotaTest.$anonfun$waitAndCheckResults$1(RequestQuotaTest.scala:415)
>  at scala.collection.immutable.List.foreach(List.scala:392) at 
> scala.collection.generic.TraversableForwarder.foreach(TraversableForwarder.scala:38)
>  at 
> scala.collection.generic.TraversableForwarder.foreach$(TraversableForwarder.scala:38)
>  at scala.collection.mutable.ListBuffer.foreach(ListBuffer.scala:47) at 
> kafka.server.RequestQuotaTest.waitAndCheckResults(RequestQuotaTest.scala:413) 
> at 
> kafka.server.RequestQuotaTest.testResponseThrottleTimeWhenBothFetchAndRequestQuotasViolated(RequestQuotaTest.scala:134){quote}





[jira] [Assigned] (KAFKA-8123) Flaky Test RequestQuotaTest#testResponseThrottleTimeWhenBothProduceAndRequestQuotasViolated

2019-03-19 Thread Rajini Sivaram (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8123?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajini Sivaram reassigned KAFKA-8123:
-

Assignee: Anna Povzner  (was: Rajini Sivaram)

> Flaky Test 
> RequestQuotaTest#testResponseThrottleTimeWhenBothProduceAndRequestQuotasViolated
>  
> 
>
> Key: KAFKA-8123
> URL: https://issues.apache.org/jira/browse/KAFKA-8123
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.3.0
>Reporter: Matthias J. Sax
>Assignee: Anna Povzner
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0
>
>
> [https://builds.apache.org/blue/organizations/jenkins/kafka-trunk-jdk8/detail/kafka-trunk-jdk8/3474/tests]
> {quote}java.util.concurrent.ExecutionException: java.lang.AssertionError: 
> Throttle time metrics for produce quota not updated: Client 
> small-quota-producer-client apiKey PRODUCE requests 1 requestTime 
> 0.015790873650539786 throttleTime 1000.0
> at java.util.concurrent.FutureTask.report(FutureTask.java:122)
> at java.util.concurrent.FutureTask.get(FutureTask.java:206)
> at 
> kafka.server.RequestQuotaTest$$anonfun$waitAndCheckResults$1.apply(RequestQuotaTest.scala:423)
> at 
> kafka.server.RequestQuotaTest$$anonfun$waitAndCheckResults$1.apply(RequestQuotaTest.scala:421)
> at scala.collection.immutable.List.foreach(List.scala:392)
> at 
> scala.collection.generic.TraversableForwarder$class.foreach(TraversableForwarder.scala:35)
> at scala.collection.mutable.ListBuffer.foreach(ListBuffer.scala:45)
> at 
> kafka.server.RequestQuotaTest.waitAndCheckResults(RequestQuotaTest.scala:421)
> at 
> kafka.server.RequestQuotaTest.testResponseThrottleTimeWhenBothProduceAndRequestQuotasViolated(RequestQuotaTest.scala:130){quote}
> STDOUT
> {quote}[2019-03-18 21:42:16,637] ERROR [KafkaApi-0] Error when handling 
> request: clientId=unauthorized-CONTROLLED_SHUTDOWN, correlationId=1, 
> api=CONTROLLED_SHUTDOWN, body=\{broker_id=0,broker_epoch=9223372036854775807} 
> (kafka.server.KafkaApis:76)
> org.apache.kafka.common.errors.ClusterAuthorizationException: Request 
> Request(processor=2, connectionId=127.0.0.1:42118-127.0.0.1:47612-1, 
> session=Session(User:Unauthorized,/127.0.0.1), 
> listenerName=ListenerName(PLAINTEXT), securityProtocol=PLAINTEXT, 
> buffer=null) is not authorized.
> [2019-03-18 21:42:16,655] ERROR [KafkaApi-0] Error when handling request: 
> clientId=unauthorized-STOP_REPLICA, correlationId=1, api=STOP_REPLICA, 
> body=\{controller_id=0,controller_epoch=2147483647,broker_epoch=9223372036854775807,delete_partitions=true,partitions=[{topic=topic-1,partition_ids=[0]}]}
>  (kafka.server.KafkaApis:76)
> org.apache.kafka.common.errors.ClusterAuthorizationException: Request 
> Request(processor=0, connectionId=127.0.0.1:42118-127.0.0.1:47614-2, 
> session=Session(User:Unauthorized,/127.0.0.1), 
> listenerName=ListenerName(PLAINTEXT), securityProtocol=PLAINTEXT, 
> buffer=null) is not authorized.
> [2019-03-18 21:42:16,657] ERROR [KafkaApi-0] Error when handling request: 
> clientId=unauthorized-LEADER_AND_ISR, correlationId=1, api=LEADER_AND_ISR, 
> body=\{controller_id=0,controller_epoch=2147483647,broker_epoch=9223372036854775807,topic_states=[{topic=topic-1,partition_states=[{partition=0,controller_epoch=2147483647,leader=0,leader_epoch=2147483647,isr=[0],zk_version=2,replicas=[0],is_new=true}]}],live_leaders=[\{id=0,host=localhost,port=0}]}
>  (kafka.server.KafkaApis:76)
> org.apache.kafka.common.errors.ClusterAuthorizationException: Request 
> Request(processor=1, connectionId=127.0.0.1:42118-127.0.0.1:47616-2, 
> session=Session(User:Unauthorized,/127.0.0.1), 
> listenerName=ListenerName(PLAINTEXT), securityProtocol=PLAINTEXT, 
> buffer=null) is not authorized.
> [2019-03-18 21:42:16,668] ERROR [KafkaApi-0] Error when handling request: 
> clientId=unauthorized-UPDATE_METADATA, correlationId=1, api=UPDATE_METADATA, 
> body=\{controller_id=0,controller_epoch=2147483647,broker_epoch=9223372036854775807,topic_states=[{topic=topic-1,partition_states=[{partition=0,controller_epoch=2147483647,leader=0,leader_epoch=2147483647,isr=[0],zk_version=2,replicas=[0],offline_replicas=[]}]}],live_brokers=[\{id=0,end_points=[{port=0,host=localhost,listener_name=PLAINTEXT,security_protocol_type=0}],rack=null}]}
>  (kafka.server.KafkaApis:76)
> org.apache.kafka.common.errors.ClusterAuthorizationException: Request 
> Request(processor=2, connectionId=127.0.0.1:42118-127.0.0.1:47618-2, 
> session=Session(User:Unauthorized,/127.0.0.1), 
> listenerName=ListenerName(PLAINTEXT), securityProtocol=PLAINTEXT, 
> buffer=null) is not authorized.
> [2019-03-18 21:42:16,725] ERROR [KafkaApi-0] Error when handling request: 
> clientId=unauthorized-STOP_REPLICA, correlationId=2, 

[jira] [Commented] (KAFKA-6078) Investigate failure of ReassignPartitionsClusterTest.shouldExpandCluster

2019-03-19 Thread Matthias J. Sax (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-6078?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16796420#comment-16796420
 ] 

Matthias J. Sax commented on KAFKA-6078:


Failed again: 
[https://builds.apache.org/blue/organizations/jenkins/kafka-trunk-jdk8/detail/kafka-trunk-jdk8/3475/tests]
{quote}java.lang.AssertionError: Partition should have been moved to the 
expected log directory on broker 100
at kafka.utils.TestUtils$.fail(TestUtils.scala:381)
at kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:791)
at 
kafka.admin.ReassignPartitionsClusterTest.shouldExpandCluster(ReassignPartitionsClusterTest.scala:190){quote}
STDOUT
{quote}[2019-03-19 01:20:43,339] ERROR [ReplicaFetcher replicaId=101, 
leaderId=100, fetcherId=0] Error for partition my-topic-0 at offset 0 
(kafka.server.ReplicaFetcherThread:76)
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
does not host this topic-partition.
Current partition replica assignment
 
{"version":1,"partitions":[\{"topic":"my-topic","partition":0,"replicas":[100,101],"log_dirs":["any","any"]}]}
 
Save this to use as the --reassignment-json-file option during rollback
Warning: You must run Verify periodically, until the reassignment completes, to 
ensure the throttle is removed. You can also alter the throttle by rerunning 
the Execute command passing a new value.
The inter-broker throttle limit was set to 1000 B/s
Successfully started reassignment of partitions.
Current partition replica assignment
 
{"version":1,"partitions":[\{"topic":"my-topic","partition":0,"replicas":[100],"log_dirs":["any"]}]}
 
Save this to use as the --reassignment-json-file option during rollback
Successfully started reassignment of partitions.
[2019-03-19 01:20:58,605] ERROR [ReplicaFetcher replicaId=2, leaderId=1, 
fetcherId=0] Error for partition orders-1 at offset 0 
(kafka.server.ReplicaFetcherThread:76)
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
does not host this topic-partition.
[2019-03-19 01:21:05,636] ERROR [Controller id=0] Ignoring request to reassign 
partition customers-0 that doesn't exist. (kafka.controller.KafkaController:74)
[2019-03-19 01:21:11,383] ERROR [ReplicaFetcher replicaId=104, leaderId=103, 
fetcherId=0] Error for partition topic1-2 at offset 0 
(kafka.server.ReplicaFetcherThread:76)
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
does not host this topic-partition.
[2019-03-19 01:21:11,390] ERROR [ReplicaFetcher replicaId=101, leaderId=100, 
fetcherId=0] Error for partition topic1-0 at offset 0 
(kafka.server.ReplicaFetcherThread:76)
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
does not host this topic-partition.
[2019-03-19 01:21:11,390] ERROR [ReplicaFetcher replicaId=101, leaderId=100, 
fetcherId=0] Error for partition topic1-1 at offset 0 
(kafka.server.ReplicaFetcherThread:76)
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
does not host this topic-partition.
[2019-03-19 01:21:11,801] ERROR [ReplicaFetcher replicaId=105, leaderId=104, 
fetcherId=0] Error for partition topic2-1 at offset 0 
(kafka.server.ReplicaFetcherThread:76)
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
does not host this topic-partition.
[2019-03-19 01:21:11,801] ERROR [ReplicaFetcher replicaId=105, leaderId=104, 
fetcherId=0] Error for partition topic2-0 at offset 0 
(kafka.server.ReplicaFetcherThread:76)
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
does not host this topic-partition.
Current partition replica assignment
 
{"version":1,"partitions":[\{"topic":"topic1","partition":2,"replicas":[103,104],"log_dirs":["any","any"]},\{"topic":"topic2","partition":0,"replicas":[104,105],"log_dirs":["any","any"]},\{"topic":"topic2","partition":1,"replicas":[104,105],"log_dirs":["any","any"]},\{"topic":"topic1","partition":0,"replicas":[100,101],"log_dirs":["any","any"]},\{"topic":"topic1","partition":1,"replicas":[100,101],"log_dirs":["any","any"]},\{"topic":"topic2","partition":2,"replicas":[103,104],"log_dirs":["any","any"]}]}
 
Save this to use as the --reassignment-json-file option during rollback
Warning: You must run Verify periodically, until the reassignment completes, to 
ensure the throttle is removed. You can also alter the throttle by rerunning 
the Execute command passing a new value.
The inter-broker throttle limit was set to 100 B/s
Successfully started reassignment of partitions.
[2019-03-19 01:21:18,518] ERROR [ReplicaFetcher replicaId=101, leaderId=100, 
fetcherId=0] Error for partition my-topic-1 at offset 0 
(kafka.server.ReplicaFetcherThread:76)
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
does not host this topic-partition.
[2019-03-19 01:21:18,518] ERROR [ReplicaFetcher replicaId=101, leaderId=100, 
fetcherId=0] Error for 

[jira] [Updated] (KAFKA-6078) Investigate failure of ReassignPartitionsClusterTest.shouldExpandCluster

2019-03-19 Thread Matthias J. Sax (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6078?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-6078:
---
Fix Version/s: 2.3.0

> Investigate failure of ReassignPartitionsClusterTest.shouldExpandCluster
> 
>
> Key: KAFKA-6078
> URL: https://issues.apache.org/jira/browse/KAFKA-6078
> Project: Kafka
>  Issue Type: Bug
>  Components: unit tests
>Affects Versions: 2.3.0
>Reporter: Dong Lin
>Priority: Critical
> Fix For: 2.3.0
>
>
> See https://github.com/apache/kafka/pull/4084





[jira] [Updated] (KAFKA-6078) Investigate failure of ReassignPartitionsClusterTest.shouldExpandCluster

2019-03-19 Thread Matthias J. Sax (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6078?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-6078:
---
Affects Version/s: 2.3.0

> Investigate failure of ReassignPartitionsClusterTest.shouldExpandCluster
> 
>
> Key: KAFKA-6078
> URL: https://issues.apache.org/jira/browse/KAFKA-6078
> Project: Kafka
>  Issue Type: Bug
>  Components: unit tests
>Affects Versions: 2.3.0
>Reporter: Dong Lin
>Priority: Major
>
> See https://github.com/apache/kafka/pull/4084





[jira] [Updated] (KAFKA-6078) Investigate failure of ReassignPartitionsClusterTest.shouldExpandCluster

2019-03-19 Thread Matthias J. Sax (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6078?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-6078:
---
Priority: Critical  (was: Major)

> Investigate failure of ReassignPartitionsClusterTest.shouldExpandCluster
> 
>
> Key: KAFKA-6078
> URL: https://issues.apache.org/jira/browse/KAFKA-6078
> Project: Kafka
>  Issue Type: Bug
>  Components: unit tests
>Affects Versions: 2.3.0
>Reporter: Dong Lin
>Priority: Critical
>
> See https://github.com/apache/kafka/pull/4084





[jira] [Updated] (KAFKA-6078) Investigate failure of ReassignPartitionsClusterTest.shouldExpandCluster

2019-03-19 Thread Matthias J. Sax (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6078?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-6078:
---
Component/s: core

> Investigate failure of ReassignPartitionsClusterTest.shouldExpandCluster
> 
>
> Key: KAFKA-6078
> URL: https://issues.apache.org/jira/browse/KAFKA-6078
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.3.0
>Reporter: Dong Lin
>Priority: Critical
> Fix For: 2.3.0
>
>
> See https://github.com/apache/kafka/pull/4084





[jira] [Updated] (KAFKA-6078) Investigate failure of ReassignPartitionsClusterTest.shouldExpandCluster

2019-03-19 Thread Matthias J. Sax (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6078?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-6078:
---
Labels: flaky-test  (was: )

> Investigate failure of ReassignPartitionsClusterTest.shouldExpandCluster
> 
>
> Key: KAFKA-6078
> URL: https://issues.apache.org/jira/browse/KAFKA-6078
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.3.0
>Reporter: Dong Lin
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0
>
>
> See https://github.com/apache/kafka/pull/4084





[jira] [Commented] (KAFKA-8063) Flaky Test WorkerTest#testConverterOverrides

2019-03-19 Thread Matthias J. Sax (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8063?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16796421#comment-16796421
 ] 

Matthias J. Sax commented on KAFKA-8063:


Failed again: 
[https://builds.apache.org/blue/organizations/jenkins/kafka-trunk-jdk8/detail/kafka-trunk-jdk8/3476/tests]
{quote}java.lang.AssertionError:
Expectation failure on verify:
WorkerSourceTask.run(): expected: 1, actual: 0{quote}
STDOUT
{quote}[2019-03-19 08:03:25,575] (Test worker) ERROR Failed to start connector 
test-connector (org.apache.kafka.connect.runtime.Worker:234)
org.apache.kafka.connect.errors.ConnectException: Failed to find Connector
at 
org.easymock.internal.MockInvocationHandler.invoke(MockInvocationHandler.java:46)
at 
org.easymock.internal.ObjectMethodsFilter.invoke(ObjectMethodsFilter.java:101)
at 
org.easymock.internal.ClassProxyFactory$MockMethodInterceptor.intercept(ClassProxyFactory.java:97)
at 
org.apache.kafka.connect.runtime.isolation.Plugins$$EnhancerByCGLIB$$205db954.newConnector()
at org.apache.kafka.connect.runtime.Worker.startConnector(Worker.java:226)
at 
org.apache.kafka.connect.runtime.WorkerTest.testStartConnectorFailure(WorkerTest.java:256)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.junit.internal.runners.TestMethod.invoke(TestMethod.java:68)
at 
org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl$PowerMockJUnit44MethodRunner.runTestMethod(PowerMockJUnit44RunnerDelegateImpl.java:326)
at org.junit.internal.runners.MethodRoadie$2.run(MethodRoadie.java:89)
at 
org.junit.internal.runners.MethodRoadie.runBeforesThenTestThenAfters(MethodRoadie.java:97)
at 
org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl$PowerMockJUnit44MethodRunner.executeTest(PowerMockJUnit44RunnerDelegateImpl.java:310)
at 
org.powermock.modules.junit4.internal.impl.PowerMockJUnit47RunnerDelegateImpl$PowerMockJUnit47MethodRunner.executeTestInSuper(PowerMockJUnit47RunnerDelegateImpl.java:131)
at 
org.powermock.modules.junit4.internal.impl.PowerMockJUnit47RunnerDelegateImpl$PowerMockJUnit47MethodRunner.access$100(PowerMockJUnit47RunnerDelegateImpl.java:59)
at 
org.powermock.modules.junit4.internal.impl.PowerMockJUnit47RunnerDelegateImpl$PowerMockJUnit47MethodRunner$TestExecutorStatement.evaluate(PowerMockJUnit47RunnerDelegateImpl.java:147)
at 
org.powermock.modules.junit4.internal.impl.PowerMockJUnit47RunnerDelegateImpl$PowerMockJUnit47MethodRunner.evaluateStatement(PowerMockJUnit47RunnerDelegateImpl.java:107)
at 
org.powermock.modules.junit4.internal.impl.PowerMockJUnit47RunnerDelegateImpl$PowerMockJUnit47MethodRunner.executeTest(PowerMockJUnit47RunnerDelegateImpl.java:82)
at 
org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl$PowerMockJUnit44MethodRunner.runBeforesThenTestThenAfters(PowerMockJUnit44RunnerDelegateImpl.java:298)
at org.junit.internal.runners.MethodRoadie.runTest(MethodRoadie.java:87)
at org.junit.internal.runners.MethodRoadie.run(MethodRoadie.java:50)
at 
org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl.invokeTestMethod(PowerMockJUnit44RunnerDelegateImpl.java:218)
at 
org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl.runMethods(PowerMockJUnit44RunnerDelegateImpl.java:160)
at 
org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl$1.run(PowerMockJUnit44RunnerDelegateImpl.java:134)
at org.junit.internal.runners.ClassRoadie.runUnprotected(ClassRoadie.java:34)
at org.junit.internal.runners.ClassRoadie.runProtected(ClassRoadie.java:44)
at 
org.powermock.modules.junit4.internal.impl.PowerMockJUnit44RunnerDelegateImpl.run(PowerMockJUnit44RunnerDelegateImpl.java:136)
at 
org.powermock.modules.junit4.common.internal.impl.JUnit4TestSuiteChunkerImpl.run(JUnit4TestSuiteChunkerImpl.java:117)
at 
org.powermock.modules.junit4.common.internal.impl.AbstractCommonPowerMockRunner.run(AbstractCommonPowerMockRunner.java:57)
at org.powermock.modules.junit4.PowerMockRunner.run(PowerMockRunner.java:59)
at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.runTestClass(JUnitTestClassExecutor.java:110)
at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:58)
at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:38)
at 
org.gradle.api.internal.tasks.testing.junit.AbstractJUnitTestClassProcessor.processTestClass(AbstractJUnitTestClassProcessor.java:62)
at 
org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at

[jira] [Commented] (KAFKA-7937) Flaky Test ResetConsumerGroupOffsetTest.testResetOffsetsNotExistingGroup

2019-03-19 Thread Matthias J. Sax (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16796424#comment-16796424
 ] 

Matthias J. Sax commented on KAFKA-7937:


Failed again: 
[https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/76/testReport/junit/kafka.admin/ResetConsumerGroupOffsetTest/testResetOffsetsNotExistingGroup/]

> Flaky Test ResetConsumerGroupOffsetTest.testResetOffsetsNotExistingGroup
> 
>
> Key: KAFKA-7937
> URL: https://issues.apache.org/jira/browse/KAFKA-7937
> Project: Kafka
>  Issue Type: Bug
>  Components: admin, clients, unit tests
>Affects Versions: 2.2.0, 2.3.0
>Reporter: Matthias J. Sax
>Assignee: Gwen Shapira
>Priority: Critical
> Fix For: 2.3.0, 2.2.1
>
>
> To get stable nightly builds for `2.2` release, I create tickets for all 
> observed test failures.
> https://builds.apache.org/blue/organizations/jenkins/kafka-2.2-jdk8/detail/kafka-2.2-jdk8/19/pipeline
> {quote}kafka.admin.ResetConsumerGroupOffsetTest > 
> testResetOffsetsNotExistingGroup FAILED 
> java.util.concurrent.ExecutionException: 
> org.apache.kafka.common.errors.CoordinatorNotAvailableException: The 
> coordinator is not available. at 
> org.apache.kafka.common.internals.KafkaFutureImpl.wrapAndThrow(KafkaFutureImpl.java:45)
>  at 
> org.apache.kafka.common.internals.KafkaFutureImpl.access$000(KafkaFutureImpl.java:32)
>  at 
> org.apache.kafka.common.internals.KafkaFutureImpl$SingleWaiter.await(KafkaFutureImpl.java:89)
>  at 
> org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:260)
>  at 
> kafka.admin.ConsumerGroupCommand$ConsumerGroupService.resetOffsets(ConsumerGroupCommand.scala:306)
>  at 
> kafka.admin.ResetConsumerGroupOffsetTest.testResetOffsetsNotExistingGroup(ResetConsumerGroupOffsetTest.scala:89)
>  Caused by: org.apache.kafka.common.errors.CoordinatorNotAvailableException: 
> The coordinator is not available.{quote}





[jira] [Commented] (KAFKA-7989) Flaky Test RequestQuotaTest#testResponseThrottleTimeWhenBothFetchAndRequestQuotasViolated

2019-03-19 Thread Matthias J. Sax (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16796425#comment-16796425
 ] 

Matthias J. Sax commented on KAFKA-7989:


Failed again: 
[https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/76/testReport/junit/kafka.server/RequestQuotaTest/testResponseThrottleTimeWhenBothFetchAndRequestQuotasViolated/]

> Flaky Test 
> RequestQuotaTest#testResponseThrottleTimeWhenBothFetchAndRequestQuotasViolated
> -
>
> Key: KAFKA-7989
> URL: https://issues.apache.org/jira/browse/KAFKA-7989
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.2.0
>Reporter: Matthias J. Sax
>Assignee: Anna Povzner
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0, 2.2.1
>
>
> To get stable nightly builds for `2.2` release, I create tickets for all 
> observed test failures.
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/27/]
> {quote}java.util.concurrent.ExecutionException: java.lang.AssertionError: 
> Throttle time metrics for consumer quota not updated: 
> small-quota-consumer-client at 
> java.util.concurrent.FutureTask.report(FutureTask.java:122) at 
> java.util.concurrent.FutureTask.get(FutureTask.java:206) at 
> kafka.server.RequestQuotaTest.$anonfun$waitAndCheckResults$1(RequestQuotaTest.scala:415)
>  at scala.collection.immutable.List.foreach(List.scala:392) at 
> scala.collection.generic.TraversableForwarder.foreach(TraversableForwarder.scala:38)
>  at 
> scala.collection.generic.TraversableForwarder.foreach$(TraversableForwarder.scala:38)
>  at scala.collection.mutable.ListBuffer.foreach(ListBuffer.scala:47) at 
> kafka.server.RequestQuotaTest.waitAndCheckResults(RequestQuotaTest.scala:413) 
> at 
> kafka.server.RequestQuotaTest.testResponseThrottleTimeWhenBothFetchAndRequestQuotasViolated(RequestQuotaTest.scala:134){quote}





[jira] [Commented] (KAFKA-8123) Flaky Test RequestQuotaTest#testResponseThrottleTimeWhenBothProduceAndRequestQuotasViolated

2019-03-19 Thread Matthias J. Sax (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16796426#comment-16796426
 ] 

Matthias J. Sax commented on KAFKA-8123:


Failed again: 
[https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/76/testReport/junit/kafka.server/RequestQuotaTest/testResponseThrottleTimeWhenBothProduceAndRequestQuotasViolated/]

> Flaky Test 
> RequestQuotaTest#testResponseThrottleTimeWhenBothProduceAndRequestQuotasViolated
>  
> 
>
> Key: KAFKA-8123
> URL: https://issues.apache.org/jira/browse/KAFKA-8123
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.3.0
>Reporter: Matthias J. Sax
>Assignee: Anna Povzner
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0
>
>
> [https://builds.apache.org/blue/organizations/jenkins/kafka-trunk-jdk8/detail/kafka-trunk-jdk8/3474/tests]
> {quote}java.util.concurrent.ExecutionException: java.lang.AssertionError: 
> Throttle time metrics for produce quota not updated: Client 
> small-quota-producer-client apiKey PRODUCE requests 1 requestTime 
> 0.015790873650539786 throttleTime 1000.0
> at java.util.concurrent.FutureTask.report(FutureTask.java:122)
> at java.util.concurrent.FutureTask.get(FutureTask.java:206)
> at 
> kafka.server.RequestQuotaTest$$anonfun$waitAndCheckResults$1.apply(RequestQuotaTest.scala:423)
> at 
> kafka.server.RequestQuotaTest$$anonfun$waitAndCheckResults$1.apply(RequestQuotaTest.scala:421)
> at scala.collection.immutable.List.foreach(List.scala:392)
> at 
> scala.collection.generic.TraversableForwarder$class.foreach(TraversableForwarder.scala:35)
> at scala.collection.mutable.ListBuffer.foreach(ListBuffer.scala:45)
> at 
> kafka.server.RequestQuotaTest.waitAndCheckResults(RequestQuotaTest.scala:421)
> at 
> kafka.server.RequestQuotaTest.testResponseThrottleTimeWhenBothProduceAndRequestQuotasViolated(RequestQuotaTest.scala:130){quote}
> STDOUT
> {quote}[2019-03-18 21:42:16,637] ERROR [KafkaApi-0] Error when handling 
> request: clientId=unauthorized-CONTROLLED_SHUTDOWN, correlationId=1, 
> api=CONTROLLED_SHUTDOWN, body=\{broker_id=0,broker_epoch=9223372036854775807} 
> (kafka.server.KafkaApis:76)
> org.apache.kafka.common.errors.ClusterAuthorizationException: Request 
> Request(processor=2, connectionId=127.0.0.1:42118-127.0.0.1:47612-1, 
> session=Session(User:Unauthorized,/127.0.0.1), 
> listenerName=ListenerName(PLAINTEXT), securityProtocol=PLAINTEXT, 
> buffer=null) is not authorized.
> [2019-03-18 21:42:16,655] ERROR [KafkaApi-0] Error when handling request: 
> clientId=unauthorized-STOP_REPLICA, correlationId=1, api=STOP_REPLICA, 
> body=\{controller_id=0,controller_epoch=2147483647,broker_epoch=9223372036854775807,delete_partitions=true,partitions=[{topic=topic-1,partition_ids=[0]}]}
>  (kafka.server.KafkaApis:76)
> org.apache.kafka.common.errors.ClusterAuthorizationException: Request 
> Request(processor=0, connectionId=127.0.0.1:42118-127.0.0.1:47614-2, 
> session=Session(User:Unauthorized,/127.0.0.1), 
> listenerName=ListenerName(PLAINTEXT), securityProtocol=PLAINTEXT, 
> buffer=null) is not authorized.
> [2019-03-18 21:42:16,657] ERROR [KafkaApi-0] Error when handling request: 
> clientId=unauthorized-LEADER_AND_ISR, correlationId=1, api=LEADER_AND_ISR, 
> body=\{controller_id=0,controller_epoch=2147483647,broker_epoch=9223372036854775807,topic_states=[{topic=topic-1,partition_states=[{partition=0,controller_epoch=2147483647,leader=0,leader_epoch=2147483647,isr=[0],zk_version=2,replicas=[0],is_new=true}]}],live_leaders=[\{id=0,host=localhost,port=0}]}
>  (kafka.server.KafkaApis:76)
> org.apache.kafka.common.errors.ClusterAuthorizationException: Request 
> Request(processor=1, connectionId=127.0.0.1:42118-127.0.0.1:47616-2, 
> session=Session(User:Unauthorized,/127.0.0.1), 
> listenerName=ListenerName(PLAINTEXT), securityProtocol=PLAINTEXT, 
> buffer=null) is not authorized.
> [2019-03-18 21:42:16,668] ERROR [KafkaApi-0] Error when handling request: 
> clientId=unauthorized-UPDATE_METADATA, correlationId=1, api=UPDATE_METADATA, 
> body=\{controller_id=0,controller_epoch=2147483647,broker_epoch=9223372036854775807,topic_states=[{topic=topic-1,partition_states=[{partition=0,controller_epoch=2147483647,leader=0,leader_epoch=2147483647,isr=[0],zk_version=2,replicas=[0],offline_replicas=[]}]}],live_brokers=[\{id=0,end_points=[{port=0,host=localhost,listener_name=PLAINTEXT,security_protocol_type=0}],rack=null}]}
>  (kafka.server.KafkaApis:76)
> org.apache.kafka.common.errors.ClusterAuthorizationException: Request 
> Request(processor=2, connectionId=127.0.0.1:42118-127.0.0.1:47618-2, 
> session=Session(User:Unauthorized,/127.0.0.1), 
> listenerName=ListenerName(PLAINTEXT), s

[jira] [Commented] (KAFKA-6078) Investigate failure of ReassignPartitionsClusterTest.shouldExpandCluster

2019-03-19 Thread Suman B N (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-6078?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16796435#comment-16796435
 ] 

Suman B N commented on KAFKA-6078:
--

[~mjsax], can I take a deeper look into it? Please assign the ticket to me.

> Investigate failure of ReassignPartitionsClusterTest.shouldExpandCluster
> 
>
> Key: KAFKA-6078
> URL: https://issues.apache.org/jira/browse/KAFKA-6078
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.3.0
>Reporter: Dong Lin
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0
>
>
> See https://github.com/apache/kafka/pull/4084





[jira] [Updated] (KAFKA-8094) Iterating over cache with get(key) is inefficient

2019-03-19 Thread Matthias J. Sax (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8094?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-8094:
---
Fix Version/s: 2.3.0

> Iterating over cache with get(key) is inefficient 
> --
>
> Key: KAFKA-8094
> URL: https://issues.apache.org/jira/browse/KAFKA-8094
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Sophie Blee-Goldman
>Assignee: Sophie Blee-Goldman
>Priority: Major
>  Labels: streams
> Fix For: 2.3.0
>
>
> Currently, range queries in the caching layer are implemented by creating an 
> iterator over the subset of keys in the range, and calling get() on the 
> underlying TreeMap for each key. While this protects against 
> ConcurrentModificationException, we can improve performance by replacing the 
> TreeMap with a concurrent data structure such as ConcurrentSkipListMap and 
> then just iterating over a subMap.





[jira] [Updated] (KAFKA-8062) StateListener is not notified when StreamThread dies

2019-03-19 Thread Matthias J. Sax (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8062?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-8062:
---
Fix Version/s: 2.2.1
   2.3.0

> StateListener is not notified when StreamThread dies
> 
>
> Key: KAFKA-8062
> URL: https://issues.apache.org/jira/browse/KAFKA-8062
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.1.1
> Environment: Kafka 2.1.1, kafka-streams-scala version 2.1.1
>Reporter: Andrey Volkov
>Assignee: Guozhang Wang
>Priority: Minor
> Fix For: 2.3.0, 2.2.1
>
>
> I want my application to react when streams die. Trying to use 
> KafkaStreams.setStateListener. Also checking KafkaStreams.state() from time 
> to time.
> The test scenario: Kafka is available, but there are no topics that my 
> Topology is supposed to use.
> I expect streams to crash and the state listener to be notified about that, 
> with the new state ERROR. KafkaStreams.state() should also return ERROR.
> In reality the streams crash, but the KafkaStreams.state() method always 
> returns REBALANCING and the last time the StateListener was called, the new 
> state was also REBALANCING. 
>  
> I believe the reason for this lies in two methods:
> org.apache.kafka.streams.KafkaStreams.StreamStateListener.onChange(), which 
> does not react to the state StreamsThread.State.PENDING_SHUTDOWN, and
> org.apache.kafka.streams.processor.internals.StreamThread.RebalanceListener.onPartitionsAssigned,
>  which calls shutdown(), setting the state to PENDING_SHUTDOWN, and then 
> streamThread.setStateListener(null), effectively removing the state listener 
> so that the DEAD state of the thread never reaches the KafkaStreams object.
> Here is an extract from the logs:
> {{14:57:03.272 [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] 
> ERROR o.a.k.s.p.i.StreamsPartitionAssignor - stream-thread 
> [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1-consumer] 
> test-input-topic is unknown yet during rebalance, please make sure they have 
> been pre-created before starting the Streams application.}}
> {{14:57:03.283 [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] 
> INFO o.a.k.c.c.i.AbstractCoordinator - [Consumer 
> clientId=Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1-consumer, 
> groupId=Test] Successfully joined group with generation 1}}
> {{14:57:03.284 [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] 
> INFO o.a.k.c.c.i.ConsumerCoordinator - [Consumer 
> clientId=Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1-consumer, 
> groupId=Test] Setting newly assigned partitions []}}
> {{14:57:03.285 [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] 
> INFO o.a.k.s.p.i.StreamThread - stream-thread 
> [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] Informed to shut 
> down}}
> {{14:57:03.285 [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] 
> INFO o.a.k.s.p.i.StreamThread - stream-thread 
> [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] State transition 
> from PARTITIONS_REVOKED to PENDING_SHUTDOWN}}
> {{14:57:03.316 [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] 
> INFO o.a.k.s.p.i.StreamThread - stream-thread 
> [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] Shutting down}}
> {{14:57:03.317 [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] 
> INFO o.a.k.c.c.KafkaConsumer - [Consumer 
> clientId=Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1-restore-consumer,
>  groupId=] Unsubscribed all topics or patterns and assigned partitions}}
> {{14:57:03.317 [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] 
> INFO o.a.k.c.p.KafkaProducer - [Producer 
> clientId=Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1-producer] 
> Closing the Kafka producer with timeoutMillis = 9223372036854775807 ms.}}
> {{14:57:03.332 [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] 
> INFO o.a.k.s.p.i.StreamThread - stream-thread 
> [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] State transition 
> from PENDING_SHUTDOWN to DEAD}}
> {{14:57:03.332 [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] 
> INFO o.a.k.s.p.i.StreamThread - stream-thread 
> [Test-749d56c7-505d-46a4-8dbd-2c756b353adc-StreamThread-1] Shutdown complete}}
> After this calls to KafkaStreams.state() still return REBALANCING
> There is a workaround with requesting KafkaStreams.localThreadsMetadata() and 
> checking each thread's state manually, but that seems very wrong.
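The mechanism described above can be reproduced with a toy model (plain JDK code; the class and method names here are made up for illustration, not Kafka's actual internals): once the listener reference is nulled out during shutdown, the final transition to DEAD is never delivered.

```java
import java.util.concurrent.atomic.AtomicReference;
import java.util.function.BiConsumer;

public class LostListenerDemo {
    enum State { RUNNING, PENDING_SHUTDOWN, DEAD }

    // Minimal stand-in for a stream thread that notifies a state listener.
    static class ToyThread {
        State state = State.RUNNING;
        BiConsumer<State, State> listener;

        void setState(State next) {
            State old = state;
            state = next;
            if (listener != null) listener.accept(next, old);
        }
    }

    static State lastObserved() {
        AtomicReference<State> observed = new AtomicReference<>();
        ToyThread t = new ToyThread();
        t.listener = (next, old) -> observed.set(next);
        t.setState(State.PENDING_SHUTDOWN); // listener sees this transition
        t.listener = null;                  // analogous to setStateListener(null)
        t.setState(State.DEAD);             // ...so this transition is lost
        return observed.get();
    }

    public static void main(String[] args) {
        // Prints PENDING_SHUTDOWN: the DEAD transition never reached the listener.
        System.out.println(lastObserved());
    }
}
```

This matches the log extract in the report: the last state the outer listener could have observed precedes the PENDING_SHUTDOWN-to-DEAD transition.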





[jira] [Commented] (KAFKA-7777) Decouple topic serdes from materialized serdes

2019-03-19 Thread Patrik Kleindl (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16796490#comment-16796490
 ] 

Patrik Kleindl commented on KAFKA-:
---

[~mjsax] In https://issues.apache.org/jira/browse/KAFKA-8037 I implemented the 
deserialization as part of the global state restore to catch corrupted records. 
This might be extended to serialize into the store in another format.

> Decouple topic serdes from materialized serdes
> --
>
> Key: KAFKA-
> URL: https://issues.apache.org/jira/browse/KAFKA-
> Project: Kafka
>  Issue Type: Wish
>  Components: streams
>Reporter: Maarten
>Priority: Minor
>  Labels: needs-kip
>
> It would be valuable to us to have the encoding format in a Kafka topic 
> decoupled from the encoding format used to cache the data locally in a Kafka 
> Streams app.
> We would like to use the `range()` function in the interactive queries API to 
> look up a series of results, but can't with our encoding scheme due to our 
> keys being variable length.
> We use protobuf, but based on what I've read Avro, Flatbuffers and Cap'n 
> proto have similar problems.
> Currently we use the following code to work around this problem:
> {code}
> builder
> .stream("input-topic", Consumed.with(inputKeySerde, inputValueSerde))
> .to("intermediate-topic", Produced.with(intermediateKeySerde, 
> intermediateValueSerde)); 
> t1 = builder
> .table("intermediate-topic", Consumed.with(intermediateKeySerde, 
> intermediateValueSerde), t1Materialized);
> {code}
> With the encoding formats decoupled, the code above could be reduced to a 
> single step, not requiring an intermediate topic.
> Based on feedback on my [SO 
> question|https://stackoverflow.com/questions/53913571/is-there-a-way-to-separate-kafka-topic-serdes-from-materialized-serdes]
>  a change that introduces this would impact state restoration when using an 
> input topic for recovery.
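The variable-length-key limitation on range() mentioned above can be demonstrated with a plain JDK sorted map (a simplified stand-in for the lexicographic ordering of serialized key bytes; the names are illustrative): without a length prefix or delimiter in the serialized form, a prefix range scan leaks entries belonging to a longer id.

```java
import java.util.NavigableMap;
import java.util.Set;
import java.util.TreeMap;

public class VarLenKeyRangeDemo {
    // Naive prefix-range scan over keys serialized as "<id><timestamp>".
    static Set<String> scanForId(NavigableMap<String, Integer> store, String id) {
        return store.subMap(id, true, id + '\uffff', false).keySet();
    }

    public static void main(String[] args) {
        NavigableMap<String, Integer> store = new TreeMap<>();
        store.put("ab1", 10);  // id "ab",  timestamp 1
        store.put("ab2", 20);  // id "ab",  timestamp 2
        store.put("abc1", 30); // id "abc", timestamp 1 -- a different entity

        // The scan for id "ab" also returns "abc1": the variable-length id is
        // not self-delimiting in the lexicographic order of the serialized keys.
        System.out.println(scanForId(store, "ab")); // [ab1, ab2, abc1]
    }
}
```

Length-prefixed or fixed-width key encodings avoid this, which is why the reporter's variable-length protobuf keys cannot use range() directly.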





[jira] [Commented] (KAFKA-7777) Decouple topic serdes from materialized serdes

2019-03-19 Thread Paul Whalen (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16796519#comment-16796519
 ] 

Paul Whalen commented on KAFKA-:


This is a very interesting idea that I am suddenly very excited about, and 
since my team has a somewhat related problem, I'll phrase it the way we've 
been thinking of it: we love that key-value state stores can be backed up to 
topics, but in our streams application we want a much richer way of querying 
data than just by key.

In a sense, {{range()}} partly solves this problem because it allows for a 
different way of querying the store rather than just by exact key. But the 
real win would be a complete decoupling of the local state store 
implementation from how it changelogs to Kafka. The store wouldn't need to be 
just key-value with range like RocksDB, but could have a fancier on-disk 
structure that supports efficient querying or indexing of the data in many 
ways (I'm thinking SQLite). It would definitely increase fail-over/restore 
time, but that would be an acceptable/necessary tradeoff: if you're going to 
lay out the data in a totally different format for querying, you obviously 
have to pay the cost of that translation.

What I'm proposing (completely decoupling the local state store implementation 
from how it changelogs to Kafka) is more useful for Processor API users, but it 
could also provide an API usable at the DSL level to enable what this JIRA is 
asking for (merely decoupling serdes between the local state store and the 
changelog in Kafka).






[jira] [Created] (KAFKA-8129) Shade Kafka client dependencies

2019-03-19 Thread Colin P. McCabe (JIRA)
Colin P. McCabe created KAFKA-8129:
--

 Summary: Shade Kafka client dependencies
 Key: KAFKA-8129
 URL: https://issues.apache.org/jira/browse/KAFKA-8129
 Project: Kafka
  Issue Type: Improvement
  Components: clients
Reporter: Colin P. McCabe
Assignee: Colin P. McCabe


The Kafka client should shade its library dependencies.  This will ensure that 
its dependencies don't collide with those employed by users.





[jira] [Commented] (KAFKA-7989) Flaky Test RequestQuotaTest#testResponseThrottleTimeWhenBothFetchAndRequestQuotasViolated

2019-03-19 Thread Matthias J. Sax (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16796580#comment-16796580
 ] 

Matthias J. Sax commented on KAFKA-7989:


[https://builds.apache.org/blue/organizations/jenkins/kafka-trunk-jdk8/detail/kafka-trunk-jdk8/3477/tests]

> Flaky Test 
> RequestQuotaTest#testResponseThrottleTimeWhenBothFetchAndRequestQuotasViolated
> -
>
> Key: KAFKA-7989
> URL: https://issues.apache.org/jira/browse/KAFKA-7989
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.2.0
>Reporter: Matthias J. Sax
>Assignee: Anna Povzner
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0, 2.2.1
>
>
> To get stable nightly builds for the `2.2` release, I am creating tickets for 
> all observed test failures.
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/27/]
> {quote}java.util.concurrent.ExecutionException: java.lang.AssertionError: 
> Throttle time metrics for consumer quota not updated: 
> small-quota-consumer-client at 
> java.util.concurrent.FutureTask.report(FutureTask.java:122) at 
> java.util.concurrent.FutureTask.get(FutureTask.java:206) at 
> kafka.server.RequestQuotaTest.$anonfun$waitAndCheckResults$1(RequestQuotaTest.scala:415)
>  at scala.collection.immutable.List.foreach(List.scala:392) at 
> scala.collection.generic.TraversableForwarder.foreach(TraversableForwarder.scala:38)
>  at 
> scala.collection.generic.TraversableForwarder.foreach$(TraversableForwarder.scala:38)
>  at scala.collection.mutable.ListBuffer.foreach(ListBuffer.scala:47) at 
> kafka.server.RequestQuotaTest.waitAndCheckResults(RequestQuotaTest.scala:413) 
> at 
> kafka.server.RequestQuotaTest.testResponseThrottleTimeWhenBothFetchAndRequestQuotasViolated(RequestQuotaTest.scala:134){quote}





[jira] [Commented] (KAFKA-7989) Flaky Test RequestQuotaTest#testResponseThrottleTimeWhenBothFetchAndRequestQuotasViolated

2019-03-19 Thread Matthias J. Sax (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16796582#comment-16796582
 ] 

Matthias J. Sax commented on KAFKA-7989:


And again: 
[https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/77/testReport/kafka.server/RequestQuotaTest/testResponseThrottleTimeWhenBothFetchAndRequestQuotasViolated/]






[jira] [Commented] (KAFKA-7777) Decouple topic serdes from materialized serdes

2019-03-19 Thread Matthias J. Sax (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16796604#comment-16796604
 ] 

Matthias J. Sax commented on KAFKA-:


[~pgwhalen] What you describe might already be possible today. Using the 
Processor API, users can implement the `StateStore` interface with any internal 
storage engine they like and implement whatever change-logging mechanism they 
need. You can also expose any interface you like to query a store by 
implementing a custom `QueryableStoreType`.

The coupling described in this ticket is a DSL limitation only.






[jira] [Assigned] (KAFKA-7928) Deprecate WindowStore.put(key, value)

2019-03-19 Thread Matthias J. Sax (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7928?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax reassigned KAFKA-7928:
--

Assignee: Slim Ouertani

> Deprecate WindowStore.put(key, value)
> -
>
> Key: KAFKA-7928
> URL: https://issues.apache.org/jira/browse/KAFKA-7928
> Project: Kafka
>  Issue Type: Improvement
>Reporter: John Roesler
>Assignee: Slim Ouertani
>Priority: Major
>  Labels: beginner, easy-fix, needs-kip, newbie
>
> Specifically, `org.apache.kafka.streams.state.WindowStore#put(K, V)`
> This method is strange... A window store needs to have a timestamp associated 
> with the key, so if you do a put without a timestamp, it's up to the store to 
> just make one up.
> Even the javadoc on the method recommends not to use it, due to this 
> confusing behavior.
> We should just deprecate it.





[jira] [Commented] (KAFKA-7777) Decouple topic serdes from materialized serdes

2019-03-19 Thread Paul Whalen (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16796648#comment-16796648
 ] 

Paul Whalen commented on KAFKA-:


I had that thought and definitely believe it's possible, but to be more clear, 
it might be nicer if hooking into the change-logging code that is already built 
were easier. That seems like a bit of a challenge, if it is possible at all. 
Anyway, sorry to hijack this ticket; it's definitely interesting and useful on 
its own.






[jira] [Commented] (KAFKA-7777) Decouple topic serdes from materialized serdes

2019-03-19 Thread Matthias J. Sax (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16796650#comment-16796650
 ] 

Matthias J. Sax commented on KAFKA-:


No need to apologize, you did not "hijack" this ticket :)

What do you mean by "hooking into the change-logging code already built"? What 
API do you have in mind to reuse existing code? From what I can tell atm, it 
seems that it would be hard to share existing code, but I might be wrong. If 
you have a good idea, feel free to create a new ticket to follow up.






[jira] [Commented] (KAFKA-8127) It may need to import scala.io

2019-03-19 Thread JieFang.He (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16796660#comment-16796660
 ] 

JieFang.He commented on KAFKA-8127:
---

I get the same result on both Windows 7 and Ubuntu with the command: gradlew 
releaseTarGz.

kafka version: 1.1.0

gradle version: 4.5.1

scala version:  2.11.12

java version: jdk8






[jira] [Commented] (KAFKA-7777) Decouple topic serdes from materialized serdes

2019-03-19 Thread Paul Whalen (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16796661#comment-16796661
 ] 

Paul Whalen commented on KAFKA-:


So I just went through looking at all the relevant interfaces again and I think 
I stand corrected.  Fundamentally, the API I was imagining is just:

# Something you can send a key/value to that will write the appropriate 
records to an appropriately named changelog topic.
# Supplying a callback to restore from a topic when a state store is 
initialized (I know that exists, though I will admit that one of my colleagues 
spent a morning trying to accomplish it and failed to find an online example 
or get anything working).

I see {{StoreChangeLogger}} as the solution to 1; although it is not public, 
it is obviously small and replicable. I also now see that {{ProcessorContext}} 
implements {{RecordCollector.Supplier}}, allowing the all-important "hook in" 
so we can get EOS by using the same producer. And we can of course choose an 
appropriate topic name via the public 
{{ProcessorStateManager.storeChangelogTopic()}}.

And, I'm sure 2 is perfectly solvable given the right understanding.

Thanks for your help!  The best kind of new feature is the kind that existed 
all along!






[jira] [Assigned] (KAFKA-8125) Check for topic existence in CreateTopicsRequest prior to creating replica assignment

2019-03-19 Thread huxihx (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8125?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

huxihx reassigned KAFKA-8125:
-

Assignee: huxihx

> Check for topic existence in CreateTopicsRequest prior to creating replica 
> assignment
> -
>
> Key: KAFKA-8125
> URL: https://issues.apache.org/jira/browse/KAFKA-8125
> Project: Kafka
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 2.1.1
>Reporter: Lucas Bradstreet
>Assignee: huxihx
>Priority: Minor
>
> Imagine the following pattern to ensure topic creation in an application:
>  # Attempt to create a topic with partition count P and replication factor R.
>  # If topic creation fails with TopicExistsException, continue. If topic 
> creation succeeds, continue; the topic now exists.
> This normally works fine. However, if the topic has already been created but 
> the number of live brokers is < R, then topic creation will fail with an 
> org.apache.kafka.common.errors.InvalidReplicationFactorException, even though 
> the topic already exists.
> This could be avoided if we check whether the topic exists prior to calling 
> AdminUtils.assignReplicasToBrokers.
>  
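The control flow in question can be sketched without a broker. This is a hypothetical in-memory stand-in, not Kafka's actual CreateTopics code path: `checkExistenceFirst=false` models current behaviour (replica assignment validated before the existence check), `true` models the proposed fix:

```java
import java.util.HashMap;
import java.util.Map;

public class EnsureTopic {

    static class TopicExistsException extends RuntimeException {}
    static class InvalidReplicationFactorException extends RuntimeException {}

    static final Map<String, Integer> topics = new HashMap<>();

    static void createTopic(String name, int partitions, int replicationFactor,
                            int liveBrokers, boolean checkExistenceFirst) {
        if (checkExistenceFirst && topics.containsKey(name)) {
            throw new TopicExistsException();   // proposed: short-circuit before assignment
        }
        if (liveBrokers < replicationFactor) {
            throw new InvalidReplicationFactorException();  // raised while assigning replicas
        }
        if (topics.containsKey(name)) {
            throw new TopicExistsException();
        }
        topics.put(name, partitions);
    }

    // The application pattern from the issue: treat "already exists" as success.
    static boolean ensureTopic(String name, int partitions, int replicationFactor,
                               int liveBrokers, boolean checkExistenceFirst) {
        try {
            createTopic(name, partitions, replicationFactor, liveBrokers, checkExistenceFirst);
        } catch (TopicExistsException e) {
            // fine, the topic is already there
        }
        // InvalidReplicationFactorException propagates and fails the caller even
        // though the topic may already exist -- the reported problem.
        return topics.containsKey(name);
    }
}
```

With this model, re-running `ensureTopic` after a broker loss throws under current behaviour but succeeds once the existence check comes first, which is exactly the improvement the issue asks for.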





[jira] [Commented] (KAFKA-6958) Allow to define custom processor names with KStreams DSL

2019-03-19 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-6958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16796714#comment-16796714
 ] 

ASF GitHub Bot commented on KAFKA-6958:
---

bbejeck commented on pull request #6409: KAFKA-6958: Add new NamedOperation 
interface to enforce consistency i…
URL: https://github.com/apache/kafka/pull/6409
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Allow to define custom processor names with KStreams DSL
> 
>
> Key: KAFKA-6958
> URL: https://issues.apache.org/jira/browse/KAFKA-6958
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Affects Versions: 1.1.0
>Reporter: Florian Hussonnois
>Assignee: Florian Hussonnois
>Priority: Minor
>  Labels: kip
>
> Currently, while building a new Topology through the KStreams DSL the 
> processors are automatically named.
> The generated names are prefixed depending on the operation (i.e. 
> KSTREAM-SOURCE, KSTREAM-FILTER, KSTREAM-MAP, etc.).
> To debug or understand a topology, it is possible to display the processor 
> lineage with the method Topology#describe(). However, a complex topology with 
> dozens of operations can be hard to understand if the processor names are not 
> meaningful.
> It would be useful to be able to set more meaningful names. For example, a 
> processor name could describe the business rule performed by a map() 
> operation.
> [KIP-307|https://cwiki.apache.org/confluence/display/KAFKA/KIP-307%3A+Allow+to+define+custom+processor+names+with+KStreams+DSL]
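For illustration, here is a toy model of the index-based name generation described above, plus the optional user-supplied override in the spirit of KIP-307. This mimics the `KSTREAM-FILTER-0000000001` naming style but is not Kafka's actual builder code; class and method names are illustrative:

```java
import java.util.Optional;

public class ProcessorNaming {
    private int index = 0;

    // Generated names follow the DSL's "<PREFIX>-<10-digit index>" style.
    // A user-supplied name wins, but still consumes an index so that the
    // names of downstream auto-named processors do not shift.
    public String newName(String prefix, Optional<String> custom) {
        int i = index++;
        return custom.orElseGet(() -> String.format("%s-%010d", prefix, i));
    }

    public static void main(String[] args) {
        ProcessorNaming naming = new ProcessorNaming();
        System.out.println(naming.newName("KSTREAM-SOURCE", Optional.empty()));
        System.out.println(naming.newName("KSTREAM-FILTER", Optional.of("filter-valid-orders")));
        System.out.println(naming.newName("KSTREAM-MAPVALUES", Optional.empty()));
    }
}
```

Running this prints `KSTREAM-SOURCE-0000000000`, then the custom `filter-valid-orders`, then `KSTREAM-MAPVALUES-0000000002`, showing how one meaningful name can document a business rule while the surrounding generated names stay stable.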





[jira] [Commented] (KAFKA-8125) Check for topic existence in CreateTopicsRequest prior to creating replica assignment

2019-03-19 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8125?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16796730#comment-16796730
 ] 

ASF GitHub Bot commented on KAFKA-8125:
---

huxihx commented on pull request #6472: KAFKA-8125: Skip doing assignment when 
topic existed
URL: https://github.com/apache/kafka/pull/6472
 
 
   https://issues.apache.org/jira/browse/KAFKA-8125
   
 



> Check for topic existence in CreateTopicsRequest prior to creating replica 
> assignment
> -
>
> Key: KAFKA-8125
> URL: https://issues.apache.org/jira/browse/KAFKA-8125
> Project: Kafka
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 2.1.1
>Reporter: Lucas Bradstreet
>Assignee: huxihx
>Priority: Minor
>
> Imagine the following pattern to ensure topic creation in an application:
>  # Attempt to create a topic with partition count P and replication factor R.
>  # If topic creation fails with TopicExistsException, continue. If topic 
> creation succeeds, continue; the topic now exists.
> This normally works fine. However, if the topic has already been created but 
> the number of live brokers is < R, then topic creation will fail with an 
> org.apache.kafka.common.errors.InvalidReplicationFactorException, even though 
> the topic already exists.
> This could be avoided if we check whether the topic exists prior to calling 
> AdminUtils.assignReplicasToBrokers.
>  





[jira] [Commented] (KAFKA-8098) Flaky Test AdminClientIntegrationTest#testConsumerGroups

2019-03-19 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16796805#comment-16796805
 ] 

ASF GitHub Bot commented on KAFKA-8098:
---

omkreddy commented on pull request #6441: KAFKA-8098: Fix Flaky Test 
testConsumerGroups
URL: https://github.com/apache/kafka/pull/6441
 
 
   
 



> Flaky Test AdminClientIntegrationTest#testConsumerGroups
> 
>
> Key: KAFKA-8098
> URL: https://issues.apache.org/jira/browse/KAFKA-8098
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.3.0
>Reporter: Matthias J. Sax
>Assignee: huxihx
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.3.0
>
>
> [https://builds.apache.org/blue/organizations/jenkins/kafka-trunk-jdk8/detail/kafka-trunk-jdk8/3459/tests]
> {quote}java.lang.AssertionError: expected:<2> but was:<0>
> at org.junit.Assert.fail(Assert.java:89)
> at org.junit.Assert.failNotEquals(Assert.java:835)
> at org.junit.Assert.assertEquals(Assert.java:647)
> at org.junit.Assert.assertEquals(Assert.java:633)
> at 
> kafka.api.AdminClientIntegrationTest.testConsumerGroups(AdminClientIntegrationTest.scala:1194){quote}
> STDOUT
> {quote}2019-03-12 10:52:33,482] ERROR [ReplicaFetcher replicaId=2, 
> leaderId=1, fetcherId=0] Error for partition topic-0 at offset 0 
> (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> [2019-03-12 10:52:33,485] ERROR [ReplicaFetcher replicaId=0, leaderId=1, 
> fetcherId=0] Error for partition topic-0 at offset 0 
> (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> [2019-03-12 10:52:35,880] WARN Unable to read additional data from client 
> sessionid 0x104458575770003, likely client has closed socket 
> (org.apache.zookeeper.server.NIOServerCnxn:376)
> [2019-03-12 10:52:38,596] ERROR [ReplicaFetcher replicaId=2, leaderId=0, 
> fetcherId=0] Error for partition elect-preferred-leaders-topic-1-0 at offset 
> 0 (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> [2019-03-12 10:52:38,797] ERROR [ReplicaFetcher replicaId=2, leaderId=0, 
> fetcherId=0] Error for partition elect-preferred-leaders-topic-2-0 at offset 
> 0 (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> [2019-03-12 10:52:51,998] ERROR [ReplicaFetcher replicaId=2, leaderId=0, 
> fetcherId=0] Error for partition topic-0 at offset 0 
> (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> [2019-03-12 10:52:52,005] ERROR [ReplicaFetcher replicaId=1, leaderId=0, 
> fetcherId=0] Error for partition topic-0 at offset 0 
> (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> [2019-03-12 10:53:13,750] ERROR [ReplicaFetcher replicaId=0, leaderId=1, 
> fetcherId=0] Error for partition mytopic2-0 at offset 0 
> (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> [2019-03-12 10:53:13,754] ERROR [ReplicaFetcher replicaId=2, leaderId=1, 
> fetcherId=0] Error for partition mytopic2-0 at offset 0 
> (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> [2019-03-12 10:53:13,755] ERROR [ReplicaFetcher replicaId=1, leaderId=0, 
> fetcherId=0] Error for partition mytopic2-1 at offset 0 
> (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> [2019-03-12 10:53:13,760] ERROR [ReplicaFetcher replicaId=2, leaderId=0, 
> fetcherId=0] Error for partition mytopic2-1 at offset 0 
> (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> [2019-03-12 10:53:13,757] ERROR [ReplicaFetcher replicaId=0, leaderId=2, 
> fetcherId=0] Error f