[jira] [Resolved] (KAFKA-9834) Add interface to set ZSTD compression level

2020-04-14 Thread jiamei xie (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9834?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jiamei xie resolved KAFKA-9834.
---
Resolution: Duplicate

Duplicate of KAFKA-7632

> Add interface to set ZSTD compression level
> --
>
> Key: KAFKA-9834
> URL: https://issues.apache.org/jira/browse/KAFKA-9834
> Project: Kafka
>  Issue Type: Bug
>Reporter: jiamei xie
>Assignee: jiamei xie
>Priority: Major
>
> It seems Kafka uses ZSTD's default compression level (3) and does not support 
> setting the ZSTD compression level.
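
For context, Kafka's ZSTD support goes through the zstd-jni library, whose
compressor takes an explicit level. A minimal sketch of what level selection
means at that layer (assuming the com.github.luben:zstd-jni dependency that
Kafka itself uses; this is not Kafka producer API code):

{code:java}
import com.github.luben.zstd.Zstd;

import java.nio.charset.StandardCharsets;

public class ZstdLevelDemo {
    public static void main(String[] args) {
        byte[] input = "some repetitive payload ".repeat(1000)
                .getBytes(StandardCharsets.UTF_8);

        // Level 3 is zstd's default (and thus Kafka's); higher levels trade
        // CPU time for smaller output.
        byte[] fast = Zstd.compress(input, 1);
        byte[] def = Zstd.compress(input, 3);
        byte[] best = Zstd.compress(input, 19);

        System.out.printf("level 1: %d bytes, level 3: %d bytes, level 19: %d bytes%n",
                fast.length, def.length, best.length);
    }
}
{code}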



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-9834) Add interface to set ZSTD compression level

2020-04-14 Thread jiamei xie (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9834?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17083819#comment-17083819
 ] 

jiamei xie commented on KAFKA-9834:
---

Ok. I'll close this jira.

> Add interface to set ZSTD compression level
> --
>
> Key: KAFKA-9834
> URL: https://issues.apache.org/jira/browse/KAFKA-9834
> Project: Kafka
>  Issue Type: Bug
>Reporter: jiamei xie
>Assignee: jiamei xie
>Priority: Major
>
> It seems Kafka uses ZSTD's default compression level (3) and does not support 
> setting the ZSTD compression level.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-9826) Log cleaning repeatedly picks same segment with no effect when first dirty offset is past start of active segment

2020-04-14 Thread Jun Rao (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9826?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao resolved KAFKA-9826.

Fix Version/s: 2.6.0
   Resolution: Fixed

Merged the PR to trunk.

> Log cleaning repeatedly picks same segment with no effect when first dirty 
> offset is past start of active segment
> -
>
> Key: KAFKA-9826
> URL: https://issues.apache.org/jira/browse/KAFKA-9826
> Project: Kafka
>  Issue Type: Bug
>  Components: log cleaner
>Affects Versions: 2.4.1
>Reporter: Steve Rodrigues
>Assignee: Steve Rodrigues
>Priority: Major
> Fix For: 2.6.0
>
>
> Seen on a system where a given partition had a single segment and, for 
> whatever reason (deleteRecords?), the logStartOffset was greater than the 
> base offset of the log's base segment. There was a continuous series of 
> ```
> [2020-03-03 16:56:31,374] WARN Resetting first dirty offset of  FOO-3 to log 
> start offset 55649 since the checkpointed offset 0 is invalid. 
> (kafka.log.LogCleanerManager$)
> ```
> messages (partition name changed, it wasn't really FOO). This was expected to 
> be resolved by KAFKA-6266 but clearly wasn't. 
> Further investigation revealed that a few segments were being continuously 
> cleaned, generating messages in the `log-cleaner.log` of the form:
> ```
> [2020-03-31 13:34:50,924] INFO Cleaner 1: Beginning cleaning of log FOO-3 
> (kafka.log.LogCleaner)
> [2020-03-31 13:34:50,924] INFO Cleaner 1: Building offset map for FOO-3... 
> (kafka.log.LogCleaner)
> [2020-03-31 13:34:50,927] INFO Cleaner 1: Building offset map for log FOO-3 
> for 0 segments in offset range [55287, 54237). (kafka.log.LogCleaner)
> [2020-03-31 13:34:50,927] INFO Cleaner 1: Offset map for log FOO-3 complete. 
> (kafka.log.LogCleaner)
> [2020-03-31 13:34:50,927] INFO Cleaner 1: Cleaning log FOO-3 (cleaning prior 
> to Wed Dec 31 19:00:00 EST 1969, discarding tombstones prior to Tue Dec 10 
> 13:39:08 EST 2019)... (kafka.log.LogCleaner)
> [2020-03-31 13:34:50,927] INFO [kafka-log-cleaner-thread-1]: Log cleaner 
> thread 1 cleaned log FOO-3 (dirty section = [55287, 55287])
> 0.0 MB of log processed in 0.0 seconds (0.0 MB/sec).
> Indexed 0.0 MB in 0.0 seconds (0.0 Mb/sec, 100.0% of total time)
> Buffer utilization: 0.0%
> Cleaned 0.0 MB in 0.0 seconds (NaN Mb/sec, 0.0% of total time)
> Start size: 0.0 MB (0 messages)
> End size: 0.0 MB (0 messages) NaN% size reduction (NaN% fewer messages) 
> (kafka.log.LogCleaner)
> ```
> What seems to have happened here (data determined for a different partition) 
> is:
> There exist a number of partitions here which get relatively low traffic, 
> including our friend FOO-5. For whatever reason, LogStartOffset of this 
> partition has moved beyond the baseOffset of the active segment. (Notes in 
> other issues indicate that this is a reasonable scenario.) So there’s one 
> segment, starting at 166266, and a log start of 166301.
> grabFilthiestCompactedLog runs and reads the checkpoint file. We see that 
> this topicpartition needs to be cleaned, and call cleanableOffsets on it 
> which returns an OffsetsToClean with firstDirtyOffset == logStartOffset == 
> 166301 and firstUncleanableOffset = max(logStart, activeSegment.baseOffset) = 
> 166301, and forceCheckpoint = true.
> The checkpoint file is updated in grabFilthiestCompactedLog (this is the fix 
> for KAFKA-6266). We then create a LogToClean object based on the 
> firstDirtyOffset and firstUncleanableOffset of 166301 (past the active 
> segment’s base offset).
> The LogToClean object has cleanBytes = logSegments(-1, 
> firstDirtyOffset).map(_.size).sum → the size of this segment. It has 
> firstUncleanableOffset and cleanableBytes determined by 
> calculateCleanableBytes. calculateCleanableBytes returns:
> {code:scala}
> val firstUncleanableSegment = log.nonActiveLogSegmentsFrom(uncleanableOffset).headOption.getOrElse(log.activeSegment)
> val firstUncleanableOffset = firstUncleanableSegment.baseOffset
> val cleanableBytes = log.logSegments(firstDirtyOffset, math.max(firstDirtyOffset, firstUncleanableOffset)).map(_.size.toLong).sum
> (firstUncleanableOffset, cleanableBytes)
> {code}
> firstUncleanableSegment is activeSegment. firstUncleanableOffset is its base 
> offset, 166266. cleanableBytes looks at logSegments(166301, max(166301, 
> 166266)) → which _is the active segment_.
> So there are “cleanableBytes” > 0.
> We then filter for segments with totalBytes (clean + cleanable) > 0. This 
> segment has totalBytes > 0, and it has cleanableBytes, so great! It’s 
> filthiest.
> The cleaner picks it, calls cleanLog on it, which then does cleaner.clean, 
> which returns nextDirtyOffset and cleaner stats. cleaner.clean calls 
> doClean, which builds an offsetMap. The offsetMap 
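
To make the arithmetic above concrete, here is a small, self-contained sketch
of why cleanableBytes ends up positive (plain Java; the segment lookup is
reduced to a sorted map, mirroring how logSegments(from, to) rounds the from
offset down to the segment containing it; an illustration, not Kafka's code):

{code:java}
import java.util.NavigableMap;
import java.util.TreeMap;

public class CleanableBytesDemo {
    public static void main(String[] args) {
        // Single segment whose base offset (166266) is below the log start offset.
        NavigableMap<Long, Long> segmentSizeByBaseOffset = new TreeMap<>();
        segmentSizeByBaseOffset.put(166266L, 1024L); // baseOffset -> size in bytes

        long firstDirtyOffset = 166301L;  // == logStartOffset
        long activeSegmentBase = 166266L; // the only (active) segment

        // Mirrors logSegments(firstDirtyOffset, max(firstDirtyOffset, firstUncleanableOffset)):
        long to = Math.max(firstDirtyOffset, activeSegmentBase); // max(166301, 166266) = 166301
        // The lookup rounds 166301 down to the segment starting at 166266, so the
        // nominally empty range [166301, 166301) still yields the active segment.
        long cleanableBytes = segmentSizeByBaseOffset.floorEntry(firstDirtyOffset).getValue();

        System.out.println("range [" + firstDirtyOffset + ", " + to + ") -> cleanableBytes = "
                + cleanableBytes); // > 0, so this log keeps getting picked as filthiest
    }
}
{code}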

[jira] [Commented] (KAFKA-9826) Log cleaning repeatedly picks same segment with no effect when first dirty offset is past start of active segment

2020-04-14 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9826?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17083810#comment-17083810
 ] 

ASF GitHub Bot commented on KAFKA-9826:
---

junrao commented on pull request #8469: [KAFKA-9826] Handle an unaligned first 
dirty offset during log cleaning.
URL: https://github.com/apache/kafka/pull/8469
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Log cleaning repeatedly picks same segment with no effect when first dirty 
> offset is past start of active segment
> -
>
> Key: KAFKA-9826
> URL: https://issues.apache.org/jira/browse/KAFKA-9826
> Project: Kafka
>  Issue Type: Bug
>  Components: log cleaner
>Affects Versions: 2.4.1
>Reporter: Steve Rodrigues
>Assignee: Steve Rodrigues
>Priority: Major
>
> Seen on a system where a given partition had a single segment and, for 
> whatever reason (deleteRecords?), the logStartOffset was greater than the 
> base offset of the log's base segment. There was a continuous series of 
> ```
> [2020-03-03 16:56:31,374] WARN Resetting first dirty offset of  FOO-3 to log 
> start offset 55649 since the checkpointed offset 0 is invalid. 
> (kafka.log.LogCleanerManager$)
> ```
> messages (partition name changed, it wasn't really FOO). This was expected to 
> be resolved by KAFKA-6266 but clearly wasn't. 
> Further investigation revealed that a few segments were being continuously 
> cleaned, generating messages in the `log-cleaner.log` of the form:
> ```
> [2020-03-31 13:34:50,924] INFO Cleaner 1: Beginning cleaning of log FOO-3 
> (kafka.log.LogCleaner)
> [2020-03-31 13:34:50,924] INFO Cleaner 1: Building offset map for FOO-3... 
> (kafka.log.LogCleaner)
> [2020-03-31 13:34:50,927] INFO Cleaner 1: Building offset map for log FOO-3 
> for 0 segments in offset range [55287, 54237). (kafka.log.LogCleaner)
> [2020-03-31 13:34:50,927] INFO Cleaner 1: Offset map for log FOO-3 complete. 
> (kafka.log.LogCleaner)
> [2020-03-31 13:34:50,927] INFO Cleaner 1: Cleaning log FOO-3 (cleaning prior 
> to Wed Dec 31 19:00:00 EST 1969, discarding tombstones prior to Tue Dec 10 
> 13:39:08 EST 2019)... (kafka.log.LogCleaner)
> [2020-03-31 13:34:50,927] INFO [kafka-log-cleaner-thread-1]: Log cleaner 
> thread 1 cleaned log FOO-3 (dirty section = [55287, 55287])
> 0.0 MB of log processed in 0.0 seconds (0.0 MB/sec).
> Indexed 0.0 MB in 0.0 seconds (0.0 Mb/sec, 100.0% of total time)
> Buffer utilization: 0.0%
> Cleaned 0.0 MB in 0.0 seconds (NaN Mb/sec, 0.0% of total time)
> Start size: 0.0 MB (0 messages)
> End size: 0.0 MB (0 messages) NaN% size reduction (NaN% fewer messages) 
> (kafka.log.LogCleaner)
> ```
> What seems to have happened here (data determined for a different partition) 
> is:
> There exist a number of partitions here which get relatively low traffic, 
> including our friend FOO-5. For whatever reason, LogStartOffset of this 
> partition has moved beyond the baseOffset of the active segment. (Notes in 
> other issues indicate that this is a reasonable scenario.) So there’s one 
> segment, starting at 166266, and a log start of 166301.
> grabFilthiestCompactedLog runs and reads the checkpoint file. We see that 
> this topicpartition needs to be cleaned, and call cleanableOffsets on it 
> which returns an OffsetsToClean with firstDirtyOffset == logStartOffset == 
> 166301 and firstUncleanableOffset = max(logStart, activeSegment.baseOffset) = 
> 166301, and forceCheckpoint = true.
> The checkpoint file is updated in grabFilthiestCompactedLog (this is the fix 
> for KAFKA-6266). We then create a LogToClean object based on the 
> firstDirtyOffset and firstUncleanableOffset of 166301 (past the active 
> segment’s base offset).
> The LogToClean object has cleanBytes = logSegments(-1, 
> firstDirtyOffset).map(_.size).sum → the size of this segment. It has 
> firstUncleanableOffset and cleanableBytes determined by 
> calculateCleanableBytes. calculateCleanableBytes returns:
> {code:scala}
> val firstUncleanableSegment = log.nonActiveLogSegmentsFrom(uncleanableOffset).headOption.getOrElse(log.activeSegment)
> val firstUncleanableOffset = firstUncleanableSegment.baseOffset
> val cleanableBytes = log.logSegments(firstDirtyOffset, math.max(firstDirtyOffset, firstUncleanableOffset)).map(_.size.toLong).sum
> (firstUncleanableOffset, cleanableBytes)
> {code}
> firstUncleanableSegment is activeSegment. firstUncleanableOffset is the base 
> offset, 166266. cleanableBytes is looking for logSegments(166301, max

[jira] [Created] (KAFKA-9871) Consumer group description showing duplicate partition information

2020-04-14 Thread OM Singh (Jira)
OM Singh created KAFKA-9871:
---

 Summary: Consumer group description showing duplicate partition 
information
 Key: KAFKA-9871
 URL: https://issues.apache.org/jira/browse/KAFKA-9871
 Project: Kafka
  Issue Type: Bug
  Components: consumer
Affects Versions: 2.2.1
 Environment: Preprod environment (Staging )
Reporter: OM Singh
 Attachments: externalDevice-lag

The Kafka consumer group describe command shows duplicate values for the same partition:
TOPIC                                PARTITION  CURRENT-OFFSET  LOG-END-OFFSET  LAG    CONSUMER-ID  HOST
qea1.preprod.fe.deviceTopic          3          3061365         3061365         0      feexternaldevicestreamproducer-preprod-63dcef9e-f721-457f-8273-d7761aa24844-StreamThread-5-consumer-248ba8e8-dc47-4763-884f-2e07592b66e5  /10.243.227.103
qea1.preprod.feexternal.deviceTopic  3          2525385         2568565         43180  -            -

The qea1.preprod.feexternal.deviceTopic topic has 60 partitions. We observed 
this issue with 12 partitions. It does not go away even after restarting 
containers or resetting the offsets. 
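
A programmatic way to cross-check the CLI output is to read the group's
assignment via the Java AdminClient and flag any partition claimed by more than
one member. A rough sketch (group name and bootstrap server are placeholders):

{code:java}
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.ConsumerGroupDescription;
import org.apache.kafka.clients.admin.MemberDescription;
import org.apache.kafka.common.TopicPartition;

import java.util.*;

public class DuplicateAssignmentCheck {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "broker:9092"); // placeholder
        try (AdminClient admin = AdminClient.create(props)) {
            ConsumerGroupDescription group = admin
                    .describeConsumerGroups(Collections.singleton("my-group")) // placeholder
                    .describedGroups().get("my-group").get();

            // Map each partition to the members that claim it; >1 owner is a bug.
            Map<TopicPartition, List<String>> owners = new HashMap<>();
            for (MemberDescription member : group.members()) {
                for (TopicPartition tp : member.assignment().topicPartitions()) {
                    owners.computeIfAbsent(tp, k -> new ArrayList<>()).add(member.consumerId());
                }
            }
            owners.forEach((tp, ids) -> {
                if (ids.size() > 1) {
                    System.out.println(tp + " claimed by multiple members: " + ids);
                }
            });
        }
    }
}
{code}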



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (KAFKA-9870) Error is reported until the cluster is restarted. Error for partition [__consumer_offsets,19] to broker 0:org.apache.kafka.common.errors.NotLeaderForPartitionException: T

2020-04-14 Thread yonglu gao (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9870?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

yonglu gao updated KAFKA-9870:
--
Description: 
I have a Kafka cluster. When broker 2 restarted, errors appeared and kept 
being reported until the cluster was restarted in turn after 3 hours:

 

broker2:
ERROR [KafkaApi-2] Error when handling request 
\{replica_id=0,max_wait_time=500,min_bytes=1,max_bytes=10485760,isolation_level=0,topics=[{topic=__consumer_offsets,partitions=[{partition=13,fetch_offset=17465556,log_start_offset=0,max_bytes=1048576}]}]}
 (kafka.server.KafkaApis)
kafka.common.NotAssignedReplicaException: Leader 2 failed to record follower 
0's position -1 since the replica is not recognized to be one of the assigned 
replicas for partition __consumer_offsets-13.
    at kafka.cluster.Partition.updateReplicaLogReadResult(Partition.scala:274)

 
[2020-04-11 03:01:45,760] INFO Partition [__consumer_offsets,13] on broker 2: 
Shrinking ISR from 2,1,0 to 2 (kafka.cluster.Partition)
[2020-04-11 03:02:32,046] ERROR [ReplicaFetcherThread-0-0]: Error for partition 
[__consumer_offsets,19] to broker 
0:org.apache.kafka.common.errors.NotLeaderForPartitionException: This server is 
not the leader for that topic-partition. (kafka.server.ReplicaFetcherThread)
[2020-04-11 03:02:32,053] ERROR [ReplicaFetcherThread-0-0]: Error for partition 
[base_registerAndCardAdd,26] to broker 
0:org.apache.kafka.common.errors.NotLeaderForPartitionException: This server is 
not the leader for that topic-partition. (kafka.server.ReplicaFetcherThread)
[2020-04-11 03:02:32,059] ERROR [ReplicaFetcherThread-0-0]: Error for partition 
[mac-base-topic,15] to broker 
0:org.apache.kafka.common.errors.NotLeaderForPartitionException: This server is 
not the leader for that topic-partition. (kafka.server.ReplicaFetcherThread)
[2020-04-11 03:02:32,067] ERROR [ReplicaFetcherThread-0-0]: Error for partition 
[pay-total-order-topic,24] to broker 
0:org.apache.kafka.common.errors.NotLeaderForPartitionException: This server is 
not the leader for that topic-partition. (kafka.server.ReplicaFetcherThread)
[2020-04-11 03:02:32,072] ERROR [ReplicaFetcherThread-0-0]: Error for partition 
[pay-total-order-topic,30] to broker 
0:org.apache.kafka.common.errors.NotLeaderForPartitionException: This server is 
not the leader for that topic-partition. (kafka.server.ReplicaFetcherThread)
[2020-04-11 03:02:32,078] ERROR [ReplicaFetcherThread-0-0]: Error for partition 
[base_registerAndCardAdd,56] to broker 
0:org.apache.kafka.common.errors.NotLeaderForPartitionException: This server is 
not the leader for that topic-partition. (kafka.server.ReplicaFetcherThread)
[2020-04-11 03:02:32,087] ERROR [ReplicaFetcherThread-0-0]: Error for partition 
[mac-dbasync-topic,45] to broker 
0:org.apache.kafka.common.errors.NotLeaderForPartitionException: This server is 
not the leader for that topic-partition. (kafka.server.ReplicaFetcherThread)

*These errors continue until the cluster is restarted.*

*broker 0 & 1:*



[2020-04-11 03:02:05,261] WARN [ReplicaFetcherThread-0-2]: Error in fetch to 
broker 2, request (type=FetchRequest, replicaId=1, maxWait=500, minBytes=1, 
maxBytes=10485760, fetchData={__consumer_offsets-13=(offset=17477531, 
logStartOffset=0, maxBytes=1048576)}) (kafka.server.ReplicaFetcherThread)
java.io.IOException: Connection to 2 was disconnected before the response was 
read
at 
org.apache.kafka.clients.NetworkClientUtils.sendAndReceive(NetworkClientUtils.java:93)
at 
kafka.server.ReplicaFetcherBlockingSend.sendRequest(ReplicaFetcherBlockingSend.scala:93)
at 
kafka.server.ReplicaFetcherThread.fetch(ReplicaFetcherThread.scala:207)
at 
kafka.server.ReplicaFetcherThread.fetch(ReplicaFetcherThread.scala:42)
at 
kafka.server.AbstractFetcherThread.processFetchRequest(AbstractFetcherThread.scala:151)
at 
kafka.server.AbstractFetcherThread.doWork(AbstractFetcherThread.scala:112)
at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:64)
[2020-04-11 03:02:32,041] INFO [ReplicaFetcherManager on broker 1] Removed 
fetcher for partitions __consumer_offsets-19 
(kafka.server.ReplicaFetcherManager)
[2020-04-11 03:02:32,041] INFO [ReplicaFetcherManager on broker 1] Added 
fetcher for partitions List([__consumer_offsets-19, initOffset 0 to broker 
BrokerEndPoint(2,19.13.139.125,9092)] ) (kafka.server.ReplicaFetcherManager)
[2020-04-11 03:02:32,041] INFO [Group Metadata Manager on Broker 1]: Scheduling 
unloading of offsets and group metadata from __consumer_offsets-19 
(kafka.coordinator.group.GroupMetadataManager)
[2020-04-11 03:02:32,041] INFO [Group Metadata Manager on Broker 1]: Finis

[jira] [Created] (KAFKA-9870) Error is reported until the cluster is restarted. Error for partition [__consumer_offsets,19] to broker 0:org.apache.kafka.common.errors.NotLeaderForPartitionException: T

2020-04-14 Thread yonglu gao (Jira)
yonglu gao created KAFKA-9870:
-

 Summary: Error is reported until the cluster is restarted. Error 
for partition [__consumer_offsets,19] to broker 
0:org.apache.kafka.common.errors.NotLeaderForPartitionException: This server is 
not the leader for that topic-partition. 
 Key: KAFKA-9870
 URL: https://issues.apache.org/jira/browse/KAFKA-9870
 Project: Kafka
  Issue Type: Bug
  Components: core
Affects Versions: 0.11.0.0
 Environment: jdk 1.8
Reporter: yonglu gao


I have a Kafka cluster. When broker 2 restarted, errors appeared and kept 
being reported until the cluster was restarted in turn after 3 hours:

 

broker2:
ERROR [KafkaApi-2] Error when handling request 
\{replica_id=0,max_wait_time=500,min_bytes=1,max_bytes=10485760,isolation_level=0,topics=[{topic=__consumer_offsets,partitions=[{partition=13,fetch_offset=17465556,log_start_offset=0,max_bytes=1048576}]}]}
 (kafka.server.KafkaApis)
kafka.common.NotAssignedReplicaException: Leader 2 failed to record follower 
0's position -1 since the replica is not recognized to be one of the assigned 
replicas for partition __consumer_offsets-13.
    at kafka.cluster.Partition.updateReplicaLogReadResult(Partition.scala:274)
    at kafka.server.ReplicaManager$$anonfun$updateFollowerLogReadResults$2.apply(ReplicaManager.scala:1091)
    at kafka.server.ReplicaManager$$anonfun$updateFollowerLogReadResults$2.apply(ReplicaManager.scala:1088)
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
    at kafka.server.ReplicaManager.updateFollowerLogReadResults(ReplicaManager.scala:1088)
    at kafka.server.ReplicaManager.fetchMessages(ReplicaManager.scala:622)
    at kafka.server.KafkaApis.handleFetchRequest(KafkaApis.scala:606)
    at kafka.server.KafkaApis.handle(KafkaApis.scala:98)
    at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:66)
    at java.lang.Thread.run(Thread.java:748)

[2020-04-11 03:00:32,285] INFO [ReplicaFetcherManager on broker 2] Removed fetcher for partitions 
test-2,base_registerAndCardAdd-25,pay-total-order-topic-30,base_registerAndCardAdd-7,pay-total-order-topic-12,base_registerAndCardAdd-51,mac-dbasync-topic-51,test-10,base_registerAndCardAdd-33,pay-total-order-topic-38,mac-base-topic-26,base_registerAndCardAdd-15,pay-total-order-topic-39,mac-dbasync-topic-59,mac-base-topic-8,pay-total-order-topic-21,test-0,base_registerAndCardAdd-23,pay-total-order-topic-3,pay-total-order-topic-47,mac-base-topic-16,base_registerAndCardAdd-24,pay-total-order-topic-29,__consumer_offsets-31,mac-dbasync-topic-31,mac-base-topic-24,mac-dbasync-topic-13,mac-dbasync-topic-57,mac-base-topic-6,__consumer_offsets-39,mac-dbasync-topic-39,mac-dbasync-topic-21,mac-base-topic-14,mac-dbasync-topic-47,__consumer_offsets-48,pay-total-order-topic-9,mac-dbasync-topic-48,__consumer_offsets-30,mac-dbasync-topic-30,mac-base-topic-4,__consumer_offsets-19,mac-dbasync-topic-19,base-beginnerOper-topic-56,__consumer_offsets-1,mac-dbasync-topic-1,base-beginnerOper-topic-38,__consumer_offsets-2,base-beginnerOper-topic-20,__consumer_offsets-28,base-beginnerOper-topic-46,__consumer_offsets-10,base-beginnerOper-topic-28,__consumer_offsets-36,base-beginnerOper-topic-54,__consumer_offsets-18,base_registerAndCardAdd-56,pay-total-order-topic-61,base-beginnerOper-topic-36,base_registerAndCardAdd-38,base-beginnerOper-topic-18,base-beginnerOper-topic-0,base-beginnerOper-topic-44,base_registerAndCardAdd-46,base-beginnerOper-topic-26,base-beginnerOper-topic-8,base-beginnerOper-topic-16,pay-total-order-topic-60,pay-total-order-topic-23,mac-dbasync-topic-62,base_registerAndCardAdd-0,base_registerAndCardAdd-44,mac-dbasync-topic-44,test-3,base_registerAndCardAdd-26,pay-total-order-topic-31,mac-base-topic-19,base_registerAndCardAdd-8,pay-total-order-topic-32,mac-base-topic-1,test-11,pay-total-order-topic-14,pay-total-order-topic-58,mac-base-topic-27,test-12,base_registerAndCardAdd-16,pay-total-order-topic-40,base_registerAndCardAdd-17,pay-total-order-topic-22,base_registerAndCardAdd-43,pay-total-order-topic-48,pay-total-order-topic-4,mac-dbasync-topic-24,mac-base-topic-17,mac-dbasync-topic-50,mac-dbasync-topic-6,mac-dbasync-topic-32,mac-base-topic-25,mac-dbasync-topic-58,mac-base-topic-7,pay-total-order-topic-20,mac-dbasync-topic-40,__consumer_offsets-41,pay-total-order-topic-2,mac-dbasync-topic-41,__consumer_offsets-23,mac-dbasync-topic-23,__consumer_offsets-49,base_registerAndCardAdd-5,mac-dbasync-topic-49,__consumer_offsets-12,mac-dbasync-topic-12,base-beginnerOper-topic-49,base-beginnerOper-topic-31,base-beginnerOper-topic-57,base-beginnerOper-topic-13,__consumer_offsets-21,base-beginnerOper-topic-39,__consumer_offsets-3,__consumer_offsets-47,mac-dbasync-topic-3,__consumer_offsets-29,mac-dbasync-topic-29,base-beginnerOper-topic-47,__consumer_offsets-11,mac-dbasync-topic-11,base-beginnerOper-topic-4

[jira] [Updated] (KAFKA-9869) kafka offset is discontinuous when EOFException appears

2020-04-14 Thread Lee chen (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9869?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lee chen updated KAFKA-9869:

Description: 
 My Kafka version is 0.11.x. The producer I used sets compression.type = gzip, 
while the broker's compression.type is set to "producer".
 When I send messages to a topic and use SimpleConsumer to read them back, 
reading partition-1's offset 167923404 raises an error: 
java.util.zip.ZipException.
 I used the tool ./kafka-simple-consumer-shell.sh to read partition-1's data 
at 167923404, but it cannot be read. It seems this message is missing. I did 
not damage the log dir of partition-1, nor delete this offset (e.g. with the 
tool kafka-delete-records.sh), yet the offset is discontinuous. 

 

  was:
 My Kafka version is 0.11.x. The producer I used sets compression.type = gzip, 
while the broker's compression.type is set to "producer".
 When I send messages to a topic and use SimpleConsumer to read them back, 
reading partition-1's offset 167923404 raises an error: 
java.util.zip.ZipException.
 I used the tool ./kafka-simple-consumer-shell.sh to read partition-1's data 
at 167923404, but it cannot be read. It seems this message is missing. I did 
not damage the log dir of partition-1, nor delete this offset (e.g. with the 
tool kafka-delete-records.sh), and the offset 

 


> kafka offset is discontinuous when EOFException appears
> 
>
> Key: KAFKA-9869
> URL: https://issues.apache.org/jira/browse/KAFKA-9869
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.11.0.4
>Reporter: Lee chen
>Priority: Major
>
>  My Kafka version is 0.11.x. The producer I used sets compression.type = 
> gzip, while the broker's compression.type is set to "producer".
>  When I send messages to a topic and use SimpleConsumer to read them back, 
> reading partition-1's offset 167923404 raises an error: 
> java.util.zip.ZipException.
>  I used the tool ./kafka-simple-consumer-shell.sh to read partition-1's 
> data at 167923404, but it cannot be read. It seems this message is missing. I 
> did not damage the log dir of partition-1, nor delete this offset (e.g. with 
> the tool kafka-delete-records.sh), yet the offset is discontinuous. 
> 
>  
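
One way to probe a single suspicious offset with the modern consumer (instead
of the deprecated SimpleConsumer) is to assign the partition and seek directly
to it. A sketch; the broker address and topic name are placeholders, while the
partition and offset come from the report:

{code:java}
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

public class ProbeOffset {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker:9092"); // placeholder
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.ByteArrayDeserializer");
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.ByteArrayDeserializer");

        TopicPartition tp = new TopicPartition("my-topic", 1); // placeholder topic, partition-1
        try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props)) {
            consumer.assign(Collections.singleton(tp));
            consumer.seek(tp, 167923404L); // the offset that raised the ZipException
            ConsumerRecords<byte[], byte[]> records = consumer.poll(Duration.ofSeconds(5));
            for (ConsumerRecord<byte[], byte[]> record : records) {
                System.out.printf("offset=%d valueBytes=%d%n",
                        record.offset(), record.value() == null ? 0 : record.value().length);
            }
        }
    }
}
{code}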



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (KAFKA-9869) kafka offset is discontinuous when EOFException appears

2020-04-14 Thread Lee chen (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9869?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lee chen updated KAFKA-9869:

Description: 
 My Kafka version is 0.11.x. The producer I used sets compression.type = gzip, 
while the broker's compression.type is set to "producer".
 When I send messages to a topic and use SimpleConsumer to read them back, 
reading partition-1's offset 167923404 raises an error: 
java.util.zip.ZipException.
 I used the tool ./kafka-simple-consumer-shell.sh to read partition-1's data 
at 167923404, but it cannot be read. It seems this message is missing. I did 
not damage the log dir of partition-1, nor delete this offset (e.g. with the 
tool kafka-delete-records.sh), and the offset 

 

  was:
 My Kafka version is 0.11.x. The producer I used sets compression.type = gzip, 
while the broker's compression.type is set to "producer".
 When I send messages to a topic and use SimpleConsumer to read them back, 
reading partition-1's offset 167923404 raises an error: 
java.util.zip.ZipException.
 I used the tool ./kafka-simple-consumer-shell.sh to read partition-1's data 
at 167923404, but it cannot be read. It seems this message is missing.

 


> kafka offset is discontinuous when EOFException appears
> 
>
> Key: KAFKA-9869
> URL: https://issues.apache.org/jira/browse/KAFKA-9869
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.11.0.4
>Reporter: Lee chen
>Priority: Major
>
>  My Kafka version is 0.11.x. The producer I used sets compression.type = 
> gzip, while the broker's compression.type is set to "producer".
>  When I send messages to a topic and use SimpleConsumer to read them back, 
> reading partition-1's offset 167923404 raises an error: 
> java.util.zip.ZipException.
>  I used the tool ./kafka-simple-consumer-shell.sh to read partition-1's 
> data at 167923404, but it cannot be read. It seems this message is missing. I 
> did not damage the log dir of partition-1, nor delete this offset (e.g. with 
> the tool kafka-delete-records.sh), and the 
> offset 
> 
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-9869) kafka offset is discontinuous when EOFException appears

2020-04-14 Thread Lee chen (Jira)
Lee chen created KAFKA-9869:
---

 Summary: kafka offset is discontinuous when EOFException appears
 Key: KAFKA-9869
 URL: https://issues.apache.org/jira/browse/KAFKA-9869
 Project: Kafka
  Issue Type: Bug
  Components: core
Affects Versions: 0.11.0.4
Reporter: Lee chen


 My Kafka version is 0.11.x. The producer I used sets compression.type = gzip, 
while the broker's compression.type is set to "producer".
 When I send messages to a topic and use SimpleConsumer to read them back, 
reading partition-1's offset 167923404 raises an error: 
java.util.zip.ZipException.
 I used the tool ./kafka-simple-consumer-shell.sh to read partition-1's data 
at 167923404, but it cannot be read. It seems this message is missing.

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-9857) Failed to build image ducker-ak-openjdk-8 on arm

2020-04-14 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17083744#comment-17083744
 ] 

ASF GitHub Bot commented on KAFKA-9857:
---

jiameixie commented on pull request #8489: KAFKA-9857:Failed to build image 
ducker-ak-openjdk-8 on arm
URL: https://github.com/apache/kafka/pull/8489
 
 
   The default OpenJDK base image is openjdk:8. When building the image on arm,
   a "no matching manifest for linux/arm64/v8 in the manifest list entries"
   error occurs. For arm, the default OpenJDK image should be set to arm64v8/openjdk:8.
   
   Change-Id: Ib450a36b3977a167743c24476ec1810f4830b66b
   Signed-off-by: Jiamei Xie 
   
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Failed to build image ducker-ak-openjdk-8 on arm
> 
>
> Key: KAFKA-9857
> URL: https://issues.apache.org/jira/browse/KAFKA-9857
> Project: Kafka
>  Issue Type: Bug
>  Components: system tests
>Reporter: jiamei xie
>Assignee: jiamei xie
>Priority: Major
>
> Building the image ducker-ak-openjdk-8 fails on arm; its log is below. This 
> issue is to fix it.
> kafka/tests/docker$ ./run_tests.sh
> Sending build context to Docker daemon  53.76kB
> Step 1/43 : ARG jdk_version=openjdk:8
> Step 2/43 : FROM $jdk_version
> 8: Pulling from library/openjdk
> no matching manifest for linux/arm64/v8 in the manifest list entries
> docker failed
> + die 'ducker-ak up failed'
> + echo ducker-ak up failed
> ducker-ak up failed
> + exit 1



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-9298) Reuse of a mapped stream causes an Invalid Topology

2020-04-14 Thread Bill Bejeck (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9298?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17083736#comment-17083736
 ] 

Bill Bejeck commented on KAFKA-9298:


Investigating this more to nail down when this bug was introduced, it seems it 
has been a long-standing issue. The following test fails with the same exception 
on 2.0, which is before the optimization layer was added.
{noformat}
@Test
public void testReuseJoin() {
    final StreamsBuilder builder = new StreamsBuilder();
    final KStream<String, String> stream1 = builder.stream("input", Consumed.with(Serdes.String(), Serdes.String()));
    final KStream<String, String> stream2 = builder.stream("input2", Consumed.with(Serdes.String(), Serdes.String()));
    final KStream<String, String> stream3 = builder.stream("input3", Consumed.with(Serdes.String(), Serdes.String()));

    final KStream<String, String> mappedStream = stream1.map(KeyValue::pair);
    final KStream<String, String> joinOne = mappedStream.join(stream2, (v1, v2) -> v1 + v2, JoinWindows.of(5000));
    final KStream<String, String> joinTwo = mappedStream.join(stream3, (v1, v2) -> v1 + v2, JoinWindows.of(5000));

    System.out.println(builder.build().describe().toString());
}{noformat}
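
For reference, the usual workaround on these versions is to materialize the
mapped stream once through an explicitly named topic before reusing it, so both
joins consume the same source. A sketch, assuming a pre-created
"mapped-repartition" topic (through() does not create topics for you):

{noformat}
@Test
public void reuseViaThrough() {
    final StreamsBuilder builder = new StreamsBuilder();
    final KStream<String, String> stream1 = builder.stream("input", Consumed.with(Serdes.String(), Serdes.String()));
    final KStream<String, String> stream2 = builder.stream("input2", Consumed.with(Serdes.String(), Serdes.String()));
    final KStream<String, String> stream3 = builder.stream("input3", Consumed.with(Serdes.String(), Serdes.String()));

    // Materialize the key-changed stream through a user-managed topic, so both
    // joins read the repartitioned data from one place.
    final KStream<String, String> mapped = stream1.map(KeyValue::pair).through("mapped-repartition");

    mapped.join(stream2, (v1, v2) -> v1 + v2, JoinWindows.of(5000));
    mapped.join(stream3, (v1, v2) -> v1 + v2, JoinWindows.of(5000));

    System.out.println(builder.build().describe().toString());
}
{noformat}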

> Reuse of a mapped stream causes an Invalid Topology
> ---
>
> Key: KAFKA-9298
> URL: https://issues.apache.org/jira/browse/KAFKA-9298
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Walker Carlson
>Assignee: Bill Bejeck
>Priority: Minor
>  Labels: join, streams
>
> Can be found in KStreamKStreamJoinTest.java
> {code:java}
> @Test
> public void optimizerIsEager() {
>   final StreamsBuilder builder = new StreamsBuilder();
>   final KStream<String, String> stream1 = builder.stream("topic", 
> Consumed.with(Serdes.String(), Serdes.String()));
>   final KStream<String, String> stream2 = builder.stream("topic2", 
> Consumed.with(Serdes.String(), Serdes.String()));
>   final KStream<String, String> stream3 = builder.stream("topic3", 
> Consumed.with(Serdes.String(), Serdes.String()));
>   final KStream<String, String> newStream = stream1.map((k, v) -> new 
> KeyValue<>(v, k));
>   newStream.join(stream2, (value1, value2) -> value1 + value2, 
> JoinWindows.of(ofMillis(100)), StreamJoined.with(Serdes.String(), 
> Serdes.String(), Serdes.String()));
>   newStream.join(stream3, (value1, value2) -> value1 + value2, 
> JoinWindows.of(ofMillis(100)), StreamJoined.with(Serdes.String(), 
> Serdes.String(), Serdes.String()));
>   System.err.println(builder.build().describe().toString());
> }
>  
> {code}
> results in:
>  Invalid topology: Topic KSTREAM-MAP-03-repartition has already been 
> registered by another source.
>  org.apache.kafka.streams.errors.TopologyException: Invalid topology: Topic 
> KSTREAM-MAP-03-repartition has already been registered by another 
> source.
>  at 
> org.apache.kafka.streams.processor.internals.InternalTopologyBuilder.validateTopicNotAlreadyRegistered(InternalTopologyBuilder.java:578)
>  at 
> org.apache.kafka.streams.processor.internals.InternalTopologyBuilder.addSource(InternalTopologyBuilder.java:378)
>  at 
> org.apache.kafka.streams.kstream.internals.graph.OptimizableRepartitionNode.writeToTopology(OptimizableRepartitionNode.java:100)
>  at 
> org.apache.kafka.streams.kstream.internals.InternalStreamsBuilder.buildAndOptimizeTopology(InternalStreamsBuilder.java:303)
>  at org.apache.kafka.streams.StreamsBuilder.build(StreamsBuilder.java:562)
>  at org.apache.kafka.streams.StreamsBuilder.build(StreamsBuilder.java:551)
>  at 
> org.apache.kafka.streams.kstream.internals.KStreamKStreamJoinTest.optimizerIsEager(KStreamKStreamJoinTest.java:136)
>  at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
>  at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>  at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.base/java.lang.reflect.Method.invoke(Method.java:566)
>  at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>  at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:305)
>  at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
>  at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:365)
>  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
>  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
>  at org.junit.runners.ParentRunner$4.run(ParentRunner.java:330)
>  at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:78)
>  at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:328)
>  at org.junit.runners.ParentRunner.access$10

[jira] [Commented] (KAFKA-9819) Flaky Test StoreChangelogReaderTest#shouldNotThrowOnUnknownRevokedPartition[0]

2020-04-14 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9819?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17083734#comment-17083734
 ] 

ASF GitHub Bot commented on KAFKA-9819:
---

mjsax commented on pull request #8488: KAFKA-9819: Fix flaky test in 
StoreChangelogReaderTest
URL: https://github.com/apache/kafka/pull/8488
 
 
   Call for review @ableegoldman @vvcephei 
   
   The test failed with a log message from `AdminClient`; however, we don't 
use any admin client in this test. Thus, we should not capture the JVM-shared 
root logger, but only the logger for the test class.
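
A minimal illustration of the idea with plain log4j 1.x: attach the capturing
appender to a class-specific logger rather than the root logger, so messages
from unrelated clients never show up in the captured output. The class and
helper names here are illustrative, not the test's actual utilities:

{code:java}
import org.apache.log4j.AppenderSkeleton;
import org.apache.log4j.Logger;
import org.apache.log4j.spi.LoggingEvent;

import java.util.ArrayList;
import java.util.List;

public class ScopedLogCapture extends AppenderSkeleton {
    private final List<String> messages = new ArrayList<>();

    @Override
    protected void append(LoggingEvent event) {
        messages.add(event.getRenderedMessage());
    }

    @Override
    public void close() { }

    @Override
    public boolean requiresLayout() {
        return false;
    }

    public static void main(String[] args) {
        ScopedLogCapture capture = new ScopedLogCapture();
        // Attaching to the class logger (not Logger.getRootLogger()) means a
        // concurrently running AdminClient cannot pollute the captured output.
        Logger logger = Logger.getLogger(
                "org.apache.kafka.streams.processor.internals.StoreChangelogReader");
        logger.addAppender(capture);
        logger.warn("only messages from this logger are recorded");
        System.out.println(capture.messages);
    }
}
{code}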
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Flaky Test StoreChangelogReaderTest#shouldNotThrowOnUnknownRevokedPartition[0]
> --
>
> Key: KAFKA-9819
> URL: https://issues.apache.org/jira/browse/KAFKA-9819
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Sophie Blee-Goldman
>Assignee: Matthias J. Sax
>Priority: Major
>  Labels: flaky-test
>
> h3. Error Message
> java.lang.AssertionError: expected:<[test-reader Changelog partition 
> unknown-0 could not be found, it could be already cleaned up during the 
> handlingof task corruption and never restore again]> but was:<[[AdminClient 
> clientId=adminclient-91] Connection to node -1 (localhost/127.0.0.1:8080) 
> could not be established. Broker may not be available., test-reader Changelog 
> partition unknown-0 could not be found, it could be already cleaned up during 
> the handlingof task corruption and never restore again]>
> h3. Stacktrace
> java.lang.AssertionError: expected:<[test-reader Changelog partition 
> unknown-0 could not be found, it could be already cleaned up during the 
> handlingof task corruption and never restore again]> but was:<[[AdminClient 
> clientId=adminclient-91] Connection to node -1 (localhost/127.0.0.1:8080) 
> could not be established. Broker may not be available., test-reader Changelog 
> partition unknown-0 could not be found, it could be already cleaned up during 
> the handlingof task corruption and never restore again]>



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-8611) Add KStream#repartition operation

2020-04-14 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-8611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17083731#comment-17083731
 ] 

ASF GitHub Bot commented on KAFKA-8611:
---

mjsax commented on pull request #8470: KAFKA-8611 / Refactor 
KStreamRepartitionIntegrationTest
URL: https://github.com/apache/kafka/pull/8470
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Add KStream#repartition operation
> -
>
> Key: KAFKA-8611
> URL: https://issues.apache.org/jira/browse/KAFKA-8611
> Project: Kafka
>  Issue Type: New Feature
>  Components: streams
>Reporter: Levani Kokhreidze
>Assignee: Levani Kokhreidze
>Priority: Minor
>  Labels: kip
> Fix For: 2.6.0
>
>
> When using the DSL in Kafka Streams, data repartitioning happens only when a 
> key-changing operation is followed by a stateful operation. On the other hand, 
> in the DSL, stateful computation can also happen using the _transform()_ 
> operation. The problem with this approach is that, even if an upstream 
> operation was key-changing before calling _transform()_, no auto-repartition 
> is triggered. If repartitioning is required, a call to _through(String)_ must 
> be performed before _transform()_. With the current implementation, the burden 
> of creating and managing the topic falls on the user and introduces extra 
> complexity in managing a Kafka Streams application.
> KIP-221: 
> [https://cwiki.apache.org/confluence/display/KAFKA/KIP-221%3A+Enhance+DSL+with+Connecting+Topic+Creation+and+Repartition+Hint]
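
For reference, a sketch of what the added operation looks like in the DSL once
this ships (topic names and the partition count are illustrative; the
repartition topic is fully managed by Streams):

{code:java}
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.Repartitioned;

public class RepartitionExample {
    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> input =
                builder.stream("input-topic", Consumed.with(Serdes.String(), Serdes.String()));

        // A key-changing operation no longer needs a manual through() topic:
        // repartition() inserts a Streams-managed repartition topic.
        input.selectKey((k, v) -> v)
             .repartition(Repartitioned.<String, String>as("my-repartition")
                     .withNumberOfPartitions(4))
             .to("output-topic");

        System.out.println(builder.build().describe());
    }
}
{code}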



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-8079) Flaky Test EpochDrivenReplicationProtocolAcceptanceTest#shouldSurviveFastLeaderChange

2020-04-14 Thread Matthias J. Sax (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-8079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17083730#comment-17083730
 ] 

Matthias J. Sax commented on KAFKA-8079:


[https://builds.apache.org/job/kafka-pr-jdk11-scala2.13/5776/testReport/junit/kafka.server.epoch/EpochDrivenReplicationProtocolAcceptanceTest/shouldSurviveFastLeaderChange/]

> Flaky Test 
> EpochDrivenReplicationProtocolAcceptanceTest#shouldSurviveFastLeaderChange
> -
>
> Key: KAFKA-8079
> URL: https://issues.apache.org/jira/browse/KAFKA-8079
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.3.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.6.0
>
>
> [https://builds.apache.org/blue/organizations/jenkins/kafka-trunk-jdk8/detail/kafka-trunk-jdk8/3445/tests]
> {quote}java.lang.AssertionError
> at org.junit.Assert.fail(Assert.java:87)
> at org.junit.Assert.assertTrue(Assert.java:42)
> at org.junit.Assert.assertTrue(Assert.java:53)
> at 
> kafka.server.epoch.EpochDrivenReplicationProtocolAcceptanceTest.$anonfun$shouldSurviveFastLeaderChange$2(EpochDrivenReplicationProtocolAcceptanceTest.scala:294)
> at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:158)
> at 
> kafka.server.epoch.EpochDrivenReplicationProtocolAcceptanceTest.shouldSurviveFastLeaderChange(EpochDrivenReplicationProtocolAcceptanceTest.scala:273){quote}
> STDOUT
> {quote}[2019-03-08 01:16:02,452] ERROR [ReplicaFetcher replicaId=101, 
> leaderId=100, fetcherId=0] Error for partition topic1-0 at offset 0 
> (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> [2019-03-08 01:16:23,677] ERROR [ReplicaFetcher replicaId=101, leaderId=100, 
> fetcherId=0] Error for partition topic1-0 at offset 0 
> (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> [2019-03-08 01:16:35,779] ERROR [Controller id=100] Error completing 
> preferred replica leader election for partition topic1-0 
> (kafka.controller.KafkaController:76)
> kafka.common.StateChangeFailedException: Failed to elect leader for partition 
> topic1-0 under strategy PreferredReplicaPartitionLeaderElectionStrategy
> at 
> kafka.controller.PartitionStateMachine.$anonfun$doElectLeaderForPartitions$9(PartitionStateMachine.scala:390)
> at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
> at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
> at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
> at 
> kafka.controller.PartitionStateMachine.doElectLeaderForPartitions(PartitionStateMachine.scala:388)
> at 
> kafka.controller.PartitionStateMachine.electLeaderForPartitions(PartitionStateMachine.scala:315)
> at 
> kafka.controller.PartitionStateMachine.doHandleStateChanges(PartitionStateMachine.scala:225)
> at 
> kafka.controller.PartitionStateMachine.handleStateChanges(PartitionStateMachine.scala:141)
> at 
> kafka.controller.KafkaController.kafka$controller$KafkaController$$onPreferredReplicaElection(KafkaController.scala:649)
> at 
> kafka.controller.KafkaController.$anonfun$checkAndTriggerAutoLeaderRebalance$6(KafkaController.scala:1008)
> at scala.collection.immutable.Map$Map1.foreach(Map.scala:128)
> at 
> kafka.controller.KafkaController.kafka$controller$KafkaController$$checkAndTriggerAutoLeaderRebalance(KafkaController.scala:989)
> at 
> kafka.controller.KafkaController$AutoPreferredReplicaLeaderElection$.process(KafkaController.scala:1020)
> at 
> kafka.controller.ControllerEventManager$ControllerEventThread.$anonfun$doWork$1(ControllerEventManager.scala:95)
> at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
> at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:31)
> at 
> kafka.controller.ControllerEventManager$ControllerEventThread.doWork(ControllerEventManager.scala:95)
> at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:89)
> Dumping /tmp/kafka-2158669830092629415/topic1-0/.log
> Starting offset: 0
> baseOffset: 0 lastOffset: 0 count: 1 baseSequence: -1 lastSequence: -1 
> producerId: -1 producerEpoch: -1 partitionLeaderEpoch: 0 isTransactional: 
> false isControl: false position: 0 CreateTime: 1552007783877 size: 141 magic: 
> 2 compresscodec: SNAPPY crc: 2264724941 isvalid: true
> baseOffset: 1 lastOffset: 1 count: 1 baseSequence: -1 lastSequence: -1 
> producerId: -1 producerEpoch: -1 partitionLeaderEpoch: 0 isTransactional: 
> false isControl: false position: 141 CreateTime: 1552007784731 size: 141 
> magic: 2 compresscodec: SNAPPY

[jira] [Resolved] (KAFKA-5676) MockStreamsMetrics should be in o.a.k.test

2020-04-14 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-5676?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax resolved KAFKA-5676.

Resolution: Fixed

> MockStreamsMetrics should be in o.a.k.test
> --
>
> Key: KAFKA-5676
> URL: https://issues.apache.org/jira/browse/KAFKA-5676
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Guozhang Wang
>Priority: Major
>  Labels: newbie
>  Time Spent: 96h
>  Remaining Estimate: 0h
>
> {{MockStreamsMetrics}}'s package should be `o.a.k.test`, not 
> `o.a.k.streams.processor.internals`. 
> In addition, it should not require a {{Metrics}} parameter in its constructor, 
> as that is only needed for its extended base class; the right way of mocking 
> should be to implement {{StreamsMetrics}} with mock behavior rather than to 
> extend a real implementation of {{StreamsMetricsImpl}}.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Reopened] (KAFKA-5676) MockStreamsMetrics should be in o.a.k.test

2020-04-14 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-5676?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax reopened KAFKA-5676:


> MockStreamsMetrics should be in o.a.k.test
> --
>
> Key: KAFKA-5676
> URL: https://issues.apache.org/jira/browse/KAFKA-5676
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Guozhang Wang
>Priority: Major
>  Labels: newbie
>  Time Spent: 96h
>  Remaining Estimate: 0h
>
> {{MockStreamsMetrics}}'s package should be `o.a.k.test`, not 
> `o.a.k.streams.processor.internals`. 
> In addition, it should not require a {{Metrics}} parameter in its constructor, 
> as that is only needed for its extended base class; the right way of mocking 
> should be to implement {{StreamsMetrics}} with mock behavior rather than to 
> extend a real implementation of {{StreamsMetricsImpl}}.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (KAFKA-8856) Add Streams Config for Backward-compatible Metrics

2020-04-14 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8856?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-8856:
---
Fix Version/s: 2.4.0

> Add Streams Config for Backward-compatible Metrics
> --
>
> Key: KAFKA-8856
> URL: https://issues.apache.org/jira/browse/KAFKA-8856
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Bruno Cadonna
>Assignee: Bruno Cadonna
>Priority: Major
> Fix For: 2.4.0
>
>
> With KIP-444 the tag names and names of Streams metrics change. To give 
> users a grace period for updating their corresponding monitoring / alerting 
> ecosystems, a config shall be added that specifies which version of the 
> metrics names will be exposed.
> The definition of the new config is:
> name: built.in.metrics.version
> type: Enum
> values: {"0.10.0", "0.10.1", ... "2.3", "2.4"}
> default: "2.4" 
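
In application terms this would be set like any other Streams config. A sketch
using the literal key from the definition above (StreamsConfig also exposes it
as the constant BUILT_IN_METRICS_VERSION_CONFIG):

{code:java}
import java.util.Properties;

import org.apache.kafka.streams.StreamsConfig;

public class MetricsVersionConfig {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-app");         // placeholder
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "broker:9092"); // placeholder
        // Keep the pre-KIP-444 metric names during the migration grace period,
        // using one of the version values listed in the description above.
        props.put("built.in.metrics.version", "2.3");
        System.out.println(props);
    }
}
{code}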



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (KAFKA-8179) Incremental Rebalance Protocol for Kafka Consumer

2020-04-14 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8179?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-8179:
---
Fix Version/s: (was: 2.5.0)

> Incremental Rebalance Protocol for Kafka Consumer
> -
>
> Key: KAFKA-8179
> URL: https://issues.apache.org/jira/browse/KAFKA-8179
> Project: Kafka
>  Issue Type: Improvement
>  Components: consumer
>Reporter: Guozhang Wang
>Assignee: Guozhang Wang
>Priority: Major
> Fix For: 2.4.0
>
>
> Recently the Kafka community has been promoting cooperative rebalancing to 
> mitigate the pain points of the stop-the-world rebalancing protocol. This 
> ticket was created to initiate that idea in the Kafka consumer client, which 
> will be beneficial for heavy-stateful consumers such as Kafka Streams 
> applications.
> In short, the scope of this ticket includes reducing unnecessary rebalance 
> latency due to heavy partition migration, i.e. partitions being revoked and 
> re-assigned. This would make the built-in consumer assignors (range, 
> round-robin, etc.) aware of previously assigned partitions and sticky on a 
> best-effort basis.
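
On the consumer client this work surfaced as a pluggable assignor. A sketch of
opting into the cooperative protocol via the CooperativeStickyAssignor that
shipped with it (group, topic, and broker values are placeholders):

{code:java}
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.CooperativeStickyAssignor;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class CooperativeConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker:9092"); // placeholder
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "my-group");             // placeholder
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");
        // Cooperative rebalancing: only the partitions that actually move are
        // revoked, instead of the eager protocol's revoke-everything behavior.
        props.put(ConsumerConfig.PARTITION_ASSIGNMENT_STRATEGY_CONFIG,
                CooperativeStickyAssignor.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singleton("some-topic")); // placeholder
            // poll() as usual; the protocol change is transparent to application code.
        }
    }
}
{code}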



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-9831) Failing test: EosIntegrationTest.shouldNotViolateEosIfOneTaskFailsWithState[exactly_once_beta]

2020-04-14 Thread Matthias J. Sax (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17083707#comment-17083707
 ] 

Matthias J. Sax commented on KAFKA-9831:


Thanks [~ableegoldman]. Will investigate this in more detail now.

> Failing test: 
> EosIntegrationTest.shouldNotViolateEosIfOneTaskFailsWithState[exactly_once_beta]
> --
>
> Key: KAFKA-9831
> URL: https://issues.apache.org/jira/browse/KAFKA-9831
> Project: Kafka
>  Issue Type: Bug
>  Components: streams, unit tests
>Reporter: John Roesler
>Assignee: Matthias J. Sax
>Priority: Major
> Attachments: one.stdout.txt, two.stdout.txt
>
>
> I've seen this fail twice in a row on the same build, but with different 
> errors. Stacktraces follow; stdout is attached.
> One:
> {noformat}
> java.lang.AssertionError: Did not receive all 40 records from topic 
> singlePartitionOutputTopic within 6 ms
> Expected: is a value equal to or greater than <40>
>  but: <39> was less than <40>
>   at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20)
>   at 
> org.apache.kafka.streams.integration.utils.IntegrationTestUtils.lambda$waitUntilMinKeyValueRecordsReceived$1(IntegrationTestUtils.java:517)
>   at 
> org.apache.kafka.test.TestUtils.retryOnExceptionWithTimeout(TestUtils.java:415)
>   at 
> org.apache.kafka.test.TestUtils.retryOnExceptionWithTimeout(TestUtils.java:383)
>   at 
> org.apache.kafka.streams.integration.utils.IntegrationTestUtils.waitUntilMinKeyValueRecordsReceived(IntegrationTestUtils.java:513)
>   at 
> org.apache.kafka.streams.integration.utils.IntegrationTestUtils.waitUntilMinKeyValueRecordsReceived(IntegrationTestUtils.java:491)
>   at 
> org.apache.kafka.streams.integration.EosIntegrationTest.readResult(EosIntegrationTest.java:766)
>   at 
> org.apache.kafka.streams.integration.EosIntegrationTest.shouldNotViolateEosIfOneTaskFailsWithState(EosIntegrationTest.java:473)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
>   at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
>   at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
>   at org.junit.runners.Suite.runChild(Suite.java:128)
>   at org.junit.runners.Suite.runChild(Suite.java:27)
>   at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
>   at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
>   at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.runTestClass(JUnitTestClassExecutor.java:110)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:58)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor

[jira] [Updated] (KAFKA-9868) Flaky Test EOSUncleanShutdownIntegrationTest.shouldWorkWithUncleanShutdownWipeOutStateStore

2020-04-14 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9868?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-9868:
---
Component/s: unit tests

> Flaky Test 
> EOSUncleanShutdownIntegrationTest.shouldWorkWithUncleanShutdownWipeOutStateStore
> ---
>
> Key: KAFKA-9868
> URL: https://issues.apache.org/jira/browse/KAFKA-9868
> Project: Kafka
>  Issue Type: Bug
>  Components: streams, unit tests
>Reporter: Sophie Blee-Goldman
>Priority: Major
>  Labels: flaky-test
>
> h3. Error Message
> java.lang.AssertionError: Condition not met within timeout 15000. Expected 
> ERROR state but driver is on RUNNING



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-7544) Transient Failure: org.apache.kafka.streams.integration.EosIntegrationTest.shouldNotViolateEosIfOneTaskFails

2020-04-14 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7544?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax resolved KAFKA-7544.

Resolution: Duplicate

Closing this ticket as it links to the older, non-parameterized version of the 
test, in favor of KAFKA-9831.

> Transient Failure: 
> org.apache.kafka.streams.integration.EosIntegrationTest.shouldNotViolateEosIfOneTaskFails
> 
>
> Key: KAFKA-7544
> URL: https://issues.apache.org/jira/browse/KAFKA-7544
> Project: Kafka
>  Issue Type: Bug
>  Components: streams, unit tests
>Reporter: John Roesler
>Assignee: Matthias J. Sax
>Priority: Major
>
> Observed on Java 11: 
> [https://builds.apache.org/job/kafka-pr-jdk11-scala2.12/264/testReport/junit/org.apache.kafka.streams.integration/EosIntegrationTest/shouldNotViolateEosIfOneTaskFails/]
> at 
> [https://github.com/apache/kafka/pull/5795/commits/5bdcd0e023c6f406d585155399f6541bb6a9f9c2]
>  
> stacktrace:
> {noformat}
> java.lang.AssertionError: 
> Expected: <[KeyValue(0, 0), KeyValue(0, 1), KeyValue(0, 2), KeyValue(0, 3), 
> KeyValue(0, 4), KeyValue(0, 5), KeyValue(0, 6), KeyValue(0, 7), KeyValue(0, 
> 8), KeyValue(0, 9)]>
>  but: was <[KeyValue(0, 0), KeyValue(0, 1), KeyValue(0, 2), KeyValue(0, 
> 3), KeyValue(0, 4), KeyValue(0, 5), KeyValue(0, 6), KeyValue(0, 7), 
> KeyValue(0, 8), KeyValue(0, 9), KeyValue(0, 10), KeyValue(0, 11), KeyValue(0, 
> 12), KeyValue(0, 13), KeyValue(0, 14)]>
>   at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20)
>   at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:8)
>   at 
> org.apache.kafka.streams.integration.EosIntegrationTest.checkResultPerKey(EosIntegrationTest.java:218)
>   at 
> org.apache.kafka.streams.integration.EosIntegrationTest.shouldNotViolateEosIfOneTaskFails(EosIntegrationTest.java:360)
>   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.base/java.lang.reflect.Method.invoke(Method.java:566)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.runTestClass(JUnitTestClassExecutor.java:106)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:58)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:38)
>   at 
> org.gradle.api.internal.tasks.testing.junit.AbstractJUnitTestClassProcessor.processTestClass(AbstractJUnitTestClassProcessor.java:66)
>   at 
> org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
>   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.base/java.lang.reflect.Method.invoke(Method.java:566)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.internal.dispatch.ReflectionDi

[jira] [Commented] (KAFKA-9867) Log cleaner throws InvalidOffsetException for some partitions

2020-04-14 Thread Bradley Peterson (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17083661#comment-17083661
 ] 

Bradley Peterson commented on KAFKA-9867:
-

I haven't. I didn't see any related changes in 2.4.1, so I was waiting for 
2.5.0 to upgrade. But I'll try 2.4.1 if there's reason to believe it will help.

> Log cleaner throws InvalidOffsetException for some partitions
> -
>
> Key: KAFKA-9867
> URL: https://issues.apache.org/jira/browse/KAFKA-9867
> Project: Kafka
>  Issue Type: Bug
>  Components: log cleaner
>Affects Versions: 2.4.0
> Environment: AWS EC2 with data on EBS volumes.
>Reporter: Bradley Peterson
>Priority: Major
>
> About half the partitions for one topic are marked "uncleanable". This is 
> consistent across broker replicas – if a partition is uncleanable on one 
> broker, its replicas are also uncleanable.
> The log-cleaner log seems to suggest out-of-order offsets. We've seen corrupt 
> indexes before, so I removed the indexes from the affected segments and let 
> Kafka rebuild them, but it hit the same error.
> I don't know when the error first occurred, because we've restarted the 
> brokers and rotated logs, but it is possible the brokers crashed at some 
> point.
> How can I repair these partitions?
> {noformat}
> [2020-04-09 00:14:16,979] INFO Cleaner 0: Cleaning segment 223293473 in log 
> x-9 (largest timestamp Wed Nov 13 20:35:57 UTC 2019
> ) into 223293473, retaining deletes. (kafka.log.LogCleaner)
> [2020-04-09 00:14:29,486] INFO Cleaner 0: Cleaning segment 226762315 in log 
> x-9 (largest timestamp Wed Nov 06 18:17:10 UTC 2019
> ) into 223293473, retaining deletes. (kafka.log.LogCleaner)
> [2020-04-09 00:14:29,502] WARN [kafka-log-cleaner-thread-0]: Unexpected 
> exception thrown when cleaning log Log(/var/lib/kafka/x
> -9). Marking its partition (x-9) as uncleanable (kafka.log.LogCleaner)
> org.apache.kafka.common.errors.InvalidOffsetException: Attempt to append an 
> offset (226765178) to position 941 no larger than the last offset appended 
> (228774155) to /var/lib/kafka/x-9/000223293473.index.cleaned.
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-9831) Failing test: EosIntegrationTest.shouldNotViolateEosIfOneTaskFailsWithState[exactly_once_beta]

2020-04-14 Thread Matthias J. Sax (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17083660#comment-17083660
 ] 

Matthias J. Sax commented on KAFKA-9831:


[https://builds.apache.org/job/kafka-pr-jdk11-scala2.13/5765/testReport/junit/org.apache.kafka.streams.integration/EosIntegrationTest/shouldNotViolateEosIfOneTaskFailsWithState_exactly_once_beta_/]

and

[https://builds.apache.org/job/kafka-pr-jdk8-scala2.12/1747/testReport/junit/org.apache.kafka.streams.integration/EosIntegrationTest/shouldNotViolateEosIfOneTaskFailsWithState_exactly_once_/]

> Failing test: 
> EosIntegrationTest.shouldNotViolateEosIfOneTaskFailsWithState[exactly_once_beta]
> --
>
> Key: KAFKA-9831
> URL: https://issues.apache.org/jira/browse/KAFKA-9831
> Project: Kafka
>  Issue Type: Bug
>  Components: streams, unit tests
>Reporter: John Roesler
>Assignee: Matthias J. Sax
>Priority: Major
> Attachments: one.stdout.txt, two.stdout.txt
>
>
> I've seen this fail twice in a row on the same build, but with different 
> errors. Stacktraces follow; stdout is attached.
> One:
> {noformat}
> java.lang.AssertionError: Did not receive all 40 records from topic 
> singlePartitionOutputTopic within 6 ms
> Expected: is a value equal to or greater than <40>
>  but: <39> was less than <40>
>   at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20)
>   at 
> org.apache.kafka.streams.integration.utils.IntegrationTestUtils.lambda$waitUntilMinKeyValueRecordsReceived$1(IntegrationTestUtils.java:517)
>   at 
> org.apache.kafka.test.TestUtils.retryOnExceptionWithTimeout(TestUtils.java:415)
>   at 
> org.apache.kafka.test.TestUtils.retryOnExceptionWithTimeout(TestUtils.java:383)
>   at 
> org.apache.kafka.streams.integration.utils.IntegrationTestUtils.waitUntilMinKeyValueRecordsReceived(IntegrationTestUtils.java:513)
>   at 
> org.apache.kafka.streams.integration.utils.IntegrationTestUtils.waitUntilMinKeyValueRecordsReceived(IntegrationTestUtils.java:491)
>   at 
> org.apache.kafka.streams.integration.EosIntegrationTest.readResult(EosIntegrationTest.java:766)
>   at 
> org.apache.kafka.streams.integration.EosIntegrationTest.shouldNotViolateEosIfOneTaskFailsWithState(EosIntegrationTest.java:473)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
>   at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
>   at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
>   at org.junit.runners.Suite.runChild(Suite.java:128)
>   at org.junit.runners.Suite.runChild(Suite.java:27)
>   at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
>   at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
>   at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:413)

[jira] [Created] (KAFKA-9868) Flaky Test EOSUncleanShutdownIntegrationTest.shouldWorkWithUncleanShutdownWipeOutStateStore

2020-04-14 Thread Sophie Blee-Goldman (Jira)
Sophie Blee-Goldman created KAFKA-9868:
--

 Summary: Flaky Test 
EOSUncleanShutdownIntegrationTest.shouldWorkWithUncleanShutdownWipeOutStateStore
 Key: KAFKA-9868
 URL: https://issues.apache.org/jira/browse/KAFKA-9868
 Project: Kafka
  Issue Type: Bug
  Components: streams
Reporter: Sophie Blee-Goldman


h3. Error Message

java.lang.AssertionError: Condition not met within timeout 15000. Expected 
ERROR state but driver is on RUNNING



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-9867) Log cleaner throws InvalidOffsetException for some partitions

2020-04-14 Thread Ismael Juma (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17083647#comment-17083647
 ] 

Ismael Juma commented on KAFKA-9867:


Thanks for the report. Have you tried 2.4.1?

> Log cleaner throws InvalidOffsetException for some partitions
> -
>
> Key: KAFKA-9867
> URL: https://issues.apache.org/jira/browse/KAFKA-9867
> Project: Kafka
>  Issue Type: Bug
>  Components: log cleaner
>Affects Versions: 2.4.0
> Environment: AWS EC2 with data on EBS volumes.
>Reporter: Bradley Peterson
>Priority: Major
>
> About half the partitions for one topic are marked "uncleanable". This is 
> consistent across broker replicas – if a partition is uncleanable on one 
> broker, its replicas are also uncleanable.
> The log-cleaner log seems to suggest out-of-order offsets. We've seen corrupt 
> indexes before, so I removed the indexes from the affected segments and let 
> Kafka rebuild them, but it hit the same error.
> I don't know when the error first occurred, because we've restarted the 
> brokers and rotated logs, but it is possible the brokers crashed at some 
> point.
> How can I repair these partitions?
> {noformat}
> [2020-04-09 00:14:16,979] INFO Cleaner 0: Cleaning segment 223293473 in log 
> x-9 (largest timestamp Wed Nov 13 20:35:57 UTC 2019
> ) into 223293473, retaining deletes. (kafka.log.LogCleaner)
> [2020-04-09 00:14:29,486] INFO Cleaner 0: Cleaning segment 226762315 in log 
> x-9 (largest timestamp Wed Nov 06 18:17:10 UTC 2019
> ) into 223293473, retaining deletes. (kafka.log.LogCleaner)
> [2020-04-09 00:14:29,502] WARN [kafka-log-cleaner-thread-0]: Unexpected 
> exception thrown when cleaning log Log(/var/lib/kafka/x
> -9). Marking its partition (x-9) as uncleanable (kafka.log.LogCleaner)
> org.apache.kafka.common.errors.InvalidOffsetException: Attempt to append an 
> offset (226765178) to position 941 no larger than the last offset appended 
> (228774155) to /var/lib/kafka/x-9/000223293473.index.cleaned.
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-9831) Failing test: EosIntegrationTest.shouldNotViolateEosIfOneTaskFailsWithState[exactly_once_beta]

2020-04-14 Thread Sophie Blee-Goldman (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17083598#comment-17083598
 ] 

Sophie Blee-Goldman commented on KAFKA-9831:


I don't think either of the bugs exposed by soak would cause this test to fail, 
at least not in the way it did – it seems EOS is violated in at least some 
failures, as the "Expected" set of records is shorter than the "Actual" one.

> Failing test: 
> EosIntegrationTest.shouldNotViolateEosIfOneTaskFailsWithState[exactly_once_beta]
> --
>
> Key: KAFKA-9831
> URL: https://issues.apache.org/jira/browse/KAFKA-9831
> Project: Kafka
>  Issue Type: Bug
>  Components: streams, unit tests
>Reporter: John Roesler
>Assignee: Matthias J. Sax
>Priority: Major
> Attachments: one.stdout.txt, two.stdout.txt
>
>
> I've seen this fail twice in a row on the same build, but with different 
> errors. Stacktraces follow; stdout is attached.
> One:
> {noformat}
> java.lang.AssertionError: Did not receive all 40 records from topic 
> singlePartitionOutputTopic within 6 ms
> Expected: is a value equal to or greater than <40>
>  but: <39> was less than <40>
>   at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20)
>   at 
> org.apache.kafka.streams.integration.utils.IntegrationTestUtils.lambda$waitUntilMinKeyValueRecordsReceived$1(IntegrationTestUtils.java:517)
>   at 
> org.apache.kafka.test.TestUtils.retryOnExceptionWithTimeout(TestUtils.java:415)
>   at 
> org.apache.kafka.test.TestUtils.retryOnExceptionWithTimeout(TestUtils.java:383)
>   at 
> org.apache.kafka.streams.integration.utils.IntegrationTestUtils.waitUntilMinKeyValueRecordsReceived(IntegrationTestUtils.java:513)
>   at 
> org.apache.kafka.streams.integration.utils.IntegrationTestUtils.waitUntilMinKeyValueRecordsReceived(IntegrationTestUtils.java:491)
>   at 
> org.apache.kafka.streams.integration.EosIntegrationTest.readResult(EosIntegrationTest.java:766)
>   at 
> org.apache.kafka.streams.integration.EosIntegrationTest.shouldNotViolateEosIfOneTaskFailsWithState(EosIntegrationTest.java:473)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
>   at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
>   at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
>   at org.junit.runners.Suite.runChild(Suite.java:128)
>   at org.junit.runners.Suite.runChild(Suite.java:27)
>   at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
>   at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
>   at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.runTestClass(JUnitTestClassExecutor.java:110)
>   at 
> org.gradle.api.inter

[jira] [Created] (KAFKA-9867) Log cleaner throws InvalidOffsetException for some partitions

2020-04-14 Thread Bradley (Jira)
Bradley created KAFKA-9867:
--

 Summary: Log cleaner throws InvalidOffsetException for some 
partitions
 Key: KAFKA-9867
 URL: https://issues.apache.org/jira/browse/KAFKA-9867
 Project: Kafka
  Issue Type: Bug
  Components: log cleaner
Affects Versions: 2.4.0
 Environment: AWS EC2 with data on EBS volumes.
Reporter: Bradley


About half the partitions for one topic are marked "uncleanable". This is 
consistent across broker replicas – if a partition is uncleanable on one 
broker, its replicas are also uncleanable.

The log-cleaner log seems to suggest out-of-order offsets. We've seen corrupt 
indexes before, so I removed the indexes from the affected segments and let 
Kafka rebuild them, but it hit the same error.

I don't know when the error first occurred, because we've restarted the brokers 
and rotated logs, but it is possible the brokers crashed at some point.

How can I repair these partitions?

{noformat}
[2020-04-09 00:14:16,979] INFO Cleaner 0: Cleaning segment 223293473 in log 
x-9 (largest timestamp Wed Nov 13 20:35:57 UTC 2019
) into 223293473, retaining deletes. (kafka.log.LogCleaner)
[2020-04-09 00:14:29,486] INFO Cleaner 0: Cleaning segment 226762315 in log 
x-9 (largest timestamp Wed Nov 06 18:17:10 UTC 2019
) into 223293473, retaining deletes. (kafka.log.LogCleaner)
[2020-04-09 00:14:29,502] WARN [kafka-log-cleaner-thread-0]: Unexpected 
exception thrown when cleaning log Log(/var/lib/kafka/x
-9). Marking its partition (x-9) as uncleanable (kafka.log.LogCleaner)
org.apache.kafka.common.errors.InvalidOffsetException: Attempt to append an 
offset (226765178) to position 941 no larger than the last offset appended 
(228774155) to /var/lib/kafka/x-9/000223293473.index.cleaned.
{noformat}
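
For context, the failure comes from the offset index's append-only invariant: 
each appended offset must be strictly larger than the last one. A minimal 
sketch of that invariant, with illustrative names rather than the broker's 
actual index code:
{code:java}
// Hedged sketch of an append-only offset index; not Kafka's actual
// OffsetIndex implementation.
final class OffsetIndexSketch {
    private long lastOffset = -1L;

    void append(long offset, int position) {
        if (offset <= lastOffset) {
            // This is the condition behind the InvalidOffsetException above.
            throw new IllegalStateException(String.format(
                "Attempt to append an offset (%d) to position %d no larger than"
                    + " the last offset appended (%d)", offset, position, lastOffset));
        }
        lastOffset = offset;
        // ... write the (offset, position) entry to the index file
    }
}
{code}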



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-9866) Do not attempt to elect preferred leader replicas which are outside ISR

2020-04-14 Thread Stanislav Kozlovski (Jira)
Stanislav Kozlovski created KAFKA-9866:
--

 Summary: Do not attempt to elect preferred leader replicas which 
are outside ISR
 Key: KAFKA-9866
 URL: https://issues.apache.org/jira/browse/KAFKA-9866
 Project: Kafka
  Issue Type: Improvement
Reporter: Stanislav Kozlovski


The controller automatically triggers a preferred leader election every N 
minutes. It tries to elect the preferred leader of every partition without 
performing basic checks, such as whether the preferred leader is in sync.

This leads to a multitude of errors which cause confusion:
{code:java}
April 14th 2020, 17:01:11.015   [Controller id=0] Partition TOPIC-9 failed to 
complete preferred replica leader election to 1. Leader is still 0{code}
{code:java}
April 14th 2020, 17:01:11.002   [Controller id=0] Error completing replica 
leader election (PREFERRED) for partition TOPIC-9
kafka.common.StateChangeFailedException: Failed to elect leader for partition 
TOPIC-9 under strategy PreferredReplicaPartitionLeaderElectionStrategy {code}
It would be better if the Controller filtered out such elections, did not 
attempt them at all, and perhaps logged an aggregate INFO-level message 
instead.
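
A minimal sketch of the kind of pre-check that could avoid these doomed 
elections; all names and data structures below are illustrative, not the 
controller's actual code:
{code:java}
import java.util.*;

// Hedged sketch: only attempt a preferred-leader election when the preferred
// replica (the first assigned replica) is alive and in the ISR; count the
// skipped partitions for one aggregate INFO-level log line.
public class PreferredElectionFilterSketch {
    static List<String> electable(Map<String, List<Integer>> assignment,
                                  Map<String, Set<Integer>> isr,
                                  Set<Integer> liveBrokers) {
        List<String> result = new ArrayList<>();
        int skipped = 0;
        for (Map.Entry<String, List<Integer>> e : assignment.entrySet()) {
            int preferred = e.getValue().get(0);
            boolean inSync = isr.getOrDefault(e.getKey(), Set.of()).contains(preferred);
            if (liveBrokers.contains(preferred) && inSync) {
                result.add(e.getKey());
            } else {
                skipped++; // would previously have failed with StateChangeFailedException
            }
        }
        if (skipped > 0) {
            System.out.printf("Skipped preferred leader election for %d partition(s)"
                + " whose preferred replica is offline or out of sync%n", skipped);
        }
        return result;
    }

    public static void main(String[] args) {
        // TOPIC-9: preferred replica 1 is not in the ISR, so it is skipped.
        System.out.println(electable(
            Map.of("TOPIC-9", List.of(1, 0)),
            Map.of("TOPIC-9", Set.of(0)),
            Set.of(0, 1)));
    }
}
{code}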



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-9862) Enable --if-exists and --if-not-exists for AdminClient in TopicCommand

2020-04-14 Thread Cheng Tan (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17083525#comment-17083525
 ] 

Cheng Tan commented on KAFKA-9862:
--

Thanks. I marked it critical because it blocks the --zookeeper flag removal; 
some system tests exercise this functionality via ZooKeeper. Just changed it.

> Enable --if-exists and --if-not-exists for AdminClient in TopicCommand
> --
>
> Key: KAFKA-9862
> URL: https://issues.apache.org/jira/browse/KAFKA-9862
> Project: Kafka
>  Issue Type: New Feature
>Reporter: Cheng Tan
>Assignee: Cheng Tan
>Priority: Critical
>
> In *kafka-topic.sh*, we expect to use --if-exists to ensure that the topic to 
> create or change exists. Similarly, we expect to use --if-not-exists to 
> ensure that the topic to create or change does not exist. Currently, only 
> *ZookeeperTopicService* supports these two options, and we want to introduce 
> them to *AdminClientTopicService*.
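
A minimal sketch of how AdminClientTopicService could honour --if-not-exists 
on create, using the standard AdminClient error types; the flag handling and 
topic name below are illustrative:
{code:java}
import java.util.Collections;
import java.util.Properties;
import java.util.concurrent.ExecutionException;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;
import org.apache.kafka.common.errors.TopicExistsException;

public class IfNotExistsSketch {
    public static void main(String[] args) throws Exception {
        boolean ifNotExists = true; // would come from the --if-not-exists flag
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (Admin admin = Admin.create(props)) {
            try {
                admin.createTopics(
                    Collections.singleton(new NewTopic("my-topic", 1, (short) 1))
                ).all().get();
            } catch (ExecutionException e) {
                // Only suppress "topic already exists" when --if-not-exists was given.
                if (!(e.getCause() instanceof TopicExistsException) || !ifNotExists) {
                    throw e;
                }
            }
        }
    }
}
{code}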



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (KAFKA-9862) Enable --if-exists and --if-not-exists for AdminClient in TopicCommand

2020-04-14 Thread Cheng Tan (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9862?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cheng Tan updated KAFKA-9862:
-
Priority: Minor  (was: Critical)

> Enable --if-exists and --if-not-exists for AdminClient in TopicCommand
> --
>
> Key: KAFKA-9862
> URL: https://issues.apache.org/jira/browse/KAFKA-9862
> Project: Kafka
>  Issue Type: New Feature
>Reporter: Cheng Tan
>Assignee: Cheng Tan
>Priority: Minor
>
> In *kafka-topic.sh*, we expect to use --if-exists to ensure that the topic to 
> create or change exists. Similarly, we expect to use --if-not-exists to 
> ensure that the topic to create or change does not exist. Currently, only 
> *ZookeeperTopicService* supports these two options and we want to introduce 
> them to *AdminClientTopicService.*



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-9539) Add leader epoch in StopReplicaRequest

2020-04-14 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9539?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17083520#comment-17083520
 ] 

ASF GitHub Bot commented on KAFKA-9539:
---

hachikuji commented on pull request #8257: KAFKA-9539; Add leader epoch in 
StopReplicaRequest (KIP-570)
URL: https://github.com/apache/kafka/pull/8257
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Add leader epoch in StopReplicaRequest
> --
>
> Key: KAFKA-9539
> URL: https://issues.apache.org/jira/browse/KAFKA-9539
> Project: Kafka
>  Issue Type: Improvement
>Reporter: David Jacot
>Assignee: David Jacot
>Priority: Major
>
> Unlike the LeaderAndIsrRequest, the StopReplicaRequest does not include the 
> leader epoch which makes it vulnerable to reordering. This KIP proposes to 
> add the leader epoch for each partition in the StopReplicaRequest and the 
> broker will verify the epoch before proceeding with the StopReplicaRequest.
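
A minimal sketch of the fencing rule described above; the data structures are 
illustrative, not the broker's actual state:
{code:java}
import java.util.HashMap;
import java.util.Map;

// Hedged sketch: drop a StopReplicaRequest whose per-partition leader epoch
// is older than the epoch the broker has already observed, so a reordered
// request cannot stop a replica that a newer LeaderAndIsrRequest revived.
public class StopReplicaFencingSketch {
    private final Map<String, Integer> observedEpochs = new HashMap<>();

    boolean shouldProcess(String topicPartition, int requestLeaderEpoch) {
        int current = observedEpochs.getOrDefault(topicPartition, -1);
        if (requestLeaderEpoch < current) {
            return false; // stale, reordered request: ignore it
        }
        observedEpochs.put(topicPartition, requestLeaderEpoch);
        return true;
    }
}
{code}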



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-9539) Add leader epoch in StopReplicaRequest

2020-04-14 Thread Jason Gustafson (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9539?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson resolved KAFKA-9539.

Fix Version/s: 2.6.0
   Resolution: Fixed

> Add leader epoch in StopReplicaRequest
> --
>
> Key: KAFKA-9539
> URL: https://issues.apache.org/jira/browse/KAFKA-9539
> Project: Kafka
>  Issue Type: Improvement
>Reporter: David Jacot
>Assignee: David Jacot
>Priority: Major
> Fix For: 2.6.0
>
>
> Unlike the LeaderAndIsrRequest, the StopReplicaRequest does not include the 
> leader epoch which makes it vulnerable to reordering. This KIP proposes to 
> add the leader epoch for each partition in the StopReplicaRequest and the 
> broker will verify the epoch before proceeding with the StopReplicaRequest.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-9858) CVE-2016-3189 Use-after-free vulnerability in bzip2recover in bzip2 1.0.6 allows remote attackers to cause a denial of service (crash) via a crafted bzip2 file, relate

2020-04-14 Thread Guozhang Wang (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17083482#comment-17083482
 ] 

Guozhang Wang commented on KAFKA-9858:
--

For rocksdbjni, I saw that at the moment even current master is still using 
bzip2 version 1.0.6, so 3189 and 12900 would still exist in the newest RocksDB 
version. I'd suggest you post in the RocksDB community and see if they have a 
better understanding of how to resolve this.

> CVE-2016-3189  Use-after-free vulnerability in bzip2recover in bzip2 1.0.6 
> allows remote attackers to cause a denial of service (crash) via a crafted 
> bzip2 file, related to block ends set to before the start of the block.
> -
>
> Key: KAFKA-9858
> URL: https://issues.apache.org/jira/browse/KAFKA-9858
> Project: Kafka
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.2.2, 2.3.1, 2.4.1
>Reporter: sihuanx
>Priority: Major
>
> I'm not sure whether CVE-2016-3189 affects Kafka 2.4.1 or not. This 
> vulnerability is related to rocksdbjni-5.18.3.jar, which is compiled with 
> *bzip2*.
> Is there any task or plan to fix it? 
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-9840) Consumer should not use OffsetForLeaderEpoch without current epoch validation

2020-04-14 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9840?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17083444#comment-17083444
 ] 

ASF GitHub Bot commented on KAFKA-9840:
---

abbccdda commented on pull request #8486: KAFKA-9840: Skip End Offset 
validation when the leader epoch is not reliable
URL: https://github.com/apache/kafka/pull/8486
 
 
   As title suggests.
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Consumer should not use OffsetForLeaderEpoch without current epoch validation
> -
>
> Key: KAFKA-9840
> URL: https://issues.apache.org/jira/browse/KAFKA-9840
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 2.4.1
>Reporter: Jason Gustafson
>Assignee: Boyang Chen
>Priority: Major
>
> We have observed a case where the consumer attempted to detect truncation 
> with the OffsetsForLeaderEpoch API against a broker which had become a 
> zombie. In this case, the last epoch known to the consumer was higher than 
> the last epoch known to the zombie broker, so the broker returned -1 as both 
> the end offset and epoch in the response. The consumer did not check for this 
> in the response, which resulted in the following message:
> {code}
> Truncation detected for partition topic-1 at offset 
> FetchPosition{offset=11859, offsetEpoch=Optional[46], 
> currentLeader=LeaderAndEpoch{leader=broker-host (id: 3 rack: null), 
> epoch=-1}}, resetting offset to the first offset known to diverge 
> FetchPosition{offset=-1, offsetEpoch=Optional[-1], 
> currentLeader=LeaderAndEpoch{broker-host (id: 3 rack: null), epoch=-1}} 
> (org.apache.kafka.clients.consumer.internals.SubscriptionState:414)
> {code}
> There are a couple of ways the consumer can handle this situation better. 
> First, the reason we did not detect the zombie broker is that we did not 
> include the current leader epoch in the OffsetForLeaderEpoch request. This 
> was likely because of KAFKA-9212. Following this patch, we would not 
> initialize the current leader epoch from metadata responses because there are 
> cases that we cannot rely on it. But if the client cannot rely on being able 
> to detect zombies, then the epoch validation is less useful anyway. So the 
> simple solution is to not bother with the validation unless we have a 
> reliable current leader epoch.
> Second, the consumer needs to check for the case when the returned offset and 
> epoch are not defined. In this case, we have to treat this as a normal 
> OffsetOutOfRange case and invoke the reset policy. 
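
A minimal sketch of the two checks described above, with illustrative types; 
the real logic lives in the consumer's offset-validation path:
{code:java}
import java.util.Optional;

// Hedged sketch: (1) skip validation when no reliable current leader epoch is
// known; (2) treat an undefined (-1) end offset/epoch in the response like an
// out-of-range offset and apply the reset policy, instead of "detecting"
// truncation. All names are illustrative.
final class EpochValidationSketch {
    static boolean shouldValidate(Optional<Integer> currentLeaderEpoch) {
        return currentLeaderEpoch.isPresent();
    }

    static void handleEndOffsetResponse(long endOffset, int leaderEpoch, long fetchOffset) {
        if (endOffset == -1L || leaderEpoch == -1) {
            resetToOffsetResetPolicy(); // zombie broker / undefined response
        } else if (endOffset < fetchOffset) {
            truncateTo(endOffset);      // genuine log divergence
        }
        // otherwise: position is valid, keep fetching
    }

    static void resetToOffsetResetPolicy() { /* apply auto.offset.reset */ }

    static void truncateTo(long offset) { /* rewind to the divergence point */ }
}
{code}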



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-9764) Deprecate Stream Simple benchmark

2020-04-14 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17083422#comment-17083422
 ] 

ASF GitHub Bot commented on KAFKA-9764:
---

mjsax commented on pull request #8353: KAFKA-9764: Remove stream simple 
benchmark suite
URL: https://github.com/apache/kafka/pull/8353
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Deprecate Stream Simple benchmark
> -
>
> Key: KAFKA-9764
> URL: https://issues.apache.org/jira/browse/KAFKA-9764
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Boyang Chen
>Assignee: Boyang Chen
>Priority: Major
>
> Over the years, this simple benchmark suite has become less and less 
> valuable. It is built on Jenkins infra, which cannot guarantee consistent 
> results from run to run, and most of the time it yields no insights either. 
> To avoid wasting developers' time testing performance against this poor 
> setup, we will remove the test suite, as it is no longer valuable to the 
> community.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (KAFKA-9865) Expose output topic names from TopologyTestDriver

2020-04-14 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9865?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-9865:
---
Issue Type: Improvement  (was: Bug)

> Expose output topic names from TopologyTestDriver
> -
>
> Key: KAFKA-9865
> URL: https://issues.apache.org/jira/browse/KAFKA-9865
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams-test-utils
>Affects Versions: 2.4.1
>Reporter: Andy Coates
>Assignee: Andy Coates
>Priority: Minor
>  Labels: kip
>
> Expose the output topic names from TopologyTestDriver, i.e. 
> `outputRecordsByTopic.keySet()`.
> This is useful to users of the test driver, as they can use it to determine 
> the names of all output topics, which can then be used to capture all output 
> of a topology without having to manually list the output topics.
>  
> KIP: 
> [https://cwiki.apache.org/confluence/display/KAFKA/KIP-594%3A+Expose+output+topic+names+from+TopologyTestDriver]
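
A usage sketch of what this enables; producedTopicNames() is the accessor name 
from the KIP draft and may differ in the final API:
{code:java}
import java.util.Properties;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.kafka.common.serialization.StringSerializer;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.TopologyTestDriver;

public class ProducedTopicsSketch {
    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();
        builder.stream("input").to("output-a"); // a real topology may fan out widely

        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "produced-topics-sketch");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "dummy:1234");

        try (TopologyTestDriver driver = new TopologyTestDriver(builder.build(), props)) {
            driver.createInputTopic("input", new StringSerializer(), new StringSerializer())
                  .pipeInput("key", "value");
            // Capture all output without hard-coding the topic names:
            for (String topic : driver.producedTopicNames()) {
                System.out.println(topic + " -> " + driver
                    .createOutputTopic(topic, new StringDeserializer(), new StringDeserializer())
                    .readKeyValuesToList());
            }
        }
    }
}
{code}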



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (KAFKA-9865) Expose output topic names from TopologyTestDriver

2020-04-14 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9865?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-9865:
---
Labels: kip  (was: )

> Expose output topic names from TopologyTestDriver
> -
>
> Key: KAFKA-9865
> URL: https://issues.apache.org/jira/browse/KAFKA-9865
> Project: Kafka
>  Issue Type: Bug
>  Components: streams-test-utils
>Affects Versions: 2.4.1
>Reporter: Andy Coates
>Assignee: Andy Coates
>Priority: Minor
>  Labels: kip
>
> Expose the output topic names from TopologyTestDriver, i.e. 
> `outputRecordsByTopic.keySet()`.
> This is useful to users of the test driver, as they can use it to determine 
> the names of all output topics, which can then be used to capture all output 
> of a topology without having to manually list the output topics.
>  
> KIP: 
> [https://cwiki.apache.org/confluence/display/KAFKA/KIP-594%3A+Expose+output+topic+names+from+TopologyTestDriver]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (KAFKA-9864) Avoid expensive QuotaViolationException usage

2020-04-14 Thread Lucas Bradstreet (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9864?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lucas Bradstreet reassigned KAFKA-9864:
---

Assignee: Lucas Bradstreet

> Avoid expensive QuotaViolationException usage
> -
>
> Key: KAFKA-9864
> URL: https://issues.apache.org/jira/browse/KAFKA-9864
> Project: Kafka
>  Issue Type: Improvement
>  Components: core
>Reporter: Lucas Bradstreet
>Assignee: Lucas Bradstreet
>Priority: Major
>
> QuotaViolationException generates stack traces and uses String.format in 
> exception generation. QuotaViolationException is used for control flow and 
> these costs add up even though the exception contents are ignored.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Issue Comment Deleted] (KAFKA-9864) Avoid expensive QuotaViolationException usage

2020-04-14 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9864?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-9864:
---
Comment: was deleted

(was: big-andy-coates commented on pull request #8483: fixes: 
https://issues.apache.org/jira/browse/KAFKA-9864
URL: https://github.com/apache/kafka/pull/8483
 
 
   fixes: https://issues.apache.org/jira/browse/KAFKA-9864
   
   This commit exposes the names of the topics the topology produced output to 
during its run.
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
)

> Avoid expensive QuotaViolationException usage
> -
>
> Key: KAFKA-9864
> URL: https://issues.apache.org/jira/browse/KAFKA-9864
> Project: Kafka
>  Issue Type: Improvement
>  Components: core
>Reporter: Lucas Bradstreet
>Priority: Major
>
> QuotaViolationException generates stack traces and uses String.format in 
> exception generation. QuotaViolationException is used for control flow and 
> these costs add up even though the exception contents are ignored.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (KAFKA-9865) Expose output topic names from TopologyTestDriver

2020-04-14 Thread Andy Coates (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9865?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andy Coates updated KAFKA-9865:
---
Description: 
Expose the output topic names from TopologyTestDriver, i.e. 
`outputRecordsByTopic.keySet()`.

This is useful to users of the test driver, as they can use it to determine the 
names of all output topics, which can then be used to capture all output of a 
topology without having to manually list the output topics.

 

KIP: 
[https://cwiki.apache.org/confluence/display/KAFKA/KIP-594%3A+Expose+output+topic+names+from+TopologyTestDriver]

  was:
Expose the output topic names from TopologyTestDriver, i.e. 
`outputRecordsByTopic.keySet()`.

This is useful to users of the test driver, as they can use it to determine the 
names of all output topics, which can then be used to capture all output of a 
topology without having to manually list the output topics.


> Expose output topic names from TopologyTestDriver
> -
>
> Key: KAFKA-9865
> URL: https://issues.apache.org/jira/browse/KAFKA-9865
> Project: Kafka
>  Issue Type: Bug
>  Components: streams-test-utils
>Affects Versions: 2.4.1
>Reporter: Andy Coates
>Assignee: Andy Coates
>Priority: Minor
>
> Expose the output topic names from TopologyTestDriver, i.e. 
> `outputRecordsByTopic.keySet()`.
> This is useful to users of the test driver, as they can use it to determine 
> the names of all output topics, which can then be used to capture all output 
> of a topology without having to manually list the output topics.
>  
> KIP: 
> [https://cwiki.apache.org/confluence/display/KAFKA/KIP-594%3A+Expose+output+topic+names+from+TopologyTestDriver]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (KAFKA-9865) Expose output topic names from TopologyTestDriver

2020-04-14 Thread Andy Coates (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9865?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andy Coates reassigned KAFKA-9865:
--

Assignee: Andy Coates

> Expose output topic names from TopologyTestDriver
> -
>
> Key: KAFKA-9865
> URL: https://issues.apache.org/jira/browse/KAFKA-9865
> Project: Kafka
>  Issue Type: Bug
>  Components: streams-test-utils
>Affects Versions: 2.4.1
>Reporter: Andy Coates
>Assignee: Andy Coates
>Priority: Minor
>
> Expose the output topic names from TopologyTestDriver, i.e. 
> `outputRecordsByTopic.keySet()`.
> This is useful to users of the test driver, as they can use it to determine 
> the names of all output topics, which can then be used to capture all output 
> of a topology without having to manually list the output topics.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (KAFKA-9864) Avoid expensive QuotaViolationException usage

2020-04-14 Thread Andy Coates (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9864?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andy Coates reassigned KAFKA-9864:
--

Assignee: (was: Andy Coates)

> Avoid expensive QuotaViolationException usage
> -
>
> Key: KAFKA-9864
> URL: https://issues.apache.org/jira/browse/KAFKA-9864
> Project: Kafka
>  Issue Type: Improvement
>  Components: core
>Reporter: Lucas Bradstreet
>Priority: Major
>
> QuotaViolationException generates stack traces and uses String.format in 
> exception generation. QuotaViolationException is used for control flow and 
> these costs add up even though the exception contents are ignored.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Issue Comment Deleted] (KAFKA-9864) Avoid expensive QuotaViolationException usage

2020-04-14 Thread Andy Coates (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9864?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andy Coates updated KAFKA-9864:
---
Comment: was deleted

(was: Patch available:

[https://github.com/apache/kafka/pull/8483])

> Avoid expensive QuotaViolationException usage
> -
>
> Key: KAFKA-9864
> URL: https://issues.apache.org/jira/browse/KAFKA-9864
> Project: Kafka
>  Issue Type: Improvement
>  Components: core
>Reporter: Lucas Bradstreet
>Assignee: Andy Coates
>Priority: Major
>
> QuotaViolationException generates stack traces and uses String.format in 
> exception generation. QuotaViolationException is used for control flow and 
> these costs add up even though the exception contents are ignored.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-9864) Avoid expensive QuotaViolationException usage

2020-04-14 Thread Lucas Bradstreet (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17083308#comment-17083308
 ] 

Lucas Bradstreet commented on KAFKA-9864:
-

The PR is available at [https://github.com/apache/kafka/pull/8477/files]. The 
above messages are referencing the wrong Jira.

> Avoid expensive QuotaViolationException usage
> -
>
> Key: KAFKA-9864
> URL: https://issues.apache.org/jira/browse/KAFKA-9864
> Project: Kafka
>  Issue Type: Improvement
>  Components: core
>Reporter: Lucas Bradstreet
>Assignee: Andy Coates
>Priority: Major
>
> QuotaViolationException generates stack traces and uses String.format in 
> exception generation. QuotaViolationException is used for control flow and 
> these costs add up even though the exception contents are ignored.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-9864) Avoid expensive QuotaViolationException usage

2020-04-14 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17083300#comment-17083300
 ] 

ASF GitHub Bot commented on KAFKA-9864:
---

big-andy-coates commented on pull request #8483: fixes: 
https://issues.apache.org/jira/browse/KAFKA-9864
URL: https://github.com/apache/kafka/pull/8483
 
 
   fixes: https://issues.apache.org/jira/browse/KAFKA-9864
   
   This commit exposes the names of the topics the topology produced output to 
during its run.
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Avoid expensive QuotaViolationException usage
> -
>
> Key: KAFKA-9864
> URL: https://issues.apache.org/jira/browse/KAFKA-9864
> Project: Kafka
>  Issue Type: Improvement
>  Components: core
>Reporter: Lucas Bradstreet
>Assignee: Andy Coates
>Priority: Major
>
> QuotaViolationException generates stack traces and uses String.format in 
> exception generation. QuotaViolationException is used for control flow and 
> these costs add up even though the exception contents are ignored.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (KAFKA-9864) Avoid expensive QuotaViolationException usage

2020-04-14 Thread Andy Coates (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9864?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andy Coates reassigned KAFKA-9864:
--

Assignee: Andy Coates

> Avoid expensive QuotaViolationException usage
> -
>
> Key: KAFKA-9864
> URL: https://issues.apache.org/jira/browse/KAFKA-9864
> Project: Kafka
>  Issue Type: Improvement
>  Components: core
>Reporter: Lucas Bradstreet
>Assignee: Andy Coates
>Priority: Major
>
> QuotaViolationException generates stack traces and uses String.format in 
> exception generation. QuotaViolationException is used for control flow and 
> these costs add up even though the exception contents are ignored.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-9865) Expose output topic names from TopologyTestDriver

2020-04-14 Thread Andy Coates (Jira)
Andy Coates created KAFKA-9865:
--

 Summary: Expose output topic names from TopologyTestDriver
 Key: KAFKA-9865
 URL: https://issues.apache.org/jira/browse/KAFKA-9865
 Project: Kafka
  Issue Type: Bug
  Components: streams-test-utils
Affects Versions: 2.4.1
Reporter: Andy Coates


Expose the output topic names from TopologyTestDriver, i.e. 
`outputRecordsByTopic.keySet()`.

This is useful to users of the test driver, as they can use it to determine the 
names of all output topics, which can then be used to capture all output of a 
topology without having to manually list the output topics.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-9864) Avoid expensive QuotaViolationException usage

2020-04-14 Thread Lucas Bradstreet (Jira)
Lucas Bradstreet created KAFKA-9864:
---

 Summary: Avoid expensive QuotaViolationException usage
 Key: KAFKA-9864
 URL: https://issues.apache.org/jira/browse/KAFKA-9864
 Project: Kafka
  Issue Type: Improvement
  Components: core
Reporter: Lucas Bradstreet


QuotaViolationException generates stack traces and uses String.format in 
exception generation. QuotaViolationException is used for control flow and 
these costs add up even though the exception contents are ignored.
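
A minimal sketch of the standard JVM remedy: suppress stack-trace capture at 
construction and defer the String.format to getMessage(), so both costs vanish 
when the exception contents are ignored. This is a hedged sketch, not the 
actual Kafka patch, and the field names are illustrative:
{code:java}
// Control-flow exception that skips fillInStackTrace()
// (writableStackTrace = false) and formats its message lazily.
public class CheapQuotaViolationException extends RuntimeException {
    private final String metricName;
    private final double value;
    private final double bound;

    public CheapQuotaViolationException(String metricName, double value, double bound) {
        // message = null, cause = null, no suppression, no writable stack trace
        super(null, null, false, false);
        this.metricName = metricName;
        this.value = value;
        this.bound = bound;
    }

    @Override
    public String getMessage() {
        // String.format only runs if someone actually reads the message.
        return String.format("'%s' violated quota. Actual: %f, quota: %f",
            metricName, value, bound);
    }
}
{code}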



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (KAFKA-9863) update the deprecated --zookeeper option in the documentation into --bootstrap-server

2020-04-14 Thread Luke Chen (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9863?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Luke Chen updated KAFKA-9863:
-
Description: 
Since v2.2.0, the --zookeeper option has been deprecated, because Kafka can 
connect directly to brokers with --bootstrap-server (KIP-377). But in the 
official documentation, many example commands still use --zookeeper instead 
of --bootstrap-server. If you follow those commands, you'll get this warning, 
which is not good.
{code:java}
Warning: --zookeeper is deprecated and will be removed in a future version of 
Kafka.
Use --bootstrap-server instead to specify a broker to connect to.{code}

  was:
Since v2.2.0, the --zookeeper option has been deprecated, because Kafka can 
connect directly to brokers with {{--bootstrap-server}} (KIP-377). But in the 
official documentation, many example commands still use --zookeeper instead 
of --bootstrap-server. If you follow those commands, you'll get this warning, 
which is not good.
{code:java}
Warning: --zookeeper is deprecated and will be removed in a future version of 
Kafka.
Use --bootstrap-server instead to specify a broker to connect to.{code}


> update the deprecated --zookeeper option in the documentation into 
> --bootstrap-server
> -
>
> Key: KAFKA-9863
> URL: https://issues.apache.org/jira/browse/KAFKA-9863
> Project: Kafka
>  Issue Type: Bug
>  Components: docs, documentation
>Affects Versions: 2.4.1
>Reporter: Luke Chen
>Assignee: Luke Chen
>Priority: Major
>
> Since v2.2.0, the --zookeeper option has been deprecated, because Kafka can 
> connect directly to brokers with --bootstrap-server (KIP-377). But in the 
> official documentation, many example commands still use --zookeeper 
> instead of --bootstrap-server. If you follow those commands, you'll get 
> this warning, which is not good.
> {code:java}
> Warning: --zookeeper is deprecated and will be removed in a future version of 
> Kafka.
> Use --bootstrap-server instead to specify a broker to connect to.{code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-9863) update the deprecated --zookeeper option in the documentation into --bootstrap-server

2020-04-14 Thread Luke Chen (Jira)
Luke Chen created KAFKA-9863:


 Summary: update the deprecated --zookeeper option in the 
documentation into --bootstrap-server
 Key: KAFKA-9863
 URL: https://issues.apache.org/jira/browse/KAFKA-9863
 Project: Kafka
  Issue Type: Bug
  Components: docs, documentation
Affects Versions: 2.4.1
Reporter: Luke Chen
Assignee: Luke Chen


Since v2.2.0, the --zookeeper option has been deprecated, because Kafka can 
connect directly to brokers with {{--bootstrap-server}} (KIP-377). But in the 
official documentation, many example commands still use --zookeeper instead 
of --bootstrap-server. If you follow those commands, you'll get this warning, 
which is not good.
{code:java}
Warning: --zookeeper is deprecated and will be removed in a future version of 
Kafka.
Use --bootstrap-server instead to specify a broker to connect to.{code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-9034) kafka-run-class.sh will fail if JAVA_HOME has space

2020-04-14 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9034?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17083001#comment-17083001
 ] 

ASF GitHub Bot commented on KAFKA-9034:
---

sebwills commented on pull request #8481: KAFKA-9034 Tolerate spaces in 
$JAVA_HOME
URL: https://github.com/apache/kafka/pull/8481
 
 
   This allows kafka tools to work on Cygwin where $JAVA_HOME typically 
contains a space (e.g. "C:\Program Files\Java\jdkXXX")
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> kafka-run-class.sh will fail if JAVA_HOME has space
> ---
>
> Key: KAFKA-9034
> URL: https://issues.apache.org/jira/browse/KAFKA-9034
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
> Environment: macOS
>Reporter: Fenjin Wang
>Assignee: Manasvi Gupta
>Priority: Minor
>  Labels: easyfix, newbie
>
> If JAVA_HOME is set to IntelliJ's bundled Java, bin/zookeeper-server-start.sh 
> can't work because the path has a space in it.
>  
> {quote}export JAVA_HOME="/Applications/IntelliJ 
> IDEA.app/Contents/jbr/Contents/Home/"
> {quote}
>  
> We can fix this by quoting "$JAVA" in the shell script, according to: 
> [https://stackoverflow.com/a/7740746/1203241]
>  
> I can send a PR if you like.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-3346) Rename "Mode" to "SslMode"

2020-04-14 Thread Chia-Ping Tsai (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-3346?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17082977#comment-17082977
 ] 

Chia-Ping Tsai commented on KAFKA-3346:
---

Is this still available? ConnectionMode sounds like a good name to me, as this 
[commit|https://github.com/apache/kafka/commit/ca0c071c108c9fd31a759e1cd1c4f89bdc5ac47e]
 mentioned "Connection mode" (see below):

{quote}
Connection mode for SSL and SASL connections.
{quote}

> Rename "Mode" to "SslMode"
> --
>
> Key: KAFKA-3346
> URL: https://issues.apache.org/jira/browse/KAFKA-3346
> Project: Kafka
>  Issue Type: Bug
>  Components: security
>Reporter: Gwen Shapira
>Priority: Major
>
> In the channel builders, the Mode enum is undocumented, so it is unclear that 
> it is used to signify whether the connection is for an SSL client or an SSL 
> server.
> I suggest renaming it to SslMode (although adding documentation would be fine 
> too, if people object to the rename).
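
A minimal sketch of what the documented (and possibly renamed) enum could look 
like; it assumes the existing enum's CLIENT/SERVER members, and the javadoc 
wording is illustrative:
{code:java}
/**
 * Connection mode for SSL and SASL connections: whether this end of the
 * channel acts as the client or the server side of the handshake.
 */
public enum SslMode {
    CLIENT, // this channel initiates the handshake
    SERVER  // this channel accepts the handshake
}
{code}
(ConnectionMode, as suggested in the comment above, would work equally well as 
the name.)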



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-9862) Enable --if-exists and --if-not-exists for AdminClient in TopicCommand

2020-04-14 Thread Jira


[ 
https://issues.apache.org/jira/browse/KAFKA-9862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17082928#comment-17082928
 ] 

Sönke Liebau commented on KAFKA-9862:
-

Hi [~d8tltanc], that sounds useful, thanks! 

Just one comment: is this really a critical piece of work? Personally, I think 
we should rather call this "minor".

> Enable --if-exists and --if-not-exists for AdminClient in TopicCommand
> --
>
> Key: KAFKA-9862
> URL: https://issues.apache.org/jira/browse/KAFKA-9862
> Project: Kafka
>  Issue Type: New Feature
>Reporter: Cheng Tan
>Assignee: Cheng Tan
>Priority: Critical
>
> In *kafka-topic.sh*, we expect to use --if-exists to ensure that the topic to 
> create or change exists. Similarly, we expect to use --if-not-exists to 
> ensure that the topic to create or change does not exist. Currently, only 
> *ZookeeperTopicService* supports these two options, and we want to introduce 
> them to *AdminClientTopicService*.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)