[jira] [Updated] (KAFKA-7933) KTableKTableLeftJoinTest takes an hour to finish
[ https://issues.apache.org/jira/browse/KAFKA-7933?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Viktor Somogyi updated KAFKA-7933:
----------------------------------
    Priority: Major  (was: Critical)

> KTableKTableLeftJoinTest takes an hour to finish
> ------------------------------------------------
>
>                 Key: KAFKA-7933
>                 URL: https://issues.apache.org/jira/browse/KAFKA-7933
>             Project: Kafka
>          Issue Type: Improvement
>          Components: streams
>    Affects Versions: 2.2.0
>            Reporter: Viktor Somogyi
>            Priority: Major
>         Attachments: jenkins-output-one-hour-test.log
>
> PRs might time out as {{KTableKTableLeftJoinTest.shouldNotThrowIllegalStateExceptionWhenMultiCacheEvictions}} took one hour to complete.
> {noformat}
> 11:57:45 org.apache.kafka.streams.kstream.internals.KTableKTableLeftJoinTest > shouldNotThrowIllegalStateExceptionWhenMultiCacheEvictions STARTED
> 12:53:35
> 12:53:35 org.apache.kafka.streams.kstream.internals.KTableKTableLeftJoinTest > shouldNotThrowIllegalStateExceptionWhenMultiCacheEvictions PASSED
> {noformat}

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
[jira] [Commented] (KAFKA-7933) KTableKTableLeftJoinTest takes an hour to finish
[ https://issues.apache.org/jira/browse/KAFKA-7933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16769385#comment-16769385 ]

Viktor Somogyi commented on KAFKA-7933:
---------------------------------------
For now I have only seen this in one instance: https://builds.apache.org/job/kafka-pr-jdk11-scala2.12/2394
[jira] [Created] (KAFKA-7933) KTableKTableLeftJoinTest takes an hour to finish
Viktor Somogyi created KAFKA-7933:
----------------------------------

             Summary: KTableKTableLeftJoinTest takes an hour to finish
                 Key: KAFKA-7933
                 URL: https://issues.apache.org/jira/browse/KAFKA-7933
             Project: Kafka
          Issue Type: Improvement
          Components: streams
    Affects Versions: 2.2.0
            Reporter: Viktor Somogyi
         Attachments: jenkins-output-one-hour-test.log

PRs might time out as {{KTableKTableLeftJoinTest.shouldNotThrowIllegalStateExceptionWhenMultiCacheEvictions}} took one hour to complete.
{noformat}
11:57:45 org.apache.kafka.streams.kstream.internals.KTableKTableLeftJoinTest > shouldNotThrowIllegalStateExceptionWhenMultiCacheEvictions STARTED
12:53:35
12:53:35 org.apache.kafka.streams.kstream.internals.KTableKTableLeftJoinTest > shouldNotThrowIllegalStateExceptionWhenMultiCacheEvictions PASSED
{noformat}
[jira] [Updated] (KAFKA-5722) Refactor ConfigCommand to use the AdminClient
[ https://issues.apache.org/jira/browse/KAFKA-5722?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Viktor Somogyi updated KAFKA-5722:
----------------------------------
    Fix Version/s:     (was: 2.2.0)
                   2.3.0

> Refactor ConfigCommand to use the AdminClient
> ---------------------------------------------
>
>           Key: KAFKA-5722
>           URL: https://issues.apache.org/jira/browse/KAFKA-5722
>       Project: Kafka
>    Issue Type: Improvement
>      Reporter: Viktor Somogyi
>      Assignee: Viktor Somogyi
>      Priority: Major
>        Labels: kip, needs-kip
>       Fix For: 2.3.0
>
> The ConfigCommand currently uses a direct connection to ZooKeeper. The ZooKeeper dependency should be deprecated and an AdminClient API created to be used instead.
> This change requires a KIP.
[jira] [Updated] (KAFKA-5453) Controller may miss requests sent to the broker when zk session timeout happens.
[ https://issues.apache.org/jira/browse/KAFKA-5453?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Viktor Somogyi updated KAFKA-5453:
----------------------------------
    Fix Version/s:     (was: 2.2.0)
                   2.3.0

> Controller may miss requests sent to the broker when zk session timeout happens.
> --------------------------------------------------------------------------------
>
>              Key: KAFKA-5453
>              URL: https://issues.apache.org/jira/browse/KAFKA-5453
>          Project: Kafka
>       Issue Type: Bug
>       Components: core
> Affects Versions: 0.11.0.0
>         Reporter: Jiangjie Qin
>         Assignee: Viktor Somogyi
>         Priority: Major
>          Fix For: 2.3.0
>
> The issue I encountered was the following:
> 1. Partition reassignment was in progress; one replica of a partition was being reassigned from broker 1 to broker 2.
> 2. The controller received an ISR change notification which indicated that broker 2 had caught up.
> 3. The controller was sending a StopReplicaRequest to broker 1.
> 4. Broker 1's zk session timed out. The controller removed broker 1 from the cluster and cleaned up the queue, i.e. the StopReplicaRequest was removed from the ControllerChannelManager.
> 5. Broker 1 reconnected to zk and acted as if it were still a follower replica of the partition.
> 6. Broker 1 will always receive an exception from the leader because it is not in the replica list.
> Not sure what the correct fix is here. It seems that broker 1 in this case should ask the controller for the latest replica assignment.
> There are two related bugs:
> 1. When a {{NotAssignedReplicaException}} is thrown from {{Partition.updateReplicaLogReadResult()}}, the other partitions in the same request will fail to update their fetch timestamp and offset and thus also drop out of the ISR.
> 2. The {{NotAssignedReplicaException}} is not properly returned to the replicas; instead, an UnknownServerException is returned.
[jira] [Commented] (KAFKA-7736) Consolidate Map usages in TransactionManager
[ https://issues.apache.org/jira/browse/KAFKA-7736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16768285#comment-16768285 ]

Viktor Somogyi commented on KAFKA-7736:
---------------------------------------
[~hachikuji] Had a go at refactoring these map usages into a single map. My approach was to create a map from TopicPartition to TopicPartitionEntry which contains the values of the old maps; that way we can treat these data together. Please have a look once you have some time.

> Consolidate Map usages in TransactionManager
> --------------------------------------------
>
>         Key: KAFKA-7736
>         URL: https://issues.apache.org/jira/browse/KAFKA-7736
>     Project: Kafka
>  Issue Type: Task
>  Components: producer
>    Reporter: Viktor Somogyi
>    Assignee: Viktor Somogyi
>    Priority: Minor
>      Labels: exactly-once
>
> There are a bunch of Map collections in TransactionManager which could be merged into a single map to simplify bookkeeping and get rid of potential bugs.
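The consolidation described in the comment above can be sketched roughly as follows. This is a toy model, not the actual Kafka code: `TopicPartitionEntry` exists in the proposed patch, but its fields here are hypothetical stand-ins for the parallel per-partition maps being merged.

```python
from dataclasses import dataclass
from typing import Dict, Optional

# Hypothetical per-partition entry replacing several parallel maps
# (sequence numbers, last acked offsets, in-flight counts, ...) that
# were all keyed by the same TopicPartition.
@dataclass
class TopicPartitionEntry:
    next_sequence: int = 0
    last_acked_offset: Optional[int] = None
    inflight_batches: int = 0

class TransactionState:
    def __init__(self) -> None:
        # One map instead of N parallel ones: the per-partition values
        # can no longer get out of sync with each other.
        self.partitions: Dict[str, TopicPartitionEntry] = {}

    def entry(self, tp: str) -> TopicPartitionEntry:
        # Create the entry lazily on first access for a partition.
        return self.partitions.setdefault(tp, TopicPartitionEntry())

state = TransactionState()
state.entry("topic-0").next_sequence += 5
state.entry("topic-0").inflight_batches += 1
```

The design point is simply that bookkeeping updates for one partition now happen on one object, so adding or clearing state cannot miss one of the maps.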
[jira] [Created] (KAFKA-7823) Missing release notes pages
Viktor Somogyi created KAFKA-7823:
----------------------------------

             Summary: Missing release notes pages
                 Key: KAFKA-7823
                 URL: https://issues.apache.org/jira/browse/KAFKA-7823
             Project: Kafka
          Issue Type: Improvement
    Affects Versions: 1.1.0, 1.0.1, 1.0.0
            Reporter: Viktor Somogyi

These three pages are not available on the Apache website, although other releases (such as 2.0.0) are there:
https://www.apache.org/dist/kafka/1.1.0/RELEASE_NOTES.html
https://www.apache.org/dist/kafka/1.0.1/RELEASE_NOTES.html
https://www.apache.org/dist/kafka/1.0.0/RELEASE_NOTES.html
[jira] [Created] (KAFKA-7805) Use --bootstrap-server option in ducktape tests where applicable
Viktor Somogyi created KAFKA-7805:
----------------------------------

             Summary: Use --bootstrap-server option in ducktape tests where applicable
                 Key: KAFKA-7805
                 URL: https://issues.apache.org/jira/browse/KAFKA-7805
             Project: Kafka
          Issue Type: Improvement
          Components: system tests
            Reporter: Viktor Somogyi

KIP-377 introduces the {{--bootstrap-server}} option and deprecates the {{--zookeeper}} option in {{kafka-topics.sh}}. It'd be nice to use the new option in the ducktape tests to gain higher test coverage.
[jira] [Updated] (KAFKA-7804) Update the docs for KIP-377
[ https://issues.apache.org/jira/browse/KAFKA-7804?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Viktor Somogyi updated KAFKA-7804:
----------------------------------
    Description:
KIP-377 introduced the {{--bootstrap-server}} option to the {{kafka-topics.sh}} command. The documentation (examples and notable changes) should be updated accordingly.

  was:
KIP-377 introduced the {{--bootstrap-server}} option to the {{kafka-topics.sh}} command. The documentation (examples) should be updated accordingly and a release note should be added.

> Update the docs for KIP-377
> ---------------------------
>
>         Key: KAFKA-7804
>         URL: https://issues.apache.org/jira/browse/KAFKA-7804
>     Project: Kafka
>  Issue Type: Improvement
>  Components: documentation
>    Reporter: Viktor Somogyi
>    Assignee: Viktor Somogyi
>    Priority: Major
>
> KIP-377 introduced the {{--bootstrap-server}} option to the {{kafka-topics.sh}} command. The documentation (examples and notable changes) should be updated accordingly.
[jira] [Created] (KAFKA-7804) Update the docs for KIP-377
Viktor Somogyi created KAFKA-7804:
----------------------------------

             Summary: Update the docs for KIP-377
                 Key: KAFKA-7804
                 URL: https://issues.apache.org/jira/browse/KAFKA-7804
             Project: Kafka
          Issue Type: Improvement
          Components: documentation
            Reporter: Viktor Somogyi
            Assignee: Viktor Somogyi

KIP-377 introduced the {{--bootstrap-server}} option to the {{kafka-topics.sh}} command. The documentation (examples) should be updated accordingly and a release note should be added.
[jira] [Commented] (KAFKA-7703) KafkaConsumer.position may return a wrong offset after "seekToEnd" is called
[ https://issues.apache.org/jira/browse/KAFKA-7703?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16723159#comment-16723159 ]

Viktor Somogyi commented on KAFKA-7703:
---------------------------------------
[~rsivaram], [~hachikuji] I see you modified that part of the code lately; what is your take on this?

> KafkaConsumer.position may return a wrong offset after "seekToEnd" is called
> ----------------------------------------------------------------------------
>
>              Key: KAFKA-7703
>              URL: https://issues.apache.org/jira/browse/KAFKA-7703
>          Project: Kafka
>       Issue Type: Bug
>       Components: clients
> Affects Versions: 1.1.0, 1.1.1, 2.0.0, 2.0.1, 2.1.0
>         Reporter: Shixiong Zhu
>         Assignee: Viktor Somogyi
>         Priority: Major
>
> After "seekToEnd" is called, "KafkaConsumer.position" may return a wrong offset set by another reset request.
> Here is a reproducer:
> https://github.com/zsxwing/kafka/commit/4e1aa11bfa99a38ac1e2cb0872c055db56b33246
> In this reproducer, "poll(0)" will send an "earliest" request in the background. However, after "seekToEnd" is called, due to a race condition in "Fetcher.resetOffsetIfNeeded" (it is not atomic: "seekToEnd" could happen between the check
> https://github.com/zsxwing/kafka/commit/4e1aa11bfa99a38ac1e2cb0872c055db56b33246#diff-b45245913eaae46aa847d2615d62cde0R585
> and the seek
> https://github.com/zsxwing/kafka/commit/4e1aa11bfa99a38ac1e2cb0872c055db56b33246#diff-b45245913eaae46aa847d2615d62cde0R605),
> "KafkaConsumer.position" may return an "earliest" offset.
[jira] [Comment Edited] (KAFKA-7703) KafkaConsumer.position may return a wrong offset after "seekToEnd" is called
[ https://issues.apache.org/jira/browse/KAFKA-7703?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16723156#comment-16723156 ]

Viktor Somogyi edited comment on KAFKA-7703 at 12/17/18 4:55 PM:
-----------------------------------------------------------------
[~zsxwing], [~dongjoon], I've looked into the issue and it seems there is no easy fix for this in the code as it is designed to be async, so it might take some time. Even if we make the method atomic, the offset reset that arrives later will be discarded, as the first reset nulls out the resetStrategy in SubscriptionState, which triggers the {{else if (!subscriptions.isOffsetResetNeeded(partition))}} [check|https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/clients/consumer/internals/Fetcher.java#L579] to skip the offset reset.
In the current situation, as a workaround, you could put a {{position}} call between {{poll}} and {{seekToEnd}}, as it blocks until {{poll}} returns or some error happens.
[jira] [Commented] (KAFKA-7703) KafkaConsumer.position may return a wrong offset after "seekToEnd" is called
[ https://issues.apache.org/jira/browse/KAFKA-7703?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16723156#comment-16723156 ]

Viktor Somogyi commented on KAFKA-7703:
---------------------------------------
[~zsxwing], [~dongjoon], I've looked into the issue and it seems there is no easy fix for this in the code as it is designed to be async, so it might take some time. Even if we make the method atomic, the offset reset that arrives later will be discarded, as the first reset nulls out the resetStrategy in SubscriptionState, which triggers the {{else if (!subscriptions.isOffsetResetNeeded(partition))}} [check|https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/clients/consumer/internals/Fetcher.java#L579] to skip the offset reset.
In the current situation you could put a {{position}} call between {{poll}} and {{seekToEnd}}, as it blocks until {{poll}} returns or some error happens.
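The race in the issue above can be illustrated with a toy model. This is NOT the real Fetcher/SubscriptionState code — class and method names only loosely mirror the Kafka internals — it just makes the non-atomic check-then-seek concrete: the pending "earliest" reset passes its check, a seekToEnd lands in the gap, and the stale reset then overwrites it.

```python
# Toy model of the check-then-act race in an async offset reset.

class SubscriptionState:
    def __init__(self):
        self.reset_strategy = None   # 'earliest' / 'latest' / None
        self.position = None

    def request_reset(self, strategy):
        self.reset_strategy = strategy

    def is_reset_needed(self):
        return self.reset_strategy is not None

    def seek(self, offset):
        # seeking also clears any pending reset request
        self.position = offset
        self.reset_strategy = None


def complete_reset_non_atomically(state, earliest):
    """Background step finishing an 'earliest' reset. The check and the
    seek are separated by a network round-trip, modeled as a yield."""
    if state.is_reset_needed():          # check
        yield                            # <-- seekToEnd() can run here
        state.seek(earliest)             # act: clobbers the newer request


state = SubscriptionState()
state.request_reset('earliest')          # what poll(0) kicks off
background = complete_reset_non_atomically(state, earliest=0)
next(background)                         # check passed, round-trip pending
state.request_reset('latest')            # the application's seekToEnd()
try:
    next(background)                     # stale reset completes
except StopIteration:
    pass

print(state.position)        # 0 - the 'earliest' offset, not the end
print(state.reset_strategy)  # None - the 'latest' request was lost
```

This also shows why the workaround in the comment helps: a blocking call between `poll` and `seekToEnd` lets the pending reset finish before the new one is requested.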
[jira] [Updated] (KAFKA-7736) Consolidate Map usages in TransactionManager
[ https://issues.apache.org/jira/browse/KAFKA-7736?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Viktor Somogyi updated KAFKA-7736:
----------------------------------
    Labels: exactly-once  (was: )
[jira] [Updated] (KAFKA-7737) Consolidate InitProducerId API
[ https://issues.apache.org/jira/browse/KAFKA-7737?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Viktor Somogyi updated KAFKA-7737:
----------------------------------
    Priority: Minor  (was: Major)

> Consolidate InitProducerId API
> ------------------------------
>
>         Key: KAFKA-7737
>         URL: https://issues.apache.org/jira/browse/KAFKA-7737
>     Project: Kafka
>  Issue Type: Task
>  Components: producer
>    Reporter: Viktor Somogyi
>    Assignee: Viktor Somogyi
>    Priority: Minor
>
> We have two separate paths in the producer for the InitProducerId API: one for the transactional producer and one for the idempotent producer. It would be nice to find a way to consolidate these.
[jira] [Updated] (KAFKA-7737) Consolidate InitProducerId API
[ https://issues.apache.org/jira/browse/KAFKA-7737?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Viktor Somogyi updated KAFKA-7737:
----------------------------------
    Labels: exactly-once  (was: )
[jira] [Created] (KAFKA-7737) Consolidate InitProducerId API
Viktor Somogyi created KAFKA-7737:
----------------------------------

             Summary: Consolidate InitProducerId API
                 Key: KAFKA-7737
                 URL: https://issues.apache.org/jira/browse/KAFKA-7737
             Project: Kafka
          Issue Type: Task
          Components: producer
            Reporter: Viktor Somogyi
            Assignee: Viktor Somogyi

We have two separate paths in the producer for the InitProducerId API: one for the transactional producer and one for the idempotent producer. It would be nice to find a way to consolidate these.
[jira] [Created] (KAFKA-7736) Consolidate Map usages in TransactionManager
Viktor Somogyi created KAFKA-7736:
----------------------------------

             Summary: Consolidate Map usages in TransactionManager
                 Key: KAFKA-7736
                 URL: https://issues.apache.org/jira/browse/KAFKA-7736
             Project: Kafka
          Issue Type: Task
          Components: producer
            Reporter: Viktor Somogyi
            Assignee: Viktor Somogyi

There are a bunch of Map collections in TransactionManager which could be merged into a single map to simplify bookkeeping and get rid of potential bugs.
[jira] [Commented] (KAFKA-7703) KafkaConsumer.position may return a wrong offset after "seekToEnd" is called
[ https://issues.apache.org/jira/browse/KAFKA-7703?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16715004#comment-16715004 ]

Viktor Somogyi commented on KAFKA-7703:
---------------------------------------
[~zsxwing], just a quick update: I have looked at the code, reproduced the issue based on your test, and am now trying to figure out the best solution. I'll post another update once I have a solution proposal.
[jira] [Commented] (KAFKA-6794) Support for incremental replica reassignment
[ https://issues.apache.org/jira/browse/KAFKA-6794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16713043#comment-16713043 ]

Viktor Somogyi commented on KAFKA-6794:
---------------------------------------
Also created an early pull request to check whether the algorithm we came up with is in line with the goals of this jira.

> Support for incremental replica reassignment
> --------------------------------------------
>
>         Key: KAFKA-6794
>         URL: https://issues.apache.org/jira/browse/KAFKA-6794
>     Project: Kafka
>  Issue Type: Improvement
>    Reporter: Jason Gustafson
>    Assignee: Viktor Somogyi
>    Priority: Major
>
> Say you have a replication factor of 4 and you trigger a reassignment which moves all replicas to new brokers. Now 8 replicas are fetching at the same time, which means you need to account for 8 times the current producer load plus the catch-up replication. To make matters worse, the replicas won't all become in-sync at the same time; in the worst case, you could have 7 replicas in-sync while one is still catching up. Currently, the old replicas won't be disabled until all new replicas are in-sync. This makes configuring the throttle tricky since ISR traffic is not subject to it.
> Rather than trying to bring all 4 new replicas online at the same time, a friendlier approach would be to do it incrementally: bring one replica online, bring it in-sync, then remove one of the old replicas. Repeat until all replicas have been changed. This would reduce the impact of a reassignment and make configuring the throttle easier, at the cost of a slower overall reassignment.
[jira] [Commented] (KAFKA-6794) Support for incremental replica reassignment
[ https://issues.apache.org/jira/browse/KAFKA-6794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16712629#comment-16712629 ]

Viktor Somogyi commented on KAFKA-6794:
---------------------------------------
I think this change doesn't need a KIP for now, so I've collected the algorithm and some examples here; [~hachikuji] please have a look at it.

h2. Calculating A Reassignment Step

For calculating a reassignment step, the final target replica (FTR) set and the current replica (CR) set are always used.
# Calculate the replicas to be dropped (DR):
## Calculate n = size(CR) - size(FTR)
## Filter those replicas from CR which are not in FTR; this is the excess replica (ER) set
## Sort the ER set in an order where the leader is last (this ensures that it will be selected only when needed)
## Take the first n replicas of ER; that will be the set of dropped replicas
# Calculate the new replica (NR) to be added by selecting the first replica from FTR that is not in CR
# Create the target replica (TR) set: CR + NR - DR
# If this is the last step, order the replicas as specified by FTR. This means that the last step is always equal to FTR

h2. Performing A Reassignment Step

# Wait until CR is entirely in ISR. This makes sure that we're starting off with a solid base for reassignment.
# Calculate the next reassignment step as described above, based on the reassignment context.
# Wait until all brokers in the target replicas (TR) of the reassignment step are alive. This makes sure that reassignment starts only when the target brokers can perform the actual reassignment step.
# If we have new replicas in ISR from the previous step, change the state of those to OnlineReplica
# Update CR in Zookeeper with TR: with this, the DR set will be dropped and the NR set will be added.
# Send a LeaderAndIsr request to all replicas in CR + NR so they are notified of the Zookeeper events.
# Start the new replicas in NR by moving them to NewReplica state.
# Set CR to TR in memory.
# Send a LeaderAndIsr request with a potential new leader (if the current leader is not in TR), the new CR (using TR) and the same ISR to every broker in TR
# Move replicas in DR to Offline (forcing those replicas out of ISR)
# Move replicas in DR to NonExistentReplica (forcing those replicas to be deleted)
# Update the /admin/reassign_partitions path in ZK to remove this partition.
# After electing the leader, the replicas and ISR information changes, so resend the update metadata request to every broker

h2. Example

The following code block shows how a transition happens from (0, 1, 2) into (3, 4, 5) where the initial leader is 0.
{noformat}
(0, 1, 2)    // starting assignment
    |
(0, 1, 2, 3) // +3
    |
(0, 2, 3, 4) // -1 +4
    |
(0, 3, 4, 5) // -2 +5
    |
(3, 4, 5)    // -0, new leader (3) is elected, requested order is matched, reassignment finished
{noformat}
Let's take a closer look at the third step above:
{noformat}
FTR = (3, 4, 5)
CR  = (0, 1, 2, 3)

n  = size(CR) - size(FTR) // 1
ER = CR - FTR             // (0, 1, 2)
ER = order(ER)            // (1, 2, 0)
DR = takeFirst(ER, n)     // (1)
NR = first(FTR - CR)      // 4

TR = CR + NR - DR         // (0, 2, 3, 4)
{noformat}
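The step calculation described in the comment above can be sketched in a few lines of Python. This is a rough illustration, not the controller implementation; the function and variable names are mine, and the leader is assumed known:

```python
def calculate_step(cr, ftr, leader):
    """One incremental reassignment step: TR = CR + NR - DR.
    cr: current replica list, ftr: final target replica list."""
    n = max(0, len(cr) - len(ftr))
    # excess replicas: in CR but not in FTR, with the leader sorted last
    # so it is only dropped when nothing else remains
    er = sorted((r for r in cr if r not in ftr),
                key=lambda r: r == leader)
    dr = set(er[:n])                          # replicas to drop
    nr = [r for r in ftr if r not in cr][:1]  # at most one new replica
    tr = [r for r in cr if r not in dr] + nr  # target replicas
    # last step: order the replicas as specified by FTR
    return ftr if set(tr) == set(ftr) else tr

# Replaying the (0, 1, 2) -> (3, 4, 5) example with leader 0:
cr = [0, 1, 2]
while cr != [3, 4, 5]:
    cr = calculate_step(cr, [3, 4, 5], leader=0)
    print(cr)
# [0, 1, 2, 3]
# [0, 2, 3, 4]
# [0, 3, 4, 5]
# [3, 4, 5]
```

Note that n is computed as size(CR) - size(FTR), which is what the worked example implies (n = 1 for CR of size 4 and FTR of size 3); clamping at zero covers the first step, where nothing is dropped yet.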
[jira] [Commented] (KAFKA-5214) Re-add KafkaAdminClient#apiVersions
[ https://issues.apache.org/jira/browse/KAFKA-5214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16710105#comment-16710105 ]

Viktor Somogyi commented on KAFKA-5214:
---------------------------------------
+1 for this; it is a requirement for KAFKA-5723 too.

> Re-add KafkaAdminClient#apiVersions
> -----------------------------------
>
>         Key: KAFKA-5214
>         URL: https://issues.apache.org/jira/browse/KAFKA-5214
>     Project: Kafka
>  Issue Type: Improvement
>    Reporter: Colin P. McCabe
>    Assignee: Colin P. McCabe
>    Priority: Minor
>     Fix For: 2.2.0
>
> We removed KafkaAdminClient#apiVersions just before 0.11.0.0 to give us a bit more time to iterate on it before it's included in a release. We should add the relevant methods back.
[jira] [Assigned] (KAFKA-7703) KafkaConsumer.position may return a wrong offset after "seekToEnd" is called
[ https://issues.apache.org/jira/browse/KAFKA-7703?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Viktor Somogyi reassigned KAFKA-7703:
-------------------------------------
    Assignee: Viktor Somogyi
[jira] [Commented] (KAFKA-7703) KafkaConsumer.position may return a wrong offset after "seekToEnd" is called
[ https://issues.apache.org/jira/browse/KAFKA-7703?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16710012#comment-16710012 ]

Viktor Somogyi commented on KAFKA-7703:
---------------------------------------
[~zsxwing] I'll pick this up if you don't mind and look into it.
[jira] [Commented] (KAFKA-5453) Controller may miss requests sent to the broker when zk session timeout happens.
[ https://issues.apache.org/jira/browse/KAFKA-5453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16709927#comment-16709927 ] Viktor Somogyi commented on KAFKA-5453: --- [~becket_qin] I'd like to pick this up if you don't mind; I'm interested in this issue. > Controller may miss requests sent to the broker when zk session timeout > happens. > > > Key: KAFKA-5453 > URL: https://issues.apache.org/jira/browse/KAFKA-5453 > Project: Kafka > Issue Type: Bug > Components: core >Affects Versions: 0.11.0.0 >Reporter: Jiangjie Qin >Priority: Major > Fix For: 2.2.0 > > > The issue I encountered was the following: > 1. Partition reassignment was in progress; one replica of a partition was > being reassigned from broker 1 to broker 2. > 2. The controller received an ISR change notification which indicated broker 2 > had caught up. > 3. The controller was sending a StopReplicaRequest to broker 1. > 4. Broker 1's zk session timeout occurred. The controller removed broker 1 from the > cluster and cleaned up the queue, i.e. the StopReplicaRequest was removed > from the ControllerChannelManager. > 5. Broker 1 reconnected to zk and acted as if it was still a follower replica of > the partition. > 6. Broker 1 will always receive an exception from the leader because it is not > in the replica list. > It is not clear what the correct fix is here. It seems that broker 1 in this case > should ask the controller for the latest replica assignment. > There are two related bugs: > 1. When a {{NotAssignedReplicaException}} is thrown from > {{Partition.updateReplicaLogReadResult()}}, the other partitions in the same > request will fail to update the fetch timestamp and offset and thus also > drop out of the ISR. > 2. The {{NotAssignedReplicaException}} was not properly returned to the > replicas; instead, an UnknownServerException is returned. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (KAFKA-5383) Additional Test Cases for ReplicaManager
[ https://issues.apache.org/jira/browse/KAFKA-5383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16709926#comment-16709926 ] Viktor Somogyi commented on KAFKA-5383: --- [~hachikuji] do you mind if I pick this up? Since I've been working on the incremental partition reassignment, I think this is a good candidate for me. > Additional Test Cases for ReplicaManager > > > Key: KAFKA-5383 > URL: https://issues.apache.org/jira/browse/KAFKA-5383 > Project: Kafka > Issue Type: Sub-task > Components: clients, core, producer >Reporter: Jason Gustafson >Priority: Major > Fix For: 2.2.0 > > > KAFKA-5355 and KAFKA-5376 have shown that current testing of ReplicaManager > is inadequate. This is definitely the case when it comes to KIP-98 and is > likely true in general. We should improve this. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (KAFKA-5209) Transient failure: kafka.server.MetadataRequestTest.testControllerId
[ https://issues.apache.org/jira/browse/KAFKA-5209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16709880#comment-16709880 ] Viktor Somogyi commented on KAFKA-5209: --- [~umesh9...@gmail.com] are you planning to continue this? I've assigned it to you but if you think you won't continue, I'm happy to take over. > Transient failure: kafka.server.MetadataRequestTest.testControllerId > > > Key: KAFKA-5209 > URL: https://issues.apache.org/jira/browse/KAFKA-5209 > Project: Kafka > Issue Type: Sub-task > Components: unit tests >Reporter: Guozhang Wang >Assignee: Umesh Chaudhary >Priority: Major > > {code} > Stacktrace > java.lang.NullPointerException > at > kafka.server.MetadataRequestTest.testControllerId(MetadataRequestTest.scala:57) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47) > at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > at > org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) > at > org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) > at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325) > at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78) > at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57) > at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) > at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) > at 
org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) > at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) > at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) > at org.junit.runners.ParentRunner.run(ParentRunner.java:363) > at > org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:114) > at > org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.execute(JUnitTestClassExecuter.java:57) > at > org.gradle.api.internal.tasks.testing.junit.JUnitTestClassProcessor.processTestClass(JUnitTestClassProcessor.java:66) > at > org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51) > at sun.reflect.GeneratedMethodAccessor50.invoke(Unknown Source) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35) > at > org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24) > at > org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32) > at > org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93) > at com.sun.proxy.$Proxy2.processTestClass(Unknown Source) > at > org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:109) > at sun.reflect.GeneratedMethodAccessor49.invoke(Unknown Source) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35) > at > org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24) > at > 
org.gradle.internal.remote.internal.hub.MessageHubBackedObjectConnection$DispatchWrapper.dispatch(MessageHubBackedObjectConnection.java:147) > at > org.gradle.internal.remote.internal.hub.MessageHubBackedObjectConnection$DispatchWrapper.dispatch(MessageHubBackedObjectConnection.java:129) > at > org.gradle.internal.remote.internal.hub.MessageHub$Handler.run(MessageHub.java:404) > at > org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:63) > at > org.gradle.internal.concurrent.StoppableExecutorImpl$1.run(StoppableE
[jira] [Assigned] (KAFKA-5209) Transient failure: kafka.server.MetadataRequestTest.testControllerId
[ https://issues.apache.org/jira/browse/KAFKA-5209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Viktor Somogyi reassigned KAFKA-5209: - Assignee: Umesh Chaudhary > Transient failure: kafka.server.MetadataRequestTest.testControllerId > > > Key: KAFKA-5209 > URL: https://issues.apache.org/jira/browse/KAFKA-5209 > Project: Kafka > Issue Type: Sub-task > Components: unit tests >Reporter: Guozhang Wang >Assignee: Umesh Chaudhary >Priority: Major > > {code} > Stacktrace > java.lang.NullPointerException > at > kafka.server.MetadataRequestTest.testControllerId(MetadataRequestTest.scala:57) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47) > at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > at > org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) > at > org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) > at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325) > at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78) > at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57) > at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) > at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) > at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) > at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) > at 
org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) > at org.junit.runners.ParentRunner.run(ParentRunner.java:363) > at > org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:114) > at > org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.execute(JUnitTestClassExecuter.java:57) > at > org.gradle.api.internal.tasks.testing.junit.JUnitTestClassProcessor.processTestClass(JUnitTestClassProcessor.java:66) > at > org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51) > at sun.reflect.GeneratedMethodAccessor50.invoke(Unknown Source) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35) > at > org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24) > at > org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32) > at > org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93) > at com.sun.proxy.$Proxy2.processTestClass(Unknown Source) > at > org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:109) > at sun.reflect.GeneratedMethodAccessor49.invoke(Unknown Source) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35) > at > org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24) > at > org.gradle.internal.remote.internal.hub.MessageHubBackedObjectConnection$DispatchWrapper.dispatch(MessageHubBackedObjectConnection.java:147) > at > 
org.gradle.internal.remote.internal.hub.MessageHubBackedObjectConnection$DispatchWrapper.dispatch(MessageHubBackedObjectConnection.java:129) > at > org.gradle.internal.remote.internal.hub.MessageHub$Handler.run(MessageHub.java:404) > at > org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:63) > at > org.gradle.internal.concurrent.StoppableExecutorImpl$1.run(StoppableExecutorImpl.java:46) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) > at > java.util.concurrent.ThreadPoolExecu
[jira] [Commented] (KAFKA-5286) Producer should await transaction completion in close
[ https://issues.apache.org/jira/browse/KAFKA-5286?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16709877#comment-16709877 ] Viktor Somogyi commented on KAFKA-5286: --- [~apurva], [~ijuma], [~hachikuji] Is this the same as KAFKA-6635? I have a wip solution for that, but I'd be happy to receive some feedback on whether I'm going in the right direction. > Producer should await transaction completion in close > - > > Key: KAFKA-5286 > URL: https://issues.apache.org/jira/browse/KAFKA-5286 > Project: Kafka > Issue Type: Sub-task > Components: clients, core, producer >Affects Versions: 0.11.0.0 >Reporter: Jason Gustafson >Priority: Major > Fix For: 2.2.0 > > > We should wait at least as long as the timeout for a transaction which has > begun completion (commit or abort) to be finished. The tricky thing is whether we > should abort a transaction which is still in progress. It seems reasonable, since > either the coordinator will time out and abort the transaction, or the > next producer using the same transactionalId will fence this producer and > abort the transaction. In any case, the transaction will be aborted, so > perhaps we should do it proactively. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (KAFKA-6794) Support for incremental replica reassignment
[ https://issues.apache.org/jira/browse/KAFKA-6794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16703314#comment-16703314 ] Viktor Somogyi commented on KAFKA-6794: --- [~hachikuji] it seems we finally have a solution which passes all the unit tests. I'm going to clean it up, write a doc about it, and try to demo it to you in some way (it could be a long KIP, a recording, or maybe even a live demo). > Support for incremental replica reassignment > > > Key: KAFKA-6794 > URL: https://issues.apache.org/jira/browse/KAFKA-6794 > Project: Kafka > Issue Type: Improvement >Reporter: Jason Gustafson >Assignee: Viktor Somogyi >Priority: Major > > Say you have a replication factor of 4 and you trigger a reassignment which > moves all replicas to new brokers. Now 8 replicas are fetching at the same > time which means you need to account for 8 times the current producer load > plus the catch-up replication. To make matters worse, the replicas won't all > become in-sync at the same time; in the worst case, you could have 7 replicas > in-sync while one is still catching up. Currently, the old replicas won't be > disabled until all new replicas are in-sync. This makes configuring the > throttle tricky since ISR traffic is not subject to it. > Rather than trying to bring all 4 new replicas online at the same time, a > friendlier approach would be to do it incrementally: bring one replica > online, bring it in-sync, then remove one of the old replicas. Repeat until > all replicas have been changed. This would reduce the impact of a > reassignment and make configuring the throttle easier at the cost of a slower > overall reassignment. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (KAFKA-6635) Producer close does not await pending transaction
[ https://issues.apache.org/jira/browse/KAFKA-6635?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16703297#comment-16703297 ] Viktor Somogyi commented on KAFKA-6635: --- I've made some improvements on this issue and am going to create a PR; please have a look and see whether that (or something similar) is what you had in mind. > Producer close does not await pending transaction > - > > Key: KAFKA-6635 > URL: https://issues.apache.org/jira/browse/KAFKA-6635 > Project: Kafka > Issue Type: Bug > Components: producer >Reporter: Jason Gustafson >Assignee: Viktor Somogyi >Priority: Major > > Currently close() only awaits completion of pending produce requests. If > there is a transaction ongoing, it may be dropped. For example, if one thread > is calling {{commitTransaction()}} and another calls {{close()}}, then the > commit may never happen even if the caller is willing to wait for it (by > using a long timeout). What's more, the thread blocking in > {{commitTransaction()}} will be stuck since the result will not be completed > once the producer has shut down. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (KAFKA-6635) Producer close does not await pending transaction
[ https://issues.apache.org/jira/browse/KAFKA-6635?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Viktor Somogyi reassigned KAFKA-6635: - Assignee: Viktor Somogyi > Producer close does not await pending transaction > - > > Key: KAFKA-6635 > URL: https://issues.apache.org/jira/browse/KAFKA-6635 > Project: Kafka > Issue Type: Bug > Components: producer >Reporter: Jason Gustafson >Assignee: Viktor Somogyi >Priority: Major > > Currently close() only awaits completion of pending produce requests. If > there is a transaction ongoing, it may be dropped. For example, if one thread > is calling {{commitTransaction()}} and another calls {{close()}}, then the > commit may never happen even if the caller is willing to wait for it (by > using a long timeout). What's more, the thread blocking in > {{commitTransaction()}} will be stuck since the result will not be completed > once the producer has shutdown. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (KAFKA-6635) Producer close does not await pending transaction
[ https://issues.apache.org/jira/browse/KAFKA-6635?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16700392#comment-16700392 ] Viktor Somogyi commented on KAFKA-6635: --- [~hachikuji] is this still relevant? Do you mind if I pick this up? > Producer close does not await pending transaction > - > > Key: KAFKA-6635 > URL: https://issues.apache.org/jira/browse/KAFKA-6635 > Project: Kafka > Issue Type: Bug > Components: producer >Reporter: Jason Gustafson >Priority: Major > > Currently close() only awaits completion of pending produce requests. If > there is a transaction ongoing, it may be dropped. For example, if one thread > is calling {{commitTransaction()}} and another calls {{close()}}, then the > commit may never happen even if the caller is willing to wait for it (by > using a long timeout). What's more, the thread blocking in > {{commitTransaction()}} will be stuck since the result will not be completed > once the producer has shutdown. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
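The behaviour KAFKA-6635 asks for, namely close() awaiting an in-flight transaction instead of leaving the thread blocked in commitTransaction() stuck forever, can be modelled with a small self-contained sketch. This is a simplified stand-in, not the actual KafkaProducer internals; the class and method names are made up for illustration:

```java
import java.util.concurrent.*;

// Simplified model: close() waits up to a timeout for a pending transaction
// result, and if it cannot, it fails the result so the committing thread
// is unblocked rather than stuck forever.
class TransactionalCloser {
    private volatile CompletableFuture<Void> pendingTransaction; // set by a commit in flight

    void beginCommit(CompletableFuture<Void> result) {
        pendingTransaction = result;
    }

    boolean close(long timeoutMs) {
        CompletableFuture<Void> pending = pendingTransaction;
        if (pending == null)
            return true;                                 // nothing in flight
        try {
            pending.get(timeoutMs, TimeUnit.MILLISECONDS); // await commit/abort completion
            return true;
        } catch (TimeoutException e) {
            pending.completeExceptionally(               // unblock the committing thread
                new IllegalStateException("producer closed before transaction completed"));
            return false;
        } catch (InterruptedException | ExecutionException e) {
            return false;
        }
    }
}
```

The key design point is that close() must complete the pending result one way or another; otherwise the caller of commitTransaction() blocks forever once the producer has shut down, which is exactly the failure mode the issue describes.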
[jira] [Created] (KAFKA-7617) Document security primitives
Viktor Somogyi created KAFKA-7617: - Summary: Document security primitives Key: KAFKA-7617 URL: https://issues.apache.org/jira/browse/KAFKA-7617 Project: Kafka Issue Type: Task Reporter: Viktor Somogyi Assignee: Viktor Somogyi Although the documentation helps with configuring authentication and authorization, it does not list the security primitives (operations and resources) that can be used, which makes it hard for users to set up thorough authorization rules. This task would cover adding these to the security page of the Kafka documentation. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (KAFKA-3268) Refactor existing CLI scripts to use KafkaAdminClient
[ https://issues.apache.org/jira/browse/KAFKA-3268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16626992#comment-16626992 ] Viktor Somogyi commented on KAFKA-3268: --- Published KIP-375 about TopicCommand. I'd be happy if anyone wants to participate in the discussion: https://cwiki.apache.org/confluence/display/KAFKA/KIP-375%3A+TopicCommand+to+use+AdminClient > Refactor existing CLI scripts to use KafkaAdminClient > - > > Key: KAFKA-3268 > URL: https://issues.apache.org/jira/browse/KAFKA-3268 > Project: Kafka > Issue Type: Sub-task >Reporter: Grant Henke >Assignee: Viktor Somogyi >Priority: Major > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (KAFKA-6794) Support for incremental replica reassignment
[ https://issues.apache.org/jira/browse/KAFKA-6794?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Viktor Somogyi reassigned KAFKA-6794: - Assignee: Viktor Somogyi (was: Sandor Murakozi) > Support for incremental replica reassignment > > > Key: KAFKA-6794 > URL: https://issues.apache.org/jira/browse/KAFKA-6794 > Project: Kafka > Issue Type: Improvement >Reporter: Jason Gustafson >Assignee: Viktor Somogyi >Priority: Major > > Say you have a replication factor of 4 and you trigger a reassignment which > moves all replicas to new brokers. Now 8 replicas are fetching at the same > time which means you need to account for 8 times the current producer load > plus the catch-up replication. To make matters worse, the replicas won't all > become in-sync at the same time; in the worst case, you could have 7 replicas > in-sync while one is still catching up. Currently, the old replicas won't be > disabled until all new replicas are in-sync. This makes configuring the > throttle tricky since ISR traffic is not subject to it. > Rather than trying to bring all 4 new replicas online at the same time, a > friendlier approach would be to do it incrementally: bring one replica > online, bring it in-sync, then remove one of the old replicas. Repeat until > all replicas have been changed. This would reduce the impact of a > reassignment and make configuring the throttle easier at the cost of a slower > overall reassignment. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
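The incremental strategy sketched in the description above (bring one new replica online, wait for it to become in-sync, then retire one old replica, and repeat) amounts to computing a sequence of intermediate assignments. A hypothetical plan computation, not the controller's actual algorithm, could look like:

```java
import java.util.*;

// Hypothetical sketch: compute the intermediate replica assignments for an
// incremental reassignment. Each "wait until in-sync" step happens between
// consecutive entries of the returned plan.
class IncrementalReassignment {
    static List<List<Integer>> steps(List<Integer> current, List<Integer> target) {
        List<Integer> toAdd = new ArrayList<>(target);
        toAdd.removeAll(current);                   // replicas that must be created
        List<Integer> toRemove = new ArrayList<>(current);
        toRemove.removeAll(target);                 // replicas that must be retired

        List<List<Integer>> plan = new ArrayList<>();
        List<Integer> assignment = new ArrayList<>(current);
        for (int i = 0; i < toAdd.size(); i++) {
            assignment.add(toAdd.get(i));           // bring one new replica online
            plan.add(new ArrayList<>(assignment));  // ...wait until it joins the ISR...
            if (i < toRemove.size()) {
                assignment.remove(toRemove.get(i)); // then retire one old replica
                plan.add(new ArrayList<>(assignment));
            }
        }
        return plan;
    }
}
```

For the example in the description (replication factor 4, all replicas moved), each intermediate assignment contains at most 5 replicas instead of 8, which is what bounds the extra fetch load at the cost of a slower overall reassignment.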
[jira] [Updated] (KAFKA-7433) Introduce broker options in TopicCommand to use AdminClient
[ https://issues.apache.org/jira/browse/KAFKA-7433?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Viktor Somogyi updated KAFKA-7433: -- Component/s: (was: core) admin > Introduce broker options in TopicCommand to use AdminClient > --- > > Key: KAFKA-7433 > URL: https://issues.apache.org/jira/browse/KAFKA-7433 > Project: Kafka > Issue Type: Improvement > Components: admin >Affects Versions: 2.1.0 >Reporter: Viktor Somogyi >Assignee: Viktor Somogyi >Priority: Major > Labels: kip > > This task aims to add --bootstrap-servers and --admin.config options which > enable kafka.admin.TopicCommand to work with the Java based AdminClient. > Ideally KAFKA-5561 might replace this task, but as an incremental step until > that succeeds it might be enough to just add these options to the existing > command. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (KAFKA-7433) Introduce broker options in TopicCommand to use AdminClient
[ https://issues.apache.org/jira/browse/KAFKA-7433?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Viktor Somogyi updated KAFKA-7433: -- Description: This task aims to add --bootstrap-servers and --admin.config options which enable kafka.admin.TopicCommand to work with the Java based AdminClient. Ideally KAFKA-5561 might replace this task, but as an incremental step until that succeeds it might be enough to just add these options to the existing command. was:This task aim to add --bootstrap-servers and --admin.config options which enable TopicCommand to work with the Java based AdminClient. > Introduce broker options in TopicCommand to use AdminClient > --- > > Key: KAFKA-7433 > URL: https://issues.apache.org/jira/browse/KAFKA-7433 > Project: Kafka > Issue Type: Improvement > Components: core >Affects Versions: 2.1.0 >Reporter: Viktor Somogyi >Assignee: Viktor Somogyi >Priority: Major > Labels: kip > > This task aims to add --bootstrap-servers and --admin.config options which > enable kafka.admin.TopicCommand to work with the Java based AdminClient. > Ideally KAFKA-5561 might replace this task, but as an incremental step until > that succeeds it might be enough to just add these options to the existing > command. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (KAFKA-7433) Introduce broker options in TopicCommand to use AdminClient
[ https://issues.apache.org/jira/browse/KAFKA-7433?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Viktor Somogyi updated KAFKA-7433: -- Description: This task aim to add --bootstrap-servers and --admin.config options which enable TopicCommand to work with the Java based AdminClient. > Introduce broker options in TopicCommand to use AdminClient > --- > > Key: KAFKA-7433 > URL: https://issues.apache.org/jira/browse/KAFKA-7433 > Project: Kafka > Issue Type: Improvement > Components: core >Affects Versions: 2.1.0 >Reporter: Viktor Somogyi >Assignee: Viktor Somogyi >Priority: Major > Labels: kip > > This task aim to add --bootstrap-servers and --admin.config options which > enable TopicCommand to work with the Java based AdminClient. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (KAFKA-7433) Introduce broker options in TopicCommand to use AdminClient
[ https://issues.apache.org/jira/browse/KAFKA-7433?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Viktor Somogyi updated KAFKA-7433: -- Labels: kip (was: ) > Introduce broker options in TopicCommand to use AdminClient > --- > > Key: KAFKA-7433 > URL: https://issues.apache.org/jira/browse/KAFKA-7433 > Project: Kafka > Issue Type: Improvement > Components: core >Affects Versions: 2.1.0 >Reporter: Viktor Somogyi >Assignee: Viktor Somogyi >Priority: Major > Labels: kip > > This task aim to add --bootstrap-servers and --admin.config options which > enable TopicCommand to work with the Java based AdminClient. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (KAFKA-7433) Introduce broker options in TopicCommand to use AdminClient
Viktor Somogyi created KAFKA-7433: - Summary: Introduce broker options in TopicCommand to use AdminClient Key: KAFKA-7433 URL: https://issues.apache.org/jira/browse/KAFKA-7433 Project: Kafka Issue Type: Improvement Components: core Affects Versions: 2.1.0 Reporter: Viktor Somogyi Assignee: Viktor Somogyi -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (KAFKA-5561) Java based TopicCommand to use the Admin client
[ https://issues.apache.org/jira/browse/KAFKA-5561?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16625590#comment-16625590 ] Viktor Somogyi commented on KAFKA-5561: --- [~ppatierno] I've renamed this JIRA, as your effort is rather to create a Java based command, while what I'm working on is fixing up the Scala one, which runs a bit parallel to yours. Hope you don't mind. > Java based TopicCommand to use the Admin client > --- > > Key: KAFKA-5561 > URL: https://issues.apache.org/jira/browse/KAFKA-5561 > Project: Kafka > Issue Type: Improvement > Components: tools >Reporter: Paolo Patierno >Assignee: Paolo Patierno >Priority: Major > > Hi, > as suggested in the https://issues.apache.org/jira/browse/KAFKA-3331, it > could be great to have the TopicCommand using the new Admin client instead of > the way it works today. > As pushed by [~gwenshap] in the above JIRA, I'm going to work on it. > Thanks, > Paolo -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (KAFKA-5561) Java based TopicCommand to use the Admin client
[ https://issues.apache.org/jira/browse/KAFKA-5561?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Viktor Somogyi updated KAFKA-5561: -- Summary: Java based TopicCommand to use the Admin client (was: Refactor TopicCommand to use the Admin client) > Java based TopicCommand to use the Admin client > --- > > Key: KAFKA-5561 > URL: https://issues.apache.org/jira/browse/KAFKA-5561 > Project: Kafka > Issue Type: Improvement > Components: tools >Reporter: Paolo Patierno >Assignee: Paolo Patierno >Priority: Major > > Hi, > as suggested in the https://issues.apache.org/jira/browse/KAFKA-3331, it > could be great to have the TopicCommand using the new Admin client instead of > the way it works today. > As pushed by [~gwenshap] in the above JIRA, I'm going to work on it. > Thanks, > Paolo -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Comment Edited] (KAFKA-3268) Refactor existing CLI scripts to use KafkaAdminClient
[ https://issues.apache.org/jira/browse/KAFKA-3268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16625545#comment-16625545 ] Viktor Somogyi edited comment on KAFKA-3268 at 9/24/18 8:58 AM: [~enothereska] I'll publish a KIP this week about KAFKA-5561 (TopicCommand - PR too) and KAFKA-5722 is on its way too. Unfortunately lately I got stuck in other things. I'd be very happy if you could pick up KAFKA-5723 or KAFKA-5601, I think neither got continued. I'd gladly help in the KIP discussions or in the implementation too. was (Author: viktorsomogyi): [~enothereska] I'll publish a KIP this week about (TopicCommand - PR also) and is on its way too. Unfortunately lately I got stuck in other things. I'd be very happy if you could pick up or , I think neither got continued. I'd gladly help in the KIP discussions or in the implementation too. > Refactor existing CLI scripts to use KafkaAdminClient > - > > Key: KAFKA-3268 > URL: https://issues.apache.org/jira/browse/KAFKA-3268 > Project: Kafka > Issue Type: Sub-task >Reporter: Grant Henke >Assignee: Viktor Somogyi >Priority: Major > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Comment Edited] (KAFKA-3268) Refactor existing CLI scripts to use KafkaAdminClient
[ https://issues.apache.org/jira/browse/KAFKA-3268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16625545#comment-16625545 ] Viktor Somogyi edited comment on KAFKA-3268 at 9/24/18 8:57 AM: [~enothereska] I'll publish a KIP this week about (TopicCommand - PR also) and is on its way too. Unfortunately lately I got stuck in other things. I'd be very happy if you could pick up or , I think neither got continued. I'd gladly help in the KIP discussions or in the implementation too. was (Author: viktorsomogyi): [~enothereska] I'll publish a KIP this week about KAFKA-5561 (TopicCommand) and KAFKA-5722 is on its way too. Unfortunately lately I got stuck in other things. I'd be very happy if you could pick up KAFKA-5723 or KAFKA-5601, I think neither got continued. I'd gladly help in the KIP discussions or in the implementation too. > Refactor existing CLI scripts to use KafkaAdminClient > - > > Key: KAFKA-3268 > URL: https://issues.apache.org/jira/browse/KAFKA-3268 > Project: Kafka > Issue Type: Sub-task >Reporter: Grant Henke >Assignee: Viktor Somogyi >Priority: Major > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (KAFKA-3268) Refactor existing CLI scripts to use KafkaAdminClient
[ https://issues.apache.org/jira/browse/KAFKA-3268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16625545#comment-16625545 ] Viktor Somogyi commented on KAFKA-3268: --- [~enothereska] I'll publish a KIP this week about KAFKA-5561 (TopicCommand) and KAFKA-5722 is on its way too. Unfortunately lately I got stuck in other things. I'd be very happy if you could pick up KAFKA-5723 or KAFKA-5601, I think neither got continued. I'd gladly help in the KIP discussions or in the implementation too. > Refactor existing CLI scripts to use KafkaAdminClient > - > > Key: KAFKA-3268 > URL: https://issues.apache.org/jira/browse/KAFKA-3268 > Project: Kafka > Issue Type: Sub-task >Reporter: Grant Henke >Assignee: Viktor Somogyi >Priority: Major > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (KAFKA-5722) Refactor ConfigCommand to use the AdminClient
[ https://issues.apache.org/jira/browse/KAFKA-5722?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16625107#comment-16625107 ] Viktor Somogyi commented on KAFKA-5722: --- Update here: the KIP sank down a bit. I've reorganized it a bit and will publish it next week. > Refactor ConfigCommand to use the AdminClient > - > > Key: KAFKA-5722 > URL: https://issues.apache.org/jira/browse/KAFKA-5722 > Project: Kafka > Issue Type: Improvement >Reporter: Viktor Somogyi >Assignee: Viktor Somogyi >Priority: Major > Labels: kip, needs-kip > Fix For: 2.1.0 > > > The ConfigCommand currently uses a direct connection to zookeeper. The > zookeeper dependency should be deprecated and an AdminClient API created to > be used instead. > This change requires a KIP. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (KAFKA-5561) Refactor TopicCommand to use the Admin client
[ https://issues.apache.org/jira/browse/KAFKA-5561?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16623460#comment-16623460 ] Viktor Somogyi commented on KAFKA-5561: --- [~ppatierno] are you currently working on this? I've seen the last modification was around last august and I'd like to pick this up if you don't mind. > Refactor TopicCommand to use the Admin client > - > > Key: KAFKA-5561 > URL: https://issues.apache.org/jira/browse/KAFKA-5561 > Project: Kafka > Issue Type: Improvement > Components: tools >Reporter: Paolo Patierno >Assignee: Paolo Patierno >Priority: Major > > Hi, > as suggested in the https://issues.apache.org/jira/browse/KAFKA-3331, it > could be great to have the TopicCommand using the new Admin client instead of > the way it works today. > As pushed by [~gwenshap] in the above JIRA, I'm going to work on it. > Thanks, > Paolo -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (KAFKA-1774) REPL and Shell Client for Admin Message RQ/RP
[ https://issues.apache.org/jira/browse/KAFKA-1774?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16623300#comment-16623300 ] Viktor Somogyi commented on KAFKA-1774: --- [~ijuma], [~junrao] do you think this is still required? I have an implementation which works for topics (add, list, describe, delete), configs (set/update, delete, describe), consumergroups (describe, list), log dirs (describe, list). Are you interested in a demo? > REPL and Shell Client for Admin Message RQ/RP > - > > Key: KAFKA-1774 > URL: https://issues.apache.org/jira/browse/KAFKA-1774 > Project: Kafka > Issue Type: Sub-task >Reporter: Joe Stein >Assignee: Viktor Somogyi >Priority: Major > > We should have a REPL we can work in and execute the commands with the > arguments. With this we can do: > ./kafka.sh --shell > kafka>attach cluster -b localhost:9092; > kafka>describe topic sampleTopicNameForExample; > the command line version can work like it does now so folks don't have to > re-write all of their tooling. > kafka.sh --topics --everything the same like kafka-topics.sh is > kafka.sh --reassign --everything the same like kafka-reassign-partitions.sh > is -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (KAFKA-6794) Support for incremental replica reassignment
[ https://issues.apache.org/jira/browse/KAFKA-6794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16620507#comment-16620507 ] Viktor Somogyi commented on KAFKA-6794: --- Hi [~smurakozi], are you working on this? In case you don't, I'd reassign this to myself if you don't mind. > Support for incremental replica reassignment > > > Key: KAFKA-6794 > URL: https://issues.apache.org/jira/browse/KAFKA-6794 > Project: Kafka > Issue Type: Improvement >Reporter: Jason Gustafson >Assignee: Sandor Murakozi >Priority: Major > > Say you have a replication factor of 4 and you trigger a reassignment which > moves all replicas to new brokers. Now 8 replicas are fetching at the same > time which means you need to account for 8 times the current producer load > plus the catch-up replication. To make matters worse, the replicas won't all > become in-sync at the same time; in the worst case, you could have 7 replicas > in-sync while one is still catching up. Currently, the old replicas won't be > disabled until all new replicas are in-sync. This makes configuring the > throttle tricky since ISR traffic is not subject to it. > Rather than trying to bring all 4 new replicas online at the same time, a > friendlier approach would be to do it incrementally: bring one replica > online, bring it in-sync, then remove one of the old replicas. Repeat until > all replicas have been changed. This would reduce the impact of a > reassignment and make configuring the throttle easier at the cost of a slower > overall reassignment. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
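The incremental strategy described in the ticket (bring one new replica online, wait for it to join the ISR, then retire one old replica, repeat) can be sketched as a toy planning function. This is only an illustration of the idea, not Kafka's actual reassignment logic; the class and method names here are invented for the example:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class IncrementalReassignmentSketch {
    /**
     * Computes the intermediate replica sets for moving a partition from
     * {@code current} to {@code target} one replica at a time: each round adds
     * one new replica (which must catch up and become in-sync) and then
     * retires one old replica. Assumes both lists have the same size
     * (replication factor is preserved).
     */
    static List<List<Integer>> plan(List<Integer> current, List<Integer> target) {
        List<List<Integer>> steps = new ArrayList<>();
        List<Integer> working = new ArrayList<>(current);
        List<Integer> toRemove = new ArrayList<>(current);
        toRemove.removeAll(target);                  // old replicas to retire
        List<Integer> toAdd = new ArrayList<>(target);
        toAdd.removeAll(current);                    // new replicas to bring online
        for (int i = 0; i < toAdd.size(); i++) {
            working.add(toAdd.get(i));               // bring one new replica online
            steps.add(new ArrayList<>(working));     // ...wait here until it is in-sync
            working.remove(toRemove.get(i));         // then retire one old replica
            steps.add(new ArrayList<>(working));
        }
        return steps;
    }

    public static void main(String[] args) {
        // Replication factor 4, all four replicas moving to new brokers.
        System.out.println(plan(Arrays.asList(1, 2, 3, 4), Arrays.asList(5, 6, 7, 8)));
    }
}
```

For the four-replica example from the ticket, no intermediate step ever has more than five replicas fetching at once, versus eight when all new replicas are started simultaneously.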
[jira] [Commented] (KAFKA-1880) Add support for checking binary/source compatibility
[ https://issues.apache.org/jira/browse/KAFKA-1880?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16602052#comment-16602052 ] Viktor Somogyi commented on KAFKA-1880: --- [~ijuma] do you think this is still relevant? > Add support for checking binary/source compatibility > > > Key: KAFKA-1880 > URL: https://issues.apache.org/jira/browse/KAFKA-1880 > Project: Kafka > Issue Type: New Feature >Reporter: Ashish Singh >Assignee: Viktor Somogyi >Priority: Major > Attachments: compatibilityReport-only-incompatible.html, > compatibilityReport.html > > > Recent discussions around compatibility shows how important compatibility is > to users. Kafka should leverage a tool to find, report, and avoid > incompatibility issues in public methods. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (KAFKA-1880) Add support for checking binary/source compatibility
[ https://issues.apache.org/jira/browse/KAFKA-1880?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16598830#comment-16598830 ] Viktor Somogyi commented on KAFKA-1880: --- [~granthenke] if you don't mind I've reassigned this to continue your work as something similar has come up regarding KIP-336. > Add support for checking binary/source compatibility > > > Key: KAFKA-1880 > URL: https://issues.apache.org/jira/browse/KAFKA-1880 > Project: Kafka > Issue Type: New Feature >Reporter: Ashish Singh >Assignee: Viktor Somogyi >Priority: Major > Attachments: compatibilityReport-only-incompatible.html, > compatibilityReport.html > > > Recent discussions around compatibility shows how important compatibility is > to users. Kafka should leverage a tool to find, report, and avoid > incompatibility issues in public methods. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (KAFKA-1880) Add support for checking binary/source compatibility
[ https://issues.apache.org/jira/browse/KAFKA-1880?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Viktor Somogyi reassigned KAFKA-1880: - Assignee: Viktor Somogyi (was: Grant Henke) > Add support for checking binary/source compatibility > > > Key: KAFKA-1880 > URL: https://issues.apache.org/jira/browse/KAFKA-1880 > Project: Kafka > Issue Type: New Feature >Reporter: Ashish Singh >Assignee: Viktor Somogyi >Priority: Major > Attachments: compatibilityReport-only-incompatible.html, > compatibilityReport.html > > > Recent discussions around compatibility shows how important compatibility is > to users. Kafka should leverage a tool to find, report, and avoid > incompatibility issues in public methods. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Comment Edited] (KAFKA-1774) REPL and Shell Client for Admin Message RQ/RP
[ https://issues.apache.org/jira/browse/KAFKA-1774?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16573268#comment-16573268 ] Viktor Somogyi edited comment on KAFKA-1774 at 8/8/18 2:10 PM: --- [~abiletskyi] if you don't mind, since there were no responses within a month, I'd reassign this to myself and continue with your patch. Please let me know otherwise :). was (Author: viktorsomogyi): [~abiletskyi] if you don't mind, since there were no responses within a month, I'd reassign this to myself and continue with your patch. > REPL and Shell Client for Admin Message RQ/RP > - > > Key: KAFKA-1774 > URL: https://issues.apache.org/jira/browse/KAFKA-1774 > Project: Kafka > Issue Type: Sub-task >Reporter: Joe Stein >Assignee: Viktor Somogyi >Priority: Major > > We should have a REPL we can work in and execute the commands with the > arguments. With this we can do: > ./kafka.sh --shell > kafka>attach cluster -b localhost:9092; > kafka>describe topic sampleTopicNameForExample; > the command line version can work like it does now so folks don't have to > re-write all of their tooling. > kafka.sh --topics --everything the same like kafka-topics.sh is > kafka.sh --reassign --everything the same like kafka-reassign-partitions.sh > is -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (KAFKA-1774) REPL and Shell Client for Admin Message RQ/RP
[ https://issues.apache.org/jira/browse/KAFKA-1774?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Viktor Somogyi reassigned KAFKA-1774: - Assignee: Viktor Somogyi (was: Andrii Biletskyi) > REPL and Shell Client for Admin Message RQ/RP > - > > Key: KAFKA-1774 > URL: https://issues.apache.org/jira/browse/KAFKA-1774 > Project: Kafka > Issue Type: Sub-task >Reporter: Joe Stein >Assignee: Viktor Somogyi >Priority: Major > > We should have a REPL we can work in and execute the commands with the > arguments. With this we can do: > ./kafka.sh --shell > kafka>attach cluster -b localhost:9092; > kafka>describe topic sampleTopicNameForExample; > the command line version can work like it does now so folks don't have to > re-write all of their tooling. > kafka.sh --topics --everything the same like kafka-topics.sh is > kafka.sh --reassign --everything the same like kafka-reassign-partitions.sh > is -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (KAFKA-1774) REPL and Shell Client for Admin Message RQ/RP
[ https://issues.apache.org/jira/browse/KAFKA-1774?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16573268#comment-16573268 ] Viktor Somogyi commented on KAFKA-1774: --- [~abiletskyi] if you don't mind, since there were no responses within a month, I'd reassign this to myself and continue with your patch. > REPL and Shell Client for Admin Message RQ/RP > - > > Key: KAFKA-1774 > URL: https://issues.apache.org/jira/browse/KAFKA-1774 > Project: Kafka > Issue Type: Sub-task >Reporter: Joe Stein >Assignee: Andrii Biletskyi >Priority: Major > > We should have a REPL we can work in and execute the commands with the > arguments. With this we can do: > ./kafka.sh --shell > kafka>attach cluster -b localhost:9092; > kafka>describe topic sampleTopicNameForExample; > the command line version can work like it does now so folks don't have to > re-write all of their tooling. > kafka.sh --topics --everything the same like kafka-topics.sh is > kafka.sh --reassign --everything the same like kafka-reassign-partitions.sh > is -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (KAFKA-6923) Consolidate ExtendedSerializer/Serializer and ExtendedDeserializer/Deserializer
[ https://issues.apache.org/jira/browse/KAFKA-6923?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16545156#comment-16545156 ] Viktor Somogyi commented on KAFKA-6923: --- [~chia7712] thanks for the idea. Would you please also add it to the KIP discussion thread to see what the community thinks about it? (this is the KIP: https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=87298242 ). Personally I'd prefer not to deprecate the Serializer class. I think all of the serializer/deserializer implementations in the project (Short, Int, String, Byte, etc.) use the two-parameter method and I think it would be good practice to keep the telescoping methods, given that Java really doesn't have any other way to represent optional parameters, which the "headers" would be. > Consolidate ExtendedSerializer/Serializer and > ExtendedDeserializer/Deserializer > --- > > Key: KAFKA-6923 > URL: https://issues.apache.org/jira/browse/KAFKA-6923 > Project: Kafka > Issue Type: Improvement > Components: clients >Reporter: Ismael Juma >Assignee: Viktor Somogyi >Priority: Major > Labels: needs-kip > Fix For: 2.1.0 > > > The Javadoc of ExtendedDeserializer states: > {code} > * Prefer {@link Deserializer} if access to the headers is not required. Once > Kafka drops support for Java 7, the > * {@code deserialize()} method introduced by this interface will be added to > Deserializer with a default implementation > * so that backwards compatibility is maintained. This interface may be > deprecated once that happens. > {code} > Since we have dropped Java 7 support, we should figure out how to do this. > There are compatibility implications, so a KIP is needed. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (KAFKA-1774) REPL and Shell Client for Admin Message RQ/RP
[ https://issues.apache.org/jira/browse/KAFKA-1774?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16538687#comment-16538687 ] Viktor Somogyi commented on KAFKA-1774: --- [~abiletskyi] do you still work on this? We have a spike implementation which uses the new AdminClient. We'd like to publish in the foreseeable future and if you don't mind I'd like to pick up this task, integrate your work and continue with this. Do you mind me picking this up? > REPL and Shell Client for Admin Message RQ/RP > - > > Key: KAFKA-1774 > URL: https://issues.apache.org/jira/browse/KAFKA-1774 > Project: Kafka > Issue Type: Sub-task >Reporter: Joe Stein >Assignee: Andrii Biletskyi >Priority: Major > > We should have a REPL we can work in and execute the commands with the > arguments. With this we can do: > ./kafka.sh --shell > kafka>attach cluster -b localhost:9092; > kafka>describe topic sampleTopicNameForExample; > the command line version can work like it does now so folks don't have to > re-write all of their tooling. > kafka.sh --topics --everything the same like kafka-topics.sh is > kafka.sh --reassign --everything the same like kafka-reassign-partitions.sh > is -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (KAFKA-7140) Remove deprecated poll usages
Viktor Somogyi created KAFKA-7140: - Summary: Remove deprecated poll usages Key: KAFKA-7140 URL: https://issues.apache.org/jira/browse/KAFKA-7140 Project: Kafka Issue Type: Improvement Reporter: Viktor Somogyi Assignee: Viktor Somogyi There are a couple of poll(long) usages of the consumer in test and non-test code. This jira would aim to remove the non-test usages of the method. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
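As a rough sketch of what the migration away from the deprecated poll(long) looks like at call sites: KIP-266 introduced a Duration-based poll overload. Since the real KafkaConsumer needs a running broker, a stand-in interface is used here to show the two overloads side by side; note that the real poll(long) and poll(Duration) also differ in how long they may block waiting for metadata, which this sketch doesn't model.

```java
import java.time.Duration;

public class PollMigrationSketch {
    // Stand-in for the consumer, just to show the two overloads side by side;
    // the real methods live on org.apache.kafka.clients.consumer.KafkaConsumer.
    interface Consumer {
        String poll(Duration timeout); // the Duration-based replacement (KIP-266)

        @Deprecated
        default String poll(long timeoutMs) { // the deprecated long-based variant
            return poll(Duration.ofMillis(timeoutMs));
        }
    }

    public static void main(String[] args) {
        Consumer c = timeout -> "polled for " + timeout.toMillis() + " ms";
        // Old call site:  consumer.poll(100);
        // New call site:  consumer.poll(Duration.ofMillis(100));
        System.out.println(c.poll(Duration.ofMillis(100)));
        System.out.println(c.poll(100L)); // deprecated path delegates to the new one
    }
}
```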
[jira] [Created] (KAFKA-7087) Option in TopicCommand to report not preferred leaders
Viktor Somogyi created KAFKA-7087: - Summary: Option in TopicCommand to report not preferred leaders Key: KAFKA-7087 URL: https://issues.apache.org/jira/browse/KAFKA-7087 Project: Kafka Issue Type: Improvement Components: core Reporter: Viktor Somogyi Assignee: Viktor Somogyi Options in topic describe exists for reporting unavailable and lagging partitions but it is often an ask to report partitions where the active leader is not the preferred one. This jira adds this extra option to TopicCommand. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (KAFKA-6934) Csv reporter doesn't work for ConsoleConsumer
[ https://issues.apache.org/jira/browse/KAFKA-6934?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Viktor Somogyi updated KAFKA-6934: -- Description: Reproduction: Start a broker listening to localhost:9092. Start a console consumer with the following options: {code} bin/kafka-console-consumer.sh --topic test --bootstrap-server localhost:9092 --csv-reporter-enabled --metrics-dir /tmp/metrics {code} The consumer consumes messages sent to the test topic, it will (re)create the /tmp/metrics dir, but it will not produce any metrics/files in that dir. was: Reproduction: Start a broker listening to localhost:9092. Start a console consumer with the following options: {code} kafka-console-consumer --topic test --bootstrap-server localhost:9092 --csv-reporter-enabled --metrics-dir /tmp/metrics {code} The consumer consumes messages sent to the test topic, it will (re)create the /tmp/metrics dir, but it will not produce any metrics/files in that dir. > Csv reporter doesn't work for ConsoleConsumer > - > > Key: KAFKA-6934 > URL: https://issues.apache.org/jira/browse/KAFKA-6934 > Project: Kafka > Issue Type: Bug >Affects Versions: 1.1.0, 2.0.0 >Reporter: Sandor Murakozi >Assignee: Viktor Somogyi >Priority: Major > > Reproduction: > Start a broker listening to localhost:9092. > Start a console consumer with the following options: > {code} > bin/kafka-console-consumer.sh --topic test --bootstrap-server localhost:9092 > --csv-reporter-enabled --metrics-dir /tmp/metrics > {code} > The consumer consumes messages sent to the test topic, it will (re)create the > /tmp/metrics dir, but it will not produce any metrics/files in that dir. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (KAFKA-6923) Consolidate ExtendedSerializer/Serializer and ExtendedDeserializer/Deserializer
[ https://issues.apache.org/jira/browse/KAFKA-6923?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16500291#comment-16500291 ] Viktor Somogyi commented on KAFKA-6923: --- [~ijuma] just wanted to give a heads up that probably I'll publish this KIP after the 2.0.0 release as most of the community is probably busy with getting code changes in. And also it helps me with participating in code reviews during this period. > Consolidate ExtendedSerializer/Serializer and > ExtendedDeserializer/Deserializer > --- > > Key: KAFKA-6923 > URL: https://issues.apache.org/jira/browse/KAFKA-6923 > Project: Kafka > Issue Type: Improvement > Components: clients >Reporter: Ismael Juma >Assignee: Viktor Somogyi >Priority: Major > Labels: needs-kip > Fix For: 2.1.0 > > > The Javadoc of ExtendedDeserializer states: > {code} > * Prefer {@link Deserializer} if access to the headers is not required. Once > Kafka drops support for Java 7, the > * {@code deserialize()} method introduced by this interface will be added to > Deserializer with a default implementation > * so that backwards compatibility is maintained. This interface may be > deprecated once that happens. > {code} > Since we have dropped Java 7 support, we should figure out how to do this. > There are compatibility implications, so a KIP is needed. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (KAFKA-6934) Csv reporter doesn't work for ConsoleConsumer
[ https://issues.apache.org/jira/browse/KAFKA-6934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16487254#comment-16487254 ] Viktor Somogyi commented on KAFKA-6934: --- I'll pick this up if you don't mind. > Csv reporter doesn't work for ConsoleConsumer > - > > Key: KAFKA-6934 > URL: https://issues.apache.org/jira/browse/KAFKA-6934 > Project: Kafka > Issue Type: Bug >Affects Versions: 1.1.0, 2.0.0 >Reporter: Sandor Murakozi >Priority: Major > > Reproduction: > Start a broker listening to localhost:9092. > Start a console consumer with the following options: > {code} > kafka-console-consumer --topic test --bootstrap-server localhost:9092 > --csv-reporter-enabled --metrics-dir /tmp/metrics > {code} > The consumer consumes messages sent to the test topic, it will (re)create the > /tmp/metrics dir, but it will not produce any metrics/files in that dir. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (KAFKA-6934) Csv reporter doesn't work for ConsoleConsumer
[ https://issues.apache.org/jira/browse/KAFKA-6934?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Viktor Somogyi reassigned KAFKA-6934: - Assignee: Viktor Somogyi > Csv reporter doesn't work for ConsoleConsumer > - > > Key: KAFKA-6934 > URL: https://issues.apache.org/jira/browse/KAFKA-6934 > Project: Kafka > Issue Type: Bug >Affects Versions: 1.1.0, 2.0.0 >Reporter: Sandor Murakozi >Assignee: Viktor Somogyi >Priority: Major > > Reproduction: > Start a broker listening to localhost:9092. > Start a console consumer with the following options: > {code} > kafka-console-consumer --topic test --bootstrap-server localhost:9092 > --csv-reporter-enabled --metrics-dir /tmp/metrics > {code} > The consumer consumes messages sent to the test topic, it will (re)create the > /tmp/metrics dir, but it will not produce any metrics/files in that dir. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (KAFKA-6923) Consolidate ExtendedSerializer/Serializer and ExtendedDeserializer/Deserializer
[ https://issues.apache.org/jira/browse/KAFKA-6923?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16487200#comment-16487200 ] Viktor Somogyi commented on KAFKA-6923: --- As far as I know it is valid to override a default interface method with an abstract one. Apparently leaving the ExtendedDeserializer as is does just that. Yea, there might be a better approach, just saw this jira and came up with an idea :). I'll try to think a little bit more and publish a KIP. > Consolidate ExtendedSerializer/Serializer and > ExtendedDeserializer/Deserializer > --- > > Key: KAFKA-6923 > URL: https://issues.apache.org/jira/browse/KAFKA-6923 > Project: Kafka > Issue Type: Improvement > Components: clients >Reporter: Ismael Juma >Assignee: Viktor Somogyi >Priority: Major > Labels: needs-kip > Fix For: 2.1.0 > > > The Javadoc of ExtendedDeserializer states: > {code} > * Prefer {@link Deserializer} if access to the headers is not required. Once > Kafka drops support for Java 7, the > * {@code deserialize()} method introduced by this interface will be added to > Deserializer with a default implementation > * so that backwards compatibility is maintained. This interface may be > deprecated once that happens. > {code} > Since we have dropped Java 7 support, we should figure out how to do this. > There are compatibility implications, so a KIP is needed. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (KAFKA-6923) Consolidate ExtendedSerializer/Serializer and ExtendedDeserializer/Deserializer
[ https://issues.apache.org/jira/browse/KAFKA-6923?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Viktor Somogyi reassigned KAFKA-6923: - Assignee: Viktor Somogyi > Consolidate ExtendedSerializer/Serializer and > ExtendedDeserializer/Deserializer > --- > > Key: KAFKA-6923 > URL: https://issues.apache.org/jira/browse/KAFKA-6923 > Project: Kafka > Issue Type: Improvement > Components: clients >Reporter: Ismael Juma >Assignee: Viktor Somogyi >Priority: Major > Labels: needs-kip > Fix For: 2.1.0 > > > The Javadoc of ExtendedDeserializer states: > {code} > * Prefer {@link Deserializer} if access to the headers is not required. Once > Kafka drops support for Java 7, the > * {@code deserialize()} method introduced by this interface will be added to > Deserializer with a default implementation > * so that backwards compatibility is maintained. This interface may be > deprecated once that happens. > {code} > Since we have dropped Java 7 support, we should figure out how to do this. > There are compatibility implications, so a KIP is needed. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (KAFKA-6923) Consolidate ExtendedSerializer/Serializer and ExtendedDeserializer/Deserializer
[ https://issues.apache.org/jira/browse/KAFKA-6923?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16484144#comment-16484144 ] Viktor Somogyi commented on KAFKA-6923: --- [~ijuma] have you started working on this? I've been wondering if I might pick this up if you don't mind. What compatibility implications were you referring to? I've been looking at this and if I'm right then this should be a relatively small KIP & change. What I was thinking is that we could: # define the default method in {{Deserializer}}: {code} default T deserialize(String topic, Headers headers, byte[] data) { return deserialize(topic, data); } {code} Doing this in {{Deserializer}} wouldn't break compatibility as {{ExtendedSerializer}} would remain the same and it is allowed (and it seems to have the same signature looking at it with javap) # deprecate {{ExtendedDeserializer}} and {{Wrapper}} saying that new and existing implementations should use {{Deserializer}} # leave {{ExtendedDeserializer}} as is, so existing behavior won't change. # get rid of the internal usages of {{ExtendedSerializer}} & {{Wrapper.ensureExtended()}} # do the same for {{Serializer}}/{{ExtendedSerializer}} Is there anything else to this that you were thinking of? > Consolidate ExtendedSerializer/Serializer and > ExtendedDeserializer/Deserializer > --- > > Key: KAFKA-6923 > URL: https://issues.apache.org/jira/browse/KAFKA-6923 > Project: Kafka > Issue Type: Improvement > Components: clients >Reporter: Ismael Juma >Priority: Major > Labels: needs-kip > Fix For: 2.1.0 > > > The Javadoc of ExtendedDeserializer states: > {code} > * Prefer {@link Deserializer} if access to the headers is not required. Once > Kafka drops support for Java 7, the > * {@code deserialize()} method introduced by this interface will be added to > Deserializer with a default implementation > * so that backwards compatibility is maintained. This interface may be > deprecated once that happens.
> {code} > Since we have dropped Java 7 support, we should figure out how to do this. > There are compatibility implications, so a KIP is needed. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
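The default-method consolidation proposed in the comment above can be demonstrated with simplified stand-in interfaces (these are not the actual org.apache.kafka.common.serialization types; Headers is reduced to an empty marker for the example):

```java
import java.nio.charset.StandardCharsets;

public class DeserializerSketch {
    // Stand-in for Headers; the real class lives in org.apache.kafka.common.header.
    interface Headers {}

    // Consolidated Deserializer: the headers-aware variant gets a default
    // implementation that ignores headers and delegates to the old method,
    // so existing two-argument implementations keep compiling unchanged.
    interface Deserializer<T> {
        T deserialize(String topic, byte[] data);

        default T deserialize(String topic, Headers headers, byte[] data) {
            return deserialize(topic, data);
        }
    }

    // An existing implementation that predates the headers-aware method.
    static class StringDeserializer implements Deserializer<String> {
        @Override
        public String deserialize(String topic, byte[] data) {
            return new String(data, StandardCharsets.UTF_8);
        }
    }

    public static void main(String[] args) {
        Deserializer<String> d = new StringDeserializer();
        byte[] payload = "hello".getBytes(StandardCharsets.UTF_8);
        // The headers-aware call falls through to the legacy two-argument method.
        System.out.println(d.deserialize("topic", null, payload)); // prints "hello"
    }
}
```

This is why the change is binary compatible: callers that pass headers hit the default method, while every pre-existing implementation still only overrides the two-argument variant.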
[jira] [Commented] (KAFKA-6084) ReassignPartitionsCommand should propagate JSON parsing failures
[ https://issues.apache.org/jira/browse/KAFKA-6084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16406696#comment-16406696 ] Viktor Somogyi commented on KAFKA-6084: --- [~nickt] it seems the linked PR doesn't fix my general problem. That is, when I (in a very simple case) enter an invalid JSON as simple as having "dgdfgdfhdfgfgd" in it, I get a "kafka.common.AdminCommandFailedException: Partition reassignment data file is empty" exception. This is true for more complex JSON parsing cases too, as mentioned in the description. Do you think it would be possible, once your commit is merged, for me to rebase on it and add my amendments? > ReassignPartitionsCommand should propagate JSON parsing failures > > > Key: KAFKA-6084 > URL: https://issues.apache.org/jira/browse/KAFKA-6084 > Project: Kafka > Issue Type: Improvement > Components: admin >Affects Versions: 0.11.0.0 >Reporter: Viktor Somogyi >Assignee: Viktor Somogyi >Priority: Minor > Labels: easyfix, newbie > Attachments: Screen Shot 2017-10-18 at 23.31.22.png > > > Basically looking at Json.scala it will always swallow any parsing errors: > {code} > def parseFull(input: String): Option[JsonValue] = > try Option(mapper.readTree(input)).map(JsonValue(_)) > catch { case _: JsonProcessingException => None } > {code} > However sometimes it is easy to figure out the problem by simply looking at > the JSON; in other cases it is not trivial at all, as some invisible > characters (like a byte order mark) won't be displayed by most text > editors and people can spend a long time figuring out what the problem is. > As Jackson provides a really detailed exception about what failed and how, it > is easy to propagate the failure to the user.
> As an example I attached a BOM prefixed JSON which fails with the following > error which is very counterintuitive: > {noformat} > [root@localhost ~]# kafka-reassign-partitions --zookeeper localhost:2181 > --reassignment-json-file /root/increase-replication-factor.json --execute > Partitions reassignment failed due to Partition reassignment data file > /root/increase-replication-factor.json is empty > kafka.common.AdminCommandFailedException: Partition reassignment data file > /root/increase-replication-factor.json is empty > at > kafka.admin.ReassignPartitionsCommand$.executeAssignment(ReassignPartitionsCommand.scala:120) > at > kafka.admin.ReassignPartitionsCommand$.main(ReassignPartitionsCommand.scala:52) > at kafka.admin.ReassignPartitionsCommand.main(ReassignPartitionsCommand.scala) > ... > {noformat} > In case of the above error it would be much better to see what fails exactly: > {noformat} > kafka.common.AdminCommandFailedException: Admin command failed > at > kafka.admin.ReassignPartitionsCommand$.parsePartitionReassignmentData(ReassignPartitionsCommand.scala:267) > at > kafka.admin.ReassignPartitionsCommand$.parseAndValidate(ReassignPartitionsCommand.scala:275) > at > kafka.admin.ReassignPartitionsCommand$.executeAssignment(ReassignPartitionsCommand.scala:197) > at > kafka.admin.ReassignPartitionsCommand$.executeAssignment(ReassignPartitionsCommand.scala:193) > at > kafka.admin.ReassignPartitionsCommand$.main(ReassignPartitionsCommand.scala:64) > at > kafka.admin.ReassignPartitionsCommand.main(ReassignPartitionsCommand.scala) > Caused by: com.fasterxml.jackson.core.JsonParseException: Unexpected > character ('' (code 65279 / 0xfeff)): expected a valid value (number, > String, array, object, 'true', 'false' or 'null') > at [Source: (String)"{"version":1, > "partitions":[ >{"topic": "test1", "partition": 0, "replicas": [1,2]}, >{"topic": "test2", "partition": 1, "replicas": [2,3]} > ]}"; line: 1, column: 2] > at > 
com.fasterxml.jackson.core.JsonParser._constructError(JsonParser.java:1798) > at > com.fasterxml.jackson.core.base.ParserMinimalBase._reportError(ParserMinimalBase.java:663) > at > com.fasterxml.jackson.core.base.ParserMinimalBase._reportUnexpectedChar(ParserMinimalBase.java:561) > at > com.fasterxml.jackson.core.json.ReaderBasedJsonParser._handleOddValue(ReaderBasedJsonParser.java:1892) > at > com.fasterxml.jackson.core.json.ReaderBasedJsonParser.nextToken(ReaderBasedJsonParser.java:747) > at > com.fasterxml.jackson.databind.ObjectMapper._readTreeAndClose(ObjectMapper.java:4030) > at > com.fasterxml.jackson.databind.ObjectMapper.readTree(ObjectMapper.java:2539) > at kafka.utils.Json$.kafka$utils$Json$$doParseFull(Json.scala:46) > at kafka.utils.Json$$anonfun$tryParseFull$1.apply(Json.scala:44) > at kafka.utils.Json$$anonfun$tryParseFull$1.apply(Json.scala:44) > at scala.util.Try$.apply(Try.sc
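The swallow-versus-propagate difference the ticket describes can be illustrated without Jackson, using Integer.parseInt as a stand-in parser (the class and method names here are invented for the example):

```java
import java.util.Optional;

public class ParseErrorPropagation {
    // Mirrors Json.parseFull: any parse failure collapses into an empty Optional,
    // so the caller can only report a generic "file is empty"-style message.
    static Optional<Integer> parseSwallowing(String input) {
        try {
            return Optional.of(Integer.parseInt(input));
        } catch (NumberFormatException e) {
            return Optional.empty();
        }
    }

    // The proposed behavior: let the parser's own exception (with its detailed
    // message about what failed and where) reach the caller.
    static int parsePropagating(String input) {
        return Integer.parseInt(input); // NumberFormatException names the bad token
    }

    public static void main(String[] args) {
        String bom = "\uFEFF42"; // BOM-prefixed input, as in the attached reproduction
        System.out.println(parseSwallowing(bom).isPresent()); // prints "false": cause is lost
        try {
            parsePropagating(bom);
        } catch (NumberFormatException e) {
            System.out.println(e.getMessage()); // shows the exact offending input
        }
    }
}
```

The Kafka fix is the same shape: instead of mapping JsonProcessingException to None, surface the exception (or its message, with line and column) in the command's error output.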
[jira] [Updated] (KAFKA-5722) Refactor ConfigCommand to use the AdminClient
[ https://issues.apache.org/jira/browse/KAFKA-5722?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Viktor Somogyi updated KAFKA-5722: -- Fix Version/s: (was: 1.1.0) 1.2.0 Since the KIP is still under vote, I'll update the fix version to 1.2 > Refactor ConfigCommand to use the AdminClient > - > > Key: KAFKA-5722 > URL: https://issues.apache.org/jira/browse/KAFKA-5722 > Project: Kafka > Issue Type: Improvement >Reporter: Viktor Somogyi >Assignee: Viktor Somogyi >Priority: Major > Labels: kip, needs-kip > Fix For: 1.2.0 > > > The ConfigCommand currently uses a direct connection to zookeeper. The > zookeeper dependency should be deprecated and an AdminClient API created to > be used instead. > This change requires a KIP. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (KAFKA-5722) Refactor ConfigCommand to use the AdminClient
[ https://issues.apache.org/jira/browse/KAFKA-5722?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16326916#comment-16326916 ] Viktor Somogyi commented on KAFKA-5722: --- Also there is a WIP PR: [https://github.com/apache/kafka/pull/4416] (don't know why github didn't automatically comment on this jira) > Refactor ConfigCommand to use the AdminClient > - > > Key: KAFKA-5722 > URL: https://issues.apache.org/jira/browse/KAFKA-5722 > Project: Kafka > Issue Type: Improvement >Reporter: Viktor Somogyi >Assignee: Viktor Somogyi >Priority: Major > Labels: kip, needs-kip > Fix For: 1.1.0 > > > The ConfigCommand currently uses a direct connection to zookeeper. The > zookeeper dependency should be deprecated and an AdminClient API created to > be used instead. > This change requires a KIP. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Comment Edited] (KAFKA-5722) Refactor ConfigCommand to use the AdminClient
[ https://issues.apache.org/jira/browse/KAFKA-5722?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16326914#comment-16326914 ] Viktor Somogyi edited comment on KAFKA-5722 at 1/16/18 9:13 AM: [~rsivaram] I have published the KIP here: https://cwiki.apache.org/confluence/display/KAFKA/KIP-248+-+Create+New+ConfigCommand+That+Uses+The+New+AdminClient Could you please take a look at it? was (Author: viktorsomogyi): [~rsivaram] I have published the KIP [here|[https://cwiki.apache.org/confluence/display/KAFKA/KIP-248+-+Create+New+ConfigCommand+That+Uses+The+New+AdminClient].] Could you please take a look at it? > Refactor ConfigCommand to use the AdminClient > - > > Key: KAFKA-5722 > URL: https://issues.apache.org/jira/browse/KAFKA-5722 > Project: Kafka > Issue Type: Improvement >Reporter: Viktor Somogyi >Assignee: Viktor Somogyi >Priority: Major > Labels: kip, needs-kip > Fix For: 1.1.0 > > > The ConfigCommand currently uses a direct connection to zookeeper. The > zookeeper dependency should be deprecated and an AdminClient API created to > be used instead. > This change requires a KIP. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (KAFKA-5722) Refactor ConfigCommand to use the AdminClient
[ https://issues.apache.org/jira/browse/KAFKA-5722?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16326914#comment-16326914 ] Viktor Somogyi commented on KAFKA-5722: --- [~rsivaram] I have published the KIP [here|https://cwiki.apache.org/confluence/display/KAFKA/KIP-248+-+Create+New+ConfigCommand+That+Uses+The+New+AdminClient]. Could you please take a look at it? > Refactor ConfigCommand to use the AdminClient > - > > Key: KAFKA-5722 > URL: https://issues.apache.org/jira/browse/KAFKA-5722 > Project: Kafka > Issue Type: Improvement >Reporter: Viktor Somogyi >Assignee: Viktor Somogyi >Priority: Major > Labels: kip, needs-kip > Fix For: 1.1.0 > > > The ConfigCommand currently uses a direct connection to zookeeper. The > zookeeper dependency should be deprecated and an AdminClient API created to > be used instead. > This change requires a KIP. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (KAFKA-5722) Refactor ConfigCommand to use the AdminClient
[ https://issues.apache.org/jira/browse/KAFKA-5722?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16291488#comment-16291488 ] Viktor Somogyi commented on KAFKA-5722: --- [~rsivaram] just wanted to update you, might publish it only next week, takes a bit more work than anticipated :). > Refactor ConfigCommand to use the AdminClient > - > > Key: KAFKA-5722 > URL: https://issues.apache.org/jira/browse/KAFKA-5722 > Project: Kafka > Issue Type: Improvement >Reporter: Viktor Somogyi >Assignee: Viktor Somogyi > Labels: kip, needs-kip > Fix For: 1.1.0 > > > The ConfigCommand currently uses a direct connection to zookeeper. The > zookeeper dependency should be deprecated and an AdminClient API created to > be used instead. > This change requires a KIP. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (KAFKA-5722) Refactor ConfigCommand to use the AdminClient
[ https://issues.apache.org/jira/browse/KAFKA-5722?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16280473#comment-16280473 ] Viktor Somogyi commented on KAFKA-5722: --- [~rsivaram] yes, I'm planning to submit it. I'm not sure though if I can get to it this week as I'm kind of full but I hope I can do it next week (half of it is already done). I'll ask permission to publish on the dev mail list. > Refactor ConfigCommand to use the AdminClient > - > > Key: KAFKA-5722 > URL: https://issues.apache.org/jira/browse/KAFKA-5722 > Project: Kafka > Issue Type: Improvement >Reporter: Viktor Somogyi >Assignee: Viktor Somogyi > Labels: kip, needs-kip > Fix For: 1.1.0 > > > The ConfigCommand currently uses a direct connection to zookeeper. The > zookeeper dependency should be deprecated and an AdminClient API created to > be used instead. > This change requires a KIP. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (KAFKA-6187) Remove the Logging class in favor of LazyLogging in scala-logging
[ https://issues.apache.org/jira/browse/KAFKA-6187?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Viktor Somogyi updated KAFKA-6187: -- Description: In KAFKA-1044 we removed the hard dependency on junit and enabled users to exclude it in their environment without causing any problems. We also agreed to remove the kafka.utils.Logging class as it can be made redundant by LazyLogging in scala-logging. In this JIRA we will get rid of Logging by refactoring its remaining functionalities (swallow* methods and logIdent). was: In KAFKA-1044 we removed the hard dependency on junit and enabled users to exclude it in their environment without causing any problems. We also agreed to remove the kafka.utils.Logging class as it can be made redundant by LazyLogging in scala-logging. In this JIRA we will get rid of Logging by replacing its remaining functionalities with other features. > Remove the Logging class in favor of LazyLogging in scala-logging > - > > Key: KAFKA-6187 > URL: https://issues.apache.org/jira/browse/KAFKA-6187 > Project: Kafka > Issue Type: Task > Components: core >Reporter: Viktor Somogyi >Assignee: Viktor Somogyi > > In KAFKA-1044 we removed the hard dependency on junit and enabled users to > exclude it in their environment without causing any problems. We also agreed > to remove the kafka.utils.Logging class as it can be made redundant by > LazyLogging in scala-logging. > In this JIRA we will get rid of Logging by refactoring its remaining > functionalities (swallow* methods and logIdent). -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Created] (KAFKA-6187) Remove the Logging class in favor of LazyLogging in scala-logging
Viktor Somogyi created KAFKA-6187: - Summary: Remove the Logging class in favor of LazyLogging in scala-logging Key: KAFKA-6187 URL: https://issues.apache.org/jira/browse/KAFKA-6187 Project: Kafka Issue Type: Task Components: core Reporter: Viktor Somogyi Assignee: Viktor Somogyi In KAFKA-1044 we removed the hard dependency on junit and enabled users to exclude it in their environment without causing any problems. We also agreed to remove the kafka.utils.Logging class as it can be made redundant by LazyLogging in scala-logging. In this JIRA we will get rid of Logging by replacing its remaining functionalities with other features. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
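For context, the swallow* helpers mentioned in the description wrap an action and log any exception instead of letting it propagate. The sketch below is illustrative Python only; the real helpers are Scala methods on kafka.utils.Logging and exist per log level (swallowWarn, swallowError, ...), so this single-function form is an assumption for brevity.

```python
# Illustrative sketch (not Kafka's Scala code) of the swallow behaviour:
# run an action and log -- rather than propagate -- any exception it throws.
import logging

logging.basicConfig(level=logging.WARNING)
logger = logging.getLogger("kafka.utils")


def swallow(action, log=logger.warning):
    """Run action; log any exception instead of propagating it."""
    try:
        action()
    except Exception as exc:  # deliberate catch-all: callers opted in to this
        log("Swallowed exception: %s", exc)


calls = []
swallow(lambda: calls.append(1))  # a normal call runs as usual
swallow(lambda: 1 / 0)            # the ZeroDivisionError is logged, not raised
assert calls == [1]
```

Refactoring this away typically means inlining the try/except (or the logging call) at each remaining call site.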
[jira] [Commented] (KAFKA-6084) ReassignPartitionsCommand should propagate JSON parsing failures
[ https://issues.apache.org/jira/browse/KAFKA-6084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16227014#comment-16227014 ] Viktor Somogyi commented on KAFKA-6084: --- [~ijuma] is there something I can do to get my PR reviewed? > ReassignPartitionsCommand should propagate JSON parsing failures > > > Key: KAFKA-6084 > URL: https://issues.apache.org/jira/browse/KAFKA-6084 > Project: Kafka > Issue Type: Improvement > Components: admin >Affects Versions: 0.11.0.0 >Reporter: Viktor Somogyi >Assignee: Viktor Somogyi >Priority: Minor > Labels: easyfix, newbie > Attachments: Screen Shot 2017-10-18 at 23.31.22.png > > > Basically, looking at Json.scala, it will always swallow any parsing errors: > {code} > def parseFull(input: String): Option[JsonValue] = > try Option(mapper.readTree(input)).map(JsonValue(_)) > catch { case _: JsonProcessingException => None } > {code} > While sometimes it is easy to figure out the problem by simply looking at > the JSON, in some cases it is not trivial: some invisible characters > (like a byte order mark) won't be displayed by most text editors, and > people can spend time figuring out what the problem is. > As Jackson provides a really detailed exception about what failed and how, it > is easy to propagate the failure to the user. 
> As an example I attached a BOM prefixed JSON which fails with the following > error which is very counterintuitive: > {noformat} > [root@localhost ~]# kafka-reassign-partitions --zookeeper localhost:2181 > --reassignment-json-file /root/increase-replication-factor.json --execute > Partitions reassignment failed due to Partition reassignment data file > /root/increase-replication-factor.json is empty > kafka.common.AdminCommandFailedException: Partition reassignment data file > /root/increase-replication-factor.json is empty > at > kafka.admin.ReassignPartitionsCommand$.executeAssignment(ReassignPartitionsCommand.scala:120) > at > kafka.admin.ReassignPartitionsCommand$.main(ReassignPartitionsCommand.scala:52) > at kafka.admin.ReassignPartitionsCommand.main(ReassignPartitionsCommand.scala) > ... > {noformat} > In case of the above error it would be much better to see what fails exactly: > {noformat} > kafka.common.AdminCommandFailedException: Admin command failed > at > kafka.admin.ReassignPartitionsCommand$.parsePartitionReassignmentData(ReassignPartitionsCommand.scala:267) > at > kafka.admin.ReassignPartitionsCommand$.parseAndValidate(ReassignPartitionsCommand.scala:275) > at > kafka.admin.ReassignPartitionsCommand$.executeAssignment(ReassignPartitionsCommand.scala:197) > at > kafka.admin.ReassignPartitionsCommand$.executeAssignment(ReassignPartitionsCommand.scala:193) > at > kafka.admin.ReassignPartitionsCommand$.main(ReassignPartitionsCommand.scala:64) > at > kafka.admin.ReassignPartitionsCommand.main(ReassignPartitionsCommand.scala) > Caused by: com.fasterxml.jackson.core.JsonParseException: Unexpected > character ('' (code 65279 / 0xfeff)): expected a valid value (number, > String, array, object, 'true', 'false' or 'null') > at [Source: (String)"{"version":1, > "partitions":[ >{"topic": "test1", "partition": 0, "replicas": [1,2]}, >{"topic": "test2", "partition": 1, "replicas": [2,3]} > ]}"; line: 1, column: 2] > at > 
com.fasterxml.jackson.core.JsonParser._constructError(JsonParser.java:1798) > at > com.fasterxml.jackson.core.base.ParserMinimalBase._reportError(ParserMinimalBase.java:663) > at > com.fasterxml.jackson.core.base.ParserMinimalBase._reportUnexpectedChar(ParserMinimalBase.java:561) > at > com.fasterxml.jackson.core.json.ReaderBasedJsonParser._handleOddValue(ReaderBasedJsonParser.java:1892) > at > com.fasterxml.jackson.core.json.ReaderBasedJsonParser.nextToken(ReaderBasedJsonParser.java:747) > at > com.fasterxml.jackson.databind.ObjectMapper._readTreeAndClose(ObjectMapper.java:4030) > at > com.fasterxml.jackson.databind.ObjectMapper.readTree(ObjectMapper.java:2539) > at kafka.utils.Json$.kafka$utils$Json$$doParseFull(Json.scala:46) > at kafka.utils.Json$$anonfun$tryParseFull$1.apply(Json.scala:44) > at kafka.utils.Json$$anonfun$tryParseFull$1.apply(Json.scala:44) > at scala.util.Try$.apply(Try.scala:192) > at kafka.utils.Json$.tryParseFull(Json.scala:44) > at > kafka.admin.ReassignPartitionsCommand$.parsePartitionReassignmentData(ReassignPartitionsCommand.scala:241) > ... 5 more > {noformat} -- This message was sent by Atlassian JIRA (v6.4.14#64029)
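The failure mode above (a swallowed parse error surfacing as a misleading "file is empty" message) is easy to reproduce outside Kafka. The sketch below is illustrative Python, not Kafka's Scala code: parse_full mimics Json.parseFull by returning None on any error, while parse_full_or_raise lets the parser's own exception through, which pinpoints the BOM at position 0.

```python
# Contrast the two behaviours described in the report: swallowing the
# parse error versus propagating the parser's own detailed exception.
import json
from typing import Any, Optional


def parse_full(raw: str) -> Optional[Any]:
    """Mimics Json.parseFull: swallow the error and return None."""
    try:
        return json.loads(raw)
    except ValueError:
        return None


def parse_full_or_raise(raw: str) -> Any:
    """Propagate the parser's exception, which pinpoints the bad input."""
    return json.loads(raw)  # JSONDecodeError carries message and position


payload = '{"version": 1, "partitions": []}'
bom_payload = "\ufeff" + payload  # BOM-prefixed, like the attached file

assert parse_full(payload) == {"version": 1, "partitions": []}
assert parse_full(bom_payload) is None  # the failure reason is lost

try:
    parse_full_or_raise(bom_payload)
except json.JSONDecodeError as err:
    # The exception names the BOM and its offset, e.g.
    # "Unexpected UTF-8 BOM (decode using utf-8-sig)" at position 0
    print(f"parse failed at position {err.pos}: {err.msg}")
```

The proposed fix in the PR follows the second pattern: let Jackson's exception reach the user instead of collapsing it into None.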
[jira] [Commented] (KAFKA-6084) ReassignPartitionsCommand should propagate JSON parsing failures
[ https://issues.apache.org/jira/browse/KAFKA-6084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16212280#comment-16212280 ] Viktor Somogyi commented on KAFKA-6084: --- [~ijuma] thank you for the positive feedback. Do you know who's the right person to contact to get the PR reviewed? :) > ReassignPartitionsCommand should propagate JSON parsing failures > > > Key: KAFKA-6084 > URL: https://issues.apache.org/jira/browse/KAFKA-6084 > Project: Kafka > Issue Type: Improvement > Components: admin >Affects Versions: 0.11.0.0 >Reporter: Viktor Somogyi >Assignee: Viktor Somogyi >Priority: Minor > Labels: easyfix, newbie > Attachments: Screen Shot 2017-10-18 at 23.31.22.png > > > Basically, looking at Json.scala, it will always swallow any parsing errors: > {code} > def parseFull(input: String): Option[JsonValue] = > try Option(mapper.readTree(input)).map(JsonValue(_)) > catch { case _: JsonProcessingException => None } > {code} > While sometimes it is easy to figure out the problem by simply looking at > the JSON, in some cases it is not trivial: some invisible characters > (like a byte order mark) won't be displayed by most text editors, and > people can spend time figuring out what the problem is. > As Jackson provides a really detailed exception about what failed and how, it > is easy to propagate the failure to the user. 
> As an example I attached a BOM prefixed JSON which fails with the following > error which is very counterintuitive: > {noformat} > [root@localhost ~]# kafka-reassign-partitions --zookeeper localhost:2181 > --reassignment-json-file /root/increase-replication-factor.json --execute > Partitions reassignment failed due to Partition reassignment data file > /root/increase-replication-factor.json is empty > kafka.common.AdminCommandFailedException: Partition reassignment data file > /root/increase-replication-factor.json is empty > at > kafka.admin.ReassignPartitionsCommand$.executeAssignment(ReassignPartitionsCommand.scala:120) > at > kafka.admin.ReassignPartitionsCommand$.main(ReassignPartitionsCommand.scala:52) > at kafka.admin.ReassignPartitionsCommand.main(ReassignPartitionsCommand.scala) > ... > {noformat} > In case of the above error it would be much better to see what fails exactly: > {noformat} > kafka.common.AdminCommandFailedException: Admin command failed > at > kafka.admin.ReassignPartitionsCommand$.parsePartitionReassignmentData(ReassignPartitionsCommand.scala:267) > at > kafka.admin.ReassignPartitionsCommand$.parseAndValidate(ReassignPartitionsCommand.scala:275) > at > kafka.admin.ReassignPartitionsCommand$.executeAssignment(ReassignPartitionsCommand.scala:197) > at > kafka.admin.ReassignPartitionsCommand$.executeAssignment(ReassignPartitionsCommand.scala:193) > at > kafka.admin.ReassignPartitionsCommand$.main(ReassignPartitionsCommand.scala:64) > at > kafka.admin.ReassignPartitionsCommand.main(ReassignPartitionsCommand.scala) > Caused by: com.fasterxml.jackson.core.JsonParseException: Unexpected > character ('' (code 65279 / 0xfeff)): expected a valid value (number, > String, array, object, 'true', 'false' or 'null') > at [Source: (String)"{"version":1, > "partitions":[ >{"topic": "test1", "partition": 0, "replicas": [1,2]}, >{"topic": "test2", "partition": 1, "replicas": [2,3]} > ]}"; line: 1, column: 2] > at > 
com.fasterxml.jackson.core.JsonParser._constructError(JsonParser.java:1798) > at > com.fasterxml.jackson.core.base.ParserMinimalBase._reportError(ParserMinimalBase.java:663) > at > com.fasterxml.jackson.core.base.ParserMinimalBase._reportUnexpectedChar(ParserMinimalBase.java:561) > at > com.fasterxml.jackson.core.json.ReaderBasedJsonParser._handleOddValue(ReaderBasedJsonParser.java:1892) > at > com.fasterxml.jackson.core.json.ReaderBasedJsonParser.nextToken(ReaderBasedJsonParser.java:747) > at > com.fasterxml.jackson.databind.ObjectMapper._readTreeAndClose(ObjectMapper.java:4030) > at > com.fasterxml.jackson.databind.ObjectMapper.readTree(ObjectMapper.java:2539) > at kafka.utils.Json$.kafka$utils$Json$$doParseFull(Json.scala:46) > at kafka.utils.Json$$anonfun$tryParseFull$1.apply(Json.scala:44) > at kafka.utils.Json$$anonfun$tryParseFull$1.apply(Json.scala:44) > at scala.util.Try$.apply(Try.scala:192) > at kafka.utils.Json$.tryParseFull(Json.scala:44) > at > kafka.admin.ReassignPartitionsCommand$.parsePartitionReassignmentData(ReassignPartitionsCommand.scala:241) > ... 5 more > {noformat} -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Created] (KAFKA-6084) ReassignPartitionsCommand should propagate JSON parsing failures
Viktor Somogyi created KAFKA-6084: - Summary: ReassignPartitionsCommand should propagate JSON parsing failures Key: KAFKA-6084 URL: https://issues.apache.org/jira/browse/KAFKA-6084 Project: Kafka Issue Type: Improvement Components: admin Affects Versions: 0.11.0.0 Reporter: Viktor Somogyi Assignee: Viktor Somogyi Priority: Minor Attachments: Screen Shot 2017-10-18 at 23.31.22.png Basically, looking at Json.scala, it will always swallow any parsing errors: {code} def parseFull(input: String): Option[JsonValue] = try Option(mapper.readTree(input)).map(JsonValue(_)) catch { case _: JsonProcessingException => None } {code} While sometimes it is easy to figure out the problem by simply looking at the JSON, in some cases it is not trivial: some invisible characters (like a byte order mark) won't be displayed by most text editors, and people can spend time figuring out what the problem is. As Jackson provides a really detailed exception about what failed and how, it is easy to propagate the failure to the user. As an example I attached a BOM-prefixed JSON which fails with the following error, which is very counterintuitive: {noformat} [root@localhost ~]# kafka-reassign-partitions --zookeeper localhost:2181 --reassignment-json-file /root/increase-replication-factor.json --execute Partitions reassignment failed due to Partition reassignment data file /root/increase-replication-factor.json is empty kafka.common.AdminCommandFailedException: Partition reassignment data file /root/increase-replication-factor.json is empty at kafka.admin.ReassignPartitionsCommand$.executeAssignment(ReassignPartitionsCommand.scala:120) at kafka.admin.ReassignPartitionsCommand$.main(ReassignPartitionsCommand.scala:52) at kafka.admin.ReassignPartitionsCommand.main(ReassignPartitionsCommand.scala) ... 
{noformat} In case of the above error it would be much better to see what fails exactly: {noformat} kafka.common.AdminCommandFailedException: Admin command failed at kafka.admin.ReassignPartitionsCommand$.parsePartitionReassignmentData(ReassignPartitionsCommand.scala:267) at kafka.admin.ReassignPartitionsCommand$.parseAndValidate(ReassignPartitionsCommand.scala:275) at kafka.admin.ReassignPartitionsCommand$.executeAssignment(ReassignPartitionsCommand.scala:197) at kafka.admin.ReassignPartitionsCommand$.executeAssignment(ReassignPartitionsCommand.scala:193) at kafka.admin.ReassignPartitionsCommand$.main(ReassignPartitionsCommand.scala:64) at kafka.admin.ReassignPartitionsCommand.main(ReassignPartitionsCommand.scala) Caused by: com.fasterxml.jackson.core.JsonParseException: Unexpected character ('' (code 65279 / 0xfeff)): expected a valid value (number, String, array, object, 'true', 'false' or 'null') at [Source: (String)"{"version":1, "partitions":[ {"topic": "test1", "partition": 0, "replicas": [1,2]}, {"topic": "test2", "partition": 1, "replicas": [2,3]} ]}"; line: 1, column: 2] at com.fasterxml.jackson.core.JsonParser._constructError(JsonParser.java:1798) at com.fasterxml.jackson.core.base.ParserMinimalBase._reportError(ParserMinimalBase.java:663) at com.fasterxml.jackson.core.base.ParserMinimalBase._reportUnexpectedChar(ParserMinimalBase.java:561) at com.fasterxml.jackson.core.json.ReaderBasedJsonParser._handleOddValue(ReaderBasedJsonParser.java:1892) at com.fasterxml.jackson.core.json.ReaderBasedJsonParser.nextToken(ReaderBasedJsonParser.java:747) at com.fasterxml.jackson.databind.ObjectMapper._readTreeAndClose(ObjectMapper.java:4030) at com.fasterxml.jackson.databind.ObjectMapper.readTree(ObjectMapper.java:2539) at kafka.utils.Json$.kafka$utils$Json$$doParseFull(Json.scala:46) at kafka.utils.Json$$anonfun$tryParseFull$1.apply(Json.scala:44) at kafka.utils.Json$$anonfun$tryParseFull$1.apply(Json.scala:44) at scala.util.Try$.apply(Try.scala:192) at 
kafka.utils.Json$.tryParseFull(Json.scala:44) at kafka.admin.ReassignPartitionsCommand$.parsePartitionReassignmentData(ReassignPartitionsCommand.scala:241) ... 5 more {noformat} -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (KAFKA-4477) Node reduces its ISR to itself, and doesn't recover. Other nodes do not take leadership, cluster remains sick until node is restarted.
[ https://issues.apache.org/jira/browse/KAFKA-4477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16162640#comment-16162640 ] Viktor Somogyi commented on KAFKA-4477: --- Thanks [~ewencp], I must have missed it, there are a few comments here. :) > Node reduces its ISR to itself, and doesn't recover. Other nodes do not take > leadership, cluster remains sick until node is restarted. > -- > > Key: KAFKA-4477 > URL: https://issues.apache.org/jira/browse/KAFKA-4477 > Project: Kafka > Issue Type: Bug > Components: core >Affects Versions: 0.10.1.0 > Environment: RHEL7 > java version "1.8.0_66" > Java(TM) SE Runtime Environment (build 1.8.0_66-b17) > Java HotSpot(TM) 64-Bit Server VM (build 25.66-b17, mixed mode) >Reporter: Michael Andre Pearce >Assignee: Apurva Mehta >Priority: Critical > Labels: reliability > Fix For: 0.10.1.1 > > Attachments: 2016_12_15.zip, 72_Server_Thread_Dump.txt, > 73_Server_Thread_Dump.txt, 74_Server_Thread_Dump, issue_node_1001_ext.log, > issue_node_1001.log, issue_node_1002_ext.log, issue_node_1002.log, > issue_node_1003_ext.log, issue_node_1003.log, kafka.jstack, > server_1_72server.log, server_2_73_server.log, server_3_74Server.log, > state_change_controller.tar.gz > > > We have encountered a critical issue that has re-occurred in different > physical environments. We haven't worked out what is going on. We do though > have a nasty workaround to keep service alive. > We have not had this issue on clusters still running 0.9.01. > We have noticed a node randomly shrinking the ISRs for the partitions it owns > down to itself; moments later we see other nodes having disconnects, > followed finally by app issues, where producing to these partitions is > blocked. > It seems only restarting the kafka instance java process resolves the > issues. > We have had this occur multiple times and from all network and machine > monitoring the machine never left the network, or had any other glitches. 
> Below are logs seen during the issue. > Node 7: > [2016-12-01 07:01:28,112] INFO Partition > [com_ig_trade_v1_position_event--demo--compacted,10] on broker 7: Shrinking > ISR for partition [com_ig_trade_v1_position_event--demo--compacted,10] from > 1,2,7 to 7 (kafka.cluster.Partition) > All other nodes: > [2016-12-01 07:01:38,172] WARN [ReplicaFetcherThread-0-7], Error in fetch > kafka.server.ReplicaFetcherThread$FetchRequest@5aae6d42 > (kafka.server.ReplicaFetcherThread) > java.io.IOException: Connection to 7 was disconnected before the response was > read > All clients: > java.util.concurrent.ExecutionException: > org.apache.kafka.common.errors.NetworkException: The server disconnected > before a response was received. > After this occurs, we then suddenly see on the sick machine an increasing > amount of close_waits and file descriptors. > As a workaround to keep service we are currently putting in an automated > process that tails the logs for the pattern below, and where new_partitions > is just the node itself we restart the node. > "\[(?P<date>.+)\] INFO Partition \[.*\] on broker .* Shrinking ISR for > partition \[.*\] from (?P<old_partitions>.+) to (?P<new_partitions>.+) > \(kafka.cluster.Partition\)" -- This message was sent by Atlassian JIRA (v6.4.14#64029)
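The tail-and-restart workaround quoted above can be sketched as follows. This is illustrative Python, not the reporter's actual script; the named groups are reconstructed assumptions (only new_partitions is named in the report), and the sample line is the broker-7 log entry from the report joined onto one line.

```python
# Sketch of the workaround: match "Shrinking ISR" log lines and flag the
# case where the new ISR consists of nothing but the broker itself.
import re

SHRINK_RE = re.compile(
    r"\[(?P<date>.+)\] INFO Partition \[.*\] on broker (?P<broker>\d+): "
    r"Shrinking ISR for partition \[.*\] "
    r"from (?P<old_partitions>.+) to (?P<new_partitions>.+) "
    r"\(kafka\.cluster\.Partition\)"
)


def needs_restart(line: str, broker_id: int) -> bool:
    """True when a Shrinking-ISR line shows the ISR reduced to the broker itself."""
    m = SHRINK_RE.search(line)
    return bool(m) and m.group("new_partitions").strip() == str(broker_id)


# The broker-7 log line from the report:
line = ("[2016-12-01 07:01:28,112] INFO Partition "
        "[com_ig_trade_v1_position_event--demo--compacted,10] on broker 7: "
        "Shrinking ISR for partition "
        "[com_ig_trade_v1_position_event--demo--compacted,10] "
        "from 1,2,7 to 7 (kafka.cluster.Partition)")

assert needs_restart(line, 7)       # ISR shrank to just broker 7 -> restart
assert not needs_restart(line, 1)   # this line does not implicate broker 1
```

In the reported setup a matching line would trigger an automated restart of the affected broker's Java process, which is a blunt instrument but kept the cluster serving.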
[jira] [Commented] (KAFKA-1044) change log4j to slf4j
[ https://issues.apache.org/jira/browse/KAFKA-1044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16158635#comment-16158635 ] Viktor Somogyi commented on KAFKA-1044: --- Hi [~grussell], currently I need to resolve some merge conflicts but let me do that and try to get it reviewed with someone :). > change log4j to slf4j > -- > > Key: KAFKA-1044 > URL: https://issues.apache.org/jira/browse/KAFKA-1044 > Project: Kafka > Issue Type: Bug > Components: log >Affects Versions: 0.8.0 >Reporter: shijinkui >Assignee: Viktor Somogyi >Priority: Minor > Labels: newbie > > Could you change log4j to slf4j? In my project I use logback, and it conflicts > with log4j. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Comment Edited] (KAFKA-5723) Refactor BrokerApiVersionsCommand to use the new AdminClient
[ https://issues.apache.org/jira/browse/KAFKA-5723?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16155296#comment-16155296 ] Viktor Somogyi edited comment on KAFKA-5723 at 9/6/17 12:59 PM: [~ijuma] [~ewencp] [~granthenke] (you guys seem to be the major contributors to the tools project :) ) what do you think about creating a working branch for this? We think it would be valuable to coordinate the work among us and progress effectively. There seems to be many common components and having these on our own private forks seems to be a little bit obstructive. I'd make sure regularly that the upstream branch to be created keeps up with trunk. was (Author: viktorsomogyi): [~ijuma] [~ewencp] [~granthenke] (you guys seem to be the major contributors to the tools project :) ) what do you guys think about creating a working branch for this? We think it would be valuable to coordinate the work among us and progress effectively. There seems to be many common components and having these on our own private forks seems to be a little bit obstructive. I'd make sure regularly that the upstream branch to be created keeps up with trunk. > Refactor BrokerApiVersionsCommand to use the new AdminClient > > > Key: KAFKA-5723 > URL: https://issues.apache.org/jira/browse/KAFKA-5723 > Project: Kafka > Issue Type: Improvement >Reporter: Viktor Somogyi >Assignee: Viktor Somogyi > > Currently it uses the deprecated AdminClient and in order to remove usages of > that client, this class needs to be refactored. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Comment Edited] (KAFKA-5723) Refactor BrokerApiVersionsCommand to use the new AdminClient
[ https://issues.apache.org/jira/browse/KAFKA-5723?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16155296#comment-16155296 ] Viktor Somogyi edited comment on KAFKA-5723 at 9/6/17 12:59 PM: [~ijuma] [~ewencp] [~granthenke] (you guys seem to be the major contributors to the tools project :) ) what do you guys think about creating a working branch for this? We think it would be valuable to coordinate the work among us and progress effectively. There seems to be many common components and having these on our own private forks seems to be a little bit obstructive. I'd make sure regularly that the upstream branch to be created keeps up with trunk. was (Author: viktorsomogyi): [~ijuma] [~ewencp] [~granthenke] what do you guys think about creating a working branch for this? We think it would be valuable to coordinate the work among us and progress effectively. There seems to be many common components and having these on our own private forks seems to be a little bit obstructive. I'd make sure regularly that the upstream branch to be created keeps up with trunk. > Refactor BrokerApiVersionsCommand to use the new AdminClient > > > Key: KAFKA-5723 > URL: https://issues.apache.org/jira/browse/KAFKA-5723 > Project: Kafka > Issue Type: Improvement >Reporter: Viktor Somogyi >Assignee: Viktor Somogyi > > Currently it uses the deprecated AdminClient and in order to remove usages of > that client, this class needs to be refactored. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (KAFKA-5723) Refactor BrokerApiVersionsCommand to use the new AdminClient
[ https://issues.apache.org/jira/browse/KAFKA-5723?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16155296#comment-16155296 ] Viktor Somogyi commented on KAFKA-5723: --- [~ijuma] [~ewencp] [~granthenke] what do you guys think about creating a working branch for this? We think it would be valuable to coordinate the work among us and progress effectively. There seems to be many common components and having these on our own private forks seems to be a little bit obstructive. I'd make sure regularly that the upstream branch to be created keeps up with trunk. > Refactor BrokerApiVersionsCommand to use the new AdminClient > > > Key: KAFKA-5723 > URL: https://issues.apache.org/jira/browse/KAFKA-5723 > Project: Kafka > Issue Type: Improvement >Reporter: Viktor Somogyi >Assignee: Viktor Somogyi > > Currently it uses the deprecated AdminClient and in order to remove usages of > that client, this class needs to be refactored. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (KAFKA-4477) Node reduces its ISR to itself, and doesn't recover. Other nodes do not take leadership, cluster remains sick until node is restarted.
[ https://issues.apache.org/jira/browse/KAFKA-4477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16155156#comment-16155156 ] Viktor Somogyi commented on KAFKA-4477: --- Could anyone please help me with sharing the commit hash related to this jira? I'd like to look into the fix but couldn't find any related commits in the git history. > Node reduces its ISR to itself, and doesn't recover. Other nodes do not take > leadership, cluster remains sick until node is restarted. > -- > > Key: KAFKA-4477 > URL: https://issues.apache.org/jira/browse/KAFKA-4477 > Project: Kafka > Issue Type: Bug > Components: core >Affects Versions: 0.10.1.0 > Environment: RHEL7 > java version "1.8.0_66" > Java(TM) SE Runtime Environment (build 1.8.0_66-b17) > Java HotSpot(TM) 64-Bit Server VM (build 25.66-b17, mixed mode) >Reporter: Michael Andre Pearce >Assignee: Apurva Mehta >Priority: Critical > Labels: reliability > Fix For: 0.10.1.1 > > Attachments: 2016_12_15.zip, 72_Server_Thread_Dump.txt, > 73_Server_Thread_Dump.txt, 74_Server_Thread_Dump, issue_node_1001_ext.log, > issue_node_1001.log, issue_node_1002_ext.log, issue_node_1002.log, > issue_node_1003_ext.log, issue_node_1003.log, kafka.jstack, > server_1_72server.log, server_2_73_server.log, server_3_74Server.log, > state_change_controller.tar.gz > > > We have encountered a critical issue that has re-occurred in different > physical environments. We haven't worked out what is going on. We do though > have a nasty workaround to keep service alive. > We have not had this issue on clusters still running 0.9.01. > We have noticed a node randomly shrinking the ISRs for the partitions it owns > down to itself; moments later we see other nodes having disconnects, > followed finally by app issues, where producing to these partitions is > blocked. > It seems only restarting the kafka instance java process resolves the > issues. 
> We have had this occur multiple times and from all network and machine > monitoring the machine never left the network, or had any other glitches. > Below are logs seen during the issue. > Node 7: > [2016-12-01 07:01:28,112] INFO Partition > [com_ig_trade_v1_position_event--demo--compacted,10] on broker 7: Shrinking > ISR for partition [com_ig_trade_v1_position_event--demo--compacted,10] from > 1,2,7 to 7 (kafka.cluster.Partition) > All other nodes: > [2016-12-01 07:01:38,172] WARN [ReplicaFetcherThread-0-7], Error in fetch > kafka.server.ReplicaFetcherThread$FetchRequest@5aae6d42 > (kafka.server.ReplicaFetcherThread) > java.io.IOException: Connection to 7 was disconnected before the response was > read > All clients: > java.util.concurrent.ExecutionException: > org.apache.kafka.common.errors.NetworkException: The server disconnected > before a response was received. > After this occurs, we then suddenly see on the sick machine an increasing > amount of close_waits and file descriptors. > As a workaround to keep service we are currently putting in an automated > process that tails the logs for the pattern below, and where new_partitions > is just the node itself we restart the node. > "\[(?P<date>.+)\] INFO Partition \[.*\] on broker .* Shrinking ISR for > partition \[.*\] from (?P<old_partitions>.+) to (?P<new_partitions>.+) > \(kafka.cluster.Partition\)" -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (KAFKA-5723) Refactor BrokerApiVersionsCommand to use the new AdminClient
[ https://issues.apache.org/jira/browse/KAFKA-5723?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16130351#comment-16130351 ] Viktor Somogyi commented on KAFKA-5723: --- +1 for creating our own branch upstream, we could name it after the KIP and basically do all related tasks there. > Refactor BrokerApiVersionsCommand to use the new AdminClient > > > Key: KAFKA-5723 > URL: https://issues.apache.org/jira/browse/KAFKA-5723 > Project: Kafka > Issue Type: Improvement >Reporter: Viktor Somogyi >Assignee: Viktor Somogyi > > Currently it uses the deprecated AdminClient and in order to remove usages of > that client, this class needs to be refactored. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (KAFKA-5723) Refactor BrokerApiVersionsCommand to use the new AdminClient
[ https://issues.apache.org/jira/browse/KAFKA-5723?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16125601#comment-16125601 ] Viktor Somogyi commented on KAFKA-5723: --- [~adyachkov] I don't think it requires one, since it doesn't touch any public API that is listed on the [KIP wiki|https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Improvement+Proposals]. Your solution makes sense to me, I was thinking about this as well (I might stick with the listAllBrokerVersionInfo name, as describeCluster suggests something more complicated). Please coordinate with [~ppatierno] regarding how to organize your code, as he already started on some [other commands|https://issues.apache.org/jira/browse/KAFKA-5561]. > Refactor BrokerApiVersionsCommand to use the new AdminClient > > > Key: KAFKA-5723 > URL: https://issues.apache.org/jira/browse/KAFKA-5723 > Project: Kafka > Issue Type: Improvement >Reporter: Viktor Somogyi >Assignee: Viktor Somogyi > > Currently it uses the deprecated AdminClient and in order to remove usages of > that client, this class needs to be refactored. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (KAFKA-5723) Refactor BrokerApiVersionsCommand to use the new AdminClient
[ https://issues.apache.org/jira/browse/KAFKA-5723?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16124553#comment-16124553 ]

Viktor Somogyi commented on KAFKA-5723:
---------------------------------------

Sure, go on :)
[jira] [Updated] (KAFKA-5722) Refactor ConfigCommand to use the new AdminClient
[ https://issues.apache.org/jira/browse/KAFKA-5722?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Viktor Somogyi updated KAFKA-5722:
----------------------------------
    Summary: Refactor ConfigCommand to use the new AdminClient  (was: Refactor ConfigCommand to use the KafkaAdminClient)

> Refactor ConfigCommand to use the new AdminClient
> -------------------------------------------------
>
>                 Key: KAFKA-5722
>                 URL: https://issues.apache.org/jira/browse/KAFKA-5722
>             Project: Kafka
>          Issue Type: Improvement
>            Reporter: Viktor Somogyi
>            Assignee: Viktor Somogyi
>              Labels: kip, needs-kip
>
> The ConfigCommand currently uses a direct connection to zookeeper. The
> zookeeper dependency should be deprecated and an AdminClient API created to
> be used instead.
> This change requires a KIP.
[jira] [Updated] (KAFKA-5723) Refactor BrokerApiVersionsCommand to use the new AdminClient
[ https://issues.apache.org/jira/browse/KAFKA-5723?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Viktor Somogyi updated KAFKA-5723:
----------------------------------
    Summary: Refactor BrokerApiVersionsCommand to use the new AdminClient  (was: Refactor BrokerApiVersionsCommand to use the KafkaAdminClient)
[jira] [Commented] (KAFKA-3268) Refactor existing CLI scripts to use KafkaAdminClient
[ https://issues.apache.org/jira/browse/KAFKA-3268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16123279#comment-16123279 ]

Viktor Somogyi commented on KAFKA-3268:
---------------------------------------

[~ppatierno] sure, it'd be great to sync on the solution/interfaces, basic utilities and behavior. I'll review your changes.

> Refactor existing CLI scripts to use KafkaAdminClient
> -----------------------------------------------------
>
>                 Key: KAFKA-3268
>                 URL: https://issues.apache.org/jira/browse/KAFKA-3268
>             Project: Kafka
>          Issue Type: Sub-task
>            Reporter: Grant Henke
>            Assignee: Viktor Somogyi
[jira] [Created] (KAFKA-5723) Refactor BrokerApiVersionsCommand to use the KafkaAdminClient
Viktor Somogyi created KAFKA-5723:
-------------------------------------

             Summary: Refactor BrokerApiVersionsCommand to use the KafkaAdminClient
                 Key: KAFKA-5723
                 URL: https://issues.apache.org/jira/browse/KAFKA-5723
             Project: Kafka
          Issue Type: Improvement
            Reporter: Viktor Somogyi
            Assignee: Viktor Somogyi

Currently it uses the deprecated AdminClient and in order to remove usages of that client, this class needs to be refactored.
[jira] [Updated] (KAFKA-5722) Refactor ConfigCommand to use the KafkaAdminClient
[ https://issues.apache.org/jira/browse/KAFKA-5722?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Viktor Somogyi updated KAFKA-5722:
----------------------------------
    Summary: Refactor ConfigCommand to use the KafkaAdminClient  (was: Refactor ConfigCommand to use the new AdminClient)
[jira] [Updated] (KAFKA-5722) Refactor ConfigCommand to use the new AdminClient
[ https://issues.apache.org/jira/browse/KAFKA-5722?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Viktor Somogyi updated KAFKA-5722:
----------------------------------
    Labels: kip needs-kip  (was: )
[jira] [Created] (KAFKA-5722) Refactor ConfigCommand to use the new AdminClient
Viktor Somogyi created KAFKA-5722:
-------------------------------------

             Summary: Refactor ConfigCommand to use the new AdminClient
                 Key: KAFKA-5722
                 URL: https://issues.apache.org/jira/browse/KAFKA-5722
             Project: Kafka
          Issue Type: Improvement
            Reporter: Viktor Somogyi
            Assignee: Viktor Somogyi

The ConfigCommand currently uses a direct connection to zookeeper. The zookeeper dependency should be deprecated and an AdminClient API created to be used instead.
This change requires a KIP.
[jira] [Commented] (KAFKA-3268) Refactor existing CLI scripts to use KafkaAdminClient
[ https://issues.apache.org/jira/browse/KAFKA-3268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16112591#comment-16112591 ]

Viktor Somogyi commented on KAFKA-3268:
---------------------------------------

[~tombentley], thanks for the heads up, I didn't know about these. I guess we can close this jira then? Do you know if there are any more jiras around refactoring the admin clients? Is there an umbrella jira which tracks the subtasks? I can see two tasks I'd be happy to do: refactoring the BrokerApiVersionsCommand (it collects the info via a metadata request, although through the deprecated AdminClient) and the ConfigCommand (it uses zookeeper directly, so it probably needs a KIP). Shall I open jiras/KIPs accordingly?
[jira] [Updated] (KAFKA-3268) Refactor existing CLI scripts to use KafkaAdminClient
[ https://issues.apache.org/jira/browse/KAFKA-3268?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Viktor Somogyi updated KAFKA-3268:
----------------------------------
    Summary: Refactor existing CLI scripts to use KafkaAdminClient  (was: Refactor existing CLI scripts to use new AdminClient)
[jira] [Commented] (KAFKA-5674) max.connections.per.ip minimum value to be zero to allow IP address blocking
[ https://issues.apache.org/jira/browse/KAFKA-5674?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16112426#comment-16112426 ]

Viktor Somogyi commented on KAFKA-5674:
---------------------------------------

[~tmgstev] could you please review my PR once you have time?

> max.connections.per.ip minimum value to be zero to allow IP address blocking
> ----------------------------------------------------------------------------
>
>                 Key: KAFKA-5674
>                 URL: https://issues.apache.org/jira/browse/KAFKA-5674
>             Project: Kafka
>          Issue Type: Improvement
>    Affects Versions: 0.11.0.0
>            Reporter: Tristan Stevens
>            Assignee: Viktor Somogyi
>
> Currently the max.connections.per.ip (KAFKA-1512) config has a minimum value
> of 1, however, as suggested in
> https://issues.apache.org/jira/browse/KAFKA-1512?focusedCommentId=14051914,
> having this with a minimum value of zero would allow IP-based filtering of
> inbound connections (effectively prohibiting those IP addresses from
> connecting altogether).
[jira] [Assigned] (KAFKA-5674) max.connections.per.ip minimum value to be zero to allow IP address blocking
[ https://issues.apache.org/jira/browse/KAFKA-5674?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Viktor Somogyi reassigned KAFKA-5674:
-------------------------------------
    Assignee: Viktor Somogyi
[jira] [Commented] (KAFKA-5674) max.connections.per.ip minimum value to be zero to allow IP address blocking
[ https://issues.apache.org/jira/browse/KAFKA-5674?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16107570#comment-16107570 ]

Viktor Somogyi commented on KAFKA-5674:
---------------------------------------

[~tmgstev], I'd like to assign this to myself, if you don't mind and aren't already working on it.
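The IP-blocking idea behind KAFKA-5674 could look roughly like this in a broker's server.properties once a zero minimum is allowed. This is a hypothetical sketch, not the merged change: the property names are Kafka's existing max.connections.per.ip and its per-address companion max.connections.per.ip.overrides, and the addresses are made-up examples.

{noformat}
# server.properties -- sketch, assumes KAFKA-5674 permits a limit of 0
# Keep an ordinary per-IP connection limit for all clients...
max.connections.per.ip=20
# ...but override the limit to 0 for specific addresses, which would
# reject every inbound connection from them (the blocking this
# ticket proposes to enable).
max.connections.per.ip.overrides=10.0.0.5:0,10.0.0.6:0
{noformat}

With a minimum of 1, the closest you can get today is throttling an address down to a single connection rather than blocking it outright.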