[jira] [Resolved] (KAFKA-13910) Test metadata refresh for Kraft admin client
[ https://issues.apache.org/jira/browse/KAFKA-13910?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] dengziming resolved KAFKA-13910. Resolution: Won't Fix > Test metadata refresh for Kraft admin client > > > Key: KAFKA-13910 > URL: https://issues.apache.org/jira/browse/KAFKA-13910 > Project: Kafka > Issue Type: Test >Reporter: dengziming >Priority: Minor > Labels: newbie > > [https://github.com/apache/kafka/pull/12110#discussion_r875418603] > Currently we don't get the real controller from MetadataCache in KRaft mode, so we should test it another way. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Resolved] (KAFKA-14210) client quota config key is sanitized in Kraft broker
[ https://issues.apache.org/jira/browse/KAFKA-14210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] dengziming resolved KAFKA-14210. Resolution: Not A Problem
> client quota config key is sanitized in Kraft broker
> --
> Key: KAFKA-14210
> URL: https://issues.apache.org/jira/browse/KAFKA-14210
> Project: Kafka
> Issue Type: Improvement
> Reporter: dengziming
> Assignee: dengziming
> Priority: Major
>
> We sanitize the key in ZK mode but don't sanitize it in KRaft mode.
> {code:java}
> public class AdminClientExample {
>     public static void main(String[] args) throws ExecutionException, InterruptedException {
>         Properties properties = new Properties();
>         properties.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
>         Admin admin = Admin.create(properties);
>         // Alter config
>         admin.alterClientQuotas(Collections.singleton(
>             new ClientQuotaAlteration(
>                 new ClientQuotaEntity(Collections.singletonMap(ClientQuotaEntity.CLIENT_ID, "default")),
>                 Collections.singletonList(new ClientQuotaAlteration.Op("request_percentage", 0.02)))
>         )).all().get();
>         Map<ClientQuotaEntity, Map<String, Double>> clientQuotaEntityMapMap = admin.describeClientQuotas(
>             ClientQuotaFilter.contains(Collections.singletonList(ClientQuotaFilterComponent.ofDefaultEntity(ClientQuotaEntity.CLIENT_ID)))
>         ).entities().get();
>         System.out.println(clientQuotaEntityMapMap);
>     }
> } {code}
> The code should have returned request_percentage=0.02, but it returned Map.empty.
-- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (KAFKA-14210) client quota config key is sanitized in Kraft broker
dengziming created KAFKA-14210: -- Summary: client quota config key is sanitized in Kraft broker Key: KAFKA-14210 URL: https://issues.apache.org/jira/browse/KAFKA-14210 Project: Kafka Issue Type: Improvement Reporter: dengziming Assignee: dengziming
Update client quota default config successfully:
{code}
root@7604c498c154:/opt/kafka-3.2.0# bin/kafka-configs.sh --bootstrap-server localhost:9092 --alter --entity-type clients --entity-default --add-config REQUEST_PERCENTAGE_OVERRIDE_CONFIG=0.01
{code}
Describe client quota default config, but there is no output:
{code}
Only quota configs can be added for 'clients' using --bootstrap-server. Unexpected config names: Set(REQUEST_PERCENTAGE_OVERRIDE_CONFIG)
root@7604c498c154:/opt/kafka-3.2.0# bin/kafka-configs.sh --bootstrap-server localhost:9092 --describe --entity-type clients --entity-default
{code}
The reason is that we sanitize the key in ZK mode but don't sanitize it in KRaft mode.
-- This message was sent by Atlassian Jira (v8.20.10#820010)
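As an aside, the "sanitization" the two KAFKA-14210 messages refer to is a percent-encoding of quota entity names so they are safe as ZooKeeper node names (Kafka has a `Sanitizer` utility for this). A minimal stdlib-only sketch of the idea; the exact replacement rules here are assumptions, not Kafka's actual implementation:

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class QuotaKeySanitizer {
    // Percent-encode an entity name so it can be used as a node/path name.
    // URLEncoder gets us most of the way; the extra replaces below are
    // assumed adjustments, since URLEncoder maps ' ' to '+' and leaves
    // '*' unencoded.
    public static String sanitize(String name) {
        return URLEncoder.encode(name, StandardCharsets.UTF_8)
                .replace("+", "%20")
                .replace("*", "%2A");
    }

    public static void main(String[] args) {
        System.out.println(sanitize("client id with spaces")); // client%20id%20with%20spaces
    }
}
```

The bug report is that ZK mode applies such an encoding to the quota config key while KRaft mode stores the raw key, so a describe that looks up the sanitized form finds nothing.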
[jira] [Resolved] (KAFKA-13990) Update features will fail in KRaft mode
[ https://issues.apache.org/jira/browse/KAFKA-13990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] dengziming resolved KAFKA-13990. Resolution: Fixed > Update features will fail in KRaft mode > --- > > Key: KAFKA-13990 > URL: https://issues.apache.org/jira/browse/KAFKA-13990 > Project: Kafka > Issue Type: Bug >Affects Versions: 3.3.0 >Reporter: dengziming >Assignee: dengziming >Priority: Blocker > Fix For: 3.3.0 > > > We return empty supported features in the Controller ApiVersionResponse, so {{quorumSupportedFeature}} will always return empty; we should return Map(metadata.version -> latest) > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Resolved] (KAFKA-13850) kafka-metadata-shell is missing some record types
[ https://issues.apache.org/jira/browse/KAFKA-13850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] dengziming resolved KAFKA-13850. Fix Version/s: 3.3 Resolution: Fixed > kafka-metadata-shell is missing some record types > - > > Key: KAFKA-13850 > URL: https://issues.apache.org/jira/browse/KAFKA-13850 > Project: Kafka > Issue Type: Bug > Components: kraft >Reporter: David Arthur >Assignee: dengziming >Priority: Major > Fix For: 3.3 > > > Noticed while working on feature flags in KRaft: the in-memory tree of the > metadata (MetadataNodeManager) is missing support for a few record types. > * DelegationTokenRecord > * UserScramCredentialRecord (should we include this?) > * FeatureLevelRecord > * AccessControlEntryRecord > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (KAFKA-14073) Logging the reason for creating a snapshot
dengziming created KAFKA-14073: -- Summary: Logging the reason for creating a snapshot Key: KAFKA-14073 URL: https://issues.apache.org/jira/browse/KAFKA-14073 Project: Kafka Issue Type: Improvement Reporter: dengziming So far we have two reasons for creating a snapshot: 1. X bytes were applied. 2. The metadata version changed. We should log the reason when creating a snapshot, on both the broker side and the controller side. See https://github.com/apache/kafka/pull/12265#discussion_r915972383 -- This message was sent by Atlassian Jira (v8.20.10#820010)
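The two snapshot reasons listed in KAFKA-14073 lend themselves to a small enum threaded into the existing "Creating a new snapshot at offset N..." log line. A sketch with hypothetical names (the real change would live in BrokerMetadataSnapshotter and the controller, not in a class like this):

```java
public class SnapshotReasonDemo {
    // Hypothetical reason type mirroring the two cases from the ticket.
    enum SnapshotReason {
        MAX_BYTES_EXCEEDED("enough bytes were applied since the last snapshot"),
        METADATA_VERSION_CHANGED("the metadata version was changed");

        final String description;
        SnapshotReason(String description) { this.description = description; }
    }

    // Extend the existing log line with the reason for the snapshot.
    static String snapshotLogMessage(long offset, SnapshotReason reason) {
        return "Creating a new snapshot at offset " + offset
                + " because " + reason.description + "...";
    }

    public static void main(String[] args) {
        System.out.println(snapshotLogMessage(0, SnapshotReason.METADATA_VERSION_CHANGED));
    }
}
```

Carrying the reason as a type rather than a free-form string keeps broker-side and controller-side log lines consistent and greppable.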
[jira] [Created] (KAFKA-14047) Use KafkaRaftManager in KRaft TestKit
dengziming created KAFKA-14047: -- Summary: Use KafkaRaftManager in KRaft TestKit Key: KAFKA-14047 URL: https://issues.apache.org/jira/browse/KAFKA-14047 Project: Kafka Issue Type: Test Reporter: dengziming We are using the lower-level {{ControllerServer}} and {{BrokerServer}} in TestKit; we can improve it to use KafkaRaftManager. See the discussion here: https://github.com/apache/kafka/pull/12157#discussion_r882179407 -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (KAFKA-13991) Add Admin.updateFeatures() API.
dengziming created KAFKA-13991: -- Summary: Add Admin.updateFeatures() API. Key: KAFKA-13991 URL: https://issues.apache.org/jira/browse/KAFKA-13991 Project: Kafka Issue Type: Bug Reporter: dengziming We have Admin.updateFeatures(Map, UpdateFeaturesOptions) but don't have Admin.updateFeatures(Map); this is inconsistent with the other Admin APIs. We may need a KIP to change this. -- This message was sent by Atlassian Jira (v8.20.7#820007)
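The convention KAFKA-13991 refers to is a no-options convenience overload that delegates to the options-taking variant with defaults. A sketch of that idiom with stand-in types (FakeAdmin, Options, Result are not Kafka classes):

```java
import java.util.Collections;
import java.util.Map;

// Illustrative stand-in for the Admin interface: the one-argument variant
// simply delegates to the two-argument variant with default options.
interface FakeAdmin {
    class Options {}
    class Result {}

    Result updateFeatures(Map<String, Short> updates, Options options);

    // The proposed convenience overload.
    default Result updateFeatures(Map<String, Short> updates) {
        return updateFeatures(updates, new Options());
    }
}

class FakeAdminDemo {
    public static void main(String[] args) {
        // Only the two-argument method is abstract, so a lambda suffices.
        FakeAdmin admin = (updates, options) -> new FakeAdmin.Result();
        System.out.println(admin.updateFeatures(Collections.emptyMap()) != null);
    }
}
```

Because the overload is pure delegation, adding it is source- and binary-compatible for implementers, which is why the ticket frames it as a consistency fix (though any public Admin change still needs a KIP).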
[jira] [Created] (KAFKA-13990) Update features will fail in KRaft mode
dengziming created KAFKA-13990: -- Summary: Update features will fail in KRaft mode Key: KAFKA-13990 URL: https://issues.apache.org/jira/browse/KAFKA-13990 Project: Kafka Issue Type: Bug Reporter: dengziming Assignee: dengziming We return empty supported features in the Controller ApiVersionResponse, so {{quorumSupportedFeature}} will always return empty; we should return Map(metadata.version -> latest) -- This message was sent by Atlassian Jira (v8.20.7#820007)
[jira] [Created] (KAFKA-13968) Broker should not generate a snapshot until it has been unfenced
dengziming created KAFKA-13968: -- Summary: Broker should not generate a snapshot until it has been unfenced Key: KAFKA-13968 URL: https://issues.apache.org/jira/browse/KAFKA-13968 Project: Kafka Issue Type: Bug Components: kraft Reporter: dengziming Assignee: dengziming There is a bug when computing `FeaturesDelta` which causes us to generate a snapshot on every commit.
[2022-06-08 13:07:43,010] INFO [BrokerMetadataSnapshotter id=0] Creating a new snapshot at offset 0... (kafka.server.metadata.BrokerMetadataSnapshotter:66)
[2022-06-08 13:07:43,222] INFO [BrokerMetadataSnapshotter id=0] Creating a new snapshot at offset 2... (kafka.server.metadata.BrokerMetadataSnapshotter:66)
[2022-06-08 13:07:43,727] INFO [BrokerMetadataSnapshotter id=0] Creating a new snapshot at offset 3... (kafka.server.metadata.BrokerMetadataSnapshotter:66)
[2022-06-08 13:07:44,228] INFO [BrokerMetadataSnapshotter id=0] Creating a new snapshot at offset 4... (kafka.server.metadata.BrokerMetadataSnapshotter:66)
Before a broker is unfenced, it won't start publishing metadata, so it's meaningless to generate a snapshot. -- This message was sent by Atlassian Jira (v8.20.7#820007)
[jira] [Resolved] (KAFKA-13845) Add support for reading KRaft snapshots in kafka-dump-log
[ https://issues.apache.org/jira/browse/KAFKA-13845?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] dengziming resolved KAFKA-13845. Resolution: Fixed > Add support for reading KRaft snapshots in kafka-dump-log > - > > Key: KAFKA-13845 > URL: https://issues.apache.org/jira/browse/KAFKA-13845 > Project: Kafka > Issue Type: Sub-task >Reporter: David Arthur >Assignee: dengziming >Priority: Minor > Labels: kip-500 > > Even though the metadata snapshots use the same format as log segments, the > kafka-dump-log tool (DumpLogSegments.scala) does not support the file > extension or file name pattern. > For example, a metadata snapshot will be named like: > {code:java} > __cluster_metadata-0/0004-01.checkpoint{code} > whereas regular log segments (including the metadata log) are named like: > {code:java} > __cluster_metadata-0/.log {code} > > We need to enhance the tool to support snapshots. -- This message was sent by Atlassian Jira (v8.20.7#820007)
[jira] [Resolved] (KAFKA-13833) Remove the "min_version_level" version from the finalized version range that is written to ZooKeeper
[ https://issues.apache.org/jira/browse/KAFKA-13833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] dengziming resolved KAFKA-13833. Resolution: Fixed > Remove the "min_version_level" version from the finalized version range that > is written to ZooKeeper > > > Key: KAFKA-13833 > URL: https://issues.apache.org/jira/browse/KAFKA-13833 > Project: Kafka > Issue Type: Sub-task >Reporter: dengziming >Assignee: dengziming >Priority: Major > -- This message was sent by Atlassian Jira (v8.20.7#820007)
[jira] [Resolved] (KAFKA-12902) Add UINT32 type in generator
[ https://issues.apache.org/jira/browse/KAFKA-12902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] dengziming resolved KAFKA-12902. Resolution: Fixed > Add UINT32 type in generator > > > Key: KAFKA-12902 > URL: https://issues.apache.org/jira/browse/KAFKA-12902 > Project: Kafka > Issue Type: Improvement >Reporter: dengziming >Assignee: dengziming >Priority: Major > > We support uint32 in Struct but don't support uint32 in the generator protocol. -- This message was sent by Atlassian Jira (v8.20.7#820007)
[jira] [Created] (KAFKA-13910) Test metadata refresh for Kraft admin client
dengziming created KAFKA-13910: -- Summary: Test metadata refresh for Kraft admin client Key: KAFKA-13910 URL: https://issues.apache.org/jira/browse/KAFKA-13910 Project: Kafka Issue Type: Test Reporter: dengziming [https://github.com/apache/kafka/pull/12110#discussion_r875418603] Currently we don't get the real controller from MetadataCache in KRaft mode, so we should test it another way. -- This message was sent by Atlassian Jira (v8.20.7#820007)
[jira] [Created] (KAFKA-13907) Fix hanging ServerShutdownTest.testCleanShutdownWithKRaftControllerUnavailable
dengziming created KAFKA-13907: -- Summary: Fix hanging ServerShutdownTest.testCleanShutdownWithKRaftControllerUnavailable Key: KAFKA-13907 URL: https://issues.apache.org/jira/browse/KAFKA-13907 Project: Kafka Issue Type: Bug Reporter: dengziming ServerShutdownTest.testCleanShutdownWithKRaftControllerUnavailable will hang waiting for controlled shutdown; there may be some bug related to it. Since this bug can be reproduced locally, it won't be hard to investigate. -- This message was sent by Atlassian Jira (v8.20.7#820007)
[jira] [Resolved] (KAFKA-13893) Add BrokerApiVersions Api in AdminClient
[ https://issues.apache.org/jira/browse/KAFKA-13893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] dengziming resolved KAFKA-13893. Resolution: Invalid > Add BrokerApiVersions Api in AdminClient > > > Key: KAFKA-13893 > URL: https://issues.apache.org/jira/browse/KAFKA-13893 > Project: Kafka > Issue Type: Task >Reporter: dengziming >Assignee: dengziming >Priority: Major > Labels: kip-required > > We already have a BrokerApiVersionsCommand to get the broker API versions, yet we > lack a similar API in AdminClient. -- This message was sent by Atlassian Jira (v8.20.7#820007)
[jira] [Created] (KAFKA-13893) Add BrokerApiVersions Api in AdminClient
dengziming created KAFKA-13893: -- Summary: Add BrokerApiVersions Api in AdminClient Key: KAFKA-13893 URL: https://issues.apache.org/jira/browse/KAFKA-13893 Project: Kafka Issue Type: Task Reporter: dengziming Assignee: dengziming We already have a BrokerApiVersionsCommand to get the broker API versions, yet we lack a similar API in AdminClient. -- This message was sent by Atlassian Jira (v8.20.7#820007)
[jira] [Resolved] (KAFKA-13854) Refactor ApiVersion to MetadataVersion
[ https://issues.apache.org/jira/browse/KAFKA-13854?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] dengziming resolved KAFKA-13854. Resolution: Fixed > Refactor ApiVersion to MetadataVersion > -- > > Key: KAFKA-13854 > URL: https://issues.apache.org/jira/browse/KAFKA-13854 > Project: Kafka > Issue Type: Sub-task >Reporter: David Arthur >Assignee: Alyssa Huang >Priority: Major > > In KRaft, we will have a value for {{metadata.version}} corresponding to each > IBP. In order to keep this association and make it obvious for developers, we > will consolidate the IBP and metadata version into a new MetadataVersion > enum. This new enum will replace the existing ApiVersion trait. > For IBPs that precede the first KRaft preview version (AK 3.0), we will use a > value of -1 for the metadata.version. -- This message was sent by Atlassian Jira (v8.20.7#820007)
[jira] [Created] (KAFKA-13863) Prevent null config value when create topic in KRaft mode
dengziming created KAFKA-13863: -- Summary: Prevent null config value when create topic in KRaft mode Key: KAFKA-13863 URL: https://issues.apache.org/jira/browse/KAFKA-13863 Project: Kafka Issue Type: Bug Reporter: dengziming -- This message was sent by Atlassian Jira (v8.20.7#820007)
[jira] [Created] (KAFKA-13862) Add And Subtract multiple config values is not supported in KRaft mode
dengziming created KAFKA-13862: -- Summary: Add And Subtract multiple config values is not supported in KRaft mode Key: KAFKA-13862 URL: https://issues.apache.org/jira/browse/KAFKA-13862 Project: Kafka Issue Type: Bug Reporter: dengziming -- This message was sent by Atlassian Jira (v8.20.7#820007)
[jira] [Resolved] (KAFKA-13859) SCRAM authentication issues with kafka-clients 3.0.1
[ https://issues.apache.org/jira/browse/KAFKA-13859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] dengziming resolved KAFKA-13859. Resolution: Not A Problem In [KIP-679]([https://cwiki.apache.org/confluence/display/KAFKA/KIP-679%3A+Producer+will+enable+the+strongest+delivery+guarantee+by+default#KIP679:Producerwillenablethestrongestdeliveryguaranteebydefault-%60IDEMPOTENT_WRITE%60Deprecation)] we relaxed the ACL restriction from {{IDEMPOTENT_WRITE}} to {{WRITE}} earlier (release version 2.8) and changed the producer defaults later (release version 3.0), in order to give community users enough time to upgrade their brokers first, so that their later client-side upgrade, which enables idempotence by default, won't get blocked by the {{IDEMPOTENT_WRITE}} ACL required by old-version brokers. So this is designed intentionally; we should help users make this change. > SCRAM authentication issues with kafka-clients 3.0.1 > > > Key: KAFKA-13859 > URL: https://issues.apache.org/jira/browse/KAFKA-13859 > Project: Kafka > Issue Type: Bug > Components: clients >Affects Versions: 3.0.1 >Reporter: Oliver Payne >Assignee: dengziming >Priority: Major > > When attempting to produce records to Kafka using a client configured with > SCRAM authentication, the authentication is rejected and the following > exception is thrown: > {{org.apache.kafka.common.errors.ClusterAuthorizationException: Cluster > authorization failed.}} > I am seeing this happen with a Springboot service that was recently upgraded > to 2.6.5. After looking into this, I learned that Springboot moved to > kafka-clients 3.0.1 from 3.0.0 in that version. And sure enough, downgrading > to kafka-clients 3.0.0 resolved the issue, with no changes made to the configs. > I have also attempted to connect to a separate server with kafka-clients > 3.0.1, using plaintext authentication. That works fine. So the issue appears > to be with SCRAM authentication.
> I will note that I am attempting to connect to an AWS MSK instance. We use > SCRAM-SHA-512 as our sasl mechanism, using the basic {{ScramLoginModule}}. -- This message was sent by Atlassian Jira (v8.20.7#820007)
[jira] [Created] (KAFKA-13836) Improve KRaft broker heartbeat logic
dengziming created KAFKA-13836: -- Summary: Improve KRaft broker heartbeat logic Key: KAFKA-13836 URL: https://issues.apache.org/jira/browse/KAFKA-13836 Project: Kafka Issue Type: Improvement Reporter: dengziming # Don't advertise an offset to the controller until it has been published # Only unfence a broker when it has seen its own registration -- This message was sent by Atlassian Jira (v8.20.7#820007)
[jira] [Created] (KAFKA-13833) Remove the "min_version_level" version from the finalized version range that is written to ZooKeeper
dengziming created KAFKA-13833: -- Summary: Remove the "min_version_level" version from the finalized version range that is written to ZooKeeper Key: KAFKA-13833 URL: https://issues.apache.org/jira/browse/KAFKA-13833 Project: Kafka Issue Type: Sub-task Reporter: dengziming Assignee: dengziming -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Created] (KAFKA-13832) Flaky test TopicCommandIntegrationTest.testAlterAssignment
dengziming created KAFKA-13832: -- Summary: Flaky test TopicCommandIntegrationTest.testAlterAssignment Key: KAFKA-13832 URL: https://issues.apache.org/jira/browse/KAFKA-13832 Project: Kafka Issue Type: Improvement Components: unit tests Reporter: dengziming -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Resolved] (KAFKA-13242) KRaft Controller doesn't handle UpdateFeaturesRequest
[ https://issues.apache.org/jira/browse/KAFKA-13242?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] dengziming resolved KAFKA-13242. Resolution: Fixed > KRaft Controller doesn't handle UpdateFeaturesRequest > - > > Key: KAFKA-13242 > URL: https://issues.apache.org/jira/browse/KAFKA-13242 > Project: Kafka > Issue Type: Sub-task >Reporter: dengziming >Assignee: dengziming >Priority: Major > -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Resolved] (KAFKA-13743) kraft controller should prevent topics with conflicting metrics names from being created
[ https://issues.apache.org/jira/browse/KAFKA-13743?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] dengziming resolved KAFKA-13743. Reviewer: Colin McCabe Resolution: Fixed
> kraft controller should prevent topics with conflicting metrics names from being created
> Key: KAFKA-13743
> URL: https://issues.apache.org/jira/browse/KAFKA-13743
> Project: Kafka
> Issue Type: Bug
> Reporter: Colin McCabe
> Assignee: dengziming
> Priority: Major
> Labels: kip-500
> Fix For: 3.3.0
>
> The kraft controller should prevent topics with conflicting metrics names from being created, like the zk code does.
> Example:
> {code}
> [cmccabe@zeratul kafka1]$ ./bin/kafka-topics.sh --create --topic f.oo --bootstrap-server localhost:9092
>
> WARNING: Due to limitations in metric names, topics with a period ('.') or underscore ('_') could collide. To avoid issues it is best to use either, but not both.
> Created topic f.oo.
>
> [cmccabe@zeratul kafka1]$ ./bin/kafka-topics.sh --create --topic f_oo --bootstrap-server localhost:9092
> WARNING: Due to limitations in metric names, topics with a period ('.') or underscore ('_') could collide. To avoid issues it is best to use either, but not both.
> Error while executing topic command : Topic 'f_oo' collides with existing topics: f.oo
> [2022-03-15 09:48:49,563] ERROR org.apache.kafka.common.errors.InvalidTopicException: Topic 'f_oo' collides with existing topics: f.oo
> (kafka.admin.TopicCommand$)
> {code}
-- This message was sent by Atlassian Jira (v8.20.1#820001)
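The collision check behind KAFKA-13743 follows from how Kafka builds metric names: '.' in a topic name is replaced with '_', so two names that differ only in those characters map to the same metric. A small sketch of that check (not the actual controller code):

```java
public class TopicNameCollision {
    // In metric names '.' collapses to '_', so topic names that differ only
    // in '.' vs '_' collide, as with f.oo and f_oo in the transcript above.
    static String metricSafeName(String topic) {
        return topic.replace('.', '_');
    }

    static boolean collidesWith(String candidate, String existing) {
        return !candidate.equals(existing)
                && metricSafeName(candidate).equals(metricSafeName(existing));
    }

    public static void main(String[] args) {
        System.out.println(collidesWith("f_oo", "f.oo")); // true
    }
}
```

The ZK-mode controller already rejects such creations; the fix brings the KRaft controller into line.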
[jira] [Resolved] (KAFKA-13733) Reset consumer group offset with not exist topic throw wrong exception
[ https://issues.apache.org/jira/browse/KAFKA-13733?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] dengziming resolved KAFKA-13733. Resolution: Not A Problem
> Reset consumer group offset with not exist topic throw wrong exception
> Key: KAFKA-13733
> URL: https://issues.apache.org/jira/browse/KAFKA-13733
> Project: Kafka
> Issue Type: Bug
> Components: core
> Reporter: Yujie Li
> Priority: Major
>
> Hey,
> I'm seeing a bug with a misleading exception when I try to reset a consumer group offset for a topic that doesn't exist, via:
> `kafka-consumer-groups --bootstrap-server $brokers --reset-offsets --group --topic :0 --to-offset `
> And got:
> ```
> Error: Executing consumer group command failed due to org.apache.kafka.common.errors.TimeoutException: Timed out waiting for a node assignment. Call: metadata
> java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.TimeoutException: Timed out waiting for a node assignment. Call: metadata
>   at java.base/java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:395)
>   at java.base/java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1999)
>   at org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:165)
>   at kafka.admin.ConsumerGroupCommand$ConsumerGroupService.getLogStartOffsets(ConsumerGroupCommand.scala:654)
>   at kafka.admin.ConsumerGroupCommand$ConsumerGroupService.checkOffsetsRange(ConsumerGroupCommand.scala:888)
>   at kafka.admin.ConsumerGroupCommand$ConsumerGroupService.prepareOffsetsToReset(ConsumerGroupCommand.scala:796)
>   at kafka.admin.ConsumerGroupCommand$ConsumerGroupService.$anonfun$resetOffsets$1(ConsumerGroupCommand.scala:437)
>   at scala.collection.IterableOnceOps.foldLeft(IterableOnce.scala:646)
>   at scala.collection.IterableOnceOps.foldLeft$(IterableOnce.scala:642)
>   at scala.collection.AbstractIterable.foldLeft(Iterable.scala:919)
>   at kafka.admin.ConsumerGroupCommand$ConsumerGroupService.resetOffsets(ConsumerGroupCommand.scala:432)
>   at kafka.admin.ConsumerGroupCommand$.run(ConsumerGroupCommand.scala:76)
>   at kafka.admin.ConsumerGroupCommand$.main(ConsumerGroupCommand.scala:59)
>   at kafka.admin.ConsumerGroupCommand.main(ConsumerGroupCommand.scala)
> Caused by: org.apache.kafka.common.errors.TimeoutException: Timed out waiting for a node assignment. Call: metadata
> ```
> I think it should throw TopicNotExistException/NotAuthorizeException instead.
> Thoughts: we should add a topic permission/existence check in the reset offset call https://github.com/apache/kafka/blob/3.1/core/src/main/scala/kafka/admin/ConsumerGroupCommand.scala#L421
> Let me know what you think!
> Thanks,
> Yujie
-- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Resolved] (KAFKA-12251) Add topic ID support to StopReplica
[ https://issues.apache.org/jira/browse/KAFKA-12251?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] dengziming resolved KAFKA-12251. Resolution: Won't Do > Add topic ID support to StopReplica > --- > > Key: KAFKA-12251 > URL: https://issues.apache.org/jira/browse/KAFKA-12251 > Project: Kafka > Issue Type: Sub-task >Reporter: dengziming >Assignee: dengziming >Priority: Major > > Remove topic name and Add topic id to StopReplicaReq and StopReplicaResp -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Created] (KAFKA-13637) Use default.api.timeout.ms config as default timeout for KafkaConsumer.endOffsets
dengziming created KAFKA-13637: -- Summary: Use default.api.timeout.ms config as default timeout for KafkaConsumer.endOffsets Key: KAFKA-13637 URL: https://issues.apache.org/jira/browse/KAFKA-13637 Project: Kafka Issue Type: Improvement Reporter: dengziming Assignee: dengziming In KafkaConsumer, we use `request.timeout.ms` in `endOffsets` and `default.api.timeout.ms` in `beginningOffsets`; we should use `default.api.timeout.ms` for both. -- This message was sent by Atlassian Jira (v8.20.1#820001)
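The behavior KAFKA-13637 asks for is the usual fallback pattern: an overload without an explicit timeout should use the configured `default.api.timeout.ms` (60000 ms by default for the consumer) rather than `request.timeout.ms`. A minimal sketch of that pattern, with illustrative values only:

```java
import java.time.Duration;
import java.util.Optional;

public class TimeoutDefaults {
    // 60s mirrors the consumer's documented default.api.timeout.ms default;
    // used here purely for illustration.
    static final Duration DEFAULT_API_TIMEOUT = Duration.ofMillis(60_000);

    // endOffsets/beginningOffsets overloads without an explicit timeout
    // should both resolve to the same configured default.
    static Duration effectiveTimeout(Optional<Duration> explicitTimeout) {
        return explicitTimeout.orElse(DEFAULT_API_TIMEOUT);
    }
}
```

The point of the ticket is symmetry: both offset-lookup methods should resolve their implicit timeout from the same config key.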
[jira] [Resolved] (KAFKA-13528) KRaft RegisterBroker should validate that the cluster ID matches
[ https://issues.apache.org/jira/browse/KAFKA-13528?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] dengziming resolved KAFKA-13528. Resolution: Fixed > KRaft RegisterBroker should validate that the cluster ID matches > > > Key: KAFKA-13528 > URL: https://issues.apache.org/jira/browse/KAFKA-13528 > Project: Kafka > Issue Type: Bug >Reporter: Colin McCabe >Priority: Major > -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Created] (KAFKA-13509) Support max timestamp in GetOffsetShell
dengziming created KAFKA-13509: -- Summary: Support max timestamp in GetOffsetShell Key: KAFKA-13509 URL: https://issues.apache.org/jira/browse/KAFKA-13509 Project: Kafka Issue Type: Sub-task Components: tools Reporter: dengziming Assignee: dengziming We should support listing the offset with the max timestamp using `kafka.tools.GetOffsetShell`:
{code}
bin/kafka-run-class.sh kafka.tools.GetOffsetShell --bootstrap-server localhost:9092 --topic topic1 --time -3
{code}
-- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Resolved] (KAFKA-13462) KRaft server does not return internal topics on list topics RPC
[ https://issues.apache.org/jira/browse/KAFKA-13462?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] dengziming resolved KAFKA-13462. Resolution: Invalid __consumer_offsets will not be created unless we commit offsets > KRaft server does not return internal topics on list topics RPC > --- > > Key: KAFKA-13462 > URL: https://issues.apache.org/jira/browse/KAFKA-13462 > Project: Kafka > Issue Type: Bug >Reporter: dengziming >Assignee: dengziming >Priority: Minor > -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Created] (KAFKA-13462) KRaft server does not return internal topics on list topics RPC
dengziming created KAFKA-13462: -- Summary: KRaft server does not return internal topics on list topics RPC Key: KAFKA-13462 URL: https://issues.apache.org/jira/browse/KAFKA-13462 Project: Kafka Issue Type: Bug Reporter: dengziming -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Created] (KAFKA-13316) Convert CreateTopicsRequestWithPolicyTest to use ClusterTest
dengziming created KAFKA-13316: -- Summary: Convert CreateTopicsRequestWithPolicyTest to use ClusterTest Key: KAFKA-13316 URL: https://issues.apache.org/jira/browse/KAFKA-13316 Project: Kafka Issue Type: Sub-task Reporter: dengziming Follows up for KAFKA-13279 -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (KAFKA-13242) KRaft Controller doesn't handle UpdateFeaturesRequest
dengziming created KAFKA-13242: -- Summary: KRaft Controller doesn't handle UpdateFeaturesRequest Key: KAFKA-13242 URL: https://issues.apache.org/jira/browse/KAFKA-13242 Project: Kafka Issue Type: Bug Reporter: dengziming -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (KAFKA-13228) ApiVersionRequest are not correctly handled in kraft mode
dengziming created KAFKA-13228: -- Summary: ApiVersionRequest are not correctly handled in kraft mode Key: KAFKA-13228 URL: https://issues.apache.org/jira/browse/KAFKA-13228 Project: Kafka Issue Type: Bug Reporter: dengziming Assignee: dengziming Fix For: 3.0.1 I'm trying to describe the quorum in KRaft mode but got `org.apache.kafka.common.errors.UnsupportedVersionException: The broker does not support DESCRIBE_QUORUM`. This happens because we only consider `ApiKeys.zkBrokerApis()` when we call `NodeApiVersions.create()` -- This message was sent by Atlassian Jira (v8.3.4#803005)
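The root cause named in KAFKA-13228 is that the advertised API set was always built from the ZK broker's API list, which omits the quorum APIs that KRaft listeners serve. A toy illustration of keying the advertised set by listener type; the enums here are stand-ins, not Kafka's ApiKeys or listener types:

```java
import java.util.EnumSet;
import java.util.Set;

public class ListenerApis {
    // Stand-in enums for illustration only.
    enum Api { METADATA, PRODUCE, FETCH, DESCRIBE_QUORUM }
    enum ListenerType { ZK_BROKER, KRAFT_BROKER, CONTROLLER }

    // Advertise the API set for the listener actually serving the request,
    // instead of always using the ZK broker's set.
    static Set<Api> supportedApis(ListenerType listener) {
        if (listener == ListenerType.ZK_BROKER) {
            // ZK-mode brokers do not serve the quorum APIs.
            return EnumSet.of(Api.METADATA, Api.PRODUCE, Api.FETCH);
        }
        return EnumSet.of(Api.METADATA, Api.PRODUCE, Api.FETCH, Api.DESCRIBE_QUORUM);
    }
}
```

With a ZK-only set, a client asking the KRaft quorum for DESCRIBE_QUORUM concludes the broker doesn't support it, which is exactly the UnsupportedVersionException in the report.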
[jira] [Resolved] (KAFKA-12701) NPE in MetadataRequest when using topic IDs
[ https://issues.apache.org/jira/browse/KAFKA-12701?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] dengziming resolved KAFKA-12701. Assignee: Justine Olshan (was: dengziming) Resolution: Fixed
> NPE in MetadataRequest when using topic IDs
> Key: KAFKA-12701
> URL: https://issues.apache.org/jira/browse/KAFKA-12701
> Project: Kafka
> Issue Type: Bug
> Affects Versions: 2.8.0
> Reporter: Travis Bischel
> Assignee: Justine Olshan
> Priority: Major
>
> Authorized result checking relies on topic name to not be null, which, when using topic IDs, it is.
> Unlike the logic in handleDeleteTopicsRequest, handleMetadataRequest does not check zk for the names corresponding to topic IDs if topic IDs are present.
> {noformat}
> [2021-04-21 05:53:01,463] ERROR [KafkaApi-1] Error when handling request: clientId=kgo, correlationId=1, api=METADATA, version=11, body=MetadataRequestData(topics=[MetadataRequestTopic(topicId=LmqOoFOASnqQp_4-oJgeKA, name=null)], allowAutoTopicCreation=false, includeClusterAuthorizedOperations=false, includeTopicAuthorizedOperations=false) (kafka.server.RequestHandlerHelper)
> java.lang.NullPointerException: name
>   at java.base/java.util.Objects.requireNonNull(Unknown Source)
>   at org.apache.kafka.common.resource.ResourcePattern.<init>(ResourcePattern.java:50)
>   at kafka.server.AuthHelper.$anonfun$filterByAuthorized$3(AuthHelper.scala:121)
>   at scala.collection.Iterator$$anon$9.next(Iterator.scala:575)
>   at scala.collection.mutable.Growable.addAll(Growable.scala:62)
>   at scala.collection.mutable.Growable.addAll$(Growable.scala:57)
>   at scala.collection.mutable.ArrayBuffer.addAll(ArrayBuffer.scala:142)
>   at scala.collection.mutable.ArrayBuffer.addAll(ArrayBuffer.scala:42)
>   at scala.collection.mutable.ArrayBuffer$.from(ArrayBuffer.scala:258)
>   at scala.collection.mutable.ArrayBuffer$.from(ArrayBuffer.scala:247)
>   at scala.collection.SeqFactory$Delegate.from(Factory.scala:306)
>   at scala.collection.IterableOnceOps.toBuffer(IterableOnce.scala:1270)
>   at scala.collection.IterableOnceOps.toBuffer$(IterableOnce.scala:1270)
>   at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1288)
>   at kafka.server.AuthHelper.filterByAuthorized(AuthHelper.scala:120)
>   at kafka.server.KafkaApis.handleTopicMetadataRequest(KafkaApis.scala:1146)
>   at kafka.server.KafkaApis.handle(KafkaApis.scala:170)
>   at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:74)
>   at java.base/java.lang.Thread.run(Unknown Source)
> [2021-04-21 05:53:01,464] ERROR [Kafka Request Handler 1 on Broker 1], Exception when handling request (kafka.server.KafkaRequestHandler)
> java.lang.NullPointerException
>   at org.apache.kafka.common.message.MetadataResponseData$MetadataResponseTopic.addSize(MetadataResponseData.java:1247)
>   at org.apache.kafka.common.message.MetadataResponseData.addSize(MetadataResponseData.java:417)
>   at org.apache.kafka.common.protocol.SendBuilder.buildSend(SendBuilder.java:218)
>   at org.apache.kafka.common.protocol.SendBuilder.buildResponseSend(SendBuilder.java:200)
>   at org.apache.kafka.common.requests.AbstractResponse.toSend(AbstractResponse.java:43)
>   at org.apache.kafka.common.requests.RequestContext.buildResponseSend(RequestContext.java:111)
>   at kafka.network.RequestChannel$Request.buildResponseSend(RequestChannel.scala:132)
>   at kafka.server.RequestHandlerHelper.sendResponse(RequestHandlerHelper.scala:185)
>   at kafka.server.RequestHandlerHelper.sendErrorOrCloseConnection(RequestHandlerHelper.scala:155)
>   at kafka.server.RequestHandlerHelper.sendErrorResponseMaybeThrottle(RequestHandlerHelper.scala:109)
>   at kafka.server.RequestHandlerHelper.handleError(RequestHandlerHelper.scala:79)
>   at kafka.server.KafkaApis.handle(KafkaApis.scala:229)
>   at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:74)
>   at java.base/java.lang.Thread.run(Unknown Source)
> {noformat}
-- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Resolved] (KAFKA-12155) Delay increasing the log start offset
[ https://issues.apache.org/jira/browse/KAFKA-12155?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] dengziming resolved KAFKA-12155. Resolution: Fixed > Delay increasing the log start offset > - > > Key: KAFKA-12155 > URL: https://issues.apache.org/jira/browse/KAFKA-12155 > Project: Kafka > Issue Type: Sub-task > Components: replication >Reporter: Jose Armando Garcia Sancio >Assignee: David Arthur >Priority: Major > > The implementation in [https://github.com/apache/kafka/pull/9816] increases > the log start offset as soon as a snapshot is created that is greater than > the log start offset. This is correct but inefficient in some cases. > # Any follower, voter or observer, with an end offset between the leader's > log start offset and the leader's latest snapshot will get invalidated. This > will cause those followers to fetch the new snapshot and reload their state > machines. > # Any {{Listener}} or state machine that has a {{nextExpectedOffset()}} less > than the latest snapshot will get invalidated. This will cause the state > machine to reload its state from the latest snapshot. > To minimize the frequency of these reloads, KIP-630 proposes adding the > following configuration: > * {{metadata.start.offset.lag.time.max.ms}} - The maximum amount of time > that the leader will wait for an offset to get replicated to all of the live > replicas before advancing the {{LogStartOffset}}. See section “When to > Increase the LogStartOffset”. The default is 7 days. > This description and implementation should be extended to also apply to the > state machine, or {{Listener}}. The local log start offset should be > increased when all of the {{ListenerContext}}s' {{nextExpectedOffset()}} values are > greater than the offset of the latest snapshot. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Resolved] (KAFKA-13042) Flaky test KafkaMetadataLogTest.testDeleteSnapshots()
[ https://issues.apache.org/jira/browse/KAFKA-13042?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] dengziming resolved KAFKA-13042. Resolution: Fixed > Flaky test KafkaMetadataLogTest.testDeleteSnapshots() > - > > Key: KAFKA-13042 > URL: https://issues.apache.org/jira/browse/KAFKA-13042 > Project: Kafka > Issue Type: Bug >Reporter: dengziming >Priority: Major > > This test fails many times but succeeds locally. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (KAFKA-13042) Flaky test KafkaMetadataLogTest.testDeleteSnapshots()
dengziming created KAFKA-13042: -- Summary: Flaky test KafkaMetadataLogTest.testDeleteSnapshots() Key: KAFKA-13042 URL: https://issues.apache.org/jira/browse/KAFKA-13042 Project: Kafka Issue Type: Bug Reporter: dengziming This test fails many times but succeeds locally. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Resolved] (KAFKA-10898) Support snakeCaseName/camelCaseName JSON field name in JsonConverterGenerator
[ https://issues.apache.org/jira/browse/KAFKA-10898?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] dengziming resolved KAFKA-10898. Resolution: Won't Fix > Support snakeCaseName/camelCaseName JSON field name in JsonConverterGenerator > - > > Key: KAFKA-10898 > URL: https://issues.apache.org/jira/browse/KAFKA-10898 > Project: Kafka > Issue Type: Improvement > Components: protocol >Reporter: dengziming >Assignee: dengziming >Priority: Minor > > There are many JSON-format command line params, for example > `kafka-reassign-partitions.sh --reassignment-json-file my.json`, which we could > read and write using JsonConverter. > However, we currently use camelCaseName when converting protocol data to > JSON, while most JSON-format command line params use snakeCaseName, so we > should support snakeCaseName in JsonConverterGenerator. > In the post-KIP-500 world, we will move all configs in zookeeper to > kafka-raft, and the data in zookeeper is also stored as snakeCaseName JSON, > so it's useful to support snakeCaseName in JsonConverterGenerator in advance. -- This message was sent by Atlassian Jira (v8.3.4#803005)
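The camelCase-to-snake_case mapping the issue asks for can be sketched in a few lines. This is a hypothetical, standalone illustration; `toSnakeCase` is not an actual JsonConverterGenerator method:

```java
public class SnakeCaseExample {
    // Convert a camelCaseName field name to its snake_case_name form by
    // lowering each uppercase letter and prefixing it with an underscore.
    static String toSnakeCase(String camel) {
        StringBuilder sb = new StringBuilder();
        for (char c : camel.toCharArray()) {
            if (Character.isUpperCase(c)) {
                sb.append('_').append(Character.toLowerCase(c));
            } else {
                sb.append(c);
            }
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(toSnakeCase("replicationFactor"));      // replication_factor
        System.out.println(toSnakeCase("allowAutoTopicCreation")); // allow_auto_topic_creation
    }
}
```

A generator supporting both styles would pick one of the two spellings per field based on a flag in the message spec.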
[jira] [Created] (KAFKA-12903) Replace producer state entry with auto-generated protocol
dengziming created KAFKA-12903: -- Summary: Replace producer state entry with auto-generated protocol Key: KAFKA-12903 URL: https://issues.apache.org/jira/browse/KAFKA-12903 Project: Kafka Issue Type: Improvement Reporter: dengziming Assignee: dengziming -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (KAFKA-12902) Add UINT32 type in generator
dengziming created KAFKA-12902: -- Summary: Add UINT32 type in generator Key: KAFKA-12902 URL: https://issues.apache.org/jira/browse/KAFKA-12902 Project: Kafka Issue Type: Improvement Reporter: dengziming Assignee: dengziming We support uint32 in Struct but don't support uint32 in the generated protocol. -- This message was sent by Atlassian Jira (v8.3.4#803005)
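Since Java has no unsigned 32-bit integer type, a uint32 field is typically carried in a `long` and written as four bytes. A minimal sketch of the idea (the helper names here are illustrative, not the generator's actual API):

```java
import java.nio.ByteBuffer;

public class Uint32Example {
    // Write the low 32 bits of a long as a uint32.
    static void writeUnsignedInt(ByteBuffer buffer, long value) {
        buffer.putInt((int) (value & 0xFFFFFFFFL)); // truncate to 32 bits
    }

    // Read 4 bytes back into a long without sign extension.
    static long readUnsignedInt(ByteBuffer buffer) {
        return buffer.getInt() & 0xFFFFFFFFL;
    }

    // Serialize then deserialize one value, to show the representation is lossless.
    static long roundTrip(long value) {
        ByteBuffer buf = ByteBuffer.allocate(4);
        writeUnsignedInt(buf, value);
        buf.flip();
        return readUnsignedInt(buf);
    }

    public static void main(String[] args) {
        long value = 4_000_000_000L; // does not fit in a signed int
        System.out.println(roundTrip(value)); // 4000000000
    }
}
```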
[jira] [Resolved] (KAFKA-10195) Move offset management codes from ConsumerCoordinator to a new class
[ https://issues.apache.org/jira/browse/KAFKA-10195?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] dengziming resolved KAFKA-10195. Resolution: Won't Fix > Move offset management codes from ConsumerCoordinator to a new class > > > Key: KAFKA-10195 > URL: https://issues.apache.org/jira/browse/KAFKA-10195 > Project: Kafka > Issue Type: Improvement > Components: clients, consumer >Reporter: dengziming >Assignee: dengziming >Priority: Minor > > ConsumerCoordinator has 2 main functions: > # partition assignment > # offset management > We are adding some new features to it; for example, KAFKA-9657 added a field > `throwOnFetchStableOffsetsUnsupported` which is only used in offset management. > The 2 functions barely interact with each other, so it's not wise > to put this code in one single class; can we try to move the offset management > code to a new class? > For example, the below fields are only used in offset management: > ``` > // can be moved to another class directly > private final OffsetCommitCallback defaultOffsetCommitCallback; > private final ConsumerInterceptors interceptors; > private final AtomicInteger pendingAsyncCommits; > private final ConcurrentLinkedQueue > completedOffsetCommits; > private AtomicBoolean asyncCommitFenced; > private final boolean throwOnFetchStableOffsetsUnsupported; > private PendingCommittedOffsetRequest pendingCommittedOffsetRequest = null; > > // used in `onJoinComplete` but can also be moved out. > private final boolean autoCommitEnabled; > private final int autoCommitIntervalMs; > private Timer nextAutoCommitTimer; > ``` > So we can just create a new class `OffsetManageCoordinator` and move the > related code into it. Similarly, a new class `SubscribeManager` can also be > created. Here is the UML class diagram: > !image-2020-06-28-19-50-26-570.png! > > The above is the current design, in which KafkaConsumer interacts with the coordinator directly. The below is the new design: we add a `ConsumerCoordinatorFacade` > in which we put an `OffsetCoordinator` and a `SubscribeCoordinator` to manage > offsets and assignment respectively. Both `OffsetCoordinator` and > `SubscribeCoordinator` need an `AbstractCoordinator` because they still interact > with each other (even if rarely). > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Resolved] (KAFKA-12772) Move all TransactionState transition rules into their states
[ https://issues.apache.org/jira/browse/KAFKA-12772?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] dengziming resolved KAFKA-12772. Resolution: Fixed > Move all TransactionState transition rules into their states > > > Key: KAFKA-12772 > URL: https://issues.apache.org/jira/browse/KAFKA-12772 > Project: Kafka > Issue Type: Improvement >Reporter: dengziming >Assignee: dengziming >Priority: Minor > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (KAFKA-12772) Move all TransactionState transition rules into their states
dengziming created KAFKA-12772: -- Summary: Move all TransactionState transition rules into their states Key: KAFKA-12772 URL: https://issues.apache.org/jira/browse/KAFKA-12772 Project: Kafka Issue Type: Improvement Reporter: dengziming Assignee: dengziming -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Resolved] (KAFKA-10783) Rewrite TopicPartitionStateZNode struct with auto-generated protocol
[ https://issues.apache.org/jira/browse/KAFKA-10783?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] dengziming resolved KAFKA-10783. Resolution: Won't Do > Rewrite TopicPartitionStateZNode struct with auto-generated protocol > > > Key: KAFKA-10783 > URL: https://issues.apache.org/jira/browse/KAFKA-10783 > Project: Kafka > Issue Type: Sub-task >Reporter: dengziming >Assignee: dengziming >Priority: Major > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Resolved] (KAFKA-10785) Rewrite ConfigEntityChangeNotificationSequenceZNode struct with auto-generated protocol
[ https://issues.apache.org/jira/browse/KAFKA-10785?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] dengziming resolved KAFKA-10785. Resolution: Won't Do > Rewrite ConfigEntityChangeNotificationSequenceZNode struct with > auto-generated protocol > --- > > Key: KAFKA-10785 > URL: https://issues.apache.org/jira/browse/KAFKA-10785 > Project: Kafka > Issue Type: Sub-task > Components: protocol >Reporter: dengziming >Assignee: dengziming >Priority: Major > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Resolved] (KAFKA-10782) Rewrite TopicZNode struct with auto-generated protocol
[ https://issues.apache.org/jira/browse/KAFKA-10782?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] dengziming resolved KAFKA-10782. Resolution: Won't Do > Rewrite TopicZNode struct with auto-generated protocol > -- > > Key: KAFKA-10782 > URL: https://issues.apache.org/jira/browse/KAFKA-10782 > Project: Kafka > Issue Type: Sub-task > Components: protocol >Reporter: dengziming >Assignee: dengziming >Priority: Major > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Resolved] (KAFKA-10784) Rewrite ConfigEntityZNode struct with auto-generated protocol
[ https://issues.apache.org/jira/browse/KAFKA-10784?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] dengziming resolved KAFKA-10784. Resolution: Won't Do > Rewrite ConfigEntityZNode struct with auto-generated protocol > - > > Key: KAFKA-10784 > URL: https://issues.apache.org/jira/browse/KAFKA-10784 > Project: Kafka > Issue Type: Sub-task > Components: protocol >Reporter: dengziming >Assignee: dengziming >Priority: Major > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Resolved] (KAFKA-10781) Rewrite BrokerIdZNode struct with auto-generated protocol
[ https://issues.apache.org/jira/browse/KAFKA-10781?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] dengziming resolved KAFKA-10781. Resolution: Won't Fix > Rewrite BrokerIdZNode struct with auto-generated protocol > -- > > Key: KAFKA-10781 > URL: https://issues.apache.org/jira/browse/KAFKA-10781 > Project: Kafka > Issue Type: Sub-task > Components: protocol >Reporter: dengziming >Assignee: dengziming >Priority: Major > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Resolved] (KAFKA-12607) Allow votes to be granted in resigned state
[ https://issues.apache.org/jira/browse/KAFKA-12607?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] dengziming resolved KAFKA-12607. Resolution: Fixed > Allow votes to be granted in resigned state > --- > > Key: KAFKA-12607 > URL: https://issues.apache.org/jira/browse/KAFKA-12607 > Project: Kafka > Issue Type: Sub-task >Reporter: Jason Gustafson >Assignee: dengziming >Priority: Major > Labels: newbie++ > > When the leader is shutting down, it transitions to a resigned state. > Currently all votes are rejected in this state, but we should allow the > resigned leader to help a candidate get elected. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (KAFKA-12609) Rewrite ListOffsets using AdminApiDriver
dengziming created KAFKA-12609: -- Summary: Rewrite ListOffsets using AdminApiDriver Key: KAFKA-12609 URL: https://issues.apache.org/jira/browse/KAFKA-12609 Project: Kafka Issue Type: Improvement Reporter: dengziming Assignee: dengziming See here: https://github.com/apache/kafka/pull/10275#issuecomment-806331897 -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (KAFKA-12539) Move some logic in handleVoteRequest to EpochState
dengziming created KAFKA-12539: -- Summary: Move some logic in handleVoteRequest to EpochState Key: KAFKA-12539 URL: https://issues.apache.org/jira/browse/KAFKA-12539 Project: Kafka Issue Type: Improvement Reporter: dengziming Reduce the cyclomatic complexity of KafkaRaftClient, see the comment for details: https://github.com/apache/kafka/pull/10289#discussion_r597274570 -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Resolved] (KAFKA-10821) Send cluster id information with the FetchSnapshot request
[ https://issues.apache.org/jira/browse/KAFKA-10821?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] dengziming resolved KAFKA-10821. Resolution: Duplicate > Send cluster id information with the FetchSnapshot request > -- > > Key: KAFKA-10821 > URL: https://issues.apache.org/jira/browse/KAFKA-10821 > Project: Kafka > Issue Type: Sub-task >Reporter: Jose Armando Garcia Sancio >Assignee: Rohit Deshpande >Priority: Major > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (KAFKA-12511) Flaky test DynamicConnectionQuotaTest.testDynamicListenerConnectionCreationRateQuota()
dengziming created KAFKA-12511: -- Summary: Flaky test DynamicConnectionQuotaTest.testDynamicListenerConnectionCreationRateQuota() Key: KAFKA-12511 URL: https://issues.apache.org/jira/browse/KAFKA-12511 Project: Kafka Issue Type: Bug Reporter: dengziming First time: Listener PLAINTEXT connection rate 14.419389476913636 must be below 14.399 ==> expected: but was: Second time: Listener EXTERNAL connection rate 10.998243336133811 must be below 10.799 ==> expected: but was: details: https://ci-builds.apache.org/job/Kafka/job/kafka-pr/job/PR-10289/4/testReport/junit/kafka.network/DynamicConnectionQuotaTest/Build___JDK_11___testDynamicListenerConnectionCreationRateQuota__/ -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (KAFKA-12489) Flaky test ControllerIntegrationTest.testPartitionReassignmentToBrokerWithOfflineLogDir
dengziming created KAFKA-12489: -- Summary: Flaky test ControllerIntegrationTest.testPartitionReassignmentToBrokerWithOfflineLogDir Key: KAFKA-12489 URL: https://issues.apache.org/jira/browse/KAFKA-12489 Project: Kafka Issue Type: Bug Components: system tests Reporter: dengziming org.opentest4j.AssertionFailedError: expected: but was: at org.junit.jupiter.api.AssertionUtils.fail(AssertionUtils.java:55) at org.junit.jupiter.api.AssertTrue.assertTrue(AssertTrue.java:40) at org.junit.jupiter.api.AssertTrue.assertTrue(AssertTrue.java:35) at org.junit.jupiter.api.Assertions.assertTrue(Assertions.java:162) at kafka.utils.TestUtils$.causeLogDirFailure(TestUtils.scala:1251) at kafka.controller.ControllerIntegrationTest.testPartitionReassignmentToBrokerWithOfflineLogDir(ControllerIntegrationTest.scala:329) details: [https://ci-builds.apache.org/job/Kafka/job/kafka-pr/job/PR-10289/2/testReport/junit/kafka.controller/ControllerIntegrationTest/Build___JDK_11___testPartitionReassignmentToBrokerWithOfflineLogDir__/] -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (KAFKA-12465) Decide whether inconsistent cluster id errors are fatal
dengziming created KAFKA-12465: -- Summary: Decide whether inconsistent cluster id errors are fatal Key: KAFKA-12465 URL: https://issues.apache.org/jira/browse/KAFKA-12465 Project: Kafka Issue Type: Sub-task Reporter: dengziming Currently, we just log an error when an inconsistent cluster id occurs. We should set a window during startup in which these errors are fatal; after that window, we no longer treat them as fatal. -- This message was sent by Atlassian Jira (v8.3.4#803005)
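The proposed startup-window policy can be sketched as a small predicate on elapsed time. This is a hypothetical illustration of the idea, not actual Kafka code; the class and method names are invented:

```java
public class ClusterIdErrorPolicy {
    private final long startupTimeMs;
    private final long fatalWindowMs;

    public ClusterIdErrorPolicy(long startupTimeMs, long fatalWindowMs) {
        this.startupTimeMs = startupTimeMs;
        this.fatalWindowMs = fatalWindowMs;
    }

    // An inconsistent cluster id is fatal only while we are still
    // inside the startup window; afterwards it is merely logged.
    public boolean isFatal(long nowMs) {
        return nowMs - startupTimeMs <= fatalWindowMs;
    }

    public static void main(String[] args) {
        ClusterIdErrorPolicy policy = new ClusterIdErrorPolicy(0L, 10_000L);
        System.out.println(policy.isFatal(5_000L));  // true: still inside the window
        System.out.println(policy.isFatal(20_000L)); // false: only logged now
    }
}
```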
[jira] [Created] (KAFKA-12398) Fix flaky test `ConsumerBounceTest.testClose`
dengziming created KAFKA-12398: -- Summary: Fix flaky test `ConsumerBounceTest.testClose` Key: KAFKA-12398 URL: https://issues.apache.org/jira/browse/KAFKA-12398 Project: Kafka Issue Type: Improvement Reporter: dengziming Assignee: dengziming Attachments: image-2021-03-02-14-22-34-367.png Sometimes it failed with the following error: !image-2021-03-02-14-22-34-367.png! -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (KAFKA-12388) Share broker channel between alterIsrManager and lifecycleManager
dengziming created KAFKA-12388: -- Summary: Share broker channel between alterIsrManager and lifecycleManager Key: KAFKA-12388 URL: https://issues.apache.org/jira/browse/KAFKA-12388 Project: Kafka Issue Type: Sub-task Reporter: dengziming There are several BrokerToControllerChannelManager instances in BrokerServer and KafkaServer. We are planning to consolidate them into two channels eventually: # broker to controller channel # client to controller channel Auto topic creation and forwarding fall into the 2nd category, while AlterIsr, lifecycleManager, and logDirEventManager (see KAFKA-9837) fall into the 1st. KAFKA-10348 is consolidating the 2nd category; this task consolidates the 1st. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (KAFKA-12338) The code of MetadataRecordSerde duplicate with MetadataParser
dengziming created KAFKA-12338: -- Summary: The code of MetadataRecordSerde duplicate with MetadataParser Key: KAFKA-12338 URL: https://issues.apache.org/jira/browse/KAFKA-12338 Project: Kafka Issue Type: Improvement Reporter: dengziming Assignee: dengziming For example: MetadataRecordSerde.recordSize ``` size += ByteUtils.sizeOfUnsignedVarint(data.message().apiKey()); size += ByteUtils.sizeOfUnsignedVarint(data.version()); size += data.message().size(serializationCache, data.version()); ``` MetadataParser.size ``` long messageSize = message.size(cache, version); long totalSize = messageSize + ByteUtils.sizeOfUnsignedVarint(message.apiKey()) + ByteUtils.sizeOfUnsignedVarint(version); ``` We can see that the logic is duplicated, except that `MetadataRecordSerde` has an extra `DEFAULT_FRAME_VERSION`. If we want to change the serde format of metadata, we have to modify 2 classes, which is unreasonable. -- This message was sent by Atlassian Jira (v8.3.4#803005)
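Both snippets compute the same framing: the varint-encoded sizes of the apiKey and version plus the payload size. A hedged sketch of factoring that into one shared helper (`framedSize` is an illustrative name, not an existing Kafka method; the varint sizing mirrors the 7-bits-per-byte scheme of `ByteUtils.sizeOfUnsignedVarint`):

```java
public class RecordSizeExample {
    // Number of bytes an unsigned varint takes: 7 payload bits per byte.
    static int sizeOfUnsignedVarint(int value) {
        int bytes = 1;
        while ((value & 0xFFFFFF80) != 0) {
            bytes++;
            value >>>= 7;
        }
        return bytes;
    }

    // The shared computation both MetadataRecordSerde.recordSize and
    // MetadataParser.size could delegate to: apiKey + version varints + payload.
    static int framedSize(int apiKey, int version, int messageSize) {
        return sizeOfUnsignedVarint(apiKey)
            + sizeOfUnsignedVarint(version)
            + messageSize;
    }

    public static void main(String[] args) {
        System.out.println(sizeOfUnsignedVarint(127)); // 1
        System.out.println(sizeOfUnsignedVarint(128)); // 2
        System.out.println(framedSize(3, 0, 100));     // 102
    }
}
```

With one such helper, a change to the serde framing would touch a single place instead of two classes.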
[jira] [Created] (KAFKA-12277) Replace MetadataResponse.TopicMetadata with auto-generated protocol
dengziming created KAFKA-12277: -- Summary: Replace MetadataResponse.TopicMetadata with auto-generated protocol Key: KAFKA-12277 URL: https://issues.apache.org/jira/browse/KAFKA-12277 Project: Kafka Issue Type: Improvement Components: clients Reporter: dengziming Assignee: dengziming Follow KAFKA-12269 -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (KAFKA-12269) Replace MetadataResponse.PartitionMetadata with auto-generated protocol
dengziming created KAFKA-12269: -- Summary: Replace MetadataResponse.PartitionMetadata with auto-generated protocol Key: KAFKA-12269 URL: https://issues.apache.org/jira/browse/KAFKA-12269 Project: Kafka Issue Type: Improvement Components: clients Reporter: dengziming Assignee: dengziming MetadataResponse.PartitionMetadata is almost the same as MetadataResponsePartition so we can replace it. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (KAFKA-12251) Add topic ID support to StopReplica
dengziming created KAFKA-12251: -- Summary: Add topic ID support to StopReplica Key: KAFKA-12251 URL: https://issues.apache.org/jira/browse/KAFKA-12251 Project: Kafka Issue Type: Sub-task Reporter: dengziming Assignee: dengziming Remove the topic name and add the topic ID to StopReplicaRequest and StopReplicaResponse. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (KAFKA-12233) `FileRecords.writeTo` set length incorrectly
dengziming created KAFKA-12233: -- Summary: `FileRecords.writeTo` set length incorrectly Key: KAFKA-12233 URL: https://issues.apache.org/jira/browse/KAFKA-12233 Project: Kafka Issue Type: Bug Reporter: dengziming Assignee: dengziming [https://github.com/apache/kafka/pull/2140/files#r563471404] we set `int count = Math.min(length, oldSize)`, but we are expected to write from `offset`, so the count should be `Math.min(length, oldSize - offset)`. -- This message was sent by Atlassian Jira (v8.3.4#803005)
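The off-by-offset arithmetic in KAFKA-12233 can be shown in isolation. This is a standalone illustration of the count computation, not the actual `FileRecords` code:

```java
public class WriteToCount {
    // Buggy: ignores the starting offset, so the count can run past
    // the end of the file when offset > 0.
    static int buggyCount(int length, int oldSize, int offset) {
        return Math.min(length, oldSize);
    }

    // Fixed: only oldSize - offset bytes remain after the starting offset.
    static int fixedCount(int length, int oldSize, int offset) {
        return Math.min(length, oldSize - offset);
    }

    public static void main(String[] args) {
        // A 100-byte file, writing from offset 60 with requested length 80:
        System.out.println(buggyCount(80, 100, 60)); // 80 -- overruns the file
        System.out.println(fixedCount(80, 100, 60)); // 40 -- only 40 bytes remain
    }
}
```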
[jira] [Resolved] (KAFKA-10880) Fetch offset 2587113 is out of range for partition $TOPIC_NAME-0, resetting offset
[ https://issues.apache.org/jira/browse/KAFKA-10880?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] dengziming resolved KAFKA-10880. Resolution: Duplicate Duplicate of KAFKA-10181 > Fetch offset 2587113 is out of range for partition $TOPIC_NAME-0, resetting > offset > -- > > Key: KAFKA-10880 > URL: https://issues.apache.org/jira/browse/KAFKA-10880 > Project: Kafka > Issue Type: Bug > Components: clients, consumer >Affects Versions: 1.1.0 >Reporter: Rupesh >Priority: Blocker > > Getting an out-of-range exception for partition 0. We have 3 partitions in > total; while consuming from partition 0 we see this error and the offset > resets to a lower value. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (KAFKA-10898) Support snakeCaseName/camelCaseName JSON field name in JsonConverterGenerator
dengziming created KAFKA-10898: -- Summary: Support snakeCaseName/camelCaseName JSON field name in JsonConverterGenerator Key: KAFKA-10898 URL: https://issues.apache.org/jira/browse/KAFKA-10898 Project: Kafka Issue Type: Improvement Reporter: dengziming Assignee: dengziming There are many JSON-format command line params, for example `kafka-reassign-partitions.sh --reassignment-json-file my.json`, which we could read and write using JsonConverter. However, we currently use camelCaseName when converting protocol data to JSON, while most JSON-format command line params use snakeCaseName, so we should support snakeCaseName in JsonConverterGenerator. In the post-KIP-500 world, we will move all configs in zookeeper to kafka-raft, and the data in zookeeper is also stored as snakeCaseName JSON, so it's useful to support snakeCaseName in JsonConverterGenerator in advance. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Resolved] (KAFKA-10547) Add topic IDs to MetadataResponse
[ https://issues.apache.org/jira/browse/KAFKA-10547?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] dengziming resolved KAFKA-10547. Resolution: Done > Add topic IDs to MetadataResponse > - > > Key: KAFKA-10547 > URL: https://issues.apache.org/jira/browse/KAFKA-10547 > Project: Kafka > Issue Type: Sub-task >Reporter: Justine Olshan >Assignee: dengziming >Priority: Major > > Will be able to use TopicDescription to identify the topic ID -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (KAFKA-10864) Convert End Transaction Marker to use auto-generated protocol
dengziming created KAFKA-10864: -- Summary: Convert End Transaction Marker to use auto-generated protocol Key: KAFKA-10864 URL: https://issues.apache.org/jira/browse/KAFKA-10864 Project: Kafka Issue Type: Improvement Reporter: dengziming Assignee: dengziming Similar to other issues such as KAFKA-10497 -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (KAFKA-10863) Convert CONTROL_RECORD_KEY_SCHEMA_VERSION to use auto-generated protocol
dengziming created KAFKA-10863: -- Summary: Convert CONTROL_RECORD_KEY_SCHEMA_VERSION to use auto-generated protocol Key: KAFKA-10863 URL: https://issues.apache.org/jira/browse/KAFKA-10863 Project: Kafka Issue Type: Improvement Reporter: dengziming Assignee: dengziming Similar to other issues such as KAFKA-10497 -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Resolved] (KAFKA-10858) Convert connect protocol header schemas to use generated protocol
[ https://issues.apache.org/jira/browse/KAFKA-10858?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] dengziming resolved KAFKA-10858. Resolution: Duplicate > Convert connect protocol header schemas to use generated protocol > - > > Key: KAFKA-10858 > URL: https://issues.apache.org/jira/browse/KAFKA-10858 > Project: Kafka > Issue Type: Improvement > Components: protocol >Reporter: dengziming >Assignee: dengziming >Priority: Major > > Manually managed schema code in ConnectProtocol and > IncrementalCooperativeConnectProtocol should be replaced by the auto-generated > protocol. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (KAFKA-10858) Convert connect protocol header schemas to use generated protocol
dengziming created KAFKA-10858: -- Summary: Convert connect protocol header schemas to use generated protocol Key: KAFKA-10858 URL: https://issues.apache.org/jira/browse/KAFKA-10858 Project: Kafka Issue Type: Improvement Components: protocol Reporter: dengziming Assignee: dengziming Manually managed schema code in IncrementalCooperativeConnectProtocol should be replaced by the auto-generated protocol. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (KAFKA-10856) Convert sticky assignor userData schemas to use generated protocol
dengziming created KAFKA-10856: -- Summary: Convert sticky assignor userData schemas to use generated protocol Key: KAFKA-10856 URL: https://issues.apache.org/jira/browse/KAFKA-10856 Project: Kafka Issue Type: Improvement Reporter: dengziming Assignee: dengziming Similar to other issues such as KAFKA-10497 -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (KAFKA-10849) Remove useless ApiKeys#parseResponse and ApiKeys#parseRequest
dengziming created KAFKA-10849: -- Summary: Remove useless ApiKeys#parseResponse and ApiKeys#parseRequest Key: KAFKA-10849 URL: https://issues.apache.org/jira/browse/KAFKA-10849 Project: Kafka Issue Type: Improvement Reporter: dengziming Assignee: dengziming KAFKA-10818 removed the conversion to Struct when parsing responses; {{ApiVersionsResponse.parse}} will be used instead of {{API_VERSIONS.parseResponse}}, so the override in {{ApiKeys.API_VERSIONS}} is useless, and in the same way `ApiKeys#parseResponse` and `ApiKeys#parseRequest` can also be removed. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (KAFKA-10845) Introduce a `VisibleForTesting` annotation
dengziming created KAFKA-10845: -- Summary: Introduce a `VisibleForTesting` annotation Key: KAFKA-10845 URL: https://issues.apache.org/jira/browse/KAFKA-10845 Project: Kafka Issue Type: Improvement Components: clients Reporter: dengziming Assignee: dengziming There is so much code marked "visible for testing" or "public for testing" that it's better to introduce a {{VisibleForTesting}} annotation. -- This message was sent by Atlassian Jira (v8.3.4#803005)
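Such an annotation is tiny to define. A minimal sketch of what it could look like, assuming runtime retention so tooling and tests can detect it (the class and method names around it are invented for illustration):

```java
import java.lang.annotation.*;
import java.lang.reflect.Method;

public class VisibleForTestingExample {
    // Marks members whose visibility was widened only so tests can reach them.
    @Documented
    @Retention(RetentionPolicy.RUNTIME)
    @Target({ElementType.TYPE, ElementType.METHOD, ElementType.FIELD, ElementType.CONSTRUCTOR})
    public @interface VisibleForTesting {}

    public static class Cache {
        // Public only so tests can reset state between cases.
        @VisibleForTesting
        public void clearForTesting() { /* ... */ }
    }

    // Reflectively check that the marker is present on the widened method.
    static boolean isMarked() {
        try {
            Method m = Cache.class.getMethod("clearForTesting");
            return m.isAnnotationPresent(VisibleForTesting.class);
        } catch (NoSuchMethodException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(isMarked()); // true
    }
}
```

With SOURCE retention instead, the annotation would carry no runtime cost and serve purely as documentation, which is how Guava's version of this marker works.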
[jira] [Resolved] (KAFKA-10780) Rewrite ControllerZNode struct with auto-generated protocol
[ https://issues.apache.org/jira/browse/KAFKA-10780?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] dengziming resolved KAFKA-10780. Resolution: Won't Do KIP-500 will replace all this code. > Rewrite ControllerZNode struct with auto-generated protocol > > > Key: KAFKA-10780 > URL: https://issues.apache.org/jira/browse/KAFKA-10780 > Project: Kafka > Issue Type: Sub-task > Components: protocol >Reporter: dengziming >Assignee: dengziming >Priority: Major > > Use the auto-generated protocol to rewrite the ZK controller node -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (KAFKA-10785) Rewrite ConfigEntityChangeNotificationSequenceZNode struct with auto-generated protocol
dengziming created KAFKA-10785: -- Summary: Rewrite ConfigEntityChangeNotificationSequenceZNode struct with auto-generated protocol Key: KAFKA-10785 URL: https://issues.apache.org/jira/browse/KAFKA-10785 Project: Kafka Issue Type: Sub-task Components: protocol Reporter: dengziming Assignee: dengziming -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (KAFKA-10784) Rewrite ConfigEntityZNode struct with auto-generated protocol
dengziming created KAFKA-10784: -- Summary: Rewrite ConfigEntityZNode struct with auto-generated protocol Key: KAFKA-10784 URL: https://issues.apache.org/jira/browse/KAFKA-10784 Project: Kafka Issue Type: Sub-task Components: protocol Reporter: dengziming Assignee: dengziming -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (KAFKA-10783) Rewrite TopicPartitionStateZNode struct with auto-generated protocol
dengziming created KAFKA-10783: -- Summary: Rewrite TopicPartitionStateZNode struct with auto-generated protocol Key: KAFKA-10783 URL: https://issues.apache.org/jira/browse/KAFKA-10783 Project: Kafka Issue Type: Sub-task Reporter: dengziming Assignee: dengziming -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (KAFKA-10782) Rewrite TopicZNode struct with auto-generated protocol
dengziming created KAFKA-10782: -- Summary: Rewrite TopicZNode struct with auto-generated protocol Key: KAFKA-10782 URL: https://issues.apache.org/jira/browse/KAFKA-10782 Project: Kafka Issue Type: Sub-task Components: protocol Reporter: dengziming Assignee: dengziming -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (KAFKA-10781) Rewrite BrokerIdZNode struct with auto-generated protocol
dengziming created KAFKA-10781: -- Summary: Rewrite BrokerIdZNode struct with auto-generated protocol Key: KAFKA-10781 URL: https://issues.apache.org/jira/browse/KAFKA-10781 Project: Kafka Issue Type: Sub-task Components: protocol Reporter: dengziming Assignee: dengziming -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (KAFKA-10780) Rewrite ControllerZNode struct with auto-generated protocol
dengziming created KAFKA-10780: -- Summary: Rewrite ControllerZNode struct with auto-generated protocol Key: KAFKA-10780 URL: https://issues.apache.org/jira/browse/KAFKA-10780 Project: Kafka Issue Type: Sub-task Components: protocol Reporter: dengziming Assignee: dengziming Use the auto-generated protocol to rewrite the ZK controller node -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (KAFKA-10774) Support Describe topic using topic IDs
dengziming created KAFKA-10774: -- Summary: Support Describe topic using topic IDs Key: KAFKA-10774 URL: https://issues.apache.org/jira/browse/KAFKA-10774 Project: Kafka Issue Type: Sub-task Reporter: dengziming Assignee: dengziming Similar to KAFKA-10547, which added topic IDs to MetadataResponse, we add topic IDs to MetadataRequest so that a TopicDescription can be retrieved by topic ID. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Resolved] (KAFKA-10757) KAFKA-10755 brings a compile error
[ https://issues.apache.org/jira/browse/KAFKA-10757?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] dengziming resolved KAFKA-10757. Resolution: Fixed > KAFKA-10755 brings a compile error > --- > > Key: KAFKA-10757 > URL: https://issues.apache.org/jira/browse/KAFKA-10757 > Project: Kafka > Issue Type: Bug >Reporter: dengziming >Assignee: dengziming >Priority: Blocker > > The `TaskManager` constructor takes 10 params, but `StreamThreadTest` still > calls `new TaskManager` with 9 params. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (KAFKA-10757) KAFKA-10755 brings a compile error
dengziming created KAFKA-10757: -- Summary: KAFKA-10755 brings a compile error Key: KAFKA-10757 URL: https://issues.apache.org/jira/browse/KAFKA-10757 Project: Kafka Issue Type: Bug Reporter: dengziming Assignee: dengziming The `TaskManager` constructor takes 10 params, but `StreamThreadTest` still calls `new TaskManager` with 9 params. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (KAFKA-10756) Add missing unit test for `UnattachedState`
dengziming created KAFKA-10756: -- Summary: Add missing unit test for `UnattachedState` Key: KAFKA-10756 URL: https://issues.apache.org/jira/browse/KAFKA-10756 Project: Kafka Issue Type: Sub-task Reporter: dengziming Assignee: dengziming Add unit test for UnattachedState, similar to KAFKA-10519 -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Resolved] (KAFKA-10691) AlterIsr Respond with wrong Error Id
[ https://issues.apache.org/jira/browse/KAFKA-10691?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] dengziming resolved KAFKA-10691. Resolution: Abandoned > AlterIsr Respond with wrong Error Id > > > Key: KAFKA-10691 > URL: https://issues.apache.org/jira/browse/KAFKA-10691 > Project: Kafka > Issue Type: Sub-task > Components: controller >Reporter: dengziming >Assignee: dengziming >Priority: Minor > > An AlterIsr request sent by an unknown broker is answered with STALE_BROKER_EPOCH, which should instead be UNKNOWN_MEMBER_ID. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (KAFKA-10691) AlterIsr Respond with wrong Error Id
dengziming created KAFKA-10691: -- Summary: AlterIsr Respond with wrong Error Id Key: KAFKA-10691 URL: https://issues.apache.org/jira/browse/KAFKA-10691 Project: Kafka Issue Type: Sub-task Components: controller Reporter: dengziming Assignee: dengziming An AlterIsr request sent by an unknown broker is answered with STALE_BROKER_EPOCH, which should instead be UNKNOWN_MEMBER_ID. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (KAFKA-10644) Fix VotedToUnattached test error
dengziming created KAFKA-10644: -- Summary: Fix VotedToUnattached test error Key: KAFKA-10644 URL: https://issues.apache.org/jira/browse/KAFKA-10644 Project: Kafka Issue Type: Sub-task Components: unit tests Reporter: dengziming The code of `QuorumStateTest.testVotedToUnattachedHigherEpoch` is not consistent with its name: the method name says VotedToUnattached, but the code exercises UnattachedToUnattached:
```
state.initialize(new OffsetAndEpoch(0L, logEndEpoch));
state.transitionToUnattached(5);
long remainingElectionTimeMs = state.unattachedStateOrThrow().remainingElectionTimeMs(time.milliseconds());
time.sleep(1000);
state.transitionToUnattached(6);
```
-- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (KAFKA-10195) Move offset management codes from ConsumerCoordinator to a new class
dengziming created KAFKA-10195: -- Summary: Move offset management codes from ConsumerCoordinator to a new class Key: KAFKA-10195 URL: https://issues.apache.org/jira/browse/KAFKA-10195 Project: Kafka Issue Type: Improvement Components: clients, consumer Reporter: dengziming Assignee: dengziming ConsumerCoordinator has 2 main functions:
# partition assignment
# offset management
We keep adding new features to it; for example, KAFKA-9657 added a field `throwOnFetchStableOffsetsUnsupported` that is only used in offset management. The two functions barely interact with each other, so it is not wise to keep all this code in one class; can we try to move the offset management code to a new class? For example, the fields below are only used in offset management:
```
// can be moved to another class directly
private final OffsetCommitCallback defaultOffsetCommitCallback;
private final ConsumerInterceptors interceptors;
private final AtomicInteger pendingAsyncCommits;
private final ConcurrentLinkedQueue completedOffsetCommits;
private AtomicBoolean asyncCommitFenced;
private final boolean throwOnFetchStableOffsetsUnsupported;
private PendingCommittedOffsetRequest pendingCommittedOffsetRequest = null;

// used in `onJoinComplete` but can also be moved out.
private final boolean autoCommitEnabled;
private final int autoCommitIntervalMs;
private Timer nextAutoCommitTimer;
```
So we can create a new class `OffsetManageCoordinator` and move the related code into it. -- This message was sent by Atlassian Jira (v8.3.4#803005)
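A minimal, self-contained sketch of the proposed split (all class and method names here are hypothetical, not Kafka's actual code or any agreed refactor): the offset-only state moves into its own class, and the coordinator delegates to it instead of holding those fields directly.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical home for the offset-only fields listed in the ticket.
class OffsetCoordinator {
    private final AtomicInteger pendingAsyncCommits = new AtomicInteger();
    private final ConcurrentLinkedQueue<Map<Integer, Long>> completedOffsetCommits =
            new ConcurrentLinkedQueue<>();

    void beginAsyncCommit() { pendingAsyncCommits.incrementAndGet(); }

    void completeAsyncCommit(Map<Integer, Long> offsets) {
        completedOffsetCommits.add(offsets);   // drained later on poll()
        pendingAsyncCommits.decrementAndGet();
    }

    int pendingAsyncCommits() { return pendingAsyncCommits.get(); }
}

// Partition-assignment state would stay here; offsets are delegated.
class ConsumerCoordinatorSketch {
    private final OffsetCoordinator offsets = new OffsetCoordinator();

    OffsetCoordinator offsets() { return offsets; }
}

public class RefactorDemo {
    public static void main(String[] args) {
        ConsumerCoordinatorSketch coordinator = new ConsumerCoordinatorSketch();
        coordinator.offsets().beginAsyncCommit();
        System.out.println(coordinator.offsets().pendingAsyncCommits()); // 1
        coordinator.offsets().completeAsyncCommit(Map.of(0, 100L));
        System.out.println(coordinator.offsets().pendingAsyncCommits()); // 0
    }
}
```

The point of the shape, per the ticket, is that commit bookkeeping and partition assignment no longer share one class, so a change like KAFKA-9657 would touch only the offset class.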
[jira] [Created] (KAFKA-9353) Add groupInstanceId to DescribeGroup for better visibility
dengziming created KAFKA-9353: - Summary: Add groupInstanceId to DescribeGroup for better visibility Key: KAFKA-9353 URL: https://issues.apache.org/jira/browse/KAFKA-9353 Project: Kafka Issue Type: Improvement Components: admin Reporter: dengziming KAFKA-8538 already added `group.instance.id` to `MemberDescription` but didn't print it, so we just print it. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Resolved] (KAFKA-9291) error starting consumer
[ https://issues.apache.org/jira/browse/KAFKA-9291?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] dengziming resolved KAFKA-9291. --- Resolution: Abandoned I made a mistake when debugging > error starting consumer > --- > > Key: KAFKA-9291 > URL: https://issues.apache.org/jira/browse/KAFKA-9291 > Project: Kafka > Issue Type: Bug >Reporter: dengziming >Assignee: dengziming >Priority: Blocker > Attachments: image-2019-12-11-19-23-04-499.png, > image-2019-12-11-19-28-29-559.png, image-2019-12-11-19-29-28-156.png, > image-2019-12-11-19-29-53-353.png > > KAFKA-9288 added some code that I don't yet fully understand, but it introduced a serious bug, and I debugged the process: > 1. when the consumer client starts, an `ApiVersionsRequest` is sent > 2. `KafkaApis.handleApiVersionsRequest(request)` is invoked > 3. which calls `ApiVersionsResponse.createApiVersionsResponse()` > 4. which adds every `ApiVersionsResponseKey` to an `ApiVersionsResponseKeyCollection` > 5. every add of an element returns false! ( *this is where the bug is*, but I didn't find the reason) > !image-2019-12-11-19-23-04-499.png! > > 6. after the for loop, the `ApiVersionsResponseKeyCollection` is EMPTY! > !image-2019-12-11-19-28-29-559.png! > > 7. when the client receives the response, an ERROR occurs. > !image-2019-12-11-19-29-28-156.png! > > 8. and my application was terminated > !image-2019-12-11-19-29-53-353.png! > > So we can conclude that the cause is the new KAFKA-9288 code in ApiVersionsResponseKeyCollection -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (KAFKA-9291) remove the code of KAFKA-9288 which brings a fatal bug
dengziming created KAFKA-9291: - Summary: remove the code of KAFKA-9288 which brings a fatal bug Key: KAFKA-9291 URL: https://issues.apache.org/jira/browse/KAFKA-9291 Project: Kafka Issue Type: Improvement Reporter: dengziming Assignee: dengziming Attachments: image-2019-12-11-19-23-04-499.png, image-2019-12-11-19-28-29-559.png, image-2019-12-11-19-29-28-156.png, image-2019-12-11-19-29-53-353.png KAFKA-9288 added some code that I don't yet fully understand, but it introduced a serious bug, and I debugged the process: 1. when the consumer client starts, an `ApiVersionsRequest` is sent 2. `KafkaApis.handleApiVersionsRequest(request)` is invoked 3. which calls `ApiVersionsResponse.createApiVersionsResponse()` 4. which adds every `ApiVersionsResponseKey` to an `ApiVersionsResponseKeyCollection` 5. every add of an element returns false! (I didn't find the reason) !image-2019-12-11-19-23-04-499.png! 6. after the for loop, the `ApiVersionsResponseKeyCollection` is EMPTY! !image-2019-12-11-19-28-29-559.png! 7. when the client receives the response, an ERROR occurs. !image-2019-12-11-19-29-28-156.png! 8. and my application was terminated !image-2019-12-11-19-29-53-353.png! -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (KAFKA-9277) move all group state transition rules into their states
dengziming created KAFKA-9277: - Summary: move all group state transition rules into their states Key: KAFKA-9277 URL: https://issues.apache.org/jira/browse/KAFKA-9277 Project: Kafka Issue Type: Improvement Reporter: dengziming Assignee: dengziming Today `GroupMetadata` maintains a `validPreviousStates` map covering every GroupState:
```
private val validPreviousStates: Map[GroupState, Set[GroupState]] =
  Map(Dead -> Set(Stable, PreparingRebalance, CompletingRebalance, Empty, Dead),
    CompletingRebalance -> Set(PreparingRebalance),
    Stable -> Set(CompletingRebalance),
    PreparingRebalance -> Set(Stable, CompletingRebalance, Empty),
    Empty -> Set(PreparingRebalance))
```
It would be cleaner to move all state transition rules into the states themselves:
```
private[group] sealed trait GroupState {
  val validPreviousStates: Set[GroupState]
}
```
-- This message was sent by Atlassian Jira (v8.3.4#803005)
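The proposal above can be sketched in plain Java (an illustrative stand-in, not Kafka's actual Scala implementation): each state owns its set of valid previous states, so the transition rule lives with the state instead of in one central map that must be kept in sync.

```java
import java.util.EnumSet;
import java.util.Set;

// Same transition table as the ticket's map, but owned by the states themselves.
enum GroupState {
    EMPTY, PREPARING_REBALANCE, COMPLETING_REBALANCE, STABLE, DEAD;

    private Set<GroupState> validPrevious;

    // Assigned in a static block so constants can reference each other freely.
    static {
        EMPTY.validPrevious = EnumSet.of(PREPARING_REBALANCE);
        PREPARING_REBALANCE.validPrevious = EnumSet.of(STABLE, COMPLETING_REBALANCE, EMPTY);
        COMPLETING_REBALANCE.validPrevious = EnumSet.of(PREPARING_REBALANCE);
        STABLE.validPrevious = EnumSet.of(COMPLETING_REBALANCE);
        DEAD.validPrevious =
                EnumSet.of(STABLE, PREPARING_REBALANCE, COMPLETING_REBALANCE, EMPTY, DEAD);
    }

    // The check asks the *target* state whether the source is a legal predecessor.
    boolean canTransitionFrom(GroupState previous) {
        return validPrevious.contains(previous);
    }
}

public class GroupStateDemo {
    public static void main(String[] args) {
        System.out.println(GroupState.STABLE.canTransitionFrom(GroupState.COMPLETING_REBALANCE)); // true
        System.out.println(GroupState.STABLE.canTransitionFrom(GroupState.EMPTY));                // false
    }
}
```

With this shape, adding a new state means declaring its predecessors in one place next to the state, rather than editing a shared table elsewhere.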
[jira] [Created] (KAFKA-9246) Update Heartbeat timeout when ConsumerCoordinator commit offset
dengziming created KAFKA-9246: - Summary: Update Heartbeat timeout when ConsumerCoordinator commit offset Key: KAFKA-9246 URL: https://issues.apache.org/jira/browse/KAFKA-9246 Project: Kafka Issue Type: Improvement Components: clients Reporter: dengziming Fix For: 2.3.0, 2.2.0 When a consumer sends an OffsetCommitRequest, it can also be treated as a heartbeat: the `GroupCoordinator` invokes `completeAndScheduleNextHeartbeatExpiration` to push back the heartbeat expiration in handleCommitOffsets. So the ConsumerCoordinator can likewise update its heartbeat expiration when it sends an OffsetCommitRequest, reducing the number of HeartbeatRequests. -- This message was sent by Atlassian Jira (v8.3.4#803005)
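The mechanism can be illustrated with a minimal, self-contained sketch (class and method names are hypothetical, not Kafka's): since the broker already treats an offset commit as proof of liveness, the client can push its next-heartbeat deadline forward on commit and skip a redundant heartbeat round trip.

```java
// Hypothetical timer: any contact with the coordinator (heartbeat or commit)
// resets the deadline for the next required heartbeat.
class HeartbeatTimer {
    private final long intervalMs;
    private long deadlineMs;

    HeartbeatTimer(long nowMs, long intervalMs) {
        this.intervalMs = intervalMs;
        this.deadlineMs = nowMs + intervalMs;
    }

    // Called on a heartbeat response OR an offset-commit response.
    void recordContact(long nowMs) { deadlineMs = nowMs + intervalMs; }

    boolean heartbeatDue(long nowMs) { return nowMs >= deadlineMs; }
}

public class CommitAsHeartbeatDemo {
    public static void main(String[] args) {
        HeartbeatTimer timer = new HeartbeatTimer(0, 3000);
        // at t=2500 the consumer commits offsets; treat it as coordinator contact
        timer.recordContact(2500);
        // at t=3000 a heartbeat would otherwise be due, but the commit reset it
        System.out.println(timer.heartbeatDue(3000)); // false
        System.out.println(timer.heartbeatDue(5500)); // true
    }
}
```

The saving is one HeartbeatRequest per heartbeat interval in which the consumer committed anyway, which matters most for consumers committing frequently.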