[GitHub] [kafka] Justinwins commented on pull request #12350: KAFKA-13856:MirrorCheckpointTask meets ConcurrentModificationException
Justinwins commented on PR #12350: URL: https://github.com/apache/kafka/pull/12350#issuecomment-1212684275 @mimaison will you please merge this simple fix? -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
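For context on the exception named in the PR title: a ConcurrentModificationException is the classic symptom of structurally modifying a HashMap while iterating over it. A minimal, self-contained illustration of the failure mode and one common remedy (this is illustrative only, not the actual MirrorCheckpointTask code, whose fix lives in the linked PR):

```java
import java.util.ConcurrentModificationException;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;

public class CmeDemo {
    // Reproduce the failure: adding a new key to a HashMap while iterating
    // its key set makes the fail-fast iterator throw on the next step.
    static boolean mutateWhileIterating() {
        Map<String, Long> checkpoints = new HashMap<>();
        checkpoints.put("topic-0", 1L);
        checkpoints.put("topic-1", 2L);
        try {
            for (String key : checkpoints.keySet()) {
                checkpoints.put("topic-2", 3L); // structural modification mid-iteration
            }
            return false;
        } catch (ConcurrentModificationException e) {
            return true;
        }
    }

    // One common fix: iterate over a defensive copy of the key set, so the
    // live map can be modified (by this loop or another thread) safely.
    static int mutateViaCopy() {
        Map<String, Long> checkpoints = new HashMap<>();
        checkpoints.put("topic-0", 1L);
        checkpoints.put("topic-1", 2L);
        for (String key : new HashSet<>(checkpoints.keySet())) {
            checkpoints.put(key + "-mirrored", 0L);
        }
        return checkpoints.size(); // two originals plus two mirrored entries = 4
    }

    public static void main(String[] args) {
        System.out.println(mutateWhileIterating()); // true
        System.out.println(mutateViaCopy());        // 4
    }
}
```

Another option in concurrent code is a `ConcurrentHashMap`, whose iterators are weakly consistent and never throw this exception.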
[GitHub] [kafka] niket-goel opened a new pull request, #12508: KAFKA-13888: Addition of Information in DescribeQuorumResponse
niket-goel opened a new pull request, #12508: URL: https://github.com/apache/kafka/pull/12508 This commit adds the implementation for two new fields, as described in KIP-836:
* LastFetchTimestamp
* LastCaughtUpTimestamp
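These two per-replica timestamps let tooling reason about follower lag from a single DescribeQuorum call. A hypothetical helper sketching how a monitor might interpret LastCaughtUpTimestamp (the method name and the `maxLagMs` threshold are my own illustration, not part of KIP-836; the real API exposes the value as an `OptionalLong`, absent when the leader has no data for the replica):

```java
import java.util.OptionalLong;

public class QuorumLag {
    // Treat a replica as caught up if its last caught-up time is within
    // maxLagMs of "now". An absent timestamp means the leader cannot yet
    // vouch for the replica, so we report it as lagging.
    static boolean isCaughtUp(OptionalLong lastCaughtUpTimestampMs, long nowMs, long maxLagMs) {
        return lastCaughtUpTimestampMs.isPresent()
                && nowMs - lastCaughtUpTimestampMs.getAsLong() <= maxLagMs;
    }

    public static void main(String[] args) {
        long now = 10_000L;
        System.out.println(isCaughtUp(OptionalLong.of(9_500L), now, 1_000L)); // true
        System.out.println(isCaughtUp(OptionalLong.empty(), now, 1_000L));    // false
    }
}
```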
[GitHub] [kafka] hachikuji merged pull request #12506: KAFKA-14154; Return NOT_CONTROLLER from AlterPartition if leader is ahead of controller
hachikuji merged PR #12506: URL: https://github.com/apache/kafka/pull/12506
[jira] [Resolved] (KAFKA-13986) DescribeQuorum does not return the observers (brokers) for the Metadata log
[ https://issues.apache.org/jira/browse/KAFKA-13986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jason Gustafson resolved KAFKA-13986.
Fix Version/s: 3.3.0
Resolution: Fixed

> DescribeQuorum does not return the observers (brokers) for the Metadata log
> Key: KAFKA-13986
> URL: https://issues.apache.org/jira/browse/KAFKA-13986
> Project: Kafka
> Issue Type: Improvement
> Components: kraft
> Affects Versions: 3.0.0, 3.0.1
> Reporter: Niket Goel
> Assignee: Niket Goel
> Priority: Major
> Fix For: 3.3.0
>
> h2. Background
> While working on the [PR|https://github.com/apache/kafka/pull/12206] for KIP-836, we realized that the DescribeQuorum API does not return the brokers as observers for the metadata log.
> As noted by [~dengziming]: _We set nodeId=-1 if it's a broker, so observers.size == 0._
> The related code is:
> https://github.com/apache/kafka/blob/4c9eeef5b2dff9a4f0977fbc5ac7eaaf930d0d0e/core/src/main/scala/kafka/raft/RaftManager.scala#L185-L189
> {code:java}
> val nodeId = if (config.processRoles.contains(ControllerRole)) {
>   OptionalInt.of(config.nodeId)
> } else {
>   OptionalInt.empty()
> }
> {code}
> h2. ToDo
> We should fix this and have the DescribeQuorum API return the brokers as observers for the metadata log.
-- This message was sent by Atlassian Jira (v8.20.10#820010)
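To make the Scala condition above concrete: because only nodes with the controller role supplied an id, brokers fetched from the metadata quorum anonymously (nodeId = -1) and so never appeared in the observers list. A Java paraphrase contrasting the pre-fix behavior with the resolved behavior (a sketch of the logic described in the issue, not the actual patch):

```java
import java.util.OptionalInt;

public class ObserverId {
    // Before the fix: only controllers reported an id, so brokers fetched
    // anonymously and were dropped from DescribeQuorum's observers list.
    static OptionalInt nodeIdBefore(boolean hasControllerRole, int configuredNodeId) {
        return hasControllerRole ? OptionalInt.of(configuredNodeId) : OptionalInt.empty();
    }

    // After the fix: brokers also report their configured node.id, so they
    // show up as observers of the metadata log.
    static OptionalInt nodeIdAfter(int configuredNodeId) {
        return OptionalInt.of(configuredNodeId);
    }

    public static void main(String[] args) {
        System.out.println(nodeIdBefore(false, 7).isPresent()); // false
        System.out.println(nodeIdAfter(7).getAsInt());          // 7
    }
}
```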
[jira] [Updated] (KAFKA-14164) Enabling JMX on one Service Breaks kafka-run-class Shell Script for All
[ https://issues.apache.org/jira/browse/KAFKA-14164?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Prime Minister of Funhavistan updated KAFKA-14164:

Description:

Step 1: Start a Kafka broker with the kafka-server-start script and KAFKA_JMX_OPTS populated to turn on JMX. The broker starts with a JMX listener.

Step 2: Run another script such as bin/kafka-topics. The shell script always fails, because the call to kafka-run-class always triggers starting a JMX listener on the port named in the environment variables.

Expected behavior: Be able to use a script like kafka-topics without error on a host that already has JMX environment variables set and a process listening on the defined port.

Actual behavior: bin/kafka-topics errors out trying to start another JMX listener on the port set in KAFKA_JMX_OPTS. When the kafka-topics shell script passes arguments to kafka-run-class, kafka-run-class tries and fails to initialize a JMX listener because, in this example, kafka-server-start already started one and $KAFKA_JMX_OPTS is always defined and used at the end of kafka-run-class.sh.

A proposed solution is to edit kafka-run-class.sh:
# Add a variable declaration NO_JMX="true".
# Add the following branch to the case statement in kafka-run-class.sh:
{code:bash}
kafka.Kafka|org.apache.kafka.connect.cli.ConnectDistributed)
  NO_JMX="false"
  shift
  ;;
{code}
# Add an elif at the end of kafka-run-class.sh that launches without the JMX options:
{code:bash}
elif [ "$NO_JMX" = "true" ]; then
  exec "$JAVA" $KAFKA_HEAP_OPTS $KAFKA_JVM_PERFORMANCE_OPTS $KAFKA_GC_LOG_OPTS $KAFKA_LOG4J_OPTS -cp "$CLASSPATH" $KAFKA_OPTS "$@"
{code}

The workaround is to clear the JMX variables every time you run a script like kafka-topics.sh or kafka-configs.sh:
{code:bash}
JMX_PORT='' KAFKA_JMX_OPTS='' bin/kafka-topics
{code}

> Enabling JMX on one Service Breaks kafka-run-class Shell Script for All
> Key: KAFKA-14164
> URL: https://issues.apache.org/jira/browse/KAFKA-14164
> Project: Kafka
> Issue Type: Improvement
> Components: packaging
> Affects Versions: 3.2.1
> Reporter: Prime Minister of Funhavistan
> Priority: Minor
[GitHub] [kafka] hachikuji merged pull request #12498: KAFKA-13986; Brokers should include node.id in fetches to metadata quorum
hachikuji merged PR #12498: URL: https://github.com/apache/kafka/pull/12498
[jira] [Created] (KAFKA-14164) Enabling JMX on one Service Breaks kafka-run-class Shell Script for All
Prime Minister of Funhavistan created KAFKA-14164:
-------------------------------------------------
Summary: Enabling JMX on one Service Breaks kafka-run-class Shell Script for All
Key: KAFKA-14164
URL: https://issues.apache.org/jira/browse/KAFKA-14164
Project: Kafka
Issue Type: Improvement
Components: packaging
Affects Versions: 3.2.1
Reporter: Prime Minister of Funhavistan

Step 1: On a server running Kafka using the kafka-server-start script with KAFKA_JMX_OPTS populated to turn on JMX, the broker starts with a JMX listener.

Step 2: Use another script such as bin/kafka-topics; the shell script always fails because the call to kafka-run-class always triggers starting a JMX listener on the same port taken from the environment variables.

Expected behavior: Be able to use a script like kafka-topics without error on a host that already has the JMX environment variables set.

Actual behavior: bin/kafka-topics errors out trying to start another JMX listener on the same port given in KAFKA_JMX_OPTS.

When the kafka-topics shell script passes arguments to kafka-run-class, kafka-run-class tries and fails to initialize a JMX listener because, in this example, kafka-server-start already started one.

A proposed solution is editing the kafka-run-class file:
# Add a variable declaration: NO_JMX="true"
# Add the following to the case statement in kafka-run-class.sh:
{code:bash}
kafka.Kafka|org.apache.kafka.connect.cli.ConnectDistributed)
  NO_JMX="false"
  shift
  ;;
{code}
# Add an elif at the end of kafka-run-class.sh:
{code:bash}
elif [ "$NO_JMX" = "true" ]; then
  exec "$JAVA" $KAFKA_HEAP_OPTS $KAFKA_JVM_PERFORMANCE_OPTS $KAFKA_GC_LOG_OPTS $KAFKA_LOG4J_OPTS -cp "$CLASSPATH" $KAFKA_OPTS "$@"
{code}
[jira] [Resolved] (KAFKA-14163) Build failing in streams-scala:compileScala due to zinc compiler cache
[ https://issues.apache.org/jira/browse/KAFKA-14163?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jason Gustafson resolved KAFKA-14163. - Resolution: Workaround > Build failing in streams-scala:compileScala due to zinc compiler cache > -- > > Key: KAFKA-14163 > URL: https://issues.apache.org/jira/browse/KAFKA-14163 > Project: Kafka > Issue Type: Bug >Reporter: Jason Gustafson >Priority: Major > > I have been seeing builds failing recently with the following error: > {code:java} > [2022-08-11T17:08:22.279Z] * What went wrong: > [2022-08-11T17:08:22.279Z] Execution failed for task > ':streams:streams-scala:compileScala'. > [2022-08-11T17:08:22.279Z] > Timeout waiting to lock zinc-1.6.1_2.13.8_17 > compiler cache (/home/jenkins/.gradle/caches/7.5.1/zinc-1.6.1_2.13.8_17). It > is currently in use by another Gradle instance. > [2022-08-11T17:08:22.279Z] Owner PID: 25008 > [2022-08-11T17:08:22.279Z] Our PID: 25524 > [2022-08-11T17:08:22.279Z] Owner Operation: > [2022-08-11T17:08:22.279Z] Our operation: > [2022-08-11T17:08:22.279Z] Lock file: > /home/jenkins/.gradle/caches/7.5.1/zinc-1.6.1_2.13.8_17/zinc-1.6.1_2.13.8_17.lock > {code} > And another: > {code:java} > [2022-08-10T21:30:41.779Z] * What went wrong: > [2022-08-10T21:30:41.779Z] Execution failed for task > ':streams:streams-scala:compileScala'. > [2022-08-10T21:30:41.779Z] > Timeout waiting to lock zinc-1.6.1_2.13.8_17 > compiler cache (/home/jenkins/.gradle/caches/7.5.1/zinc-1.6.1_2.13.8_17). It > is currently in use by another Gradle instance. > [2022-08-10T21:30:41.779Z] Owner PID: 11022 > [2022-08-10T21:30:41.779Z] Our PID: 11766 > [2022-08-10T21:30:41.779Z] Owner Operation: > [2022-08-10T21:30:41.779Z] Our operation: > [2022-08-10T21:30:41.779Z] Lock file: > /home/jenkins/.gradle/caches/7.5.1/zinc-1.6.1_2.13.8_17/zinc-1.6.1_2.13.8_17.lock > {code}
[GitHub] [kafka] hachikuji merged pull request #12507: KAFKA-14163; Retry compilation after zinc compile cache error
hachikuji merged PR #12507: URL: https://github.com/apache/kafka/pull/12507
[GitHub] [kafka] jsancio commented on a diff in pull request #12274: KAFKA-13959: Controller should unfence Broker with busy metadata log
jsancio commented on code in PR #12274: URL: https://github.com/apache/kafka/pull/12274#discussion_r943618096 ## raft/src/main/java/org/apache/kafka/raft/internals/FuturePurgatory.java: ## @@ -56,7 +56,7 @@ CompletableFuture await(T threshold, long maxWaitTimeMs); /** - * Complete awaiting futures whose associated values are larger than the given threshold value. + * Complete awaiting futures whose associated values are smaller than the given threshold value. Review Comment: Thanks for this fix. I think the phrase "associated values" is still misleading. How about: > Complete awaiting futures whose threshold value from {@link await} is smaller than the given threshold value.
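The semantics the review comment converges on can be illustrated with a small sketch. This is a hypothetical stand-in, not Kafka's actual `FuturePurgatory` implementation: a future registered via `await(t)` completes once `completeAll(v)` is called with `t < v`.

```java
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentSkipListMap;

// Minimal stand-in for the behavior described in the javadoc fix; NOT the
// real FuturePurgatory. One pending future per threshold for simplicity.
final class ThresholdPurgatory {
    private final ConcurrentSkipListMap<Long, CompletableFuture<Long>> pending =
        new ConcurrentSkipListMap<>();

    // Register a future that waits for the value to pass the given threshold.
    CompletableFuture<Long> await(long threshold) {
        return pending.computeIfAbsent(threshold, t -> new CompletableFuture<>());
    }

    // Complete awaiting futures whose await() threshold is smaller than the
    // given value. headMap(value, false) selects keys strictly below value.
    void completeAll(long value) {
        Map<Long, CompletableFuture<Long>> ready = pending.headMap(value, false);
        ready.values().forEach(f -> f.complete(value));
        ready.clear(); // the view is backed by the map, so this removes them
    }
}
```

With this sketch, `await(5)` completes on `completeAll(10)` while `await(10)` does not, which is exactly the "smaller than the given threshold value" wording under discussion.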
[GitHub] [kafka] hachikuji commented on pull request #12506: KAFKA-14154; Return NOT_CONTROLLER from AlterPartition if leader is ahead of controller
hachikuji commented on PR #12506: URL: https://github.com/apache/kafka/pull/12506#issuecomment-1212307242 @jsancio Yeah, let's use the same jira for the kraft issue if that's ok with you. Let me see if I can get a PR out today.
[GitHub] [kafka] hachikuji opened a new pull request, #12507: KAFKA-14163; Retry compilation after zinc compile cache error
hachikuji opened a new pull request, #12507: URL: https://github.com/apache/kafka/pull/12507 Borrowing this workaround from @lbradstreet for the same problem. Co-authored-by: Lucas Bradstreet ### Committer Checklist (excluded from commit message) - [ ] Verify design and implementation - [ ] Verify test coverage and CI build status - [ ] Verify documentation (including upgrade notes)
[jira] [Updated] (KAFKA-14163) Build failing in streams-scala:compileScala due to zinc compiler cache
[ https://issues.apache.org/jira/browse/KAFKA-14163?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jason Gustafson updated KAFKA-14163: Summary: Build failing in streams-scala:compileScala due to zinc compiler cache (was: Build failing in scala:compileScala due to zinc compiler cache) > Build failing in streams-scala:compileScala due to zinc compiler cache > -- > > Key: KAFKA-14163 > URL: https://issues.apache.org/jira/browse/KAFKA-14163 > Project: Kafka > Issue Type: Bug >Reporter: Jason Gustafson >Priority: Major > > I have been seeing builds failing recently with the following error: > {code:java} > [2022-08-11T17:08:22.279Z] * What went wrong: > [2022-08-11T17:08:22.279Z] Execution failed for task > ':streams:streams-scala:compileScala'. > [2022-08-11T17:08:22.279Z] > Timeout waiting to lock zinc-1.6.1_2.13.8_17 > compiler cache (/home/jenkins/.gradle/caches/7.5.1/zinc-1.6.1_2.13.8_17). It > is currently in use by another Gradle instance. > [2022-08-11T17:08:22.279Z] Owner PID: 25008 > [2022-08-11T17:08:22.279Z] Our PID: 25524 > [2022-08-11T17:08:22.279Z] Owner Operation: > [2022-08-11T17:08:22.279Z] Our operation: > [2022-08-11T17:08:22.279Z] Lock file: > /home/jenkins/.gradle/caches/7.5.1/zinc-1.6.1_2.13.8_17/zinc-1.6.1_2.13.8_17.lock > {code} > And another: > {code:java} > [2022-08-10T21:30:41.779Z] * What went wrong: > [2022-08-10T21:30:41.779Z] Execution failed for task > ':streams:streams-scala:compileScala'. > [2022-08-10T21:30:41.779Z] > Timeout waiting to lock zinc-1.6.1_2.13.8_17 > compiler cache (/home/jenkins/.gradle/caches/7.5.1/zinc-1.6.1_2.13.8_17). It > is currently in use by another Gradle instance. 
> [2022-08-10T21:30:41.779Z] Owner PID: 11022 > [2022-08-10T21:30:41.779Z] Our PID: 11766 > [2022-08-10T21:30:41.779Z] Owner Operation: > [2022-08-10T21:30:41.779Z] Our operation: > [2022-08-10T21:30:41.779Z] Lock file: > /home/jenkins/.gradle/caches/7.5.1/zinc-1.6.1_2.13.8_17/zinc-1.6.1_2.13.8_17.lock > {code}
[GitHub] [kafka] dajac commented on a diff in pull request #12506: KAFKA-14154; Return NOT_CONTROLLER from AlterPartition if leader is ahead of controller
dajac commented on code in PR #12506: URL: https://github.com/apache/kafka/pull/12506#discussion_r943741675 ## core/src/main/scala/kafka/controller/KafkaController.scala: ## @@ -2336,7 +2336,14 @@ class KafkaController(val config: KafkaConfig, controllerContext.partitionLeadershipInfo(tp) match { case Some(leaderIsrAndControllerEpoch) => val currentLeaderAndIsr = leaderIsrAndControllerEpoch.leaderAndIsr - if (newLeaderAndIsr.leaderEpoch != currentLeaderAndIsr.leaderEpoch) { + if (newLeaderAndIsr.partitionEpoch > currentLeaderAndIsr.partitionEpoch +|| newLeaderAndIsr.leaderEpoch > currentLeaderAndIsr.leaderEpoch) { Review Comment: nit: Should we align this line on « newLeader… »?
[jira] [Updated] (KAFKA-14163) Build failing in scala:compileScala due to zinc compiler cache
[ https://issues.apache.org/jira/browse/KAFKA-14163?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jason Gustafson updated KAFKA-14163: Description: I have been seeing builds failing recently with the following error: {code:java} [2022-08-11T17:08:22.279Z] * What went wrong: [2022-08-11T17:08:22.279Z] Execution failed for task ':streams:streams-scala:compileScala'. [2022-08-11T17:08:22.279Z] > Timeout waiting to lock zinc-1.6.1_2.13.8_17 compiler cache (/home/jenkins/.gradle/caches/7.5.1/zinc-1.6.1_2.13.8_17). It is currently in use by another Gradle instance. [2022-08-11T17:08:22.279Z] Owner PID: 25008 [2022-08-11T17:08:22.279Z] Our PID: 25524 [2022-08-11T17:08:22.279Z] Owner Operation: [2022-08-11T17:08:22.279Z] Our operation: [2022-08-11T17:08:22.279Z] Lock file: /home/jenkins/.gradle/caches/7.5.1/zinc-1.6.1_2.13.8_17/zinc-1.6.1_2.13.8_17.lock {code} And another: {code:java} [2022-08-10T21:30:41.779Z] * What went wrong: [2022-08-10T21:30:41.779Z] Execution failed for task ':streams:streams-scala:compileScala'. [2022-08-10T21:30:41.779Z] > Timeout waiting to lock zinc-1.6.1_2.13.8_17 compiler cache (/home/jenkins/.gradle/caches/7.5.1/zinc-1.6.1_2.13.8_17). It is currently in use by another Gradle instance. [2022-08-10T21:30:41.779Z] Owner PID: 11022 [2022-08-10T21:30:41.779Z] Our PID: 11766 [2022-08-10T21:30:41.779Z] Owner Operation: [2022-08-10T21:30:41.779Z] Our operation: [2022-08-10T21:30:41.779Z] Lock file: /home/jenkins/.gradle/caches/7.5.1/zinc-1.6.1_2.13.8_17/zinc-1.6.1_2.13.8_17.lock {code} was: I have been seeing builds failing recently with the following error: {code:java} [2022-08-10T21:30:41.779Z] * What went wrong: [2022-08-10T21:30:41.779Z] Execution failed for task ':streams:streams-scala:compileScala'. [2022-08-10T21:30:41.779Z] > Timeout waiting to lock zinc-1.6.1_2.13.8_17 compiler cache (/home/jenkins/.gradle/caches/7.5.1/zinc-1.6.1_2.13.8_17). It is currently in use by another Gradle instance. 
{code} > Build failing in scala:compileScala due to zinc compiler cache > -- > > Key: KAFKA-14163 > URL: https://issues.apache.org/jira/browse/KAFKA-14163 > Project: Kafka > Issue Type: Bug >Reporter: Jason Gustafson >Priority: Major > > I have been seeing builds failing recently with the following error: > > {code:java} > [2022-08-11T17:08:22.279Z] * What went wrong: > [2022-08-11T17:08:22.279Z] Execution failed for task > ':streams:streams-scala:compileScala'. > [2022-08-11T17:08:22.279Z] > Timeout waiting to lock zinc-1.6.1_2.13.8_17 > compiler cache (/home/jenkins/.gradle/caches/7.5.1/zinc-1.6.1_2.13.8_17). It > is currently in use by another Gradle instance. > [2022-08-11T17:08:22.279Z] Owner PID: 25008 > [2022-08-11T17:08:22.279Z] Our PID: 25524 > [2022-08-11T17:08:22.279Z] Owner Operation: > [2022-08-11T17:08:22.279Z] Our operation: > [2022-08-11T17:08:22.279Z] Lock file: > /home/jenkins/.gradle/caches/7.5.1/zinc-1.6.1_2.13.8_17/zinc-1.6.1_2.13.8_17.lock > {code} > And another: > {code:java} > [2022-08-10T21:30:41.779Z] * What went wrong: > [2022-08-10T21:30:41.779Z] Execution failed for task > ':streams:streams-scala:compileScala'. > [2022-08-10T21:30:41.779Z] > Timeout waiting to lock zinc-1.6.1_2.13.8_17 > compiler cache (/home/jenkins/.gradle/caches/7.5.1/zinc-1.6.1_2.13.8_17). It > is currently in use by another Gradle instance. > [2022-08-10T21:30:41.779Z] Owner PID: 11022 > [2022-08-10T21:30:41.779Z] Our PID: 11766 > [2022-08-10T21:30:41.779Z] Owner Operation: > [2022-08-10T21:30:41.779Z] Our operation: > [2022-08-10T21:30:41.779Z] Lock file: > /home/jenkins/.gradle/caches/7.5.1/zinc-1.6.1_2.13.8_17/zinc-1.6.1_2.13.8_17.lock > {code}
[jira] [Updated] (KAFKA-14163) Build failing in scala:compileScala due to zinc compiler cache
[ https://issues.apache.org/jira/browse/KAFKA-14163?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jason Gustafson updated KAFKA-14163: Description: I have been seeing builds failing recently with the following error: {code:java} [2022-08-11T17:08:22.279Z] * What went wrong: [2022-08-11T17:08:22.279Z] Execution failed for task ':streams:streams-scala:compileScala'. [2022-08-11T17:08:22.279Z] > Timeout waiting to lock zinc-1.6.1_2.13.8_17 compiler cache (/home/jenkins/.gradle/caches/7.5.1/zinc-1.6.1_2.13.8_17). It is currently in use by another Gradle instance. [2022-08-11T17:08:22.279Z] Owner PID: 25008 [2022-08-11T17:08:22.279Z] Our PID: 25524 [2022-08-11T17:08:22.279Z] Owner Operation: [2022-08-11T17:08:22.279Z] Our operation: [2022-08-11T17:08:22.279Z] Lock file: /home/jenkins/.gradle/caches/7.5.1/zinc-1.6.1_2.13.8_17/zinc-1.6.1_2.13.8_17.lock {code} And another: {code:java} [2022-08-10T21:30:41.779Z] * What went wrong: [2022-08-10T21:30:41.779Z] Execution failed for task ':streams:streams-scala:compileScala'. [2022-08-10T21:30:41.779Z] > Timeout waiting to lock zinc-1.6.1_2.13.8_17 compiler cache (/home/jenkins/.gradle/caches/7.5.1/zinc-1.6.1_2.13.8_17). It is currently in use by another Gradle instance. [2022-08-10T21:30:41.779Z] Owner PID: 11022 [2022-08-10T21:30:41.779Z] Our PID: 11766 [2022-08-10T21:30:41.779Z] Owner Operation: [2022-08-10T21:30:41.779Z] Our operation: [2022-08-10T21:30:41.779Z] Lock file: /home/jenkins/.gradle/caches/7.5.1/zinc-1.6.1_2.13.8_17/zinc-1.6.1_2.13.8_17.lock {code} was: I have been seeing builds failing recently with the following error: {code:java} [2022-08-11T17:08:22.279Z] * What went wrong: [2022-08-11T17:08:22.279Z] Execution failed for task ':streams:streams-scala:compileScala'. [2022-08-11T17:08:22.279Z] > Timeout waiting to lock zinc-1.6.1_2.13.8_17 compiler cache (/home/jenkins/.gradle/caches/7.5.1/zinc-1.6.1_2.13.8_17). It is currently in use by another Gradle instance. 
[2022-08-11T17:08:22.279Z] Owner PID: 25008 [2022-08-11T17:08:22.279Z] Our PID: 25524 [2022-08-11T17:08:22.279Z] Owner Operation: [2022-08-11T17:08:22.279Z] Our operation: [2022-08-11T17:08:22.279Z] Lock file: /home/jenkins/.gradle/caches/7.5.1/zinc-1.6.1_2.13.8_17/zinc-1.6.1_2.13.8_17.lock {code} And another: {code:java} [2022-08-10T21:30:41.779Z] * What went wrong: [2022-08-10T21:30:41.779Z] Execution failed for task ':streams:streams-scala:compileScala'. [2022-08-10T21:30:41.779Z] > Timeout waiting to lock zinc-1.6.1_2.13.8_17 compiler cache (/home/jenkins/.gradle/caches/7.5.1/zinc-1.6.1_2.13.8_17). It is currently in use by another Gradle instance. [2022-08-10T21:30:41.779Z] Owner PID: 11022 [2022-08-10T21:30:41.779Z] Our PID: 11766 [2022-08-10T21:30:41.779Z] Owner Operation: [2022-08-10T21:30:41.779Z] Our operation: [2022-08-10T21:30:41.779Z] Lock file: /home/jenkins/.gradle/caches/7.5.1/zinc-1.6.1_2.13.8_17/zinc-1.6.1_2.13.8_17.lock {code} > Build failing in scala:compileScala due to zinc compiler cache > -- > > Key: KAFKA-14163 > URL: https://issues.apache.org/jira/browse/KAFKA-14163 > Project: Kafka > Issue Type: Bug >Reporter: Jason Gustafson >Priority: Major > > I have been seeing builds failing recently with the following error: > {code:java} > [2022-08-11T17:08:22.279Z] * What went wrong: > [2022-08-11T17:08:22.279Z] Execution failed for task > ':streams:streams-scala:compileScala'. > [2022-08-11T17:08:22.279Z] > Timeout waiting to lock zinc-1.6.1_2.13.8_17 > compiler cache (/home/jenkins/.gradle/caches/7.5.1/zinc-1.6.1_2.13.8_17). It > is currently in use by another Gradle instance. 
> [2022-08-11T17:08:22.279Z] Owner PID: 25008 > [2022-08-11T17:08:22.279Z] Our PID: 25524 > [2022-08-11T17:08:22.279Z] Owner Operation: > [2022-08-11T17:08:22.279Z] Our operation: > [2022-08-11T17:08:22.279Z] Lock file: > /home/jenkins/.gradle/caches/7.5.1/zinc-1.6.1_2.13.8_17/zinc-1.6.1_2.13.8_17.lock > {code} > And another: > {code:java} > [2022-08-10T21:30:41.779Z] * What went wrong: > [2022-08-10T21:30:41.779Z] Execution failed for task > ':streams:streams-scala:compileScala'. > [2022-08-10T21:30:41.779Z] > Timeout waiting to lock zinc-1.6.1_2.13.8_17 > compiler cache (/home/jenkins/.gradle/caches/7.5.1/zinc-1.6.1_2.13.8_17). It > is currently in use by another Gradle instance. > [2022-08-10T21:30:41.779Z] Owner PID: 11022 > [2022-08-10T21:30:41.779Z] Our PID: 11766 > [2022-08-10T21:30:41.779Z] Owner Operation: > [2022-08-10T21:30:41.779Z] Our operation: > [2022-08-10T21:30:41.779Z] Lock file: > /home/jenkins/.gradle/caches/7.5.1/zinc-1.6.1_2.13.8_17/zinc-1.6.1_2.13.8_17.lock > {code}
[jira] [Created] (KAFKA-14163) Build failing in scala:compileScala due to zinc compiler cache
Jason Gustafson created KAFKA-14163: --- Summary: Build failing in scala:compileScala due to zinc compiler cache Key: KAFKA-14163 URL: https://issues.apache.org/jira/browse/KAFKA-14163 Project: Kafka Issue Type: Bug Reporter: Jason Gustafson I have been seeing builds failing recently with the following error: {code:java} [2022-08-10T21:30:41.779Z] * What went wrong: [2022-08-10T21:30:41.779Z] Execution failed for task ':streams:streams-scala:compileScala'. [2022-08-10T21:30:41.779Z] > Timeout waiting to lock zinc-1.6.1_2.13.8_17 compiler cache (/home/jenkins/.gradle/caches/7.5.1/zinc-1.6.1_2.13.8_17). It is currently in use by another Gradle instance. {code}
[GitHub] [kafka] jsancio commented on pull request #12506: KAFKA-14154; Return NOT_CONTROLLER from AlterPartition if leader is ahead of controller
jsancio commented on PR #12506: URL: https://github.com/apache/kafka/pull/12506#issuecomment-1212271060 > but I believe the problem exists for kraft as well. Are you planning to use the same Jira (KAFKA-14154) to track this? I assume that the KRaft fix is also a blocker for 3.3.0.
[GitHub] [kafka] hachikuji commented on pull request #12506: KAFKA-14154; Return NOT_CONTROLLER from AlterPartition if leader is ahead of controller
hachikuji commented on PR #12506: URL: https://github.com/apache/kafka/pull/12506#issuecomment-1212257796 One note: this patch fixes the problem for the zk controller, but I believe the problem exists for kraft as well. I am thinking we can address that separately with a slightly different approach. Basically we can use the `handleLeaderChange` notification on `RaftClient.Listener` in order to push controller changes directly to `BrokerToControllerChannelManager`. This can give us a guarantee that `AlterPartition` can only be sent to controllers which have an epoch which is at least as large as any partition change that the request is based off of.
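The shape of the idea in this comment — pushing leader changes from the raft layer straight to the channel manager instead of relying on the metadata cache — might look roughly like the following sketch. Only the `handleLeaderChange` name comes from the comment; every other class and method here is a hypothetical stand-in, not Kafka's actual API.

```java
import java.util.OptionalInt;
import java.util.concurrent.atomic.AtomicReference;

// Hypothetical stand-in for a controller-node provider used by a
// BrokerToControllerChannelManager-like component.
final class ControllerNodeProvider {
    private final AtomicReference<OptionalInt> controllerId =
        new AtomicReference<>(OptionalInt.empty());

    // Requests such as AlterPartition would only be routed to this node, so
    // they can never target a controller older than the last leader change.
    OptionalInt controller() { return controllerId.get(); }

    void setController(OptionalInt id) { controllerId.set(id); }
}

// Hypothetical listener in the spirit of RaftClient.Listener: the raft client
// invokes handleLeaderChange on every leadership change, and we push the new
// controller id directly rather than waiting for a metadata update.
final class LeaderChangeListener {
    private final ControllerNodeProvider provider;

    LeaderChangeListener(ControllerNodeProvider provider) {
        this.provider = provider;
    }

    void handleLeaderChange(OptionalInt newLeaderId, int newEpoch) {
        provider.setController(newLeaderId);
    }
}
```

The point of the design is the ordering guarantee: because the listener fires for every leader change the raft client observes, the channel manager's view of the controller is never behind any partition change committed under that leader.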
[GitHub] [kafka] hachikuji commented on pull request #12506: KAFKA-14154; Return NOT_CONTROLLER from AlterPartition if leader is ahead of controller
hachikuji commented on PR #12506: URL: https://github.com/apache/kafka/pull/12506#issuecomment-1212250619 cc @jsancio @dajac
[GitHub] [kafka] hachikuji opened a new pull request, #12506: KAFKA-14154; Return NOT_CONTROLLER from AlterPartition if leader is ahead of controller
hachikuji opened a new pull request, #12506: URL: https://github.com/apache/kafka/pull/12506

It is possible for the leader to send an `AlterPartition` request to a zombie controller which includes either a partition or leader epoch that is larger than what is found in the controller context. Prior to https://github.com/apache/kafka/pull/12032, the controller handled this in the following way:

1. If the `LeaderAndIsr` state exactly matches the current state on the controller excluding the partition epoch, then the `AlterPartition` request is considered successful and no error is returned. The risk with this handling is that it may cause the leader to incorrectly assume that the state had been successfully updated. Since the controller's state is stale, there is no way to know what the latest ISR state is.
2. Otherwise, the controller will attempt to update the state in ZooKeeper with the leader/partition epochs from the `AlterPartition` request. This operation would fail if the controller's epoch was no longer current in ZooKeeper, and the result would be a `NOT_CONTROLLER` error.

Following https://github.com/apache/kafka/pull/12032, the controller's validation is stricter. If the partition epoch is larger than expected, then the controller will return `INVALID_UPDATE_VERSION` without attempting the operation. Similarly, if the leader epoch is larger than expected, the controller will return `FENCED_LEADER_EPOCH`.

The problem with this new handling is that the leader treats the errors from the controller as authoritative. For example, if it sees the `FENCED_LEADER_EPOCH` error, then it will not retry the request and will simply wait until the next leader epoch arrives. The ISR state gets stuck in a pending state, which can lead to persistent URPs until the leader epoch gets bumped.

In this patch, we want to fix the issues with this handling, but we don't want to restore the buggy idempotent check. The approach is straightforward.
If the controller sees a partition/leader epoch that is larger than what it has in the controller context, then it assumes that it has become a zombie and returns `NOT_CONTROLLER` to the leader. This causes the leader to rediscover the controller from its local metadata cache and retry the `AlterPartition` request.

### Committer Checklist (excluded from commit message)
- [ ] Verify design and implementation
- [ ] Verify test coverage and CI build status
- [ ] Verify documentation (including upgrade notes)
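The decision flow in this PR description can be sketched as follows. This is a simplified stand-in operating on bare epochs; the real check lives in the ZK controller's Scala code and works on full `LeaderAndIsr` state.

```java
// Error names mirror the ones used in the PR description; the class and
// method are hypothetical illustrations, not Kafka code.
enum Errors { NONE, NOT_CONTROLLER, FENCED_LEADER_EPOCH, INVALID_UPDATE_VERSION }

final class AlterPartitionCheck {
    static Errors validate(int requestPartitionEpoch, int requestLeaderEpoch,
                           int currentPartitionEpoch, int currentLeaderEpoch) {
        if (requestPartitionEpoch > currentPartitionEpoch
                || requestLeaderEpoch > currentLeaderEpoch) {
            // The request is ahead of our context: we must be a zombie
            // controller, so tell the leader to re-resolve the controller.
            return Errors.NOT_CONTROLLER;
        } else if (requestLeaderEpoch < currentLeaderEpoch) {
            // Request from a fenced (older) leader epoch.
            return Errors.FENCED_LEADER_EPOCH;
        } else if (requestPartitionEpoch < currentPartitionEpoch) {
            // Stale partition epoch in the request.
            return Errors.INVALID_UPDATE_VERSION;
        }
        return Errors.NONE; // epochs match: proceed with the ISR update
    }
}
```

Note how this differs from the pre-patch behavior: an epoch that is *larger* than the controller's view now yields `NOT_CONTROLLER` (retriable) rather than `INVALID_UPDATE_VERSION` or `FENCED_LEADER_EPOCH` (which the leader treats as authoritative).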
[GitHub] [kafka] hachikuji closed pull request #12499: KAFKA-14154; Ensure AlterPartition not sent to stale controller
hachikuji closed pull request #12499: KAFKA-14154; Ensure AlterPartition not sent to stale controller URL: https://github.com/apache/kafka/pull/12499
[GitHub] [kafka] hachikuji commented on pull request #12499: KAFKA-14154; Ensure AlterPartition not sent to stale controller
hachikuji commented on PR #12499: URL: https://github.com/apache/kafka/pull/12499#issuecomment-1212226476 I am going to close this PR. On the one hand, it does not address the problem for KRaft; on the other, we have thought of a simpler fix for the zk controller, which I will open shortly.
[GitHub] [kafka] mimaison commented on a diff in pull request #12046: KAFKA-10360: Allow disabling JMX Reporter (KIP-830)
mimaison commented on code in PR #12046: URL: https://github.com/apache/kafka/pull/12046#discussion_r943688586 ## server-common/src/main/java/org/apache/kafka/server/metrics/KafkaYammerMetrics.java: ## @@ -53,16 +72,21 @@ public static MetricsRegistry defaultRegistry() { } private final MetricsRegistry metricsRegistry = new MetricsRegistry(); -private final FilteringJmxReporter jmxReporter = new FilteringJmxReporter(metricsRegistry, -metricName -> true); +private FilteringJmxReporter jmxReporter; private KafkaYammerMetrics() { -jmxReporter.start(); Review Comment: Agreed, I've reverted the changes in this file.
[GitHub] [kafka] yashmayya commented on a diff in pull request #12490: KAFKA-14147: Prevent deferredTaskUpdates map from growing monotonically in KafkaConfigBackingStore
yashmayya commented on code in PR #12490: URL: https://github.com/apache/kafka/pull/12490#discussion_r943666445 ## connect/runtime/src/main/java/org/apache/kafka/connect/storage/KafkaConfigBackingStore.java: ## @@ -853,6 +853,9 @@ private void processConnectorConfigRecord(String connectorName, SchemaAndValue v connectorConfigs.remove(connectorName); connectorTaskCounts.remove(connectorName); taskConfigs.keySet().removeIf(taskId -> taskId.connector().equals(connectorName)); +deferredTaskUpdates.remove(connectorName); +connectorTaskCountRecords.remove(connectorName); Review Comment: `deferredTaskUpdates` isn't exposed in the snapshot so I'll need to check what sort of test we could have for this. Thanks a lot for verifying my understanding and the great explanation!
[jira] [Updated] (KAFKA-14133) Remaining EasyMock to Mockito tests
[ https://issues.apache.org/jira/browse/KAFKA-14133?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Christo Lolov updated KAFKA-14133: -- Description: {color:#DE350B}There are tests which use both PowerMock and EasyMock. I have put those in https://issues.apache.org/jira/browse/KAFKA-14132. Tests here rely solely on EasyMock.{color} Unless stated in brackets the tests are in the streams module. A list of tests which still require to be moved from EasyMock to Mockito as of 2nd of August 2022 which do not have a Jira issue and do not have pull requests I am aware of which are opened: {color:#FF8B00}In Review{color} # WorkerConnectorTest (connect) (owner: Yash) # WorkerCoordinatorTest (connect) # RootResourceTest (connect) # ByteArrayProducerRecordEquals (connect) # {color:#FF8B00}KStreamFlatTransformTest{color} (owner: Christo) # {color:#FF8B00}KStreamFlatTransformValuesTest{color} (owner: Christo) # {color:#FF8B00}KStreamPrintTest{color} (owner: Christo) # {color:#FF8B00}KStreamRepartitionTest{color} (owner: Christo) # {color:#FF8B00}MaterializedInternalTest{color} (owner: Christo) # {color:#FF8B00}TransformerSupplierAdapterTest{color} (owner: Christo) # {color:#FF8B00}KTableSuppressProcessorMetricsTest{color} (owner: Christo) # {color:#FF8B00}ClientUtilsTest{color} (owner: Christo) # {color:#FF8B00}HighAvailabilityStreamsPartitionAssignorTest{color} (owner: Christo) # {color:#FF8B00}StreamsRebalanceListenerTest{color} (owner: Christo) # {color:#FF8B00}TimestampedKeyValueStoreMaterializerTest{color} (owner: Christo) # {color:#FF8B00}CachingInMemoryKeyValueStoreTest{color} (owner: Christo) # {color:#FF8B00}CachingInMemorySessionStoreTest{color} (owner: Christo) # {color:#FF8B00}CachingPersistentSessionStoreTest{color} (owner: Christo) # {color:#FF8B00}CachingPersistentWindowStoreTest{color} (owner: Christo) # {color:#FF8B00}ChangeLoggingKeyValueBytesStoreTest{color} (owner: Christo) # 
{color:#FF8B00}ChangeLoggingTimestampedKeyValueBytesStoreTest{color} (owner: Christo) # {color:#FF8B00}CompositeReadOnlyWindowStoreTest{color} (owner: Christo) # {color:#FF8B00}KeyValueStoreBuilderTest{color} (owner: Christo) # {color:#FF8B00}RocksDBStoreTest{color} (owner: Christo) # {color:#FF8B00}StreamThreadStateStoreProviderTest{color} (owner: Christo) # TopologyTest # KTableSuppressProcessorTest # InternalTopicManagerTest # ProcessorContextImplTest # ProcessorStateManagerTest # StandbyTaskTest # StoreChangelogReaderTest # StreamTaskTest # StreamThreadTest # StreamsAssignmentScaleTest (*WIP* owner: Christo) # StreamsPartitionAssignorTest (*WIP* owner: Christo) # TaskManagerTest # WriteConsistencyVectorTest # AssignmentTestUtils (*WIP* owner: Christo) # StreamsMetricsImplTest # ChangeLoggingSessionBytesStoreTest # ChangeLoggingTimestampedWindowBytesStoreTest # ChangeLoggingWindowBytesStoreTest # MeteredTimestampedWindowStoreTest # TimeOrderedCachingPersistentWindowStoreTest # TimeOrderedWindowStoreTest *The coverage report for the above tests after the change should be >= to what the coverage is now.* was: {color:#DE350B}There are tests which use both PowerMock and EasyMock. I have put those in https://issues.apache.org/jira/browse/KAFKA-14132. Tests here rely solely on EasyMock.{color} Unless stated in brackets the tests are in the streams module. 
A list of tests which still require to be moved from EasyMock to Mockito as of 2nd of August 2022 which do not have a Jira issue and do not have pull requests I am aware of which are opened: {color:#FF8B00}In Review{color} # WorkerConnectorTest (connect) (owner: Yash) # WorkerCoordinatorTest (connect) # RootResourceTest (connect) # ByteArrayProducerRecordEquals (connect) # TopologyTest # {color:#FF8B00}KStreamFlatTransformTest{color} (owner: Christo) # {color:#FF8B00}KStreamFlatTransformValuesTest{color} (owner: Christo) # {color:#FF8B00}KStreamPrintTest{color} (owner: Christo) # {color:#FF8B00}KStreamRepartitionTest{color} (owner: Christo) # {color:#FF8B00}MaterializedInternalTest{color} (owner: Christo) # {color:#FF8B00}TransformerSupplierAdapterTest{color} (owner: Christo) # {color:#FF8B00}KTableSuppressProcessorMetricsTest{color} (owner: Christo) # {color:#FF8B00}ClientUtilsTest{color} (owner: Christo) # {color:#FF8B00}HighAvailabilityStreamsPartitionAssignorTest{color} (owner: Christo) # KTableSuppressProcessorTest # InternalTopicManagerTest # ProcessorContextImplTest # ProcessorStateManagerTest # StandbyTaskTest # StoreChangelogReaderTest # StreamTaskTest # StreamThreadTest # StreamsAssignmentScaleTest (*WIP* owner: Christo) # StreamsPartitionAssignorTest (*WIP* owner: Christo) # StreamsRebalanceListenerTest (owner: Christo) # TaskManagerTest # TimestampedKeyValueStoreMaterializerTest (owner: Christo) # WriteConsistencyVectorTest # AssignmentTestUtils (*WIP* owner: Christo) # StreamsMetricsImplTest # CachingInMemoryKeyValueStoreTest (owner: Christo) # CachingInMemorySessionStoreTest
[GitHub] [kafka] clolov commented on pull request #12505: KAFKA-14133: Replace EasyMock with Mockito in streams tests
clolov commented on PR #12505: URL: https://github.com/apache/kafka/pull/12505#issuecomment-1212166713 Tagging @mdedetrich @cadonna @ijuma for visibility and reviews :) -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [kafka] clolov commented on a diff in pull request #12505: KAFKA-14133: Replace EasyMock with Mockito in streams tests
clolov commented on code in PR #12505: URL: https://github.com/apache/kafka/pull/12505#discussion_r943639431 ## streams/src/test/java/org/apache/kafka/streams/state/internals/KeyValueStoreBuilderTest.java: ## @@ -126,27 +121,15 @@ public void shouldThrowNullPointerIfInnerIsNull() { Serdes.String(), new MockTime())); } -@Test Review Comment: As far as I can tell, the behaviour exercised in these tests is never enforced in the code so I have no idea how they have been passing up to now. I deleted them, but if I am shown to be wrong I will return them.
[GitHub] [kafka] clolov commented on a diff in pull request #12505: KAFKA-14133: Replace EasyMock with Mockito in streams tests
clolov commented on code in PR #12505: URL: https://github.com/apache/kafka/pull/12505#discussion_r943639431 ## streams/src/test/java/org/apache/kafka/streams/state/internals/KeyValueStoreBuilderTest.java: ## @@ -126,27 +121,15 @@ public void shouldThrowNullPointerIfInnerIsNull() { Serdes.String(), new MockTime())); } -@Test Review Comment: As far as I can tell from the code, these tests were testing something which isn't done in the code so I deleted them. If I am wrong I will return them.
[GitHub] [kafka] clolov opened a new pull request, #12505: KAFKA-14133: Replace EasyMock with Mockito in streams tests
clolov opened a new pull request, #12505: URL: https://github.com/apache/kafka/pull/12505 Batch 2 of the tests detailed in https://issues.apache.org/jira/browse/KAFKA-14133 which use EasyMock and need to be moved to Mockito.
[jira] [Assigned] (KAFKA-14134) Replace EasyMock with Mockito for WorkerConnectorTest
[ https://issues.apache.org/jira/browse/KAFKA-14134?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yash Mayya reassigned KAFKA-14134: -- Assignee: Yash Mayya > Replace EasyMock with Mockito for WorkerConnectorTest > - > > Key: KAFKA-14134 > URL: https://issues.apache.org/jira/browse/KAFKA-14134 > Project: Kafka > Issue Type: Sub-task > Components: KafkaConnect >Reporter: Yash Mayya >Assignee: Yash Mayya >Priority: Minor > Fix For: 3.4.0 > > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Assigned] (KAFKA-14147) Some map objects in KafkaConfigBackingStore grow in size monotonically
[ https://issues.apache.org/jira/browse/KAFKA-14147?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yash Mayya reassigned KAFKA-14147: -- Assignee: Yash Mayya > Some map objects in KafkaConfigBackingStore grow in size monotonically > -- > > Key: KAFKA-14147 > URL: https://issues.apache.org/jira/browse/KAFKA-14147 > Project: Kafka > Issue Type: Bug > Components: KafkaConnect >Reporter: Yash Mayya >Assignee: Yash Mayya >Priority: Minor > > Similar to https://issues.apache.org/jira/browse/KAFKA-8869 > > {{deferredTaskUpdates, connectorTaskCountRecords and > connectorTaskConfigGenerations in KafkaConfigBackingStore are never updated > when a connector is deleted, thus growing monotonically.}}
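The leak described above can be sketched with a toy bookkeeping class. This is illustrative only; the field name echoes KafkaConfigBackingStore, but the class and methods are hypothetical, not the real implementation:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the leak: per-connector bookkeeping maps are
// populated on (re)configuration but never cleaned up on deletion.
public class ConfigStoreSketch {
    private final Map<String, Integer> connectorTaskCountRecords = new HashMap<>();

    public void processConnectorConfig(String connector, int taskCount) {
        connectorTaskCountRecords.put(connector, taskCount);
    }

    // Behaviour described by the issue: deleting a connector leaves its entry behind.
    public void removeConnectorLeaky(String connector) {
        // no cleanup of connectorTaskCountRecords, so the map grows monotonically
    }

    // The proposed fix: drop bookkeeping entries along with the connector.
    public void removeConnectorFixed(String connector) {
        connectorTaskCountRecords.remove(connector);
    }

    public int trackedConnectors() {
        return connectorTaskCountRecords.size();
    }
}
```

In a long-lived worker that repeatedly creates and deletes connectors, the leaky variant never shrinks the map, which is exactly the monotonic growth the issue reports.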
[jira] [Updated] (KAFKA-14133) Remaining EasyMock to Mockito tests
[ https://issues.apache.org/jira/browse/KAFKA-14133?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Christo Lolov updated KAFKA-14133: -- Description: {color:#DE350B}There are tests which use both PowerMock and EasyMock. I have put those in https://issues.apache.org/jira/browse/KAFKA-14132. Tests here rely solely on EasyMock.{color} Unless stated in brackets the tests are in the streams module. A list of tests which still require to be moved from EasyMock to Mockito as of 2nd of August 2022 which do not have a Jira issue and do not have pull requests I am aware of which are opened: {color:#FF8B00}In Review{color} # WorkerConnectorTest (connect) (owner: Yash) # WorkerCoordinatorTest (connect) # RootResourceTest (connect) # ByteArrayProducerRecordEquals (connect) # TopologyTest # {color:#FF8B00}KStreamFlatTransformTest{color} (owner: Christo) # {color:#FF8B00}KStreamFlatTransformValuesTest{color} (owner: Christo) # {color:#FF8B00}KStreamPrintTest{color} (owner: Christo) # {color:#FF8B00}KStreamRepartitionTest{color} (owner: Christo) # {color:#FF8B00}MaterializedInternalTest{color} (owner: Christo) # {color:#FF8B00}TransformerSupplierAdapterTest{color} (owner: Christo) # {color:#FF8B00}KTableSuppressProcessorMetricsTest{color} (owner: Christo) # {color:#FF8B00}ClientUtilsTest{color} (owner: Christo) # {color:#FF8B00}HighAvailabilityStreamsPartitionAssignorTest{color} (owner: Christo) # KTableSuppressProcessorTest # InternalTopicManagerTest # ProcessorContextImplTest # ProcessorStateManagerTest # StandbyTaskTest # StoreChangelogReaderTest # StreamTaskTest # StreamThreadTest # StreamsAssignmentScaleTest (*WIP* owner: Christo) # StreamsPartitionAssignorTest (*WIP* owner: Christo) # StreamsRebalanceListenerTest (owner: Christo) # TaskManagerTest # TimestampedKeyValueStoreMaterializerTest (owner: Christo) # WriteConsistencyVectorTest # AssignmentTestUtils (*WIP* owner: Christo) # StreamsMetricsImplTest # CachingInMemoryKeyValueStoreTest (owner: Christo) 
# CachingInMemorySessionStoreTest (owner: Christo) # CachingPersistentSessionStoreTest (owner: Christo) # CachingPersistentWindowStoreTest (owner: Christo) # ChangeLoggingKeyValueBytesStoreTest (owner: Christo) # ChangeLoggingSessionBytesStoreTest # ChangeLoggingTimestampedKeyValueBytesStoreTest (owner: Christo) # ChangeLoggingTimestampedWindowBytesStoreTest # ChangeLoggingWindowBytesStoreTest # CompositeReadOnlyWindowStoreTest (owner: Christo) # KeyValueStoreBuilderTest (owner: Christo) # MeteredTimestampedWindowStoreTest # RocksDBStoreTest (owner: Christo) # StreamThreadStateStoreProviderTest (owner: Christo) # TimeOrderedCachingPersistentWindowStoreTest # TimeOrderedWindowStoreTest *The coverage report for the above tests after the change should be >= to what the coverage is now.* was: {color:#DE350B}There are tests which use both PowerMock and EasyMock. I have put those in https://issues.apache.org/jira/browse/KAFKA-14132. Tests here rely solely on EasyMock.{color} Unless stated in brackets the tests are in the streams module. 
A list of tests which still require to be moved from EasyMock to Mockito as of 2nd of August 2022 which do not have a Jira issue and do not have pull requests I am aware of which are opened: {color:#FF8B00}In Review{color} # WorkerConnectorTest (connect) (owner: Yash) # WorkerCoordinatorTest (connect) # RootResourceTest (connect) # ByteArrayProducerRecordEquals (connect) # TopologyTest # {color:#FF8B00}KStreamFlatTransformTest{color} (owner: Christo) # {color:#FF8B00}KStreamFlatTransformValuesTest{color} (owner: Christo) # {color:#FF8B00}KStreamPrintTest{color} (owner: Christo) # {color:#FF8B00}KStreamRepartitionTest{color} (owner: Christo) # {color:#FF8B00}MaterializedInternalTest{color} (owner: Christo) # {color:#FF8B00}TransformerSupplierAdapterTest{color} (owner: Christo) # {color:#FF8B00}KTableSuppressProcessorMetricsTest{color} (owner: Christo) # {color:#FF8B00}ClientUtilsTest{color} (owner: Christo) # {color:#FF8B00}HighAvailabilityStreamsPartitionAssignorTest{color} (owner: Christo) # KTableSuppressProcessorTest # InternalTopicManagerTest # ProcessorContextImplTest # ProcessorStateManagerTest # StandbyTaskTest # StoreChangelogReaderTest # StreamTaskTest # StreamThreadTest # StreamsAssignmentScaleTest (*WIP* owner: Christo) # StreamsPartitionAssignorTest (*WIP* owner: Christo) # StreamsRebalanceListenerTest (owner: Christo) # TaskManagerTest # TimestampedKeyValueStoreMaterializerTest (owner: Christo) # WriteConsistencyVectorTest # AssignmentTestUtils (*WIP* owner: Christo) # StreamsMetricsImplTest # CachingInMemoryKeyValueStoreTest (owner: Christo) # CachingInMemorySessionStoreTest # CachingPersistentSessionStoreTest (owner: Christo) # CachingPersistentWindowStoreTest # ChangeLoggingKeyValueBytesStoreTest # ChangeLoggingSessionBytesStoreTest # ChangeLoggingTimestampedKeyValueBytesStoreTest # ChangeLoggingTimestampedWindowBytesStoreTest #
[GitHub] [kafka] mimaison opened a new pull request, #12504: KAFKA-14149: Fix DynamicBrokerReconfigurationTest in 3.2
mimaison opened a new pull request, #12504: URL: https://github.com/apache/kafka/pull/12504 @mumrah @cmccabe It looks like a way to fix `DynamicBrokerReconfigurationTest` in 3.2 is to backport https://github.com/apache/kafka/commit/9c3f605fc78f297ecf5accdcdec18471c19cf7d6. I think it makes sense to also backport https://github.com/apache/kafka/commit/78038bca6688ce01f7df238d53142f5de3455863, otherwise `testKeyStoreAlter()` is really flaky. Before I push to 3.2, please take a look and confirm this makes sense. ### **Please do not merge this PR; instead I'll rebase the commits directly as we don't want to squash them.**
[jira] [Assigned] (KAFKA-14162) HoistField SMT should not return an immutable map for schemaless key/value
[ https://issues.apache.org/jira/browse/KAFKA-14162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yash Mayya reassigned KAFKA-14162: -- Assignee: Yash Mayya > HoistField SMT should not return an immutable map for schemaless key/value > -- > > Key: KAFKA-14162 > URL: https://issues.apache.org/jira/browse/KAFKA-14162 > Project: Kafka > Issue Type: Improvement > Components: KafkaConnect >Reporter: Yash Mayya >Assignee: Yash Mayya >Priority: Minor > > The HoistField SMT currently returns an immutable map for schemaless keys and > values - > https://github.com/apache/kafka/blob/22007fba7c7346c5416f4db4e104434fdab265ee/connect/transforms/src/main/java/org/apache/kafka/connect/transforms/HoistField.java#L62. > > This can cause issues in connectors if they attempt to modify the > SourceRecord's key or value since Kafka Connect doesn't document that these > keys/values are immutable. Furthermore, no other SMT does this. > > An example of a connector that would fail when schemaless values are used > with this SMT is Microsoft's Cosmos DB Sink Connector - > https://github.com/microsoft/kafka-connect-cosmosdb/blob/368566367a1dcbf9a91213067f1b9219a530bb16/src/main/java/com/azure/cosmos/kafka/connect/sink/CosmosDBSinkTask.java#L123-L130
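The failure mode is easy to reproduce outside Connect: `Collections.singletonMap` returns an immutable map, so any downstream mutation throws `UnsupportedOperationException`. A minimal sketch of the problem and the obvious fix, with hypothetical class and method names:

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

public class HoistFieldSketch {
    // Mirrors the problematic pattern: the singleton map is immutable, so a
    // downstream connector that mutates the hoisted value blows up.
    public static Map<String, Object> hoistImmutable(String field, Object value) {
        return Collections.singletonMap(field, value);
    }

    // The fix suggested by the issue: hand back a mutable map instead.
    public static Map<String, Object> hoistMutable(String field, Object value) {
        Map<String, Object> hoisted = new HashMap<>();
        hoisted.put(field, value);
        return hoisted;
    }
}
```

Any connector that calls `put` on the returned key or value, as the Cosmos DB sink does, hits the exception with the first variant and works with the second.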
[GitHub] [kafka] jsancio commented on a diff in pull request #12274: KAFKA-13959: Controller should unfence Broker with busy metadata log
jsancio commented on code in PR #12274: URL: https://github.com/apache/kafka/pull/12274#discussion_r943606596 ## metadata/src/main/java/org/apache/kafka/controller/ReplicationControlManager.java: ## @@ -1382,7 +1383,6 @@ ControllerResult processBrokerHeartbeat( heartbeatManager.touch(brokerId, states.next().fenced(), request.currentMetadataOffset()); -boolean isCaughtUp = request.currentMetadataOffset() >= lastCommittedOffset; Review Comment: Minor but I would revert this move as it seems unnecessary. ## raft/src/main/java/org/apache/kafka/raft/internals/FuturePurgatory.java: ## @@ -56,7 +56,7 @@ CompletableFuture await(T threshold, long maxWaitTimeMs); /** - * Complete awaiting futures whose associated values are larger than the given threshold value. + * Complete awaiting futures whose associated values are smaller than the given threshold value. Review Comment: Thanks for this fix. I think the phrase "associated values" is still misleading. How about: > Complete awaiting futures whose threshold value from {@link await} is smaller than the given threshold value. ## metadata/src/test/java/org/apache/kafka/metadata/RecordTestUtils.java: ## @@ -66,7 +66,18 @@ public static void replayAll(Object target, Optional.class); method.invoke(target, record, Optional.empty()); } catch (NoSuchMethodException t) { -// ignore +try { +Method method = target.getClass().getMethod("replay", +record.getClass(), +long.class); +method.invoke(target, record, 0L); +} catch (NoSuchMethodException i) { +// ignore +} catch (InvocationTargetException i) { Review Comment: We have duplicated this pattern 3 times. I think we can remove this duplication if we add a `try` for the entire method. E.g. it starts at line 57 and ends after the `for` loop.
[GitHub] [kafka] jsancio commented on pull request #12274: KAFKA-13959: Controller should unfence Broker with busy metadata log
jsancio commented on PR #12274: URL: https://github.com/apache/kafka/pull/12274#issuecomment-1212113291 @dengziming take a look at the test failures. All of the failures in `ClusterControlManagerTest` seem related to this change.
[jira] [Updated] (KAFKA-14133) Remaining EasyMock to Mockito tests
[ https://issues.apache.org/jira/browse/KAFKA-14133?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Christo Lolov updated KAFKA-14133: -- Description: {color:#DE350B}There are tests which use both PowerMock and EasyMock. I have put those in https://issues.apache.org/jira/browse/KAFKA-14132. Tests here rely solely on EasyMock.{color} Unless stated in brackets the tests are in the streams module. A list of tests which still require to be moved from EasyMock to Mockito as of 2nd of August 2022 which do not have a Jira issue and do not have pull requests I am aware of which are opened: {color:#FF8B00}In Review{color} # WorkerConnectorTest (connect) (owner: Yash) # WorkerCoordinatorTest (connect) # RootResourceTest (connect) # ByteArrayProducerRecordEquals (connect) # TopologyTest # {color:#FF8B00}KStreamFlatTransformTest{color} (owner: Christo) # {color:#FF8B00}KStreamFlatTransformValuesTest{color} (owner: Christo) # {color:#FF8B00}KStreamPrintTest{color} (owner: Christo) # {color:#FF8B00}KStreamRepartitionTest{color} (owner: Christo) # {color:#FF8B00}MaterializedInternalTest{color} (owner: Christo) # {color:#FF8B00}TransformerSupplierAdapterTest{color} (owner: Christo) # {color:#FF8B00}KTableSuppressProcessorMetricsTest{color} (owner: Christo) # {color:#FF8B00}ClientUtilsTest{color} (owner: Christo) # {color:#FF8B00}HighAvailabilityStreamsPartitionAssignorTest{color} (owner: Christo) # KTableSuppressProcessorTest # InternalTopicManagerTest # ProcessorContextImplTest # ProcessorStateManagerTest # StandbyTaskTest # StoreChangelogReaderTest # StreamTaskTest # StreamThreadTest # StreamsAssignmentScaleTest (*WIP* owner: Christo) # StreamsPartitionAssignorTest (*WIP* owner: Christo) # StreamsRebalanceListenerTest (owner: Christo) # TaskManagerTest # TimestampedKeyValueStoreMaterializerTest (owner: Christo) # WriteConsistencyVectorTest # AssignmentTestUtils (*WIP* owner: Christo) # StreamsMetricsImplTest # CachingInMemoryKeyValueStoreTest (owner: Christo) 
# CachingInMemorySessionStoreTest # CachingPersistentSessionStoreTest (owner: Christo) # CachingPersistentWindowStoreTest # ChangeLoggingKeyValueBytesStoreTest # ChangeLoggingSessionBytesStoreTest # ChangeLoggingTimestampedKeyValueBytesStoreTest # ChangeLoggingTimestampedWindowBytesStoreTest # ChangeLoggingWindowBytesStoreTest # CompositeReadOnlyWindowStoreTest # KeyValueStoreBuilderTest # MeteredTimestampedWindowStoreTest # RocksDBStoreTest # StreamThreadStateStoreProviderTest # TimeOrderedCachingPersistentWindowStoreTest # TimeOrderedWindowStoreTest *The coverage report for the above tests after the change should be >= to what the coverage is now.* was: {color:#DE350B}There are tests which use both PowerMock and EasyMock. I have put those in https://issues.apache.org/jira/browse/KAFKA-14132. Tests here rely solely on EasyMock.{color} Unless stated in brackets the tests are in the streams module. A list of tests which still require to be moved from EasyMock to Mockito as of 2nd of August 2022 which do not have a Jira issue and do not have pull requests I am aware of which are opened: {color:#FF8B00}In Review{color} # WorkerConnectorTest (connect) (owner: Yash) # WorkerCoordinatorTest (connect) # RootResourceTest (connect) # ByteArrayProducerRecordEquals (connect) # TopologyTest # {color:#FF8B00}KStreamFlatTransformTest{color} (owner: Christo) # {color:#FF8B00}KStreamFlatTransformValuesTest{color} (owner: Christo) # {color:#FF8B00}KStreamPrintTest{color} (owner: Christo) # {color:#FF8B00}KStreamRepartitionTest{color} (owner: Christo) # {color:#FF8B00}MaterializedInternalTest{color} (owner: Christo) # {color:#FF8B00}TransformerSupplierAdapterTest{color} (owner: Christo) # {color:#FF8B00}KTableSuppressProcessorMetricsTest{color} (owner: Christo) # {color:#FF8B00}ClientUtilsTest{color} (owner: Christo) # {color:#FF8B00}HighAvailabilityStreamsPartitionAssignorTest{color} (owner: Christo) # KTableSuppressProcessorTest # InternalTopicManagerTest # 
ProcessorContextImplTest # ProcessorStateManagerTest # StandbyTaskTest # StoreChangelogReaderTest # StreamTaskTest # StreamThreadTest # StreamsAssignmentScaleTest (*WIP* owner: Christo) # StreamsPartitionAssignorTest (*WIP* owner: Christo) # StreamsRebalanceListenerTest (owner: Christo) # TaskManagerTest # TimestampedKeyValueStoreMaterializerTest (owner: Christo) # WriteConsistencyVectorTest # AssignmentTestUtils (*WIP* owner: Christo) # StreamsMetricsImplTest # CachingInMemoryKeyValueStoreTest (owner: Christo) # CachingInMemorySessionStoreTest # CachingPersistentSessionStoreTest # CachingPersistentWindowStoreTest # ChangeLoggingKeyValueBytesStoreTest # ChangeLoggingSessionBytesStoreTest # ChangeLoggingTimestampedKeyValueBytesStoreTest # ChangeLoggingTimestampedWindowBytesStoreTest # ChangeLoggingWindowBytesStoreTest # CompositeReadOnlyWindowStoreTest # KeyValueStoreBuilderTest # MeteredTimestampedWindowStoreTest # RocksDBStoreTest #
[GitHub] [kafka] chia7712 opened a new pull request, #12503: Appending value to LIST config should not generate empty string with …
chia7712 opened a new pull request, #12503: URL: https://github.com/apache/kafka/pull/12503 We noticed this issue when appending a value to `leader.replication.throttled.replicas`. The default value is the empty string, so it produces `,0:0` rather than `0:0`. Also, parsing throttled replicas fails when there is an empty string (see [parseThrottledPartitions](https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/server/ConfigHandler.scala#L91)) ### Committer Checklist (excluded from commit message) - [ ] Verify design and implementation - [ ] Verify test coverage and CI build status - [ ] Verify documentation (including upgrade notes)
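The leading-comma symptom can be sketched in isolation: appending to the empty-string default with a naive join emits a spurious empty entry, while a safe append filters empty entries out first. The helper names below are hypothetical, not the actual config code:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class ListConfigAppendSketch {
    // Naive append: with the empty-string default this yields ",0:0",
    // which the downstream throttled-replica parser then rejects.
    public static String appendNaive(String current, String value) {
        return current + "," + value;
    }

    // Safe append: drop empty entries before joining, so "" + "0:0" -> "0:0".
    public static String appendSafe(String current, String value) {
        List<String> entries = new ArrayList<>(Arrays.asList(current.split(",")));
        entries.add(value);
        return entries.stream()
                .filter(e -> !e.isEmpty())
                .collect(Collectors.joining(","));
    }
}
```

The filter also makes the append idempotent with respect to stray empty segments already present in the stored value.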
[jira] [Updated] (KAFKA-14133) Remaining EasyMock to Mockito tests
[ https://issues.apache.org/jira/browse/KAFKA-14133?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Christo Lolov updated KAFKA-14133: -- Description: {color:#DE350B}There are tests which use both PowerMock and EasyMock. I have put those in https://issues.apache.org/jira/browse/KAFKA-14132. Tests here rely solely on EasyMock.{color} Unless stated in brackets the tests are in the streams module. A list of tests which still require to be moved from EasyMock to Mockito as of 2nd of August 2022 which do not have a Jira issue and do not have pull requests I am aware of which are opened: {color:#FF8B00}In Review{color} # WorkerConnectorTest (connect) (owner: Yash) # WorkerCoordinatorTest (connect) # RootResourceTest (connect) # ByteArrayProducerRecordEquals (connect) # TopologyTest # {color:#FF8B00}KStreamFlatTransformTest{color} (owner: Christo) # {color:#FF8B00}KStreamFlatTransformValuesTest{color} (owner: Christo) # {color:#FF8B00}KStreamPrintTest{color} (owner: Christo) # {color:#FF8B00}KStreamRepartitionTest{color} (owner: Christo) # {color:#FF8B00}MaterializedInternalTest{color} (owner: Christo) # {color:#FF8B00}TransformerSupplierAdapterTest{color} (owner: Christo) # {color:#FF8B00}KTableSuppressProcessorMetricsTest{color} (owner: Christo) # {color:#FF8B00}ClientUtilsTest{color} (owner: Christo) # {color:#FF8B00}HighAvailabilityStreamsPartitionAssignorTest{color} (owner: Christo) # KTableSuppressProcessorTest # InternalTopicManagerTest # ProcessorContextImplTest # ProcessorStateManagerTest # StandbyTaskTest # StoreChangelogReaderTest # StreamTaskTest # StreamThreadTest # StreamsAssignmentScaleTest (*WIP* owner: Christo) # StreamsPartitionAssignorTest (*WIP* owner: Christo) # StreamsRebalanceListenerTest (owner: Christo) # TaskManagerTest # TimestampedKeyValueStoreMaterializerTest (owner: Christo) # WriteConsistencyVectorTest # AssignmentTestUtils (*WIP* owner: Christo) # StreamsMetricsImplTest # CachingInMemoryKeyValueStoreTest (owner: Christo) 
# CachingInMemorySessionStoreTest # CachingPersistentSessionStoreTest # CachingPersistentWindowStoreTest # ChangeLoggingKeyValueBytesStoreTest # ChangeLoggingSessionBytesStoreTest # ChangeLoggingTimestampedKeyValueBytesStoreTest # ChangeLoggingTimestampedWindowBytesStoreTest # ChangeLoggingWindowBytesStoreTest # CompositeReadOnlyWindowStoreTest # KeyValueStoreBuilderTest # MeteredTimestampedWindowStoreTest # RocksDBStoreTest # StreamThreadStateStoreProviderTest # TimeOrderedCachingPersistentWindowStoreTest # TimeOrderedWindowStoreTest *The coverage report for the above tests after the change should be >= to what the coverage is now.* was: {color:#DE350B}There are tests which use both PowerMock and EasyMock. I have put those in https://issues.apache.org/jira/browse/KAFKA-14132. Tests here rely solely on EasyMock.{color} Unless stated in brackets the tests are in the streams module. A list of tests which still require to be moved from EasyMock to Mockito as of 2nd of August 2022 which do not have a Jira issue and do not have pull requests I am aware of which are opened: {color:#FF8B00}In Review{color} # WorkerConnectorTest (connect) (owner: Yash) # WorkerCoordinatorTest (connect) # RootResourceTest (connect) # ByteArrayProducerRecordEquals (connect) # TopologyTest # {color:#FF8B00}KStreamFlatTransformTest{color} (owner: Christo) # {color:#FF8B00}KStreamFlatTransformValuesTest{color} (owner: Christo) # {color:#FF8B00}KStreamPrintTest{color} (owner: Christo) # {color:#FF8B00}KStreamRepartitionTest{color} (owner: Christo) # {color:#FF8B00}MaterializedInternalTest{color} (owner: Christo) # {color:#FF8B00}TransformerSupplierAdapterTest{color} (owner: Christo) # {color:#FF8B00}KTableSuppressProcessorMetricsTest{color} (owner: Christo) # {color:#FF8B00}ClientUtilsTest{color} (owner: Christo) # {color:#FF8B00}HighAvailabilityStreamsPartitionAssignorTest{color} (owner: Christo) # KTableSuppressProcessorTest # InternalTopicManagerTest # ProcessorContextImplTest # 
ProcessorStateManagerTest # StandbyTaskTest # StoreChangelogReaderTest # StreamTaskTest # StreamThreadTest # StreamsAssignmentScaleTest (*WIP* owner: Christo) # StreamsPartitionAssignorTest (*WIP* owner: Christo) # StreamsRebalanceListenerTest # TaskManagerTest # TimestampedKeyValueStoreMaterializerTest (owner: Christo) # WriteConsistencyVectorTest # AssignmentTestUtils (*WIP* owner: Christo) # StreamsMetricsImplTest # CachingInMemoryKeyValueStoreTest (owner: Christo) # CachingInMemorySessionStoreTest # CachingPersistentSessionStoreTest # CachingPersistentWindowStoreTest # ChangeLoggingKeyValueBytesStoreTest # ChangeLoggingSessionBytesStoreTest # ChangeLoggingTimestampedKeyValueBytesStoreTest # ChangeLoggingTimestampedWindowBytesStoreTest # ChangeLoggingWindowBytesStoreTest # CompositeReadOnlyWindowStoreTest # KeyValueStoreBuilderTest # MeteredTimestampedWindowStoreTest # RocksDBStoreTest # StreamThreadStateStoreProviderTest #
[jira] [Commented] (KAFKA-14098) Internal Kafka clients used by Kafka Connect should have distinguishable client IDs
[ https://issues.apache.org/jira/browse/KAFKA-14098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17578486#comment-17578486 ] Mickael Maison commented on KAFKA-14098: This would be very valuable. When looking at metrics you have no idea what "consumer-connect-cluster-1" is actually doing. > Internal Kafka clients used by Kafka Connect should have distinguishable > client IDs > --- > > Key: KAFKA-14098 > URL: https://issues.apache.org/jira/browse/KAFKA-14098 > Project: Kafka > Issue Type: Improvement > Components: KafkaConnect >Reporter: Chris Egerton >Assignee: Chris Egerton >Priority: Minor > > KAFKA-5061 dealt with the lack of automatically-provided client IDs for the > Kafka clients used for source and sink tasks, and has been addressed for some > time now. Additionally, when new features have required new Kafka clients to > be brought up for tasks (such as the need for an admin client to create > topics for source tasks introduced by > [KIP-158|https://cwiki.apache.org/confluence/display/KAFKA/KIP-158%3A+Kafka+Connect+should+allow+source+connectors+to+set+topic-specific+settings+for+new+topics]), > we have taken care to ensure that these clients are also given meaningful > client IDs. > However, the internal clients used by Kafka Connect workers to create, > consume from, and produce to internal topics do not have > automatically-provided client IDs at the moment, and it is up to users to > manually supply them. Worse yet, even if a user does manually supply a client > ID for their Connect cluster's internal clients (by setting the {{client.id}} > property in their worker configuration), there is no distinction made between > the clients created for interacting with different topics. 
> If no {{client.id}} property is set in the worker config, Kafka Connect > should automatically provide client IDs for its internal clients that > include the group ID of the cluster (if running in distributed mode) and the > purpose of the client (such as {{{}statuses{}}}, {{{}configs{}}}, or > {{{}offsets{}}}).
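One way the proposed naming could look; the exact format below is an assumption for illustration, not what the issue specifies:

```java
// Hypothetical derivation of a distinguishable client.id for Connect's
// internal clients: honor a user-supplied client.id, otherwise combine the
// Connect group ID with the client's purpose (statuses/configs/offsets).
public class ConnectClientIdSketch {
    public static String clientId(String userClientId, String groupId, String purpose) {
        if (userClientId != null && !userClientId.isEmpty()) {
            return userClientId; // user config always wins
        }
        return "connect-" + groupId + "-" + purpose;
    }
}
```

With such a scheme, a metric tagged `connect-cluster-1-configs` immediately identifies which cluster and which internal topic the client serves, instead of an opaque `consumer-connect-cluster-1`.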
[GitHub] [kafka] dstelljes commented on pull request #12295: KAFKA-13586: Prevent exception thrown during connector update from crashing distributed herder
dstelljes commented on PR #12295: URL: https://github.com/apache/kafka/pull/12295#issuecomment-1212026332 Sorry for the wait on this—it looks like work to switch the test to Mockito is in flight now, so I’ll hang tight for now and rebase once that lands. I do have an account on the ASF JIRA; dstelljes is my username there as well. I tried to self-assign when I picked this up but didn’t have the permissions for it.
[jira] [Updated] (KAFKA-14133) Remaining EasyMock to Mockito tests
[ https://issues.apache.org/jira/browse/KAFKA-14133?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Christo Lolov updated KAFKA-14133: -- Description: {color:#DE350B}There are tests which use both PowerMock and EasyMock. I have put those in https://issues.apache.org/jira/browse/KAFKA-14132. Tests here rely solely on EasyMock.{color} Unless stated in brackets the tests are in the streams module. A list of tests which still require to be moved from EasyMock to Mockito as of 2nd of August 2022 which do not have a Jira issue and do not have pull requests I am aware of which are opened: {color:#FF8B00}In Review{color} # WorkerConnectorTest (connect) (owner: Yash) # WorkerCoordinatorTest (connect) # RootResourceTest (connect) # ByteArrayProducerRecordEquals (connect) # TopologyTest # {color:#FF8B00}KStreamFlatTransformTest{color} (owner: Christo) # {color:#FF8B00}KStreamFlatTransformValuesTest{color} (owner: Christo) # {color:#FF8B00}KStreamPrintTest{color} (owner: Christo) # {color:#FF8B00}KStreamRepartitionTest{color} (owner: Christo) # {color:#FF8B00}MaterializedInternalTest{color} (owner: Christo) # {color:#FF8B00}TransformerSupplierAdapterTest{color} (owner: Christo) # {color:#FF8B00}KTableSuppressProcessorMetricsTest{color} (owner: Christo) # {color:#FF8B00}ClientUtilsTest{color} (owner: Christo) # {color:#FF8B00}HighAvailabilityStreamsPartitionAssignorTest{color} (owner: Christo) # KTableSuppressProcessorTest # InternalTopicManagerTest # ProcessorContextImplTest # ProcessorStateManagerTest # StandbyTaskTest # StoreChangelogReaderTest # StreamTaskTest # StreamThreadTest # StreamsAssignmentScaleTest (*WIP* owner: Christo) # StreamsPartitionAssignorTest (*WIP* owner: Christo) # StreamsRebalanceListenerTest # TaskManagerTest # TimestampedKeyValueStoreMaterializerTest (owner: Christo) # WriteConsistencyVectorTest # AssignmentTestUtils (*WIP* owner: Christo) # StreamsMetricsImplTest # CachingInMemoryKeyValueStoreTest (owner: Christo) # 
CachingInMemorySessionStoreTest # CachingPersistentSessionStoreTest # CachingPersistentWindowStoreTest # ChangeLoggingKeyValueBytesStoreTest # ChangeLoggingSessionBytesStoreTest # ChangeLoggingTimestampedKeyValueBytesStoreTest # ChangeLoggingTimestampedWindowBytesStoreTest # ChangeLoggingWindowBytesStoreTest # CompositeReadOnlyWindowStoreTest # KeyValueStoreBuilderTest # MeteredTimestampedWindowStoreTest # RocksDBStoreTest # StreamThreadStateStoreProviderTest # TimeOrderedCachingPersistentWindowStoreTest # TimeOrderedWindowStoreTest *The coverage report for the above tests after the change should be >= to what the coverage is now.* was: {color:#DE350B}There are tests which use both PowerMock and EasyMock. I have put those in https://issues.apache.org/jira/browse/KAFKA-14132. Tests here rely solely on EasyMock.{color} Unless stated in brackets the tests are in the streams module. A list of tests which still require to be moved from EasyMock to Mockito as of 2nd of August 2022 which do not have a Jira issue and do not have pull requests I am aware of which are opened: {color:#FF8B00}In Review{color} # WorkerConnectorTest (connect) (owner: Yash) # WorkerCoordinatorTest (connect) # RootResourceTest (connect) # ByteArrayProducerRecordEquals (connect) # TopologyTest # {color:#FF8B00}KStreamFlatTransformTest{color} (owner: Christo) # {color:#FF8B00}KStreamFlatTransformValuesTest{color} (owner: Christo) # {color:#FF8B00}KStreamPrintTest{color} (owner: Christo) # {color:#FF8B00}KStreamRepartitionTest{color} (owner: Christo) # {color:#FF8B00}MaterializedInternalTest{color} (owner: Christo) # {color:#FF8B00}TransformerSupplierAdapterTest{color} (owner: Christo) # {color:#FF8B00}KTableSuppressProcessorMetricsTest{color} (owner: Christo) # {color:#FF8B00}ClientUtilsTest{color} (owner: Christo) # {color:#FF8B00}HighAvailabilityStreamsPartitionAssignorTest{color} (owner: Christo) # KTableSuppressProcessorTest # InternalTopicManagerTest # ProcessorContextImplTest # 
ProcessorStateManagerTest # StandbyTaskTest # StoreChangelogReaderTest # StreamTaskTest # StreamThreadTest # StreamsAssignmentScaleTest (*WIP* owner: Christo) # StreamsPartitionAssignorTest (*WIP* owner: Christo) # StreamsRebalanceListenerTest # TaskManagerTest # TimestampedKeyValueStoreMaterializerTest # WriteConsistencyVectorTest # AssignmentTestUtils (*WIP* owner: Christo) # StreamsMetricsImplTest # CachingInMemoryKeyValueStoreTest # CachingInMemorySessionStoreTest # CachingPersistentSessionStoreTest # CachingPersistentWindowStoreTest # ChangeLoggingKeyValueBytesStoreTest # ChangeLoggingSessionBytesStoreTest # ChangeLoggingTimestampedKeyValueBytesStoreTest # ChangeLoggingTimestampedWindowBytesStoreTest # ChangeLoggingWindowBytesStoreTest # CompositeReadOnlyWindowStoreTest # KeyValueStoreBuilderTest # MeteredTimestampedWindowStoreTest # RocksDBStoreTest # StreamThreadStateStoreProviderTest # TimeOrderedCachingPersistentWindowStoreTest #
[GitHub] [kafka] andymg3 commented on a diff in pull request #12499: KAFKA-14154; Ensure AlterPartition not sent to stale controller
andymg3 commented on code in PR #12499: URL: https://github.com/apache/kafka/pull/12499#discussion_r941967874 ## core/src/main/scala/kafka/server/BrokerToControllerChannelManager.scala: ## @@ -364,15 +423,15 @@ class BrokerToControllerRequestThread( } override def doWork(): Unit = { -if (activeControllerAddress().isDefined) { +if (activeControllerOpt().isDefined) { super.pollOnce(Long.MaxValue) } else { debug("Controller isn't cached, looking for local metadata changes") controllerNodeProvider.get() match { -case Some(controllerNode) => - info(s"Recorded new controller, from now on will use broker $controllerNode") - updateControllerAddress(controllerNode) - metadataUpdater.setNodes(Seq(controllerNode).asJava) +case Some(controllerNodeAndEpoch) => Review Comment: Is this where/how eventually the `LeaderAndIsr` from the new controller gets applied? -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Resolved] (KAFKA-14161) kafka-consumer-group.sh --list not list all consumer groups
[ https://issues.apache.org/jira/browse/KAFKA-14161?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Luke Chen resolved KAFKA-14161. --- Resolution: Not A Problem > kafka-consumer-group.sh --list not list all consumer groups > --- > > Key: KAFKA-14161 > URL: https://issues.apache.org/jira/browse/KAFKA-14161 > Project: Kafka > Issue Type: Bug >Affects Versions: 2.8.1 > Environment: Cenots7.5 >Reporter: yandufeng >Priority: Major > > Hello, we use kafka 2.8.1 version in production, i use > kafka-console-consumer.sh command or flink sql job consume data of kafka > topic, but i use kafka-consumer-group.sh --list --bootstrap-server:9092, only > get two consumer group, why not get other consumer group? -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (KAFKA-14161) kafka-consumer-group.sh --list not list all consumer groups
[ https://issues.apache.org/jira/browse/KAFKA-14161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17578351#comment-17578351 ] Luke Chen commented on KAFKA-14161: --- [~yandufeng] , you should check your consumer setting. Not every consumer belongs to one group, and also, multiple consumers might belong to the same group. I'm not sure how Flink works with Kafka, but `kafka-console-consumer.sh` won't start a consumer group if you only use default option. I'm going to close this ticket, unless you found there's indeed a consumer group but kafka-consumer-group.sh cannot find it. Thanks. > kafka-consumer-group.sh --list not list all consumer groups > --- > > Key: KAFKA-14161 > URL: https://issues.apache.org/jira/browse/KAFKA-14161 > Project: Kafka > Issue Type: Bug >Affects Versions: 2.8.1 > Environment: Cenots7.5 >Reporter: yandufeng >Priority: Major > > Hello, we use kafka 2.8.1 version in production, i use > kafka-console-consumer.sh command or flink sql job consume data of kafka > topic, but i use kafka-consumer-group.sh --list --bootstrap-server:9092, only > get two consumer group, why not get other consumer group? -- This message was sent by Atlassian Jira (v8.20.10#820010)
[GitHub] [kafka] tombentley commented on a diff in pull request #12046: KAFKA-10360: Allow disabling JMX Reporter (KIP-830)
tombentley commented on code in PR #12046: URL: https://github.com/apache/kafka/pull/12046#discussion_r943247605 ## server-common/src/main/java/org/apache/kafka/server/metrics/KafkaYammerMetrics.java: ## @@ -53,16 +72,21 @@ public static MetricsRegistry defaultRegistry() { } private final MetricsRegistry metricsRegistry = new MetricsRegistry(); -private final FilteringJmxReporter jmxReporter = new FilteringJmxReporter(metricsRegistry, -metricName -> true); +private FilteringJmxReporter jmxReporter; private KafkaYammerMetrics() { -jmxReporter.start(); Review Comment: It was https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/server/BrokerServer.scala#L157 where it wasn't clear to me when `configure()` (and therefore now `start()`) gets called. Since the KIP didn't mention the Yammer reporters it's probably a good idea to not touch them here. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [kafka] ashmeet13 commented on a diff in pull request #12414: KAFKA-14073 Logging the reason for Snapshot
ashmeet13 commented on code in PR #12414: URL: https://github.com/apache/kafka/pull/12414#discussion_r943241275 ## raft/src/main/java/org/apache/kafka/raft/ReplicatedCounter.java: ## @@ -107,6 +107,8 @@ public synchronized void handleCommit(BatchReader reader) { } log.debug("Counter incremented from {} to {}", initialCommitted, committed); +// A snapshot is being taken here too, not being able to - +// `import org.apache.kafka.metadata.utils.SnapshotReason`, figure out why? Review Comment: I see, so will we ignore this snapshot request and not log the reason for it? -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [kafka] ashmeet13 commented on a diff in pull request #12414: KAFKA-14073 Logging the reason for Snapshot
ashmeet13 commented on code in PR #12414: URL: https://github.com/apache/kafka/pull/12414#discussion_r943240735 ## core/src/main/scala/kafka/server/metadata/BrokerMetadataSnapshotter.scala: ## @@ -59,11 +60,19 @@ class BrokerMetadataSnapshotter( val writer = writerBuilder.build( image.highestOffsetAndEpoch().offset, image.highestOffsetAndEpoch().epoch, -lastContainedLogTime - ) +lastContainedLogTime) if (writer.nonEmpty) { _currentSnapshotOffset = image.highestOffsetAndEpoch().offset -info(s"Creating a new snapshot at offset ${_currentSnapshotOffset}...") + +var stringReasons: Set[String] = Set() + +snapshotReasons.foreach(r => stringReasons += r.toString) + +if (stringReasons.isEmpty){ + stringReasons += SnapshotReason.UnknownReason.toString +} + +info(s"Creating a new snapshot at offset ${_currentSnapshotOffset} because, ${stringReasons.mkString(" and ")}") Review Comment: Made this change -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [kafka] tombentley commented on a diff in pull request #12046: KAFKA-10360: Allow disabling JMX Reporter (KIP-830)
tombentley commented on code in PR #12046: URL: https://github.com/apache/kafka/pull/12046#discussion_r943234303 ## server-common/src/main/java/org/apache/kafka/server/metrics/KafkaYammerMetrics.java: ## @@ -53,16 +72,21 @@ public static MetricsRegistry defaultRegistry() { } private final MetricsRegistry metricsRegistry = new MetricsRegistry(); -private final FilteringJmxReporter jmxReporter = new FilteringJmxReporter(metricsRegistry, -metricName -> true); +private FilteringJmxReporter jmxReporter; private KafkaYammerMetrics() { -jmxReporter.start(); -Exit.addShutdownHook("kafka-jmx-shutdown-hook", jmxReporter::shutdown); } @Override +@SuppressWarnings("deprecation") public void configure(Map configs) { +AbstractConfig config = new AbstractConfig(CONFIG_DEF, configs); +List reporters = config.getList(CommonClientConfigs.METRIC_REPORTER_CLASSES_CONFIG); +if (config.getBoolean(CommonClientConfigs.AUTO_INCLUDE_JMX_REPORTER_CONFIG) || reporters.stream().anyMatch(r -> JmxReporter.class.getName().equals(r))) { Review Comment: OK, well thanks for trying. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [kafka] yashmayya commented on pull request #12502: KAFKA-14162: HoistField SMT should not return an immutable map for schemaless key/value
yashmayya commented on PR #12502: URL: https://github.com/apache/kafka/pull/12502#issuecomment-1211696790 @C0urante another small PR for you to review whenever you get a chance? Thanks! -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [kafka] yashmayya opened a new pull request, #12502: KAFKA-14162: HoistField SMT should not return an immutable map for schemaless key/value
yashmayya opened a new pull request, #12502: URL: https://github.com/apache/kafka/pull/12502 - https://issues.apache.org/jira/browse/KAFKA-14162 - The HoistField SMT currently returns an immutable map for schemaless keys and values - https://github.com/apache/kafka/blob/22007fba7c7346c5416f4db4e104434fdab265ee/connect/transforms/src/main/java/org/apache/kafka/connect/transforms/HoistField.java#L62 - This can cause failures in connectors if they attempt to modify the SourceRecord's key or value since Kafka Connect doesn't document that these keys/values are immutable. Furthermore, no other SMT does this. - An example of a connector that would fail when schemaless values are used with this SMT is Microsoft's Cosmos DB Sink Connector - https://github.com/microsoft/kafka-connect-cosmosdb/blob/368566367a1dcbf9a91213067f1b9219a530bb16/src/main/java/com/azure/cosmos/kafka/connect/sink/CosmosDBSinkTask.java#L123-L130 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Created] (KAFKA-14162) HoistField SMT should not return an immutable map for schemaless key/value
Yash Mayya created KAFKA-14162: -- Summary: HoistField SMT should not return an immutable map for schemaless key/value Key: KAFKA-14162 URL: https://issues.apache.org/jira/browse/KAFKA-14162 Project: Kafka Issue Type: Improvement Components: KafkaConnect Reporter: Yash Mayya The HoistField SMT currently returns an immutable map for schemaless keys and values - https://github.com/apache/kafka/blob/22007fba7c7346c5416f4db4e104434fdab265ee/connect/transforms/src/main/java/org/apache/kafka/connect/transforms/HoistField.java#L62. This can cause issues in connectors if they attempt to modify the SourceRecord's key or value since Kafka Connect doesn't document that these keys/values are immutable. Furthermore, no other SMT does this. An example of a connector that would fail when schemaless values are used with this SMT is Microsoft's Cosmos DB Sink Connector - https://github.com/microsoft/kafka-connect-cosmosdb/blob/368566367a1dcbf9a91213067f1b9219a530bb16/src/main/java/com/azure/cosmos/kafka/connect/sink/CosmosDBSinkTask.java#L123-L130 -- This message was sent by Atlassian Jira (v8.20.10#820010)
[GitHub] [kafka] showuon commented on a diff in pull request #12392: KAFKA-14053: Transactional producer should bump the epoch and skip ab…
showuon commented on code in PR #12392: URL: https://github.com/apache/kafka/pull/12392#discussion_r943219606 ## clients/src/test/java/org/apache/kafka/clients/producer/internals/TransactionManagerTest.java: ## @@ -2594,27 +2596,20 @@ public void testDropCommitOnBatchExpiry() throws InterruptedException { } catch (ExecutionException e) { assertTrue(e.getCause() instanceof TimeoutException); } + runUntil(commitResult::isCompleted); // the commit shouldn't be completed without being sent since the produce request failed. assertFalse(commitResult.isSuccessful()); // the commit shouldn't succeed since the produce request failed. -assertThrows(TimeoutException.class, commitResult::await); +assertThrows(KafkaException.class, commitResult::await); -assertTrue(transactionManager.hasAbortableError()); -assertTrue(transactionManager.hasOngoingTransaction()); +assertTrue(transactionManager.hasFatalBumpableError()); +assertFalse(transactionManager.hasOngoingTransaction()); assertFalse(transactionManager.isCompleting()); -assertTrue(transactionManager.transactionContainsPartition(tp0)); -TransactionalRequestResult abortResult = transactionManager.beginAbort(); - -prepareEndTxnResponse(Errors.NONE, TransactionResult.ABORT, producerId, epoch); -prepareInitPidResponse(Errors.NONE, false, producerId, (short) (epoch + 1)); -runUntil(abortResult::isCompleted); -assertTrue(abortResult.isSuccessful()); -assertFalse(transactionManager.hasOngoingTransaction()); -assertFalse(transactionManager.transactionContainsPartition(tp0)); +assertThrows(KafkaException.class, () -> transactionManager.beginAbort()); Review Comment: So, it looks like after this patch, when batch expiration or timeout error, the producer will enter fatal error state after bumping epoch. But before this patch, the we'll abort it and continue the transaction work. Is that right? Sorry, I didn't realize this situation. This will impact current user behavior, so we need more discussion. 
I'll ping some experts in this PR, and hope they will help provide comments. cc @artemlivshits @ijuma -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [kafka] showuon commented on pull request #12500: MINOR: Remove spurious sleep in ConsumerCoordinatorTest
showuon commented on PR #12500: URL: https://github.com/apache/kafka/pull/12500#issuecomment-1211653244 Triggering another jenkins build to make sure it passes the tests. https://ci-builds.apache.org/job/Kafka/job/kafka-pr/view/change-requests/job/PR-12500/2/ -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Created] (KAFKA-14161) kafka-consumer-group.sh --list not list all consumer groups
yandufeng created KAFKA-14161: - Summary: kafka-consumer-group.sh --list not list all consumer groups Key: KAFKA-14161 URL: https://issues.apache.org/jira/browse/KAFKA-14161 Project: Kafka Issue Type: Bug Affects Versions: 2.8.1 Environment: Cenots7.5 Reporter: yandufeng Hello, we use kafka 2.8.1 version in production, i use kafka-console-consumer.sh command or flink sql job consume data of kafka topic, but i use kafka-consumer-group.sh --list --bootstrap-server:9092, only get two consumer group, why not get other consumer group? -- This message was sent by Atlassian Jira (v8.20.10#820010)
[GitHub] [kafka] dengziming commented on a diff in pull request #12274: KAFKA-13959: Controller should unfence Broker with busy metadata log
dengziming commented on code in PR #12274: URL: https://github.com/apache/kafka/pull/12274#discussion_r943173917 ## metadata/src/main/java/org/apache/kafka/controller/QuorumController.java: ## @@ -1305,12 +1305,11 @@ private void handleFeatureControlChange() { } } -@SuppressWarnings("unchecked") -private void replay(ApiMessage message, Optional snapshotId) { +private void replay(ApiMessage message, Optional snapshotId, long offset) { Review Comment: Done ## metadata/src/main/java/org/apache/kafka/controller/QuorumController.java: ## @@ -759,7 +759,7 @@ public void run() throws Exception { int i = 1; for (ApiMessageAndVersion message : result.records()) { try { -replay(message.message(), Optional.empty()); +replay(message.message(), Optional.empty(), writeOffset + i); Review Comment: Thank you for pointing out this. ## metadata/src/main/java/org/apache/kafka/controller/ClusterControlManager.java: ## @@ -217,6 +217,14 @@ boolean check() { */ private final TimelineHashMap brokerRegistrations; +/** + * Save the offset of each broker registration record, we will only unfence a + * broker when its high watermark has reached its broker registration record, + * this is not necessarily the exact offset of each broker registration record + * but should not be smaller than it. 
+ */ +private final TimelineHashMap registerBrokerRecordOffsets; Review Comment: Currently, all fields in `BrokerRegistration` are hard states which means they will all be persisted in `RegisterBrokerRecord`, so I prefer to leave it in a separate field ## metadata/src/main/java/org/apache/kafka/controller/QuorumController.java: ## @@ -1874,8 +1873,13 @@ public CompletableFuture processBrokerHeartbeat( @Override public ControllerResult generateRecordsAndResult() { +Long offsetForRegisterBrokerRecord = clusterControl.registerBrokerRecordOffset(brokerId); +if (offsetForRegisterBrokerRecord == null) { +throw new RuntimeException( +String.format("Receive a heartbeat from broker %d before registration", brokerId)); Review Comment: Good catch, I find we have a similar inspection at `ClusterControlManager.checkBrokerEpoch` where we return `StaleBrokerEpochException` if BrokerRegistration is null, I also return a `StaleBrokerEpochException` here to make it consistent. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [kafka] clolov commented on pull request #11792: Replace EasyMock/PowerMock with Mockito in DistributedHerderTest
clolov commented on PR #11792: URL: https://github.com/apache/kafka/pull/11792#issuecomment-1211589682 Okay @dplavcic! Thanks both for picking it up and for letting us know :) -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [kafka] ahuang98 commented on a diff in pull request #12479: MINOR; Convert some integration tests to run with the KRaft modes
ahuang98 commented on code in PR #12479: URL: https://github.com/apache/kafka/pull/12479#discussion_r943125682 ## core/src/test/scala/unit/kafka/admin/DeleteTopicTest.scala: ## @@ -45,103 +59,122 @@ class DeleteTopicTest extends QuorumTestHarness { @AfterEach override def tearDown(): Unit = { -TestUtils.shutdownServers(servers) +adminClient.close() +TestUtils.shutdownServers(brokers) super.tearDown() } - @Test - def testDeleteTopicWithAllAliveReplicas(): Unit = { + @ParameterizedTest(name = TestInfoUtils.TestWithParameterizedQuorumName) + @ValueSource(strings = Array("zk", "kraft")) + def testDeleteTopicWithAllAliveReplicas(quorum: String): Unit = { val topic = "test" -servers = createTestTopicAndCluster(topic) -// start topic deletion -adminZkClient.deleteTopic(topic) -TestUtils.verifyTopicDeletion(zkClient, topic, 1, servers) +createTestTopicAndCluster(topic) +TestUtils.deleteTopicWithAdmin(adminClient, topic, brokers) } - @Test - def testResumeDeleteTopicWithRecoveredFollower(): Unit = { + @ParameterizedTest(name = TestInfoUtils.TestWithParameterizedQuorumName) + @ValueSource(strings = Array("zk", "kraft")) + def testResumeDeleteTopicWithRecoveredFollower(quorum: String): Unit = { val topicPartition = new TopicPartition("test", 0) val topic = topicPartition.topic -servers = createTestTopicAndCluster(topic) +createTestTopicAndCluster(topic) // shut down one follower replica -val leaderIdOpt = zkClient.getLeaderForPartition(new TopicPartition(topic, 0)) -assertTrue(leaderIdOpt.isDefined, "Leader should exist for partition [test,0]") -val follower = servers.filter(s => s.config.brokerId != leaderIdOpt.get).last +val partition = getTopicPartitionInfo(adminClient, topic, 0) +assertTrue(partition.isPresent, "Partition [test,0] should exist") +val leaderNode = partition.get().leader() +assertFalse(leaderNode.isEmpty, "Leader should exist for partition [test,0]") +val follower = brokers.filter(s => s.config.brokerId != leaderNode.id()).last follower.shutdown() // start 
topic deletion -adminZkClient.deleteTopic(topic) +adminClient.deleteTopics(Collections.singleton(topic)) // check if all replicas but the one that is shut down has deleted the log TestUtils.waitUntilTrue(() => - servers.filter(s => s.config.brokerId != follower.config.brokerId) -.forall(_.getLogManager.getLog(topicPartition).isEmpty), "Replicas 0,1 have not deleted log.") + brokers.filter(s => s.config.brokerId != follower.config.brokerId) +.forall(_.logManager.getLog(topicPartition).isEmpty), "Replicas 0,1 have not deleted log.") // ensure topic deletion is halted -TestUtils.waitUntilTrue(() => zkClient.isTopicMarkedForDeletion(topic), - "Admin path /admin/delete_topics/test path deleted even when a follower replica is down") +if (isKRaftTest()) { + val aliveBrokers = brokers.filter(b => b != follower) + TestUtils.waitForAllPartitionsMetadata(aliveBrokers, topic, 0) + TestUtils.waitForAllPartitionsMetadata(Seq(follower), topic, 1) + assertThrows(classOf[ExecutionException], () => adminClient.describeTopics(Collections.singleton(topic)).topicNameValues().get(topic).get()) +} else { + TestUtils.waitUntilTrue(() => zkClient.isTopicMarkedForDeletion(topic), +"Admin path /admin/delete_topics/test path deleted even when a follower replica is down") +} // restart follower replica follower.startup() -TestUtils.verifyTopicDeletion(zkClient, topic, 1, servers) +TestUtils.waitForAllPartitionsMetadata(brokers, topic, 0) } - @Test - def testResumeDeleteTopicOnControllerFailover(): Unit = { + @ParameterizedTest(name = TestInfoUtils.TestWithParameterizedQuorumName) + @ValueSource(strings = Array("zk", "kraft")) + def testResumeDeleteTopicOnControllerFailover(quorum: String): Unit = { val topicPartition = new TopicPartition("test", 0) val topic = topicPartition.topic -servers = createTestTopicAndCluster(topic) -val controllerId = zkClient.getControllerId.getOrElse(fail("Controller doesn't exist")) -val controller = servers.filter(s => s.config.brokerId == controllerId).head -val 
leaderIdOpt = zkClient.getLeaderForPartition(new TopicPartition(topic, 0)) -val follower = servers.filter(s => s.config.brokerId != leaderIdOpt.get && s.config.brokerId != controllerId).last +createTestTopicAndCluster(topic) + +val partition = getTopicPartitionInfo(adminClient, topic, 0) +assertTrue(partition.isPresent, "Partition [test,0] should exist") +val leaderNode = partition.get().leader() +assertFalse(leaderNode.isEmpty, "Leader should exist for partition [test,0]") + +val verifyDeletionExecutable: Executable = () => TestUtils.verifyTopicDeletion(zkClientOrNull, topic, 1, brokers) +var shutdownController: Executable =