Re: [PR] MINOR: Various cleanups in metadata [kafka]

2024-04-25 Thread via GitHub


chia7712 merged PR #15806:
URL: https://github.com/apache/kafka/pull/15806





Re: [PR] MINOR: Various cleanups in metadata [kafka]

2024-04-25 Thread via GitHub


chia7712 commented on PR #15806:
URL: https://github.com/apache/kafka/pull/15806#issuecomment-2078233197

   ```
   ./gradlew cleanTest \
       :streams:test --tests StreamsAssignmentScaleTest.testHighAvailabilityTaskAssignorManyStandbys \
       :tools:test --tests MetadataQuorumCommandTest.testDescribeQuorumStatusSuccessful \
           --tests DeleteOffsetsConsumerGroupCommandIntegrationTest.testDeleteOffsetsOfStableConsumerGroupWithTopicOnly \
           --tests DeleteOffsetsConsumerGroupCommandIntegrationTest.testDeleteOffsetsOfStableConsumerGroupWithTopicPartition \
           --tests DeleteOffsetsConsumerGroupCommandIntegrationTest.testDeleteOffsetsOfEmptyConsumerGroupWithTopicOnly \
       :storage:test --tests TopicBasedRemoteLogMetadataManagerTest.testRemoteLogSizeCalculationWithSegmentsOfTheSameEpoch \
           --tests TransactionsWithTieredStoreTest.testSendOffsetsToTransactionTimeout \
           --tests TransactionsWithTieredStoreTest.testAbortTransactionTimeout \
       :connect:runtime:test --tests org.apache.kafka.connect.integration.ExactlyOnceSourceIntegrationTest.testSeparateOffsetsTopic \
       :metadata:test --tests QuorumControllerMetricsIntegrationTest.testTimeoutMetrics \
       :trogdor:test --tests CoordinatorTest.testTaskRequestWithOldStartMsGetsUpdated \
       :connect:mirror:test --tests IdentityReplicationIntegrationTest.testSyncTopicConfigs \
           --tests MirrorConnectorsWithCustomForwardingAdminIntegrationTest.testReplicateSourceDefault \
       :core:test --tests ConsumerBounceTest.testConsumptionWithBrokerFailures \
           --tests ReplicationQuotasTest.shouldThrottleOldSegments
   ```
   I don't see any related failures, and these tests pass locally, so I'll merge it.





Re: [PR] MINOR: Various cleanups in metadata [kafka]

2023-11-14 Thread via GitHub


mimaison merged PR #14734:
URL: https://github.com/apache/kafka/pull/14734





Re: [PR] MINOR: Various cleanups in metadata [kafka]

2023-11-14 Thread via GitHub


mimaison commented on PR #14734:
URL: https://github.com/apache/kafka/pull/14734#issuecomment-1809735473

   The rebuild has a different set of failures, and `:streams:upgrade-system-tests-23:test` was fine across all pipelines. I'll merge this into trunk. Thanks for the review!





Re: [PR] MINOR: Various cleanups in metadata [kafka]

2023-11-13 Thread via GitHub


jlprat commented on PR #14734:
URL: https://github.com/apache/kafka/pull/14734#issuecomment-1808282654

   @mimaison I also highly doubt this PR is the cause, but it would be good to know whether the test was just flaky or whether something went wrong in previous PRs.





Re: [PR] MINOR: Various cleanups in metadata [kafka]

2023-11-13 Thread via GitHub


mimaison commented on PR #14734:
URL: https://github.com/apache/kafka/pull/14734#issuecomment-1808279317

   The PR does not touch streams, and it builds fine locally with Java 21 and Scala 2.13. To be extra safe I kicked off a rebuild: https://ci-builds.apache.org/job/Kafka/job/kafka-pr/job/PR-14734/2/. Let's see if that module builds this time.





Re: [PR] MINOR: Various cleanups in metadata [kafka]

2023-11-13 Thread via GitHub


jlprat commented on PR #14734:
URL: https://github.com/apache/kafka/pull/14734#issuecomment-1808233060

   Navigating to https://ci-builds.apache.org/blue/organizations/jenkins/Kafka%2Fkafka-pr/detail/PR-14734/1/pipeline/ I see an error:
   
   > FAILURE: Build failed with an exception.
   > 
   > * What went wrong:
   > Execution failed for task ':streams:upgrade-system-tests-23:test'.
   > > Process 'Gradle Test Executor 3' finished with non-zero exit value 1
   >   This problem might be caused by incorrect test process configuration.
   >   For more on test execution, please refer to https://docs.gradle.org/8.3/userguide/java_testing.html#sec:test_execution in the Gradle documentation.





Re: [PR] MINOR: Various cleanups in metadata [kafka]

2023-11-13 Thread via GitHub


mimaison commented on PR #14734:
URL: https://github.com/apache/kafka/pull/14734#issuecomment-1808226806

   > LGTM then, as long as the system tests for 2.3 turn green somehow (or it's accepted that they're flaky)
   
   What do you mean?





Re: [PR] MINOR: Various cleanups in metadata [kafka]

2023-11-13 Thread via GitHub


mimaison commented on code in PR #14734:
URL: https://github.com/apache/kafka/pull/14734#discussion_r1391134864


##
metadata/src/main/java/org/apache/kafka/metadata/placement/StripedReplicaPlacer.java:
##
@@ -33,89 +33,86 @@
 
 /**
  * The striped replica placer.
- *
- *
- * GOALS
- * The design of this placer attempts to satisfy a few competing goals.  Firstly, we want
- * to spread the replicas as evenly as we can across racks.  In the simple case where
- * broker racks have not been configured, this goal is a no-op, of course.  But it is the
+ * <p>
+ * Goals
+ * The design of this placer attempts to satisfy a few competing goals. Firstly, we want
+ * to spread the replicas as evenly as we can across racks. In the simple case where
+ * broker racks have not been configured, this goal is a no-op, of course. But it is the
  * highest priority goal in multi-rack clusters.
  *
- * Our second goal is to spread the replicas evenly across brokers.  Since we are placing
+ * Our second goal is to spread the replicas evenly across brokers. Since we are placing
  * multiple partitions, we try to avoid putting each partition on the same set of
- * replicas, even if it does satisfy the rack placement goal.  If any specific broker is
+ * replicas, even if it does satisfy the rack placement goal. If any specific broker is
  * fenced, we would like the new leaders to distributed evenly across the remaining
  * brokers.
  *
- * However, we treat the rack placement goal as higher priority than this goal-- if you
+ * However, we treat the rack placement goal as higher priority than this goal-- if you
  * configure 10 brokers in rack A and B, and 1 broker in rack C, you will end up with a
  * lot of partitions on that one broker in rack C.  If you were to place a lot of
  * partitions with replication factor 3, each partition would try to get a replica there.
  * In general racks are supposed to be about the same size -- if they aren't, this is a
  * user error.
  *
- * Finally, we would prefer to place replicas on unfenced brokers, rather than on fenced
+ * Finally, we would prefer to place replicas on unfenced brokers, rather than on fenced
  * brokers.
- *
- *
- * CONSTRAINTS
- * In addition to these goals, we have two constraints.  Unlike the goals, these are not
- * optional -- they are mandatory.  Placement will fail if a constraint cannot be
- * satisfied.  The first constraint is that we can't place more than one replica on the
- * same broker.  This imposes an upper limit on replication factor-- for example, a 3-node
- * cluster can't have any topics with replication factor 4.  This constraint comes from
+ * <p>

Review Comment:
   Same for the other instance below.
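
   For context, here is a rough illustration of the striping idea this javadoc describes -- rotate through racks first, then through the brokers within each rack. This is a toy sketch under simplified assumptions (no fencing, replication factor at most the rack count), not the actual StripedReplicaPlacer logic:

   ```java
   import java.util.ArrayList;
   import java.util.List;

   public class ToyStripedPlacer {
       /** Place one partition's replicas: stripe across racks first, then across brokers within a rack. */
       static List<Integer> place(List<List<Integer>> brokersByRack, int partition, int replicationFactor) {
           List<Integer> replicas = new ArrayList<>();
           int rackCount = brokersByRack.size();
           for (int r = 0; r < replicationFactor; r++) {
               // Highest-priority goal: each of this partition's replicas lands in a different rack.
               List<Integer> rack = brokersByRack.get((partition + r) % rackCount);
               // Second goal: successive partitions rotate through the brokers within a rack,
               // so replicas (and leadership) spread across brokers too.
               replicas.add(rack.get(((partition + r) / rackCount) % rack.size()));
           }
           return replicas;
       }
   }
   ```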






Re: [PR] MINOR: Various cleanups in metadata [kafka]

2023-11-13 Thread via GitHub


mimaison commented on code in PR #14734:
URL: https://github.com/apache/kafka/pull/14734#discussion_r1391134127


##
metadata/src/main/java/org/apache/kafka/metadata/placement/StripedReplicaPlacer.java:
##
@@ -33,89 +33,86 @@
 
 /**
  * The striped replica placer.
- *
- *
- * GOALS
- * The design of this placer attempts to satisfy a few competing goals.  Firstly, we want
- * to spread the replicas as evenly as we can across racks.  In the simple case where
- * broker racks have not been configured, this goal is a no-op, of course.  But it is the
+ * <p>
+ * Goals
+ * The design of this placer attempts to satisfy a few competing goals. Firstly, we want
+ * to spread the replicas as evenly as we can across racks. In the simple case where
+ * broker racks have not been configured, this goal is a no-op, of course. But it is the
  * highest priority goal in multi-rack clusters.
  *
- * Our second goal is to spread the replicas evenly across brokers.  Since we are placing
+ * Our second goal is to spread the replicas evenly across brokers. Since we are placing
  * multiple partitions, we try to avoid putting each partition on the same set of
- * replicas, even if it does satisfy the rack placement goal.  If any specific broker is
+ * replicas, even if it does satisfy the rack placement goal. If any specific broker is
  * fenced, we would like the new leaders to distributed evenly across the remaining
  * brokers.
  *
- * However, we treat the rack placement goal as higher priority than this goal-- if you
+ * However, we treat the rack placement goal as higher priority than this goal-- if you
  * configure 10 brokers in rack A and B, and 1 broker in rack C, you will end up with a
  * lot of partitions on that one broker in rack C.  If you were to place a lot of
  * partitions with replication factor 3, each partition would try to get a replica there.
  * In general racks are supposed to be about the same size -- if they aren't, this is a
  * user error.
  *
- * Finally, we would prefer to place replicas on unfenced brokers, rather than on fenced
+ * Finally, we would prefer to place replicas on unfenced brokers, rather than on fenced
  * brokers.
- *
- *
- * CONSTRAINTS
- * In addition to these goals, we have two constraints.  Unlike the goals, these are not
- * optional -- they are mandatory.  Placement will fail if a constraint cannot be
- * satisfied.  The first constraint is that we can't place more than one replica on the
- * same broker.  This imposes an upper limit on replication factor-- for example, a 3-node
- * cluster can't have any topics with replication factor 4.  This constraint comes from
+ * <p>

Review Comment:
   The render is a bit nicer with the extra spacing this introduces. We've used similar "tricks" for example in https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/clients/consumer/KafkaConsumer.java#L109






Re: [PR] MINOR: Various cleanups in metadata [kafka]

2023-11-13 Thread via GitHub


mimaison commented on PR #14734:
URL: https://github.com/apache/kafka/pull/14734#issuecomment-1808184173

   Thanks @jlprat for taking a look. In most javadoc blocks we don't close `<p>` tags, so I did that to be consistent.
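
   For illustration, a minimal sketch of the convention in question -- paragraphs introduced by a bare `<p>` that is never closed, which also gives a bit of extra vertical spacing in the rendered javadoc (the class here is hypothetical):

   ```java
   /**
    * Summary sentence for the class.
    * <p>
    * Second paragraph. Note the bare opening tag above: most Kafka javadoc
    * blocks never emit a matching closing tag.
    * <p>
    * A lone tag between sections likewise adds vertical space when the
    * javadoc is rendered to HTML.
    */
   public final class JavadocStyleExample {
   }
   ```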





Re: [PR] MINOR: Various cleanups in metadata [kafka]

2023-11-13 Thread via GitHub


jlprat commented on code in PR #14734:
URL: https://github.com/apache/kafka/pull/14734#discussion_r1390959569


##
metadata/src/main/java/org/apache/kafka/metadata/placement/StripedReplicaPlacer.java:
##
@@ -33,89 +33,86 @@
 
 /**
  * The striped replica placer.
- *
- *
- * GOALS
- * The design of this placer attempts to satisfy a few competing goals.  Firstly, we want
- * to spread the replicas as evenly as we can across racks.  In the simple case where
- * broker racks have not been configured, this goal is a no-op, of course.  But it is the
+ * <p>
+ * Goals
+ * The design of this placer attempts to satisfy a few competing goals. Firstly, we want
+ * to spread the replicas as evenly as we can across racks. In the simple case where
+ * broker racks have not been configured, this goal is a no-op, of course. But it is the
  * highest priority goal in multi-rack clusters.
  *
- * Our second goal is to spread the replicas evenly across brokers.  Since we are placing
+ * Our second goal is to spread the replicas evenly across brokers. Since we are placing
  * multiple partitions, we try to avoid putting each partition on the same set of
- * replicas, even if it does satisfy the rack placement goal.  If any specific broker is
+ * replicas, even if it does satisfy the rack placement goal. If any specific broker is
  * fenced, we would like the new leaders to distributed evenly across the remaining
  * brokers.
  *
- * However, we treat the rack placement goal as higher priority than this goal-- if you
+ * However, we treat the rack placement goal as higher priority than this goal-- if you
  * configure 10 brokers in rack A and B, and 1 broker in rack C, you will end up with a
  * lot of partitions on that one broker in rack C.  If you were to place a lot of
  * partitions with replication factor 3, each partition would try to get a replica there.
  * In general racks are supposed to be about the same size -- if they aren't, this is a
  * user error.
  *
- * Finally, we would prefer to place replicas on unfenced brokers, rather than on fenced
+ * Finally, we would prefer to place replicas on unfenced brokers, rather than on fenced
  * brokers.

Review Comment:
   Same missing `</p>`



##
metadata/src/main/java/org/apache/kafka/metadata/placement/StripedReplicaPlacer.java:
##
@@ -33,89 +33,86 @@
 
 /**
  * The striped replica placer.
- *
- *
- * GOALS
- * The design of this placer attempts to satisfy a few competing goals.  Firstly, we want
- * to spread the replicas as evenly as we can across racks.  In the simple case where
- * broker racks have not been configured, this goal is a no-op, of course.  But it is the
+ * <p>
+ * Goals
+ * The design of this placer attempts to satisfy a few competing goals. Firstly, we want
+ * to spread the replicas as evenly as we can across racks. In the simple case where
+ * broker racks have not been configured, this goal is a no-op, of course. But it is the
  * highest priority goal in multi-rack clusters.
  *
- * Our second goal is to spread the replicas evenly across brokers.  Since we are placing
+ * Our second goal is to spread the replicas evenly across brokers. Since we are placing
  * multiple partitions, we try to avoid putting each partition on the same set of
- * replicas, even if it does satisfy the rack placement goal.  If any specific broker is
+ * replicas, even if it does satisfy the rack placement goal. If any specific broker is
  * fenced, we would like the new leaders to distributed evenly across the remaining
  * brokers.

Review Comment:
   Same missing `</p>`



##
metadata/src/main/java/org/apache/kafka/metadata/placement/StripedReplicaPlacer.java:
##
@@ -33,89 +33,86 @@
 
 /**
  * The striped replica placer.
- *
- *
- * GOALS
- * The design of this placer attempts to satisfy a few competing goals.  Firstly, we want
- * to spread the replicas as evenly as we can across racks.  In the simple case where
- * broker racks have not been configured, this goal is a no-op, of course.  But it is the
+ * <p>
+ * Goals
+ * The design of this placer attempts to satisfy a few competing goals. Firstly, we want
+ * to spread the replicas as evenly as we can across racks. In the simple case where
+ * broker racks have not been configured, this goal is a no-op, of course. But it is the
  * highest priority goal in multi-rack clusters.
  *
- * Our second goal is to spread the replicas evenly across brokers.  Since we are placing
+ * Our second goal is to spread the replicas evenly across brokers. Since we are placing
  * multiple partitions, we try to avoid putting each partition on the same set of
- * replicas, even if it does satisfy the rack placement goal.  If any specific broker is
+ * replicas, even if it does satisfy the rack placement goal. If any specific broker is
  * fenced, we would like the new leaders to distributed evenly across the remaining
  * brokers.
  *
- * However, we treat