chia7712 commented on code in PR #20530:
URL: https://github.com/apache/kafka/pull/20530#discussion_r2345531638
##########
clients/src/main/java/org/apache/kafka/clients/admin/internals/ListOffsetsHandler.java:
##########
@@ -202,7 +202,7 @@ private void handlePartitionError(
public Map<TopicPartition, Throwable> handleUnsupportedVersionException(
int brokerId, UnsupportedVersionException exception,
Set<TopicPartition> keys
) {
- log.warn("Broker " + brokerId + " does not support MAX_TIMESTAMP
offset specs");
+ log.warn("Broker {} does not support MAX_TIMESTAMP offset specs",
brokerId);
Review Comment:
That is quite interesting. Right now it only deals with `MAX_TIMESTAMP`, while the other timestamp placeholders are simply unused. I think it is okay to keep this change for now, and I will file a follow-up JIRA for the rest.
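For context, the change swaps string concatenation for the parameterized form of the SLF4J logger used here. A minimal sketch of the difference (the class name below is illustrative, not from the PR):

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class LoggingStyleSketch {
    private static final Logger log = LoggerFactory.getLogger(LoggingStyleSketch.class);

    void warnAboutBroker(int brokerId) {
        // Concatenation builds the full message string even if WARN is disabled.
        log.warn("Broker " + brokerId + " does not support MAX_TIMESTAMP offset specs");

        // Parameterized form keeps the template constant and defers formatting
        // until the logger has confirmed the level is enabled.
        log.warn("Broker {} does not support MAX_TIMESTAMP offset specs", brokerId);
    }
}
```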
##########
clients/src/test/java/org/apache/kafka/common/UuidTest.java:
##########
@@ -74,7 +74,7 @@ public void testStringConversion() {
String zeroIdString = Uuid.ZERO_UUID.toString();
- assertEquals(Uuid.fromString(zeroIdString), Uuid.ZERO_UUID);
+ assertEquals(Uuid.ZERO_UUID, Uuid.fromString(zeroIdString));
}
@RepeatedTest(value = 100, name = RepeatedTest.LONG_DISPLAY_NAME)
Review Comment:
Looks like lines 84 and 85 run into the same issue (expected/unexpected value and actual value swapped) as well:
```java
assertNotEquals(randomID, Uuid.ZERO_UUID);
assertNotEquals(randomID, Uuid.METADATA_TOPIC_ID);
```
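If helpful, a sketch of how those two assertions could be flipped so the reserved IDs come first, matching JUnit's parameter order (the test class and method names here are illustrative, not the ones in `UuidTest`):

```java
import static org.junit.jupiter.api.Assertions.assertNotEquals;

import org.apache.kafka.common.Uuid;
import org.junit.jupiter.api.Test;

class UuidArgumentOrderSketch {
    @Test
    void randomUuidIsNotAReservedId() {
        Uuid randomID = Uuid.randomUuid();
        // Unexpected value first, actual value second, so failure messages
        // label the two sides correctly.
        assertNotEquals(Uuid.ZERO_UUID, randomID);
        assertNotEquals(Uuid.METADATA_TOPIC_ID, randomID);
    }
}
```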
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]