cmccabe commented on code in PR #16183:
URL: https://github.com/apache/kafka/pull/16183#discussion_r1684705617


##########
core/src/main/scala/kafka/server/BrokerLifecycleManager.scala:
##########
@@ -381,10 +381,12 @@ class BrokerLifecycleManager(
   private def sendBrokerRegistration(): Unit = {
     val features = new BrokerRegistrationRequestData.FeatureCollection()
     _supportedFeatures.asScala.foreach {
-      case (name, range) => features.add(new BrokerRegistrationRequestData.Feature().
+      // Do not include features with the range 0-0.
+      case (name, range) if range.max() > 0 => features.add(new BrokerRegistrationRequestData.Feature().

Review Comment:
   Can you be clearer about what you're proposing?
   
   From my point of view:
   
   1. We should never report a `0-0` supported versions range in an RPC. There is no point! This applies to both `ApiVersionsResponse` and `BrokerRegistrationRequest`.
   
   2. Obviously a `1-0` range is even worse, being incorrect as well as useless.
   
   3. Having a HashMap or some other internal structure that contains a `0-0` range is fine, and may simplify the code in some ways. For example, if we have code that changes the maximum of a supported range based on a config, it would probably be annoying to have to mutate the HashMap, or keep a separate one, depending on whether that config was on or off.
   
   I don't feel strongly about the third point, but I suspect the code will be messier if you try to remove features with a `0-0` supported range from the internal data structures than if you leave them there and only filter them out when building the RPC (rough sketch below).
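
   A minimal, self-contained sketch of that idea, using hypothetical stand-in types rather than the real Kafka classes (e.g. `BrokerRegistrationRequestData.Feature`), so treat the names and version values as illustrative only: the internal map keeps its `0-0` entries, and the filtering happens only when the RPC payload is built.

   ```scala
   object FeatureFilterSketch {
     // Stand-in for a supported-version range (illustrative, not the real Kafka class).
     final case class VersionRange(min: Short, max: Short)

     // Stand-in for the RPC-level feature entry that would end up in the
     // request's feature collection.
     final case class RpcFeature(name: String, minSupportedVersion: Short, maxSupportedVersion: Short)

     // The internal structure may freely contain 0-0 entries (point 3)...
     val supportedFeatures: Map[String, VersionRange] = Map(
       "some.enabled.feature"  -> VersionRange(1, 3),
       "some.disabled.feature" -> VersionRange(0, 0)  // fine to keep here
     )

     // ...but the RPC never reports them (point 1): filter at serialization time.
     def rpcFeatures(supported: Map[String, VersionRange]): Seq[RpcFeature] =
       supported.collect {
         case (name, range) if range.max > 0 =>
           RpcFeature(name, range.min, range.max)
       }.toSeq

     def main(args: Array[String]): Unit =
       rpcFeatures(supportedFeatures).foreach(println)
   }
   ```

   The design point is just that the filtering lives at the RPC boundary rather than in the internal map, so a config-driven change to a feature's maximum version never has to rebuild or swap out the map.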


