ableegoldman commented on a change in pull request #10985:
URL: https://github.com/apache/kafka/pull/10985#discussion_r668409463



##########
File path: clients/src/main/java/org/apache/kafka/clients/consumer/internals/AbstractStickyAssignor.java
##########
@@ -205,6 +237,9 @@ private boolean allSubscriptionsEqual(Set<String> allTopics,
                 // consumer owned the "maxQuota" of partitions or more, and we're still under the number of expected members
                 // with more than the minQuota partitions, so keep "maxQuota" of the owned partitions, and revoke the rest of the partitions
                 numMembersAssignedOverMinQuota++;
+                if (numMembersAssignedOverMinQuota == expectedNumMembersAssignedOverMinQuota) {
+                    potentiallyUnfilledMembersAtMinQuota.clear();

Review comment:
       While I'm not really a fan of the `potentiallyUnfilledMembersAtMinQuota` logic (it's definitely awkward, but I felt it was still the lesser evil in terms of complicating the code), I don't think we can get rid of it that easily. The problem is that when `minQuota != maxQuota`, and so far `currentNumMembersWithOverMinQuotaPartitions` < `expectedNumMembersWithOverMinQuotaPartitions`, consumers that are filled up to exactly `minQuota` have to be considered potentially not yet at capacity, since some of them (though not all) will still need one more partition. So this data structure is not just used to verify that everything is properly assigned after we've exhausted the `unassignedPartitions`; it's also used to track which consumers can still receive another partition (i.e., are "unfilled"). Does that make sense?
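
       To make the quota arithmetic concrete, here is a standalone sketch (not the assignor's actual code: the consumer names, partition counts, and the simplified counting loop are all illustrative assumptions, though the variable names mirror `numMembersAssignedOverMinQuota`, `expectedNumMembersAssignedOverMinQuota`, and `potentiallyUnfilledMembersAtMinQuota` from the diff) showing why a member sitting at exactly `minQuota` is only "potentially" unfilled, and why the set can be cleared once enough members have reached `maxQuota`:

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Illustrative sketch only, under assumed values; not AbstractStickyAssignor itself.
public class QuotaSketch {

    public static void main(String[] args) {
        int numPartitions = 7;
        int numConsumers = 3;

        // With 7 partitions and 3 consumers: minQuota = 2, maxQuota = 3,
        // and exactly 7 % 3 = 1 member is expected to end up over minQuota.
        int minQuota = numPartitions / numConsumers;
        int maxQuota = (numPartitions + numConsumers - 1) / numConsumers;
        int expectedNumMembersAssignedOverMinQuota = numPartitions % numConsumers;

        // Hypothetical per-consumer assignment counts reached so far.
        Map<String, Integer> assignedCounts = new HashMap<>();
        assignedCounts.put("consumer-A", maxQuota); // already at maxQuota
        assignedCounts.put("consumer-B", minQuota); // at minQuota: filled or not?
        assignedCounts.put("consumer-C", minQuota); // at minQuota: filled or not?

        int numMembersAssignedOverMinQuota = 0;
        Set<String> potentiallyUnfilledMembersAtMinQuota = new HashSet<>();

        for (Map.Entry<String, Integer> entry : assignedCounts.entrySet()) {
            if (entry.getValue() >= maxQuota) {
                numMembersAssignedOverMinQuota++;
            } else if (entry.getValue() == minQuota) {
                // When minQuota != maxQuota we can't tell yet whether this member
                // will receive one more partition, so it is only "potentially" unfilled.
                potentiallyUnfilledMembersAtMinQuota.add(entry.getKey());
            }
        }

        // Once the expected number of members have reached maxQuota, no member still
        // at minQuota can receive another partition, so the set collapses to empty.
        if (numMembersAssignedOverMinQuota == expectedNumMembersAssignedOverMinQuota) {
            potentiallyUnfilledMembersAtMinQuota.clear();
        }

        System.out.println("minQuota=" + minQuota + ", maxQuota=" + maxQuota
            + ", expectedOverMinQuota=" + expectedNumMembersAssignedOverMinQuota);
        System.out.println("still potentially unfilled: " + potentiallyUnfilledMembersAtMinQuota);
    }
}
```

       With these assumed numbers, consumer-A alone accounts for the one expected over-minQuota member, so consumer-B and consumer-C turn out to be filled at `minQuota` and the "potentially unfilled" set empties, which is the same condition the diff above checks before clearing.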



