> On June 30, 2015, 10:24 p.m., Guozhang Wang wrote:
> > clients/src/main/java/org/apache/kafka/clients/consumer/KafkaConsumer.java, 
> > lines 933-937
> > <https://reviews.apache.org/r/34789/diff/14/?file=996120#file996120line933>
> >
> >     Wondering why we want to fetch for all assigned partitions if the 
> > requested partitions are indeed assigned here?

I think this is how the old code was written, but I was wondering the same 
thing. I thought perhaps it was an optimization: since we have to hit the 
coordinator anyway, we may as well update all our assigned offsets in case they 
are stale as well.
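To illustrate the optimization I mean, here is a minimal sketch (not the actual consumer code; the coordinator call is faked with a placeholder, and all names are illustrative): since the round trip to the coordinator happens anyway, the fetch covers every assigned partition so stale cached offsets get refreshed as a side effect.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

// Hypothetical sketch of the "refresh everything" idea: one coordinator
// round trip refreshes committed offsets for ALL assigned partitions,
// not just the ones the caller asked about.
class OffsetRefreshSketch {
    // In the real consumer this would be an OffsetFetchRequest to the
    // coordinator; here it is stubbed out for illustration.
    static Map<String, Long> fetchCommittedFromCoordinator(Set<String> partitions) {
        Map<String, Long> committed = new HashMap<>();
        for (String tp : partitions)
            committed.put(tp, 42L); // placeholder committed offset
        return committed;
    }

    static Map<String, Long> refreshCommitted(Set<String> assigned) {
        // Fetch for the full assignment in one request, so any stale
        // cached offsets are updated as a side effect of this call.
        return fetchCommittedFromCoordinator(assigned);
    }
}
```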


> On June 30, 2015, 10:24 p.m., Guozhang Wang wrote:
> > clients/src/main/java/org/apache/kafka/clients/consumer/KafkaConsumer.java, 
> > lines 1151-1153
> > <https://reviews.apache.org/r/34789/diff/14/?file=996120#file996120line1151>
> >
> >     This comment seems out-dated, or otherwise a bit confusing.

I think the comment is still accurate. It's basically just saying that we only 
fetch if the cache is dirty (as indicated by 
subscriptions.refreshCommitsNeeded()). I'll try to make it clearer.
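In other words, the behavior the comment is describing is an ordinary dirty-cache pattern. A rough sketch (names are illustrative stand-ins, not the actual SubscriptionState API):

```java
// Sketch of the dirty-cache behavior: committed offsets are only
// re-fetched when something has invalidated the cache, mirroring the
// subscriptions.refreshCommitsNeeded() check in the patch.
class CommittedOffsetCache {
    private boolean refreshNeeded = true; // dirty until the first fetch
    private long cachedOffset = -1L;

    // Called e.g. after a commit or a rebalance invalidates the cache.
    void invalidate() { refreshNeeded = true; }

    long committed() {
        if (refreshNeeded) {            // only hit the coordinator when dirty
            cachedOffset = fetchFromCoordinator();
            refreshNeeded = false;
        }
        return cachedOffset;            // otherwise serve the cached value
    }

    private long fetchFromCoordinator() { return 100L; } // stand-in for the RPC
}
```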


> On June 30, 2015, 10:24 p.m., Guozhang Wang wrote:
> > clients/src/main/java/org/apache/kafka/clients/consumer/KafkaConsumer.java, 
> > lines 1249-1256
> > <https://reviews.apache.org/r/34789/diff/14/?file=996120#file996120line1249>
> >
> >     If the coordinator is not available, would this async commit also be 
> > blocked here? Or will this be resolved after we introduced the delayed 
> > queue in KAFKA-2123?

Yes, that's right, and note that this is consistent with the behavior before 
this patch (just now it's a bit more explicit). The major problem is that we 
don't have any data structure to keep track of needed requests, so if the 
coordinator is not available and we don't want to block to wait for it, then 
all we can do is discard the commit. I think the delayed queue may be what we 
need to solve this problem, since we can just reschedule the commit for a later 
time if the coordinator is not available. At the same time, I feel a little 
wary about having a backlog of commits that pile up when the coordinator is 
down. I'd almost rather just fail the commit and let it be retried on the next 
interval, but that has disadvantages as well.
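To make the trade-off concrete, here is a sketch of the two options as I understand them (all names are hypothetical, not from the patch or KAFKA-2123): either drop the commit and rely on the next interval, or park it on a delayed queue, accepting that a backlog can build up while the coordinator is down.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Sketch of the two strategies for an async commit when the coordinator
// is unknown: discard it (retried by the next auto-commit interval) or
// enqueue it on a delayed queue to be resent once the coordinator is
// rediscovered. Illustrative only; not the actual consumer code.
class AsyncCommitSketch {
    private final Deque<Long> delayedCommits = new ArrayDeque<>();
    boolean coordinatorKnown = false;

    // Returns true if the commit was sent, false if deferred or dropped.
    boolean commitAsync(long offset, boolean useDelayedQueue) {
        if (coordinatorKnown)
            return true; // would send the OffsetCommitRequest immediately
        if (useDelayedQueue) {
            delayedCommits.add(offset); // reschedule for coordinator recovery
            return false;
        }
        return false; // discard; the next interval's commit covers it
    }

    // Size of the backlog that piles up while the coordinator is down.
    int backlog() { return delayedCommits.size(); }
}
```

The worry above is visible here: with `useDelayedQueue`, every failed commit grows `backlog()`, whereas dropping keeps it at zero at the cost of a wider window of uncommitted progress.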


- Jason


-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/34789/#review89971
-----------------------------------------------------------


On June 30, 2015, 5:55 p.m., Jason Gustafson wrote:
> 
> -----------------------------------------------------------
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/34789/
> -----------------------------------------------------------
> 
> (Updated June 30, 2015, 5:55 p.m.)
> 
> 
> Review request for kafka.
> 
> 
> Bugs: KAFKA-2168
>     https://issues.apache.org/jira/browse/KAFKA-2168
> 
> 
> Repository: kafka
> 
> 
> Description
> -------
> 
> KAFKA-2168; minor fixes
> 
> 
> Diffs
> -----
> 
>   clients/src/main/java/org/apache/kafka/clients/consumer/KafkaConsumer.java 
> 9be8fbc648369ad9db1a7eea94bc1b9edbfdbfd7 
>   core/src/test/scala/integration/kafka/api/ConsumerBounceTest.scala 
> f56096b826bdbf760411a54ba067a6a83eca8a10 
> 
> Diff: https://reviews.apache.org/r/34789/diff/
> 
> 
> Testing
> -------
> 
> 
> Thanks,
> 
> Jason Gustafson
> 
>
