The Java docs have not been updated, but here is a short summary:

A Consumer.poll() call can throw three types of exceptions:

1) WakeupException: this is thrown when another thread interrupts a thread
blocked in poll(), typically by calling Consumer.wakeup() as part of
shutting the consumer down. Users should catch it and proceed to close
normally, since the consumer is usually closing at this time.
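The usual shape of that shutdown pattern looks roughly like the sketch below. This is only an illustration, not a complete application: it assumes kafka-clients is on the classpath, and the Properties object, topic name, and class name are placeholders you would fill in yourself.

```java
import java.util.Arrays;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.errors.WakeupException;

public class ShutdownSketch {
    public static void main(String[] args) {
        Properties props = new Properties(); // broker, group, deserializer config goes here
        final KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Arrays.asList("my-topic"));

        // wakeup() is the one thread-safe consumer method; calling it from
        // another thread makes a blocked poll() throw WakeupException.
        Runtime.getRuntime().addShutdownHook(new Thread() {
            public void run() { consumer.wakeup(); }
        });

        try {
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(1000);
                for (ConsumerRecord<String, String> record : records)
                    System.out.println(record.offset() + ": " + record.value());
            }
        } catch (WakeupException e) {
            // expected during shutdown: swallow it and fall through to close()
        } finally {
            consumer.close();
        }
    }
}
```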

2) (maybe) AuthorizationException: this is introduced by the new security
feature of Kafka, and will be triggered on the first poll() if the
consumer is not authorized to access Kafka. Users should treat it as
FATAL.

3) ApiException: this is a family of exceptions triggered by
non-retriable errors returned from the brokers. Examples include:

NoOffsetForPartitionException: thrown if there is no stored offset for a
subscribed partition and no automatic offset reset policy has been
configured. This usually happens when users manage their own offsets but
have no stored offset to start fetching from.

OffsetOutOfRangeException: thrown if a fetch response returns an
OffsetOutOfRange error and the reset policy is NONE. This is again when
users are fetching from an invalid position and have not configured a
default offset reset policy.

RecordTooLargeException: thrown if a single record on a fetched
partition is larger than the configured maximum fetch size.

... and so on. As you can see, these exceptions are generally due to
misconfiguration or operational errors, so users may need to handle
some of them specifically based on their own app logic, for example:

try {
    ConsumerRecords<String, String> records = consumer.poll(1000);
    // process records ...
} catch (NoOffsetForPartitionException e) {
    .... // some user logic
} catch (OffsetOutOfRangeException e) {
    .... // some other user logic
} catch (ApiException e) {
    // we do not care about the others, so just re-throw to halt the consumer
    throw new RuntimeException(e);
}
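Note that the catch clauses above must list the specific subclasses before the ApiException catch-all, or the compiler will reject them as unreachable. A self-contained sketch of that dispatch pattern is below; it uses local stand-in classes rather than the real Kafka exceptions (which live in org.apache.kafka.common.errors but follow the same hierarchy), and the return strings are just placeholders for user logic.

```java
public class CatchOrderDemo {
    // Local stand-ins mirroring the Kafka exception hierarchy.
    static class ApiException extends RuntimeException {
        ApiException(String msg) { super(msg); }
    }
    static class NoOffsetForPartitionException extends ApiException {
        NoOffsetForPartitionException(String msg) { super(msg); }
    }
    static class OffsetOutOfRangeException extends ApiException {
        OffsetOutOfRangeException(String msg) { super(msg); }
    }

    // Dispatch: subclasses must be caught before the ApiException catch-all.
    static String handle(ApiException e) {
        try {
            throw e;
        } catch (NoOffsetForPartitionException ex) {
            return "reset-to-known-offset"; // some user logic
        } catch (OffsetOutOfRangeException ex) {
            return "seek-to-valid-position"; // some other user logic
        } catch (ApiException ex) {
            return "halt"; // re-throw as RuntimeException in real code
        }
    }

    public static void main(String[] args) {
        System.out.println(handle(new NoOffsetForPartitionException("no offset")));
        System.out.println(handle(new OffsetOutOfRangeException("out of range")));
        System.out.println(handle(new ApiException("other broker error")));
    }
}
```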

Guozhang


On Mon, Oct 26, 2015 at 4:11 PM, Mohit Anchlia <mohitanch...@gmail.com>
wrote:

> There is a very basic example here:
>
>
> https://github.com/apache/kafka/blob/trunk/examples/src/main/java/kafka/examples/Consumer.java
>
> However, I have a question on the failure scenario where say there is an
> error in a for loop, is the expectation that developer set the offset to
> offset -1 ?
>
> On Sat, Oct 24, 2015 at 11:13 PM, Guozhang Wang <wangg...@gmail.com>
> wrote:
>
> > Mohit,
> >
> > We will update the java docs page to include more examples using the APIs
> > soon, will keep you posted.
> >
> > Guozhang
> >
> > On Fri, Oct 23, 2015 at 9:30 AM, Mohit Anchlia <mohitanch...@gmail.com>
> > wrote:
> >
> > > Can I get a link to other type of examples? I would like to see how to
> > > write the API code correctly.
> > >
> > > On Fri, Oct 23, 2015 at 8:23 AM, Gwen Shapira <g...@confluent.io>
> wrote:
> > >
> > > > There are some examples that include error handling. These are to
> > > > demonstrate the new and awesome seek() method.
> > > > You don't have to handle errors that way, we are just showing that
> you
> > > can.
> > > >
> > > > On Thu, Oct 22, 2015 at 8:34 PM, Mohit Anchlia <
> mohitanch...@gmail.com
> > >
> > > > wrote:
> > > > > It's in this link. Most of the examples have some kind of error
> > > handling
> > > > >
> > > > >
> > http://people.apache.org/~nehanarkhede/kafka-0.9-consumer-javadoc/doc/
> > > > >
> > > > > On Thu, Oct 22, 2015 at 7:45 PM, Guozhang Wang <wangg...@gmail.com
> >
> > > > wrote:
> > > > >
> > > > >> Could you point me to the exact examples that indicate user error
> > > > handling?
> > > > >>
> > > > >> Guozhang
> > > > >>
> > > > >> On Thu, Oct 22, 2015 at 5:43 PM, Mohit Anchlia <
> > > mohitanch...@gmail.com>
> > > > >> wrote:
> > > > >>
> > > > >> > The examples in the javadoc seems to imply that developers need
> to
> > > > manage
> > > > >> > all of the aspects around failures. Those examples are for
> > rewinding
> > > > >> > offsets, dealing with failed partitions for instance.
> > > > >> >
> > > > >> > On Thu, Oct 22, 2015 at 11:17 AM, Guozhang Wang <
> > wangg...@gmail.com
> > > >
> > > > >> > wrote:
> > > > >> >
> > > > >> > > Hi Mohit:
> > > > >> > >
> > > > >> > > In general new consumers will abstract developers from any
> > network
> > > > >> > > failures. More specifically.
> > > > >> > >
> > > > >> > > 1) consumers will automatically try to re-fetch the messages
> if
> > > the
> > > > >> > > previous fetch has failed.
> > > > >> > > 2) consumers will remember the currently fetch positions after
> > > each
> > > > >> > > successful fetch, and can periodically commit these offsets
> back
> > > to
> > > > >> > Kafka.
> > > > >> > >
> > > > >> > > Guozhang
> > > > >> > >
> > > > >> > > On Thu, Oct 22, 2015 at 10:11 AM, Mohit Anchlia <
> > > > >> mohitanch...@gmail.com>
> > > > >> > > wrote:
> > > > >> > >
> > > > >> > > > It looks like the new consumer API expects developers to
> > manage
> > > > the
> > > > >> > > > failures? Or is there some other API that can abstract the
> > > > failures,
> > > > >> > > > primarily:
> > > > >> > > >
> > > > >> > > > 1) Automatically resent failed messages because of network
> > issue
> > > > or
> > > > >> > some
> > > > >> > > > other issue between the broker and the consumer
> > > > >> > > > 2) Ability to acknowledge receipt of a message by the
> consumer
> > > > such
> > > > >> > that
> > > > >> > > > message is sent again if consumer fails to acknowledge the
> > > > receipt.
> > > > >> > > >
> > > > >> > > > Is there such an API or are the clients expected to deal
> with
> > > > failure
> > > > >> > > > scenarios?
> > > > >> > > >
> > > > >> > > > Docs I am looking at are here:
> > > > >> > > >
> > > > >> > > >
> > > > >>
> > > http://people.apache.org/~nehanarkhede/kafka-0.9-consumer-javadoc/doc/
> > > > >> > > >
> > > > >> > >
> > > > >> > >
> > > > >> > >
> > > > >> > > --
> > > > >> > > -- Guozhang
> > > > >> > >
> > > > >> >
> > > > >>
> > > > >>
> > > > >>
> > > > >> --
> > > > >> -- Guozhang
> > > > >>
> > > >
> > >
> >
> >
> >
> > --
> > -- Guozhang
> >
>



-- 
-- Guozhang
