[
https://issues.apache.org/jira/browse/KAFKA-2022?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14581473#comment-14581473
]
Jinder Aujla commented on KAFKA-2022:
-------------------------------------
Hi,
I also noticed this. Here is part of the stack trace; is there a workaround or
something I can do to prevent this from happening? (The kind of recovery loop I have
in mind is sketched below the stack trace.)
java.nio.channels.ClosedChannelException
at kafka.network.BlockingChannel.send(BlockingChannel.scala:100)
at kafka.consumer.SimpleConsumer.liftedTree1$1(SimpleConsumer.scala:78)
at kafka.consumer.SimpleConsumer.kafka$consumer$SimpleConsumer$$sendRequest(SimpleConsumer.scala:68)
at kafka.consumer.SimpleConsumer.fetchOffsets(SimpleConsumer.scala:147)
at kafka.javaapi.consumer.SimpleConsumer.fetchOffsets(SimpleConsumer.scala:99)
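
Roughly, what I have in mind is catching the exception around the fetch, closing the
consumer and re-resolving the leader before retrying. A rough sketch only; simpleconsumer,
req, topic, partition and the findNewLeader/maxRetries names are my own placeholders,
not part of the Kafka API:

    FetchResponse fetchResponse = null;
    for (int attempt = 0; attempt < maxRetries && fetchResponse == null; attempt++) {
        try {
            fetchResponse = simpleconsumer.fetch(req);
        } catch (Exception e) {
            // A ClosedChannelException lands here when the broker we were talking to
            // has gone away; treat it like a leader change and reconnect.
            simpleconsumer.close();
            InetSocketAddress newLeader = findNewLeader(topic, partition); // placeholder helper
            simpleconsumer = new SimpleConsumer(newLeader.getHostName(), newLeader.getPort(),
                    consumerTimeout, consumerBufferSize, consumerId);
        }
    }

Is that the intended way to recover from a leader failure with the SimpleConsumer, or
should this case also be reported through fetchResponse.hasError()?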
Thanks
> simpleconsumer.fetch(req) throws a java.nio.channels.ClosedChannelException:
> null exception when the original leader fails instead of being trapped in the
> fetchResponse api while consuming messages
> -----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
>
> Key: KAFKA-2022
> URL: https://issues.apache.org/jira/browse/KAFKA-2022
> Project: Kafka
> Issue Type: Bug
> Components: consumer
> Affects Versions: 0.8.2.1
> Environment: 3 Linux nodes, each running both ZooKeeper and a Kafka broker
> under their respective users.
> Reporter: Muqeet Mohammed Ali
> Assignee: Neha Narkhede
>
> simpleconsumer.fetch(req) throws a java.nio.channels.ClosedChannelException (with a
> null message) when the original leader fails, instead of the failure being surfaced
> through the FetchResponse API while consuming messages. My understanding was that any
> fetch failure could be detected via the fetchResponse.hasError() call and then handled
> by looking up the new leader. Below is the relevant code snippet from the simple
> consumer, with a comment marking the line that throws the exception. Can you please
> comment on this?
> if (simpleconsumer == null) {
>     simpleconsumer = new SimpleConsumer(leaderAddress.getHostName(),
>             leaderAddress.getPort(), consumerTimeout, consumerBufferSize, consumerId);
> }
> FetchRequest req = new FetchRequestBuilder()
>         .clientId(getConsumerId())
>         .addFetch(topic, partition, offsetManager.getTempOffset(), consumerBufferSize)
>         // Note: the fetchSize might need to be increased
>         // if large batches are written to Kafka
>         .build();
> // the exception is thrown at the line below
> FetchResponse fetchResponse = simpleconsumer.fetch(req);
> if (fetchResponse.hasError()) {
>     numErrors++;
>     etc...
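
As far as I can tell, fetchResponse.hasError() only covers errors that the broker
returns in the fetch response itself, for example NotLeaderForPartition when leadership
has moved but the old broker is still reachable; when the leader dies outright the
connection is simply gone, so the client sees the ClosedChannelException instead and
has to look up the new leader itself. Below is a rough sketch of such a lookup against
the 0.8 metadata API; the seedBrokers list, the timeout/buffer values and the
surrounding class are assumptions, not taken from the report:

    import java.net.InetSocketAddress;
    import java.util.Collections;
    import java.util.List;
    import kafka.javaapi.PartitionMetadata;
    import kafka.javaapi.TopicMetadata;
    import kafka.javaapi.TopicMetadataRequest;
    import kafka.javaapi.TopicMetadataResponse;
    import kafka.javaapi.consumer.SimpleConsumer;

    // Ask each seed broker for topic metadata and return the current leader of the
    // partition, or null if none of the seeds can answer yet.
    InetSocketAddress findLeader(List<InetSocketAddress> seedBrokers, String topic, int partition) {
        for (InetSocketAddress seed : seedBrokers) {
            SimpleConsumer metadataConsumer = null;
            try {
                // short-lived consumer used only for the metadata request
                metadataConsumer = new SimpleConsumer(seed.getHostName(), seed.getPort(),
                        100000, 64 * 1024, "leaderLookup");
                TopicMetadataRequest request =
                        new TopicMetadataRequest(Collections.singletonList(topic));
                TopicMetadataResponse response = metadataConsumer.send(request);
                for (TopicMetadata topicMetadata : response.topicsMetadata()) {
                    for (PartitionMetadata partitionMetadata : topicMetadata.partitionsMetadata()) {
                        if (partitionMetadata.partitionId() == partition
                                && partitionMetadata.leader() != null) {
                            return new InetSocketAddress(partitionMetadata.leader().host(),
                                    partitionMetadata.leader().port());
                        }
                    }
                }
            } catch (Exception e) {
                // this seed broker may itself be down; try the next one
            } finally {
                if (metadataConsumer != null) {
                    metadataConsumer.close();
                }
            }
        }
        return null; // no live leader yet; caller should back off and retry
    }

In the case where the old broker is still up, the per-partition error code can still be
checked in the normal way, e.g. fetchResponse.errorCode(topic, partition) against
ErrorMapping.NotLeaderForPartitionCode(), before re-running the same lookup.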