[jira] [Commented] (KAFKA-4967) java.io.EOFException Error while committing offsets

2017-04-11 Thread Upendra Yadav (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4967?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15964757#comment-15964757
 ] 

Upendra Yadav commented on KAFKA-4967:
--

I solved this issue by calling commitOffset a 2nd time.
The problem is actually related to connections.max.idle.ms.
This property was introduced in recent Kafka versions (broker = 10 minutes,
consumer = 9 minutes, producer = 9 minutes).
Because of this, whenever my old consumer issues its next commit offset more
than 10 minutes after the previous one, I get the above exception.
With the old consumer API there is no way to set this property,
and the broker configuration change is not in my control...

I think commitOffset requires a separate connection (other than the iterator's),
and when that connection stays idle for more than 10 minutes it gets closed.
I'm not very sure about this,
but for now the consecutive call works fine for me: at least if the 1st call
fails, the 2nd call succeeds,
and if the 1st one succeeds, the next execution causes no problem.
Anyway, we make very few commit offset calls.

Next we'll try to move to the latest Kafka consumer and producer Java APIs.
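The workaround described above can be sketched as a small retry helper: run the commit once, and if the first attempt fails (e.g. with an EOFException from a connection the broker closed as idle), retry it once on what is then a fresh connection. This is a hypothetical sketch, not the reporter's actual code — the class and method names are assumptions, and `commit` stands in for a call such as `ZookeeperConsumerConnector.commitOffsets()`.

```java
import java.util.concurrent.Callable;

// Hypothetical sketch of the "call commitOffset a 2nd time" workaround.
// The helper name and single-retry policy are assumptions for illustration.
public class CommitRetry {

    // Runs the commit action; if the first attempt throws (e.g. EOFException
    // on a connection the broker closed as idle), retries exactly once.
    // Returns true if either attempt succeeded.
    public static boolean commitWithOneRetry(Callable<Void> commit) {
        try {
            commit.call();      // first attempt: may fail on a stale connection
            return true;
        } catch (Exception first) {
            try {
                commit.call();  // second attempt: uses a freshly opened connection
                return true;
            } catch (Exception second) {
                return false;
            }
        }
    }
}
```

With the old high-level consumer this would be invoked roughly as `commitWithOneRetry(() -> { consumer.commitOffsets(); return null; })`.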

> java.io.EOFException Error while committing offsets
> ---
>
> Key: KAFKA-4967
> URL: https://issues.apache.org/jira/browse/KAFKA-4967
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.10.0.1
> Environment: OS : CentOS
>Reporter: Upendra Yadav
>
> kafka server and client : 0.10.0.1
> The consumer and producer sides use the latest Kafka jars as mentioned above,
> but the code still uses the old consumer APIs. 
> kafka server side configuration :
> listeners=PLAINTEXT://:9092
> #the configuration below is for the old clients that existed before; by now 
> every client has already moved to the latest Kafka client - 0.10.0.1
> log.message.format.version=0.8.2.1
> broker.id.generation.enable=false
> unclean.leader.election.enable=false
> Some of configurations for kafka consumer :
> auto.commit.enable is overridden to false
> auto.offset.reset is overridden to smallest
> consumer.timeout.ms is overridden to 100
> dual.commit.enabled is overridden to true
> fetch.message.max.bytes is overridden to 209715200
> group.id is overridden to crm_topic1_hadoop_tables
> offsets.storage is overridden to kafka
> rebalance.backoff.ms is overridden to 6000
> zookeeper.session.timeout.ms is overridden to 23000
> zookeeper.sync.time.ms is overridden to 2000
> I get the exception below on commit offset.
> The consumer process is still running after this exception,
> but when I check the offset position through the Kafka shell scripts it shows 
> the old position ("Could not fetch offset from topic1_group1 partition [topic1,0] 
> due to missing offset data in zookeeper"). After some time, when the 2nd commit 
> comes, it gets updated.
> Because dual commit is enabled, I think the Kafka-side position gets updated 
> successfully both times.
> ERROR kafka.consumer.ZookeeperConsumerConnector: [], Error while 
> committing offsets.
> java.io.EOFException
> at 
> org.apache.kafka.common.network.NetworkReceive.readFromReadableChannel(NetworkReceive.java:83)
> at 
> kafka.network.BlockingChannel.readCompletely(BlockingChannel.scala:129)
> at kafka.network.BlockingChannel.receive(BlockingChannel.scala:120)
> at 
> kafka.consumer.ZookeeperConsumerConnector.liftedTree2$1(ZookeeperConsumerConnector.scala:354)
> at 
> kafka.consumer.ZookeeperConsumerConnector.commitOffsets(ZookeeperConsumerConnector.scala:351)
> at 
> kafka.consumer.ZookeeperConsumerConnector.commitOffsets(ZookeeperConsumerConnector.scala:331)
> at 
> kafka.javaapi.consumer.ZookeeperConsumerConnector.commitOffsets(ZookeeperConsumerConnector.scala:111)
> at 
> com.zoho.mysqlbackup.kafka.consumer.KafkaHLConsumer.commitOffset(KafkaHLConsumer.java:173)
> at 
> com.zoho.mysqlbackup.kafka.consumer.KafkaHLConsumer.run(KafkaHLConsumer.java:271)



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (KAFKA-4967) java.io.EOFException Error while committing offsets

2017-04-10 Thread Upendra Yadav (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4967?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Upendra Yadav updated KAFKA-4967:
-
[jira] [Updated] (KAFKA-4967) java.io.EOFException Error while committing offsets

2017-04-10 Thread Upendra Yadav (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4967?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Upendra Yadav updated KAFKA-4967:
-

[jira] [Updated] (KAFKA-4967) java.io.EOFException Error while committing offsets

2017-04-10 Thread Upendra Yadav (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4967?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Upendra Yadav updated KAFKA-4967:
-

[jira] [Commented] (KAFKA-4967) java.io.EOFException Error while committing offsets

2017-04-07 Thread Upendra Yadav (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4967?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15960489#comment-15960489
 ] 

Upendra Yadav commented on KAFKA-4967:
--

Same exception, even after setting:
 dual.commit.enabled = false
 consumer.timeout.ms = 1000

with the other settings kept as the old configuration (defined in the description).

Some more details:
with version 0.8.2.1 I never faced this problem.
After moving to 0.10.0.1 (client as well as server), I started getting this 
exception.

I get this exception on every 2nd commitOffset call,
except that sometimes (when commitOffset is called within 10 seconds of the 
previous commit) there is no exception for the 2nd commit.

And for your information: if the commit offset fails, the consumer just reads 
the next messages.
But if the commit offset fails and the consumer process is restarted, it reads 
from the old committed position (without any exception).
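The timing pattern reported above — commits issued soon after the previous one succeed, later ones fail once — matches the broker closing connections idle longer than connections.max.idle.ms (10 minutes by default on the broker). A minimal sketch of that predicate, assuming hypothetical names (`IdleCheck`, `likelyStale` are not from the report):

```java
// Hypothetical helper illustrating the observed timing: a commit issued after
// the broker's idle timeout has elapsed since the last network activity will
// likely hit a closed socket and fail once before a reconnect succeeds.
public class IdleCheck {
    // Broker-side default for connections.max.idle.ms: 10 minutes.
    public static final long BROKER_IDLE_MS = 10 * 60 * 1000L;

    // True when enough time has passed since the last commit that the broker
    // has probably closed the idle connection.
    public static boolean likelyStale(long lastCommitMs, long nowMs, long idleTimeoutMs) {
        return nowMs - lastCommitMs > idleTimeoutMs;
    }
}
```

Under this model, committing at intervals shorter than the idle timeout (or retrying once, as in the earlier comment) avoids the visible EOFException.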

> java.io.EOFException Error while committing offsets
> ---
>
> Key: KAFKA-4967
> URL: https://issues.apache.org/jira/browse/KAFKA-4967
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.10.0.1
> Environment: OS : CentOS
>Reporter: Upendra Yadav
>
> kafka server and client : 0.10.0.1
> kafka server side configuration :
> listeners=PLAINTEXT://:9092
> #the configuration below is for the old clients that existed before; by now 
> every client has already moved to the latest Kafka client - 0.10.0.1
> log.message.format.version=0.8.2.1
> broker.id.generation.enable=false
> unclean.leader.election.enable=false
> Some of configurations for kafka consumer :
> auto.commit.enable is overridden to false
> auto.offset.reset is overridden to smallest
> consumer.timeout.ms is overridden to 100
> dual.commit.enabled is overridden to true
> fetch.message.max.bytes is overridden to 209715200
> group.id is overridden to crm_172_19_255_187_hadoop_tables
> offsets.storage is overridden to kafka
> rebalance.backoff.ms is overridden to 6000
> zookeeper.session.timeout.ms is overridden to 23000
> zookeeper.sync.time.ms is overridden to 2000
> I get the exception below on commit offset.
> The consumer process is still running after this exception,
> but when I check the offset position through the Kafka shell scripts it shows 
> the old position ("Could not fetch offset from topic1_group1 partition [topic1,0] 
> due to missing offset data in zookeeper"). After some time, when the 2nd commit 
> comes, it gets updated.
> Because dual commit is enabled, I think the Kafka-side position gets updated 
> successfully both times.
> ERROR kafka.consumer.ZookeeperConsumerConnector: [], Error while 
> committing offsets.
> java.io.EOFException
> at 
> org.apache.kafka.common.network.NetworkReceive.readFromReadableChannel(NetworkReceive.java:83)
> at 
> kafka.network.BlockingChannel.readCompletely(BlockingChannel.scala:129)
> at kafka.network.BlockingChannel.receive(BlockingChannel.scala:120)
> at 
> kafka.consumer.ZookeeperConsumerConnector.liftedTree2$1(ZookeeperConsumerConnector.scala:354)
> at 
> kafka.consumer.ZookeeperConsumerConnector.commitOffsets(ZookeeperConsumerConnector.scala:351)
> at 
> kafka.consumer.ZookeeperConsumerConnector.commitOffsets(ZookeeperConsumerConnector.scala:331)
> at 
> kafka.javaapi.consumer.ZookeeperConsumerConnector.commitOffsets(ZookeeperConsumerConnector.scala:111)
> at 
> com.zoho.mysqlbackup.kafka.consumer.KafkaHLConsumer.commitOffset(KafkaHLConsumer.java:173)
> at 
> com.zoho.mysqlbackup.kafka.consumer.KafkaHLConsumer.run(KafkaHLConsumer.java:271)





[jira] [Created] (KAFKA-4967) java.io.EOFException Error while committing offsets

2017-03-28 Thread Upendra Yadav (JIRA)
Upendra Yadav created KAFKA-4967:


 Summary: java.io.EOFException Error while committing offsets
 Key: KAFKA-4967
 URL: https://issues.apache.org/jira/browse/KAFKA-4967
 Project: Kafka
  Issue Type: Bug
  Components: consumer
Affects Versions: 0.10.0.1
 Environment: OS : CentOS
Reporter: Upendra Yadav


kafka server and client : 0.10.0.1

kafka server side configuration :
listeners=PLAINTEXT://:9092
#the configuration below is for the old clients that existed before; by now every 
client has already moved to the latest Kafka client - 0.10.0.1
log.message.format.version=0.8.2.1
broker.id.generation.enable=false
unclean.leader.election.enable=false

Some of configurations for kafka consumer :
auto.commit.enable is overridden to false
auto.offset.reset is overridden to smallest
consumer.timeout.ms is overridden to 100
dual.commit.enabled is overridden to true
fetch.message.max.bytes is overridden to 209715200
group.id is overridden to crm_172_19_255_187_hadoop_tables
offsets.storage is overridden to kafka
rebalance.backoff.ms is overridden to 6000
zookeeper.session.timeout.ms is overridden to 23000
zookeeper.sync.time.ms is overridden to 2000
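The consumer overrides listed above could be expressed as the `Properties` object an old-API (`kafka.consumer`) client is built from. Values are copied from the report; the `zookeeper.connect` address is a placeholder assumption, not part of the report.

```java
import java.util.Properties;

// The reported old-consumer configuration as a Properties object.
// zookeeper.connect is a placeholder; all other values come from the report.
public class ConsumerProps {
    public static Properties build() {
        Properties props = new Properties();
        props.put("zookeeper.connect", "localhost:2181"); // placeholder assumption
        props.put("group.id", "crm_172_19_255_187_hadoop_tables");
        props.put("auto.commit.enable", "false");
        props.put("auto.offset.reset", "smallest");
        props.put("consumer.timeout.ms", "100");
        props.put("dual.commit.enabled", "true");
        props.put("fetch.message.max.bytes", "209715200");
        props.put("offsets.storage", "kafka");
        props.put("rebalance.backoff.ms", "6000");
        props.put("zookeeper.session.timeout.ms", "23000");
        props.put("zookeeper.sync.time.ms", "2000");
        return props;
    }
}
```

Note that with the old consumer API there is no `connections.max.idle.ms` key to put here, which is exactly the limitation discussed in the comments.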

I get the exception below on commit offset.
The consumer process is still running after this exception,
but when I check the offset position through the Kafka shell scripts it shows 
the old position ("Could not fetch offset from topic1_group1 partition [topic1,0] 
due to missing offset data in zookeeper"). After some time, when the 2nd commit 
comes, it gets updated.

Because dual commit is enabled, I think the Kafka-side position gets updated 
successfully both times.

ERROR kafka.consumer.ZookeeperConsumerConnector: [], Error while 
committing offsets.
java.io.EOFException
at 
org.apache.kafka.common.network.NetworkReceive.readFromReadableChannel(NetworkReceive.java:83)
at 
kafka.network.BlockingChannel.readCompletely(BlockingChannel.scala:129)
at kafka.network.BlockingChannel.receive(BlockingChannel.scala:120)
at 
kafka.consumer.ZookeeperConsumerConnector.liftedTree2$1(ZookeeperConsumerConnector.scala:354)
at 
kafka.consumer.ZookeeperConsumerConnector.commitOffsets(ZookeeperConsumerConnector.scala:351)
at 
kafka.consumer.ZookeeperConsumerConnector.commitOffsets(ZookeeperConsumerConnector.scala:331)
at 
kafka.javaapi.consumer.ZookeeperConsumerConnector.commitOffsets(ZookeeperConsumerConnector.scala:111)
at 
com.zoho.mysqlbackup.kafka.consumer.KafkaHLConsumer.commitOffset(KafkaHLConsumer.java:173)
at 
com.zoho.mysqlbackup.kafka.consumer.KafkaHLConsumer.run(KafkaHLConsumer.java:271)


