Either use the SimpleConsumer, which gives you much finer-grained control, or
(this worked with 0.7) spin up a ConsumerConnector (this is a high-level
consumer concept) per partition, and turn off auto-commit.
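For reference, a minimal sketch of the second approach against the 0.8
high-level consumer API (the class name, ZooKeeper address, group id, and
topic below are placeholders; run one such connector per partition, in the
same group, and the rebalance will spread the partitions across them):

import java.util.HashMap;
import java.util.Map;
import java.util.Properties;

import kafka.consumer.Consumer;
import kafka.consumer.ConsumerConfig;
import kafka.consumer.ConsumerIterator;
import kafka.consumer.KafkaStream;
import kafka.javaapi.consumer.ConsumerConnector;
import kafka.message.MessageAndMetadata;

public class PerPartitionConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("zookeeper.connect", "localhost:2181"); // placeholder
        props.put("group.id", "es-indexer");              // placeholder
        props.put("auto.commit.enable", "false");         // manual commits only

        ConsumerConnector connector =
            Consumer.createJavaConsumerConnector(new ConsumerConfig(props));

        Map<String, Integer> topicCountMap = new HashMap<String, Integer>();
        topicCountMap.put("mytopic", 1); // one stream from this connector
        KafkaStream<byte[], byte[]> stream =
            connector.createMessageStreams(topicCountMap).get("mytopic").get(0);

        ConsumerIterator<byte[], byte[]> it = stream.iterator();
        while (it.hasNext()) {
            MessageAndMetadata<byte[], byte[]> msg = it.next();
            // buffer msg.message(); once a bulk index request succeeds,
            // commit everything this connector has read so far:
            connector.commitOffsets();
        }
    }
}

Because each connector only ever sees one partition, commitOffsets() can't
accidentally commit past unprocessed messages on some other partition.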

Philip

 
-----------------------------------------
http://www.philipotoole.com 


On Tuesday, September 2, 2014 4:38 PM, Bhavesh Mistry 
<mistry.p.bhav...@gmail.com> wrote:
 


Hi Kafka Group,

I have to pull data from a topic and index it into Elasticsearch with the
Bulk API, and I want to commit only the batches that have been successfully
indexed, while continuing to read further from the same topic.  I have
auto-commit turned off.


List<Message> batch = new ArrayList<Message>();

while (iterator.hasNext()) {
    batch.add(iterator.next().message());
    if (batch.size() == 50) {
        // ===>>> Once the bulk API call succeeds, the task commits the
        // offsets to ZooKeeper...
        // (IndexAndCommitTask: a sketch of a Runnable that indexes the
        // batch and then calls consumerConnector.commitOffsets())
        executor.submit(new IndexAndCommitTask(batch, consumerConnector));
        batch = new ArrayList<Message>();
    }
}

This commitOffsets API commits offsets for all messages that have been read
so far.  What is the best way to keep reading while committing only when
another thread finishes processing a batch successfully?  This will lead to
fragmentation of the consumer offsets, so what is the best way to implement
a continuous reading stream and commit a range of offsets?

Is the SimpleConsumer a better approach for this?
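For illustration, a rough sketch of the SimpleConsumer route with the 0.8
javaapi (the class name, broker host, port, topic, partition, and starting
offset are all placeholders); you track and persist the offset yourself, so
nothing is "committed" until your indexing code says so:

import kafka.api.FetchRequest;
import kafka.api.FetchRequestBuilder;
import kafka.javaapi.FetchResponse;
import kafka.javaapi.consumer.SimpleConsumer;
import kafka.message.MessageAndOffset;

public class SimpleFetchLoop {
    public static void main(String[] args) {
        // host, port, soTimeout, bufferSize, clientId
        SimpleConsumer consumer =
            new SimpleConsumer("broker-host", 9092, 100000, 64 * 1024,
                               "es-indexer");

        long readOffset = 0L; // restore from wherever you persist offsets

        // topic, partition, offset, fetchSize
        FetchRequest req = new FetchRequestBuilder()
            .clientId("es-indexer")
            .addFetch("mytopic", 0, readOffset, 100000)
            .build();
        FetchResponse resp = consumer.fetch(req);

        for (MessageAndOffset mo : resp.messageSet("mytopic", 0)) {
            // index mo.message(); advance the offset only after the bulk
            // request succeeds, then persist readOffset yourself
            readOffset = mo.nextOffset();
        }

        consumer.close();
    }
}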


Thanks,

Bhavesh






