Re: High Level Consumer and Commit
>> Only problem is that the number of connections to Kafka is increased.
*Why* is it a problem?
Philip
Thanks, Balaji!

It looks like your approach depends on specific implementation details, such as
the directory structure in ZK.

In this case it doesn't matter much, since the APIs are not stable yet, but in
general, wouldn't you prefer to use public APIs?
> topicDirs.consumerOffsetDir()+"/"+metaData.getPartitionNumber(),nextOffset+"");
> checkPointedOffset.put(key,nextOffset);
> }
> }
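For readers who only see this fragment: it appears to write the next offset for a
partition straight into the consumer-offset path in ZooKeeper. Below is a minimal
sketch of that idea, assuming the 0.8-era layout
/consumers/<group>/offsets/<topic>/<partition> and a ZkClient constructed with a
string serializer (e.g. Kafka's ZKStringSerializer); the class and method names are
made up for illustration and are not Balaji's actual code.

    import org.I0Itec.zkclient.ZkClient;

    // Sketch only: checkpoint a consumer offset by writing directly to the
    // ZooKeeper path the 0.8 high-level consumer reads on startup.
    // Assumes zkClient serializes data as plain strings; the default
    // SerializableSerializer would not store readable text.
    public class ZkOffsetCheckpointer {
        private final ZkClient zkClient;
        private final String group;

        public ZkOffsetCheckpointer(ZkClient zkClient, String group) {
            this.zkClient = zkClient;
            this.group = group;
        }

        public void checkpoint(String topic, int partition, long nextOffset) {
            // Same directory structure ZKGroupTopicDirs#consumerOffsetDir points at.
            String path = "/consumers/" + group + "/offsets/" + topic + "/" + partition;
            if (!zkClient.exists(path)) {
                zkClient.createPersistent(path, true); // create parent znodes as needed
            }
            zkClient.writeData(path, Long.toString(nextOffset));
        }
    }

This is the implementation-detail coupling Gwen is pointing at: the path layout is
internal to the 0.8 consumer, while commitOffsets() on the ConsumerConnector gets
you the same effect through a public API.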
Sorry, I guess I missed that. The followup discussion was around the
simple consumer :)
I'm not sure why the OP didn't find this solution acceptable.
On Wed, Sep 3, 2014 at 8:29 AM, Philip O'Toole wrote:
> That's what I said in my first reply. :-)
That's what I said in my first reply. :-)
--
http://www.philipotoole.com
On Tuesday, September 2, 2014 10:37 PM, Gwen Shapira wrote:
I believe a simpler solution would be to create multiple ConsumerConnectors,
each with one thread (a single ConsumerStream), and use the commitOffsets API
to commit all partitions managed by each ConsumerConnector after the thread has
finished processing the messages.

Does that solve the problem, Bhavesh?

Gwen
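A rough sketch of that setup against the 0.8 high-level Java consumer API; the
topic, group, batch size and process() body below are placeholders, not anything
from this thread.

    import java.util.Collections;
    import java.util.List;
    import java.util.Properties;

    import kafka.consumer.Consumer;
    import kafka.consumer.ConsumerConfig;
    import kafka.consumer.ConsumerIterator;
    import kafka.consumer.KafkaStream;
    import kafka.javaapi.consumer.ConsumerConnector;

    // Sketch: one ConsumerConnector per thread, auto-commit disabled,
    // commitOffsets() called only after a batch has been fully processed.
    public class SingleStreamWorker implements Runnable {
        private final ConsumerConnector connector;
        private final String topic;

        public SingleStreamWorker(String zkConnect, String group, String topic) {
            Properties props = new Properties();
            props.put("zookeeper.connect", zkConnect);
            props.put("group.id", group);
            props.put("auto.commit.enable", "false"); // we commit manually below
            this.connector = Consumer.createJavaConsumerConnector(new ConsumerConfig(props));
            this.topic = topic;
        }

        @Override
        public void run() {
            // A single stream per connector, as suggested above.
            List<KafkaStream<byte[], byte[]>> streams =
                connector.createMessageStreams(Collections.singletonMap(topic, 1)).get(topic);
            ConsumerIterator<byte[], byte[]> it = streams.get(0).iterator();

            int processedInBatch = 0;
            while (it.hasNext()) {
                process(it.next().message());        // placeholder for real work
                if (++processedInBatch >= 100) {     // arbitrary batch size
                    connector.commitOffsets();       // commit only once the batch is done
                    processedInBatch = 0;
                }
            }
        }

        private void process(byte[] message) { /* application-specific */ }
    }

Because each connector ends up owning its own subset of partitions after
rebalancing, commitOffsets() on one connector only commits what that single
thread has actually processed.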
Yeah, from reading that I suspect you need the SimpleConsumer. Try it out and
see.
Philip
--
http://www.philipotoole.com
On Tuesday, September 2, 2014 5:43 PM, Bhavesh Mistry wrote:
Hi Philip,

Yes, we have disabled auto-commit, but we need to be able to read from a
particular offset if we manage the offsets ourselves in some storage (a DB).
The high-level consumer does not allow per-partition offset management to be
plugged in. I would like to keep the high-level consumer's failover and
auto-rebalancing.
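To make that requirement concrete, here is a purely hypothetical interface for the
kind of external, e.g. database-backed, offset store being described; none of these
names exist in the Kafka API.

    // Hypothetical plug-in point for externally managed offsets.
    public interface OffsetStore {

        /** Persist the next offset to read for (group, topic, partition), ideally in
         *  the same transaction that records the results of processing. */
        void save(String group, String topic, int partition, long nextOffset);

        /** Offset to resume from, or a negative value if nothing is stored yet. */
        long load(String group, String topic, int partition);
    }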
No, you'll need to write your own failover.
I'm not sure I follow your second question, but the high-level Consumer should
be able to do what you want if you disable auto-commit. I'm not sure what else
you're asking.
Philip
--
http://www.philipotoole.com
Hi Philip,

Thanks for the update. With the SimpleConsumer I will not get the failover and
rebalancing that are provided out of the box. What other option is there to
avoid blocking reads, keep processing, and commit only when a batch is done?

Thanks,

Bhavesh
On Tue, Sep 2, 2014 at 4:43 PM, Philip O'Toole wrote:
Either use the SimpleConsumer, which gives you much finer-grained control, or
(this worked with 0.7) spin up a ConsumerConnector (this is a high-level
consumer concept) per partition and turn off auto-commit.
Philip
--
http://www.philipotoole.com
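For the SimpleConsumer route, a stripped-down sketch of fetching from an explicit
offset (the kind of offset you would load from an external store). Leader
discovery, error handling and retries are omitted, and the host, topic and offsets
are placeholders.

    import java.nio.ByteBuffer;

    import kafka.api.FetchRequest;
    import kafka.api.FetchRequestBuilder;
    import kafka.javaapi.FetchResponse;
    import kafka.javaapi.consumer.SimpleConsumer;
    import kafka.message.MessageAndOffset;

    // Sketch: read a batch starting at a known offset with the 0.8 SimpleConsumer.
    public class FetchFromStoredOffset {

        public static void main(String[] args) {
            String topic = "my-topic";     // placeholder
            int partition = 0;             // placeholder
            long offset = 12345L;          // e.g. loaded from an external offset store

            SimpleConsumer consumer =
                new SimpleConsumer("broker-host", 9092, 100000, 64 * 1024, "offset-demo");
            try {
                FetchRequest req = new FetchRequestBuilder()
                    .clientId("offset-demo")
                    .addFetch(topic, partition, offset, 100000)
                    .build();
                FetchResponse response = consumer.fetch(req);

                for (MessageAndOffset mao : response.messageSet(topic, partition)) {
                    ByteBuffer payload = mao.message().payload();
                    byte[] bytes = new byte[payload.limit()];
                    payload.get(bytes);
                    System.out.println(mao.offset() + ": " + new String(bytes));
                    offset = mao.nextOffset(); // persist this externally after processing
                }
            } finally {
                consumer.close();
            }
        }
    }

The pieces left out here (finding the partition leader, handling broker failover,
rebalancing work across consumers) are exactly the code you end up writing yourself
once you leave the high-level consumer.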