Hello Weide,

That should be doable via the high-level consumer; you can take a look at this
page:

https://cwiki.apache.org/confluence/display/KAFKA/Consumer+Group+Example
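
Roughly, with the 0.8 high-level consumer it could look something like the
sketch below (the ZooKeeper address, group id, topic name, and the
aggregate-and-publish step are placeholders you would fill in). Since the
offsets are committed to ZooKeeper under the group id, the slave can simply
start the same consumer with the same group.id and resume from the last
committed offset:

import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;

import kafka.consumer.ConsumerConfig;
import kafka.consumer.ConsumerIterator;
import kafka.consumer.KafkaStream;
import kafka.javaapi.consumer.ConsumerConnector;
import kafka.message.MessageAndMetadata;

public class AggregatingConsumer {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("zookeeper.connect", "localhost:2181");  // placeholder
        props.put("group.id", "aggregator");                // same group id on master and slave
        props.put("auto.commit.enable", "false");           // commit manually after publishing
        props.put("auto.offset.reset", "largest");

        ConsumerConnector consumer =
            kafka.consumer.Consumer.createJavaConsumerConnector(new ConsumerConfig(props));

        Map<String, Integer> topicCountMap = new HashMap<String, Integer>();
        topicCountMap.put("input-topic", 1);                // placeholder topic name
        Map<String, List<KafkaStream<byte[], byte[]>>> streams =
            consumer.createMessageStreams(topicCountMap);

        ConsumerIterator<byte[], byte[]> it = streams.get("input-topic").get(0).iterator();
        while (it.hasNext()) {
            MessageAndMetadata<byte[], byte[]> msg = it.next();
            aggregateAndPublish(msg.message());  // your aggregation + produce-back-to-Kafka step
            consumer.commitOffsets();            // checkpoint to ZooKeeper under the group id
        }
    }

    private static void aggregateAndPublish(byte[] message) {
        // placeholder for the actual aggregation and publishing logic
    }
}

(Committing on every message is just for illustration; in practice you would
commit less frequently or in batches.)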

Guozhang


On Fri, Aug 1, 2014 at 3:20 PM, Weide Zhang <weo...@gmail.com> wrote:

> Hi,
>
> I have a use case for a master/slave cluster where the logic inside the
> master needs to consume data from Kafka and publish some aggregated data
> back to Kafka. When the master dies, the slave needs to take the latest
> committed offset from the master and continue consuming the data from
> Kafka and doing the push.
>
> My question is: what would be the easiest Kafka consumer design for this
> scenario? I was thinking about using the SimpleConsumer and doing manual
> offset syncing between the master and the slave. That seems to solve the
> problem, but I was wondering if it can be achieved with the high-level
> consumer client?
>
> Thanks,
>
> Weide
>



-- 
-- Guozhang
