Hello Virendra,

Did you have any producer/consumer clients running during the whole process?
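
(Asking because, as a guess at the cause: with the 0.8 broker default of
auto.create.topics.enable=true, any producer/consumer metadata request for
one of the old topics would quietly re-create it on the brokers.)

  # broker config in server.properties; the 0.8 default is shown
  auto.create.topics.enable=true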

Guozhang


On Wed, Jun 25, 2014 at 11:53 PM, Virendra Pratap Singh <
vpsi...@yahoo-inc.com.invalid> wrote:

> I am aware of the lack of a programmatic way to delete topics in Kafka
> 0.8.0, so I am using the sledgehammer approach.
> This is what I am doing:
>
> 1. Bring my whole Kafka cluster down.
> 2. On every broker, delete all the content under the directories pointed
> to by the log.dirs setting.
> 3. Delete the topic metadata from ZooKeeper: rmr /brokers (note that I am
> not wiping the whole ZooKeeper tree, only the /brokers znode where the
> Kafka broker ids and topic metadata are stored).
> 4. Restart the Kafka cluster.
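>
> Roughly, as commands (a minimal sketch only; the script names, ZooKeeper
> host and data directory below are placeholders, not my actual setup):
>
>   # 1. stop every broker in the cluster
>   bin/kafka-server-stop.sh
>   # 2. on each broker, wipe the directories listed under log.dirs
>   rm -rf /var/kafka-logs/*
>   # 3. remove only the /brokers znode (broker ids + topic metadata)
>   zkCli.sh -server zkhost:2181 rmr /brokers
>   # 4. bring the brokers back up
>   bin/kafka-server-start.sh config/server.properties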
>
> One would expect the Kafka cluster to come up with no memory of any
> topics from before.
>
> But guess what, and this is where I need help and need to understand:
> when the Kafka cluster comes back, it is somehow able to obtain the info
> about the previous topics. It promptly goes ahead creating and assigning
> partitions/replicas on the brokers for those topics. Now I am completely
> at a loss to understand where exactly Kafka gets the info about the
> previous topics when I have wiped it from ZooKeeper and also dropped the
> log.dirs locations across the Kafka cluster.
>
> Some insight is much needed here. Where else is the topic metadata
> stored that the Kafka servers get hold of after coming back alive?
>
> Regards,
> Virendra
>
>


-- 
-- Guozhang
