Grant,
Thanks for the response.
We currently have no client running. Does the autocreate happen even if the 
client is not running?
We only have 2 brokers running, so I will recommend adding a third for this 
reason and try it that way.
We do have delete.topic.enable=true, but all that happens is we get the message 
"Topic job_processing is already marked for deletion." and the topic remains.
We are running 0.8.2 at this point.
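For reference, here is a rough sketch of how the pending-delete state can be 
checked in ZooKeeper (the localhost:2181 connect string is an assumption for our 
setup; the znode paths are the standard Kafka 0.8.x locations):
/usr/bin/zookeeper-shell.sh localhost:2181 ls /admin/delete_topics
/usr/bin/zookeeper-shell.sh localhost:2181 ls /brokers/topics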
Jason

      From: Grant Henke <ghe...@cloudera.com>
 To: users@kafka.apache.org; Jason Kania <jason.ka...@ymail.com> 
 Sent: Thursday, June 16, 2016 3:26 PM
 Subject: Re: Topic relentlessly recreated
   
Hi Jason,

We encountered a corrupted topic and when we attempt to delete it, it comes 
back with some unusable defaults. It's really, really annoying.


It sounds like you may have auto topic creation enabled and a client is 
constantly requesting that topic, causing it to be created. Try setting 
auto.create.topics.enable=false. Note that this may cause that client to fail.
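For example, a minimal sketch of that change (the /etc/kafka/server.properties 
path is an assumption; use whatever config file your brokers actually read) is 
to add the following line to each broker's server.properties and restart the 
brokers one at a time:
auto.create.topics.enable=false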

We tried creating the topic with one broker down, as the topic is only created 
on one broker, but the tool requires all brokers online so that doesn't work. As 
an aside, creating a topic without all brokers online should be possible...
/usr/bin/kafka-topics.sh --zookeeper localhost --create --topic job_processing 
--partitions 4 --replication-factor 2

How many brokers do you have? It sounds like you may only have 2 brokers total 
(and one is down). If that's the case, the tool is not able to reach the 
requested replication factor of 2 and fails. If you had 3 brokers and 1 were 
down, this should work.
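Purely as an illustration of that constraint (reusing the topic name and 
partition count from your command), with only one broker reachable the highest 
replication factor the tool can satisfy is 1:
/usr/bin/kafka-topics.sh --zookeeper localhost --create --topic job_processing 
--partitions 4 --replication-factor 1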

Since delete doesn't do what most people would expect and actually delete, we 
can't delete once everything is back online, so we are completely stuck.

It's true that the delete functionality could be improved. However, assuming 
your cluster and topics are healthy, delete should work as you expect and 
actually delete. Do you have delete functionality enabled on your cluster? Try 
setting delete.topic.enable=true.
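Once that is set on every broker (and they have been restarted), a sketch of 
the delete-and-verify steps, assuming the same localhost ZooKeeper as your 
create command, would be:
/usr/bin/kafka-topics.sh --zookeeper localhost --delete --topic job_processing
/usr/bin/kafka-topics.sh --zookeeper localhost --list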
Thanks,
Grant





On Thu, Jun 16, 2016 at 2:11 PM, Jason Kania <jason.ka...@ymail.com.invalid> 
wrote:

We encountered a corrupted topic and when we attempt to delete it, it comes 
back with some unusable defaults. It's really, really annoying.
We are shutting down all the Kafka brokers, removing the Kafka log folder and 
contents on all nodes, removing the broker topic information from ZooKeeper, and 
restarting everything, but it continues to happen.
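A rough sketch of that cleanup, run only with all brokers stopped (the 
/var/kafka-logs directory and the localhost:2181 connect string are assumptions; 
the znode paths are the standard Kafka 0.8.x locations):
rm -rf /var/kafka-logs/job_processing-*
/usr/bin/zookeeper-shell.sh localhost:2181 rmr /brokers/topics/job_processing
/usr/bin/zookeeper-shell.sh localhost:2181 rmr /admin/delete_topics/job_processing
/usr/bin/zookeeper-shell.sh localhost:2181 rmr /config/topics/job_processing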
We tried creating the topic with one broker down, as the topic is only created 
on one broker, but the tool requires all brokers online so that doesn't work. As 
an aside, creating a topic without all brokers online should be possible...
/usr/bin/kafka-topics.sh --zookeeper localhost --create --topic job_processing 
--partitions 4 --replication-factor 2

Since delete doesn't do what most people would expect and actually delete, we 
can't delete once everything is back online, so we are completely stuck.
Any suggestions would be appreciated.
Thanks,
Jason



-- 
Grant Henke 
Software Engineer | Cloudera | gr...@cloudera.com | twitter.com/gchenke | 
linkedin.com/in/granthenke

  
