Ok, I am in a situation where all my Kafka nodes are going to run out of space. This is because I had been running with an uncompacted __consumer_offsets topic and with topics that retain everything.
Now I am at a point where I can afford to compact the __consumer_offsets topic and also delete certain topics, and I would like to know the right process for doing this. Since I have close to 1.8T of data in the __consumer_offsets topic and even more in the other topics, any log compaction and log deletion/truncation is going to take time. Should I do this node by node? Will Kafka's replication get in the way? (I have read that uncompacted data from the leader is sent to the followers.) Is there a clean process for this on a 3-node Kafka cluster?

Last time I triggered log compaction on all 3 nodes simultaneously, all consumers broke (I raised this on this same mailing list and was told to increase the memory). They eventually self-healed, but it caused serious disruption to the service, so before trying again I want to make sure there is a cleaner process here.

Any help/pointers will be greatly appreciated.

Thanks,
Sathya
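P.S. In case it helps to see what I have in mind, below is a rough sketch (using the Java AdminClient, which is what I am comfortable with) of the two changes I am planning: making sure cleanup.policy=compact is set on __consumer_offsets, and deleting the topics I no longer need. The broker address and the "old-topic-*" names are just placeholders, and my understanding is that the actual compaction is then done by the broker-side log cleaner in the background, so please correct me if this is not the right approach.

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.AlterConfigOp;
import org.apache.kafka.clients.admin.ConfigEntry;
import org.apache.kafka.common.config.ConfigResource;

import java.util.Collection;
import java.util.List;
import java.util.Map;
import java.util.Properties;

public class ReclaimSpacePlan {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Placeholder bootstrap address for my cluster
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            // Step 1: make sure __consumer_offsets has cleanup.policy=compact
            // so the log cleaner can start reclaiming space.
            ConfigResource offsetsTopic =
                    new ConfigResource(ConfigResource.Type.TOPIC, "__consumer_offsets");
            AlterConfigOp setCompact = new AlterConfigOp(
                    new ConfigEntry("cleanup.policy", "compact"),
                    AlterConfigOp.OpType.SET);
            Map<ConfigResource, Collection<AlterConfigOp>> updates =
                    Map.of(offsetsTopic, List.of(setCompact));
            admin.incrementalAlterConfigs(updates).all().get();

            // Step 2: delete the topics I no longer need (placeholder names).
            admin.deleteTopics(List.of("old-topic-1", "old-topic-2")).all().get();
        }
    }
}

(I am on a reasonably recent Kafka version, so incrementalAlterConfigs should be available; otherwise I would fall back to the kafka-configs tool to make the same change.)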