[ https://issues.apache.org/jira/browse/KAFKA-15274?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
jianbin.chen resolved KAFKA-15274.
----------------------------------
    Resolution: Duplicate

> support moving files to be deleted to other directories
> -------------------------------------------------------
>
>                 Key: KAFKA-15274
>                 URL: https://issues.apache.org/jira/browse/KAFKA-15274
>             Project: Kafka
>          Issue Type: Task
>            Reporter: jianbin.chen
>            Assignee: jianbin.chen
>            Priority: Major
>
> Hello everyone, I am a Kafka user from China. Our company runs on public
> clouds overseas, such as AWS, Alibaba Cloud, and Huawei Cloud, and handles a
> large volume of data exchange and business message delivery every day. These
> daily messages consume a significant amount of disk space, and purchasing the
> corresponding storage capacity from these cloud providers is costly,
> especially for SSDs with ultra-high IOPS. High IOPS is very effective for
> disaster recovery: when a broker fails suddenly because its storage fills up
> or its memory is exhausted and the process is OOM-killed, high-IOPS storage
> greatly improves data recovery efficiency. To keep costs under control we are
> therefore forced to use smaller, high-IO storage volumes, particularly since
> cloud providers only allow capacity to be expanded, not reduced.
> We have come up with a solution and would like to contribute it to the
> community for discussion. We can purchase object storage such as S3 or MinIO
> from providers like AWS and mount it on the brokers. When a log needs to be
> deleted, a new option would decide how it leaves the broker: the default
> deletes it directly, while a "move" option moves it to the mounted S3
> storage. Since most deleted data is cold data that will not be used in the
> short term, this approach extends the retention period of historical data
> while keeping costs well under control.

--
This message was sent by Atlassian Jira
(v8.20.10#820010)
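The proposed move-on-delete behavior could be sketched roughly as below. This is a minimal illustration only, not actual Kafka code: the class, enum, and method names are hypothetical, and it assumes the archive directory is a locally mounted path (e.g. an S3/MinIO mount) so a plain file move suffices.

```java
import java.io.IOException;
import java.nio.file.*;

// Hypothetical sketch of the proposal: instead of unlinking an expired log
// segment, optionally move it to a mounted archive directory.
public class SegmentCleanupSketch {
    enum CleanupMode { DELETE, MOVE }

    // Returns the archived path when moving, or null when deleting.
    static Path cleanup(Path segment, CleanupMode mode, Path archiveDir) throws IOException {
        if (mode == CleanupMode.DELETE) {
            Files.deleteIfExists(segment);
            return null;
        }
        Files.createDirectories(archiveDir);
        Path target = archiveDir.resolve(segment.getFileName());
        // REPLACE_EXISTING keeps the operation idempotent if a retry re-moves the file.
        return Files.move(segment, target, StandardCopyOption.REPLACE_EXISTING);
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempDirectory("kafka-logs");
        Path segment = Files.createFile(tmp.resolve("00000000000000000000.log"));
        Path archive = tmp.resolve("archive");
        Path moved = cleanup(segment, CleanupMode.MOVE, archive);
        System.out.println("moved=" + Files.exists(moved) + " original=" + Files.exists(segment));
    }
}
```

In a real implementation the mode would presumably be a broker or topic configuration, and the move would need to handle the segment's index files alongside the `.log` file.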