RestfulBlue opened a new issue #6579: tasks tables in metadata storage are not cleared
URL: https://github.com/apache/incubator-druid/issues/6579
 
 
   We have a separate cluster for stress testing with the following kill configuration.
   Coordinator:
   ```
   druid.coordinator.kill.on=true
   druid.coordinator.kill.period=PT1H
   druid.coordinator.kill.durationToRetain=PT1H
   druid.coordinator.kill.maxSegments=100000
   druid.coordinator.kill.pendingSegments.on=true
   ```
   Service-wide:
   ```
   druid.indexer.logs.kill.enabled=true
   druid.indexer.logs.kill.durationToRetain=1000
   ```
   Default rules are set to load the last 12 hours, drop forever.
   
   After some time we noticed that the metadata storage keeps growing, even though the total size of data in the cluster stays the same (new data comes in, old data is killed). After we stopped feeding data, segments and datasources were cleaned up, but when we looked into the [database](https://prnt.sc/lf37p9), the tasks table was not cleared.
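   
   A minimal sketch of how the table growth can be inspected directly in Postgres (connection settings are placeholders; table names assume the default `druid` base name):
   ```
   import psycopg2
   
   # Placeholder connection settings: point these at the metadata storage.
   conn = psycopg2.connect(host="metadata-db.example.com", dbname="druid",
                           user="druid", password="...")
   
   # Default metadata table names; adjust if a custom base name is configured.
   tables = ["druid_segments", "druid_tasks", "druid_tasklogs", "druid_tasklocks"]
   
   # Print a row count per table to see which ones keep growing.
   with conn, conn.cursor() as cur:
       for table in tables:
           cur.execute("SELECT COUNT(*) FROM {}".format(table))
           print(table, cur.fetchone()[0])
   ```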
   
   Is this a bug, or do we have a misconfiguration somewhere?
   
   Druid version is 12.3.
   Metadata storage is Postgres.
   Data in HDFS (logs/segments) is properly cleared.
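   
   If there is no built-in cleanup for the tasks table, is deleting completed task rows by hand a reasonable workaround? A rough sketch of what we have in mind, assuming the default schema (`active` flag, `created_date` stored as an ISO-8601 string); related `druid_tasklogs`/`druid_tasklocks` rows would presumably need the same treatment:
   ```
   import psycopg2
   
   conn = psycopg2.connect(host="metadata-db.example.com", dbname="druid",
                           user="druid", password="...")
   
   # Delete completed (inactive) tasks created before a cutoff date.
   # created_date is an ISO-8601 string, so lexicographic comparison works.
   cutoff = "2018-10-01T00:00:00.000Z"
   with conn, conn.cursor() as cur:
       cur.execute(
           "DELETE FROM druid_tasks WHERE active = false AND created_date < %s",
           (cutoff,),
       )
       print("deleted", cur.rowcount, "task rows")
   ```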
   
   
   
