ektravel commented on code in PR #17538:
URL: https://github.com/apache/druid/pull/17538#discussion_r1882800373


##########
docs/configuration/index.md:
##########
@@ -957,7 +957,7 @@ The following table shows the dynamic configuration 
properties for the Coordinat
 |`replicantLifetime`|The maximum number of Coordinator runs for which a 
segment can wait in the load queue of a Historical before Druid raises an 
alert.|15|
 |`replicationThrottleLimit`|The maximum number of segment replicas that can be 
assigned to a historical tier in a single Coordinator run. This property 
prevents Historical services from becoming overwhelmed when loading extra 
replicas of segments that are already available in the cluster.|500|
 |`balancerComputeThreads`|Thread pool size for computing moving cost of 
segments during segment balancing. Consider increasing this if you have a lot 
of segments and moving segments begins to stall.|`num_cores` / 2|
-|`killDataSourceWhitelist`|List of specific data sources for which kill tasks 
are sent if property `druid.coordinator.kill.on` is true. This can be a list of 
comma-separated data source names or a JSON array.|none|
+|`killDataSourceWhitelist`|List of specific data sources for which kill tasks 
are sent if property `druid.coordinator.kill.on` is true. Can be a list of 
comma-separated data source names or a JSON array. If `killDataSourceWhitelist` 
is empty, the Coordinator issues kill tasks for all data sources.|none|
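For illustration, a Coordinator dynamic configuration payload that restricts kill tasks to specific data sources might look like the following (the datasource names here are hypothetical):

```json
{
  "killDataSourceWhitelist": ["wikipedia", "koalas"]
}
```

The equivalent comma-separated form is `"killDataSourceWhitelist": "wikipedia,koalas"`. Leaving the list empty makes all data sources eligible for kill tasks, as described above.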

Review Comment:
   Updated



##########
docs/configuration/index.md:
##########
@@ -890,9 +890,9 @@ These Coordinator static configurations can be defined in 
the `coordinator/runti
 |`druid.coordinator.startDelay`|The Coordinator assumes it has an up-to-date view of the state of the world when it runs. However, the current ZooKeeper interaction code is written in a way that doesn't allow the Coordinator to know for a fact that it's done loading the current state of the world. This delay is a hack to give it enough time to believe that it has all the data.|`PT300S`|
 |`druid.coordinator.load.timeout`|The timeout duration for when the 
Coordinator assigns a segment to a Historical service.|`PT15M`|
 |`druid.coordinator.kill.pendingSegments.on`|Boolean flag for whether or not the Coordinator cleans up old entries in the `pendingSegments` table of the metadata store. If set to true, the Coordinator checks the created time of the most recently completed task. If none exists, it finds the created time of the earliest running/pending/waiting task. Once the created time is found, then for all datasources not in the `killPendingSegmentsSkipList` (see [Dynamic configuration](#dynamic-configuration)), the Coordinator asks the Overlord to clean up entries in the `pendingSegments` table that are 1 day or more older than the found created time. This is done periodically based on the specified `druid.coordinator.period.indexingPeriod`.|true|
-|`druid.coordinator.kill.on`|Boolean flag for whether or not the Coordinator 
should submit kill task for unused segments, that is, permanently delete them 
from metadata store and deep storage. If set to true, then for all whitelisted 
datasources (or optionally all), Coordinator will submit tasks periodically 
based on `period` specified. A whitelist can be set via dynamic configuration 
`killDataSourceWhitelist` described later.<br /><br />When 
`druid.coordinator.kill.on` is true, segments are eligible for permanent 
deletion once their data intervals are older than 
`druid.coordinator.kill.durationToRetain` relative to the current time. If a 
segment's data interval is older than this threshold at the time it is marked 
unused, it is eligible for permanent deletion immediately after being marked 
unused.|false|
+|`druid.coordinator.kill.on`|Boolean flag to enable the Coordinator to submit 
a kill task for unused segments and delete them permanently from the metadata 
store and deep storage.|false|
 |`druid.coordinator.kill.period`| The frequency of sending kill tasks to the 
indexing service. The value must be greater than or equal to 
`druid.coordinator.period.indexingPeriod`. Only applies if kill is turned 
on.|Same as `druid.coordinator.period.indexingPeriod`|
-|`druid.coordinator.kill.durationToRetain`|Only applies if you set 
`druid.coordinator.kill.on` to `true`. This value is ignored if 
`druid.coordinator.kill.ignoreDurationToRetain` is `true`. Valid configurations 
must be a ISO8601 period. Druid will not kill unused segments whose interval 
end date is beyond `now - durationToRetain`. `durationToRetain` can be a 
negative ISO8601 period, which would result in `now - durationToRetain` to be 
in the future.<br /><br />Note that the `durationToRetain` parameter applies to 
the segment interval, not the time that the segment was last marked unused. For 
example, if `durationToRetain` is set to `P90D`, then a segment for a time 
chunk 90 days in the past is eligible for permanent deletion immediately after 
being marked unused.|`P90D`|
+|`druid.coordinator.kill.durationToRetain`|Period to retain unused segments, in ISO 8601 period format. When `druid.coordinator.kill.on` is true, segments become eligible for permanent deletion once their data intervals are older than `durationToRetain` relative to the current time. For example, if `durationToRetain` is set to `P90D`, a segment for a time interval more than 90 days in the past becomes eligible for permanent deletion immediately after being marked unused. If you set `durationToRetain` to a negative ISO 8601 period, `now - durationToRetain` falls in the future, so any unused segment is eligible for deletion immediately.|`P90D`|
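The retention arithmetic above can be sketched in a few lines. This is a minimal illustration rather than Druid code: it approximates the ISO 8601 period with a `timedelta`, and the cutoff it computes is the point before which a segment's data interval must end for the segment to be deletable once marked unused.

```python
from datetime import datetime, timedelta, timezone

def kill_cutoff(now: datetime, duration_to_retain: timedelta) -> datetime:
    # Unused segments whose data interval ends before this cutoff
    # are eligible for permanent deletion.
    return now - duration_to_retain

now = datetime(2024, 6, 1, tzinfo=timezone.utc)

# Default P90D: a segment for a time chunk more than 90 days old
# is eligible as soon as it is marked unused.
print(kill_cutoff(now, timedelta(days=90)).date())  # 2024-03-03

# A negative period puts the cutoff in the future, so every unused
# segment's interval ends before it.
print(kill_cutoff(now, timedelta(days=-30)) > now)  # True
```

Note how the comparison is against the segment interval, not the time the segment was marked unused, which is why old segments can be deleted immediately after being marked unused.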

Review Comment:
   Updated



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

