adarshsanjeev commented on code in PR #17775:
URL: https://github.com/apache/druid/pull/17775#discussion_r2004958595


##########
docs/configuration/index.md:
##########
@@ -885,7 +885,7 @@ These Coordinator static configurations can be defined in 
the `coordinator/runti
 |`druid.coordinator.kill.maxInterval`|The largest interval, as an [ISO 8601 
duration](https://en.wikipedia.org/wiki/ISO_8601#Durations), of segments to 
delete per kill task. Set to zero, e.g. `PT0S`, for unlimited. This only 
applies when `druid.coordinator.kill.on=true`.|`P30D`|
 |`druid.coordinator.balancer.strategy`|The [balancing 
strategy](../design/coordinator.md#balancing-segments-in-a-tier) used by the 
Coordinator to distribute segments among the Historical servers in a tier. The 
`cost` strategy distributes segments by minimizing a cost function, 
`diskNormalized` weights these costs with the disk usage ratios of the servers 
and `random` distributes segments randomly.|`cost`|
 |`druid.coordinator.loadqueuepeon.http.repeatDelay`|The start and repeat delay 
(in milliseconds) for the load queue peon, which manages the load/drop queue of 
segments for any server.|1 minute|
-|`druid.coordinator.loadqueuepeon.http.batchSize`|Number of segment load/drop 
requests to batch in one HTTP request. Note that it must be smaller than 
`druid.segmentCache.numLoadingThreads` config on Historical service.|1|
+|`druid.coordinator.loadqueuepeon.http.batchSize`|Number of segment load/drop 
requests to batch in one HTTP request. Note that it must be smaller than the 
`druid.segmentCache.numLoadingThreads` config on the Historical service. If no 
value is provided, it defaults to the value of 
`druid.segmentCache.numLoadingThreads` on the Historical. | 
`druid.segmentCache.numLoadingThreads` |

Review Comment:
   Changed
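
For reference, a minimal Coordinator `runtime.properties` sketch that combines the settings quoted in the diff above (the specific values are illustrative assumptions, not recommendations):

```properties
# Enable periodic kill tasks; cap each task at 30 days of segments (the default).
druid.coordinator.kill.on=true
druid.coordinator.kill.maxInterval=P30D

# Distribute segments across Historicals in a tier using the default cost strategy.
druid.coordinator.balancer.strategy=cost

# Batch several segment load/drop requests per HTTP call. Per the docs change
# above, when unset this defaults to druid.segmentCache.numLoadingThreads on the
# Historical, and it must be smaller than that value when set explicitly.
druid.coordinator.loadqueuepeon.http.batchSize=4
```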



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

