capistrant opened a new issue, #18150:
URL: https://github.com/apache/druid/issues/18150

   ### Affected Version
   
   33
   
   ### Description
   
   #17674 added the AWS Transfer Manager for S3 segment uploads. It is enabled by 
default and pushes segments that exceed the threshold size in multiple parts.
   
   In practice, this config generally works fine with the default configuration 
when enabled. However, there are cases where we are getting throttled by S3. 
   
   In instances where we see the problem, we suspect that cold S3 prefixes 
(created for new segments) cause early throttling when Druid immediately starts 
pushing a multi-part segment. Once throttling occurs, index tasks often (maybe 
always?) fail due to S3 push failures.
   * S3 supports up to 3,500 write requests per second per prefix, but it has to 
scale up to that threshold and will throttle sooner while a new prefix is 
warming up. Since Druid creates a new prefix for every segment, we are always 
writing to a cold prefix and at risk of early throttling when a multi-part 
segment push begins.
   
   Example Log snippet: `Error: com.amazonaws.SdkClientException: Unable to 
complete multi-part upload. Individual part upload failed: Please reduce your 
request rate. (Service: Amazon S3; Status Code: 503; Error Code: SlowDown; 
Request ID:`
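
   One common client-side mitigation for `SlowDown` (503) responses while a 
prefix warms up is retrying each part upload with capped exponential backoff 
plus jitter. Below is a minimal, self-contained sketch of such a backoff 
schedule; the class name, base delay, cap, and attempt limit are illustrative 
assumptions, not Druid's or the AWS SDK's actual retry policy:

   ```java
   import java.util.concurrent.ThreadLocalRandom;

   public class SlowDownBackoff {
       private static final long BASE_MILLIS = 500L;   // assumed initial delay
       private static final long CAP_MILLIS = 30_000L; // assumed max delay

       // Deterministic exponential delay: BASE * 2^attempt, capped.
       static long cappedDelayMillis(int attempt) {
           long shifted = BASE_MILLIS << Math.min(attempt, 20); // avoid overflow
           return Math.min(shifted, CAP_MILLIS);
       }

       // "Full jitter" variant: sleep a uniform random time in [0, cappedDelay],
       // which spreads retries from many concurrent index tasks.
       static long jitteredDelayMillis(int attempt) {
           return ThreadLocalRandom.current().nextLong(cappedDelayMillis(attempt) + 1);
       }

       public static void main(String[] args) {
           for (int attempt = 0; attempt < 7; attempt++) {
               System.out.println("attempt " + attempt + " -> up to "
                   + cappedDelayMillis(attempt) + " ms");
           }
       }
   }
   ```

   With this schedule, attempt 0 waits up to 500 ms and attempt 6 hits the 
30 s cap, which gives a cold prefix time to scale before the task gives up.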


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
