qingfei1994 opened a new issue, #6825:
URL: https://github.com/apache/paimon/issues/6825

   ### Search before asking
   
   - [x] I searched in the [issues](https://github.com/apache/paimon/issues) 
and found nothing similar.
   
   
   ### Motivation
   
   When the Hadoop S3 filesystem uploads a file larger than 128 MB, it 
switches from a single put-object request to a multipart upload, which can 
sometimes get stuck. This is especially common during full compaction, when 
Paimon uploads many files to object storage and gets throttled.
   Iceberg provides an option, `write.object-storage.enabled`, that adds a 
computed hash component to the data path to avoid throttling:
   
https://iceberg.apache.org/docs/nightly/docs/configuration/?h=write.object+storage.enabled#write-properties
   It would be better if Paimon also provided the same functionality.
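   
   For illustration, a minimal sketch of the technique (not Paimon or Iceberg 
code; the helper names and path layout here are hypothetical): hash the data 
file name and inject a short hex component into the object key, so that files 
spread across many S3 key prefixes instead of concentrating on one hot prefix.
   
   ```java
   import java.nio.charset.StandardCharsets;
   import java.util.zip.CRC32;
   
   /**
    * Sketch: inject a hash component into an object-store path so data
    * files spread across S3 key-prefix partitions. Names and layout are
    * illustrative assumptions, not an actual Paimon API.
    */
   public class HashedPathSketch {
   
       // Hypothetical helper: derive a short hex hash from the file name.
       static String hashComponent(String fileName) {
           CRC32 crc = new CRC32();
           crc.update(fileName.getBytes(StandardCharsets.UTF_8));
           // Keep 4 hex chars (~16 bits of entropy) for the key prefix.
           return String.format("%04x", crc.getValue() & 0xFFFF);
       }
   
       // Hypothetical layout: <warehouse>/<hash>/<table-relative path>.
       static String objectStoragePath(String warehouse, String relativePath) {
           String fileName = relativePath.substring(relativePath.lastIndexOf('/') + 1);
           return warehouse + "/" + hashComponent(fileName) + "/" + relativePath;
       }
   
       public static void main(String[] args) {
           System.out.println(
               objectStoragePath("s3://bucket/warehouse", "db/tbl/bucket-0/data-0.parquet"));
           // e.g. s3://bucket/warehouse/1a2b/db/tbl/bucket-0/data-0.parquet
       }
   }
   ```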
   
   ### Solution
   
   _No response_
   
   ### Anything else?
   
   _No response_
   
   ### Are you willing to submit a PR?
   
   - [x] I'm willing to submit a PR!
