jerqi commented on issue #378:
URL: https://github.com/apache/incubator-uniffle/issues/378#issuecomment-1343762784

   > > Maybe we could introduce multi-threaded writing to HDFS. If the file is 
too big, we could split it into multiple files.
   > 
   > Yes. The key problem is the low write speed of a single data file.
   > 
   > > ByteDance CSS has a similar concept. If a file exceeds the size limit, 
we open and write another file.
   > 
   > Let me take a look. But I think writing another file is not a good 
solution, since it won't improve write concurrency for multiple events of the 
same partition.
   
   I mean that we can write multiple files at the same time.
   We keep multiple file locks. A writer calls tryLock on one of them: if 
tryLock succeeds, it writes to that file; if it fails, it retries with 
another file's lock.
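   A minimal sketch of that tryLock scheme, assuming one lock per data file 
(the class and method names below are illustrative, not actual Uniffle APIs; 
`StringBuilder` stands in for an HDFS output stream):

```java
import java.util.concurrent.locks.ReentrantLock;

// Sketch: a partition's data is spread over several files, each guarded by
// its own lock. A writer tries locks in turn and writes to the first file
// whose lock it acquires, so concurrent writers for the same partition can
// proceed on different files instead of queueing on one.
public class MultiFileWriter {
    private final ReentrantLock[] locks;
    private final StringBuilder[] files; // stand-ins for HDFS data files

    public MultiFileWriter(int numFiles) {
        locks = new ReentrantLock[numFiles];
        files = new StringBuilder[numFiles];
        for (int i = 0; i < numFiles; i++) {
            locks[i] = new ReentrantLock();
            files[i] = new StringBuilder();
        }
    }

    /** Try each file's lock; write to the first one acquired. */
    public int pickFileAndWrite(String data) {
        while (true) {
            for (int i = 0; i < locks.length; i++) {
                if (locks[i].tryLock()) {
                    try {
                        files[i].append(data);
                        return i; // index of the file that was written
                    } finally {
                        locks[i].unlock();
                    }
                }
                // tryLock failed: another thread holds this file's lock,
                // so fall through and try the next file's lock instead.
            }
        }
    }

    public String fileContent(int i) {
        return files[i].toString();
    }
}
```

   With a single writer the first lock is always free, so everything lands in 
file 0; under contention, writers spread across the other files instead of 
blocking.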


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
