zuston commented on issue #955:
URL: 
https://github.com/apache/incubator-uniffle/issues/955#issuecomment-1595740356

   > we have some jobs that shuffle almost 150TB of data. 
   
   Do you mean that 150TB is the total shuffle data size for the stage, or that it is per partition?
   
   If it's the former, I think the total capacity — the tolerable disk capacity of a single shuffle server × the number of shuffle servers — must be large enough to hold the 150TB.
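   The capacity check above can be sketched as a quick back-of-the-envelope calculation. This is just an illustration, not actual Uniffle logic, and the 10TB per-server figure is a made-up example value:

   ```python
   # Hypothetical capacity check; function name and numbers are illustrative,
   # not real Uniffle configuration values.
   import math

   def servers_needed(total_shuffle_bytes: int, per_server_capacity_bytes: int) -> int:
       """Minimum number of shuffle servers whose combined disk
       capacity can hold the whole stage's shuffle data."""
       return math.ceil(total_shuffle_bytes / per_server_capacity_bytes)

   TB = 1024 ** 4
   # e.g. 150TB of stage shuffle data, 10TB of usable disk per shuffle server
   print(servers_needed(150 * TB, 10 * TB))  # 15 servers
   ```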
   
   And if it's the latter, jerqi's suggestion is useful, since it helps hold a huge amount of data for a single partition. That problem has been solved in 
https://github.com/apache/incubator-uniffle/issues/378 
   
   > One could argue that the job needs to be re-written but as a platform we 
mostly have no control over when the job gets fixed to reduce the shuffle
   
   +1. I feel the same.
   
   

