baohe-zhang opened a new pull request #29425:
URL: https://github.com/apache/spark/pull/29425


   ### What changes were proposed in this pull request?
   This is a follow-up to https://github.com/apache/spark/pull/29149. When 
writing a large value list, we now divide it into smaller batches of 128 values 
and bulk-write these batches one by one. This reduces the memory pressure 
caused by serialization and improves fairness toward other writing threads. The 
idea was proposed by @mridulm in 
https://github.com/apache/spark/pull/29149#issuecomment-671551444.
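   A minimal sketch of the batching idea described above. The `bulkWrite` 
helper and the `BatchedWriter` class are hypothetical stand-ins (not the actual 
HybridStore API); the point is only to illustrate splitting one large write 
into fixed-size batches:

   ```java
   import java.util.List;

   public class BatchedWriter {
       // Batch size used in this sketch; the PR divides lists into 128-value batches.
       static final int BATCH_SIZE = 128;

       // Hypothetical bulk write: in the real change this would be the store's
       // writeAll-style call that serializes a whole batch at once.
       static int batchesWritten = 0;
       static void bulkWrite(List<Object> batch) {
           batchesWritten++;
       }

       // Divide a large value list into BATCH_SIZE-sized chunks and write them
       // one by one, so no single serialization holds a huge buffer in memory
       // and other writer threads get a chance to run between batches.
       static void writeInBatches(List<Object> values) {
           for (int start = 0; start < values.size(); start += BATCH_SIZE) {
               int end = Math.min(start + BATCH_SIZE, values.size());
               bulkWrite(values.subList(start, end));
           }
       }
   }
   ```

   Writing, say, 300 values this way issues three bulk writes (128 + 128 + 44) 
instead of one large write.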
   
   
   ### Why are the changes needed?
   To reduce memory pressure and alleviate the fairness issue when 
batch-writing a very long value list.
   
   
   ### Does this PR introduce _any_ user-facing change?
   No.
   
   
   ### How was this patch tested?
   Manual test.
   
   I measured the HybridStore switching time for different event logs on an 
HDD after the change. It took almost the same amount of time as before.
   
   | Log  | Size   | Jobs | Tasks per job | Original | Now  |
   |------|--------|------|---------------|----------|------|
   | Log1 | 1.3 GB | 1000 | 400           | 108s     | 107s |
   | Log2 | 265 MB | 400  | 200           | 23s      | 25s  |
   | Log3 | 133 MB | 400  | 100           | 13s      | 12s  |
   
   


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



---------------------------------------------------------------------
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org
