ad1happy2go commented on issue #10979:
URL: https://github.com/apache/hudi/issues/10979#issuecomment-2049348688

   @wkhappy1 This is a bit odd; it's not clear why insert_overwrite_table would cache more data than bulk_insert. In both cases the same amount of data is processed; insert_overwrite_table additionally performs small-file handling. If possible, can you post a Spark UI screenshot so we can look into this further?
   
   With Hudi 0.14.1, you can set 'hoodie.bulkinsert.overwrite.operation.type' to INSERT_OVERWRITE_TABLE and use it together with the bulk_insert operation.
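
   A minimal sketch of what that could look like with the Spark DataFrame writer. The operation and overwrite-type options follow the suggestion above; the record key, precombine field, table name, and path are illustrative assumptions, not taken from this issue:

   ```scala
   // Sketch: bulk_insert that overwrites the whole table (assumed Hudi 0.14.1+).
   df.write.format("hudi").
     // write with bulk_insert for ingest performance
     option("hoodie.datasource.write.operation", "bulk_insert").
     // ask Hudi to treat the bulk insert as an insert-overwrite of the table
     option("hoodie.bulkinsert.overwrite.operation.type", "INSERT_OVERWRITE_TABLE").
     option("hoodie.datasource.write.recordkey.field", "id").    // assumed key field
     option("hoodie.datasource.write.precombine.field", "ts").   // assumed precombine field
     option("hoodie.table.name", "my_table").                    // illustrative table name
     mode("append").
     save("/tmp/hudi/my_table")                                  // illustrative base path
   ```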

