Hi everyone,

from the tuning guide:

> Off-heap memory : Hudi writes parquet files and that needs good
> amount of off-heap memory proportional to schema width. Consider
> setting something like spark.executor.memoryOverhead or
> spark.driver.memoryOverhead, if you are running into such failures.
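
For reference, this is roughly how I understand those settings would be
applied when launching the write job (just a sketch on my part; the
overhead values below are placeholders, not recommendations):

    // minimal sketch: setting the overhead configs on the Spark session
    // that performs the Hudi write; "3g" / "1g" are illustrative only
    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder()
      .appName("hudi-write")
      .config("spark.executor.memoryOverhead", "3g") // off-heap headroom per executor
      .config("spark.driver.memoryOverhead", "1g")   // off-heap headroom for the driver
      .getOrCreate()

(or the equivalent --conf flags on spark-submit, since these need to be
set before the executors are allocated)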


Can you elaborate on whether this off-heap usage is specific to Hudi
when writing parquet files, or whether it is general Parquet writer
behavior? Any details on this would help.

Thanks a lot
