bvaradar commented on issue #1939:
URL: https://github.com/apache/hudi/issues/1939#issuecomment-691739964
@RajasekarSribalan : Please reopen if you still have any questions.
Thanks,
Balaji.V
bvaradar commented on issue #1939:
URL: https://github.com/apache/hudi/issues/1939#issuecomment-678220122
Sorry for the delay in responding; here is the default storage level
config I am seeing:
private static final String WRITE_STATUS_STORAGE_LEVEL = "hoodie.write.status.storage.level";
private static final String DEFAULT_WRITE_STATUS_STORAGE_LEVEL = "MEMORY_AND_DISK_SER";
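For reference, a minimal sketch of how that storage level could be overridden through the Hudi writer options (the table name, key field, and chosen level are illustrative assumptions, not values from this thread):

```python
# Sketch: overriding the write-status storage level via Hudi writer options.
# Table name and record key field below are hypothetical.
hudi_options = {
    "hoodie.table.name": "my_table",                  # assumed table name
    "hoodie.datasource.write.recordkey.field": "id",  # assumed key field
    # Default is MEMORY_AND_DISK_SER; a less memory-hungry level such as
    # DISK_ONLY may help when executors are running out of memory.
    "hoodie.write.status.storage.level": "DISK_ONLY",
}

# With a SparkSession in scope, the options would be applied roughly as:
# df.write.format("hudi").options(**hudi_options).mode("append").save("/tmp/my_table")
```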
bvaradar commented on issue #1939:
URL: https://github.com/apache/hudi/issues/1939#issuecomment-671690639
To understand: are you using bulk insert for the initial load and upsert for
subsequent operations?
For records with LOBs, it is important to tune
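The bulk-insert-then-upsert pattern asked about above can be sketched as follows (field names and options are illustrative assumptions; a real job would merge these into the full writer config):

```python
# Sketch: bulk_insert for the initial load, upsert for later writes.
# Table name, record key, and precombine field are hypothetical.
base_options = {
    "hoodie.table.name": "my_table",
    "hoodie.datasource.write.recordkey.field": "id",
    "hoodie.datasource.write.precombine.field": "ts",
}

# Initial load: bulk_insert skips the per-record index lookup that upsert
# performs, so it scales better for a large first write.
initial_load = dict(base_options,
                    **{"hoodie.datasource.write.operation": "bulk_insert"})

# Subsequent incremental writes: upsert updates existing keys in place.
incremental = dict(base_options,
                   **{"hoodie.datasource.write.operation": "upsert"})

# With a SparkSession:
# df.write.format("hudi").options(**initial_load).mode("overwrite").save(path)
# df.write.format("hudi").options(**incremental).mode("append").save(path)
```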
bvaradar commented on issue #1939:
URL: https://github.com/apache/hudi/issues/1939#issuecomment-671079032
Regarding the OOM errors, please check which Spark stage is causing the
failure. You might need to tune parallelism for that stage. The size of the
parquet files should not be the issue.
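As a sketch of the parallelism tuning suggested above (the value 500 is purely illustrative; the right number depends on data volume and executor memory, and these keys would be merged with the usual table options before the write):

```python
# Sketch: raising Hudi shuffle parallelism for the failing write stage.
# 500 is an illustrative value, not a recommendation from this thread.
tuning_options = {
    "hoodie.upsert.shuffle.parallelism": "500",
    "hoodie.insert.shuffle.parallelism": "500",
    "hoodie.bulkinsert.shuffle.parallelism": "500",
}

# Applied together with the table options on the writer, e.g.:
# df.write.format("hudi").options(**table_options).options(**tuning_options)...
```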