subash-metica commented on issue #7057:
URL: https://github.com/apache/hudi/issues/7057#issuecomment-1973269182
@cbomgit - What Hudi version are you using?
Unfortunately, I had to stop using multi-writer for now to circumvent
this problem. I am not sure whether the issue still exists in 0.14.
cbomgit commented on issue #7057:
URL: https://github.com/apache/hudi/issues/7057#issuecomment-1973206032
Any update on a root cause or fix? We are suddenly facing a similar issue.
We use multi-writer with OCC; each writer writes to distinct partitions
and uses insert_overwrite.
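For context, a multi-writer OCC setup with insert_overwrite like the one described above is typically configured with writer options along these lines. This is a minimal sketch, not the reporter's actual configuration: the table name, partition field, lock provider, and ZooKeeper address are all assumptions.

```python
# Hedged sketch of a Hudi multi-writer OCC configuration using
# insert_overwrite, similar to the setup described in this thread.
# Table name, partition field, and lock settings are illustrative.
hudi_options = {
    "hoodie.table.name": "example_table",                     # assumed name
    "hoodie.datasource.write.operation": "insert_overwrite",  # per this thread
    "hoodie.datasource.write.partitionpath.field": "dt",      # assumed field
    # Optimistic concurrency control for multiple concurrent writers:
    "hoodie.write.concurrency.mode": "optimistic_concurrency_control",
    "hoodie.cleaner.policy.failed.writes": "LAZY",
    "hoodie.write.lock.provider":
        "org.apache.hudi.client.transaction.lock.ZookeeperBasedLockProvider",
    # Assumed ZooKeeper coordinates for the lock provider:
    "hoodie.write.lock.zookeeper.url": "zk-host",
    "hoodie.write.lock.zookeeper.port": "2181",
    "hoodie.write.lock.zookeeper.lock_key": "example_table",
    "hoodie.write.lock.zookeeper.base_path": "/hudi/locks",
}
```

These options would be passed via `.options(**hudi_options)` on a Spark DataFrame write to a Hudi table.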
--
KnightChess commented on issue #7057:
URL: https://github.com/apache/hudi/issues/7057#issuecomment-1892033786
The internal workaround we use for this issue is quite aggressive: we
directly catch and handle the exceptions during the read path. We are
now on 0.13.1.
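The aggressive workaround described above (catch exceptions on the read path and retry) could be sketched roughly as follows. This is an illustration under stated assumptions, not Hudi's or the commenter's actual code: `read_with_retry` and `read_fn` are hypothetical names, and it assumes the failure is transient (e.g. a file replaced by a concurrent insert_overwrite commit).

```python
import time

def read_with_retry(read_fn, max_attempts=3, backoff_s=5):
    """Hedged sketch of a catch-and-retry read wrapper.

    read_fn is any zero-argument callable performing the read (for
    example, a lambda wrapping a Spark/Hudi query). If it raises, we
    retry up to max_attempts times, sleeping backoff_s seconds between
    attempts, and re-raise on the final failure.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return read_fn()
        except Exception:  # in practice, narrow this to the observed error
            if attempt == max_attempts:
                raise
            time.sleep(backoff_s)
```

Narrowing the `except` clause to the specific exception seen in the stack traces (rather than `Exception`) would make this far less risky in production.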
--
subash-metica commented on issue #7057:
URL: https://github.com/apache/hudi/issues/7057#issuecomment-1878374530
Hi,
I am facing the issue again; it happens for random instances, and I could
not see a common pattern.
Hudi version: 0.13.1
The error stack trace:
subash-metica commented on issue #7057:
URL: https://github.com/apache/hudi/issues/7057#issuecomment-1859967411
Hi @KnightChess, @xushiyan and @jjtjiang - have you found a fix or a
workaround for this issue?
--
jjtjiang commented on issue #7057:
URL: https://github.com/apache/hudi/issues/7057#issuecomment-1846470553
@ad1happy2go
I also face this problem.
Version: Hudi 0.12.3
How to reproduce the issue: just run an INSERT OVERWRITE SQL statement
when inserting a big table.
Here is my
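The reproduction described above can be sketched as follows. The table names are illustrative, not the reporter's; the key point is an INSERT OVERWRITE into a Hudi table from a sufficiently large source table, which is where the reporter hit the issue on Hudi 0.12.3.

```python
# Hedged reproduction sketch: an INSERT OVERWRITE from a large source
# table into a Hudi table. Table names are assumptions for illustration.
repro_sql = """
INSERT OVERWRITE TABLE hudi_target_table
SELECT * FROM big_source_table
"""

# With a live SparkSession configured for Hudi (KryoSerializer plus the
# HoodieSparkSessionExtension), this would be executed as:
#   spark.sql(repro_sql)
print(repro_sql.strip())
```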