Great!

No, we used Hudi 0.12.2 and Hudi 0.13.0, and neither includes this fix. We
will test with Hudi 0.13.1.
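For anyone following along: the growth described below is consistent with how
Hadoop's Configuration only ever appends to its internal resource list when
addResource is called. A minimal self-contained sketch (using a simplified
stand-in class, not Hadoop's real Configuration or HiveConf) of how a
long-lived conf can accumulate 1000+ entries when resources are added per
query:

```java
import java.util.ArrayList;
import java.util.List;

// Simplified stand-in for Hadoop's Configuration: addResource()
// only appends; there is no de-duplication or eviction.
class Conf {
    private final List<String> resources = new ArrayList<>();

    Conf() {}

    // Copy constructor, analogous to wrapping an existing Configuration:
    // the resource list is inherited from the parent conf.
    Conf(Conf other) {
        this.resources.addAll(other.resources);
    }

    void addResource(String name) {
        resources.add(name); // grows without bound on a shared instance
    }

    int resourceCount() {
        return resources.size();
    }
}

public class ConfLeakSketch {
    public static void main(String[] args) {
        // Hypothetical long-lived conf held by a coordinator-like object.
        Conf shared = new Conf();
        for (int i = 0; i < 500; i++) {
            // If each sync adds its resources to the *shared* conf instead
            // of a short-lived copy, the list grows a few entries per query...
            shared.addResource("hive-site.xml");
            shared.addResource("core-site.xml");
        }
        // ...and after enough commits the shared conf holds 1000+ entries.
        System.out.println(shared.resourceCount()); // 1000
    }
}
```

This only illustrates the append-only pattern; whether the shared conf in
this case is held by HiveSyncContext or elsewhere would need confirmation
from a heap dump.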



On Thu, Nov 9, 2023 at 12:42 PM Danny Chan <[email protected]> wrote:

> Hi, did your local repo already include this fix:
> https://github.com/apache/hudi/pull/8050 ?
>
> Best,
> Danny
>
> Prabhu Joseph <[email protected]> wrote on Thu, Nov 9, 2023 at 13:25:
> >
> > Hi!
> >
> >
> > One of our users' Hudi Flink apps fails with an OOM. Our stack has
> > Hudi 0.12.2, Flink 1.16, and Hadoop 3.3.3.
> >
> >
> > 2023-10-08 03:11:46,425 ERROR org.apache.hudi.sink.StreamWriteOperatorCoordinator [] - Executor executes action [commits the instant 20231008031058188] error
> > java.lang.OutOfMemoryError: GC overhead limit exceeded
> >
> >
> > Eclipse MAT screenshots are attached. HiveSyncContext holds a HiveConf
> > (which extends Hadoop's Configuration) containing more than 1,000
> > resource objects in its resource list, as shown in the screenshots. I
> > have analysed HiveSyncContext and can see that a HiveConf is created
> > for every SQL query submitted and holds a maximum of only three
> > resource objects, so I'm not sure how it grows above 1,000 in my
> > user's case.
> >
> >
> >
> > Any known issue or any pointers on where this leak could be coming from?
> >
> >
> > Thanks,
> >
> > Prabhu Joseph
> >
> >
> > ---------------------------------------------------------------------
> > To unsubscribe, e-mail: [email protected]
> > For additional commands, e-mail: [email protected]
>
