Re: java.lang.OutOfMemoryError: GC overhead limit exceeded

2021-03-08 Thread bat man
The Java options should not have the double quotes; that was the issue. I
was able to generate the heap dump, and based on the dump I have made some
changes in the code to fix the issue.

This worked:

env.java.opts: -XX:+HeapDumpOnOutOfMemoryError
-XX:HeapDumpPath=/tmp/dump.hprof

Thanks.
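As a side note, the same HotSpot facility that `-XX:+HeapDumpOnOutOfMemoryError` relies on can also be driven manually through `com.sun.management.HotSpotDiagnosticMXBean`, which is handy for verifying the dump path works before a real OOM hits. A minimal HotSpot-only sketch (the class name and temp-file naming are illustrative, not from this thread):

```java
import com.sun.management.HotSpotDiagnosticMXBean;
import java.lang.management.ManagementFactory;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class ManualHeapDump {
    public static void main(String[] args) throws Exception {
        // dumpHeap refuses to overwrite an existing file, so pick a fresh path.
        // Recent JDKs also require the ".hprof" extension.
        Path dump = Paths.get(System.getProperty("java.io.tmpdir"),
                "manual-dump-" + System.nanoTime() + ".hprof");
        HotSpotDiagnosticMXBean bean =
                ManagementFactory.getPlatformMXBean(HotSpotDiagnosticMXBean.class);
        // 'true' dumps only live objects (forces a full GC first).
        bean.dumpHeap(dump.toString(), true);
        System.out.println("dump exists: " + Files.exists(dump));
        Files.deleteIfExists(dump);
    }
}
```

The resulting `.hprof` file can be opened with the same tools (Eclipse MAT, VisualVM) as one produced on OOM.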



Re: java.lang.OutOfMemoryError: GC overhead limit exceeded

2021-03-07 Thread Xintong Song
Hi Hemant,
I don't see any problem in your settings. Any exceptions suggesting why TM
containers are not coming up?

Thank you~

Xintong Song



Re: java.lang.OutOfMemoryError: GC overhead limit exceeded

2021-03-05 Thread bat man
Hi Xintong Song,
I tried using the Java options to generate a heap dump, referring to the
docs [1], in flink-conf.yaml; however, after adding this the task manager
containers are not coming up. Note that I am using EMR. Am I doing anything
wrong here?

env.java.opts: "-XX:+HeapDumpOnOutOfMemoryError
-XX:HeapDumpPath=/tmp/dump.hprof"

Thanks,
Hemant



Re: java.lang.OutOfMemoryError: GC overhead limit exceeded

2021-03-05 Thread Xintong Song
Hi Hemant,

This exception generally suggests that the JVM is running out of heap memory.
Per the official documentation [1], it means the amount of live data barely
fits into the Java heap, leaving little free space for new allocations.

You can try to increase the heap size following this guide [2].

If a memory leak is suspected, you may need to dump the heap on OOM and look
for unexpected memory usage with profiling tools to understand where the
memory is consumed.

Thank you~

Xintong Song


[1]
https://docs.oracle.com/javase/8/docs/technotes/guides/troubleshoot/memleaks002.html

[2]
https://ci.apache.org/projects/flink/flink-docs-release-1.12/deployment/memory/mem_setup.html
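For reference, increasing the heap per [2] boils down to a few flink-conf.yaml entries. A sketch with hypothetical sizes (tune them to your containers; these keys apply to Flink 1.10+, older versions use `taskmanager.heap.size` instead):

```yaml
# Total memory of the TaskManager process/container (hypothetical size).
taskmanager.memory.process.size: 4096m
# Optionally pin the task heap explicitly instead of letting Flink derive it.
taskmanager.memory.task.heap.size: 2048m
# JobManager total process memory.
jobmanager.memory.process.size: 2048m
```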



java.lang.OutOfMemoryError: GC overhead limit exceeded

2021-03-05 Thread bat man
Hi,

I am getting the OOM below; the job failed 4-5 times and then recovered.

java.lang.Exception: java.lang.OutOfMemoryError: GC overhead limit exceeded
    at org.apache.flink.streaming.runtime.tasks.SourceStreamTask$LegacySourceFunctionThread.checkThrowSourceExecutionException(SourceStreamTask.java:212)
    at org.apache.flink.streaming.runtime.tasks.SourceStreamTask.performDefaultAction(SourceStreamTask.java:132)
    at org.apache.flink.streaming.runtime.tasks.StreamTask.run(StreamTask.java:298)
    at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:403)
    at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:705)
    at org.apache.flink.runtime.taskmanager.Task.run(Task.java:530)
    at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.OutOfMemoryError: GC overhead limit exceeded

Is there any way I can debug this? Since the job started running fine after
a few restarts, what could be the reason behind it?

Thanks,
Hemant
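For context, HotSpot throws "GC overhead limit exceeded" when the JVM is spending the vast majority of its time in garbage collection (by default, roughly more than 98% of time while reclaiming less than 2% of the heap). One cheap way to watch for that condition from inside a JVM, using only the standard library, is the GC MXBeans. A sketch (the class name is illustrative):

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class GcPressureProbe {
    public static void main(String[] args) {
        long totalGcMs = 0;
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            // Collection count/time may be -1 if the collector does not report them.
            System.out.printf("%s: count=%d timeMs=%d%n",
                    gc.getName(), gc.getCollectionCount(), gc.getCollectionTime());
            totalGcMs += Math.max(0, gc.getCollectionTime());
        }
        long uptimeMs = ManagementFactory.getRuntimeMXBean().getUptime();
        double gcShare = uptimeMs > 0 ? (double) totalGcMs / uptimeMs : 0.0;
        // A share creeping toward 1.0 means the JVM does little besides GC.
        System.out.printf("GC share of uptime: %.1f%%%n", gcShare * 100);
    }
}
```

Logging this periodically (or watching the equivalent Flink TM metrics) shows whether GC pressure builds up gradually before the OOM or spikes suddenly, which narrows down whether it is a slow leak or a load burst.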


Re: Flink SQL hangs in StreamTableEnvironment.sqlUpdate, keeps executing and seems never stop, finally lead to java.lang.OutOfMemoryError: GC overhead limit exceeded

2019-04-08 Thread 徐涛
Hi Fabian,
No, I did not add any optimization rules.
I have created two JIRAs about this issue, because when I modify the
SQL a little, the error turns into a StackOverflowError instead:
https://issues.apache.org/jira/browse/FLINK-12097
https://issues.apache.org/jira/browse/FLINK-12109

The SQL is quite long (about 2,000 lines); some of it is written in a DSL
I defined, but most of it is the same as Flink SQL, so I put it in the
JIRA attachment.
Kindly take a look, thanks a lot.

Best,
Henry


Re: Flink SQL hangs in StreamTableEnvironment.sqlUpdate, keeps executing and seems never stop, finally lead to java.lang.OutOfMemoryError: GC overhead limit exceeded

2019-04-08 Thread Fabian Hueske
Hi Henry,

It seems that the optimizer is not handling this case well.
The search space might be too large (or rather, the optimizer explores too
much of the search space).
Can you share the query? Did you add any optimization rules?

Best, Fabian


Flink SQL hangs in StreamTableEnvironment.sqlUpdate, keeps executing and seems never stop, finally lead to java.lang.OutOfMemoryError: GC overhead limit exceeded

2019-04-03 Thread 徐涛
Hi Experts,
There is a Flink application (version 1.7.2) written in Flink SQL. The SQL
in the application is quite long, consisting of about 10 tables and 1,500
lines in total. When executing, I found that it hangs in
StreamTableEnvironment.sqlUpdate, keeps executing some Calcite code, and the
memory usage keeps growing; after several minutes,
java.lang.OutOfMemoryError: GC overhead limit exceeded is thrown.

I got some thread dumps:

    at org.apache.calcite.plan.volcano.RuleQueue.popMatch(RuleQueue.java:475)
    at org.apache.calcite.plan.volcano.VolcanoPlanner.findBestExp(VolcanoPlanner.java:640)
    at org.apache.calcite.tools.Programs$RuleSetProgram.run(Programs.java:339)
    at org.apache.flink.table.api.TableEnvironment.runVolcanoPlanner(TableEnvironment.scala:373)
    at org.apache.flink.table.api.TableEnvironment.optimizeLogicalPlan(TableEnvironment.scala:292)
    at org.apache.flink.table.api.StreamTableEnvironment.optimize(StreamTableEnvironment.scala:812)
    at org.apache.flink.table.api.StreamTableEnvironment.translate(StreamTableEnvironment.scala:860)
    at org.apache.flink.table.api.StreamTableEnvironment.writeToSink(StreamTableEnvironment.scala:344)
    at org.apache.flink.table.api.TableEnvironment.insertInto(TableEnvironment.scala:879)
    at org.apache.flink.table.api.TableEnvironment.sqlUpdate(TableEnvironment.scala:817)
    at org.apache.flink.table.api.TableEnvironment.sqlUpdate(TableEnvironment.scala:777)

    at java.io.PrintWriter.write(PrintWriter.java:473)
    at org.apache.calcite.rel.AbstractRelNode$1.explain_(AbstractRelNode.java:415)
    at org.apache.calcite.rel.externalize.RelWriterImpl.done(RelWriterImpl.java:156)
    at org.apache.calcite.rel.AbstractRelNode.explain(AbstractRelNode.java:312)
    at org.apache.calcite.rel.AbstractRelNode.computeDigest(AbstractRelNode.java:420)
    at org.apache.calcite.rel.AbstractRelNode.recomputeDigest(AbstractRelNode.java:356)
    at org.apache.calcite.rel.AbstractRelNode.onRegister(AbstractRelNode.java:350)
    at org.apache.calcite.plan.volcano.VolcanoPlanner.registerImpl(VolcanoPlanner.java:1484)
    at org.apache.calcite.plan.volcano.VolcanoPlanner.register(VolcanoPlanner.java:859)
    at org.apache.calcite.plan.volcano.VolcanoPlanner.ensureRegistered(VolcanoPlanner.java:879)
    at org.apache.calcite.plan.volcano.VolcanoPlanner.ensureRegistered(VolcanoPlanner.java:1755)
    at org.apache.calcite.plan.volcano.VolcanoRuleCall.transformTo(VolcanoRuleCall.java:135)
    at org.apache.calcite.plan.RelOptRuleCall.transformTo(RelOptRuleCall.java:234)
    at org.apache.calcite.rel.convert.ConverterRule.onMatch(ConverterRule.java:141)
    at org.apache.calcite.plan.volcano.VolcanoRuleCall.onMatch(VolcanoRuleCall.java:212)
    at org.apache.calcite.plan.volcano.VolcanoPlanner.findBestExp(VolcanoPlanner.java:646)
    at org.apache.calcite.tools.Programs$RuleSetProgram.run(Programs.java:339)
    at org.apache.flink.table.api.TableEnvironment.runVolcanoPlanner(TableEnvironment.scala:373)
    at org.apache.flink.table.api.TableEnvironment.optimizeLogicalPlan(TableEnvironment.scala:292)
    at org.apache.flink.table.api.StreamTableEnvironment.optimize(StreamTableEnvironment.scala:812)
    at org.apache.flink.table.api.StreamTableEnvironment.translate(StreamTableEnvironment.scala:860)
    at org.apache.flink.table.api.StreamTableEnvironment.writeToSink(StreamTableEnvironment.scala:344)
    at org.apache.flink.table.api.TableEnvironment.insertInto(TableEnvironment.scala:879)
    at org.apache.flink.table.api.TableEnvironment.sqlUpdate(TableEnvironment.scala:817)
    at org.apache.flink.table.api.TableEnvironment.sqlUpdate(TableEnvironment.scala:777)
Both point to Calcite code.

I also got a heap dump and found 5,703,373 RexCall instances, 5,909,525
String instances, and 5,909,692 char[] instances, with a total size of
6.8 GB. I wonder why there are so many RexCall instances, and why it keeps
executing Calcite code and seemingly never stops.

Class                                                                            Instances           Size (bytes)
char[]                                                                           5,909,692 (16.4%)   6,873,430,938 (84.3%)
java.lang.String                                                                 5,909,525 (16.4%)   165,466,700 (2%)
org.apache.calcite.rex.RexLocalRef                                               5,901,313 (16.4%)   259,657,772 (3.2%)
org.apache.flink.calcite.shaded.com.google.common.collect.RegularImmutableList   5,739,479 (15.9%)   229,579,160 (2.8%)
java.lang.Object[]                                                               5,732,702 (15.9%)   279,902,336 (3.4%)
org.apache.calcite.rex.RexCall                                                   5,703,373 (15.8%)   273,761,904 (3.4%)

Look forward to your reply.
Thanks a lot.


Best
Henry
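One common workaround for planner blow-ups on very large statements, which may or may not apply here, is to break the single statement into stages registered as intermediate tables, so the Volcano planner optimizes smaller plans instead of one enormous one. A pseudocode sketch only (not runnable without a Flink cluster; assumes Flink 1.7's Table API, and the stage/table names and elided SELECTs are purely illustrative):

```java
// Sketch: split one huge SQL statement into registered stages.
// Each sqlQuery produces a Table that later stages reference by name,
// which bounds the search space the optimizer explores per statement.
Table stage1 = tableEnv.sqlQuery("SELECT ... FROM sourceA JOIN sourceB ON ...");
tableEnv.registerTable("stage1", stage1);

Table stage2 = tableEnv.sqlQuery("SELECT ... FROM stage1 JOIN sourceC ON ...");
tableEnv.registerTable("stage2", stage2);

tableEnv.sqlUpdate("INSERT INTO sinkTable SELECT ... FROM stage2");
```

Whether this actually reduces planning time depends on where the rule explosion happens, so it is worth measuring against the JIRA reproduction cases above.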