[Question] Why doesn't Flink use Calcite adapter?

2021-06-25 Thread guangyuan wang


I have read the design doc of the Flink planner recently. I've found that
Flink uses Calcite only as a SQL optimizer: it translates the optimized
RelNode into a Flink (or Blink) RelNode, and then into the physical
plan. Why doesn't Flink implement Calcite adapters? Isn't that an easier
way to use Calcite?


The link to the Calcite adapter: calcite.apache.org/docs/adapter.html.


Cancel job error ! Interrupted while waiting for buffer

2021-06-25 Thread SmileSmile
Hi 


I use Flink 1.12.4 on YARN. The job topology is: kafka -> source ->
flatmap -> window 1 min agg -> sink -> kafka. Checkpointing is enabled, with a
checkpoint interval of 20 s. When I cancel my job, some TMs cancel successfully,
while others get stuck in "canceling" and eventually kill themselves via
task.cancellation.timeout = 18. The TM log shows:


org.apache.flink.streaming.runtime.tasks.ExceptionInChainedOperatorException: Could not forward element to next operator
    at org.apache.flink.streaming.runtime.tasks.ChainingOutput.pushToOperator(ChainingOutput.java:114) ~[flink-dist_2.11-1.12.4.jar:1.12.4]
    at org.apache.flink.streaming.runtime.tasks.ChainingOutput.collect(ChainingOutput.java:93) ~[flink-dist_2.11-1.12.4.jar:1.12.4]
    at org.apache.flink.streaming.runtime.tasks.ChainingOutput.collect(ChainingOutput.java:39) ~[flink-dist_2.11-1.12.4.jar:1.12.4]
    at org.apache.flink.streaming.api.operators.CountingOutput.collect(CountingOutput.java:50) ~[flink-dist_2.11-1.12.4.jar:1.12.4]
    at org.apache.flink.streaming.api.operators.CountingOutput.collect(CountingOutput.java:28) ~[flink-dist_2.11-1.12.4.jar:1.12.4]
    at org.apache.flink.streaming.api.operators.TimestampedCollector.collect(TimestampedCollector.java:50) ~[flink-dist_2.11-1.12.4.jar:1.12.4]
    at com.operation.ParseLineOperationForAgg.flatMap(ParseLineOperationForAgg.java:74) [testFlink-1.0.jar:?]
    at com.operation.ParseLineOperationForAgg.flatMap(ParseLineOperationForAgg.java:29) [testFlink-1.0.jar:?]
    at org.apache.flink.streaming.api.operators.StreamFlatMap.processElement(StreamFlatMap.java:47) [flink-dist_2.11-1.12.4.jar:1.12.4]
    at org.apache.flink.streaming.runtime.tasks.ChainingOutput.pushToOperator(ChainingOutput.java:112) [flink-dist_2.11-1.12.4.jar:1.12.4]

Caused by: java.io.IOException: Interrupted while waiting for buffer
    at org.apache.flink.runtime.io.network.partition.BufferWritingResultPartition.requestNewBufferBuilderFromPool(BufferWritingResultPartition.java:341) ~[testFlink-1.0.jar:?]
    at org.apache.flink.runtime.io.network.partition.BufferWritingResultPartition.requestNewUnicastBufferBuilder(BufferWritingResultPartition.java:313) ~[testFlink-1.0.jar:?]
    at org.apache.flink.runtime.io.network.partition.BufferWritingResultPartition.appendUnicastDataForRecordContinuation(BufferWritingResultPartition.java:257) ~[testFlink-1.0.jar:?]
    at org.apache.flink.runtime.io.network.partition.BufferWritingResultPartition.emitRecord(BufferWritingResultPartition.java:149) ~[testFlink-1.0.jar:?]
    at org.apache.flink.runtime.io.network.api.writer.RecordWriter.emit(RecordWriter.java:104) ~[testFlink-1.0.jar:?]
    at org.apache.flink.runtime.io.network.api.writer.ChannelSelectorRecordWriter.emit(ChannelSelectorRecordWriter.java:54) ~[testFlink-1.0.jar:?]
    at org.apache.flink.streaming.runtime.io.RecordWriterOutput.pushToRecordWriter(RecordWriterOutput.java:101) ~[flink-dist_2.11-1.12.4.jar:1.12.4]
    at org.apache.flink.streaming.runtime.io.RecordWriterOutput.collect(RecordWriterOutput.java:87) ~[flink-dist_2.11-1.12.4.jar:1.12.4]
    at org.apache.flink.streaming.runtime.io.RecordWriterOutput.collect(RecordWriterOutput.java:43) ~[flink-dist_2.11-1.12.4.jar:1.12.4]
    at org.apache.flink.streaming.api.operators.CountingOutput.collect(CountingOutput.java:50) ~[flink-dist_2.11-1.12.4.jar:1.12.4]
    at org.apache.flink.streaming.api.operators.CountingOutput.collect(CountingOutput.java:28) ~[flink-dist_2.11-1.12.4.jar:1.12.4]
    at org.apache.flink.streaming.api.operators.TimestampedCollector.collect(TimestampedCollector.java:50) ~[flink-dist_2.11-1.12.4.jar:1.12.4]
    at com.operation.ExtractLineOperationAgg.flatMap(ExtractLineOperationAgg.java:72) ~[testFlink-1.0.jar:?]
    at com.operation.ExtractLineOperationAgg.flatMap(ExtractLineOperationAgg.java:28) ~[testFlink-1.0.jar:?]
    at org.apache.flink.streaming.api.operators.StreamFlatMap.processElement(StreamFlatMap.java:47) ~[flink-dist_2.11-1.12.4.jar:1.12.4]
    at org.apache.flink.streaming.runtime.tasks.ChainingOutput.pushToOperator(ChainingOutput.java:112) ~[flink-dist_2.11-1.12.4.jar:1.12.4]
    ... 32 more




My questions:


1. What can I do to deal with this error?
2. If I cancel the job with a savepoint, will this error affect the savepoint?
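For reference, the cancellation-related settings look like this in flink-conf.yaml (the values shown here are the Flink 1.12 defaults, not a recommendation for this job):

```yaml
# flink-conf.yaml -- cancellation watchdog behaviour (1.12 defaults)
task.cancellation.interval: 30000    # ms between repeated cancellation attempts
task.cancellation.timeout: 180000    # ms until a stuck-canceling TM kills itself (0 disables the watchdog)
```

Regarding question 2: `bin/flink stop --savepointPath <dir> <jobID>` takes the savepoint first and only then shuts the job down, so the savepoint either completes or the stop command fails; a cancellation-time interrupt like the one above should not leave a corrupted savepoint behind.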




Best!



Re: upgrade kafka-schema-registry-client to 6.1.2 for Flink-avro-confluent-registry

2021-06-25 Thread Lian Jiang
Thanks Fabian. [FLINK-23160] upgrade kafka-schema-registry-client to 6.1.2
for Flink-avro-confluent-registry - ASF JIRA (apache.org)
is created. I will create a private build of Flink to try out the fix. If it
goes well, I can contribute it back. Thanks. Regards!

On Fri, Jun 25, 2021 at 2:02 AM Fabian Paul wrote:

> Hi,
>
> Thanks for bringing this up. It looks to me like something we definitely
> want to fix. Unfortunately, I also do not see an easy workaround
> besides building your own flink-avro-confluent-registry and bumping the
> dependency.
>
> Can you create a JIRA ticket for bumping the dependencies, and would you be
> willing to work on this? A few things are still a bit unclear,
> e.g. whether the newer Confluent schema registry versions are compatible with
> our Kafka version (2.4.1).
>
> Best,
> Fabian





Re: Metric for JVM Overhead

2021-06-25 Thread Yun Tang
Hi Pranjul,

Currently, Flink only has the metrics shown in the TaskManager UI to report the
configured capacity of the JVM overhead. Flink cannot detect how much overhead
memory is actually occupied, because those memory footprints may be allocated by
third-party libraries directly via OS malloc instead of through the JVM.

Tools provided by memory allocators such as jemalloc or tcmalloc can help find
out how much memory is used via OS malloc. Even so, there is still some memory
allocated via mmap or on native stacks, which is not so easy to detect.
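As a concrete sketch of the allocator-based approach (the library path and dump location are environment-specific assumptions, and this requires a jemalloc build with profiling enabled via --enable-prof):

```shell
# Preload jemalloc with heap profiling for the TaskManager JVM.
# The .so path varies by distro; lg_prof_interval:30 dumps a profile
# roughly every 2^30 bytes (1 GiB) of cumulative allocation.
export LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libjemalloc.so.2
export MALLOC_CONF=prof:true,lg_prof_interval:30,prof_prefix:/tmp/jeprof
# ...start the TaskManager as usual, then inspect the dumps:
jeprof --show_bytes "$(which java)" /tmp/jeprof.*.heap
```

This surfaces native allocations that go through malloc; as noted above, memory obtained via mmap or native stacks will still be invisible to it.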

Best
Yun Tang

From: Guowei Ma 
Sent: Friday, June 25, 2021 15:22
To: Pranjul Ahuja 
Cc: user 
Subject: Re: Metric for JVM Overhead

Hi Pranjul,
There are already some system metrics that track JVM
status (CPU/memory/threads/GC). You can find them in [1].

[1]https://ci.apache.org/projects/flink/flink-docs-release-1.13/docs/ops/metrics/#system-metrics
Best,
Guowei


On Fri, Jun 25, 2021 at 2:33 PM Pranjul Ahuja wrote:
Hi,

Is there any metric to track the task manager JVM overhead? Or is it the case 
that it is already included in the metric Status.JVM.Memory.NonHeap?

Thanks,
Pranjul


Re: How to make onTimer() trigger on a CoProcessFunction after a failure?

2021-06-25 Thread Piotr Nowojski
Sorry for the delayed response, but I'm glad to hear you have solved the
problem.

Piotrek



On Thu, Jun 24, 2021 at 10:55 Felipe Gutierrez wrote:

> So, just an update.
>
> When I used this code (my stateful watermark) in the original application,
> it seems that I can recover the latest watermark and continue processing the
> join with the events stuck on it.
> I don't even have to create MyCoProcessFunction to implement a low-level
> join. The available .coGroup(MyCoGroupFunction) works like a charm.
>
> Thank you again for the clarifications!
> Felipe
>
> On Mon, Jun 21, 2021 at 5:18 PM Felipe Gutierrez <
> felipe.o.gutier...@gmail.com> wrote:
>
>> Hello Piotr,
>>
>> Could you please help me to ensure that I am implementing it in the
>> correct way?
>>
>> I created the WatermarkFunction [1] based on the FilterFunction from
>> Flink and the WatermarkStreamOperator [2] and I am doing unit test [3].
>> Then there are things that I am not sure how to do.
>>
>> How can I make the ListState a singleton across all parallel operators?
>>
>> When my job restarts I don't even have to call "processWatermark(new
>> Watermark(maxWatermark));" at the end of "initializeState()". I can see
>> that the job processes the previous watermarks before it fails. Is it because
>> the source is the one that I created at the end of the unit test, "MySource"?
>> Or is it because I don't have a join in the stream pipeline? The output of
>> my unit test is below in this message, in case you are not able to run the
>> test.
>>
>> [1]
>> https://github.com/felipegutierrez/explore-flink/blob/master/docker/ops-playground-image/java/explore-flink/src/main/java/org/sense/flink/examples/stream/operator/watermark/WatermarkFunction.java
>> [2]
>> https://github.com/felipegutierrez/explore-flink/blob/master/docker/ops-playground-image/java/explore-flink/src/main/java/org/sense/flink/examples/stream/operator/watermark/WatermarkStreamOperator.java
>> [3]
>> https://github.com/felipegutierrez/explore-flink/blob/master/docker/ops-playground-image/java/explore-flink/src/test/java/org/sense/flink/examples/stream/operator/watermark/WatermarkStreamOperatorTest.java#L113
>>
>> $ cd explore-flink/docker/ops-playground-image/java/explore-flink/
>> $ mvn -Dtest=WatermarkStreamOperatorTest#testRestartWithLatestWatermark
>> test
>>
>> WatermarkStreamOperator.initializeState
>> WatermarkStreamOperator.initializeState
>> WatermarkStreamOperator.initializeState
>> WatermarkStreamOperator.initializeState
>> initializeState... 0
>> initializeState... 0
>> initializeState... 0
>> initializeState... 0
>> maxWatermark: 0
>> maxWatermark: 0
>> maxWatermark: 0
>> maxWatermark: 0
>> processing watermark: 0
>> processing watermark: 0
>> processing watermark: 0
>> processing watermark: 0
>> processing watermark: 0
>> processing watermark: 0
>> processing watermark: 0
>> processing watermark: 0
>> Attempts restart: 0
>> processing watermark: 1
>> processing watermark: 1
>> processing watermark: 1
>> processing watermark: 1
>> Attempts restart: 0
>> processing watermark: 2
>> processing watermark: 2
>> processing watermark: 2
>> processing watermark: 2
>> Attempts restart: 0
>> processing watermark: 3
>> processing watermark: 3
>> processing watermark: 3
>> processing watermark: 3
>> Attempts restart: 0
>> processing watermark: 9223372036854775807
>> processing watermark: 9223372036854775807
>> processing watermark: 9223372036854775807
>> processing watermark: 9223372036854775807
>> This exception will trigger until the reference time [2021-06-21
>> 16:57:19.531] reaches the trigger time [2021-06-21 16:57:21.672] // HERE
>> THE JOB IS RESTARTING
>> initializeState... 1
>> initializeState... 1
>> initializeState... 1
>> WatermarkStreamOperator.initializeState
>> WatermarkStreamOperator.initializeState
>> WatermarkStreamOperator.initializeState
>> WatermarkStreamOperator.initializeState
>> watermarkList recovered: 0
>> watermarkList recovered: 0
>> watermarkList recovered: 0
>> watermarkList recovered: 0
>> watermarkList recovered: 0
>> watermarkList recovered: 1
>> watermarkList recovered: 2
>> initializeState... 1
>> maxWatermark: 2 // HERE IS THE LATEST WATERMARK
>> processing watermark: 2 // I PROCESS IT HERE
>> watermarkList recovered: 0
>> watermarkList recovered: 1
>> watermarkList recovered: 0
>> watermarkList recovered: 0
>> watermarkList recovered: 1
>> watermarkList recovered: 1
>> watermarkList recovered: 2
>> watermarkList recovered: 2
>> watermarkList recovered: 2
>> maxWatermark: 2
>> maxWatermark: 2
>> processing watermark: 2
>> processing watermark: 2
>> maxWatermark: 2
>> processing watermark: 2
>> processing watermark: 0 // IT IS ALSO PROCESSING THE OTHER WATERMARKS.
>> WHY?
>> processing watermark: 0
>> processing watermark: 0
>> processing watermark: 0
>> Attempts restart: 1
>> processing watermark: 1
>> processing watermark: 1
>> processing watermark: 1
>> processing watermark: 1
>> Attempts restart: 1
>> processing watermark: 2
>> processing watermark: 2
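On the ListState question above: one standard way to let every parallel subtask see all checkpointed watermarks on restore is Flink's union-redistributed operator state. This is a sketch, not code from the thread; `context` stands for the StateInitializationContext passed to initializeState():

```java
// With getUnionListState, EVERY subtask receives ALL checkpointed entries
// on restore, so each one can compute the same global maximum watermark.
ListStateDescriptor<Long> descriptor =
        new ListStateDescriptor<>("latest-watermark", Long.class);
ListState<Long> watermarks =
        context.getOperatorStateStore().getUnionListState(descriptor);

long maxWatermark = Long.MIN_VALUE;
for (Long w : watermarks.get()) {
    maxWatermark = Math.max(maxWatermark, w);
}
// maxWatermark now holds the same value on every subtask.
```

By contrast, the plain getListState splits the entries round-robin across subtasks on restore, which would explain different subtasks recovering different watermark values.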
>> 

Looking for example code

2021-06-25 Thread traef
I'm just starting with Flink. I've been trying all the examples online and none
of them work. I am not a Java programmer, but I have been programming since 1982.
I would like example code that reads from a Pulsar topic and outputs to another
Pulsar topic.

Pulsar version: 2.8.0
Flink version: 1.13.1
Scala version: 2.11

Thank you in advance.
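Flink 1.13 does not ship an official Pulsar connector (one landed in Flink 1.14, and StreamNative maintains a separate pulsar-flink connector), so one dependency-light option is to wrap the plain Pulsar Java client in generic DataStream functions. The sketch below is untested and ignores exactly-once concerns; the service URL, topic names, and subscription name are placeholders:

```java
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.sink.RichSinkFunction;
import org.apache.flink.streaming.api.functions.source.RichSourceFunction;
import org.apache.pulsar.client.api.Consumer;
import org.apache.pulsar.client.api.Message;
import org.apache.pulsar.client.api.Producer;
import org.apache.pulsar.client.api.PulsarClient;

import java.nio.charset.StandardCharsets;

// Minimal pipeline: read strings from "input-topic" and write them to "output-topic".
public class PulsarPassThrough {

    static final String SERVICE_URL = "pulsar://localhost:6650"; // placeholder

    /** Source: blocks on consumer.receive() and emits each message as a String. */
    static class PulsarStringSource extends RichSourceFunction<String> {
        private volatile boolean running = true;

        @Override
        public void run(SourceContext<String> ctx) throws Exception {
            try (PulsarClient client = PulsarClient.builder().serviceUrl(SERVICE_URL).build();
                 Consumer<byte[]> consumer = client.newConsumer()
                         .topic("input-topic")
                         .subscriptionName("flink-sub")
                         .subscribe()) {
                while (running) {
                    Message<byte[]> msg = consumer.receive();
                    synchronized (ctx.getCheckpointLock()) {
                        ctx.collect(new String(msg.getData(), StandardCharsets.UTF_8));
                    }
                    consumer.acknowledge(msg);
                }
            }
        }

        @Override
        public void cancel() { running = false; }
    }

    /** Sink: one Pulsar producer per parallel subtask. */
    static class PulsarStringSink extends RichSinkFunction<String> {
        private transient PulsarClient client;
        private transient Producer<byte[]> producer;

        @Override
        public void open(Configuration parameters) throws Exception {
            client = PulsarClient.builder().serviceUrl(SERVICE_URL).build();
            producer = client.newProducer().topic("output-topic").create();
        }

        @Override
        public void invoke(String value, Context context) throws Exception {
            producer.send(value.getBytes(StandardCharsets.UTF_8));
        }

        @Override
        public void close() throws Exception {
            if (producer != null) producer.close();
            if (client != null) client.close();
        }
    }

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.addSource(new PulsarStringSource())
                .addSink(new PulsarStringSink());
        env.execute("pulsar-pass-through");
    }
}
```

If you need checkpointed offsets and exactly-once delivery, the StreamNative connector (or upgrading to Flink ≥ 1.14 for the official one) is the better route; this sketch only acknowledges messages after emitting them.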

Re: upgrade kafka-schema-registry-client to 6.1.2 for Flink-avro-confluent-registry

2021-06-25 Thread Fabian Paul
Hi,

Thanks for bringing this up. It looks to me like something we definitely want 
to fix. Unfortunately, I also do not see an easy workaround
besides building your own flink-avro-confluent-registry and bumping the 
dependency.

Can you create a JIRA ticket for bumping the dependencies, and would you be
willing to work on this? A few things are still a bit unclear,
e.g. whether the newer Confluent schema registry versions are compatible with our
Kafka version (2.4.1).

Best,
Fabian

Re: Flink 1.4.1 randomly responds HTTP 500 when sending job to Job Manager

2021-06-25 Thread Guowei Ma
Hi Burcu,
Could you share more logs? I can try to help find out what is happening.
But to be honest, 1.4 is too old a version for the community to support; you'd
better upgrade to a newer version.
Best,
Guowei


On Fri, Jun 25, 2021 at 2:48 PM Burcu Gül POLAT EĞRİ wrote:

> Dear All,
>
> We are using Flink 1.4.1 in one of our projects. We send image processing
> jobs to our processing nodes via Flink. Flink TaskManagers are installed
> on each processing node, and our main application sends jobs to the Flink
> JobManager, which dispatches them to the TaskManagers according to
> availability. We implemented a Java application (let's say a node application)
> and send this application jar to the nodes along with the jobs. Flink executes
> this application, which in turn runs our processors on the processing nodes.
> This was working properly, but somehow we sometimes get a weird error these
> days, and we cannot understand why. Our main application sends lots of jobs
> to the JobManager, and sometimes it responds with HTTP 500 and the exception
> below, while our node application continues executing. When we receive HTTP
> 500 we send the job again, and this time the JobManager returns HTTP 200. We
> cannot understand why we receive the HTTP 500 and the exception below. This
> error causes the same images to be generated more than once, and our customer
> does not want duplicate images.
>
> 09:45:49.614 WARN  [local-cluster-thread-2]
> t.c.s.m.w.n.a.e.FlinkJobExecutor.initializeJob:977 - [PROCESS_ID:
> WFM-ba350a80-1b5a-4ca4-869a-e3c9d3a0c32d]/Cannot instantiate job in FLINK
> in 1. trial; no job identifier is provided by Flink api, please check if
> system configuration is valid and Flink is running. Flink responds with
> http response is 500. Flink return response String:
> java.util.concurrent.CompletionException: org.apache.flink.util.FlinkException: Could not run the jar.
> at org.apache.flink.runtime.webmonitor.handlers.JarRunHandler.lambda$handleJsonRequest$0(JarRunHandler.java:90)
> at java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1604)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
> at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: org.apache.flink.util.FlinkException: Could not run the jar.
> ... 9 more
> Caused by: org.apache.flink.client.program.ProgramInvocationException: The program caused an error:
> at org.apache.flink.client.program.OptimizerPlanEnvironment.getOptimizedPlan(OptimizerPlanEnvironment.java:93)
> at org.apache.flink.client.program.ClusterClient.getOptimizedPlan(ClusterClient.java:334)
> at org.apache.flink.runtime.webmonitor.handlers.JarActionHandler.getJobGraphAndClassLoader(JarActionHandler.java:87)
> at org.apache.flink.runtime.webmonitor.handlers.JarRunHandler.lambda$handleJsonRequest$0(JarRunHandler.java:69)
> ... 8 more
> Caused by: org.apache.flink.client.program.OptimizerPlanEnvironment$ProgramAbortException
> at org.apache.flink.client.program.OptimizerPlanEnvironment.execute(OptimizerPlanEnvironment.java:54)
> at org.apache.flink.api.java.ExecutionEnvironment.execute(ExecutionEnvironment.java:815)
> at org.apache.flink.api.java.DataSet.collect(DataSet.java:413)
> at org.apache.flink.api.java.DataSet.print(DataSet.java:1652)
> at tr.com.sdt.mm.wfm.processor.api.agent.ProcessorInvokerAgent.main(ProcessorInvokerAgent.java:139)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:525)
> at org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:417)
> at org.apache.flink.client.program.OptimizerPlanEnvironment.getOptimizedPlan(OptimizerPlanEnvironment.java:83)
> ... 11 more
>
> --
> BURCU
>


Re: Metric for JVM Overhead

2021-06-25 Thread Guowei Ma
Hi Pranjul,
There are already some system metrics that track JVM
status (CPU/memory/threads/GC). You can find them in [1].

[1]
https://ci.apache.org/projects/flink/flink-docs-release-1.13/docs/ops/metrics/#system-metrics
Best,
Guowei


On Fri, Jun 25, 2021 at 2:33 PM Pranjul Ahuja  wrote:

> Hi,
>
> Is there any metric to track the task manager JVM overhead? Or is it the
> case that it is already included in the metric Status.JVM.Memory.NonHeap?
>
> Thanks,
> Pranjul
>


Flink 1.4.1 randomly responds HTTP 500 when sending job to Job Manager

2021-06-25 Thread Burcu Gül POLAT EĞRİ
Dear All,

We are using Flink 1.4.1 in one of our projects. We send image processing
jobs to our processing nodes via Flink. Flink TaskManagers are installed
on each processing node, and our main application sends jobs to the Flink
JobManager, which dispatches them to the TaskManagers according to
availability. We implemented a Java application (let's say a node application)
and send this application jar to the nodes along with the jobs. Flink executes
this application, which in turn runs our processors on the processing nodes.
This was working properly, but somehow we sometimes get a weird error these
days, and we cannot understand why. Our main application sends lots of jobs
to the JobManager, and sometimes it responds with HTTP 500 and the exception
below, while our node application continues executing. When we receive HTTP
500 we send the job again, and this time the JobManager returns HTTP 200. We
cannot understand why we receive the HTTP 500 and the exception below. This
error causes the same images to be generated more than once, and our customer
does not want duplicate images.

09:45:49.614 WARN  [local-cluster-thread-2]
t.c.s.m.w.n.a.e.FlinkJobExecutor.initializeJob:977 - [PROCESS_ID:
WFM-ba350a80-1b5a-4ca4-869a-e3c9d3a0c32d]/Cannot instantiate job in FLINK
in 1. trial; no job identifier is provided by Flink api, please check if
system configuration is valid and Flink is running. Flink responds with
http response is 500. Flink return response String:
java.util.concurrent.CompletionException: org.apache.flink.util.FlinkException: Could not run the jar.
    at org.apache.flink.runtime.webmonitor.handlers.JarRunHandler.lambda$handleJsonRequest$0(JarRunHandler.java:90)
    at java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1604)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.flink.util.FlinkException: Could not run the jar.
    ... 9 more
Caused by: org.apache.flink.client.program.ProgramInvocationException: The program caused an error:
    at org.apache.flink.client.program.OptimizerPlanEnvironment.getOptimizedPlan(OptimizerPlanEnvironment.java:93)
    at org.apache.flink.client.program.ClusterClient.getOptimizedPlan(ClusterClient.java:334)
    at org.apache.flink.runtime.webmonitor.handlers.JarActionHandler.getJobGraphAndClassLoader(JarActionHandler.java:87)
    at org.apache.flink.runtime.webmonitor.handlers.JarRunHandler.lambda$handleJsonRequest$0(JarRunHandler.java:69)
    ... 8 more
Caused by: org.apache.flink.client.program.OptimizerPlanEnvironment$ProgramAbortException
    at org.apache.flink.client.program.OptimizerPlanEnvironment.execute(OptimizerPlanEnvironment.java:54)
    at org.apache.flink.api.java.ExecutionEnvironment.execute(ExecutionEnvironment.java:815)
    at org.apache.flink.api.java.DataSet.collect(DataSet.java:413)
    at org.apache.flink.api.java.DataSet.print(DataSet.java:1652)
    at tr.com.sdt.mm.wfm.processor.api.agent.ProcessorInvokerAgent.main(ProcessorInvokerAgent.java:139)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:525)
    at org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:417)
    at org.apache.flink.client.program.OptimizerPlanEnvironment.getOptimizedPlan(OptimizerPlanEnvironment.java:83)
    ... 11 more

-- 
BURCU


Metric for JVM Overhead

2021-06-25 Thread Pranjul Ahuja
Hi,

Is there any metric to track the task manager JVM overhead? Or is it the case 
that it is already included in the metric Status.JVM.Memory.NonHeap?

Thanks,
Pranjul