The issue was the double quotes around the Java options. This worked -

env.java.opts: -XX:+HeapDumpOnOutOfMemoryError
-XX:HeapDumpPath=/tmp/dump.hprof
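For anyone hitting the same problem, a minimal sketch of the fixed flink-conf.yaml entries, with the quotes removed (the per-process keys env.java.opts.jobmanager and env.java.opts.taskmanager are the variants discussed in this thread; the dump path is the one used above):

```yaml
# flink-conf.yaml
# Note: no surrounding double quotes around the option values.
env.java.opts.jobmanager: -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/dump.hprof
env.java.opts.taskmanager: -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/dump.hprof
```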

On Mon, 8 Mar 2021 at 12:02 PM, Yun Gao <yungao...@aliyun.com> wrote:

> Hi,
>
> I tried with a standalone session (sorry, I do not have a YARN cluster at
> hand) and the Flink cluster could start up normally. Could you check the
> NodeManager log to see the detailed reason the container does not get
> launched? Also, have you checked for spelling errors or unexpected special
> whitespace characters in the configuration?
>
> In the case of configuring `env.java.opts`, it seems the JobManager also
> cannot be launched with this configuration.
>
> Best,
> Yun
>
> ------------------Original Mail ------------------
> *Sender:*bat man <tintin0...@gmail.com>
> *Send Date:*Sat Mar 6 16:03:06 2021
> *Recipients:*user <user@flink.apache.org>
> *Subject:*java options to generate heap dump in EMR not working
>
> Hi,
>>
>> I am trying to generate a heap dump to debug a GC-overhead OOM. For that
>> I added the Java options below to flink-conf.yaml; however, after adding
>> them, YARN is not able to launch the containers. The job logs show it
>> keeps requesting containers from YARN, gets them, and then releases them
>> again; the same cycle repeats. If I remove the option from
>> flink-conf.yaml, then the containers are launched and the job starts
>> processing.
>>
>>
>> env.java.opts.taskmanager: "-XX:+HeapDumpOnOutOfMemoryError
>> -XX:HeapDumpPath=/tmp/dump.hprof"
>>
>> If I try this, then the YARN client does not come up -
>>
>>
>> env.java.opts: "-XX:+HeapDumpOnOutOfMemoryError
>> -XX:HeapDumpPath=/tmp/dump.hprof"
>>
>> Am I doing anything wrong here?
>>
>> PS: I am using EMR.
>>
>> Thanks,
>> Hemant
>>
>
