Are you able to replay this scenario? Did you accidentally send a kill
signal to the job manager process?
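One quick way to rule that out (just a sketch, assuming a standalone session
cluster and a JDK on the host) is to list the Flink JVMs before sending any
signal:

    # jps prints the main class of each JVM, so the JobManager and
    # TaskManager processes are easy to tell apart before killing anything.
    jps -l
    # Expected in session mode (PIDs are illustrative):
    #   12345 org.apache.flink.runtime.entrypoint.StandaloneSessionClusterEntrypoint
    #   67890 org.apache.flink.runtime.taskexecutor.TaskManagerRunner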
On Thu, 13 Oct 2022 at 4:02 PM, Puneet Duggal
wrote:
> Hi,
>
> We use session deployment mode with HA setup. Currently we have 3 job
> managers and 3 task managers running on flink version 1.12.1. Please find
> attached the complete job manager logs.
Hi,

We use session deployment mode with HA setup. Currently we have 3 job managers and 3 task managers running on flink version 1.12.1. Please find attached the complete job manager logs.
[Attachment: jobManager.log]
On 13-Oct-2022, at 7:28 AM, Xintong Song wrote:
I meant your jobmanager also received a SIGTERM signal, and you would need
to figure out where it comes from.
To be specific, this line of log:
> 2022-10-11 22:11:21,683 INFO
> org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - RECEIVED
> SIGNAL 15: SIGTERM. Shutting down as requested.
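If it is not obvious who sent it, one option (a sketch, assuming auditd is
installed and you have root on the box) is to audit kill() syscalls that
deliver SIGTERM:

    # Log every kill() syscall delivering signal 15 (SIGTERM), tagged with a
    # searchable key. Add a matching b32 rule if 32-bit binaries are involved.
    auditctl -a exit,always -F arch=b64 -S kill -F a1=15 -k flink-sigterm
    # Reproduce the jobmanager restart, then check which process sent it:
    ausearch -k flink-sigterm --interpret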
Hi,
Which deployment mode do you use? What is the Flink version?
I think killing TaskManagers won't make the JobManager restart. Could you
provide the whole log as an attachment so we can investigate?
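If this is a standalone cluster, the scripts bundled with the Flink
distribution can bounce a single TaskManager without touching the JobManager
(a sketch):

    # Stops and starts only the local TaskManager process; the JobManager
    # keeps running.
    ./bin/taskmanager.sh stop
    ./bin/taskmanager.sh start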
On Wed, 12 Oct 2022 at 6:01 PM, Puneet Duggal
wrote:
> Hi Xintong Song,
>
> Thanks for your immediate reply.
Hi Xintong Song,
Thanks for your immediate reply. Yes, I do restart the task manager via a kill
command and then a flink restart, because I have seen cases where a simple
flink restart does not pick up the latest configuration. But what I am confused
about is why killing the task manager process and then restarting it causes the
leader job manager to restart as well.
The log shows that the jobmanager received a SIGTERM signal from external.
Depending on how you deploy Flink, that could be a 'kill <pid>' command, or
a kubernetes pod removal / eviction, etc. You may want to check where the
signal came from.
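If this is running on Kubernetes, the pod's event history usually shows
removals and evictions (a sketch; <namespace> and the pod name are
placeholders):

    # Look for Killing / Evicted events around the restart timestamp.
    kubectl get events -n <namespace> --sort-by=.lastTimestamp
    kubectl describe pod <jobmanager-pod> -n <namespace>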
Best,
Xintong
On Wed, Oct 12, 2022 at 6:26 AM Puneet Duggal wrote:
Hi,
I am facing an issue where restarting a task manager after making some
configuration changes causes the leader job manager to restart as well, even
though the task manager itself restarts successfully with the updated
configuration. Pasting the leader job manager logs here:
2022-10-11