This issue had to do with the update strategy for the Flink deployment.
When I changed it to the following, it worked:
strategy:
  type: RollingUpdate
  rollingUpdate:
    maxSurge: 0
    maxUnavailable: 1
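For context, this is roughly where the strategy block sits in the JobManager
Deployment (a minimal sketch; the name flink-jobmanager and the rest of the
spec are illustrative):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: flink-jobmanager    # illustrative name
spec:
  replicas: 1
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 0           # tear the old pod down before creating the new one
      maxUnavailable: 1
  template:
    ...                     # pod template unchanged

A plausible reading of why this matters: with the Kubernetes defaults
(maxSurge and maxUnavailable both 25%), the replacement JobManager is started
while the old pod is still running, and the two can compete for the same
resources (a single PersistentVolumeClaim, for instance). maxSurge: 0 forces
the old pod down first.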
On Tue, Nov 3, 2020 at 1:39 PM Robert Metzger wrote:
Thanks a lot for providing the logs.
My theory of what is happening is the following:
1. You are probably increasing the memory for the JobManager when changing
the jobmanager.memory.flink.size configuration value.
2. Due to this changed memory configuration, Kubernetes, Docker or the
Linux kernel ...
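If that theory holds, the Flink memory size and the container's memory limit
have to move together. A rough sketch of the two settings involved (values
are illustrative, and the headroom shown is an assumption, not a computed
number):

# flink-conf.yaml: total Flink memory (JVM heap + off-heap)
jobmanager.memory.flink.size: 1280m

# JobManager container spec: the limit needs headroom above
# jobmanager.memory.flink.size for JVM metaspace and overhead
resources:
  limits:
    memory: "1600Mi"   # illustrative; a limit below actual usage gets the pod OOM-killed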
Thanks for your reply, Robert. Please see the attached log from the job
manager; the last line is the only thing that differs from a pod that
starts up successfully.
On Tue, Nov 3, 2020 at 10:41 AM Robert Metzger wrote:
Hi Claude,
I agree that you should be able to restart individual pods with a changed
memory configuration. Can you share the full JobManager log of the failed
restart attempt?
I don't think that the log statement you've posted explains a start failure.
Regards,
Robert
Hello,
I have Flink 1.10.2 installed in a Kubernetes cluster.
Any time I make a change to flink.conf, the Flink jobmanager pod fails
to restart.
For example, I modified the following memory setting in the flink.conf:
jobmanager.memory.flink.size.
After I deploy the change, the pod fails to restart.
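In case it helps, the configuration is mounted into the pods from a
ConfigMap, roughly along these lines (trimmed sketch; names and values are
illustrative):

apiVersion: v1
kind: ConfigMap
metadata:
  name: flink-config        # illustrative name
data:
  flink-conf.yaml: |
    jobmanager.rpc.address: flink-jobmanager
    jobmanager.memory.flink.size: 1024m   # the setting being changed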