Hi,
I deployed Flink in session mode. I didn't run any jobs, but I saw the logs below.
That looks normal, the same as what the Flink manual shows.
+ /opt/flink/bin/run-job-manager.sh
Starting HA cluster with 1 masters.
Starting standalonesession daemon on host job-manager-776dcf6dd-xzs8g.
Starting taskexecutor daemon on
Hi Qihua,
I guess looking into kubectl describe and the JobManager logs would help
in understanding what's going on.
Best,
Matthias
On Wed, Sep 29, 2021 at 8:37 PM Qihua Yang wrote:
Is the run-job-manager.sh script actually blocking?
Since you (apparently) use it as an entrypoint, if that script exits
after starting the JM, then from the perspective of Kubernetes everything
is done.
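For illustration, a script like this would behave exactly that way (a guessed
sketch, since the actual run-job-manager.sh wasn't shared):

#!/usr/bin/env bash
# Hypothetical run-job-manager.sh: "jobmanager.sh start" hands off to
# flink-daemon.sh, which backgrounds the JVM, so this wrapper returns
# immediately.
/opt/flink/bin/jobmanager.sh start
# The script ends here with exit code 0; Kubernetes sees the container's
# main process finish and marks it Completed.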
On 30/09/2021 08:59, Matthias Pohl wrote:
I did check kubectl describe; it shows the info below. The reason is Completed.
Ports: 8081/TCP, 6123/TCP, 6124/TCP, 6125/TCP
Host Ports: 0/TCP, 0/TCP, 0/TCP, 0/TCP
Command:
/opt/flink/bin/entrypoint.sh
Args:
/opt/flink/bin/run-job-manager.sh
State:
Thank you for your reply.
From the log, the exit code is 0 and the reason is Completed.
Looks like the cluster is fine. But why does Kubernetes restart the pod? As
you said, from the perspective of Kubernetes everything is done. Then how do
I prevent the restart?
It didn't even give me a chance to upload and run a jar...
Looks like after the *flink-daemon.sh* script completes, it returns exit code
0 and Kubernetes regards the container as done. Is that expected?
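FWIW, the same exit code can also be read straight from the pod status, e.g.
(using the pod name from the earlier log):

kubectl get pod job-manager-776dcf6dd-xzs8g \
  -o jsonpath='{.status.containerStatuses[0].lastState.terminated.exitCode}'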
Thanks,
Qihua
On Thu, Sep 30, 2021 at 11:11 AM Qihua Yang wrote:
Did you use "jobmanager.sh start-foreground" in your own
"run-job-manager.sh", just like what Flink does
in docker-entrypoint.sh[1]?
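For comparison, the session-cluster part of that entrypoint boils down to
something like this (a simplified sketch of the pattern, not the literal
script):

#!/usr/bin/env bash
# Keep the JobManager as the container's foreground process, so the
# entrypoint never exits on its own and Kubernetes keeps the pod Running.
exec /opt/flink/bin/jobmanager.sh start-foreground "$@"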
I strongly suggest starting the Flink session cluster with the official
yamls[2].
[1].
https://github.com/apache/flink-docker/blob/master/1.13/scala_2.11-ja