Hello
I successfully ran a job with 'flink run -c', but this was for the local
setup.
How should I proceed with a cluster? Will Flink automagically instantiate
the job on all servers? I hope I don't have to start 'flink run -c' on all
machines.
I'm new to Flink and big data, so sorry for the probably basic question.
Hi!
Given a Flink cluster, you only call `flink run ...` once to submit a
job; for simplicity I would submit it on the node where you started
the cluster. Flink will automatically distribute the job across the cluster
in smaller independent parts known as Tasks.
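To illustrate, a single submission from the cluster's master node might look like the following. This is a hypothetical sketch: the JAR path and the main class name (`com.example.MyJob`) are placeholders, not something from this thread.

```shell
# Submit the job ONCE, from the node where the cluster was started.
# Flink's JobManager then splits the job into Tasks and schedules them
# onto the TaskManagers across the cluster -- no need to run this on
# every machine.
./bin/flink run -c com.example.MyJob /path/to/my-job.jar
```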
Regards,
Chesnay
…expected. Am I guessing wrong?
Thanks!
Rob
> Original message
>From: Chesnay Schepler ches...@apache.org
>Subject: Re: how many 'run -c' commands to start?
>To: user@flink.apache.org
>Sent: 28.09.2017 15:05
…SCHEDULED
...
I thought --detach would put the process in the background and give me back
the command line, but maybe I got the meaning of this option wrong?
Thank you!
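For reference, a detached submission is usually written like the sketch below. This is an assumption-laden example (placeholder class and JAR path), not a command from this thread:

```shell
# -d / --detached: submit the job and return immediately, printing the
# JobID, instead of the client blocking until the job finishes.
# Without -d, `flink run` stays attached and streams the job's status.
./bin/flink run -d -c com.example.MyJob /path/to/my-job.jar
```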
Thanks, Chesnay, that was indeed the problem.
It also explains why -p5 was not working for me from the command line.
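For context, `-p` sets the default parallelism of a submitted job. A common reason it appears to "not work" (an assumption here, since Chesnay's intermediate reply is not included in this excerpt) is that a parallelism hard-coded in the program overrides the command-line value:

```shell
# -p sets the job's default parallelism at submission time.
# Note: a parallelism set explicitly in the program (e.g. via
# env.setParallelism(...) or per-operator setParallelism(...)) takes
# precedence over -p, which can make the flag seem to have no effect.
./bin/flink run -p 5 -c com.example.MyJob /path/to/my-job.jar
```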
Best regards
Robert