Hi,
I would suggest the same thing as Vino did: it might be possible to use stdout
somehow, but it's a better idea to coordinate in some other way. Have the first
job produce some (side?) output with a control message once it finishes, and
use that message to control the second job.
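One minimal sketch of that idea, using a hypothetical marker file as the control channel (plain Python, not Flink API code; in a real setup the "control message" could just as well go to a filesystem path, a Kafka topic, or a database row that both jobs can see):

```python
import os
import tempfile
import time

# Hypothetical control channel: the first job writes this file when its
# results are ready; the second job waits for it before starting real work.
CONTROL_FILE = os.path.join(tempfile.gettempdir(), "job1.done")

def signal_done() -> None:
    """Job 1 side: emit the control message once the desired results exist."""
    with open(CONTROL_FILE, "w") as f:
        f.write("done")

def wait_for_job1(poll_seconds: float = 0.1) -> None:
    """Job 2 side: block until the control message appears."""
    while not os.path.exists(CONTROL_FILE):
        time.sleep(poll_seconds)

signal_done()     # job 1 finishes and signals
wait_for_job1()   # job 2 sees the marker and proceeds immediately
```

The key point is that the coordination signal is something both jobs can reach deterministically, unlike a TaskManager's stdout file, whose location depends on where the tasks happen to be scheduled.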
Piotrek
> On 25 Nov 2019, at
Hi Komal,
> Thank you! That's exactly what's happening. Is there any way to force it
> to write to a specific .out of a TaskManager?
No. I am curious why the two jobs depend on stdout. Can you introduce
another coordination mechanism instead of stdout? IMO, stdout is not always
available.
Best,
Vino
Hi Theo,
I want to interrupt/cancel my current job: it has already produced the desired
results, even though it would keep running indefinitely, and the next job
requires the full resources.
Due to a technical issue we cannot access the web UI, so we are working with
the CLI for now.
I found a less crude way by running
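For anyone searching the archives later, the usual CLI flow for this is to look up the job ID and cancel it (the job ID below is hypothetical):

```shell
# List running jobs to find the job ID:
./bin/flink list

# Cancel the job by its ID (hypothetical ID shown):
./bin/flink cancel 5e20cb6b0f357591171dfcca2eea09de
```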
Hi Komal,
Since you use the Flink standalone deployment mode, the tasks of a job that
prints to STDOUT may be deployed on any task manager in the cluster. Did you
check the out files of the other Task Managers?
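For example, you could grep every *.out file in Flink's log directory on each node. The directory and file contents below are just a stand-in so the command is self-contained; on a real standalone cluster the files live under $FLINK_HOME/log/ on every machine:

```shell
# Illustrative only: fake a Flink log directory with two TaskManager .out files.
LOG_DIR=$(mktemp -d)
echo "record from job 2" > "$LOG_DIR/flink-taskexecutor-0.out"
echo "something else"    > "$LOG_DIR/flink-taskexecutor-1.out"

# List which .out file actually received the job's output.
grep -l "record from job 2" "$LOG_DIR"/*.out
```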
Best,
Vino
Komal Mariam wrote on Friday, 22 Nov 2019, at 6:59 PM:
Dear all,
Thank you for your help regarding my previous queries. Unfortunately, I'm
stuck with another one and will really appreciate your input.
I can't seem to produce any output in "flink-taskexecutor-0.out" from my
second job after submitting the first one in my 3-node Flink standalone