Hi Thomas,

quick question: why do you want to use the JarRunHandler? If another process
is building the JobGraph, then one could use the JobSubmitHandler, which
expects a JobGraph and then starts executing it.

Cheers,
Till

On Thu, Jul 25, 2019 at 7:45 PM Thomas Weise <t...@apache.org> wrote:

> Hi,
>
> While considering different options to launch Beam jobs through the Flink
> REST API, I noticed that the implementation of JarRunHandler places quite a
> few restrictions on how the entry point shall construct a Flink job, by
> extracting and manipulating the job graph.
>
> That's normally not a problem for Flink Java programs, but in the scenario
> I'm looking at, the job graph would be constructed by a different process
> and isn't available to the REST handler. Instead, I would like to be able
> to just respond with the job ID of the already launched job.
>
> For context, please see:
>
>
> https://docs.google.com/document/d/1z3LNrRtr8kkiFHonZ5JJM_L4NWNBBNcqRc_yAf6G0VI/edit#heading=h.fh2f571kms4d
>
> The current JarRunHandler code is here:
>
>
> https://github.com/apache/flink/blob/f3c5dd960ff81a022ece2391ed3aee86080a352a/flink-runtime-web/src/main/java/org/apache/flink/runtime/webmonitor/handlers/JarRunHandler.java#L82
>
> It would be nice if there was an option to delegate the responsibility for
> job submission to the user code / entry point. That would be useful for
> Beam and other frameworks built on top of Flink that dynamically create a
> job graph from a different representation.
>
> Possible ways to get there:
>
> * an interface that the main class can implement and that, when present,
> the jar run handler calls instead of main.
>
> * an annotated method
>
> Either way query parameters like savepoint path and parallelism would be
> forwarded to the user code and the result would be the ID of the launched
> job.
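>
> For illustration, the first option might look roughly like the sketch
> below. This is purely hypothetical, not an existing Flink API; the
> interface and method names are made up here:

```java
// Hypothetical sketch only -- not an existing Flink interface.
// An entry point class could implement this, and the jar run handler
// would invoke it instead of main(), delegating job submission to
// user code (e.g. a Beam pipeline translator).
public interface JobSubmissionEntryPoint {

    /**
     * Called by the jar run handler in place of main().
     *
     * @param args          program arguments from the REST request
     * @param savepointPath savepoint to restore from, or null if none
     * @param parallelism   requested default parallelism
     * @return the ID of the job launched by the user code
     */
    String submitJob(String[] args, String savepointPath, int parallelism)
            throws Exception;
}
```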
>
> Thoughts?
>
> Thanks,
> Thomas
>