Hi Vasily,

Unfortunately no, I don't think there is such an option in your case. In
per-job mode you could try to use the Distributed Cache, which should work
in streaming as well [1], but this doesn't work in application mode, as in
that case no code is executed on the JobMaster [2].
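
If you can run in per-job mode, here is a minimal sketch of how the
Distributed Cache could be used (the file path and the cache name are
placeholders; I'm assuming the file is reachable from the cluster):

import org.apache.flink.api.common.functions.RichMapFunction;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

import java.io.File;

public class DistributedCacheSketch {

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
            StreamExecutionEnvironment.getExecutionEnvironment();

        // Register a file reachable from the cluster; Flink ships a copy
        // to every TaskManager. Path and cache name are placeholders.
        env.registerCachedFile("hdfs:///path/to/app.properties", "app-config");

        env.fromElements("a", "b")
            .map(new RichMapFunction<String, String>() {
                @Override
                public String map(String value) throws Exception {
                    // Access the local copy on the TaskManager side.
                    File config = getRuntimeContext()
                        .getDistributedCache()
                        .getFile("app-config");
                    return value + " (config: " + config.getName() + ")";
                }
            })
            .print();

        env.execute("distributed cache sketch");
    }
}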

Two workarounds that I could propose, which I know are not perfect, are to:
- bundle the configuration file in the jar and read it as a classpath
resource
- pass the entire configuration as a parameter to the job, e.g. as JSON or
a base64-encoded string (see the sketch below).
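
For the second option, a minimal sketch of how the job's main method could
decode such a parameter (the "--config" name and the parsing step are
assumptions for illustration, not an established convention):

import org.apache.flink.api.java.utils.ParameterTool;

import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class ConfigFromArgsSketch {

    public static void main(String[] args) {
        // ParameterTool is Flink's bundled helper for --key value args.
        ParameterTool params = ParameterTool.fromArgs(args);

        // "--config" is a made-up parameter name for this sketch.
        String encoded = params.getRequired("config");
        String json = new String(
            Base64.getDecoder().decode(encoded), StandardCharsets.UTF_8);

        // Parse 'json' with whatever library the jar already bundles
        // (e.g. Jackson), then build the job from the resulting settings.
        System.out.println("decoded configuration: " + json);
    }
}

You could then submit it with something like (the jar name is a
placeholder; "base64 -w 0" is the GNU coreutils flag for no line wrapping):

flink run-application -t yarn-application ./my-job.jar \
    --config "$(base64 -w 0 config.json)"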

Best,
Piotrek

[1]
https://nightlies.apache.org/flink/flink-docs-release-1.13/docs/dev/dataset/overview/#distributed-cache
[2]
https://ci.apache.org/projects/flink/flink-docs-master/docs/deployment/overview/#overview-and-reference-architecture

Tue, 9 Nov 2021 at 14:14 Vasily Melnik <vasily.mel...@glowbyteconsulting.com>
wrote:

> Hi all.
>
> While running Flink jobs in application mode on YARN and Kubernetes, we
> need to provide some configuration files to the main class. Is there any
> option in the Flink CLI to copy local files to the cluster without
> manually copying them to DFS or into the docker image, something like the
> *--files* option in spark-submit?
