Thanks, Kostas!
I am using the DataStream API.

I have a few config/property files (key/value text files) and also some
business rule files (JSON).
These rules and configurations are needed when we process incoming events.
Is there any way to share them with the task nodes from the driver program?
I think this is a very common use case, and I am sure other users face
similar issues.
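To make this concrete, one approach I can think of (a rough sketch, with placeholder
names like RuleApplier, app.properties and rules.json, and assuming the files are small
enough to serialize along with the function) is to load the files in the driver's main()
and pass the parsed contents to a rich function as ordinary serializable fields, so they
get shipped to every task with the serialized function:

import org.apache.flink.api.common.functions.RichMapFunction;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;

public class ShareConfigSketch {

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Read the key/value property file and the JSON rules in the driver.
        Properties props = new Properties();
        props.load(Files.newInputStream(Paths.get("app.properties")));
        String rulesJson = new String(Files.readAllBytes(Paths.get("rules.json")));

        // Copy into plain serializable types before handing them to the function.
        Map<String, String> config = new HashMap<>();
        props.forEach((k, v) -> config.put(k.toString(), v.toString()));

        env.socketTextStream("localhost", 9999)
           .map(new RuleApplier(config, rulesJson))
           .print();

        env.execute("share-config-sketch");
    }

    /** Carries the config and rules to every parallel task instance via function serialization. */
    public static class RuleApplier extends RichMapFunction<String, String> {
        private final Map<String, String> config;
        private final String rulesJson;

        public RuleApplier(Map<String, String> config, String rulesJson) {
            this.config = config;
            this.rulesJson = rulesJson;
        }

        @Override
        public String map(String event) {
            // Apply the business rules to the incoming event here;
            // rule parsing could be done once in open() instead of per record.
            return event + " [config keys=" + config.size() + ", rules=" + rulesJson.length() + " chars]";
        }
    }
}

This seems to work for files read once at submission time, but I am not sure it is the
intended approach.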

+Baswaraj

On Mon, Aug 22, 2016 at 4:56 PM, Kostas Kloudas <k.klou...@data-artisans.com
> wrote:

> Hello Baswaraj,
>
> Are you using the DataSet (batch) or the DataStream API?
>
> If you are using the former, you can use a broadcast variable
> <https://ci.apache.org/projects/flink/flink-docs-master/apis/batch/index.html#broadcast-variables>
> for your task.
> If you are using the DataStream one, then there is no proper support for
> that.
>
> Thanks,
> Kostas
>
> On Aug 20, 2016, at 12:33 PM, Baswaraj Kasture <kbaswar...@gmail.com>
> wrote:
>
> I am running a Flink standalone cluster.
>
> I have a text file that needs to be shared across tasks when I submit my
> application; in other words, put this text file on the classpath of the
> running tasks.
>
> How can we achieve this with Flink?
>
> In Spark, spark-submit has a --jars option that puts all the specified
> files on the classpath of the executors (executors run in separate JVMs and
> are spawned dynamically, so this is possible).
>
> Flink's task managers run tasks as separate threads within the task manager
> JVM (?), so how can we make this text file accessible to all tasks spawned
> by the current application?
>
> Using HDFS, NFS, or including the file in the program jar is one way that I
> know of, but I am looking for a solution that allows me to provide the text
> file at run time and still have it accessible in all tasks.
> Thanks.
>
>
>
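(For completeness, my understanding of the broadcast variable approach mentioned above
for the DataSet API is roughly the sketch below; the names are placeholders and this is
just my reading of the linked docs, so please correct me if it is wrong. It does not
seem to carry over to my DataStream job, which is why I am asking.)

import org.apache.flink.api.common.functions.RichMapFunction;
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.configuration.Configuration;

import java.util.List;

public class BroadcastVariableSketch {

    public static void main(String[] args) throws Exception {
        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

        // The rules to share with every task, registered as a broadcast set below.
        DataSet<String> rules = env.fromElements("rule-a", "rule-b");
        DataSet<String> events = env.fromElements("event-1", "event-2");

        events
            .map(new RichMapFunction<String, String>() {
                private List<String> sharedRules;

                @Override
                public void open(Configuration parameters) {
                    // Fetch the broadcast set by the name it was registered under.
                    this.sharedRules = getRuntimeContext().getBroadcastVariable("rules");
                }

                @Override
                public String map(String event) {
                    return event + " checked against " + sharedRules.size() + " rules";
                }
            })
            .withBroadcastSet(rules, "rules")  // makes "rules" available on every task
            .print();
    }
}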
