Hi everyone! When submitting a Flink job, is it possible to read a custom file that is distributed through the HDFS/YARN DistributedCache?
Spark can do this with a command like the following:
bin/spark-submit --master yarn --deploy-mode cluster --files \
  /opt/its007-datacollection-conf.properties#its007-datacollection-conf.properties \
  ...
The Spark driver can then read the `its007-datacollection-conf.properties` file from its
working directory. To make the goal concrete, I have put two small sketches below.
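On the Spark side, the driver simply reads the alias from the container's working directory, roughly like this (a minimal sketch; the class name and property key are placeholders of mine):

    import java.io.FileInputStream;
    import java.util.Properties;

    public class DriverConfigRead {
        public static void main(String[] args) throws Exception {
            // --files ships the file into the YARN container's working directory
            // under the alias given after '#', so a relative path is enough here
            Properties props = new Properties();
            try (FileInputStream in = new FileInputStream("its007-datacollection-conf.properties")) {
                props.load(in);
            }
            System.out.println(props.getProperty("some.key")); // placeholder key
        }
    }

The closest thing I have found in Flink is registerCachedFile() together with the DistributedCache in a rich function, something like the sketch below. But that seems aimed at UDFs running on the TaskManagers rather than at the client/JobManager side, so I am not sure it is the right mechanism (the hdfs:// path here is just a placeholder):

    import java.io.File;
    import org.apache.flink.api.common.functions.RichMapFunction;
    import org.apache.flink.configuration.Configuration;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    public class CachedFileSketch {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

            // register the file under a symbolic name; Flink copies it to every TaskManager
            env.registerCachedFile("hdfs:///tmp/its007-datacollection-conf.properties",
                    "its007-datacollection-conf.properties");

            env.fromElements("a", "b")
                .map(new RichMapFunction<String, String>() {
                    @Override
                    public void open(Configuration parameters) throws Exception {
                        // resolve the locally cached copy on the TaskManager
                        File conf = getRuntimeContext().getDistributedCache()
                                .getFile("its007-datacollection-conf.properties");
                        // ... load the properties from 'conf' here
                    }

                    @Override
                    public String map(String value) {
                        return value;
                    }
                })
                .print();

            env.execute("distributed-cache-sketch");
        }
    }

Is registerCachedFile() the intended way, or is there an equivalent of Spark's --files for shipping a config file at submission time? Thanks!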
