Rong, thanks for your reply! This is what I need!



------------------ Original Message ------------------
From: "Rong Rong" <[email protected]>
Date: Monday, September 3, 2018, 00:02
To: <[email protected]>
Cc: "user" <[email protected]>
Subject: Re: flink use hdfs DistributedCache



I am not sure if this suits your use case, but the Flink YARN CLI does support
transferring local resources to all YARN nodes. Simply use [1]:
bin/flink run -m yarn-cluster -yt <local_resource> 
or 
bin/flink run -m yarn-cluster --yarnship <local_resource> 

should do the trick.
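If the shipped file then needs to be read from user code, one approach (a minimal sketch, assuming the shipped files end up on the classpath of the YARN containers; the file name `conf.properties` and the key `some.key` are hypothetical) is to load it as a classpath resource:

import java.io.InputStream;
import java.util.Properties;

public class ShippedConfigExample {
    public static void main(String[] args) throws Exception {
        // Assumption: files shipped with -yt/--yarnship are available on the
        // container classpath, so they can be loaded as classpath resources.
        Properties props = new Properties();
        try (InputStream in = ShippedConfigExample.class
                .getClassLoader()
                .getResourceAsStream("conf.properties")) {
            if (in == null) {
                throw new IllegalStateException("conf.properties not found on classpath");
            }
            props.load(in);
        }
        System.out.println(props.getProperty("some.key"));
    }
}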


It might not be using the HDFS DistributedCache API under the hood, though.
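For reference, Flink also has its own distributed cache API, which is closer to what Spark's --files flag does: register a file (e.g. from HDFS) under a logical name, and Flink copies it to each worker before the job runs. A minimal sketch using the DataSet API (the HDFS path and the logical name "myConf" are placeholders):

import org.apache.flink.api.common.functions.RichMapFunction;
import org.apache.flink.api.java.ExecutionEnvironment;

public class CachedFileExample {
    public static void main(String[] args) throws Exception {
        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

        // Register a file from HDFS under a logical name; Flink distributes
        // a local copy to every worker node before the job starts.
        env.registerCachedFile("hdfs:///path/to/conf.properties", "myConf");

        env.fromElements("a", "b")
           .map(new RichMapFunction<String, String>() {
               @Override
               public String map(String value) throws Exception {
                   // Look up the worker-local copy of the cached file.
                   java.io.File conf = getRuntimeContext()
                       .getDistributedCache()
                       .getFile("myConf");
                   return value + " (conf at " + conf.getAbsolutePath() + ")";
               }
           })
           .print();
    }
}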


Thanks,
Rong


[1] 
https://ci.apache.org/projects/flink/flink-docs-release-1.6/ops/cli.html#usage



On Sun, Sep 2, 2018 at 2:07 AM <[email protected]> wrote:

Hi everyone! Can Flink submit a job that reads a custom file distributed via the
HDFS DistributedCache, like Spark can with the following command:
    bin/spark-submit --master yarn --deploy-mode cluster --files
/opt/its007-datacollection-conf.properties#its007-datacollection-conf.properties
   ...
The Spark driver can then read the `its007-datacollection-conf.properties` file in
its working directory.


Thanks!
