It should be similar to other Hadoop jobs. You need the Hadoop configuration on your client machine, and point HADOOP_CONF_DIR in Spark to that configuration.
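A minimal sketch, assuming the cluster's config files have been copied to /etc/hadoop/conf on the client; the main class and jar path below are placeholders:

    # HADOOP_CONF_DIR must contain core-site.xml, hdfs-site.xml and yarn-site.xml
    # copied from a cluster node, so the client can locate the ResourceManager.
    export HADOOP_CONF_DIR=/etc/hadoop/conf

    # Submit to YARN from the gateway machine; --deploy-mode cluster runs the
    # driver inside the cluster, so the client can disconnect after submission.
    ./bin/spark-submit \
      --master yarn \
      --deploy-mode cluster \
      --class com.example.MyApp \
      /path/to/my-app.jar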
Thanks,
Zhan Zhang

On Sep 22, 2015, at 6:37 PM, Zhiliang Zhu <zchl.j...@yahoo.com.INVALID> wrote:

> Dear Experts,
>
> A Spark job is running on the cluster via YARN. The job can currently be
> submitted from a machine inside the cluster; however, I would like to
> submit the job from another machine that does not belong to the cluster.
>
> I know that for Hadoop this can be done by installing a Hadoop gateway on
> the other machine, which is used to connect to the cluster. What would the
> approach be for Spark? Is it the same as for Hadoop? And where is the
> instruction doc for installing this gateway?
>
> Thank you very much~~
> Zhiliang