You can have a look at this discussion:
http://apache-spark-user-list.1001560.n3.nabble.com/Submitting-Spark-job-on-Unix-cluster-from-dev-environment-Windows-td16989.html
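
For what it's worth, the usual workaround is to give the driver a hostname
that resolves differently on each side of the NAT (internal IP on the driver
machine, floating IP on the workers), and to pin the driver's ports so they
can be forwarded. A rough sketch only; the hostname, addresses, and port
numbers below are all hypothetical, and it assumes the floating IP actually
forwards those ports back to the instance:

    # /etc/hosts on the driver machine (name resolves to the bindable internal IP):
    #   10.0.0.5       spark-driver
    # /etc/hosts on each worker (same name resolves to the reachable floating IP):
    #   172.24.4.100   spark-driver

    ./bin/spark-shell \
      --master spark://<master-host>:7077 \
      --conf spark.driver.host=spark-driver \
      --conf spark.driver.port=7001 \
      --conf spark.blockManager.port=7005

Pinning spark.driver.port and spark.blockManager.port matters because Spark
otherwise picks random ephemeral ports, which you can't forward ahead of
time; the fileserver and broadcast services have similar *.port settings if
you need to lock those down as well.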

Thanks
Best Regards

On Mon, Jan 5, 2015 at 6:11 PM, Aaron <aarongm...@gmail.com> wrote:

> Hello there, I was wondering if there is a way to have the spark-shell (or
> pyspark) sit behind a NAT when talking to the cluster?
>
> Basically, we have OpenStack instances that run with internal IPs, and we
> assign floating IPs as needed.  Since the workers make direct TCP
> connections back, the spark-shell is binding to the internal IP, not the
> "floating" one.  Our other use case is running Vagrant VMs on our local
> machines, but we don't have those VMs' NICs set up in "bridged" mode, so
> they too have only "internal" IPs.
>
> I tried using SPARK_LOCAL_IP and the various --conf spark.driver.host
> parameters, but it still gets "angry."
>
> Any thoughts/suggestions?
>
> Currently our workaround is to open a VPNC connection from inside the
> Vagrant VMs or OpenStack instances, but that doesn't seem like a
> long-term plan.
>
> Thanks in advance!
>
> Cheers,
> Aaron
>
