Re: Spark Driver "behind" NAT

2015-01-06 Thread Aaron
Found the issue in JIRA: https://issues.apache.org/jira/browse/SPARK-4389?jql=project%20%3D%20SPARK%20AND%20text%20~%20NAT
On Tue, Jan 6, 2015 at 10:45 AM, Aaron wrote: > From what I can tell, this isn't a "firewall" issue per se... it's how the Remoting Service "binds" to an IP given cmd line ...

Re: Spark Driver "behind" NAT

2015-01-06 Thread Aaron
From what I can tell, this isn't a "firewall" issue per se... it's how the Remoting Service "binds" to an IP given cmd line parameters. So, if I have a VM (or OpenStack or EC2 instance) running on a private network, let's say, where the IP address is 192.168.X.Y... I can't tell the Workers to "reach ...
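For reference, the "cmd line parameters" in question are the driver host/port settings. A minimal sketch of how they are typically passed to spark-shell (the master host, port number and floating IP 203.0.113.10 below are placeholders, not values from the thread):

    # Placeholders only: spark-master, 203.0.113.10 (floating IP) and 7001 are examples.
    # On Spark 1.x the Akka-based remoting both advertises AND binds to spark.driver.host,
    # so pointing it at a floating IP that is not on a local interface fails to bind,
    # which is the behavior described above.
    spark-shell \
      --master spark://spark-master:7077 \
      --conf spark.driver.host=203.0.113.10 \
      --conf spark.driver.port=7001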

Re: Spark Driver "behind" NAT

2015-01-05 Thread Aaron
Thanks for the link! However, from reviewing the thread, it appears you cannot have a NAT/firewall between the cluster and the spark-driver/shell... is this correct? When the shell starts up, it binds to the internal IP (e.g. 192.168.x.y)... not the external floating IP... which is routable from the cluster ...
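Later Spark releases separate the bind address from the advertised address, which is exactly this split; a sketch assuming Spark 2.1.0 or newer, where spark.driver.bindAddress is documented (both IPs below are placeholders):

    # Sketch for Spark 2.1.0+: bind the listener to the instance's private address,
    # but advertise the routable floating IP to the master and workers.
    # 192.168.1.20 (private) and 203.0.113.10 (floating) are placeholder addresses.
    spark-shell \
      --master spark://spark-master:7077 \
      --conf spark.driver.bindAddress=192.168.1.20 \
      --conf spark.driver.host=203.0.113.10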

Re: Spark Driver "behind" NAT

2015-01-05 Thread Akhil Das
You can have a look at this discussion: http://apache-spark-user-list.1001560.n3.nabble.com/Submitting-Spark-job-on-Unix-cluster-from-dev-environment-Windows-td16989.html
Thanks, Best Regards
On Mon, Jan 5, 2015 at 6:11 PM, Aaron wrote: > Hello there, I was wondering if there is a way to have the ...

Spark Driver "behind" NAT

2015-01-05 Thread Aaron
Hello there, I was wondering if there is a way to have the spark-shell (or pyspark) sit behind a NAT when talking to the cluster? Basically, we have OpenStack instances that run with internal IPs, and we assign floating IPs as needed. Since the workers make direct TCP connections back, the spark- ...
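Because the workers connect back to the driver on additional ports (block manager, UI), NAT or firewall setups usually also need those ports pinned rather than left ephemeral; a sketch with placeholder port numbers:

    # Placeholder ports: fixing them lets NAT/firewall rules forward a known set of
    # ports back to the driver instead of random ephemeral ones.
    pyspark \
      --master spark://spark-master:7077 \
      --conf spark.driver.port=7001 \
      --conf spark.blockManager.port=7002 \
      --conf spark.ui.port=4040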