Hi All,

I'm having an issue when launching a Python app against a standalone
cluster: it doesn't reach the cluster and runs in local mode instead.
It's the first time I try the cluster; in local mode everything works fine.

This is what I did:

-> /home/user/Spark/spark-1.3.0-bin-hadoop2.4/sbin/start-all.sh
   # Master and worker are up at localhost:8080/4040
-> /home/user/Spark/spark-1.3.0-bin-hadoop2.4/bin/spark-submit --master spark://localhost:7077 Script.py
   * The script runs fine, but locally :( I can check it at localhost:4040,
     but I don't see any job in the cluster UI (the context setup in the
     script is roughly the sketch just below).
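
In case it helps, this is roughly the kind of context setup I have in
Script.py (simplified sketch; the app name and the workload are just
placeholders, and I'm deliberately not calling setMaster() so that the
--master flag from spark-submit decides where it runs):

from pyspark import SparkConf, SparkContext

# No setMaster() here on purpose: the --master passed to spark-submit
# is supposed to decide where the job runs.
conf = SparkConf().setAppName("MyTestApp")   # placeholder app name
sc = SparkContext(conf=conf)

# Placeholder workload, just so something shows up in the UIs.
rdd = sc.parallelize(range(1000))
print(rdd.count())

sc.stop()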

The only warning I get is:
WARN Utils: Your hostname, localhost resolves to a loopback address:
127.0.0.1; using 192.168.1.132 instead (on interface eth0)

I set SPARK_LOCAL_IP=127.0.0.1 to solve this; at least the warning disappears,
but the script keeps executing locally, not on the cluster.
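
By the way, is there a way to confirm from inside the script which master the
context actually picked up? I was thinking of a quick check like this (just a
sketch, the app name is a placeholder):

from pyspark import SparkContext

# Quick diagnostic: print the master URL the running context is using.
sc = SparkContext(appName="MasterCheck")   # placeholder app name
print(sc.master)    # I would expect spark://localhost:7077 here
sc.stop()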

I think it has something to do with my virtual machine:
-> Host server: Linux Mint
-> The virtual machine (Workstation 10) where Spark runs is Linux Mint as well.

Any ideas about what I'm doing wrong?

Thanks in advance for any suggestions; I'm going crazy over it!!



