Re: Problem submitting a script .py against a standalone cluster.

2015-08-04 Thread Ford Farline
The code is very simple, just a couple of lines. When I launch it, it runs in
local mode but not on the cluster.

import time
from datetime import datetime

from pyspark import SparkContext

# First argument is the master URL ("local"), second is the application name.
sc = SparkContext("local", "Tech Companies Feedback")

beginning_time = datetime.now()

time.sleep(60)  # keep the app alive for a minute so it shows up in the UIs

print datetime.now() - beginning_time

sc.stop()
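
Could the problem be that the master is hardcoded to "local" in that first
argument? As far as I understand, a master set in the code takes precedence
over whatever is passed to spark-submit --master. A minimal sketch of what I
could try instead (assuming the usual PySpark SparkConf API), leaving the
master to spark-submit:

from pyspark import SparkConf, SparkContext

# No master set here: spark-submit --master (e.g. spark://localhost:7077)
# decides where the job runs, so the same script works both locally and
# against the standalone cluster.
conf = SparkConf().setAppName("Tech Companies Feedback")
sc = SparkContext(conf=conf)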

Thanks for your interest,

Gonzalo



On Fri, Jul 31, 2015 at 4:24 AM, Marcelo Vanzin van...@cloudera.com wrote:

 Can you share the part of the code in your script where you create the
 SparkContext instance?

 On Thu, Jul 30, 2015 at 7:19 PM, fordfarline fordfarl...@gmail.com
 wrote:

 Hi All,

 I'm having an issue when launching an app (Python) against a standalone
 cluster: it runs locally instead, as it never reaches the cluster.
 It's the first time I've tried the cluster; in local mode it works fine.

 This is what I did:

 - /home/user/Spark/spark-1.3.0-bin-hadoop2.4/sbin/start-all.sh  # Master and
 worker are up at localhost:8080/4040
 - /home/user/Spark/spark-1.3.0-bin-hadoop2.4/bin/spark-submit --master
 spark://localhost:7077 Script.py
   * The script runs fine, but only in local mode :(  I can see it at
 localhost:4040, but I don't see any job in the cluster UI.

 The only warning is:
 WARN Utils: Your hostname, localhost resolves to a loopback address:
 127.0.0.1; using 192.168.1.132 instead (on interface eth0)

 I set SPARK_LOCAL_IP=127.0.0.1 to solve this; at least the warning
 disappears, but the script keeps executing locally, not on the cluster.

 I think it has something to do with my virtual server:
 - Host server: Linux Mint
 - The virtual server (Workstation 10) where Spark runs is Linux Mint as
 well.

 Any idea what I am doing wrong?

 Thanks in advance for any suggestions; this is driving me mad!




 --
 View this message in context:
 http://apache-spark-user-list.1001560.n3.nabble.com/Problem-submiting-an-script-py-against-an-standalone-cluster-tp24091.html
 Sent from the Apache Spark User List mailing list archive at Nabble.com.





 --
 Marcelo



Re: Problem submitting a script .py against a standalone cluster.

2015-07-30 Thread Marcelo Vanzin
Can you share the part of the code in your script where you create the
SparkContext instance?

On Thu, Jul 30, 2015 at 7:19 PM, fordfarline fordfarl...@gmail.com wrote:

 Hi All,

 I'm having an issue when launching an app (Python) against a standalone
 cluster: it runs locally instead, as it never reaches the cluster.
 It's the first time I've tried the cluster; in local mode it works fine.

 This is what I did:

 - /home/user/Spark/spark-1.3.0-bin-hadoop2.4/sbin/start-all.sh  # Master and
 worker are up at localhost:8080/4040
 - /home/user/Spark/spark-1.3.0-bin-hadoop2.4/bin/spark-submit --master
 spark://localhost:7077 Script.py
   * The script runs fine, but only in local mode :(  I can see it at
 localhost:4040, but I don't see any job in the cluster UI.

 The only warning is:
 WARN Utils: Your hostname, localhost resolves to a loopback address:
 127.0.0.1; using 192.168.1.132 instead (on interface eth0)

 I set SPARK_LOCAL_IP=127.0.0.1 to solve this; at least the warning
 disappears, but the script keeps executing locally, not on the cluster.

 I think it has something to do with my virtual server:
 - Host server: Linux Mint
 - The virtual server (Workstation 10) where Spark runs is Linux Mint as
 well.

 Any idea what I am doing wrong?

 Thanks in advance for any suggestions; this is driving me mad!








-- 
Marcelo


Re: Problem submitting a script .py against a standalone cluster.

2015-07-30 Thread Anh Hong
You might want to run spark-submit with the option --deploy-mode cluster.
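
For example, something along these lines (assuming the Spark release in use
accepts cluster deploy mode for Python applications on a standalone master;
some releases only allow it for Scala/Java drivers):

/home/user/Spark/spark-1.3.0-bin-hadoop2.4/bin/spark-submit \
  --master spark://localhost:7077 \
  --deploy-mode cluster \
  Script.py

With --deploy-mode cluster the driver is launched on one of the workers
instead of on the machine you submit from.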
 


 On Thursday, July 30, 2015 7:24 PM, Marcelo Vanzin van...@cloudera.com wrote:

 Can you share the part of the code in your script where you create the 
SparkContext instance?
On Thu, Jul 30, 2015 at 7:19 PM, fordfarline fordfarl...@gmail.com wrote:

Hi All,

I'm having an issue when launching an app (Python) against a standalone
cluster: it runs locally instead, as it never reaches the cluster.
It's the first time I've tried the cluster; in local mode it works fine.

This is what I did:

- /home/user/Spark/spark-1.3.0-bin-hadoop2.4/sbin/start-all.sh  # Master and
worker are up at localhost:8080/4040
- /home/user/Spark/spark-1.3.0-bin-hadoop2.4/bin/spark-submit --master
spark://localhost:7077 Script.py
  * The script runs fine, but only in local mode :(  I can see it at
localhost:4040, but I don't see any job in the cluster UI.

The only warning is:
WARN Utils: Your hostname, localhost resolves to a loopback address:
127.0.0.1; using 192.168.1.132 instead (on interface eth0)

I set SPARK_LOCAL_IP=127.0.0.1 to solve this; at least the warning disappears,
but the script keeps executing locally, not on the cluster.

I think it has something to do with my virtual server:
- Host server: Linux Mint
- The virtual server (Workstation 10) where Spark runs is Linux Mint as
well.

Any idea what I am doing wrong?

Thanks in advance for any suggestions; this is driving me mad!









-- 
Marcelo