[ https://issues.apache.org/jira/browse/SPARK-5510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14300108#comment-14300108 ]

hash-x edited comment on SPARK-5510 at 2/1/15 8:13 AM:
-------------------------------------------------------

On my Master and worker nodes, the Spark path is
/home/hadoop/Distribute/spark-1.0.2

while on my laptop the Spark path is
/home/hash-x/spark-1.0.2

How can I fix spark-submit so that the program runs correctly? Submitting
from Node 2 works for me, but I want to submit the program from my
laptop. I am confused. Help!
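For reference, a submission from the laptop to a standalone Master would
look something like the sketch below. The master hostname and the example
class/jar are placeholders I am assuming, not values taken from this
report:

{code}
# Hypothetical submission from the laptop; the master host and the
# examples jar path are placeholders, not values from this report.
# Note that this runs the laptop's own copy of spark-submit.
/home/hash-x/spark-1.0.2/bin/spark-submit \
  --class org.apache.spark.examples.SparkPi \
  --master spark://master-host:7077 \
  /home/hash-x/spark-1.0.2/lib/spark-examples-1.0.2-hadoop2.2.0.jar 100
{code}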

And I have another question: when I submit the program from Node 2 to the
Master, it runs correctly, but the web UI lists the application with ZERO
cores, even though it still FINISHES correctly. Why? If I run SparkPi on
my laptop, it runs correctly and the number of cores is 12. Why? I am
very confused.
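One general thing to check (my assumption, not a confirmed diagnosis of
this report): in standalone mode the Master web UI shows an application
with zero cores when it has granted that application no executor cores,
for example because the workers' cores are already claimed by other
applications. The total request can be capped at submit time:

{code}
# Hypothetical: cap the total cores requested in standalone mode, so the
# Master can grant the application resources on a partially busy cluster.
# Host and jar path are placeholders, as above.
/home/hash-x/spark-1.0.2/bin/spark-submit \
  --class org.apache.spark.examples.SparkPi \
  --master spark://master-host:7077 \
  --total-executor-cores 4 \
  /home/hash-x/spark-1.0.2/lib/spark-examples-1.0.2-hadoop2.2.0.jar 100
{code}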


> How can I fix the spark-submit script and then run the Spark app on a 
> driver?
> ---------------------------------------------------------------------------------
>
>                 Key: SPARK-5510
>                 URL: https://issues.apache.org/jira/browse/SPARK-5510
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Shell
>    Affects Versions: 1.0.2
>            Reporter: hash-x
>              Labels: Help!!!!!!!!!!
>             Fix For: 1.0.2
>
>
> Reference: My question is, how can I fix the script so that I can submit
> the program to the Master from my laptop, rather than from a cluster
> node? Submitting the program from Node 2 works for me, but the laptop
> does not! What can I do to fix this? Help!
> Hi Ken,
> This is unfortunately a limitation of spark-shell and the way it works in
> standalone mode. spark-shell sets an environment variable, SPARK_HOME,
> which tells Spark where to find its code installed on the cluster. This
> means that the path on your laptop must be the same as on the cluster,
> which is not the case. I recommend one of two things:
> 1) Either run spark-shell from a cluster node, where it will have the
> right path. (In general it's also better for performance to have it close
> to the cluster.)
> 2) Or, edit the spark-shell script and re-export SPARK_HOME right before
> it runs the Java command (ugly but will probably work).
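As for what option 2 might look like concretely, here is a minimal sketch
(my assumption about the edit, not a tested patch): in the laptop's copy
of bin/spark-shell, force SPARK_HOME to the cluster-side path just before
the script hands off to the launch command. The exact launch line varies
by Spark version.

{code}
# Minimal sketch of option 2, in bin/spark-shell on the laptop (an
# assumption, not a tested patch): override SPARK_HOME with the path that
# exists on the cluster, right before the script launches the Java command.
export SPARK_HOME=/home/hadoop/Distribute/spark-1.0.2   # cluster-side path

# ... the script's existing launch line follows here (in 1.0.x it hands
# off to spark-submit, which ultimately runs the Java command).
{code}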


