[ https://issues.apache.org/jira/browse/SPARK-5510?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
hash-x updated SPARK-5510:
--------------------------
    Description: 
Reference: My question is: how can I fix the script so that I can submit the program to a Master from my laptop, instead of submitting it from a cluster node? Submitting the program from Node 2 works for me, but submitting it from my laptop does not. How can I fix this? Help!

I have read the email below, and I accept recommendation 1 (run spark-shell from a cluster node). However, I want to solve the problem using recommendation 2, and I am confused about how to do that.

Hi Ken,

This is unfortunately a limitation of spark-shell and the way it works in standalone mode. spark-shell sets an environment variable, SPARK_HOME, which tells Spark where to find its code installed on the cluster. This means that the path on your laptop must be the same as on the cluster, which is not the case. I recommend one of two things:

1) Either run spark-shell from a cluster node, where it will have the right path. (In general it's also better for performance to have it close to the cluster.)

2) Or, edit the spark-shell script and re-export SPARK_HOME right before it runs the Java command (ugly but will probably work).

  was:
Reference: My question is: how can I fix the script so that I can submit the program to a Master from my laptop, instead of submitting it from a cluster node? Submitting the program from Node 2 works for me, but submitting it from my laptop does not. How can I fix this? Help!

Hi Ken,

This is unfortunately a limitation of spark-shell and the way it works in standalone mode. spark-shell sets an environment variable, SPARK_HOME, which tells Spark where to find its code installed on the cluster. This means that the path on your laptop must be the same as on the cluster, which is not the case. I recommend one of two things:

1) Either run spark-shell from a cluster node, where it will have the right path.
(In general it's also better for performance to have it close to the cluster.)

2) Or, edit the spark-shell script and re-export SPARK_HOME right before it runs the Java command (ugly but will probably work).


> How can I fix the spark-submit script and then run the Spark app on a driver?
> -----------------------------------------------------------------------------
>
>                 Key: SPARK-5510
>                 URL: https://issues.apache.org/jira/browse/SPARK-5510
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Shell
>    Affects Versions: 1.0.2
>            Reporter: hash-x
>              Labels: Help!!!!!!!!!!
>             Fix For: 1.0.2
>

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org
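For reference, recommendation 2 amounts to a small edit inside your laptop's copy of the spark-shell launcher script. A minimal sketch of what that edit might look like, assuming Spark is installed at /opt/spark on the cluster nodes (a hypothetical path; substitute your own, and note that the real bin/spark-shell layout varies between Spark versions):

```shell
#!/usr/bin/env bash
# Sketch only: placed just before the launcher assembles the Java
# command, this overrides the laptop-local SPARK_HOME so worker nodes
# resolve Spark at the CLUSTER-side installation path instead.
# /opt/spark is an assumed path -- replace it with your cluster's.

export SPARK_HOME=/opt/spark          # cluster path, not the laptop path

echo "SPARK_HOME re-exported as: $SPARK_HOME"
# ... the original spark-shell logic that builds and execs the
# java command would follow here ...
```

Because the script re-exports SPARK_HOME unconditionally, every cluster machine must actually have Spark at that one path, which is exactly the "ugly but will probably work" caveat in the quoted email.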