Hi All,
I'm getting the following error when I execute start-master.sh, which also
invokes spark-class at the end.

Failed to find Spark assembly in /root/spark/assembly/target/scala-2.10/
You need to build Spark with 'sbt/sbt assembly' before running this program.
After digging into the code, I see the CLASSPATH is hardcoded to match
"spark-assembly.*hadoop.*.jar". In bin/spark-class:
if [ ! -f "$FWDIR/RELEASE" ]; then
  # Exit if the user hasn't compiled Spark
  num_jars=$(ls "$FWDIR"/assembly/target/scala-$SCALA_VERSION/ | grep "spark-assembly.*hadoop.*.jar" | wc -l)
  jars_list=$(ls "$FWDIR"/assembly/target/scala-$SCALA_VERSION/ | grep "spark-assembly.*hadoop.*.jar")
  if [ "$num_jars" -eq "0" ]; then
    echo "Failed to find Spark assembly in $FWDIR/assembly/target/scala-$SCALA_VERSION/" >&2
    echo "You need to build Spark with 'sbt/sbt assembly' before running this program." >&2
    exit 1
  fi
  if [ "$num_jars" -gt "1" ]; then
    echo "Found multiple Spark assembly jars in $FWDIR/assembly/target/scala-$SCALA_VERSION:" >&2
    echo "$jars_list"
    echo "Please remove all but one jar."
    exit 1
  fi
fi
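
(For context, the same check can be reproduced by hand; the path below is just the one from the error message on my install, and the grep pattern is the one from the script:)

ls /root/spark/assembly/target/scala-2.10/ | grep "spark-assembly.*hadoop.*.jar"
# if this prints nothing, spark-class exits with the
# "Failed to find Spark assembly" message I'm seeing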
Is there any reason why this only looks for spark-assembly.*hadoop.*.jar? I
am trying to run Spark linked against my own version of Hadoop under
/opt/hadoop23/, and I use 'sbt/sbt clean package' to build the package without
the Hadoop jar. What is the correct way to link to my own Hadoop jar?
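
For reference, here is a rough sketch of my current build versus what I think the script expects, based on the Spark build docs as I understand them (the Hadoop version number below is only a placeholder for whatever is under /opt/hadoop23):

# what I run today -- no assembly jar gets produced, so the grep above fails:
sbt/sbt clean package

# what I believe spark-class wants: a full assembly built against a chosen
# Hadoop version, e.g. via SPARK_HADOOP_VERSION (version is a placeholder):
SPARK_HADOOP_VERSION=2.3.0 sbt/sbt assembly

Is that the intended workflow, or is there a supported way to point the classpath at an external Hadoop installation instead?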