[
https://issues.apache.org/jira/browse/PIG-4903?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15301784#comment-15301784
]
Srikanth Sundarrajan commented on PIG-4903:
-------------------------------------------
[~kellyzly], If spark-assembly is available, we can skip this completely and
simplify. A modified version of the pig bin script runs in our environment; I
am attaching it here for quick reference. If it makes sense, I can attach the
diff as a patch against this jira.
{code}
if [ -n "$SPARK_HOME" ]; then
    echo "Using Spark Home: ${SPARK_HOME}"
    export SPARK_JARS=`ls ${SPARK_HOME}/lib/spark-assembly*`
    CLASSPATH=${CLASSPATH}:${SPARK_JARS}
    export SPARK_HOME
fi

if [ -z "$SPARK_HOME" ]; then
    for f in $PIG_HOME/lib/spark/*.jar; do
        if [[ $f == $PIG_HOME/lib/spark/spark-yarn* ]]; then
            # Exclude spark-yarn.jar from shipped jars, but retain it in the classpath
            SPARK_JARS=${SPARK_JARS}:$f
        else
            SPARK_JARS=${SPARK_JARS}:$f
            SPARK_YARN_DIST_FILES=${SPARK_YARN_DIST_FILES},file://$f
            SPARK_DIST_CLASSPATH=${SPARK_DIST_CLASSPATH}:./`basename $f`
        fi
    done
    for f in $PIG_HOME/lib/*.jar; do
        SPARK_JARS=${SPARK_JARS}:$f
        SPARK_YARN_DIST_FILES=${SPARK_YARN_DIST_FILES},file://$f
        SPARK_DIST_CLASSPATH=${SPARK_DIST_CLASSPATH}:./`basename $f`
    done
    CLASSPATH=${CLASSPATH}:${SPARK_JARS}
    export SPARK_YARN_DIST_FILES=`echo ${SPARK_YARN_DIST_FILES} | sed 's/^,//g'`
    export SPARK_JARS=${SPARK_YARN_DIST_FILES}
    export SPARK_DIST_CLASSPATH
fi
{code}
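For reference, a minimal sketch of how the two branches get exercised (the SPARK_HOME path and script name below are illustrative assumptions, not part of the patch):
{code}
# Case 1: SPARK_HOME is set -- only the spark-assembly jar is put on the
# classpath; nothing extra is shipped through the distributed cache.
export SPARK_HOME=/opt/spark-1.6.1   # illustrative path
bin/pig -x spark myscript.pig        # illustrative script name

# Case 2: SPARK_HOME is unset -- every jar under $PIG_HOME/lib and
# $PIG_HOME/lib/spark is shipped via SPARK_YARN_DIST_FILES and added to
# SPARK_DIST_CLASSPATH, except spark-yarn, which stays classpath-only.
unset SPARK_HOME
bin/pig -x spark myscript.pig
{code}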
> Avoid adding all spark dependency jars to SPARK_YARN_DIST_FILES and
> SPARK_DIST_CLASSPATH
> --------------------------------------------------------------------------------------
>
> Key: PIG-4903
> URL: https://issues.apache.org/jira/browse/PIG-4903
> Project: Pig
> Issue Type: Sub-task
> Components: spark
> Reporter: liyunzhang_intel
>
> There are some comments about bin/pig on
> https://reviews.apache.org/r/45667/#comment198955.
> {code}
> ################# ADDING SPARK DEPENDENCIES ##################
> # Spark typically works with a single assembly file. However, this
> # assembly isn't available as an artifact to pull in via ivy.
> # To work around this shortcoming, we add all the jars barring
> # spark-yarn to DIST through dist-files and then add them to the
> # classpath of the executors through an independent env variable.
> # The reason for excluding spark-yarn is that spark-yarn is already
> # being added by the spark-yarn-client via jarOf(Client.Class)
> for f in $PIG_HOME/lib/*.jar; do
>     if [[ $f == $PIG_HOME/lib/spark-assembly* ]]; then
>         # Exclude spark-assembly.jar from shipped jars, but retain it in the classpath
>         SPARK_JARS=${SPARK_JARS}:$f
>     else
>         SPARK_JARS=${SPARK_JARS}:$f
>         SPARK_YARN_DIST_FILES=${SPARK_YARN_DIST_FILES},file://$f
>         SPARK_DIST_CLASSPATH=${SPARK_DIST_CLASSPATH}:\${PWD}/`basename $f`
>     fi
> done
> CLASSPATH=${CLASSPATH}:${SPARK_JARS}
> export SPARK_YARN_DIST_FILES=`echo ${SPARK_YARN_DIST_FILES} | sed 's/^,//g'`
> export SPARK_JARS=${SPARK_YARN_DIST_FILES}
> export SPARK_DIST_CLASSPATH
> {code}
> Here we first ship all the spark dependency jars, e.g.
> spark-network-shuffle_2.10-1.6.1.jar, to the distributed cache
> (SPARK_YARN_DIST_FILES) and then add them to the executor classpath
> (SPARK_DIST_CLASSPATH). Actually we need not ship all these dependency jars,
> because they are already included in spark-assembly.jar, and
> spark-assembly.jar is uploaded with the spark job.
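> One quick way to see the redundancy (a hedged check; the assembly file name
> and the grepped package below are illustrative):
> {code}
> # Classes from spark-network-shuffle_2.10-1.6.1.jar already live inside
> # the assembly, so shipping the standalone jar via dist-files only
> # duplicates bytes on the cluster.
> unzip -l ${SPARK_HOME}/lib/spark-assembly-1.6.1-hadoop2.6.0.jar \
>   | grep 'org/apache/spark/network/shuffle/' | head
> {code}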