Hi,
I am just playing around with the code in Spark. I am adding print
statements to the example code that ships with Spark so I can see what it
is doing.
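To give a concrete idea of the kind of change I make, here is a sketch
loosely based on the bundled SparkPi example with an extra println added;
the file path and values are only for illustration, not my exact change:

// examples/src/main/scala/org/apache/spark/examples/SparkPi.scala (illustrative)
package org.apache.spark.examples

import scala.math.random
import org.apache.spark.{SparkConf, SparkContext}

object SparkPi {
  def main(args: Array[String]) {
    val conf = new SparkConf().setAppName("Spark Pi")
    val sc = new SparkContext(conf)
    val slices = if (args.length > 0) args(0).toInt else 2
    val n = 100000 * slices

    // Estimate Pi by sampling random points in the unit square
    val count = sc.parallelize(1 to n, slices).map { _ =>
      val x = random * 2 - 1
      val y = random * 2 - 1
      if (x * x + y * y < 1) 1 else 0
    }.reduce(_ + _)

    // The sort of statement I keep adding, just to inspect intermediate values:
    println("count = " + count + ", estimate of Pi = " + 4.0 * count / n)

    sc.stop()
  }
}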
Every time I change/add something to the code I have to run the command

*SPARK_HADOOP_VERSION=2.3.0 sbt/sbt assembly*

which is tiresome at times.
Is there any way to try out the code, or even add new code (a new file in
the examples directory) to an existing Spark build, without having to run
sbt/sbt assembly each time? Please cover both a single-node setup and a
multi-node cluster.

Thank You
