> On 8 Oct 2015, at 19:31, sbiookag <sbioo...@asu.edu> wrote:
> 
> Thanks, Ted, for the reply.
> 
> But this is not what I want. That would tell Spark to read the Hadoop
> dependency from the Maven repository, which is the original version of
> Hadoop. I myself am modifying the Hadoop code and want to include my
> changes inside the Spark fat jar. "Spark-Class" runs the slaves with the
> fat jar created in the assembly folder, and that jar does not contain my
> modified classes.

It should, if you have built a local Hadoop version and built Spark with
-Phadoop-2.6 -Dhadoop.version=2.8.0-SNAPSHOT.
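A minimal sketch of the full sequence, assuming your modified Hadoop
checkout builds as version 2.8.0-SNAPSHOT (the paths are illustrative):

  # build the modified Hadoop and install it into the local ~/.m2 repo
  cd /path/to/hadoop
  mvn install -DskipTests

  # build the Spark assembly against that snapshot version
  cd /path/to/spark
  mvn -Phadoop-2.6 -Dhadoop.version=2.8.0-SNAPSHOT -DskipTests clean package

The -SNAPSHOT qualifier matters here: Maven treats snapshot versions as
mutable, so a fresh "mvn install" in the Hadoop tree will be picked up by
the next Spark build.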

If you are rebuilding Hadoop under an existing release number (e.g. 2.6.0,
2.7.1), then Maven may not actually be picking up your new code: release
versions are resolved once, so it will happily reuse the artifact already
cached in the local repository under that version.
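If you do need to keep the existing release number, one blunt but reliable
workaround (a sketch; adjust to your local repository layout) is to evict
the cached Hadoop artifacts so that your rebuilt ones are the only copy
Maven can find:

  # remove cached Hadoop artifacts from the local repository
  rm -rf ~/.m2/repository/org/apache/hadoop

  # reinstall your modified build, then rebuild the Spark assembly
  cd /path/to/hadoop && mvn install -DskipTests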


> 
> Something that confuses me is, why does Spark include the Hadoop classes
> in its built jar output? Isn't it supposed to read them from the Hadoop
> folder on each worker node?


There's a hadoop-provided profile you can build with; it should leave the
Hadoop artifacts (and the other dependencies expected to be on the
far-end's classpath) out of the assembly.
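A sketch of how that fits together (SPARK_DIST_CLASSPATH is the documented
hook for "Hadoop free" builds; the Hadoop path is illustrative):

  # build Spark without bundling the Hadoop jars
  mvn -Phadoop-provided -Phadoop-2.6 -Dhadoop.version=2.8.0-SNAPSHOT \
      -DskipTests clean package

  # on each worker, point Spark at the local Hadoop installation
  export SPARK_DIST_CLASSPATH=$(/path/to/hadoop/bin/hadoop classpath)

That way the workers run whatever Hadoop build is installed on them, which
is the behaviour you were expecting.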
