Hi,
When I publish my version of Hadoop, it is installed in
/HOME_DIRECTORY/.m2/repository/org/apache/hadoop, but when I compile Spark,
it fetches the Hadoop libraries from
https://repo1.maven.org/maven2/org/apache/hadoop. How can I make Spark fetch
the Hadoop libraries from my local M2 cache instead? Great thanks!
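A minimal sketch of the fix (the version number 2.7.0a is just an example):
Maven always resolves from the local ~/.m2/repository before falling back to
repo1.maven.org, so the trick is to make Spark's build ask for exactly the
version you installed locally.

  # confirm the locally installed artifacts are where Maven will look
  ls ~/.m2/repository/org/apache/hadoop/hadoop-client/2.7.0a/
  # then pass that exact version to Spark's build
  build/mvn -Dhadoop.version=2.7.0a -DskipTests clean package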
There is a "without Hadoop" build of Spark; you can use that to link
against any custom Hadoop version.
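A sketch of how that works (the Hadoop path is illustrative): the "without
Hadoop" distribution expects you to supply the Hadoop jars yourself via
SPARK_DIST_CLASSPATH, e.g. in conf/spark-env.sh:

  # put the jars from your custom Hadoop installation on Spark's classpath
  export SPARK_DIST_CLASSPATH=$(/path/to/my/hadoop/bin/hadoop classpath)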
Raghav
On Oct 10, 2015 5:34 PM, "Steve Loughran" wrote:
During development, I'd recommend giving Hadoop a version ending in
-SNAPSHOT and building Spark with Maven, as mvn knows to refresh snapshot
dependencies once a day.
You can do this in Hadoop with

  mvn versions:set -DnewVersion=2.7.0.stevel-SNAPSHOT

If you are working on Hadoop branch-2 or trunk directly, they already use
-SNAPSHOT version numbers, so you can skip this step.
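With a snapshot version the edit-and-rebuild loop is then: mvn install in
the Hadoop tree, rebuild Spark. A sketch of the Spark side (-U makes Maven
re-check snapshots immediately rather than waiting for the daily refresh):

  # rebuild Spark against the freshly installed Hadoop snapshot
  build/mvn -U -Dhadoop.version=2.7.0.stevel-SNAPSHOT -DskipTests package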
You can publish your version of Hadoop to your local Maven cache with mvn
install (just give it a different version number, e.g. 2.7.0a) and then pass
that as the Hadoop version to Spark's build (see
http://spark.apache.org/docs/latest/building-spark.html).
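A sketch of that step, using the example version number from above (pick
anything that cannot collide with a real Apache release):

  # in the Hadoop source tree
  mvn versions:set -DnewVersion=2.7.0a   # rename every module to the private version
  mvn install -DskipTests                # install the artifacts into ~/.m2/repository

After that, -Dhadoop.version=2.7.0a on Spark's build resolves the jars from
the local cache rather than from repo1.maven.org.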
Hi all,
I have modified the Hadoop source code, and I want to compile Spark against
my modified Hadoop. Do you know how to do that? Great thanks!