Hi,

When I publish my version of Hadoop, it is installed under
/HOME_DIRECTORY/.m2/repository/org/apache/hadoop, but when I compile Spark,
it still fetches the Hadoop libraries from
https://repo1.maven.org/maven2/org/apache/hadoop. How can I make Spark
resolve the Hadoop libraries from my local M2 cache instead? Many thanks!
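
For reference, Maven resolves artifacts from the local repository
(~/.m2/repository) before going to repo1.maven.org, but only when the
requested coordinates match exactly, so the key is to install the modified
Hadoop under a version number that does not exist upstream and then ask
Spark's build for exactly that version. A rough sketch, assuming the
modified Hadoop was installed as version 2.7.0a (add whatever profile flags
the building-spark page lists for your Spark version):

    # in the Spark source tree: build against the locally installed Hadoop
    ./build/mvn -Pyarn -Dhadoop.version=2.7.0a -DskipTests clean package

Since 2.7.0a is not on Maven Central, Maven can only satisfy that dependency
from the locally installed artifacts.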

On Fri, Oct 9, 2015 at 5:31 PM, Matei Zaharia <matei.zaha...@gmail.com>
wrote:

> You can publish your version of Hadoop to your local Maven cache with mvn
> install (just give it a different version number, e.g. 2.7.0a) and then
> pass that as the Hadoop version to Spark's build (see
> http://spark.apache.org/docs/latest/building-spark.html).
>
> Matei
>
> On Oct 9, 2015, at 3:10 PM, Dogtail L <spark.ru...@gmail.com> wrote:
>
> Hi all,
>
> I have modified the Hadoop source code, and I want to compile Spark
> against my modified Hadoop. Do you know how to do that? Thanks a lot!
>
>
>
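
For the local-install step described above, one possible sketch (the
versions-plugin step and the 2.7.0a version string are only illustrative)
is to bump the version in the modified Hadoop tree and install it into
~/.m2/repository:

    # in the modified Hadoop source tree
    mvn versions:set -DnewVersion=2.7.0a   # pick a version that doesn't exist upstream
    mvn install -DskipTests                # installs the jars under ~/.m2/repository

Spark's -Dhadoop.version=2.7.0a flag will then resolve against these local
artifacts rather than Maven Central.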
