Thank you for the quick reply. I am very new to Maven and always use the
default settings. Could you please be a little more specific in the instructions?

I think all the jar files from the Hadoop build are located under
Hadoop-3.0.0-SNAPSHOT/share/hadoop. Which ones do I need to compile Spark,
and how should I change the pom.xml?

Thanks,
Lucy

From: fightf...@163.com [mailto:fightf...@163.com]
Sent: Monday, March 07, 2016 11:15 PM
To: Lu, Yingqi <yingqi...@intel.com>; user <user@spark.apache.org>
Subject: Re: How to compile Spark with private build of Hadoop

I think you can set up your own Maven repository and deploy your modified
Hadoop jars there under your modified version number. Then you can add that
repository to the Spark pom.xml and build with
mvn -Dhadoop.version=
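A simpler variant of the steps above, which avoids hosting a remote repository, is to install the modified Hadoop artifacts into the local Maven repository (~/.m2) and point the Spark build at them. This is only a sketch: the version string 3.0.0-SNAPSHOT and the hadoop-2.7 profile are examples and must be matched to your actual Hadoop version and the profiles your Spark release supports.

```shell
# From the root of your modified Hadoop source tree:
# build and install all Hadoop modules (hadoop-common, hadoop-hdfs, ...)
# into the local Maven repository under the version set in the Hadoop poms.
mvn install -DskipTests

# From the root of the Spark source tree:
# build Spark against that locally installed Hadoop version.
# The profile name and version below are assumptions -- adjust to your setup.
./build/mvn -Pyarn -Phadoop-2.7 -Dhadoop.version=3.0.0-SNAPSHOT \
  -DskipTests clean package
```

Because Maven resolves the local repository before remote ones, Spark should pick up your modified jars as long as the -Dhadoop.version value exactly matches the version installed in the first step.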

________________________________
fightf...@163.com<mailto:fightf...@163.com>

From: Lu, Yingqi<mailto:yingqi...@intel.com>
Date: 2016-03-08 15:09
To: user@spark.apache.org<mailto:user@spark.apache.org>
Subject: How to compile Spark with private build of Hadoop
Hi All,

I am new to Spark and have a question about compiling it. I have modified the
trunk version of the Hadoop source code. How can I compile Spark (standalone
mode) against my modified version of Hadoop (HDFS, Hadoop Common, etc.)?

Thanks a lot for your help!

Thanks,
Lucy



