Hi there,
You may try using Nexus to set up a local Maven repository. I think this link 
would be helpful: 
http://www.sonatype.org/nexus/2015/02/27/setup-local-nexus-repository-and-deploy-war-file-from-maven/
 

Once the repository is set up, you can use the maven-deploy-plugin to deploy 
your customized Hadoop jar and its corresponding pom.xml to the Nexus 
repository. Check this link for 
reference: 
https://books.sonatype.com/nexus-book/reference/staging-deployment.html 
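As a concrete sketch of that deploy step (the file names, repository id, and 
URL below are placeholders, not values from this thread), publishing a locally 
built Hadoop jar together with its pom typically looks like:

```shell
# Deploy a custom-built Hadoop jar plus its pom.xml to a Nexus hosted repo.
# The jar/pom paths, repositoryId, and url are hypothetical; substitute your own.
# With -DpomFile given, the groupId/artifactId/version are read from that pom.
mvn deploy:deploy-file \
  -Dfile=hadoop-common-3.0.0-SNAPSHOT.jar \
  -DpomFile=hadoop-common-3.0.0-SNAPSHOT.pom \
  -DrepositoryId=my-nexus \
  -Durl=http://localhost:8081/nexus/content/repositories/snapshots/
```

The repositoryId must match a `<server>` entry in your ~/.m2/settings.xml that 
holds the Nexus credentials.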




fightf...@163.com
 
From: Lu, Yingqi
Date: 2016-03-08 15:23
To: fightf...@163.com; user
Subject: RE: How to compile Spark with private build of Hadoop
Thank you for the quick reply. I am very new to Maven and always use the 
default settings. Can you please be a little more specific in the instructions?
 
I think all the jar files from the Hadoop build are located under 
Hadoop-3.0.0-SNAPSHOT/share/hadoop. Which ones do I need to use to compile 
Spark, and how should I change the pom.xml?
 
Thanks,
Lucy
 
From: fightf...@163.com [mailto:fightf...@163.com] 
Sent: Monday, March 07, 2016 11:15 PM
To: Lu, Yingqi <yingqi...@intel.com>; user <user@spark.apache.org>
Subject: Re: How to compile Spark with private build of Hadoop
 
I think you can set up your own Maven repository and deploy your modified 
Hadoop binary jar 
under your own version number. Then you can add your repository to the Spark 
pom.xml and build with 
mvn -Dhadoop.version=
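A minimal sketch of those two steps (the repository id, URL, and version 
number are assumptions for illustration, not values from this thread). First, 
add your repository to the `<repositories>` section of Spark's pom.xml:

```xml
<!-- Placeholder id and url: point these at your own Nexus instance. -->
<repository>
  <id>my-nexus-snapshots</id>
  <url>http://localhost:8081/nexus/content/repositories/snapshots/</url>
  <snapshots>
    <enabled>true</enabled>
  </snapshots>
</repository>
```

Then build Spark against the version you deployed, for example 
`mvn -Dhadoop.version=3.0.0-SNAPSHOT -DskipTests clean package` (the version 
string here is a placeholder; use whatever version number you gave your 
custom jar). Maven should then resolve the Hadoop artifacts from your 
repository instead of Maven Central.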
 


fightf...@163.com
 
From: Lu, Yingqi
Date: 2016-03-08 15:09
To: user@spark.apache.org
Subject: How to compile Spark with private build of Hadoop
Hi All,
 
I am new to Spark and I have a question regarding to compile Spark. I modified 
trunk version of Hadoop source code. How can I compile Spark (standalone mode) 
with my modified version of Hadoop (HDFS, Hadoop-common and etc.)?
 
Thanks a lot for your help!
 
Thanks,
Lucy
 
 
 
 
