Re: I'm trying to understand how to compile Spark

2016-07-19 Thread Jacek Laskowski
Hi, the hadoop-2.7 profile would be more current. You don't need to set hadoop.version when the defaults are fine; for the hadoop-2.7 profile it defaults to 2.7.2. Jacek
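Written out in full, the build Jacek describes would look something like the following (a sketch: the profile name and the 2.7.2 default come from his message; the remaining flags mirror the hadoop-2.6 invocation given elsewhere in the thread):

```shell
# Sketch of a build against the hadoop-2.7 profile. Because 2.7.2 is that
# profile's default Hadoop version, no -Dhadoop.version flag is needed.
build/mvn -Pyarn -Phadoop-2.7 -DskipTests package
```

This must be run from the root of a Spark source checkout, since build/mvn is the wrapper script shipped with the source tree.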

Re: I'm trying to understand how to compile Spark

2016-07-19 Thread Jakob Odersky
Hi Eli, to build Spark, just run

  build/mvn -Pyarn -Phadoop-2.6 -Dhadoop.version=2.6.0 -DskipTests package

in your source directory, where "package" is the literal word "package". This will recompile the whole project, so it may take a while when running for the first time. Replacing a single file

Re: I'm trying to understand how to compile Spark

2016-07-19 Thread Ted Yu
org.apache.spark.mllib.fpm is not a Maven goal. The -pl flag is for building individual projects; your first build should not include -pl.
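Put together, Ted's point suggests a two-step sequence along these lines (a sketch; the module selector :spark-mllib_2.10 is an assumption for Spark 1.4.1's Scala 2.10 build, not something stated in the thread):

```shell
# Step 1: the first build covers the whole project, with no -pl flag,
# so that inter-module dependencies are all compiled and installed.
build/mvn -Pyarn -Phadoop-2.6 -Dhadoop.version=2.6.0 -DskipTests package

# Step 2: later rebuilds can be restricted to a single module with -pl.
# The module name below is hypothetical; check the <artifactId> in
# mllib/pom.xml for the exact value.
build/mvn -pl :spark-mllib_2.10 -DskipTests package
```

Maven's -pl accepts either a relative module path or a :artifactId selector; either form works once the full build has run at least once.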

I'm trying to understand how to compile Spark

2016-07-19 Thread Eli Super
Hi, I have a Windows laptop. I just downloaded the Spark 1.4.1 source code, and I am trying to compile org.apache.spark.mllib.fpm with mvn. My goal is to replace the original org\apache\spark\mllib\fpm\* in spark-assembly-1.4.1-hadoop2.6.0.jar. As I understand from this link
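One way to carry out the jar replacement Eli describes, sketched under the assumption that the recompiled .class files sit in a local classes/ directory (the paths here are illustrative, not taken from the thread), is the jar tool's update mode:

```shell
# Hypothetical sketch: overwrite the fpm classes inside the assembly jar.
# Run from the directory holding the recompiled class tree, so that the
# entry paths stored in the jar (org/apache/spark/mllib/fpm/...) line up.
cd classes
jar uf /path/to/spark-assembly-1.4.1-hadoop2.6.0.jar \
    org/apache/spark/mllib/fpm/*.class
```

jar uf adds or replaces the named entries in place, which avoids rebuilding the whole assembly; the class files must be binary-compatible with the rest of the 1.4.1 assembly for this to work at runtime.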