For Hortonworks, I believe it should work to just link against the
corresponding upstream version, i.e. just set the Hadoop version to 2.4.0.
Does that work?
- Patrick
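For reference (not part of the original message), the invocation Patrick is suggesting would look roughly like the following, assuming the Maven build and the `hadoop-2.4`/`yarn` profiles that Spark's build documentation described at the time; adjust profile names for other releases. This is a build-command sketch, not a verified recipe for this exact HDP setup:

```shell
# Build Spark against stock upstream Hadoop/YARN 2.4.0, with no
# vendor-specific version string. -DskipTests speeds up the build.
mvn -Pyarn -Phadoop-2.4 -Dhadoop.version=2.4.0 -DskipTests clean package
```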
On Mon, Aug 4, 2014 at 12:13 AM, Ron's Yahoo! zlgonza...@yahoo.com.invalid
wrote:
Hi,
Not sure whose issue this is, but if
That failed, since it defaulted the versions for YARN and Hadoop.
I’ll give it a try with just 2.4.0 for both YARN and Hadoop…
Thanks,
Ron
On Aug 4, 2014, at 9:44 AM, Patrick Wendell pwend...@gmail.com wrote:
Can you try building without any of the special `hadoop.version` flags and just
I meant that yarn.version and hadoop.version defaulted to 1.0.4, so the YARN
build fails, since 1.0.4 doesn’t exist for YARN...
Thanks,
Ron
On Aug 4, 2014, at 10:01 AM, Ron's Yahoo! zlgonza...@yahoo.com wrote:
That failed, since it defaulted the versions for YARN and Hadoop.
I’ll give it a try with just 2.4.0 for
Subject: Re: Issues with HDP 2.4.0.2.1.3.0-563
Ah I see, yeah you might need to set hadoop.version and yarn.version. I
thought the profile set this automatically.
On Mon, Aug 4, 2014 at 10:02 AM, Ron's Yahoo! zlgonza...@yahoo.com wrote:
I meant
The profile does set it automatically:
https://github.com/apache/spark/blob/master/pom.xml#L1086
yarn.version should default to hadoop.version
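As a simplified sketch of what Sean is pointing at (the linked pom is authoritative; exact property values may differ by release), the relevant pieces look roughly like this:

```xml
<!-- Sketch of the relevant pom.xml pieces; see the linked file for the real text. -->
<properties>
  <!-- yarn.version falls back to whatever hadoop.version resolves to -->
  <yarn.version>${hadoop.version}</yarn.version>
</properties>

<profile>
  <id>hadoop-2.4</id>
  <properties>
    <hadoop.version>2.4.0</hadoop.version>
    <protobuf.version>2.5.0</protobuf.version>
  </properties>
</profile>
```

So activating `-Phadoop-2.4` sets hadoop.version, and yarn.version follows it unless overridden explicitly.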
It shouldn't hurt, and should work, to set it to any other specific
version. If one HDP version works and another doesn't, are you sure
the repo has the
What would such a profile do though? In general building for a
specific vendor version means setting hadoop.version and/or
yarn.version. Any hard-coded value is unlikely to match what a
particular user needs. Setting protobuf versions and so on is already
done by the generic profiles.
In a
Hmm. Fair enough. I hadn't given that answer much thought and on
reflection think you're right in that a profile would just be a bad hack.
On 8/4/14, 10:35, Sean Owen so...@cloudera.com wrote:
What would such a profile do though? In general building for a
specific vendor version means setting
One key thing I forgot to mention is that I changed the avro version to 1.7.7
to get AVRO-1476.
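To see which classes actually made it into a given assembly, you can just list the jar's contents. Here is a small helper sketch (the function name and example path are hypothetical, not from the thread); since a jar is a zip archive, `python3 -m zipfile` works even where the JDK's `jar` tool is absent:

```shell
# has_avro_mapreduce <jar>: print "bundled" if the assembly contains classes
# under org/apache/avro/mapreduce/, otherwise "MISSING".
# (Hypothetical helper; a jar file is an ordinary zip archive.)
has_avro_mapreduce() {
  if python3 -m zipfile -l "$1" 2>/dev/null | grep -q 'org/apache/avro/mapreduce/'; then
    echo "bundled"
  else
    echo "MISSING"
  fi
}

# Example (path is illustrative):
# has_avro_mapreduce assembly/target/spark-assembly-1.0.1-hadoop2.4.0.jar
```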
I took a closer look at the jars, and what I noticed is that the assembly jars
that work do not have the org.apache.avro.mapreduce package packaged into the
assembly. For spark-1.0.1,