On Wed, Aug 24, 2016 at 12:12 PM, Sean Owen wrote:

If you're just varying versions (or things that can be controlled by a
profile, which is most everything including dependencies), you don't
need and probably don't want multiple POM files. Even that wouldn't
mean you can't use classifiers.

I have seen it used for HBase, core Hadoop. I am not sure
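Sean's point about controlling the Spark version through a profile rather than separate POM files could look something like this sketch (profile ids and version numbers here are illustrative assumptions, not any project's actual build configuration):

```xml
<!-- Hypothetical sketch only: a single POM whose Spark dependency
     version is switched by profile, instead of maintaining separate
     POM files. Profile ids and versions are illustrative. -->
<profiles>
  <profile>
    <id>spark-1.x</id>
    <activation>
      <activeByDefault>true</activeByDefault>
    </activation>
    <properties>
      <spark.version>1.6.2</spark.version>
    </properties>
  </profile>
  <profile>
    <id>spark-2.x</id>
    <properties>
      <spark.version>2.0.0</spark.version>
    </properties>
  </profile>
</profiles>
<!-- ...elsewhere in the same POM, the dependency picks up the property: -->
<dependency>
  <groupId>org.apache.spark</groupId>
  <artifactId>spark-core_2.11</artifactId>
  <version>${spark.version}</version>
</dependency>
```

A Spark 2.x build would then be selected on the command line with `mvn -Pspark-2.x package`.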
Have you seen any successful applications of this for Spark 1.x/2.x?

From the doc: "The classifier allows to distinguish artifacts that were
built from the same POM but differ in their content."

We'd be building from different POMs, since we'd be modifying the Spark
dependency version (and
This is also what "classifiers" are for in Maven, to have variations
on one artifact and version. https://maven.apache.org/pom.html
It has been used to ship code for Hadoop 1 vs 2 APIs.
In a way it's the same idea as Scala's "_2.xx" naming convention, with
a less unfortunate implementation.
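As a sketch of what the classifier approach would look like from a consumer's side (the classifier name "spark2" and the version shown are assumptions for illustration, not an established convention):

```xml
<!-- Hypothetical sketch: depending on a Spark 2 variant published
     under a classifier. Classifier name and version are illustrative. -->
<dependency>
  <groupId>org.bdgenomics.adam</groupId>
  <artifactId>adam-core_2.11</artifactId>
  <version>0.20.0</version>
  <classifier>spark2</classifier>
</dependency>
```

With this scheme the groupId, artifactId, and version stay the same across Spark variants; only the classifier distinguishes them, much like the Scala "_2.xx" suffix distinguishes Scala variants within the artifactId.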
Ah yes, thank you for the clarification.
On Wed, Aug 24, 2016 at 11:44 AM, Ted Yu wrote:
> 'Spark 1.x and Scala 2.10 & 2.11' was repeated.
>
> I guess your second line should read:
>
> org.bdgenomics.adam:adam-{core,apis,cli}-spark2_2.1[0,1] for Spark 2.x
> and Scala 2.10 & 2.11
On Wed, Aug 24, 2016 at 9:41 AM, Michael Heuer wrote:

Hello,

We're a project downstream of Spark and need to provide separate artifacts
for Spark 1.x and Spark 2.x. Has any convention been established or even
proposed for artifact names and/or qualifiers?

We are currently thinking:

org.bdgenomics.adam:adam-{core,apis,cli}_2.1[0,1] for Spark 1.x