It would seem the 2nd option is preferable, if doable. Any option that has the
most desirable combinations prebuilt is preferable, I guess. Spark itself also
releases plenty of Hadoop-profile binary variations, so I don't have to build
one myself.
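
For what it's worth, a minimal sketch of what consuming the dl4j-style
coordinates (option 2, quoted below) might look like from a downstream sbt
build. This is hypothetical, assuming the artifact ids and version suffixes
listed in the quoted mail:

    // Scala binary version is appended by %%; the Spark major version
    // is encoded in the artifact version suffix (hypothetical coordinates).
    libraryDependencies += "org.apache.mahout" %% "mahout-spark" % "0.13.1_spark_2"

    // or, spelled out explicitly for a Scala 2.10 / Spark 1.x project:
    libraryDependencies += "org.apache.mahout" % "mahout-spark_2.10" % "0.13.1_spark_1"

That way a project only has to pick the suffix matching its Spark major
version and let sbt resolve the Scala suffix.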

On Fri, Jul 7, 2017 at 8:57 AM, Trevor Grant <trevor.d.gr...@gmail.com>
wrote:

> Hey all,
>
> Working on releasing 0.13.1 with multiple spark/scala combos.
>
> Afaik, there is no 'standard' for multiple spark versions (but I may be
> wrong, I don't claim expertise here).
>
> One approach is to simply release binaries only for:
> Spark-1.6 + Scala 2.10
> Spark-2.1 + Scala 2.11
>
> OR
>
> We could do like dl4j
>
> org.apache.mahout:mahout-spark_2.10:0.13.1_spark_1
> org.apache.mahout:mahout-spark_2.11:0.13.1_spark_1
>
> org.apache.mahout:mahout-spark_2.10:0.13.1_spark_2
> org.apache.mahout:mahout-spark_2.11:0.13.1_spark_2
>
> OR
>
> some other option I don't know of.
>
