Should Hadoop 2 + Hive 2 be considered to work on JDK 11? I wasn't
sure whether the Hadoop 2.7 build did, but honestly I've lost track.
Anyway, it doesn't matter much, since the JDK doesn't add another build
permutation: all of the builds target Java 8.
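
To make that concrete, here is a minimal sketch (not Spark's actual build
definition; file layout and version numbers are illustrative) of how a build
pins the bytecode target to Java 8, so the same artifact runs on JDK 8 and
JDK 11 and the JDK used to run the build doesn't create a new permutation:

  // build.sbt -- a minimal sketch, NOT Spark's real build; versions illustrative.
  // Compiling with -source/-target 1.8 produces Java 8 bytecode, so the same jar
  // runs on JDK 8 and JDK 11 regardless of which JDK runs the compiler.
  ThisBuild / scalaVersion := "2.12.10"

  Compile / javacOptions ++= Seq("-source", "1.8", "-target", "1.8")
  Compile / scalacOptions += "-target:jvm-1.8"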

I also don't know whether we have to declare one binary release the
default. The published POM will be agnostic to Hadoop / Hive; well, it
will link against a particular version, but that can be overridden. Is
that what you're getting at?
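
For example (a hedged sketch of a hypothetical downstream sbt build, not an
official recipe; version numbers are illustrative), a user who depends on the
published Spark artifacts can pin a different Hadoop client line than the one
the POM links against by default:

  // build.sbt of a hypothetical downstream project -- versions are illustrative.
  // The published Spark POM links against one Hadoop version by default, but a
  // user's own build can override that choice.
  ThisBuild / scalaVersion := "2.12.10"

  libraryDependencies += "org.apache.spark" %% "spark-sql" % "3.0.0"

  // Force a different hadoop-client than the one Spark's POM would pull in.
  dependencyOverrides += "org.apache.hadoop" % "hadoop-client" % "2.7.4"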


On Tue, Nov 19, 2019 at 7:11 PM Hyukjin Kwon <gurwls...@gmail.com> wrote:
>
> So, can we conclude on the following plan?
>
> 1. In Spark 3, we release the following builds:
>   - Hadoop 3.2 + Hive 2.3 + JDK 8 build that also works with JDK 11
>   - Hadoop 2.7 + Hive 2.3 + JDK 8 build that also works with JDK 11
>   - Hadoop 2.7 + Hive 1.2.1 (fork) + JDK 8 (default)
>
> 2. In Spark 3.1, we target:
>   - Hadoop 3.2 + Hive 2.3 + JDK 8 build that also works with JDK 11
>   - Hadoop 2.7 + Hive 2.3 + JDK 8 build that also works with JDK 11 (default)
>
> 3. Avoid removing the "Hadoop 2.7 + Hive 1.2.1 (fork) + JDK 8 (default)" combo 
> right away after cutting branch-3, so we can see whether Hive 2.3 is considered 
> stable in general.
>     I roughly suspect that would be a couple of months after the Spark 3.0 
> release (?).
>
> BTW, maybe we should officially note that the "Hadoop 2.7 + Hive 1.2.1 (fork) + 
> JDK 8 (default)" combination is deprecated in Spark 3 anyway.
>
