Great work!

On Sun, Aug 25, 2019 at 6:03 AM Xiao Li <[email protected]> wrote:

> Thank you for your contributions! This is a great feature for Spark
> 3.0! We finally achieved it!
>
> Xiao
>
> On Sat, Aug 24, 2019 at 12:18 PM Felix Cheung <[email protected]>
> wrote:
>
>> That’s great!
>>
>> ------------------------------
>> *From:* ☼ R Nair <[email protected]>
>> *Sent:* Saturday, August 24, 2019 10:57:31 AM
>> *To:* Dongjoon Hyun <[email protected]>
>> *Cc:* [email protected] <[email protected]>; [email protected]
>> <[email protected]>
>> *Subject:* Re: JDK11 Support in Apache Spark
>>
>> Finally!!! Congrats
>>
>> On Sat, Aug 24, 2019, 11:11 AM Dongjoon Hyun <[email protected]>
>> wrote:
>>
>>> Hi, All.
>>>
>>> Thanks to your many contributions,
>>> the Apache Spark master branch now passes on JDK11 as of today.
>>> (with `hadoop-3.2` profile: Apache Hadoop 3.2 and Hive 2.3.6)
>>>
>>>
>>> https://amplab.cs.berkeley.edu/jenkins/view/Spark%20QA%20Test%20(Dashboard)/job/spark-master-test-maven-hadoop-3.2-jdk-11/326/
>>>     (JDK11 is used for building and testing.)
>>>
>>> We had already verified all UTs (including PySpark/SparkR) beforehand.
>>>
>>> Please feel free to use JDK11 to build/test/run the `master` branch and
>>> share your experience, including any issues. It will help the Apache
>>> Spark 3.0.0 release.
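>>>
>>> As a rough sketch only (assuming a local JDK11 installation; the
>>> JAVA_HOME path below is a placeholder), a typical build/test run with
>>> the `hadoop-3.2` profile mentioned above could look like:
>>>
>>>     # point the build at a JDK11 installation (placeholder path)
>>>     export JAVA_HOME=/path/to/jdk-11
>>>     # build the master branch with the Hadoop 3.2 profile, skipping tests
>>>     ./build/mvn -Phadoop-3.2 -DskipTests clean package
>>>     # run the test suite on JDK11
>>>     ./build/mvn -Phadoop-3.2 test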
>>>
>>> For follow-up work, please track
>>> https://issues.apache.org/jira/browse/SPARK-24417 .
>>> The next step is `how to support JDK8/JDK11 together in a single
>>> artifact`.
>>>
>>> Bests,
>>> Dongjoon.
>>>
>>
>
