https://github.com/LucaCanali/Miscellaneous/blob/master/Spark_Notes/Spark_Set_Java_Home_Howto.md
Best,
Luca
From: Dongjoon Hyun
Sent: Saturday, December 9, 2023 09:39
To: Jason Xu
Cc: dev@spark.apache.org
Subject: Re: Spark on Yarn with Java 17
Please simply try Apache Spark 3.3+ (SPARK-33772) with Java 17 on your cluster, Jason.
I believe you can set up your Spark 3.3+ jobs to run with Java 17 while your
cluster (DataNode/NameNode/ResourceManager/NodeManager) is still
on Java 8.
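The per-job Java override described above can be sketched roughly as follows. This is a minimal, hedged example based on the approach in Luca's linked note: the JDK install path and application name are placeholder assumptions, not values from this thread, and the JDK must be present at that path on every node.

```shell
# Sketch: run a single Spark job with Java 17 on a YARN cluster whose
# Hadoop daemons still run Java 8.
# Assumption: JDK 17 is installed at /usr/lib/jvm/jdk-17 on all nodes.

# Driver JVM in client mode picks up the local environment:
export JAVA_HOME=/usr/lib/jvm/jdk-17

spark-submit \
  --master yarn \
  --conf spark.yarn.appMasterEnv.JAVA_HOME=/usr/lib/jvm/jdk-17 \
  --conf spark.executorEnv.JAVA_HOME=/usr/lib/jvm/jdk-17 \
  your_app.py   # placeholder application
```

`spark.yarn.appMasterEnv.*` and `spark.executorEnv.*` set environment variables for the YARN application master and the executors respectively, so the job's JVMs start from the chosen JDK without touching the cluster-wide Java installation.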
Dongjoon.
On Fri, Dec 8, 2023 at 11:12 PM Jason Xu wrote:
Dongjoon, thank you for the fast response!
> Apache Spark 4.0.0 depends only on the Apache Hadoop client library.
To better understand your answer: does that mean a Spark application built
with Java 17 can successfully run on a Hadoop 3.3 cluster with a
Java 8 runtime?
On Fri, Dec 8, 2023 at 4
Hi, Jason.
Apache Spark 4.0.0 depends only on the Apache Hadoop client library.
You can track all `Apache Spark 4` activities, including the Hadoop dependency, here:
https://issues.apache.org/jira/browse/SPARK-44111
(Prepare Apache Spark 4.0.0)
According to the release history, the original suggested tim