If you do tests with newer Java versions you can also try:
- UseNUMA: -XX:+UseNUMA. See https://openjdk.org/jeps/345
You can also assess the new Java GC algorithms:
- -XX:+UseShenandoahGC - works with terabytes of heap - more memory-efficient
than ZGC with heaps < 32 GB. See also:
https://develo
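As a minimal sketch, the flags above can be passed to the Spark driver and executors through spark-submit (the application file is a placeholder, and combining both flags in one run is an assumption - pick the GC that fits your workload):

```shell
# Sketch: pass NUMA-aware allocation and Shenandoah to driver and executors.
# Note: -XX:+UseShenandoahGC replaces the default collector (G1 on JDK 17),
# and Shenandoah is only present in JDK builds that include it.
spark-submit \
  --conf "spark.driver.extraJavaOptions=-XX:+UseNUMA -XX:+UseShenandoahGC" \
  --conf "spark.executor.extraJavaOptions=-XX:+UseNUMA -XX:+UseShenandoahGC" \
  my_app.py   # placeholder application
```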
/LucaCanali/sparkMeasure
A few microbenchmark tests of Spark reading Parquet with a few different
JDKs are at: https://db-blog.web.cern.ch/node/192
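As a sketch, sparkMeasure can be attached to an interactive session to collect stage metrics when comparing JDKs (the package version below is an assumption - check the sparkMeasure README for current coordinates):

```shell
# Sketch: start spark-shell with sparkMeasure on the classpath
# (version 0.23 is an assumption; verify against the project's releases).
spark-shell --packages ch.cern.sparkmeasure:spark-measure_2.12:0.23

# Inside the shell, stage metrics for a query can then be collected with, e.g.:
#   val sm = ch.cern.sparkmeasure.StageMetrics(spark)
#   sm.runAndMeasure(spark.read.parquet("<path>").count)
```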
Best,
Luca
From: Faiz Halde <haldef...@gmail.com>
Sent: Thursday, December 7, 2023 23:25
To: user@spark.apache.org
Subject: Spark on Java 17
Hello,
We are planning to switch to Java 17 for Spark and were wondering if
there are any obvious learnings from anybody related to JVM tuning?
We've been running on Java 8 for a while now and have used the Parallel
GC, as that used to be a general recommendation for high-throughput systems.
How h