(spark) branch master updated: [SPARK-46899][CORE][FOLLOWUP] Enable `/workers/kill` if `spark.decommission.enabled=true`

2024-02-03 Thread yao
This is an automated email from the ASF dual-hosted git repository.

yao pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/master by this push:
 new ed6fe4fccabe [SPARK-46899][CORE][FOLLOWUP] Enable `/workers/kill` if 
`spark.decommission.enabled=true`
ed6fe4fccabe is described below

commit ed6fe4fccabe8068b3d1e1365e87b51c66908474
Author: Dongjoon Hyun 
AuthorDate: Sun Feb 4 14:44:51 2024 +0800

[SPARK-46899][CORE][FOLLOWUP] Enable `/workers/kill` if 
`spark.decommission.enabled=true`

### What changes were proposed in this pull request?

This PR aims to re-enable the `/workers/kill` API when `spark.decommission.enabled=true`, as a follow-up of
- #44926
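
For context, a minimal sketch (not part of this commit) of the configuration that gates the endpoint; the allow-mode key and value below are quoted from memory and should be checked against Spark's config definitions:

```scala
import org.apache.spark.SparkConf

// Decommissioning must be enabled for the Master UI to attach the POST /workers/kill servlet.
val conf = new SparkConf()
  .set("spark.decommission.enabled", "true")
  // Restricts which callers the endpoint accepts; key and value are illustrative.
  .set("spark.master.ui.decommission.allow.mode", "LOCAL")
```

With decommissioning enabled, a request such as `curl -XPOST "http://<master-ui-host>:8080/workers/kill/?host=<worker-host>"` (hosts are placeholders) reaches the servlet, which still answers 405 (`SC_METHOD_NOT_ALLOWED`) for callers rejected by the allow-mode check.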

### Why are the changes needed?

To address the following review comment and prevent a regression.
- https://github.com/apache/spark/pull/44926#pullrequestreview-1854788375

### Does this PR introduce _any_ user-facing change?

No, this will recover the previous feature.

### How was this patch tested?

Manual review.

### Was this patch authored or co-authored using generative AI tooling?

No.

Closes #45010 from dongjoon-hyun/SPARK-46899-2.

Authored-by: Dongjoon Hyun 
Signed-off-by: Kent Yao 
---
 .../main/scala/org/apache/spark/deploy/master/ui/MasterWebUI.scala  | 6 --
 .../apache/spark/deploy/master/ui/ReadOnlyMasterWebUISuite.scala| 5 -
 2 files changed, 8 insertions(+), 3 deletions(-)

diff --git 
a/core/src/main/scala/org/apache/spark/deploy/master/ui/MasterWebUI.scala 
b/core/src/main/scala/org/apache/spark/deploy/master/ui/MasterWebUI.scala
index 74e7f4c67ade..9f5738ce4863 100644
--- a/core/src/main/scala/org/apache/spark/deploy/master/ui/MasterWebUI.scala
+++ b/core/src/main/scala/org/apache/spark/deploy/master/ui/MasterWebUI.scala
@@ -43,7 +43,7 @@ class MasterWebUI(
 
   val masterEndpointRef = master.self
   val killEnabled = master.conf.get(UI_KILL_ENABLED)
-  val decommissionDisabled = !master.conf.get(DECOMMISSION_ENABLED)
+  val decommissionEnabled = master.conf.get(DECOMMISSION_ENABLED)
   val decommissionAllowMode = 
master.conf.get(MASTER_UI_DECOMMISSION_ALLOW_MODE)
 
   initialize()
@@ -61,11 +61,13 @@ class MasterWebUI(
 "/app/kill", "/", masterPage.handleAppKillRequest, httpMethods = 
Set("POST")))
   attachHandler(createRedirectHandler(
 "/driver/kill", "/", masterPage.handleDriverKillRequest, httpMethods = 
Set("POST")))
+}
+if (decommissionEnabled) {
   attachHandler(createServletHandler("/workers/kill", new HttpServlet {
 override def doPost(req: HttpServletRequest, resp: 
HttpServletResponse): Unit = {
   val hostnames: Seq[String] = Option(req.getParameterValues("host"))
 .getOrElse(Array[String]()).toImmutableArraySeq
-  if (decommissionDisabled || !isDecommissioningRequestAllowed(req)) {
+  if (!isDecommissioningRequestAllowed(req)) {
 resp.sendError(HttpServletResponse.SC_METHOD_NOT_ALLOWED)
   } else {
 val removedWorkers = masterEndpointRef.askSync[Integer](
diff --git 
a/core/src/test/scala/org/apache/spark/deploy/master/ui/ReadOnlyMasterWebUISuite.scala
 
b/core/src/test/scala/org/apache/spark/deploy/master/ui/ReadOnlyMasterWebUISuite.scala
index c52ce91fda8b..ab323aaf7999 100644
--- 
a/core/src/test/scala/org/apache/spark/deploy/master/ui/ReadOnlyMasterWebUISuite.scala
+++ 
b/core/src/test/scala/org/apache/spark/deploy/master/ui/ReadOnlyMasterWebUISuite.scala
@@ -24,13 +24,16 @@ import org.mockito.Mockito.{mock, when}
 import org.apache.spark.{SecurityManager, SparkConf, SparkFunSuite}
 import org.apache.spark.deploy.master._
 import org.apache.spark.deploy.master.ui.MasterWebUISuite._
+import org.apache.spark.internal.config.DECOMMISSION_ENABLED
 import org.apache.spark.internal.config.UI.UI_KILL_ENABLED
 import org.apache.spark.rpc.{RpcEndpointRef, RpcEnv}
 import org.apache.spark.util.Utils
 
 class ReadOnlyMasterWebUISuite extends SparkFunSuite {
 
-  val conf = new SparkConf().set(UI_KILL_ENABLED, false)
+  val conf = new SparkConf()
+.set(UI_KILL_ENABLED, false)
+.set(DECOMMISSION_ENABLED, false)
   val securityMgr = new SecurityManager(conf)
   val rpcEnv = mock(classOf[RpcEnv])
   val master = mock(classOf[Master])





(spark) branch master updated: [SPARK-46970][CORE] Rewrite `OpenHashSet#hasher` with `pattern matching`

2024-02-03 Thread dongjoon
This is an automated email from the ASF dual-hosted git repository.

dongjoon pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/master by this push:
 new 7ca355cbc225 [SPARK-46970][CORE] Rewrite `OpenHashSet#hasher` with 
`pattern matching`
7ca355cbc225 is described below

commit 7ca355cbc225653b090020271117a763ec59536d
Author: yangjie01 
AuthorDate: Sat Feb 3 21:07:16 2024 -0800

[SPARK-46970][CORE] Rewrite `OpenHashSet#hasher` with `pattern matching`

### What changes were proposed in this pull request?
The proposed change in this PR refactors how a `Hasher[T]` instance is created. The original code used a series of if-else statements to check the class type of `T` and create the corresponding `Hasher[T]` instance. The change simplifies this by using Scala's pattern matching, making the code more concise and easier to read.
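
As a self-contained illustration of the technique (a simplified sketch, not Spark's actual `Hasher` hierarchy), dispatching on the runtime `ClassTag` looks like this:

```scala
import scala.reflect.{classTag, ClassTag}

// Simplified stand-ins for the specialized hashers.
trait Hasher[T] { def hash(v: T): Int }
class LongHasher extends Hasher[Long] { def hash(v: Long): Int = (v ^ (v >>> 32)).toInt }
class IntHasher extends Hasher[Int] { def hash(v: Int): Int = v }
class DefaultHasher[T] extends Hasher[T] { def hash(v: T): Int = v.hashCode() }

// Pattern matching on the ClassTag replaces the old chain of if/else equality checks.
def hasherFor[T: ClassTag]: Hasher[T] = classTag[T] match {
  case ClassTag.Long => (new LongHasher).asInstanceOf[Hasher[T]]
  case ClassTag.Int  => (new IntHasher).asInstanceOf[Hasher[T]]
  case _             => new DefaultHasher[T]
}
```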

### Why are the changes needed?
The changes are needed for several reasons. Firstly, pattern matching is more idiomatic Scala, which benefits readability and maintainability. Secondly, the original code carried a comment about a bug in the Scala 2.9.x compiler that prevented using pattern matching together with specialization in this context. However, Apache Spark 4.0 has switched to Scala 2.13, and since the new code passes all tests, it appears that the bug no longer exists in the new version of Scala [...]

### Does this PR introduce _any_ user-facing change?
No

### How was this patch tested?
Pass GitHub Actions

### Was this patch authored or co-authored using generative AI tooling?
No

Closes #44998 from LuciferYang/openhashset-hasher.

Lead-authored-by: yangjie01 
Co-authored-by: YangJie 
Signed-off-by: Dongjoon Hyun 
---
 .../apache/spark/util/collection/OpenHashSet.scala | 28 +-
 1 file changed, 6 insertions(+), 22 deletions(-)

diff --git 
a/core/src/main/scala/org/apache/spark/util/collection/OpenHashSet.scala 
b/core/src/main/scala/org/apache/spark/util/collection/OpenHashSet.scala
index 6815e47a198d..faee9ce56a0a 100644
--- a/core/src/main/scala/org/apache/spark/util/collection/OpenHashSet.scala
+++ b/core/src/main/scala/org/apache/spark/util/collection/OpenHashSet.scala
@@ -62,28 +62,12 @@ class OpenHashSet[@specialized(Long, Int, Double, Float) T: 
ClassTag](
   // specialization to work (specialized class extends the non-specialized one 
and needs access
   // to the "private" variables).
 
-  protected val hasher: Hasher[T] = {
-// It would've been more natural to write the following using pattern 
matching. But Scala 2.9.x
-// compiler has a bug when specialization is used together with this 
pattern matching, and
-// throws:
-// scala.tools.nsc.symtab.Types$TypeError: type mismatch;
-//  found   : scala.reflect.AnyValManifest[Long]
-//  required: scala.reflect.ClassTag[Int]
-// at 
scala.tools.nsc.typechecker.Contexts$Context.error(Contexts.scala:298)
-// at 
scala.tools.nsc.typechecker.Infer$Inferencer.error(Infer.scala:207)
-// ...
-val mt = classTag[T]
-if (mt == ClassTag.Long) {
-  (new LongHasher).asInstanceOf[Hasher[T]]
-} else if (mt == ClassTag.Int) {
-  (new IntHasher).asInstanceOf[Hasher[T]]
-} else if (mt == ClassTag.Double) {
-  (new DoubleHasher).asInstanceOf[Hasher[T]]
-} else if (mt == ClassTag.Float) {
-  (new FloatHasher).asInstanceOf[Hasher[T]]
-} else {
-  new Hasher[T]
-}
+  protected val hasher: Hasher[T] = classTag[T] match {
+case ClassTag.Long => new LongHasher().asInstanceOf[Hasher[T]]
+case ClassTag.Int => new IntHasher().asInstanceOf[Hasher[T]]
+case ClassTag.Double => new DoubleHasher().asInstanceOf[Hasher[T]]
+case ClassTag.Float => new FloatHasher().asInstanceOf[Hasher[T]]
+case _ => new Hasher[T]
   }
 
   protected var _capacity = nextPowerOf2(initialCapacity)





(spark) branch master updated: [SPARK-46967][CORE][UI] Hide `Thread Dump` and `Heap Histogram` of `Dead` executors in `Executors` UI

2024-02-03 Thread dongjoon
This is an automated email from the ASF dual-hosted git repository.

dongjoon pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/master by this push:
 new 062522e96a50 [SPARK-46967][CORE][UI] Hide `Thread Dump` and `Heap 
Histogram` of `Dead` executors in `Executors` UI
062522e96a50 is described below

commit 062522e96a50b8b46b313aae62668717ba88639f
Author: Dongjoon Hyun 
AuthorDate: Sat Feb 3 19:17:33 2024 -0800

[SPARK-46967][CORE][UI] Hide `Thread Dump` and `Heap Histogram` of `Dead` 
executors in `Executors` UI

### What changes were proposed in this pull request?

This PR aims to hide `Thread Dump` and `Heap Histogram` links of `Dead` 
executors in Spark Driver `Executors` UI.

**BEFORE**
![Screenshot 2024-02-02 at 11 40 46 
PM](https://github.com/apache/spark/assets/9700541/9fb45667-b25c-44cc-9c7c-c2ff981c5a2f)

**AFTER**
![Screenshot 2024-02-02 at 11 40 03 
PM](https://github.com/apache/spark/assets/9700541/9963452a-773c-4f8b-b025-9362853d3cae)

### Why are the changes needed?

Since both `Thread Dump` and `Heap Histogram` require a live JVM, those links are broken for dead executors and lead to the following pages.

**Broken Thread Dump Link**
![Screenshot 2024-02-02 at 11 36 55 
PM](https://github.com/apache/spark/assets/9700541/2cfff1b1-dc00-4fef-ab68-5e3fad5df7a0)

**Broken Heap Histogram Link**
![Screenshot 2024-02-02 at 11 37 12 
PM](https://github.com/apache/spark/assets/9700541/8450cb3e-3756-4755-896f-7ced682f09b0)

We had better hide them.

### Does this PR introduce _any_ user-facing change?

Yes, but this PR only hides the broken links.

### How was this patch tested?

Manual.

### Was this patch authored or co-authored using generative AI tooling?

No.

Closes #45009 from dongjoon-hyun/SPARK-46967.

Authored-by: Dongjoon Hyun 
Signed-off-by: Dongjoon Hyun 
---
 .../main/resources/org/apache/spark/ui/static/executorspage.js | 10 ++
 1 file changed, 6 insertions(+), 4 deletions(-)

diff --git 
a/core/src/main/resources/org/apache/spark/ui/static/executorspage.js 
b/core/src/main/resources/org/apache/spark/ui/static/executorspage.js
index 41164c7997bb..1b02fc0493e7 100644
--- a/core/src/main/resources/org/apache/spark/ui/static/executorspage.js
+++ b/core/src/main/resources/org/apache/spark/ui/static/executorspage.js
@@ -587,14 +587,16 @@ $(document).ready(function () {
 {name: 'executorLogsCol', data: 'executorLogs', render: 
formatLogsCells},
 {
   name: 'threadDumpCol',
-  data: 'id', render: function (data, type) {
-return type === 'display' ? ("Thread Dump" ) : data;
+  data: function (row) { return row.isActive ? row.id : '' },
+  render: function (data, type) {
+return data != '' && type === 'display' ? ("Thread Dump" ) : data;
   }
 },
 {
   name: 'heapHistogramCol',
-  data: 'id', render: function (data, type) {
-return type === 'display' ? ("Heap Histogram") : data;
+  data: function (row) { return row.isActive ? row.id : '' },
+  render: function (data, type) {
+return data != '' && type === 'display' ? ("Heap Histogram") : data;
   }
 },
 {





(spark) branch master updated (0154c059cddb -> fd476c1c855a)

2024-02-03 Thread yangjie01
This is an automated email from the ASF dual-hosted git repository.

yangjie01 pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git


from 0154c059cddb [MINOR][DOCS] Remove Java 8/11 at 
`IgnoreUnrecognizedVMOptions` description
 add fd476c1c855a [SPARK-46969][SQL][TESTS] Recover `to_timestamp('366', 
'DD')` test case of `datetime-parsing-invalid.sql`

No new revisions were added by this update.

Summary of changes:
 .../ansi/datetime-parsing-invalid.sql.out|  7 +++
 .../analyzer-results/datetime-parsing-invalid.sql.out|  7 +++
 .../sql-tests/inputs/datetime-parsing-invalid.sql|  3 +--
 .../results/ansi/datetime-parsing-invalid.sql.out| 16 
 .../sql-tests/results/datetime-parsing-invalid.sql.out   |  8 
 5 files changed, 39 insertions(+), 2 deletions(-)
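
For reference, the recovered query can be exercised directly; a minimal sketch assuming an existing SparkSession named `spark` (the expected outcome differs between the ANSI and non-ANSI result files listed above):

```scala
// The query from the recovered test case; it lives in datetime-parsing-invalid.sql,
// i.e. the parse is expected to be treated as invalid.
spark.sql("SELECT to_timestamp('366', 'DD')").show()
```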





(spark) branch master updated: [MINOR][DOCS] Remove Java 8/11 at `IgnoreUnrecognizedVMOptions` description

2024-02-03 Thread dongjoon
This is an automated email from the ASF dual-hosted git repository.

dongjoon pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/master by this push:
 new 0154c059cddb [MINOR][DOCS] Remove Java 8/11 at 
`IgnoreUnrecognizedVMOptions` description
0154c059cddb is described below

commit 0154c059cddba7cafe74243b3f9eedd9db367b72
Author: Dongjoon Hyun 
AuthorDate: Sat Feb 3 18:47:30 2024 -0800

[MINOR][DOCS] Remove Java 8/11 at `IgnoreUnrecognizedVMOptions` description

### What changes were proposed in this pull request?

This PR aims to remove the references to old Java 8 and Java 11 from the `IgnoreUnrecognizedVMOptions` JVM option description.

### Why are the changes needed?

From Apache Spark 4.0.0, we use the `IgnoreUnrecognizedVMOptions` JVM option for robustness, not for Java 8 and Java 11 support.

### Does this PR introduce _any_ user-facing change?

No.

### How was this patch tested?

Manual review.

### Was this patch authored or co-authored using generative AI tooling?

No.

Closes #45012 from dongjoon-hyun/IgnoreUnrecognizedVMOptions.

Authored-by: Dongjoon Hyun 
Signed-off-by: Dongjoon Hyun 
---
 .../src/main/java/org/apache/spark/launcher/JavaModuleOptions.java | 2 +-
 .../yarn/src/main/scala/org/apache/spark/deploy/yarn/Client.scala  | 3 +--
 2 files changed, 2 insertions(+), 3 deletions(-)

diff --git 
a/launcher/src/main/java/org/apache/spark/launcher/JavaModuleOptions.java 
b/launcher/src/main/java/org/apache/spark/launcher/JavaModuleOptions.java
index a7a6891746c2..8893f4bcb85a 100644
--- a/launcher/src/main/java/org/apache/spark/launcher/JavaModuleOptions.java
+++ b/launcher/src/main/java/org/apache/spark/launcher/JavaModuleOptions.java
@@ -20,7 +20,7 @@ package org.apache.spark.launcher;
 /**
  * This helper class is used to place the all `--add-opens` options
  * required by Spark when using Java 17. `DEFAULT_MODULE_OPTIONS` has added
- * `-XX:+IgnoreUnrecognizedVMOptions` to be compatible with Java 8 and Java 11.
+ * `-XX:+IgnoreUnrecognizedVMOptions` to be robust.
  *
  * @since 3.3.0
  */
diff --git 
a/resource-managers/yarn/src/main/scala/org/apache/spark/deploy/yarn/Client.scala
 
b/resource-managers/yarn/src/main/scala/org/apache/spark/deploy/yarn/Client.scala
index 22037ad5..6e3e0a1e644e 100644
--- 
a/resource-managers/yarn/src/main/scala/org/apache/spark/deploy/yarn/Client.scala
+++ 
b/resource-managers/yarn/src/main/scala/org/apache/spark/deploy/yarn/Client.scala
@@ -1031,8 +1031,7 @@ private[spark] class Client(
 javaOpts += s"-Djava.net.preferIPv6Addresses=${Utils.preferIPv6}"
 
 // SPARK-37106: To start AM with Java 17, 
`JavaModuleOptions.defaultModuleOptions`
-// is added by default. It will not affect Java 8 and Java 11 due to 
existence of
-// `-XX:+IgnoreUnrecognizedVMOptions`.
+// is added by default.
 javaOpts += JavaModuleOptions.defaultModuleOptions()
 
 // Set the environment variable through a command prefix





(spark) branch master updated: [SPARK-45276][INFRA][FOLLOWUP] Fix Java version comment from 11 to 17

2024-02-03 Thread dongjoon
This is an automated email from the ASF dual-hosted git repository.

dongjoon pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/master by this push:
 new f4525ff978a7 [SPARK-45276][INFRA][FOLLOWUP] Fix Java version comment 
from 11 to 17
f4525ff978a7 is described below

commit f4525ff978a7626d93311cb45425cbd591c0454e
Author: Dongjoon Hyun 
AuthorDate: Sat Feb 3 18:33:59 2024 -0800

[SPARK-45276][INFRA][FOLLOWUP] Fix Java version comment from 11 to 17

### What changes were proposed in this pull request?

This is a follow-up of
- #43076

### Why are the changes needed?

To match the code.

### Does this PR introduce _any_ user-facing change?

No.

### How was this patch tested?

Manual review.

### Was this patch authored or co-authored using generative AI tooling?

No.

Closes #45013 from dongjoon-hyun/SPARK-45276.

Authored-by: Dongjoon Hyun 
Signed-off-by: Dongjoon Hyun 
---
 connector/docker/spark-test/base/Dockerfile | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/connector/docker/spark-test/base/Dockerfile 
b/connector/docker/spark-test/base/Dockerfile
index 0e8593f8af5b..c397abc211e2 100644
--- a/connector/docker/spark-test/base/Dockerfile
+++ b/connector/docker/spark-test/base/Dockerfile
@@ -18,7 +18,7 @@
 FROM ubuntu:20.04
 
 # Upgrade package index
-# install a few other useful packages plus Open Java 11
+# install a few other useful packages plus Open Java 17
 # Remove unneeded /var/lib/apt/lists/* after install to reduce the
 # docker image size (by ~30MB)
 RUN apt-get update && \





(spark) branch master updated: [SPARK-46760][SQL][DOCS] Make the document of spark.sql.adaptive.coalescePartitions.parallelismFirst clearer

2024-02-03 Thread srowen
This is an automated email from the ASF dual-hosted git repository.

srowen pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/master by this push:
 new 9d4d41c43f1c [SPARK-46760][SQL][DOCS] Make the document of 
spark.sql.adaptive.coalescePartitions.parallelismFirst clearer
9d4d41c43f1c is described below

commit 9d4d41c43f1cb4cf724e0e27c1762df8bbdf2a54
Author: beliefer 
AuthorDate: Sat Feb 3 09:06:38 2024 -0600

[SPARK-46760][SQL][DOCS] Make the document of 
spark.sql.adaptive.coalescePartitions.parallelismFirst clearer

### What changes were proposed in this pull request?
This PR proposes to make the documentation of `spark.sql.adaptive.coalescePartitions.parallelismFirst` clearer.

### Why are the changes needed?
The default value of `spark.sql.adaptive.coalescePartitions.parallelismFirst` is true, but the documentation says it is `recommended to set this config to false and respect the configured target size`. This is confusing.
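
A minimal sketch (assuming an existing SparkSession named `spark`) of the case both versions of the text describe, where the configured target size should take precedence over parallelism:

```scala
// With parallelismFirst disabled, AQE coalesces shuffle partitions toward the
// advisory target size instead of maximizing parallelism.
spark.conf.set("spark.sql.adaptive.enabled", "true")
spark.conf.set("spark.sql.adaptive.coalescePartitions.parallelismFirst", "false")
spark.conf.set("spark.sql.adaptive.advisoryPartitionSizeInBytes", "64MB")
```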

### Does this PR introduce _any_ user-facing change?
'Yes'.
The documentation is clearer.

### How was this patch tested?
N/A

### Was this patch authored or co-authored using generative AI tooling?
'No'.

Closes #44787 from beliefer/SPARK-46760.

Authored-by: beliefer 
Signed-off-by: Sean Owen 
---
 docs/sql-performance-tuning.md   | 2 +-
 .../src/main/scala/org/apache/spark/sql/internal/SQLConf.scala   | 5 +++--
 2 files changed, 4 insertions(+), 3 deletions(-)

diff --git a/docs/sql-performance-tuning.md b/docs/sql-performance-tuning.md
index 1dbe1bb7e1a2..25c22d660562 100644
--- a/docs/sql-performance-tuning.md
+++ b/docs/sql-performance-tuning.md
@@ -267,7 +267,7 @@ This feature coalesces the post shuffle partitions based on 
the map output stati
  
spark.sql.adaptive.coalescePartitions.parallelismFirst
  true
  
-   When true, Spark ignores the target size specified by 
spark.sql.adaptive.advisoryPartitionSizeInBytes (default 64MB) 
when coalescing contiguous shuffle partitions, and only respect the minimum 
partition size specified by 
spark.sql.adaptive.coalescePartitions.minPartitionSize (default 
1MB), to maximize the parallelism. This is to avoid performance regression when 
enabling adaptive query execution. It's recommended to set this config to false 
and respect th [...]
+   When true, Spark ignores the target size specified by 
spark.sql.adaptive.advisoryPartitionSizeInBytes (default 64MB) 
when coalescing contiguous shuffle partitions, and only respect the minimum 
partition size specified by 
spark.sql.adaptive.coalescePartitions.minPartitionSize (default 
1MB), to maximize the parallelism. This is to avoid performance regressions 
when enabling adaptive query execution. It's recommended to set this config to 
true on a busy clus [...]
  
  3.2.0

diff --git 
a/sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala 
b/sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala
index d88cbed6b27d..1bff0ff1a350 100644
--- a/sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala
+++ b/sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala
@@ -713,8 +713,9 @@ object SQLConf {
 "shuffle partitions, but adaptively calculate the target size 
according to the default " +
 "parallelism of the Spark cluster. The calculated size is usually 
smaller than the " +
 "configured target size. This is to maximize the parallelism and avoid 
performance " +
-"regression when enabling adaptive query execution. It's recommended 
to set this config " +
-"to false and respect the configured target size.")
+"regressions when enabling adaptive query execution. It's recommended 
to set this " +
+"config to true on a busy cluster to make resource utilization more 
efficient (not many " +
+"small tasks).")
   .version("3.2.0")
   .booleanConf
   .createWithDefault(true)





svn commit: r67146 - /dev/spark/KEYS

2024-02-03 Thread kabhwan
Author: kabhwan
Date: Sat Feb  3 11:42:23 2024
New Revision: 67146

Log:
Update KEYS

Modified:
dev/spark/KEYS

Modified: dev/spark/KEYS
==
--- dev/spark/KEYS (original)
+++ dev/spark/KEYS Sat Feb  3 11:42:23 2024
@@ -2021,3 +2021,61 @@ fRTcYJkfeano1n8Bmb8EDvwTdF3LNVZfEiTUPEFI
 3Y0blL+bi0NrD82NZvCKoY1RaGFaUO11D7wpmAqf20hDBgRCTvaS9p511YbE2g==
 =nkBh
 -END PGP PUBLIC KEY BLOCK-
+
+pub   rsa4096 2024-02-03 [SC]
+  FD3E84942E5E6106235A1D25BD356A9F8740E4FF
+uid   [ultimate] Jungtaek Lim (CODE SIGNING KEY for ASF) 

+sub   rsa4096 2024-02-03 [E]
+-BEGIN PGP PUBLIC KEY BLOCK-
+
+mQINBGW+JH0BEAC7PdRu3RkVeynRjbO9EJ8ZArSjNWOzvs0lbJ9rk++IgbTWnQtN
+FktO8vajVqpbtVbWoGJoeHt7ueG+CEJQLSmi21k1Hc+2k3c3aNjBIYsU3dFihut7
+kLhA65WOYN7VA6lHeeKSrV4NxgfH0OGIE0phvH8r6QdZQeu+2TmrlbX2STnPdFwu
+aOYJ6ZSQAmeBogISRRNOdBioYyYwREFcGrpErjIMGY7DNu9rR+cLstBmevwmJEBZ
+H3UYrb+pt5l9pOjIXZ+HtGSsvGbS+O7hifayKZc7/L103EKLra8Fuzofopc71C4y
+nnSLuuyubv9PlkuuHMJCFDG6BzVM4E/HPgn0SYwFVGYef6mVEEST2byLL7WDgUl8
+SvL+LB+B3P0evM7tmlldL+kW7TgxriYzKEIfakL/+GnyWmLD7uGxKEyAE8MSwi1+
+a86VA1yYDdA8sFknuIcsaZF+fmk90MwbP1FdirBC6NlguIP4u092Xj5wN6i5WuXn
+mNQj8EeQexQamEYuJQIT/PGw6eE17V3WacymKmPwVHglgvItMCym4lx/Ik8jmuPH
+d2+wUPb0UHlSTkl/CFPSpwmRNmxErBT9Xvghdhu3rv7Ql34cCFiIEy6QBWbsqQ1S
++ov3oZwHS6H2kYYzGZR8pzQGZPp9ibGomzAH2fMVqMYJzx683EkHuuRu9wARAQAB
+tDxKdW5ndGFlayBMaW0gKENPREUgU0lHTklORyBLRVkgZm9yIEFTRikgPGthYmh3
+YW5AYXBhY2hlLm9yZz6JAlEEEwEIADsWIQT9PoSULl5hBiNaHSW9NWqfh0Dk/wUC
+Zb4kfQIbAwULCQgHAgIiAgYVCgkICwIEFgIDAQIeBwIXgAAKCRC9NWqfh0Dk/3Jf
+EACOH4Jjgd0pWkxT9dqX2o1JXddeiawbv43MbwwZkVkT/oOm4bGLILntkuYqFXnq
+9rNP8wE+8wpAchIk3yuy5cPsONhtHxvkWKCnzRsHdt8QBhtdi0KLRMl/9t6280qc
+JUaZdOOmr8vXI0xZ8dy3ZdUyupB8PC3YRm+pTIFVFNwFkJF/+a6C6OQd6qk488BX
+Hi0XgMbTHDJ6pgzaXqR2rUrNDL8L6Per2omaZDbZVw/5GPhNf0LweOT0iXSJtRxR
+N5yvd4AR3M74fswfke7h4nsXx/T4ZctYjs6LAb832MjwZIxLfHFgawvYyVcaJ3/9
+77Gd5uzy9sGNBTmgzVGpmDL7G3AmWN0NQJraj29XWabjsUglgmJecagQGnWNbrvo
+wd64ZzTlFGhOZnMQeSjr2mIWVkgLkzLPCRodY0QTmwaG1ThkhC3APcgnoVdCP4lk
+CSO86NMSfWPT1El36FT5Wtek8jxFHcM6OAAj2RR+Ut1ykHwBE98ErrCMf4Qeq/9R
+awuVaLB9cqLc+PXczguoVuPlHMFp/lSJIS9ztaFsJcSu7a5G/j6iVAgTq/kCF0w4
+fcRkTg2ah5MSt6sroi9eiPhXU+MHPufBvROSozuLqjTHaFOfqIUevSC1MkCfQRjy
+OpoV5O5XyB7sP5XiskgxhJGnqDx3X6h37JLuiEtlgBk6TLkCDQRlviR9ARAAo9X5
+ZtcUima+5NwNCkKi21ZX/55uq97HQK5mrY9Plyw5n+r5WSKUWwa8Cq5qOlUhvo6B
+wJic6V6y7MoLH+GnouaPs98borbRjofkx87v2e7M2BvPJduKeYc1TWDZQMXBQYsY
+2y8sfrbXkL9P4JNfCN9nJ9pVtZ18Dpm4OXkL1cuFGkp+JfvrqnGN/dMXyKh5C/Cr
+TsYJTbQqrUUGgyniQoYXbKkegbfBAxBW1aM2Nd9OEbxHcnpttnZoZyvbDfYcr9R5
+LGirLKh4/MfF9LxWc/hIFaPSo2HqLnPcUv0N2OS5EotqfezqJTsQkk5vtrSmxcCs
+lBnAHjeDYesBZ3vEz6C2l45aF1zV5qJgGoSUy997JVf5cHp8sUz/tGO4SM0hKVi2
+0C2u5uTBI32X0DYi6gPpBECtyzDJA+k17/sZKeYhHs2ec/muC8siZJMeTLGquC7d
+RDrl6cQOgnbcxvhxvI4u6AY5xlXklESj6IBSLgUYDsoAMj0/EBgfvA4qJ5FBerEO
+S5dsZ9vjfcQfAmT2cwmslMWXd3iyu+bUk9GLG7Q9s0vufEoh4VPlv+4llQWoTdE7
+bOJoPGxiUwEo2IhBxLEZRt50FPKxHdqG2GGlkDvFl6hD6h6Gsn6MFniezu36dzG6
+HqvkOEffr1RYOL4QaLDYZXHP2VWnEzuNIN0Z298AEQEAAYkCNgQYAQgAIBYhBP0+
+hJQuXmEGI1odJb01ap+HQOT/BQJlviR9AhsMAAoJEL01ap+HQOT/2AIP/182BxBk
+oAqCaZWrSVx0h9ETpzB8MMk45n3aVia0N5Qv890D3douEdvV5fK8ANcQfcSB/wje
+gG/Uo8b4r0W4vOaWrXEBEybr2twtQgtE+I/LAkPQbAHLS3zOqhTnLpJoSUT3dqEy
+BZ/HvF7o/AyqSHp0k7uKyIVJXNdL/nPzrfUQiQQnxXN2jj3jTxx6woPNHw9EvLCc
+/jh03QfZyc6453BdXYlbZ1VT+1TSoT/LalQ/s8AEJM+m2Pkv+zIggzxpaSAXqGNB
+9LdwcCPcP1RL97veFkXNXs3dIhvfPzzriVNcVY0XkdXMrqjztp3jma0lUa5gB2wz
+b2QCC+zv8S7g70FnZwDSVe4bCfbjAubck/pI1yOaES77hnnQI6XEXLhabqZPwlA3
+OUmwxG3AYCb/c+Qc1nGYDsUTe3e8b9ewKB9/SpPlaf0WCMKTJrPMowWS31VmzCO+
+h3gw3QsCzarqE0xg7ffs5Q41+JIgvrK8GdCThtCBU+Bo6YJLBjV04SjOwaRgKk5T
+5ii6rCzOMtpbEXewrOChQmAV22bXIbwNDafBd0xOaBT6oGrqJ8JLYWOlgIsk+GlC
+QuYsshdUdZCg8yDzqGW2qQNtqfdP/J2jUMdZRwdYDQ08z+6L5Fy7JVVLqicukUPc
+ThVo7dEVoknhannfoULNv5ekjZ/LsFNGHRUZ
+=9cvL
+-END PGP PUBLIC KEY BLOCK-
+






(spark) branch master updated: [SPARK-46968][SQL] Replace `UnsupportedOperationException` by `SparkUnsupportedOperationException` in `sql`

2024-02-03 Thread dongjoon
This is an automated email from the ASF dual-hosted git repository.

dongjoon pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/master by this push:
 new 6e60b232c769 [SPARK-46968][SQL] Replace 
`UnsupportedOperationException` by `SparkUnsupportedOperationException` in `sql`
6e60b232c769 is described below

commit 6e60b232c7693738b1d005858e5dac24e7bafcaf
Author: Max Gekk 
AuthorDate: Sat Feb 3 00:22:06 2024 -0800

[SPARK-46968][SQL] Replace `UnsupportedOperationException` by 
`SparkUnsupportedOperationException` in `sql`

### What changes were proposed in this pull request?
In the PR, I propose to replace all occurrences of `UnsupportedOperationException` with `SparkUnsupportedOperationException` in the `sql` code base, and to introduce new legacy error classes with the `_LEGACY_ERROR_TEMP_` prefix.
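
A sketch of the two call forms involved, within Spark's own code base (the error class name and parameter key in the second form are hypothetical, standing in for real entries in `error-classes.json`; the constructor shape is quoted from memory):

```scala
import org.apache.spark.SparkUnsupportedOperationException

// No-argument form, as used for the QueryContexts methods in the diff below.
def callSite: String = throw SparkUnsupportedOperationException()

// Error-class form: the message text is resolved from error-classes.json.
def unsupported(feature: String): Nothing =
  throw new SparkUnsupportedOperationException(
    errorClass = "_LEGACY_ERROR_TEMP_XXXX",
    messageParameters = Map("feature" -> feature))
```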

### Why are the changes needed?
To unify Spark SQL exceptions, and to port Java exceptions to Spark exceptions with error classes.

### Does this PR introduce _any_ user-facing change?
Yes, it can, if user code assumes a particular format of `UnsupportedOperationException` messages.

### How was this patch tested?
By running the affected test suites:
```
$ build/sbt "core/testOnly *SparkThrowableSuite"
```

### Was this patch authored or co-authored using generative AI tooling?
No.

Closes #44937 from MaxGekk/migrate-UnsupportedOperationException-api.

Authored-by: Max Gekk 
Signed-off-by: Dongjoon Hyun 
---
 common/utils/src/main/resources/error/error-classes.json | 10 ++
 .../org/apache/spark/sql/catalyst/trees/QueryContexts.scala  | 12 ++--
 .../scala/org/apache/spark/sql/catalyst/util/UDTUtils.scala  |  3 ++-
 .../org/apache/spark/sql/execution/UnsafeRowSerializer.scala |  2 +-
 .../sql/execution/streaming/CompactibleFileStreamLog.scala   |  4 ++--
 .../spark/sql/execution/streaming/ValueStateImpl.scala   |  2 --
 .../streaming/state/HDFSBackedStateStoreProvider.scala   |  5 ++---
 .../apache/spark/sql/execution/streaming/state/RocksDB.scala |  7 ---
 8 files changed, 27 insertions(+), 18 deletions(-)

diff --git a/common/utils/src/main/resources/error/error-classes.json 
b/common/utils/src/main/resources/error/error-classes.json
index 8399311cbfc4..ef9e81c98e05 100644
--- a/common/utils/src/main/resources/error/error-classes.json
+++ b/common/utils/src/main/resources/error/error-classes.json
@@ -7489,6 +7489,16 @@
   "Datatype not supported "
 ]
   },
+  "_LEGACY_ERROR_TEMP_3193" : {
+"message" : [
+  "Creating multiple column families with HDFSBackedStateStoreProvider is 
not supported"
+]
+  },
+  "_LEGACY_ERROR_TEMP_3197" : {
+"message" : [
+  "Failed to create column family with reserved name="
+]
+  },
   "_LEGACY_ERROR_USER_RAISED_EXCEPTION" : {
 "message" : [
   ""
diff --git 
a/sql/api/src/main/scala/org/apache/spark/sql/catalyst/trees/QueryContexts.scala
 
b/sql/api/src/main/scala/org/apache/spark/sql/catalyst/trees/QueryContexts.scala
index 57271e535afb..c716002ef35c 100644
--- 
a/sql/api/src/main/scala/org/apache/spark/sql/catalyst/trees/QueryContexts.scala
+++ 
b/sql/api/src/main/scala/org/apache/spark/sql/catalyst/trees/QueryContexts.scala
@@ -17,7 +17,7 @@
 
 package org.apache.spark.sql.catalyst.trees
 
-import org.apache.spark.{QueryContext, QueryContextType}
+import org.apache.spark.{QueryContext, QueryContextType, 
SparkUnsupportedOperationException}
 
 /** The class represents error context of a SQL query. */
 case class SQLQueryContext(
@@ -131,16 +131,16 @@ case class SQLQueryContext(
   originStartIndex.get <= originStopIndex.get
   }
 
-  override def callSite: String = throw new UnsupportedOperationException
+  override def callSite: String = throw SparkUnsupportedOperationException()
 }
 
 case class DataFrameQueryContext(stackTrace: Seq[StackTraceElement]) extends 
QueryContext {
   override val contextType = QueryContextType.DataFrame
 
-  override def objectType: String = throw new UnsupportedOperationException
-  override def objectName: String = throw new UnsupportedOperationException
-  override def startIndex: Int = throw new UnsupportedOperationException
-  override def stopIndex: Int = throw new UnsupportedOperationException
+  override def objectType: String = throw SparkUnsupportedOperationException()
+  override def objectName: String = throw SparkUnsupportedOperationException()
+  override def startIndex: Int = throw SparkUnsupportedOperationException()
+  override def stopIndex: Int = throw SparkUnsupportedOperationException()
 
   override val fragment: String = {
 stackTrace.headOption.map { firstElem =>
diff --git 
a/sql/api/src/main/scala/org/apache/spark/sql/catalyst/util/UDTUtils.scala 
b/sql/api/src/main/scala/org/apache/spark/sql/catalyst/util/UDTUtils.scala
index 98768a35e8a5..a98aa26d02ef 100644
---