[spark] branch master updated (94bbca3 -> 42f59ca)

2021-05-06 Thread dongjoon
This is an automated email from the ASF dual-hosted git repository.

dongjoon pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git.


from 94bbca3  [SPARK-35306][MLLIB][TESTS] Add benchmark results for BLASBenchmark created by GitHub Actions machines
 add 42f59ca  [SPARK-35133][SQL] Explain codegen works with AQE

No new revisions were added by this update.

Summary of changes:
 .../apache/spark/sql/execution/debug/package.scala |   6 ++
 .../scala/org/apache/spark/sql/ExplainSuite.scala  |  27 ++
 .../spark/sql/execution/debug/DebuggingSuite.scala | 105 +++--
 3 files changed, 89 insertions(+), 49 deletions(-)

-
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org



[spark] branch master updated (e834ef7 -> 94bbca3)

2021-05-06 Thread gurwls223
This is an automated email from the ASF dual-hosted git repository.

gurwls223 pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git.


from e834ef7  [SPARK-35293][SQL][TESTS][FOLLOWUP] Update the hash key to refresh TPC-DS cache data in forked GA jobs
 add 94bbca3  [SPARK-35306][MLLIB][TESTS] Add benchmark results for BLASBenchmark created by GitHub Actions machines

No new revisions were added by this update.

Summary of changes:
 .../benchmarks/BLASBenchmark-jdk11-results.txt | 252 +
 mllib-local/benchmarks/BLASBenchmark-results.txt   | 252 +
 2 files changed, 504 insertions(+)
 create mode 100644 mllib-local/benchmarks/BLASBenchmark-jdk11-results.txt
 create mode 100644 mllib-local/benchmarks/BLASBenchmark-results.txt

-
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org



[spark] branch master updated: [SPARK-35293][SQL][TESTS][FOLLOWUP] Update the hash key to refresh TPC-DS cache data in forked GA jobs

2021-05-06 Thread dongjoon
This is an automated email from the ASF dual-hosted git repository.

dongjoon pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/master by this push:
 new e834ef7  [SPARK-35293][SQL][TESTS][FOLLOWUP] Update the hash key to refresh TPC-DS cache data in forked GA jobs
e834ef7 is described below

commit e834ef74dcbfc29f5288a41392dc3d5c08119fcf
Author: Takeshi Yamamuro 
AuthorDate: Thu May 6 16:06:50 2021 -0700

[SPARK-35293][SQL][TESTS][FOLLOWUP] Update the hash key to refresh TPC-DS cache data in forked GA jobs

### What changes were proposed in this pull request?

This is a follow-up PR of #32420; it updates the hash key to refresh the TPC-DS cache data in forked GA jobs.

### Why are the changes needed?

To recover GA jobs.

### Does this PR introduce _any_ user-facing change?

No.

### How was this patch tested?

GA passed.

Closes #32460 from maropu/SPARK-35293-FOLLOWUP.

Authored-by: Takeshi Yamamuro 
Signed-off-by: Dongjoon Hyun 
---
 .github/workflows/build_and_test.yml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/.github/workflows/build_and_test.yml b/.github/workflows/build_and_test.yml
index 3b6ce04..e6b5446 100644
--- a/.github/workflows/build_and_test.yml
+++ b/.github/workflows/build_and_test.yml
@@ -520,7 +520,7 @@ jobs:
   uses: actions/cache@v2
   with:
 path: ./tpcds-sf-1
-key: tpcds-${{ hashFiles('sql/core/src/test/scala/org/apache/spark/sql/TPCDSSchema.scala') }}
+key: tpcds-${{ hashFiles('.github/workflows/build_and_test.yml', 'sql/core/src/test/scala/org/apache/spark/sql/TPCDSSchema.scala') }}
 - name: Checkout tpcds-kit repository
   if: steps.cache-tpcds-sf-1.outputs.cache-hit != 'true'
   uses: actions/checkout@v2

-
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org



[spark] branch master updated: [SPARK-35326][BUILD][FOLLOWUP] Update dependency manifest files

2021-05-06 Thread dongjoon
This is an automated email from the ASF dual-hosted git repository.

dongjoon pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/master by this push:
 new 482b43d  [SPARK-35326][BUILD][FOLLOWUP] Update dependency manifest files
482b43d is described below

commit 482b43d78de2fbeb85a2ba54c59e08dab45f59aa
Author: Dongjoon Hyun 
AuthorDate: Thu May 6 09:08:10 2021 -0700

[SPARK-35326][BUILD][FOLLOWUP] Update dependency manifest files

### What changes were proposed in this pull request?

This is a followup of https://github.com/apache/spark/pull/32453.

### Why are the changes needed?

Jenkins doesn't check dependency manifest files.

### Does this PR introduce _any_ user-facing change?

No.

### How was this patch tested?

Pass the GitHub Action or manually.

Closes #32458 from dongjoon-hyun/SPARK-35326.

Authored-by: Dongjoon Hyun 
Signed-off-by: Dongjoon Hyun 
---
 dev/deps/spark-deps-hadoop-2.7-hive-2.3 | 13 ++---
 dev/deps/spark-deps-hadoop-3.2-hive-2.3 | 13 ++---
 2 files changed, 12 insertions(+), 14 deletions(-)

diff --git a/dev/deps/spark-deps-hadoop-2.7-hive-2.3 b/dev/deps/spark-deps-hadoop-2.7-hive-2.3
index 0500bd6..c8077d5 100644
--- a/dev/deps/spark-deps-hadoop-2.7-hive-2.3
+++ b/dev/deps/spark-deps-hadoop-2.7-hive-2.3
@@ -131,13 +131,12 @@ jaxb-api/2.2.11//jaxb-api-2.2.11.jar
 jaxb-runtime/2.3.2//jaxb-runtime-2.3.2.jar
 jcl-over-slf4j/1.7.30//jcl-over-slf4j-1.7.30.jar
 jdo-api/3.0.1//jdo-api-3.0.1.jar
-jersey-client/2.30//jersey-client-2.30.jar
-jersey-common/2.30//jersey-common-2.30.jar
-jersey-container-servlet-core/2.30//jersey-container-servlet-core-2.30.jar
-jersey-container-servlet/2.30//jersey-container-servlet-2.30.jar
-jersey-hk2/2.30//jersey-hk2-2.30.jar
-jersey-media-jaxb/2.30//jersey-media-jaxb-2.30.jar
-jersey-server/2.30//jersey-server-2.30.jar
+jersey-client/2.34//jersey-client-2.34.jar
+jersey-common/2.34//jersey-common-2.34.jar
+jersey-container-servlet-core/2.34//jersey-container-servlet-core-2.34.jar
+jersey-container-servlet/2.34//jersey-container-servlet-2.34.jar
+jersey-hk2/2.34//jersey-hk2-2.34.jar
+jersey-server/2.34//jersey-server-2.34.jar
 jetty-sslengine/6.1.26//jetty-sslengine-6.1.26.jar
 jetty-util/6.1.26//jetty-util-6.1.26.jar
 jetty/6.1.26//jetty-6.1.26.jar
diff --git a/dev/deps/spark-deps-hadoop-3.2-hive-2.3 b/dev/deps/spark-deps-hadoop-3.2-hive-2.3
index c05b8bc..841dd52 100644
--- a/dev/deps/spark-deps-hadoop-3.2-hive-2.3
+++ b/dev/deps/spark-deps-hadoop-3.2-hive-2.3
@@ -106,13 +106,12 @@ jaxb-api/2.2.11//jaxb-api-2.2.11.jar
 jaxb-runtime/2.3.2//jaxb-runtime-2.3.2.jar
 jcl-over-slf4j/1.7.30//jcl-over-slf4j-1.7.30.jar
 jdo-api/3.0.1//jdo-api-3.0.1.jar
-jersey-client/2.30//jersey-client-2.30.jar
-jersey-common/2.30//jersey-common-2.30.jar
-jersey-container-servlet-core/2.30//jersey-container-servlet-core-2.30.jar
-jersey-container-servlet/2.30//jersey-container-servlet-2.30.jar
-jersey-hk2/2.30//jersey-hk2-2.30.jar
-jersey-media-jaxb/2.30//jersey-media-jaxb-2.30.jar
-jersey-server/2.30//jersey-server-2.30.jar
+jersey-client/2.34//jersey-client-2.34.jar
+jersey-common/2.34//jersey-common-2.34.jar
+jersey-container-servlet-core/2.34//jersey-container-servlet-core-2.34.jar
+jersey-container-servlet/2.34//jersey-container-servlet-2.34.jar
+jersey-hk2/2.34//jersey-hk2-2.34.jar
+jersey-server/2.34//jersey-server-2.34.jar
 jline/2.14.6//jline-2.14.6.jar
 joda-time/2.10.5//joda-time-2.10.5.jar
 jodd-core/3.5.2//jodd-core-3.5.2.jar

-
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org



[spark] branch master updated: [SPARK-35326][BUILD] Upgrade Jersey to 2.34

2021-05-06 Thread dongjoon
This is an automated email from the ASF dual-hosted git repository.

dongjoon pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/master by this push:
 new bb93547  [SPARK-35326][BUILD] Upgrade Jersey to 2.34
bb93547 is described below

commit bb93547cdf0791c38dffaf2ca28bf04b85680100
Author: Kousuke Saruta 
AuthorDate: Thu May 6 08:36:32 2021 -0700

[SPARK-35326][BUILD] Upgrade Jersey to 2.34

### What changes were proposed in this pull request?

This PR upgrades Jersey to 2.34.

### Why are the changes needed?

CVE-2021-28168, a local information disclosure vulnerability, has been reported (https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-28168).
Spark 3.1.1, 3.0.2, and 3.2.0 use the affected version, 2.30.

### Does this PR introduce _any_ user-facing change?

It's not clear how large the impact is, but Spark uses an affected version of Jersey, so it's better to upgrade it just in case.

### How was this patch tested?

CI.

Closes #32453 from sarutak/upgrade-jersey.

Authored-by: Kousuke Saruta 
Signed-off-by: Dongjoon Hyun 
---
 pom.xml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/pom.xml b/pom.xml
index 827b405..f8ab52b 100644
--- a/pom.xml
+++ b/pom.xml
@@ -185,7 +185,7 @@
 4.1.17
 14.0.1
 3.0.16
-    <jersey.version>2.30</jersey.version>
+    <jersey.version>2.34</jersey.version>
 2.10.5
 3.5.2
 3.0.0

-
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org



[spark] branch master updated (6cd5cf5 -> dfb3343)

2021-05-06 Thread kabhwan
This is an automated email from the ASF dual-hosted git repository.

kabhwan pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git.


from 6cd5cf5  [SPARK-35215][SQL] Update custom metric per certain rows and at the end of the task
 add dfb3343  [SPARK-34526][SS] Ignore the error when checking the path in FileStreamSink.hasMetadata

No new revisions were added by this update.

Summary of changes:
 .../sql/execution/streaming/FileStreamSink.scala   | 20 +++---
 .../spark/sql/streaming/FileStreamSinkSuite.scala  | 45 +-
 2 files changed, 57 insertions(+), 8 deletions(-)

-
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org



[spark] branch master updated: [SPARK-35215][SQL] Update custom metric per certain rows and at the end of the task

2021-05-06 Thread wenchen
This is an automated email from the ASF dual-hosted git repository.

wenchen pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/master by this push:
 new 6cd5cf5  [SPARK-35215][SQL] Update custom metric per certain rows and at the end of the task
6cd5cf5 is described below

commit 6cd5cf57229050ba9542a644ed0a4c844949a832
Author: Liang-Chi Hsieh 
AuthorDate: Thu May 6 13:21:08 2021 +

[SPARK-35215][SQL] Update custom metric per certain rows and at the end of the task

### What changes were proposed in this pull request?

This patch changes custom metric updates to happen once per certain number of rows (currently 100) instead of once per row.

### Why are the changes needed?

Based on the previous discussion (https://github.com/apache/spark/pull/31451#discussion_r605413557), we should update custom metrics only every certain number of rows (e.g. 100) and also at the end of the task. Updating per row doesn't bring much benefit.
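
A minimal sketch of the throttling pattern described above, assuming nothing more than a generic iterator and a metrics callback (the object and method names here are illustrative only; the actual change lives in `PartitionIterator` and `CustomMetrics`, shown in the diff below):

```scala
// Illustrative sketch only: push metric values to a sink every N rows and
// once more after the last row, rather than on every single row.
object ThrottledMetricsSketch {
  val NumRowsPerUpdate = 100  // mirrors CustomMetrics.NUM_ROWS_PER_UPDATE in the diff

  def consume[T](rows: Iterator[T])(updateMetrics: () => Unit): Unit = {
    var numRows = 0L
    while (rows.hasNext) {
      if (numRows % NumRowsPerUpdate == 0) updateMetrics()  // per certain rows
      rows.next()
      numRows += 1
    }
    updateMetrics()  // one final update at the end of the task
  }

  def main(args: Array[String]): Unit = {
    var updates = 0
    consume((1 to 250).iterator)(() => updates += 1)
    println(s"metric updates for 250 rows: $updates")  // 3 throttled updates + 1 final = 4
  }
}
```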

### Does this PR introduce _any_ user-facing change?

No

### How was this patch tested?

Existing unit test.

Closes #32330 from viirya/metric-update.

Authored-by: Liang-Chi Hsieh 
Signed-off-by: Wenchen Fan 
---
 .../sql/execution/datasources/v2/DataSourceRDD.scala  | 19 ---
 .../spark/sql/execution/metric/CustomMetrics.scala| 15 ++-
 .../continuous/ContinuousDataSourceRDD.scala  |  9 ++---
 3 files changed, 32 insertions(+), 11 deletions(-)

diff --git a/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/v2/DataSourceRDD.scala b/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/v2/DataSourceRDD.scala
index 7850dfa..217a1d5 100644
--- a/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/v2/DataSourceRDD.scala
+++ b/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/v2/DataSourceRDD.scala
@@ -26,7 +26,7 @@ import org.apache.spark.rdd.RDD
 import org.apache.spark.sql.catalyst.InternalRow
 import org.apache.spark.sql.connector.read.{InputPartition, PartitionReader, PartitionReaderFactory}
 import org.apache.spark.sql.errors.QueryExecutionErrors
-import org.apache.spark.sql.execution.metric.SQLMetric
+import org.apache.spark.sql.execution.metric.{CustomMetrics, SQLMetric}
 import org.apache.spark.sql.vectorized.ColumnarBatch
 
class DataSourceRDDPartition(val index: Int, val inputPartition: InputPartition)
@@ -66,7 +66,12 @@ class DataSourceRDD(
 new PartitionIterator[InternalRow](rowReader, customMetrics))
   (iter, rowReader)
 }
-context.addTaskCompletionListener[Unit](_ => reader.close())
+context.addTaskCompletionListener[Unit] { _ =>
+  // In case of early stopping before consuming the entire iterator,
+  // we need to do one more metric update at the end of the task.
+  CustomMetrics.updateMetrics(reader.currentMetricsValues, customMetrics)
+  reader.close()
+}
 // TODO: SPARK-25083 remove the type erasure hack in data source scan
 new InterruptibleIterator(context, iter.asInstanceOf[Iterator[InternalRow]])
   }
@@ -81,6 +86,8 @@ private class PartitionIterator[T](
 customMetrics: Map[String, SQLMetric]) extends Iterator[T] {
   private[this] var valuePrepared = false
 
+  private var numRow = 0L
+
   override def hasNext: Boolean = {
 if (!valuePrepared) {
   valuePrepared = reader.next()
@@ -92,12 +99,10 @@ private class PartitionIterator[T](
 if (!hasNext) {
   throw QueryExecutionErrors.endOfStreamError()
 }
-reader.currentMetricsValues.foreach { metric =>
-  assert(customMetrics.contains(metric.name()),
-s"Custom metrics ${customMetrics.keys.mkString(", ")} do not contain 
the metric " +
-  s"${metric.name()}")
-  customMetrics(metric.name()).set(metric.value())
+if (numRow % CustomMetrics.NUM_ROWS_PER_UPDATE == 0) {
+  CustomMetrics.updateMetrics(reader.currentMetricsValues, customMetrics)
 }
+numRow += 1
 valuePrepared = false
 reader.get()
   }
diff --git a/sql/core/src/main/scala/org/apache/spark/sql/execution/metric/CustomMetrics.scala b/sql/core/src/main/scala/org/apache/spark/sql/execution/metric/CustomMetrics.scala
index f2449a1..3e6cad2 100644
--- a/sql/core/src/main/scala/org/apache/spark/sql/execution/metric/CustomMetrics.scala
+++ b/sql/core/src/main/scala/org/apache/spark/sql/execution/metric/CustomMetrics.scala
@@ -17,11 +17,13 @@
 
 package org.apache.spark.sql.execution.metric
 
-import org.apache.spark.sql.connector.metric.CustomMetric
+import org.apache.spark.sql.connector.metric.{CustomMetric, CustomTaskMetric}
 
 object CustomMetrics {
   private[spark] val V2_CUSTOM = "v2Custom"
 
+  private[spark] val NUM_ROWS_PER_UPDATE = 100
+
   /**
   * Given a class name, builds and returns a metric type for a V2 custom metric class
   * `CustomMetric`.

[spark] branch master updated: [SPARK-35240][SS] Use CheckpointFileManager for checkpoint file manipulation

2021-05-06 Thread viirya
This is an automated email from the ASF dual-hosted git repository.

viirya pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/master by this push:
 new c6d3f37  [SPARK-35240][SS] Use CheckpointFileManager for checkpoint file manipulation
c6d3f37 is described below

commit c6d3f3778faa308308492fd758d2e9bd027f4768
Author: Liang-Chi Hsieh 
AuthorDate: Thu May 6 00:49:37 2021 -0700

[SPARK-35240][SS] Use CheckpointFileManager for checkpoint file manipulation

### What changes were proposed in this pull request?

This patch changes a few places that use the `FileSystem` API to manipulate checkpoint files so that they go through `CheckpointFileManager` instead.

### Why are the changes needed?

`CheckpointFileManager` is designed to handle checkpoint file manipulation. However, a few places still obtain a `FileSystem` from checkpoint files/paths. We should use `CheckpointFileManager` to manipulate checkpoint files. For example, we may want to use a dedicated storage system for checkpoint files. If all checkpoint file manipulation goes through `CheckpointFileManager`, we only need to implement `CheckpointFileManager` for that storage system, and don't need to implement `FileSy [...]
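
As a rough sketch of that pattern (the `CheckpointProbe` helper below is hypothetical; `CheckpointFileManager.create` and `exists` are the internal calls used in the `ResolveWriteToStream` diff that follows):

```scala
// Illustrative sketch: resolve checkpoint files through CheckpointFileManager
// instead of a FileSystem obtained directly from the checkpoint path.
// CheckpointFileManager is an internal Spark API; CheckpointProbe is hypothetical.
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.Path
import org.apache.spark.sql.execution.streaming.CheckpointFileManager

object CheckpointProbe {
  def offsetsAlreadyExist(checkpointLocation: String, hadoopConf: Configuration): Boolean = {
    // Previously: checkpointPath.getFileSystem(hadoopConf).exists(checkpointPath)
    val fileManager = CheckpointFileManager.create(new Path(checkpointLocation), hadoopConf)
    fileManager.exists(new Path(checkpointLocation, "offsets"))
  }
}
```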

### Does this PR introduce _any_ user-facing change?

No

### How was this patch tested?

Existing unit tests.

Closes #32361 from viirya/checkpoint-manager.

Authored-by: Liang-Chi Hsieh 
Signed-off-by: Liang-Chi Hsieh 
---
 .../execution/streaming/CheckpointFileManager.scala| 18 ++
 .../sql/execution/streaming/ResolveWriteToStream.scala | 11 +--
 .../sql/execution/streaming/StreamExecution.scala  |  7 +--
 .../spark/sql/execution/streaming/StreamMetadata.scala |  7 ---
 4 files changed, 32 insertions(+), 11 deletions(-)

diff --git a/sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/CheckpointFileManager.scala b/sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/CheckpointFileManager.scala
index c2b69ec..85484d3 100644
--- a/sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/CheckpointFileManager.scala
+++ b/sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/CheckpointFileManager.scala
@@ -83,6 +83,12 @@ trait CheckpointFileManager {
 
   /** Is the default file system this implementation is operating on the local file system. */
   def isLocal: Boolean
+
+  /**
+   * Creates the checkpoint path if it does not exist, and returns the qualified
+   * checkpoint path.
+   */
+  def createCheckpointDirectory(): Path
 }
 
 object CheckpointFileManager extends Logging {
@@ -285,6 +291,12 @@ class FileSystemBasedCheckpointFileManager(path: Path, hadoopConf: Configuration
 case _: LocalFileSystem | _: RawLocalFileSystem => true
 case _ => false
   }
+
+  override def createCheckpointDirectory(): Path = {
+val qualifiedPath = fs.makeQualified(path)
+fs.mkdirs(qualifiedPath, FsPermission.getDirDefault)
+qualifiedPath
+  }
 }
 
 
@@ -351,6 +363,12 @@ class FileContextBasedCheckpointFileManager(path: Path, hadoopConf: Configuratio
 case _ => false
   }
 
+  override def createCheckpointDirectory(): Path = {
+val qualifiedPath = fc.makeQualified(path)
+fc.mkdir(qualifiedPath, FsPermission.getDirDefault, true)
+qualifiedPath
+  }
+
   private def mayRemoveCrcFile(path: Path): Unit = {
 try {
   val checksumFile = new Path(path.getParent, s".${path.getName}.crc")
diff --git a/sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/ResolveWriteToStream.scala b/sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/ResolveWriteToStream.scala
index 3e01b31..10bc927 100644
--- a/sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/ResolveWriteToStream.scala
+++ b/sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/ResolveWriteToStream.scala
@@ -89,11 +89,12 @@ object ResolveWriteToStream extends Rule[LogicalPlan] with SQLConfHelper {
 s"""SparkSession.conf.set("${SQLConf.CHECKPOINT_LOCATION.key}", 
...)""")
   }
 }
+val fileManager = CheckpointFileManager.create(new Path(checkpointLocation), s.hadoopConf)
+
 // If offsets have already been created, we trying to resume a query.
 if (!s.recoverFromCheckpointLocation) {
   val checkpointPath = new Path(checkpointLocation, "offsets")
-  val fs = checkpointPath.getFileSystem(s.hadoopConf)
-  if (fs.exists(checkpointPath)) {
+  if (fileManager.exists(checkpointPath)) {
 throw new AnalysisException(
   s"This query does not support recovering from checkpoint location. " 
+
 s"Delete $checkpointPath to start over.")
@@ -102,7 +103,6 @@ object ResolveWriteToStream extends Rule[LogicalPlan] with SQLConfHelper {
 
 val resolvedCheckpointRoot = {
   val checkpointPath = new 

[spark] branch master updated (5c67d0c -> 3f5a209)

2021-05-06 Thread wenchen
This is an automated email from the ASF dual-hosted git repository.

wenchen pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git.


from 5c67d0c  [SPARK-35293][SQL][TESTS] Use the newer dsdgen for TPCDSQueryTestSuite
 add 3f5a209  [SPARK-35318][SQL] Hide internal view properties for describe table cmd

No new revisions were added by this update.

Summary of changes:
 .../spark/sql/catalyst/catalog/interface.scala |  4 +++-
 .../sql-tests/results/charvarchar.sql.out  |  8 +++
 .../resources/sql-tests/results/describe.sql.out   |  4 ++--
 .../results/postgreSQL/create_view.sql.out | 28 +++---
 .../sql-tests/results/show-tables.sql.out  |  2 +-
 5 files changed, 24 insertions(+), 22 deletions(-)

-
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org



[spark] branch master updated (19661f6 -> 5c67d0c)

2021-05-06 Thread yamamuro
This is an automated email from the ASF dual-hosted git repository.

yamamuro pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git.


from 19661f6  [SPARK-35325][SQL][TESTS] Add nested column ORC encryption test case
 add 5c67d0c  [SPARK-35293][SQL][TESTS] Use the newer dsdgen for TPCDSQueryTestSuite

No new revisions were added by this update.

Summary of changes:
 .github/workflows/build_and_test.yml   |6 +-
 .../resources/tpcds-query-results/v1_4/q1.sql.out  |  184 +-
 .../resources/tpcds-query-results/v1_4/q10.sql.out |   11 +-
 .../resources/tpcds-query-results/v1_4/q11.sql.out |6 +
 .../resources/tpcds-query-results/v1_4/q12.sql.out |  200 +-
 .../resources/tpcds-query-results/v1_4/q13.sql.out |2 +-
 .../tpcds-query-results/v1_4/q14a.sql.out  |  200 +-
 .../tpcds-query-results/v1_4/q14b.sql.out  |  200 +-
 .../resources/tpcds-query-results/v1_4/q15.sql.out |  200 +-
 .../resources/tpcds-query-results/v1_4/q16.sql.out |2 +-
 .../resources/tpcds-query-results/v1_4/q17.sql.out |2 +-
 .../resources/tpcds-query-results/v1_4/q18.sql.out |  200 +-
 .../resources/tpcds-query-results/v1_4/q19.sql.out |  200 +-
 .../resources/tpcds-query-results/v1_4/q2.sql.out  | 5026 +--
 .../resources/tpcds-query-results/v1_4/q20.sql.out |  200 +-
 .../resources/tpcds-query-results/v1_4/q21.sql.out |  200 +-
 .../resources/tpcds-query-results/v1_4/q22.sql.out |  200 +-
 .../tpcds-query-results/v1_4/q23a.sql.out  |2 +-
 .../tpcds-query-results/v1_4/q23b.sql.out  |5 +-
 .../tpcds-query-results/v1_4/q24a.sql.out  |8 +-
 .../tpcds-query-results/v1_4/q24b.sql.out  |2 +-
 .../resources/tpcds-query-results/v1_4/q25.sql.out |2 +-
 .../resources/tpcds-query-results/v1_4/q26.sql.out |  200 +-
 .../resources/tpcds-query-results/v1_4/q27.sql.out |  200 +-
 .../resources/tpcds-query-results/v1_4/q28.sql.out |2 +-
 .../resources/tpcds-query-results/v1_4/q29.sql.out |3 +-
 .../resources/tpcds-query-results/v1_4/q3.sql.out  |  172 +-
 .../resources/tpcds-query-results/v1_4/q30.sql.out |  200 +-
 .../resources/tpcds-query-results/v1_4/q31.sql.out |  112 +-
 .../resources/tpcds-query-results/v1_4/q32.sql.out |2 -
 .../resources/tpcds-query-results/v1_4/q33.sql.out |  200 +-
 .../resources/tpcds-query-results/v1_4/q34.sql.out |  434 +-
 .../resources/tpcds-query-results/v1_4/q35.sql.out |  188 +-
 .../resources/tpcds-query-results/v1_4/q36.sql.out |  200 +-
 .../resources/tpcds-query-results/v1_4/q37.sql.out |3 +-
 .../resources/tpcds-query-results/v1_4/q38.sql.out |2 +-
 .../tpcds-query-results/v1_4/q39a.sql.out  |  449 +-
 .../tpcds-query-results/v1_4/q39b.sql.out  |   24 +-
 .../resources/tpcds-query-results/v1_4/q4.sql.out  |   10 +-
 .../resources/tpcds-query-results/v1_4/q40.sql.out |  200 +-
 .../resources/tpcds-query-results/v1_4/q41.sql.out |9 +-
 .../resources/tpcds-query-results/v1_4/q42.sql.out |   21 +-
 .../resources/tpcds-query-results/v1_4/q43.sql.out |   12 +-
 .../resources/tpcds-query-results/v1_4/q44.sql.out |   20 +-
 .../resources/tpcds-query-results/v1_4/q45.sql.out |   39 +-
 .../resources/tpcds-query-results/v1_4/q46.sql.out |  200 +-
 .../resources/tpcds-query-results/v1_4/q47.sql.out |  200 +-
 .../resources/tpcds-query-results/v1_4/q48.sql.out |2 +-
 .../resources/tpcds-query-results/v1_4/q49.sql.out |   64 +-
 .../resources/tpcds-query-results/v1_4/q5.sql.out  |  200 +-
 .../resources/tpcds-query-results/v1_4/q50.sql.out |   12 +-
 .../resources/tpcds-query-results/v1_4/q51.sql.out |  200 +-
 .../resources/tpcds-query-results/v1_4/q52.sql.out |  200 +-
 .../resources/tpcds-query-results/v1_4/q53.sql.out |  200 +-
 .../resources/tpcds-query-results/v1_4/q54.sql.out |2 +-
 .../resources/tpcds-query-results/v1_4/q55.sql.out |  200 +-
 .../resources/tpcds-query-results/v1_4/q56.sql.out |  200 +-
 .../resources/tpcds-query-results/v1_4/q57.sql.out |  200 +-
 .../resources/tpcds-query-results/v1_4/q58.sql.out |4 +-
 .../resources/tpcds-query-results/v1_4/q59.sql.out |  200 +-
 .../resources/tpcds-query-results/v1_4/q6.sql.out  |   91 +-
 .../resources/tpcds-query-results/v1_4/q60.sql.out |  200 +-
 .../resources/tpcds-query-results/v1_4/q61.sql.out |2 +-
 .../resources/tpcds-query-results/v1_4/q62.sql.out |  200 +-
 .../resources/tpcds-query-results/v1_4/q63.sql.out |  200 +-
 .../resources/tpcds-query-results/v1_4/q64.sql.out |   19 +-
 .../resources/tpcds-query-results/v1_4/q65.sql.out |  200 +-
 .../resources/tpcds-query-results/v1_4/q66.sql.out |   10 +-
 .../resources/tpcds-query-results/v1_4/q67.sql.out |  200 +-
 .../resources/tpcds-query-results/v1_4/q68.sql.out |  200 +-
 .../resources/tpcds-query-results/v1_4/q69.sql.out |  182 +-
 .../resources/tpcds-query-results/v1_4/q7.sql.out  |  200 +-
 .../resources/tpcds-query-results/v1_4/q70.sql.out |6 +-