[spark] branch branch-3.0 updated: [SPARK-35244][SQL] Invoke should throw the original exception

2021-04-27 Thread wenchen
This is an automated email from the ASF dual-hosted git repository.

wenchen pushed a commit to branch branch-3.0
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/branch-3.0 by this push:
 new 6e83789b [SPARK-35244][SQL] Invoke should throw the original exception
6e83789b is described below

commit 6e83789be5fe5141affee600f9b614996cd91482
Author: Wenchen Fan 
AuthorDate: Wed Apr 28 10:45:04 2021 +0900

[SPARK-35244][SQL] Invoke should throw the original exception

### What changes were proposed in this pull request?

This PR updates the interpreted code path of invoke expressions to unwrap the `InvocationTargetException` and re-throw the original exception.

### Why are the changes needed?

Make the interpreted and codegen paths consistent for invoke expressions.

### Does this PR introduce _any_ user-facing change?

no

### How was this patch tested?

new UT

Closes #32370 from cloud-fan/minor.

Authored-by: Wenchen Fan 
Signed-off-by: hyukjinkwon 
---
 .../spark/sql/catalyst/expressions/objects/objects.scala   |  7 ++-
 .../sql/catalyst/expressions/ObjectExpressionsSuite.scala  | 10 ++
 2 files changed, 16 insertions(+), 1 deletion(-)
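
The diff below wraps the reflective `method.invoke` call and re-throws the cause of any `InvocationTargetException`. The JVM behavior this relies on can be shown with a minimal, self-contained Java sketch (the class and helper names here are illustrative, not Spark's API):

```java
import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Method;

public class UnwrapDemo {
    // Invokes a method reflectively and, like the patched code path,
    // surfaces the callee's original exception instead of the wrapper.
    static Object invokeUnwrapped(Object target, Method m, Object... args) throws Throwable {
        try {
            return m.invoke(target, args);
        } catch (InvocationTargetException e) {
            // Reflection wraps whatever the callee threw; getCause() recovers it.
            throw e.getCause();
        }
    }

    public static void main(String[] args) throws Throwable {
        // Instance-method case from the new test: "a".substring(3) is out of range.
        Method substring = String.class.getMethod("substring", int.class);
        try {
            invokeUnwrapped("a", substring, 3);
        } catch (StringIndexOutOfBoundsException e) {
            System.out.println("caught original: " + e.getClass().getSimpleName());
        }

        // Static-method case from the new test: Math.addExact overflows.
        Method addExact = Math.class.getMethod("addExact", int.class, int.class);
        try {
            invokeUnwrapped(null, addExact, Integer.MAX_VALUE, 1);
        } catch (ArithmeticException e) {
            System.out.println("caught original: " + e.getClass().getSimpleName());
        }
    }
}
```

Without the unwrapping, both call sites would see a generic `InvocationTargetException`, which the codegen path already avoided because generated code calls the method directly.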

diff --git a/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/objects/objects.scala b/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/objects/objects.scala
index 7ebf70b..71eacce 100644
--- a/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/objects/objects.scala
+++ b/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/objects/objects.scala
@@ -127,7 +127,12 @@ trait InvokeLike extends Expression with NonSQLExpression {
       // return null if one of arguments is null
       null
     } else {
-      val ret = method.invoke(obj, args: _*)
+      val ret = try {
+        method.invoke(obj, args: _*)
+      } catch {
+        // Re-throw the original exception.
+        case e: java.lang.reflect.InvocationTargetException => throw e.getCause
+      }
       val boxedClass = ScalaReflection.typeBoxedJavaMapping.get(dataType)
       if (boxedClass.isDefined) {
         boxedClass.get.cast(ret)
diff --git a/sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/expressions/ObjectExpressionsSuite.scala b/sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/expressions/ObjectExpressionsSuite.scala
index c401493..50c76f1 100644
--- a/sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/expressions/ObjectExpressionsSuite.scala
+++ b/sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/expressions/ObjectExpressionsSuite.scala
@@ -604,6 +604,16 @@ class ObjectExpressionsSuite extends SparkFunSuite with ExpressionEvalHelper {
     checkExceptionInExpression[RuntimeException](
       serializer4, EmptyRow, "Cannot use null as map key!")
   }
+
+  test("SPARK-35244: invoke should throw the original exception") {
+    val strClsType = ObjectType(classOf[String])
+    checkExceptionInExpression[StringIndexOutOfBoundsException](
+      Invoke(Literal("a", strClsType), "substring", strClsType, Seq(Literal(3))), "")
+
+    val mathCls = classOf[Math]
+    checkExceptionInExpression[ArithmeticException](
+      StaticInvoke(mathCls, IntegerType, "addExact", Seq(Literal(Int.MaxValue), Literal(1))), "")
+  }
 }
 
 class TestBean extends Serializable {

-
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org



[spark] branch branch-3.1 updated: [SPARK-35244][SQL] Invoke should throw the original exception

2021-04-27 Thread wenchen
This is an automated email from the ASF dual-hosted git repository.

wenchen pushed a commit to branch branch-3.1
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/branch-3.1 by this push:
 new e58055b  [SPARK-35244][SQL] Invoke should throw the original exception
e58055b is described below

commit e58055bdec919704ce82d659aad2d18913de7512
Author: Wenchen Fan 
AuthorDate: Wed Apr 28 10:45:04 2021 +0900

[SPARK-35244][SQL] Invoke should throw the original exception

### What changes were proposed in this pull request?

This PR updates the interpreted code path of invoke expressions to unwrap the `InvocationTargetException` and re-throw the original exception.

### Why are the changes needed?

Make the interpreted and codegen paths consistent for invoke expressions.

### Does this PR introduce _any_ user-facing change?

no

### How was this patch tested?

new UT

Closes #32370 from cloud-fan/minor.

Authored-by: Wenchen Fan 
Signed-off-by: hyukjinkwon 
---
 .../spark/sql/catalyst/expressions/objects/objects.scala   |  7 ++-
 .../sql/catalyst/expressions/ObjectExpressionsSuite.scala  | 10 ++
 2 files changed, 16 insertions(+), 1 deletion(-)

diff --git a/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/objects/objects.scala b/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/objects/objects.scala
index 76ba523..b93e2e6 100644
--- a/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/objects/objects.scala
+++ b/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/objects/objects.scala
@@ -127,7 +127,12 @@ trait InvokeLike extends Expression with NonSQLExpression {
       // return null if one of arguments is null
       null
     } else {
-      val ret = method.invoke(obj, args: _*)
+      val ret = try {
+        method.invoke(obj, args: _*)
+      } catch {
+        // Re-throw the original exception.
+        case e: java.lang.reflect.InvocationTargetException => throw e.getCause
+      }
       val boxedClass = ScalaReflection.typeBoxedJavaMapping.get(dataType)
       if (boxedClass.isDefined) {
         boxedClass.get.cast(ret)
diff --git a/sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/expressions/ObjectExpressionsSuite.scala b/sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/expressions/ObjectExpressionsSuite.scala
index bc2b93e..6e71c95 100644
--- a/sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/expressions/ObjectExpressionsSuite.scala
+++ b/sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/expressions/ObjectExpressionsSuite.scala
@@ -608,6 +608,16 @@ class ObjectExpressionsSuite extends SparkFunSuite with ExpressionEvalHelper {
     checkExceptionInExpression[RuntimeException](
       serializer4, EmptyRow, "Cannot use null as map key!")
   }
+
+  test("SPARK-35244: invoke should throw the original exception") {
+    val strClsType = ObjectType(classOf[String])
+    checkExceptionInExpression[StringIndexOutOfBoundsException](
+      Invoke(Literal("a", strClsType), "substring", strClsType, Seq(Literal(3))), "")
+
+    val mathCls = classOf[Math]
+    checkExceptionInExpression[ArithmeticException](
+      StaticInvoke(mathCls, IntegerType, "addExact", Seq(Literal(Int.MaxValue), Literal(1))), "")
+  }
 }
 
 class TestBean extends Serializable {




[spark] branch master updated (046c8c3 -> 56bb815)

2021-04-27 Thread maxgekk
This is an automated email from the ASF dual-hosted git repository.

maxgekk pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git.


from 046c8c3  [SPARK-34878][SQL][TESTS] Check actual sizes of year-month and day-time intervals
 add 56bb815  [SPARK-35085][SQL] Get columns operation should handle ANSI interval column properly

No new revisions were added by this update.

Summary of changes:
 .../thriftserver/SparkGetColumnsOperation.scala|  5 +-
 .../thriftserver/SparkMetadataOperationSuite.scala | 54 ++
 2 files changed, 57 insertions(+), 2 deletions(-)




[spark] branch master updated (253a1ae -> 046c8c3)

2021-04-27 Thread maxgekk
This is an automated email from the ASF dual-hosted git repository.

maxgekk pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git.


from 253a1ae  [SPARK-35246][SS] Don't allow streaming-batch intersects
 add 046c8c3  [SPARK-34878][SQL][TESTS] Check actual sizes of year-month and day-time intervals

No new revisions were added by this update.

Summary of changes:
 .../spark/sql/execution/columnar/ColumnType.scala  | 65 ++
 .../sql/execution/columnar/ColumnTypeSuite.scala   |  3 +
 2 files changed, 68 insertions(+)




[spark] branch master updated (10c2b68 -> 253a1ae)

2021-04-27 Thread gurwls223
This is an automated email from the ASF dual-hosted git repository.

gurwls223 pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git.


from 10c2b68  [SPARK-35244][SQL] Invoke should throw the original exception
 add 253a1ae  [SPARK-35246][SS] Don't allow streaming-batch intersects

No new revisions were added by this update.

Summary of changes:
 .../spark/sql/catalyst/analysis/UnsupportedOperationChecker.scala | 4 ++--
 .../spark/sql/catalyst/analysis/UnsupportedOperationsSuite.scala  | 4 +++-
 2 files changed, 5 insertions(+), 3 deletions(-)




[spark] branch master updated (abb1f0c -> 10c2b68)

2021-04-27 Thread gurwls223
This is an automated email from the ASF dual-hosted git repository.

gurwls223 pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git.


from abb1f0c  [SPARK-35236][SQL] Support archive files as resources for CREATE FUNCTION USING syntax
 add 10c2b68  [SPARK-35244][SQL] Invoke should throw the original exception

No new revisions were added by this update.

Summary of changes:
 .../spark/sql/catalyst/expressions/objects/objects.scala   |  7 ++-
 .../sql/catalyst/expressions/ObjectExpressionsSuite.scala  | 10 ++
 2 files changed, 16 insertions(+), 1 deletion(-)




[spark] branch master updated: [SPARK-35236][SQL] Support archive files as resources for CREATE FUNCTION USING syntax

2021-04-27 Thread gurwls223
This is an automated email from the ASF dual-hosted git repository.

gurwls223 pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/master by this push:
 new abb1f0c  [SPARK-35236][SQL] Support archive files as resources for CREATE FUNCTION USING syntax
abb1f0c is described below

commit abb1f0c5d7e78b06dd5f2bf6856d2baf97f95b10
Author: Kousuke Saruta 
AuthorDate: Wed Apr 28 10:15:21 2021 +0900

[SPARK-35236][SQL] Support archive files as resources for CREATE FUNCTION USING syntax

### What changes were proposed in this pull request?

This PR proposes to let the `CREATE FUNCTION USING` syntax take archives as resources.

### Why are the changes needed?

The `CREATE FUNCTION USING` syntax has not supported archives as resources because archives were not supported in Spark SQL. Now that Spark SQL supports archives, the syntax can support them as well.

### Does this PR introduce _any_ user-facing change?

Yes. Users can specify archives for `CREATE FUNCTION USING` syntax.

### How was this patch tested?

New test.

Closes #32359 from sarutak/load-function-using-archive.

Authored-by: Kousuke Saruta 
Signed-off-by: hyukjinkwon 
---
 .../apache/spark/sql/internal/SessionState.scala   |  5 +-
 .../spark/sql/hive/execution/HiveUDFSuite.scala| 53 +-
 2 files changed, 53 insertions(+), 5 deletions(-)

diff --git a/sql/core/src/main/scala/org/apache/spark/sql/internal/SessionState.scala b/sql/core/src/main/scala/org/apache/spark/sql/internal/SessionState.scala
index 319b226..79fbca6 100644
--- a/sql/core/src/main/scala/org/apache/spark/sql/internal/SessionState.scala
+++ b/sql/core/src/main/scala/org/apache/spark/sql/internal/SessionState.scala
@@ -150,10 +150,7 @@ class SessionResourceLoader(session: SparkSession) extends FunctionResourceLoader
     resource.resourceType match {
       case JarResource => addJar(resource.uri)
       case FileResource => session.sparkContext.addFile(resource.uri)
-      case ArchiveResource =>
-        throw new AnalysisException(
-          "Archive is not allowed to be loaded. If YARN mode is used, " +
-            "please use --archives options while calling spark-submit.")
+      case ArchiveResource => session.sparkContext.addArchive(resource.uri)
     }
   }
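
The hunk above replaces the rejection branch with a call that registers the archive like any other resource. As a rough illustration of the dispatch pattern involved (the enum, method, and list names below are hypothetical stand-ins, not Spark's API), routing a function resource to a loader by its type looks like:

```java
import java.util.ArrayList;
import java.util.List;

public class ResourceDispatchDemo {
    enum ResourceType { JAR, FILE, ARCHIVE }

    // Stand-ins for sparkContext.addJar / addFile / addArchive: each
    // registration is recorded so the dispatch can be observed.
    static final List<String> registered = new ArrayList<>();

    static void loadResource(ResourceType type, String uri) {
        switch (type) {
            case JAR:
                registered.add("jar:" + uri);
                break;
            case FILE:
                registered.add("file:" + uri);
                break;
            case ARCHIVE:
                // Before SPARK-35236 this branch raised an error;
                // after the patch it registers the archive.
                registered.add("archive:" + uri);
                break;
        }
    }

    public static void main(String[] args) {
        loadResource(ResourceType.ARCHIVE, "udf-deps.zip");
        System.out.println(registered); // prints "[archive:udf-deps.zip]"
    }
}
```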
 
diff --git a/sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/HiveUDFSuite.scala b/sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/HiveUDFSuite.scala
index 9e8046b9..f988287 100644
--- a/sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/HiveUDFSuite.scala
+++ b/sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/HiveUDFSuite.scala
@@ -20,6 +20,8 @@ package org.apache.spark.sql.hive.execution
 import java.io.{DataInput, DataOutput, File, PrintWriter}
 import java.util.{ArrayList, Arrays, Properties}
 
+import scala.collection.JavaConverters._
+
 import org.apache.hadoop.conf.Configuration
 import org.apache.hadoop.hive.ql.exec.UDF
 import org.apache.hadoop.hive.ql.udf.{UDAFPercentile, UDFType}
@@ -30,6 +32,7 @@ import org.apache.hadoop.hive.serde2.objectinspector.{ObjectInspector, ObjectIns
 import org.apache.hadoop.hive.serde2.objectinspector.primitive.PrimitiveObjectInspectorFactory
 import org.apache.hadoop.io.{LongWritable, Writable}
 
+import org.apache.spark.{SparkFiles, TestUtils}
 import org.apache.spark.sql.{AnalysisException, QueryTest, Row}
 import org.apache.spark.sql.catalyst.plans.logical.Project
 import org.apache.spark.sql.execution.command.FunctionsCommand
@@ -676,6 +679,47 @@ class HiveUDFSuite extends QueryTest with TestHiveSingleton with SQLTestUtils {
       assert(msg2.contains(s"No handler for UDF/UDAF/UDTF '${classOf[ArraySumUDF].getName}'"))
     }
   }
+
+  test("SPARK-35236: CREATE FUNCTION should take an archive in USING clause") {
+    withTempDir { dir =>
+      withUserDefinedFunction("testListFiles1" -> false) {
+        val text1 = File.createTempFile("test1_", ".txt", dir)
+        val json1 = File.createTempFile("test1_", ".json", dir)
+        val zipFile1 = File.createTempFile("test1_", ".zip", dir)
+        TestUtils.createJar(Seq(text1, json1), zipFile1)
+
+        sql(s"CREATE FUNCTION testListFiles1 AS '${classOf[ListFiles].getName}' " +
+          s"USING ARCHIVE '${zipFile1.getAbsolutePath}'")
+        val df1 = sql(s"SELECT testListFiles1('${SparkFiles.get(zipFile1.getName)}')")
+        val fileList1 =
+          df1.collect().map(_.getList[String](0)).head.asScala.filter(_ != "META-INF")
+
+        assert(fileList1.length === 2)
+        assert(fileList1.contains(text1.getName))
+        assert(fileList1.contains(json1.getName))
+      }
+
+      // Test for file#alias style archive registration.
+      withUserDefinedFunction("testListFiles2" -> false) {
+        val text2 = File.createT

[spark] branch master updated: [SPARK-34979][PYTHON][DOC] Add PyArrow installation note for PySpark aarch64 user

2021-04-27 Thread gurwls223
This is an automated email from the ASF dual-hosted git repository.

gurwls223 pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/master by this push:
 new 0769049  [SPARK-34979][PYTHON][DOC] Add PyArrow installation note for PySpark aarch64 user
0769049 is described below

commit 0769049ee1701535646a3e5e709e333ca46fac1f
Author: Yikun Jiang 
AuthorDate: Wed Apr 28 09:56:17 2021 +0900

[SPARK-34979][PYTHON][DOC] Add PyArrow installation note for PySpark aarch64 user

### What changes were proposed in this pull request?

This patch adds a note for aarch64 users to install PyArrow >= 4.0.0.

### Why are the changes needed?

PyArrow support for aarch64 was [introduced](https://github.com/apache/arrow/pull/9285) in [PyArrow 4.0.0](https://github.com/apache/arrow/releases/tag/apache-arrow-4.0.0), which was published on 27 April 2021.

See more in [SPARK-34979](https://issues.apache.org/jira/browse/SPARK-34979).

### Does this PR introduce _any_ user-facing change?
Yes, this doc can help user install arrow on aarch64.

### How was this patch tested?
doc test passed.

Closes #32363 from Yikun/SPARK-34979.

Authored-by: Yikun Jiang 
Signed-off-by: hyukjinkwon 
---
 python/docs/source/getting_started/install.rst | 8 
 1 file changed, 8 insertions(+)

diff --git a/python/docs/source/getting_started/install.rst b/python/docs/source/getting_started/install.rst
index a14f2b8..4bc68a9 100644
--- a/python/docs/source/getting_started/install.rst
+++ b/python/docs/source/getting_started/install.rst
@@ -164,3 +164,11 @@ Package   Minimum supported version Note
 Note that PySpark requires Java 8 or later with ``JAVA_HOME`` properly set.
 If using JDK 11, set ``-Dio.netty.tryReflectionSetAccessible=true`` for Arrow related features and refer
 to |downloading|_.
+
+Note for AArch64 (ARM64) users: PyArrow is required by PySpark SQL, but PyArrow support for AArch64
+is introduced in PyArrow 4.0.0. If PySpark installation fails on AArch64 due to PyArrow
+installation errors, you can install PyArrow >= 4.0.0 as below:
+
+.. code-block:: bash
+
+    pip install "pyarrow>=4.0.0" --prefer-binary




[spark] branch master updated (26a8d2f -> 5b77ebb)

2021-04-27 Thread srowen
This is an automated email from the ASF dual-hosted git repository.

srowen pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git.


from 26a8d2f  [SPARK-35238][DOC] Add JindoFS SDK in cloud integration documents
 add 5b77ebb  [SPARK-35150][ML] Accelerate fallback BLAS with dev.ludovic.netlib

No new revisions were added by this update.

Summary of changes:
 dev/deps/spark-deps-hadoop-2.7-hive-2.3|   3 +
 dev/deps/spark-deps-hadoop-3.2-hive-2.3|   3 +
 docs/ml-linalg-guide.md|   2 +-
 graphx/pom.xml |   5 +-
 .../org/apache/spark/graphx/lib/SVDPlusPlus.scala  |  31 +-
 .../{LICENSE-bootstrap.txt => LICENSE-blas.txt}|  13 +-
 mllib-local/pom.xml|  33 +-
 .../org/apache/spark/ml/linalg/VectorizedBLAS.java | 483 ---
 .../scala/org/apache/spark/ml/linalg/BLAS.scala|  29 +-
 .../org/apache/spark/ml/linalg/BLASBenchmark.scala | 530 ++---
 mllib/pom.xml  |  13 +
 .../org/apache/spark/mllib/feature/Word2Vec.scala  |  24 +-
 .../org/apache/spark/mllib/linalg/ARPACK.scala |  35 +-
 .../scala/org/apache/spark/mllib/linalg/BLAS.scala |  40 +-
 .../spark/mllib/linalg/CholeskyDecomposition.scala |   5 +-
 .../mllib/linalg/EigenValueDecomposition.scala |  15 +-
 ...ngularValueDecomposition.scala => LAPACK.scala} |  31 +-
 .../org/apache/spark/mllib/linalg/Matrices.scala   |   3 +-
 .../org/apache/spark/mllib/optimization/NNLS.scala |  32 +-
 .../recommendation/MatrixFactorizationModel.scala  |  11 +-
 .../apache/spark/mllib/stat/KernelDensity.scala|   7 +-
 .../mllib/tree/model/treeEnsembleModels.scala  |   4 +-
 .../apache/spark/mllib/util/SVMDataGenerator.scala |   6 +-
 .../org/apache/spark/mllib/linalg/BLASSuite.scala  |   2 +-
 pom.xml|  16 +
 project/SparkBuild.scala   |   2 +-
 python/pyspark/ml/recommendation.py|   4 +-
 27 files changed, 560 insertions(+), 822 deletions(-)
 copy licenses-binary/{LICENSE-bootstrap.txt => LICENSE-blas.txt} (83%)
 delete mode 100644 mllib-local/src/jvm-vectorized/java/org/apache/spark/ml/linalg/VectorizedBLAS.java
 copy streaming/src/main/scala/org/apache/spark/streaming/dstream/ConstantInputDStream.scala => mllib/src/main/scala/org/apache/spark/mllib/linalg/ARPACK.scala (54%)
 copy mllib/src/main/scala/org/apache/spark/mllib/linalg/{SingularValueDecomposition.scala => LAPACK.scala} (52%)




svn commit: r47449 - in /dev/spark/v2.4.8-rc3-docs: ./ _site/ _site/api/ _site/api/R/ _site/api/java/ _site/api/java/lib/ _site/api/java/org/ _site/api/java/org/apache/ _site/api/java/org/apache/spark

2021-04-27 Thread viirya
Author: viirya
Date: Tue Apr 27 17:05:11 2021
New Revision: 47449

Log:
Apache Spark v2.4.8-rc3 docs


[This commit notification would consist of 1461 parts, which exceeds the limit of 50 ones, so it was shortened to the summary.]




svn commit: r47447 - /dev/spark/v2.4.8-rc3-bin/

2021-04-27 Thread viirya
Author: viirya
Date: Tue Apr 27 15:27:18 2021
New Revision: 47447

Log:
Apache Spark v2.4.8-rc3

Added:
dev/spark/v2.4.8-rc3-bin/
dev/spark/v2.4.8-rc3-bin/SparkR_2.4.8.tar.gz   (with props)
dev/spark/v2.4.8-rc3-bin/SparkR_2.4.8.tar.gz.asc
dev/spark/v2.4.8-rc3-bin/SparkR_2.4.8.tar.gz.sha512
dev/spark/v2.4.8-rc3-bin/pyspark-2.4.8.tar.gz   (with props)
dev/spark/v2.4.8-rc3-bin/pyspark-2.4.8.tar.gz.asc
dev/spark/v2.4.8-rc3-bin/pyspark-2.4.8.tar.gz.sha512
dev/spark/v2.4.8-rc3-bin/spark-2.4.8-bin-hadoop2.6.tgz   (with props)
dev/spark/v2.4.8-rc3-bin/spark-2.4.8-bin-hadoop2.6.tgz.asc
dev/spark/v2.4.8-rc3-bin/spark-2.4.8-bin-hadoop2.6.tgz.sha512
dev/spark/v2.4.8-rc3-bin/spark-2.4.8-bin-hadoop2.7.tgz   (with props)
dev/spark/v2.4.8-rc3-bin/spark-2.4.8-bin-hadoop2.7.tgz.asc
dev/spark/v2.4.8-rc3-bin/spark-2.4.8-bin-hadoop2.7.tgz.sha512
dev/spark/v2.4.8-rc3-bin/spark-2.4.8-bin-without-hadoop-scala-2.12.tgz   (with props)
dev/spark/v2.4.8-rc3-bin/spark-2.4.8-bin-without-hadoop-scala-2.12.tgz.asc
dev/spark/v2.4.8-rc3-bin/spark-2.4.8-bin-without-hadoop-scala-2.12.tgz.sha512
dev/spark/v2.4.8-rc3-bin/spark-2.4.8-bin-without-hadoop.tgz   (with props)
dev/spark/v2.4.8-rc3-bin/spark-2.4.8-bin-without-hadoop.tgz.asc
dev/spark/v2.4.8-rc3-bin/spark-2.4.8-bin-without-hadoop.tgz.sha512
dev/spark/v2.4.8-rc3-bin/spark-2.4.8.tgz   (with props)
dev/spark/v2.4.8-rc3-bin/spark-2.4.8.tgz.asc
dev/spark/v2.4.8-rc3-bin/spark-2.4.8.tgz.sha512

Added: dev/spark/v2.4.8-rc3-bin/SparkR_2.4.8.tar.gz
==
Binary file - no diff available.

Propchange: dev/spark/v2.4.8-rc3-bin/SparkR_2.4.8.tar.gz
--
svn:mime-type = application/octet-stream

Added: dev/spark/v2.4.8-rc3-bin/SparkR_2.4.8.tar.gz.asc
==
--- dev/spark/v2.4.8-rc3-bin/SparkR_2.4.8.tar.gz.asc (added)
+++ dev/spark/v2.4.8-rc3-bin/SparkR_2.4.8.tar.gz.asc Tue Apr 27 15:27:18 2021
@@ -0,0 +1,14 @@
+-----BEGIN PGP SIGNATURE-----
+Version: GnuPG v1
+
+iQGcBAABAgAGBQJgiAJwAAoJEGU8IwH+pJPu2B0L/32oiru1/8CHI4z0LJujuTSW
+DRxSuRq2Cf8gq/ef/rC33zTcbD5Ru0wPCDkb+QbXOuwRTgAlR3jJwSaWW4+nMyrb
++SCRC/mIpT+v7gipQd9SNfrJFCr11eaA8T3TXRnaOVw6gRhBGGfzE9cU+zpMyO/b
+KpjqHbDPrM3Ryb0XnIZ9ZtNqAJPwIRmhYRTD2MeS92ckTE9UtmIhJOD8DsRJws4r
+AIM8DQRgHPDPybmlzPAc5Xzu/5H7Bv8uDnBovuMoLHDbI6dJV+g1UFFS3A5xcQBc
+SsNs2IDsMsOS1twPLg8jQXnk8wFYI1dU1D5o8WqQ+UbD9tDezwu1nSkQLotUCtX1
+ULKv7gSDn5nfrfb19ywMHA2bfG10DJMG6WYEF/2OMpO0zXc4QJq1LVYxFCxpNCQE
+WCtjPkTcqmkc7DplojTcxL9QI6DNnvG5pQTdUDF5foXQnRu4kWFoUsrs442hK3Sq
+hyYjH9tL99P1+jLkjeYEFjbDq4enBir1pakcTl/Piw==
+=WqzB
+-----END PGP SIGNATURE-----

Added: dev/spark/v2.4.8-rc3-bin/SparkR_2.4.8.tar.gz.sha512
==
--- dev/spark/v2.4.8-rc3-bin/SparkR_2.4.8.tar.gz.sha512 (added)
+++ dev/spark/v2.4.8-rc3-bin/SparkR_2.4.8.tar.gz.sha512 Tue Apr 27 15:27:18 2021
@@ -0,0 +1,3 @@
+SparkR_2.4.8.tar.gz: 078DC5AF CAB43332 CF4B06F8 E76E6826 BADF9017 ECDE2E16
+ CCB29C35 C3585317 B79C2C90 A3E5E5FA 2F4BF2FB 9F56279B
+ 1D4916F6 C76A4233 4061F49B 750195E1

Added: dev/spark/v2.4.8-rc3-bin/pyspark-2.4.8.tar.gz
==
Binary file - no diff available.

Propchange: dev/spark/v2.4.8-rc3-bin/pyspark-2.4.8.tar.gz
--
svn:mime-type = application/octet-stream

Added: dev/spark/v2.4.8-rc3-bin/pyspark-2.4.8.tar.gz.asc
==
--- dev/spark/v2.4.8-rc3-bin/pyspark-2.4.8.tar.gz.asc (added)
+++ dev/spark/v2.4.8-rc3-bin/pyspark-2.4.8.tar.gz.asc Tue Apr 27 15:27:18 2021
@@ -0,0 +1,14 @@
+-----BEGIN PGP SIGNATURE-----
+Version: GnuPG v1
+
+iQGcBAABAgAGBQJgh+wJAAoJEGU8IwH+pJPuSc0MAIsrCgKJVJ4wpsypCc9WMTN7
+gKBxSGxV+V+tfR2nme+8fWIz3IQSGTTJpwgXpSFfUgREcGolV12itsAu48ubk/Ie
+9OTIcffygAFxPn9NLOR8noJZK2OAJX7XxzMRUBuxCPsjTSZm6zD3SL2ZIJu4ZnQm
+tMZXuUl67A2QzdX4sEI+LNHnhwN+yz8fP/llMBkFAcYOQErmeNuFCqyjLi4S0wVB
+tqYIEyEvuQlfzUrvMxBb9R6QqZ3II6aO7zHkk90mJBn27JTDEMTXEXJP0kt/AXEU
++ns8WKmIgAo2WQTINZTO/zmiABmZ/No7b2sGw52v76tiN4ZJ+QJ7BlEXt5miM88P
+/NihVg9qGaQ3K6eCMGQQ/+djSqIPIQVWfiQpiHRNbfDTegEmDfBjJzsZM0g+tanf
+laYpsiK3SzWxWWUBXbiUXBIfA3AagcKxheCUC/qQwyj8IpQTaPaEtNWuK17tfHLa
+nG7WxnY56MnKMLFzHEK9FlbnEov7/qAeAXGx4lIX5Q==
+=A8Jp
+-----END PGP SIGNATURE-----

Added: dev/spark/v2.4.8-rc3-bin/pyspark-2.4.8.tar.gz.sha512
==
--- dev/spark/v2.4.8-rc3-bin/pyspark-2.4.8.tar.gz.sha512 (added)
+++ dev/spark/v2.4.8-rc3-bin/pyspark-2.4.8.tar.gz.sha512 Tue Apr 27 15:27:18 2021
@@ -0,0 +1,3 @@
+pyspark-2.4.8.tar.gz: DBED9ED2 75DFFB6B A8902C9E AD611BB

[spark] branch master updated (592230e -> 26a8d2f)

2021-04-27 Thread srowen
This is an automated email from the ASF dual-hosted git repository.

srowen pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git.


from 592230e  [MINOR][DOCS][ML] Explicit return type of array_to_vector utility function
 add 26a8d2f  [SPARK-35238][DOC] Add JindoFS SDK in cloud integration documents

No new revisions were added by this update.

Summary of changes:
 docs/cloud-integration.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)




[spark] branch master updated (16d223e -> 592230e)

2021-04-27 Thread srowen
This is an automated email from the ASF dual-hosted git repository.

srowen pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git.


from 16d223e  [SPARK-35091][SPARK-35090][SQL] Support extract from ANSI Intervals
 add 592230e  [MINOR][DOCS][ML] Explicit return type of array_to_vector utility function

No new revisions were added by this update.

Summary of changes:
 python/pyspark/ml/functions.py | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)




[spark] branch master updated (4ff9f1f -> 16d223e)

2021-04-27 Thread wenchen
This is an automated email from the ASF dual-hosted git repository.

wenchen pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git.


from 4ff9f1f  [SPARK-35239][SQL] Coalesce shuffle partition should handle empty input RDD
 add 16d223e  [SPARK-35091][SPARK-35090][SQL] Support extract from ANSI Intervals

No new revisions were added by this update.

Summary of changes:
 .../catalyst/expressions/datetimeExpressions.scala |  26 ++-
 .../catalyst/expressions/intervalExpressions.scala | 104 +++---
 .../spark/sql/catalyst/optimizer/expressions.scala |   2 +-
 .../spark/sql/catalyst/util/IntervalUtils.scala|  34 +--
 .../expressions/IntervalExpressionsSuite.scala |  56 -
 .../test/resources/sql-tests/inputs/extract.sql|  31 +++
 .../resources/sql-tests/results/extract.sql.out| 227 -
 7 files changed, 410 insertions(+), 70 deletions(-)




[spark] branch master updated (55dea2d -> 4ff9f1f)

2021-04-27 Thread wenchen
This is an automated email from the ASF dual-hosted git repository.

wenchen pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git.


from 55dea2d  [SPARK-34837][SQL][FOLLOWUP] Fix division by zero in the avg function over ANSI intervals
 add 4ff9f1f  [SPARK-35239][SQL] Coalesce shuffle partition should handle empty input RDD

No new revisions were added by this update.

Summary of changes:
 .../adaptive/CoalesceShufflePartitions.scala   | 29 ++
 .../execution/adaptive/ShufflePartitionsUtil.scala |  4 +++
 .../adaptive/AdaptiveQueryExecSuite.scala  | 15 +++
 3 files changed, 37 insertions(+), 11 deletions(-)




[spark] branch master updated (2d2f467 -> 55dea2d)

2021-04-27 Thread maxgekk
This is an automated email from the ASF dual-hosted git repository.

maxgekk pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git.


from 2d2f467  [SPARK-35169][SQL] Fix wrong result of min ANSI interval division by -1
 add 55dea2d  [SPARK-34837][SQL][FOLLOWUP] Fix division by zero in the avg function over ANSI intervals

No new revisions were added by this update.

Summary of changes:
 .../spark/sql/catalyst/expressions/aggregate/Average.scala|  8 ++--
 .../scala/org/apache/spark/sql/DataFrameAggregateSuite.scala  | 11 +--
 2 files changed, 15 insertions(+), 4 deletions(-)




[spark] branch master updated (c4ad86f -> 2d2f467)

2021-04-27 Thread wenchen
This is an automated email from the ASF dual-hosted git repository.

wenchen pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git.


from c4ad86f  [SPARK-35235][SQL][TEST] Add row-based hash map into 
aggregate benchmark
 add 2d2f467  [SPARK-35169][SQL] Fix wrong result of min ANSI interval 
division by -1

No new revisions were added by this update.

Summary of changes:
 .../catalyst/expressions/intervalExpressions.scala |  56 +--
 .../test/resources/sql-tests/inputs/interval.sql   |  14 +++
 .../sql-tests/results/ansi/interval.sql.out| 106 -
 .../resources/sql-tests/results/interval.sql.out   | 106 -
 4 files changed, 270 insertions(+), 12 deletions(-)
